---
address: |
Jagiellonian University,\
Faculty of Mathematics and Computer Science,\
Department of Theoretical Computer Science,\
ul. Prof. S. Łojasiewicza 6,\
30-348, Kraków, Poland
author:
- 'Paweł M. Idziak AND Jacek Krzaczkowski'
date: 'July 23, 2017'
title: 'Satisfiability in multi-valued circuits'
---
[^1]
[^1]: The project is partially supported by Polish NCN Grant \# 2014/14/A/ST6/00138.
---
abstract: 'Let $\pi$ be a pro-$\ell$ completion of a free group, and let $G$ be a profinite group acting continuously on $\pi$. First suppose the action is given by a character. Then the boundary maps $\delta_n: H^1(G, \pi/[\pi]_n) \rightarrow H^2(G, [\pi]_n/[\pi]_{n+1})$ are Massey products. When the action is more general, we partially compute these boundary maps. Via obstructions of Jordan Ellenberg, this implies that $\pi_1$ sections of ${\mathbb P^1_{k} - \{0,1,\infty \}}$ satisfy the condition that associated $n^{th}$ order Massey products in Galois cohomology vanish. For the $\pi_1$ sections coming from rational points, these conditions imply that $\langle (1-x)^{-1}, x^{-1}, x^{-1}, \ldots, x^{-1} \rangle = 0$ where $x$ in $H^1({\operatorname{Gal}}(\overline{k}/k), {\mathbb{Z}}_{\ell}(\chi))$ is the image of an element of $k^*$ under the Kummer map.'
address: 'Dept. of Mathematics, Harvard'
author:
- Kirsten Wickelgren
bibliography:
- 'DN.bib'
date: 'June 11, 2011'
title: '$n$-Nilpotent obstructions to $\pi_1$ sections of $\mathbb P^1 - \{0,1,\infty \}$ and Massey products'
---
Introduction
============
Grothendieck’s section conjecture predicts that the rational points of a proper smooth hyperbolic curve $X$ over a number field $k$ are in natural bijection with the conjugacy classes of sections of the homotopy exact sequence for the étale fundamental group $$\label{hes} \xymatrix{1 \ar[r] & \pi_1(X_{\overline{k}}) \ar[r] & \pi_1(X) \ar[r] & \pi_1({\operatorname{Spec}}k)={\operatorname{Gal}}(\overline{k}/k) \ar[r] & 1},$$ where the conjugacy class of a section $s: {\operatorname{Gal}}(\overline{k}/k) \rightarrow \pi_1(X)$ is the set of those sections $g \mapsto \gamma s(g) \gamma^{-1}$ where $\gamma$ is a fixed element of $\pi_1(X_{\overline{k}})$. The phrase “$\pi_1$ section" in the title refers to a section $s$ of $\pi_1(X) \rightarrow \pi_1({\operatorname{Spec}}k)$. For a non-proper smooth hyperbolic curve, rational points “at infinity" determine “bouquets" of sections of (\[hes\]) in bijection with $H^1({\operatorname{Gal}}(\overline{k}/k), {\hat{\mathbb{Z}}}(\chi))$ –see [@Popbipadicsc p. 2]. (Here, $\chi$ denotes the cyclotomic character, and ${\hat{\mathbb{Z}}}(\chi^n)$ denotes ${\hat{\mathbb{Z}}}$ with Galois action given by $\chi^n$.) More specifically, let $X$ be a smooth, geometrically integral curve over $k$ with negative Euler characteristic. Let $\overline{X}$ denote the smooth compactification of $X$. The section conjecture predicts that $$( \coprod_{(\overline{X}-X)(k)} H^1({\operatorname{Gal}}(\overline{k}/k), {\hat{\mathbb{Z}}}(\chi))) \coprod X(k)$$ is in bijection with the conjugacy classes of sections of (\[hes\]) via a “non-abelian Kummer map" discussed in \[kappadefn\].
Consider the problem of counting the conjugacy classes of sections of (\[hes\]). When (\[hes\]) is split, this is equivalent to computing the pointed set $H^1({\operatorname{Gal}}(\overline{k}/k), \pi_1(X_{\overline{k}}))$, which is difficult. In [@Ellenberg_2_nil_quot_pi], Jordan Ellenberg suggested studying instead the image of $$H^1({\operatorname{Gal}}(\overline{k}/k), \pi_1(X_{\overline{k}})) \rightarrow H^1({\operatorname{Gal}}(\overline{k}/k), \pi_1(X_{\overline{k}})^{ab})$$ by filtering $ \pi_1(X_{\overline{k}})$ by its lower central series. More specifically, let $\pi$ abbreviate $\pi_1(X_{\overline{k}})$, let $\pi^{ab}$ denote the abelianization of $\pi$, and let $[\pi]_n$ denote the $n^{th}$ subgroup of the lower central series (cf. \[notationsubsection2\]). Ellenberg proposed successively computing the images of $$H^1({\operatorname{Gal}}(\overline{k}/k), \pi/[\pi]_n) \rightarrow H^1({\operatorname{Gal}}(\overline{k}/k), \pi^{ab})$$ via the boundary maps $$\delta_n: H^1({\operatorname{Gal}}(\overline{k}/k), \pi/[\pi]_n) \rightarrow H^2({\operatorname{Gal}}(\overline{k}/k), [\pi]_n/[\pi]_{n+1})$$ coming from the central extensions $$\xymatrix{1 \ar[r] & [\pi]_n/[\pi]_{n+1} \ar[r] & \pi/[\pi]_{n+1} \ar[r] & \pi/[\pi]_n \ar[r] &1} .$$ This paper makes two group cohomology computations relating $\delta_n$ to Massey products (Propositions \[nomonodromy\_delta\_n\] and \[muJdeltanwithmonodromy\]), and then applies them to study the $\pi_1$ sections of ${\mathbb P^1_{k} - \{0,1,\infty \}}$ (Corollary \[pmkpi1restcor\]) and Massey products of elements of $H^1({\operatorname{Gal}}(\overline{k}/k), {\mathbb{Z}}^{\Sigma}(\chi))$, where $\Sigma$ is the set of primes not dividing any integer less than the order of the Massey product, and ${\mathbb{Z}}^{\Sigma}$ denotes the pro-$\Sigma$ completion of ${\mathbb{Z}}$ (Corollary \[vanishMasseyCor\]).
More specifically, the content of this paper is as follows: section \[freegroupcharaction\] computes $\delta_n : H^1(G, \pi/[\pi]_n) \rightarrow H^2(G, [\pi]_n/[\pi]_{n+1})$ when $\pi$ is a pro-$\Sigma$ completion of a free group with generators $\{ \gamma_1, \gamma_2, \ldots, \gamma_r \}$, where $\Sigma$ is any set of primes not dividing $n!$, and $G$ is a profinite group acting on $\pi$ by $$g \gamma_i = \gamma_i^{\chi(g)}$$ where $\chi: G \rightarrow ({\mathbb{Z}}^{\Sigma})^*$ is a character. In this case, $\delta_n$ is determined by $n^r$ order $n$ Massey products – see Proposition \[nomonodromy\_delta\_n\]. The case of the trivial character with $G$ and $\pi$ replaced by discrete groups is essentially contained in [@Dwyer]. The generalization to non-trivial characters is not immediate; for instance, it depends on the existence of certain upper triangular matrices whose $N^{th}$ powers are given by multiplying the $i^{th}$ upper diagonal by $N^i$ – see (\[hom\_for\_defining\]) and Lemma \[Aldef\]. (To obtain these matrices one must invert $n!$ or work with pro-$\Sigma$ groups. We do the latter, although the former works as well.) This computation is then used to study $\delta_n$ where $\pi$ is as above for $r=2$, and $G$ is a group acting on $\pi$ by $$g(\gamma_1) =\gamma_1^{\chi(g)}$$ $$g(\gamma_2) =\mathfrak{f}(g)^{-1}\gamma_2^{\chi(g)}\mathfrak{f}(g)$$ where $\chi: G \rightarrow ({\mathbb{Z}}^{\Sigma})^*$ is a character, and $\mathfrak{f}: G \rightarrow [\pi]_2$ is a cocycle taking values in the commutator subgroup of $\pi$. In this case, the pushforwards of $\delta_n$ by certain Magnus coefficients are Massey products. The Magnus coefficients in question are those associated to degree $n$ non-commutative monomials in two variables containing $n-1$ factors of one variable – see Proposition \[muJdeltanwithmonodromy\]. This calculation imposes restrictions on the image of $$H^1({\operatorname{Gal}}(\overline{k}/k), \pi/[\pi]_{n+1}) \rightarrow H^1({\operatorname{Gal}}(\overline{k}/k), \pi^{ab})$$ for $X = {\mathbb P^1_{k} - \{0,1,\infty \}}$, $\pi = \pi_1(X_{{\overline{k}}})^{\Sigma}$. Identifying $H^1({\operatorname{Gal}}(\overline{k}/k), \pi^{ab})$ with $H^1({\operatorname{Gal}}(\overline{k}/k), {\mathbb{Z}}^{\Sigma}(\chi))^2$, these restrictions are that the image is contained in the subset of elements $x_1 \times x_2$ such that the Massey products $\langle -x_{J(1)}, -x_{J(2)}, \ldots, -x_{J(n)} \rangle$ vanish for all $J: \{1,2,\ldots,n \} \rightarrow \{1,2\}$ which only assume the value $2$ once – see Corollary \[pmkpi1restcor\]. Corollary \[vanishMasseyCor\] writes these restrictions for the $\pi_1$ sections coming from rational points and tangential points, and concludes that the $n^{th}$ order Massey products $$\langle x^{-1}, \ldots, x^{-1}, (1-x)^{-1}, x^{-1}, \ldots, x^{-1} \rangle \textrm { and } \langle x,\ldots, x,-x, x, \ldots, x \rangle$$ vanish, where $x$ in $H^1({\operatorname{Gal}}(\overline{k}/k), {\hat{\mathbb{Z}}}(\chi))$ denotes the image of an element of $k^*$ under the Kummer map. Much of this vanishing behavior was previously shown by Sharifi [@Sharifi], who calculates Massey products of the form $\langle x,x,\ldots,x,y\rangle$ under certain hypotheses and using different methods – see remarks \[pi1restremarks\] and \[Sharifivanishing\]. Triple Massey products in Galois cohomology with restricted ramification are studied by Vogel in [@Vogel_thesis].
The first subsections of sections \[deltansection\] and \[application\_section\] contain only well-known material. They are meant to be expository and to fix notation. [*Acknowledgments:*]{} I wish to thank Romyar Sharifi for useful correspondence.
$n^{th}$ order Massey products and $\delta_n$ {#deltansection}
=============================================
\[notationsubsection2\] For elements $g_1$, $g_2$ of $G$, let $[g_1,g_2] = g_1 g_2 g_1^{-1} g_2^{-1}$ denote the commutator. For a profinite group $\pi$, let $\pi=[\pi]_1 \supset [\pi]_2 \supset [\pi]_3 \ldots$ denote the lower central series: $[\pi]_n$ is defined to be the closure of the subgroup generated by the elements of $[\pi, [\pi]_{n-1}]$.
\[Z(chi)notation\] For a (profinite) group $G$, a profinite abelian group $A$, and a (continuous) homomorphism $\chi: G \rightarrow {\operatorname{Aut}}(A)$, let $A(\chi)$ denote the associated profinite group with $G$ action. For example, if $A$ is a ring and $\chi$ is a homomorphism $G \rightarrow A^*$, then for any integer $n$, $A(\chi^n)$ is a profinite group with $G$ action.
Let $\Sigma$ denote a set of primes (of ${\mathbb{Z}}$). For any group $G$, let $G^{\Sigma}$ denote the pro-$\Sigma$ completion of $G$, i.e. the inverse limit of the finite quotients of $G$ whose order divides a product of powers of primes in $\Sigma$.
\[Masseydefn\]
For a profinite group $G$ and a profinite abelian group $A$ with a continuous action of $G$, let $(C^*(G,A), D)$ be the complex of inhomogeneous cochains of $G$ with coefficients in $A$ as in [@coh_num_fields I.2 p. 14]. For $c \in C^p(G, A)$ and $d \in C^q(G, A)$, let $c \cup d$ denote the cup product $c \cup d \in C^{p+q}(G, A \otimes A)$ $$(c \cup d)(g_1,\ldots,g_{p+q}) = c(g_1,\ldots,g_p) \otimes ((g_1\cdots g_p)d(g_{p+1},\ldots, g_{p+q})).$$ This product induces a well defined map on cohomology. If $A$ is a ring with $G$ action, then the action is given by a homomorphism $\chi: G \rightarrow A^*$. Recall the notation $A(\chi^n)$ defined in \[Z(chi)notation\]. The $G$ equivariant multiplication map $A(\chi^n) \otimes A(\chi^m) \rightarrow A(\chi^{n+m})$ induces cup products $$C^p(G,A(\chi^n)) \otimes C^q(G,A(\chi^m)) \rightarrow C^{p+q}(G, A(\chi^{n+m}))$$ $$H^p(G,A(\chi^n)) \otimes H^q(G,A(\chi^m)) \rightarrow H^{p+q}(G, A(\chi^{n+m})) .$$
For a profinite group $Q$, no longer assumed to be abelian, the set of continuous functions $G \rightarrow Q$ is denoted $C^1(G,Q)$. An element $s$ of $C^1(G,Q)$ such that $s(g_1 g_2) = s(g_1) g_1 s(g_2)$ is a [*cocycle*]{} or [*twisted homomorphism*]{}. $H^1(G,Q)$ is defined as equivalence classes of cocycles in the usual manner (cf. [@serre:localfields VII Appendix]).
\[Massey\_prod\_def\] Let $t_1,\ldots, t_n$ be elements of $H^1(G,A(\chi)).$ The $n^{th}$ order Massey product of the ordered $n$-tuple $(t_1,\ldots, t_n)$ is defined if there exist $ T_{ij}$ in $C^1(G,A(\chi^{j-i}))$ for $i,j$ in $\{1,2,\ldots, n+1 \}$ such that $i<j$ and $(i,j) \neq (1,n+1)$ satisfying
- $T_{i,i+1}$ represents $t_i$.
- $D T_{ij} = \sum_{p=i+1}^{j-1} T_{ip} \cup T_{pj}$ for $i+1<j$
$T$ is called a [*defining system*]{}. The [*Massey product relative to $T$*]{} is defined by $$\langle t_1,\ldots, t_n \rangle_T = \sum_{p=2}^{n} T_{1p} \cup T_{p,n+1} .$$
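To make the definition concrete, here is a minimal numerical illustration (a sketch only, in the simplest possible setting: $G = {\mathbb{Z}}/4$, $A = {\mathbb{Z}}/2$ with trivial character, and $t_1 = t_2 = t_3$ the reduction cocycle; the group, the coefficients, and the cochain chosen for $T_{13}=T_{24}$ are illustrative assumptions, not data from this paper). It checks the defining-system equations, verifies that the resulting triple product is a $2$-cocycle, and brute-forces whether its class vanishes.

```python
# Toy illustration of the Massey product definition for G = Z/4, A = Z/2 with
# trivial character and t1 = t2 = t3 = x the reduction map Z/4 -> Z/2.
# (Hypothetical example; signs are irrelevant mod 2.)
from itertools import product

N, M = 4, 2                      # G = Z/N, A = Z/M

def coboundary(T):
    """D(T)(g, h) = T(h) - T(g+h) + T(g) for a 1-cochain T (trivial action)."""
    return {(g, h): (T[h] - T[(g + h) % N] + T[g]) % M
            for g, h in product(range(N), repeat=2)}

def cup(c, d):
    """(c cup d)(g, h) = c(g) * d(h) for 1-cochains (trivial action)."""
    return {(g, h): (c[g] * d[h]) % M for g, h in product(range(N), repeat=2)}

def add(u, v):
    return {k: (u[k] + v[k]) % M for k in u}

x = {g: g % 2 for g in range(N)}          # T_{12} = T_{23} = T_{34}: the cocycle x
T13 = T24 = {0: 0, 1: 0, 2: 1, 3: 1}      # candidate defining cochains

# Defining-system equations: D T_{13} = T_{12} cup T_{23}, D T_{24} = T_{23} cup T_{34}
assert coboundary(T13) == cup(x, x)
assert coboundary(T24) == cup(x, x)

# Massey product relative to T: <x, x, x>_T = T_{13} cup T_{34} + T_{12} cup T_{24}
massey = add(cup(T13, x), cup(x, T24))

# Check that it is a 2-cocycle.
for g, h, k in product(range(N), repeat=3):
    val = (massey[(h, k)] - massey[((g + h) % N, k)]
           + massey[(g, (h + k) % N)] - massey[(g, h)]) % M
    assert val == 0

# Brute-force whether the class vanishes in H^2 (i.e. massey = D S for some S).
vanishes = any(coboundary(dict(enumerate(S))) == massey
               for S in product(range(M), repeat=N))
print("defining system OK; <x,x,x>_T vanishes in H^2:", vanishes)
```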
Let $U_{n+1}$ denote the multiplicative group of $(n+1) \times (n+1)$ upper triangular matrices with coefficients in $A$ whose diagonal entries are $1$. (“U" stands for unipotent, not unitary.) Let $a_{ij}$ be the function taking a matrix to its $(i,j)$-entry. $U_{n+1}$ inherits an action of $G$ by $a_{ij} (g M )= \chi(g)^{j-i} a_{ij}(M)$. We have a $G$ equivariant inclusion $A(\chi^n) \rightarrow U_{n+1}$ sending $a$ in $A$ to the matrix with $a$ in the $(1,n+1)$-entry, and with all other off diagonal matrix entries $0$. This inclusion gives rise to a central extension
$$\label{A_U_barU}
1 \rightarrow A(\chi^n) \rightarrow U_{n+1} \rtimes G \rightarrow \overline{U}_{n+1} \rtimes G \rightarrow 1.$$
where $\overline{U}_{n+1}$ is defined as the quotient $U_{n+1}/A(\chi^n)$.
The element of $H^2(\overline{U}_{n+1} \rtimes G, A(\chi^n))$ classifying (\[A\_U\_barU\]) is an order $n$ Massey product. (\[H2kelement\]See [@Brown_coh_groups IV §3] for the definition of the element of $H^2$ classifying a short exact sequence of groups; to apply the same discussion to profinite groups, one needs continuous sections of profinite quotient maps. For this, see [@Profinite_Groups Prop 2.2.2].) Note that $-a_{i,j}$ determines an element of $C^1(\overline{U}_{n+1} \rtimes G, A(\chi^{j-i}))$. As $(i,j)$ ranges through the set of pairs of elements of $\{1,2,\ldots, n+1 \}$ such that $i<j$ and $(i,j) \neq (1,n+1)$, $-a_{i,j}$ is a defining system for $(-a_{1,2}, -a_{2,3}, \ldots, -a_{n,n+1})$. The element of $H^2(\overline{U}_{n+1} \rtimes G, A(\chi^n))$ classifying (\[A\_U\_barU\]) is $\langle -a_{1,2}, -a_{2,3}, \ldots, -a_{n,n+1} \rangle$, where the Massey product is taken with respect to the defining system $-a_{i,j}$. This follows immediately from the definition of matrix multiplication.
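The matrix-multiplication fact invoked in the last sentence can be spelled out explicitly; the short sketch below (random integer entries, hypothetical size, an illustration only) verifies the corner-entry identity whose shape is exactly that of the Massey-product sum above.

```python
# For unipotent upper triangular M, M' in U_{n+1}:
#   a_{1,n+1}(M M') = a_{1,n+1}(M) + a_{1,n+1}(M') + sum_{p=2}^{n} a_{1,p}(M) a_{p,n+1}(M')
# Entries below are random integers; the example is purely illustrative.
import random

n = 4                                        # work in U_{n+1}, i.e. 5 x 5 matrices

def random_unipotent(size):
    M = [[0] * size for _ in range(size)]
    for i in range(size):
        M[i][i] = 1
        for j in range(i + 1, size):
            M[i][j] = random.randint(-5, 5)
    return M

def matmul(A, B):
    size = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(size)) for j in range(size)]
            for i in range(size)]

M, Mp = random_unipotent(n + 1), random_unipotent(n + 1)
prod = matmul(M, Mp)

lhs = prod[0][n]                             # (1, n+1) entry, 0-indexed
rhs = M[0][n] + Mp[0][n] + sum(M[0][p] * Mp[p][n] for p in range(1, n))
assert lhs == rhs
print("corner-entry identity verified:", lhs)
```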
\[Magnus\_embedding\_recall\] For later use, we recall some well known properties of the Magnus embedding. Let $F$ denote the free group on the $r$ generators $\gamma_i$, $i = 1,\ldots, r$. For any ring $A$, let $A \langle \langle z_1,\ldots, z_r \rangle \rangle$ be the ring of associative power series in the non-commuting variables $z_1, \ldots,z_r$ with coefficients in $A$. Let $A \langle \langle z_1,\ldots,z_r \rangle \rangle^{(1,\times)}$ denote the subgroup of the multiplicative group of units of $A \langle \langle z_1,\ldots,z_r \rangle \rangle$ consisting of power series with constant coefficient $1$. The [*Magnus embedding*]{} is defined $$F \rightarrow {\mathbb{Z}}\langle \langle z_1,\ldots,z_r \rangle \rangle^{(1,\times)}$$ by $\gamma_j \mapsto 1+ z_j$ for all $j$.
Since ${\mathbb{Z}}^{\Sigma} \langle \langle z_1,\ldots,z_r \rangle \rangle^{(1,\times)}$ is pro-$\Sigma$, $F \rightarrow {\mathbb{Z}}\langle \langle z_1,\ldots,z_r \rangle \rangle^{(1,\times)}$ gives rise to a commutative diagram $$\xymatrix{ F^{\Sigma} \ar[r] & {\mathbb{Z}}^{\Sigma} \langle \langle z_1,\ldots,z_r \rangle \rangle^{(1,\times)}\\
F \ar[u] \ar[r] & \ar[u] {\mathbb{Z}}\langle \langle z_1,\ldots,z_r \rangle \rangle^{(1,\times)}}.$$ Let $J:\{1,\ldots,n\} \rightarrow \{1,\ldots,r\}$ be any function. The degree $n$ monomial $ z_{J(1)} \cdots z_{J(n)}$ determines the [*Magnus coefficient*]{} $\mu_J: F^{\Sigma} \rightarrow {\mathbb{Z}}^{\Sigma}$ (or $\mu_J : F \rightarrow {\mathbb{Z}}$ ) given by taking an element of $F^{\Sigma}$ to the coefficient of $ z_{J(1)} \cdots z_{J(n)}$ in its image under the Magnus embedding. It is well known that $\mu_J (\gamma) =0$ for $\gamma \in [F]_m$ and $m>n \geq 1$ (see [@Magnus_Karrass_Solitar §5.5, Cor. 5.7]), and it follows by continuity that $\mu_J (\gamma) =0$ for $\gamma$ in $[F^{\Sigma}]_m$ and $m>n \geq 1$.
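As an illustration of the Magnus coefficients (a discrete sketch over ${\mathbb{Z}}$, not the pro-$\Sigma$ setting), the following computes the Magnus image of the commutator $[\gamma_1,\gamma_2]$ with degree-truncated power series and checks that its degree-$1$ coefficients vanish while its degree-$2$ part is the Lie element $z_1 z_2 - z_2 z_1$.

```python
# Degree-truncated Magnus expansion over Z in non-commuting variables z_1, z_2:
# gamma_j maps to 1 + z_j, and mu_J reads off the coefficient of z_{J(1)}...z_{J(n)}.
TRUNC = 3                                    # keep monomials of degree <= TRUNC

def mul(a, b):
    """Multiply truncated series; keys are tuples of variable indices."""
    out = {}
    for ma, ca in a.items():
        for mb, cb in b.items():
            m = ma + mb
            if len(m) <= TRUNC:
                out[m] = out.get(m, 0) + ca * cb
    return {m: c for m, c in out.items() if c != 0}

def generator(j):
    return {(): 1, (j,): 1}                  # Magnus image of gamma_j

def inverse(j):
    return {(j,) * k: (-1) ** k for k in range(TRUNC + 1)}   # (1 + z_j)^{-1}, truncated

def word(letters):
    """letters: list of (j, sign); returns the Magnus image of the word."""
    result = {(): 1}
    for j, sign in letters:
        result = mul(result, generator(j) if sign > 0 else inverse(j))
    return result

def mu(J, series):
    """Magnus coefficient mu_J."""
    return series.get(tuple(J), 0)

comm = word([(1, 1), (2, 1), (1, -1), (2, -1)])   # [gamma_1, gamma_2]

assert mu([1], comm) == 0 and mu([2], comm) == 0           # degree-1 coefficients vanish
assert mu([1, 2], comm) == 1 and mu([2, 1], comm) == -1    # degree-2 Lie element
print("Magnus image of [g1, g2] up to degree", TRUNC, ":", comm)
```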
The [*Lie elements*]{} of ${\mathbb{Z}}\langle \langle z_1,\ldots,z_r \rangle \rangle$ are the elements in the image of the Lie algebra map $\zeta_i \mapsto z_i$ from the free Lie algebra over ${\mathbb{Z}}$ on $r$ generators $\zeta_i$ to ${\mathbb{Z}}\langle \langle z_1,\ldots,z_r \rangle \rangle$, where ${\mathbb{Z}}\langle \langle z_1,\ldots,z_r \rangle \rangle$ is considered as a Lie algebra with bracket $[z,z'] = z z' - z' z$. It is well known that the Magnus embedding induces an isomorphism from $[F]_n/[F]_{n+1}$ to the homogeneous degree $n$ Lie elements of ${\mathbb{Z}}\langle \langle z_1,\ldots,z_r \rangle \rangle$ [@Magnus_Karrass_Solitar §5.7, Cor. 5.12(i)]. The Lie basis theorem [@Magnus_Karrass_Solitar §5.6, Thm. 5.8(ii)] implies that the inclusion of the Lie elements of degree $n$ into all the degree $n$ elements of ${\mathbb{Z}}\langle \langle z_1,\ldots,z_r \rangle \rangle$ is a direct summand. It follows that $$\xymatrix{
[F]_n/[F]_{n+1} \ar[rr]^{\oplus_J \mu_J}&& \oplus_J {\mathbb{Z}}}$$ is the inclusion of a (free) direct summand. By definition of $\mu_J$, we have the commutative diagram $$\label{UJcomdiag}\xymatrix{ [F^{\Sigma}]_n/[F^{\Sigma}]_{n+1} \ar[rr]^{\oplus_J \mu_J} && \oplus_J {\mathbb{Z}}^{\Sigma} \\
[F]_n/[F]_{n+1} \ar[u] \ar[rr]^{\oplus_J \mu_J}&& \oplus_J {\mathbb{Z}}\ar[u]}$$ where the direct sums are taken over all functions $$J:\{1,\ldots,n\} \rightarrow \{1,\ldots,r\}.$$
We claim that the top horizontal morphism in (\[UJcomdiag\]) is the inclusion of a direct summand of the form $\oplus {\mathbb{Z}}^{\Sigma}$, and that the left vertical morphism is the pro-$\Sigma$ completion. To see this: note that since $[F]_n/[F]_{n+1}$ is a free ${\mathbb{Z}}$ submodule of $\oplus_J {\mathbb{Z}}$, we have a commutative diagram $$\xymatrix{ ([F]_n/[F]_{n+1})^{\Sigma} \ar[rr] && \oplus_J {\mathbb{Z}}^{\Sigma} \\
[F]_n/[F]_{n+1} \ar[u] \ar[rr]^{\oplus_J \mu_J}&& \oplus_J {\mathbb{Z}}\ar[u]}$$ where the bottom horizontal morphism is the inclusion of a direct summand which is a free ${\mathbb{Z}}$ module, the top horizontal morphism is the inclusion of a direct summand which is a free ${\mathbb{Z}}^{\Sigma}$ module, and both vertical maps are pro-$\Sigma$ completions. The map $([F]_n/[F]_{n+1})^{\Sigma} \rightarrow \oplus_J {\mathbb{Z}}^{\Sigma}$ factors through $[F^{\Sigma}]_n/[F^{\Sigma}]_{n+1}$ by the universal property of pro-$\Sigma$ completion, and it follows that $([F]_n/[F]_{n+1})^{\Sigma}\rightarrow [F^{\Sigma}]_n/[F^{\Sigma}]_{n+1} $ is injective. Since $[F]_n$ has dense image in $[F^{\Sigma}]_n$, and since the image of a compact set under a continuous map to a Hausdorff topological space is closed, we have that $([F]_n/[F]_{n+1})^{\Sigma} \rightarrow [F^{\Sigma}]_n/[F^{\Sigma}]_{n+1} $ is surjective. Thus $([F]_n/[F]_{n+1})^{\Sigma} \rightarrow [F^{\Sigma}]_n/[F^{\Sigma}]_{n+1} $ is an isomorphism of profinite groups (because a continuous bijection between compact Hausdorff topological spaces is a homeomorphism). From this it also follows that the top horizontal morphism in (\[UJcomdiag\]) is the inclusion of a direct summand of the form $\oplus {\mathbb{Z}}^{\Sigma}$.
\[freegroupcharaction\] Let $n$ be a positive integer and let $\Sigma$ be the set of primes not dividing $n!$. Let $\pi$ be the pro-$\Sigma$ completion of the free group on the generators $\{ \gamma_1, \gamma_2, \ldots, \gamma_r \}$. Let $G$ be any profinite group and let $\chi: G \rightarrow ({\mathbb{Z}}^{\Sigma})^*$ be a (continuous) character of $G$. Let $G$ act on $\pi$ via $g \gamma_i = \gamma_i^{\chi(g)}$. Then the map $$\delta_n: H^1(G, \pi/[\pi]_{n}) \rightarrow H^2(G, [\pi]_n/[\pi]_{n+1})$$ is given by $n^{th}$ order Massey products in the following manner:
Recall that $U_{n+1}$ denotes the group of $(n+1) \times (n+1)$ upper triangular matrices with diagonal entries equal to $1$, that $a_{i,j}: U_{n+1} \rightarrow {\mathbb{Z}}^{\Sigma}$ denotes the $(i,j)^{th}$ matrix entry, and that $U_{n+1}$ inherits a $G$-action defined by $a_{i,j} (g ( M )) = \chi(g)^{j-i} a_{i,j} (M)$ for all $M$ in $U_{n+1}$ (see \[Masseydefn\]).
For each $J: \{1,2,\ldots, n \} \rightarrow \{1,2,\ldots,r\}$, let $\varphi_J: \pi \rightarrow U_{n+1}$ be the homomorphism defined $$\label{hom_for_defining} a_{i, j} \varphi_J (\gamma_k) =
\begin{cases}
\frac{1}{l!} & j = i+l,\ l>0, \mbox{ and } k = J(v) \mbox{ for all } i \leq v < i+l \\
1 & j = i \\
0 & \mbox{otherwise}
\end{cases}$$
It is a straightforward consequence of the following lemma that $\varphi_J$ is $G$ equivariant:
\[Aldef\][*Let $A_l$ be the matrix in $U_{l+1}$ defined by $a_{i, i+j}(A_l) = \frac{1}{j!}$ for $j>0$. Then for all positive integers $N$, $a_{i,i+j}(A_l^N) = N^j a_{i,i+j}(A_l)$.*]{}
[[*Proof.* ]{}]{}By induction on $l$. For $l=1$, the lemma is clear. By induction and symmetry, it is sufficient to check that $a_{1,l+1}(A_l^N )= N^l a_{1,l+1}(A_l)$. Now induct on $N$, so in particular, $a_{1,1+j}(A_l^{N-1}) = (N-1)^j \frac{1}{j!}$ for $j=0,\ldots,l$. Thus $$a_{1,l+1}(A_l^N )= \sum_{j=0}^{l} a_{1, 1+j}(A_l^{N-1}) \frac{1}{(l-j)!} =$$ $$\sum_{j=0}^{l} (N-1)^j \frac{1}{j!} \frac{1}{(l-j)!} = ((N-1) + 1)^l \frac{1}{l!},$$ completing the proof.
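Lemma \[Aldef\] is also easy to confirm by exact arithmetic; the following sketch does so (the ranges of $l$ and $N$ below are illustrative choices only).

```python
# Exact-arithmetic check of the lemma: for A_l in U_{l+1} with entries
# a_{i,i+j}(A_l) = 1/j!, one has a_{i,i+j}(A_l^N) = N^j / j!.
from fractions import Fraction
from math import factorial

def A(l):
    return [[Fraction(1, factorial(j - i)) if j >= i else Fraction(0)
             for j in range(l + 1)] for i in range(l + 1)]

def matmul(X, Y):
    size = len(X)
    return [[sum((X[i][k] * Y[k][j] for k in range(size)), Fraction(0))
             for j in range(size)] for i in range(size)]

def power(X, N):
    R = [[Fraction(int(i == j)) for j in range(len(X))] for i in range(len(X))]
    for _ in range(N):
        R = matmul(R, X)
    return R

for l in range(1, 6):
    for N in range(1, 8):
        P = power(A(l), N)
        for i in range(l + 1):
            for j in range(l + 1 - i):
                assert P[i][i + j] == Fraction(N ** j, factorial(j))
print("Lemma [Aldef] verified for l <= 5, N <= 7")
```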
Since $[U_{n+1}]_{n+1} = 1$ and $[\overline{U}_{n+1}]_n = 1$, $\varphi_J$ descends to $G$-equivariant homomorphisms $$\pi/[\pi]_{n+1} \rightarrow U_{n+1},$$ $$\pi/[\pi]_n \rightarrow \overline{U}_{n+1},$$ $$[\pi]_n/[\pi]_{n+1} \rightarrow {\mathbb{Z}}^{\Sigma}$$ which we also denote by $\varphi_J$.
The basis $\{\gamma_1, \gamma_2, \ldots, \gamma_r \}$ determines an isomorphism $$\pi/[\pi]_2 \cong {\mathbb{Z}}^{\Sigma}(\chi)^r,$$ and therefore an isomorphism $H^1(G, \pi^{ab}) \cong H^1(G, {\mathbb{Z}}^{\Sigma}(\chi))^r$. An element $x$ of $H^1(G, \pi/[\pi]_{n})$ projects to an element of $H^1(G, \pi^{ab})$. Let $x_1 \oplus \ldots \oplus x_r$ in $H^1(G, {\mathbb{Z}}^{\Sigma}(\chi))^r$ denote the image of the projection.
Note that applying $a_{i,i+1} \varphi_J$ to a cocycle $x: G \rightarrow \pi/[\pi]_{n}$ produces a cocycle representing $x_{J(i)}$. Furthermore, $$\{ - a_{i,j} \varphi_J x : i< j, (i,j) \neq (1,n+1) \}$$ is a defining system for the Massey product $\langle -x_{J(1)}, -x_{J(2)}, \ldots, -x_{J(n)} \rangle$.
\[nomonodromy\_delta\_n\][*For any cocycle $x: G \rightarrow \pi/[\pi]_n$, let $[x]$ denote the corresponding element of $H^1(G, \pi/[\pi]_n)$. Then $\delta_n ([x]) = 0$ if and only if $\langle - x_{J(1)}, - x_{J(2)}, \ldots,- x_{J(n)} \rangle = 0$ for every $J: \{1,2,\ldots, n \} \rightarrow \{1,2,\ldots,r\}$, where the Massey product is taken with respect to the defining system $\{ - a_{i,j} \varphi_J x : i< j, (i,j) \neq (1,n+1) \}$.*]{}
[[*Proof.* ]{}]{}Choose $J: \{1,2,\ldots, n \} \rightarrow \{1,2,\ldots,r\}$. $\varphi_J$ induces a commutative diagram $$\label{Utopi}
\xymatrix{1 \ar[r] & {\mathbb{Z}}^{\Sigma}(\chi^n) \ar[r] & U_{n+1} \rtimes G \ar[r] & \overline{U}_{n+1} \rtimes G \ar[r] & 1 \\
1 \ar[r] & [\pi]_n/[\pi]_{n+1} \ar[r] \ar[u] &\pi/[\pi]_{n+1} \rtimes G \ar[r] \ar[u] & \pi/[\pi]_{n} \rtimes G \ar[u] \ar[r] & 1}$$ All the vertical morphisms in (\[Utopi\]) will be denoted by $\varphi_J$. Let $\kappa$ denote the element of $H^2(\pi/[\pi]_{n} \rtimes G, [\pi]_n/[\pi]_{n+1})$ classifying the bottom horizontal row, and let $\kappa'$ denote the element of $H^2(\overline{U_{n+1}} \rtimes G,{\mathbb{Z}}^{\Sigma}(\chi^n))$ classifying the top horizontal row (c.f. \[H2kelement\]). The morphism of short exact sequences (\[Utopi\]) gives the equality $(\varphi_J)_* \kappa = \varphi_J^* \kappa'$ in $H^2(\pi/[\pi]_{n} \rtimes G,{\mathbb{Z}}^{\Sigma}(\chi^n))$.
Choose a cocycle $x: G \rightarrow \pi/[\pi]_n$. Let $x \rtimes \operatorname{id}: G \rightarrow \pi/[\pi]_{n} \rtimes G$ denote the homomorphism $g \mapsto x(g) \rtimes g$ induced by the twisted homomorphism $x$. Then, $\delta_n([x]) = (x \rtimes \operatorname{id})^* \kappa$. Since $(\varphi_J)_* \kappa = \varphi_J^* \kappa'$, we have that $(\varphi_J)_* \delta_n([x]) = (\varphi_J \circ (x \rtimes \operatorname{id}))^* \kappa'$. By \[Masseydefn\], $(\varphi_J \circ (x \rtimes \operatorname{id}))^* \kappa'$ is the Massey product $\langle - x_{J(1)},- x_{J(2)}, \ldots,- x_{J(n)} \rangle$ computed with the defining system $\{- a_{i,j} \varphi_J x : i< j, (i,j) \neq (1,n+1) \}$.
It is therefore sufficient to see that $$\oplus_J (\varphi_J)_*: H^2(G, [\pi]_n/[\pi]_{n+1}) \rightarrow H^2(G, \oplus_J {\mathbb{Z}}^{\Sigma}(\chi^n))$$ is injective. This follows from a result of Dwyer: let $\mu_J$ denote the Magnus coefficient as in \[Magnus\_embedding\_recall\]. By [@Dwyer Lem 4.2], $\mu_J (\gamma)= \varphi_J (\gamma)$ for any element $\gamma$ in the free group generated by the $\gamma_i$, and the equality $\mu_J = \varphi_J$ for any element of $\pi$ follows by continuity. Thus the map $\oplus_J (\varphi_J): [\pi]_n/[\pi]_{n+1} \rightarrow \oplus_J {\mathbb{Z}}^{\Sigma}(\chi^n)$ is the split injection $\oplus_J \mu_J$ induced by the homogeneous degree $n$ piece of the Magnus embedding – see (\[UJcomdiag\]) in \[Magnus\_embedding\_recall\].
Thus, if the element $x_1 \oplus x_2 \oplus \ldots \oplus x_r$ of $H^1(G, \pi^{ab})$ lifts to $x$ in $$H^1(G, \pi/[\pi]_{n+1}),$$ all the order $n$ Massey products $$\langle -x_{J(1)}, - x_{J(2)}, \ldots, - x_{J(n)}\rangle = (\varphi_J)_* \delta_n(x)$$ vanish. Furthermore, if the vanishing of the order $n$ Massey products occurs with respect to defining systems which are compatible in the sense of Proposition \[nomonodromy\_delta\_n\], the converse holds as well.
\[monodromy\_deltan\_bpoint\] Choose a positive integer $n$, and let $\Sigma$ denote the set of primes not dividing $n!$. Let $\pi$ be the pro-$\Sigma$ completion of the free group on two generators $\{ \gamma_1, \gamma_2\}$. Let $\chi: G \rightarrow ({\mathbb{Z}}^{\Sigma})^*$ be a (continuous) character of a profinite group $G$. Let $G$ act on $\pi$ via $$\begin{aligned}
\label{G_actionpiP1-}
g(\gamma_1) &=\gamma_1^{\chi(g)} \\ g(\gamma_2) &=\mathfrak{f}(g)^{-1}\gamma_2^{\chi(g)}\mathfrak{f}(g), \nonumber
\end{aligned}$$
where $\mathfrak{f}: G \rightarrow [\pi]_2$ is a cocycle. For instance, the Galois action on the pro-$\Sigma$ étale fundamental group of ${\mathbb P^1_{k} - \{0,1,\infty \}}$ has this form with respect to an appropriate base point. (See, for instance, [@Ihara_GT]. This situation will be considered in section \[application\_section\].)
Then there are obstructions to $\delta_n = 0$ given by order $n$ Massey products:
Choose $i_0$ in $ \{1,2,\ldots,n\}$ and let $J: \{1,2,\ldots,n\} \rightarrow \{1,2\}$ be the function $J(i_0) = 2$, $J(j) = 1$ for $j\neq i_0$. Let $\varphi_J: \pi \rightarrow U_{n+1}$ be the homomorphism given by equation (\[hom\_for\_defining\]) in \[freegroupcharaction\]. The next two lemmas are used to show that $\varphi_J$ is $G$-equivariant.
\[Ui0j0normalcommutative\][*Let $$U_{i_0, j_0} = \{ M \in U_{n+1}: a_{ij}(M) = 0 \textrm{ for } i \neq j \textrm{ unless } i \leq i_0 \textrm{ and } j \geq j_0 \}$$ Then $U_{i_0, j_0}$ is a normal subgroup of $U_{n+1}$ which is commutative for $i_0 < j_0$.*]{}
[[*Proof.* ]{}]{}It is straightforward to see that $U_{i_0, j_0}$ is a subgroup. (Indeed, it suffices to note that for $M_1$,$M_2$ in $U_{i_0, j_0}$, we have $a_{ij} ((M_1 -1)(M_2 -1)) = 0$ for $i>i_0$ or $j<j_0$; for instance, this implies that $U_{i_0, j_0}$ is closed under inverses because $M^{-1} = 1 + \sum_{k \geq 1} (-1)^k (M-1)^k$.)
To see that $U_{i_0, j_0}$ is normal, choose $Z$ in $U_{n+1}$ and $M$ in $U_{i_0, j_0}$. Note that $$a_{ij}(Z (M-1) Z^{-1}) = \sum_{k,k'} a_{ik}(Z) a_{k k'} (M-1) a_{k'j} (Z^{-1})$$ which is only non-zero if there exist $k$ and $k'$ such that $ i \leq k \leq k' \leq j$, $k \leq i_0$, and $k' \geq j_0$. This can only occur for $i \leq i_0$ and $j \geq j_0$. So, $U_{i_0, j_0}$ is normal.
Suppose that $i_0 < j_0$, and let $M_1$, $M_2$ be in $U_{i_0, j_0}$. To see that $U_{i_0, j_0}$ is commutative, it suffices to see that $(M_1 -1) (M_2 -1) = 0$. To see this equality, note that for all $i,j,k$, we have $a_{i k}(M_1 - 1) a_{k j} (M_2 -1) = 0$, because if $k< j_0$, then $a_{i k}(M_1 - 1) =0$, and if $k \geq j_0$, then $k > i_0$, whence $ a_{k j} (M_2 -1) = 0$.
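Both statements of Lemma \[Ui0j0normalcommutative\] can also be checked on random examples; the sketch below (exact rational arithmetic; the sizes $n=5$, $i_0=2$, $j_0=4$ and the entry ranges are illustrative assumptions) does this for a random conjugation and a random pair of elements of $U_{i_0,j_0}$.

```python
# Random exact-arithmetic check: U_{i0,j0} is normal in U_{n+1}, and commutative
# when i0 < j0.  Sizes and entries are illustrative only.
import random
from fractions import Fraction

n, i0, j0 = 5, 2, 4                      # work in U_{n+1} (6x6); i0 < j0, 1-indexed

def identity(size):
    return [[Fraction(int(i == j)) for j in range(size)] for i in range(size)]

def matmul(A, B):
    size = len(A)
    return [[sum((A[i][k] * B[k][j] for k in range(size)), Fraction(0))
             for j in range(size)] for i in range(size)]

def inverse(M):
    # M = 1 + X with X strictly upper triangular (nilpotent): M^{-1} = sum (-X)^k
    size = len(M)
    I = identity(size)
    X = [[M[i][j] - I[i][j] for j in range(size)] for i in range(size)]
    R, P = identity(size), identity(size)
    for k in range(1, size):
        P = matmul(P, X)
        R = [[R[i][j] + (-1) ** k * P[i][j] for j in range(size)] for i in range(size)]
    return R

def random_unipotent():
    M = identity(n + 1)
    for i in range(n + 1):
        for j in range(i + 1, n + 1):
            M[i][j] = Fraction(random.randint(-3, 3))
    return M

def random_in_U_i0j0():
    # nonzero off-diagonal entries only where (1-indexed) i <= i0 and j >= j0
    M = identity(n + 1)
    for i in range(i0):
        for j in range(j0 - 1, n + 1):
            M[i][j] = Fraction(random.randint(-3, 3))
    return M

def in_U_i0j0(M):
    return all(M[i][j] == 0 for i in range(n + 1) for j in range(n + 1)
               if i != j and not (i + 1 <= i0 and j + 1 >= j0))

M1, M2, Z = random_in_U_i0j0(), random_in_U_i0j0(), random_unipotent()

assert in_U_i0j0(matmul(matmul(Z, M1), inverse(Z)))   # normality
assert matmul(M1, M2) == matmul(M2, M1)               # commutativity since i0 < j0
print("normality and commutativity hold on this random example")
```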
[*$\varphi_J(\gamma_2)$ commutes with any element of $\varphi_J([\pi]_2)$.*]{}
[[*Proof.* ]{}]{}Let $X = \varphi_J(\gamma_1)$, $Y = \varphi_J(\gamma_2)$, and $\varpi$ be the closure of the subgroup generated by $X$ and $Y$. By Lemma \[Ui0j0normalcommutative\], it is sufficient to show that $[\varpi]_2$ is contained in $U_{i_0, i_0 + 1}$.
$[\varpi]_2$ is topologically generated by elements of the form $$[\cdots[[X,Y], Z_1], Z_2], \ldots, Z_k]$$ where $Z_i$ is either $X$ or $Y$ and $k = 0,1,\ldots$. By Lemma \[Ui0j0normalcommutative\], if $W$ is in $U_{i_0, i_0 + 1}$, so is $[W,Z]$ for any $Z$ in $\varpi$. Since $Y$ is in $U_{i_0, i_0 + 1}$, it follows that $[\cdots[[X,Y], Z_1], Z_2], \ldots, Z_k]$ is as well.
In particular, $\varphi_J(g(\gamma_2)) = \varphi_J(\gamma_2)^{\chi(g)}$, so $\varphi_J$ is $G$-equivariant by \[freegroupcharaction\].
Since $\varphi_J$ is $G$-equivariant, we have the commutative diagram (\[Utopi\]). Choose a cocycle $x: G \rightarrow \pi/[\pi]_n$. $(\varphi_J)_* \delta_n([x]) $ is the Massey product $\langle - a_{1,2} \varphi_J x ,- a_{2,3} \varphi_J x , \ldots,- a_{n,n+1} \varphi_J x \rangle$ computed with the defining system $\{- a_{i,j} \varphi_J x : i< j, (i,j) \neq (1,n+1) \}$ by \[Masseydefn\] (see the proof of Proposition \[nomonodromy\_delta\_n\]).
Note that $\{\gamma_1,\gamma_2\}$ is a ${\mathbb{Z}}^{\Sigma}(\chi)$ basis for $\pi/[\pi]_2$, giving an isomorphism $H^1(G, \pi^{ab}) \cong H^1(G, {\mathbb{Z}}^{\Sigma}(\chi))^2$. As above, an element $x$ of $H^1(G, \pi/[\pi]_{n})$ projects to an element of $H^1(G, \pi^{ab})$. Let $x_1 \oplus x_2$ in $H^1(G, {\mathbb{Z}}^{\Sigma}(\chi))^2$ denote the image of the projection. Note that $- a_{j,j+1} \varphi_J x = x_{J(j)}$. We have therefore shown:
\[muJdeltanwithmonodromy\][*Let $x: G \rightarrow \pi/[\pi]_n$ be a cocycle, and let $[x]$ denote the corresponding cohomology class. If $\delta_n([x]) = 0$, then $\langle - x_{J(1)}, - x_{J(2)}, \ldots, - x_{J(n)} \rangle = 0$ where this Massey product is taken with respect to the defining system $\{ - a_{i,j} \varphi_J x : i< j, (i,j) \neq (1,n+1) \}$ defined above.*]{}
As in \[freegroupcharaction\], the Massey product in Proposition \[muJdeltanwithmonodromy\] is $\mu_J \delta_n$, where $\mu_J$ is the Magnus coefficient defined in \[Magnus\_embedding\_recall\]. In other words, Proposition \[muJdeltanwithmonodromy\] computes $\mu_J \delta_n$ for all functions $J$ which only assume the value $2$ once.
Application to $\pi_1$ sections of punctured ${\mathbb P}^1$ and Massey products in Galois cohomology {#application_section}
=====================================================================================================
\[Notation3\] Let $k$ be a field of characteristic $0$ and let $\overline{k}$ be an algebraic closure of $k$. Let $G_k= {\operatorname{Gal}}(\overline{k}/k)$ denote the absolute Galois group of $k$.
\[etpi1rev\] A geometric point $b$ of a scheme $X$ (i.e. a map $b: {\operatorname{Spec}}\Omega \rightarrow X$ where $\Omega$ is an algebraically closed field) determines a functor from the finite étale covers of $X$ to the category of sets, called a [*fiber functor*]{}. Given two geometric points $b_1$, $b_2$, define ${\operatorname{Path}}(b_1,b_2)$ to be the natural transformations from the fiber functor associated to $b_1$ to the fiber functor associated to $b_2$. ${\operatorname{Path}}(b_1,b_2)$ naturally has the structure of a profinite set. Path composition will be in “functional order," so given $\wp_1$ in ${\operatorname{Path}}(b_1,b_2)$ and $\wp_2$ in ${\operatorname{Path}}(b_2,b_3)$, we have $\wp_2 \wp_1$ in ${\operatorname{Path}}(b_1,b_3)$. The étale fundamental group $\pi_1(X,b)$ is the profinite group ${\operatorname{Path}}(b,b)$ (see [@sga1] [@Mezard]).
Suppose that $X$ is defined over a field $k$. Let $\overline{k}$ denote a fixed algebraic closure of $k$, and let $X_{\overline{k}} = X \times_{{\operatorname{Spec}}k} {\operatorname{Spec}}\overline{k}$ denote the base change of $X$ to $\overline{k}$. A rational point ${\operatorname{Spec}}k \rightarrow X$ gives rise to a geometric point ${\operatorname{Spec}}\overline{k} \rightarrow X_{\overline{k}}$ of $X_{\overline{k}}$, and there is a natural action of $G_k$ on the associated fiber functor, induced by the commutative diagram $$\xymatrix{ {\operatorname{Spec}}\overline{k} \ar[r]^{g} \ar[d] & {\operatorname{Spec}}\overline{k} \ar[d] \\ X_{\overline{k}} \ar[r]^{g} & X_{\overline{k}} }$$ where $g$ is any element of $G_k$.
Now suppose that $X$ is a smooth, geometrically connected curve over $k$. Let $\overline{X}$ denote the smooth compactification, and let $x: {\operatorname{Spec}}k \rightarrow \overline{X}$ be a rational point. The completed local ring of $\overline{X}$ at the image of $x$ is isomorphic to $k[[\epsilon]]$ and the choice of such an isomorphism gives a map ${\operatorname{Spec}}k((\epsilon)) \rightarrow X$, where $k((\epsilon))$ denotes the field of Laurent power series. Such a map will be called a [*rational tangential point*]{}. To a rational tangential point, we can naturally associate a map ${\operatorname{Spec}}\overline{k}((\epsilon)) \rightarrow X_{\overline{k}}$. Since $k$ is characteristic $0$, the field of Puiseux series $\cup_{n \in {\mathbb{Z}}_{> 0}} \overline{k} ((\epsilon^{1/n}))$ is algebraically closed. Embedding $\overline{k}((\epsilon))$ in $\cup_{n \in {\mathbb{Z}}_{> 0}} \overline{k} ((\epsilon^{1/n}))$ in the obvious manner allows us to associate to a rational tangential point a corresponding geometric point ${\operatorname{Spec}}\cup_{n \in {\mathbb{Z}}_{> 0}} \overline{k} ((\epsilon^{1/n})) \rightarrow X_{\overline{k}}$ and fiber functor. There is a $G_k$ action on this fiber functor given by the previous commutative diagram with the field of Puiseux series replacing $\overline{k}$, and where $g$ in $G_k$ is taken to act on the field of Puiseux series via the action on the $\overline{k}$ coefficients. Tangential points are discussed in greater generality and more detail in [@Deligne §15] and [@Nakamura].
\[tgtvecA1\] Let $U$ be an open subset of $\mathbb{A}^1_{k} = {\operatorname{Spec}}k[z]$. A tangent vector of $\mathbb{A}^1_{k}$ $${\operatorname{Spec}}k[\epsilon]/\langle \epsilon^2 \rangle \rightarrow \mathbb{A}^1_{k}$$ $$z \mapsto a + v \epsilon$$ where $a$ in $k$, $v$ in $k^*$, gives a rational tangential point ${\operatorname{Spec}}k((\epsilon)) \rightarrow U$ of $U$ by $z \mapsto a + v \epsilon $.
By a [*rational base point*]{}, we will mean either a rational point or a rational tangential point, and by a slight abuse of notation, [*rational base point*]{} will also refer to the geometric points given above and their fiber functors.
\[kappadefn\] Let $X$ be a smooth curve over a field $k$. Let $X^{bp}(k)$ denote the set of rational base points of $X$, and assume that $X^{bp}(k) \neq \emptyset$. Fix $b$ in $X^{bp}(k)$. There is a “non-abelian Kummer map" based at $b$ $$\kappa_b: X^{bp}(k) \rightarrow H^1(G_k, \pi_1(X_{\overline{k}}, b))$$ defined as follows: for $x$ in $X^{bp}(k)$, choose $\wp$ in ${\operatorname{Path}}(b, x)$ and define a $1$-cocycle $\kappa_{(b, \wp)} (x): G_k \rightarrow \pi_1(X_{\overline{k}}, b)$ by $$\label{eqkappadef}\kappa_{(b, \wp)} (x) (g) = \wp^{-1} (g \wp).$$ \[kappagammadefn\] The cohomology class of this cocycle is independent of the choice of $\wp$ and $\kappa_b (x)$ is defined to be this cohomology class. When the base point is clear, $\kappa_b$ will also be denoted by $\kappa$.
Note that associated to a rational tangential point of $X$, there is a tangent vector $${\operatorname{Spec}}k[\epsilon]/\langle \epsilon^2 \rangle \rightarrow \overline{X}$$ of the smooth compactification (see the above definition of a rational tangential point; the tangent vector is induced by the chosen isomorphism of $k[[\epsilon]]$ with the completed local ring of $\overline{X}$). It is not difficult to check that the images under $\kappa_b$ of two rational tangential points with the same tangent vector are equal (see [@PIA p 6]).
Let $k$ be a field of characteristic $0$, and choose an isomorphism of the roots of unity in $\overline{k}$ with ${\hat{\mathbb{Z}}}(\chi)$, where $\chi$ denotes the cyclotomic character. The short exact sequence $$\xymatrix{1 \ar[r] & {\mathbb{Z}}/m(\chi) \ar[r] & \overline{k}^* \ar[r]^{x \mapsto x^m} & \overline{k}^* \ar[r] & 1}$$ of $G_k$ modules gives a boundary map $k^* \rightarrow H^1(G_k, {\mathbb{Z}}/m(\chi))$. Letting $m$ vary gives the Kummer map $$k^* \rightarrow H^1(G_k, {\mathbb{Z}}^{\Sigma}(\chi))$$ We will adopt the notational convention that an element of $k^*$ will also denote the corresponding class in $H^1(G_k, {\mathbb{Z}}^{\Sigma}(\chi))$.
\[tgtl\_pts\_0\_Gm\][*For ${\mathbb{G}}_m$ based at the rational point $1$, $\kappa(x) = x$ and $\kappa(0 + v \epsilon) = v$ for all $x,v$ in $k^*$.*]{}
We sketch a proof of this well-known fact.
[[*Proof.* ]{}]{} The connected finite étale covers of ${\mathbb{G}}_{m, \overline{k}} = {\operatorname{Spec}}\overline{k}[z, \frac{1}{z}]$ are $\xymatrix{{\mathbb{G}}_{m, \overline{k}} \ar[rr]^{z \mapsto z^n} &&{\mathbb{G}}_{m, \overline{k}}}$ for $n$ in ${\mathbb{Z}}_{>0}$. Let $\mathcal{F} (0+ v \epsilon, n)$ denote the fiber of $z \mapsto z^n$ over (the geometric point associated to) $0 + v \epsilon$, where $0 + v \epsilon$ denotes the tangent vector $ k[\epsilon]/\langle \epsilon^2 \rangle \rightarrow {\operatorname{Spec}}k[z] $ given by $z \mapsto v \epsilon$ (cf. \[tgtvecA1\]). Note that the $n^{th}$ roots of $v$ are in bijection with $\mathcal{F} (0+ v \epsilon, n)$; namely, an $n^{th}$ root $\sqrt[n]{v}$ of $v$ gives a map $\overline{k}[z^{1/n}, \frac{1}{z}] \rightarrow \cup_{n' \in {\mathbb{Z}}_{>0}} \overline{k} (( \epsilon^{1/n'}))$ which is tautologically a point of this fiber. Define $\mathcal{F} (1, n)$ similarly, and note that there is an identification of $\mathcal{F} (1, n)$ with the $n^{th}$ roots of unity in $\overline{k}$. A choice $\{ \sqrt[n]{v} : n \in {\mathbb{Z}}_{>0}\}$ of compatible $n^{th}$ roots of $v$ gives rise to $\wp$ in ${\operatorname{Path}}(1, 0 + v \epsilon)$; $\wp$ is the natural transformation such that the induced map $\mathcal{F} (1, n) \rightarrow \mathcal{F} (0+ v \epsilon, n)$ takes $1$ to $\sqrt[n]{v}$. It follows that $g \wp$ takes $g 1$ to $g \sqrt[n]{v}$, from which we see that $\kappa(0 + v \epsilon) = v$. The equality $\kappa(x) = x$ is shown similarly.
For a topological space $X$ with a $G$ action and fixed points $b$, $x$ let ${\operatorname{Path}}(b,x)$ denote the space of paths from $b$ to $x$. Note that ${\operatorname{Path}}(b,x)$ has a $G$ action. We can therefore define a map $\kappa$ from the fixed points of $X$ to $H^1(G, \pi_1(X,b))$ by (\[eqkappadef\]) given above. For a $K(\pi,1)$ with $G$ action, $\kappa$ is $\pi_0$ applied to the canonical map from fixed points to homotopy fixed points.
\[Gequiv\_baseptchange\_pi1\] Let $X$ be a scheme over $k$, and let $b_1$, $b_2$ be rational base points. A choice of path $\wp$ in ${\operatorname{Path}}(b_1, b_2)$ gives an isomorphism of profinite groups $\theta: \pi_1(X_{\overline{k}}, b_2) \rightarrow \pi_1(X_{\overline{k}}, b_1)$, defined $$\theta ( \gamma ) = \wp^{-1} \gamma \wp.$$ Note that $\theta$ is not $G_k$ equivariant. Rather, for any $g$ in $G_k$, $$g \theta (\gamma) = \kappa_{(b_1, \wp)} (b_2)^{-1} \theta(g\gamma) \kappa_{(b_1, \wp)} (b_2)$$ (cf. \[kappagammadefn\] for the definition of $\kappa_{(b_1, \wp)} (b_2))$.
\[Gkinertiapreserve\] Let $X$ be a smooth curve over $k$, and let $\overline{X}$ be its smooth compactification. Suppose that $x$ is a rational point of $\overline{X} - X$. Choose a rational tangential base point $b$ at $x$. Let $\gamma$ in $\pi_1(X_{\overline{k}}, b)$ be the path determined by a small loop around the puncture at $x$. Then $\gamma$ generates the inertia group at $x$ ([@sga1 XIII 2.12]), and it follows that for any $g$ in $G_k$, $g \gamma = \gamma^{m(g)}$ for some $m(g)$ in ${\hat{\mathbb{Z}}}$, where ${\hat{\mathbb{Z}}}$ denotes the profinite completion of ${\mathbb{Z}}$. Furthermore, $g \gamma = \gamma^{\chi(g)}$ where $\chi: G_k \rightarrow {\hat{\mathbb{Z}}}^*$ is the cyclotomic character. One way to see this last assertion is to note that it is sufficient to assume that $X \cup x$ is non-proper and show that the kernel of $\pi_1(X_{\overline{k}}, b)^{ab} \rightarrow \pi_1((X \cup x)_{\overline{k}}, b)^{ab}$ is ${\hat{\mathbb{Z}}}(\chi)$. Denote this kernel by $M$. As a profinite group, $M \cong {\hat{\mathbb{Z}}}$. ${\operatorname{Hom}}(\pi_1(X_{\overline{k}}, b)^{ab}, {\hat{\mathbb{Z}}})$ is the étale cohomology group $H^1(X_{\overline{k}}, {\hat{\mathbb{Z}}})$ and the analogous statement holds with $(X \cup x)_{\overline{k}}$ replacing $X_{\overline{k}}$. By the long exact sequence in cohomology of the pair $((X \cup x)_{\overline{k}}, X_{\overline{k}})$ and the purity isomorphism $$H^*((X \cup x)_{\overline{k}}, X_{\overline{k}}, {\hat{\mathbb{Z}}}) \cong H^{*-2}(x_{\overline{k}}, {\hat{\mathbb{Z}}}(\chi^{-1}))= \begin{cases}{\hat{\mathbb{Z}}}(\chi^{-1}) & \text{if $*=2$}
\\
0 &\text{otherwise,}
\end{cases}$$ we have a short exact sequence $$1 \rightarrow {\operatorname{Hom}}(\pi_1((X \cup x)_{\overline{k}}, b)^{ab}, {\hat{\mathbb{Z}}})\rightarrow {\operatorname{Hom}}(\pi_1(X_{\overline{k}}, b)^{ab}, {\hat{\mathbb{Z}}})\rightarrow {\hat{\mathbb{Z}}}(\chi^{-1}) \rightarrow 1$$ It follows that $M \cong {\hat{\mathbb{Z}}}(\chi)$ as $G_k$ modules as desired.
\[Gkaction\_puncturedP1\_bpoint\] Let $b_i$ be a rational tangential base point of ${\mathbb P}^1_{k} - \{\infty, a_1, a_2, \ldots, a_n \}$ at $a_i$. Let $\wp_i$ in ${\operatorname{Path}}(b_1, b_i)$ be a path from $b_1$ to $b_i$ for $i=2, 3, \ldots, n$, and let $\wp_1$ be the trivial path from $b_1$ to itself. Let $\ell_i$ in ${\operatorname{Path}}(b_i, b_i)$ be the path determined by a small loop around the puncture at $a_i$. The loops based at $b_1$ defined $$\gamma_i =\wp_i^{-1} \ell_i \wp_i$$ are free topological generators for $\pi_1({\mathbb P}^1_{\overline{k}} - \{\infty, a_1, a_2, \ldots, a_n \}, b_1)$, and it follows from \[Gequiv\_baseptchange\_pi1\] and \[Gkinertiapreserve\] that the $G_k$ action on $$\pi_1({\mathbb P}^1_{\overline{k}} - \{\infty, a_1, a_2, \ldots, a_n\}, b_1)$$ has the form $$g \gamma_i = \mathfrak{f}_i(g)^{-1} \gamma_i^{\chi(g)} \mathfrak{f}_i(g)$$ where $\mathfrak{f}_i = \kappa_{(b_1, \wp_i)} (b_i)$ and $g$ is any element of $G_k$.
Let $\pi$ abbreviate $\pi_1({\mathbb P}^1_{\overline{k}} - \{\infty, a_1, a_2, \ldots, a_n \}, b_1)$. Choose $v_i$ in $k^*$ for $i = 1, \ldots, n$, and suppose that $b_i$ is a rational tangential point associated to the tangent vector $a_i + v_i \epsilon$ (see \[tgtvecA1\] for this notation). The image of $\mathfrak{f}_i$ in $H^1(G_k, \pi^{ab})$ can be expressed in terms of the Kummer map: the basis $\{ \gamma_1, \gamma_2, \ldots, \gamma_n \}$ of $\pi^{ab}$ as a free ${\hat{\mathbb{Z}}}(\chi)$ module determines an isomorphism $H^1(G_k, \pi^{ab}) \rightarrow H^1(G_k, {\hat{\mathbb{Z}}}(\chi))^n$. Let $(\mathfrak{f}_i)^{ab}_j$ denote the image of $\mathfrak{f}_i$ in the $j^{th}$ factor of $H^1(G_k, {\hat{\mathbb{Z}}}(\chi))$. Let $\kappa_j$ denote the map defined in \[kappadefn\] for $\mathbb{A}^1 - \{a_j\}$ based at $a_j + 1$. Since the étale fundamental group of $\mathbb{A}^1_{\overline{k}} - \{a_j\}$ is abelian, there are canonical isomorphisms between the fundamental groups of this scheme taken with respect to different base points. In particular, $\gamma_j$ determines an isomorphism of this fundamental group with ${\hat{\mathbb{Z}}}(\chi)$, and $\kappa_j$ can be considered to take values in $H^1(G_k, {\hat{\mathbb{Z}}}(\chi))$. Since the cohomology class of $\mathfrak{f}_i$ could be computed by choosing a path from $b_1$ to $b_i$ passing through $a_j + 1$, $(\mathfrak{f}_i)^{ab}_j = \kappa_j (a_i + v_i \epsilon) - \kappa_j (a_1 + v_1 \epsilon)$. By functoriality of $\kappa$ and Lemma \[tgtl\_pts\_0\_Gm\], $$\kappa_j (a + v \epsilon) = \begin{cases}
a - a_j & \mbox{ if } a \neq a_j \\
v & \mbox{ if } a = a_j
\end{cases}$$ Thus $$\label{kappaabcomp} (\mathfrak{f}_i)^{ab}_j = \begin{cases}
v_i/(a_1-a_i) & \mbox{ if } j = i \\
(a_i - a_1)/v_1 & \mbox{ if } j = 1 \\
(a_i - a_j)/(a_1 - a_j) & \mbox{ if } j \neq 1,i \\
\end{cases}$$
In particular, it follows that if $v_1 = a_i -a_1 = -v_i$, then the quotient of $\pi^{\Sigma}$ by $\langle \gamma_j: j \neq 1,i \rangle$ is a pro-$\Sigma$ group with $G_k$ action of the form considered in \[monodromy\_deltan\_bpoint\]. (Here, $\langle \gamma_j: j \neq 1,i \rangle$ denotes the closed normal subgroup generated by the $\gamma_j$ for $j \neq 1,i$.)
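As a sanity check of this claim, the following sketch evaluates (\[kappaabcomp\]) with the choice $v_1 = a_i - a_1 = -v_i$ and confirms that the $j=1$ and $j=i$ coordinates of $(\mathfrak{f}_i)^{ab}$ equal $1$ in $k^*$, hence die under the Kummer map (the puncture values used are purely hypothetical).

```python
# With v_1 = a_i - a_1 = -v_i, the j = 1 and j = i coordinates of (f_i)^{ab}
# equal 1 in k^*, per (kappaabcomp).  Illustrative rational values only.
from fractions import Fraction as F

def f_ab_coordinate(i, j, a, v):
    """Coordinate j of (f_i)^{ab} as an element of k^*, per (kappaabcomp); 1-indexed."""
    if j == i:
        return v[i] / (a[1] - a[i])
    if j == 1:
        return (a[i] - a[1]) / v[1]
    return (a[i] - a[j]) / (a[1] - a[j])

a = {1: F(0), 2: F(1), 3: F(5), 4: F(-2)}      # hypothetical punctures a_1, ..., a_4
i = 3                                          # the distinguished index
v = {1: a[i] - a[1], i: -(a[i] - a[1])}        # the choice v_1 = a_i - a_1 = -v_i

assert f_ab_coordinate(i, 1, a, v) == 1
assert f_ab_coordinate(i, i, a, v) == 1
print({j: f_ab_coordinate(i, j, a, v) for j in sorted(a)})
```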
Much more interesting information is known about the $\mathfrak{f}_i$ due to contributions of Anderson, Coleman, Deligne, Ihara, Kaneko, and Yukinari – see for instance, [@Ihara_Braids_Gal_grps 6.3 Thm p.115].
\[bpointrestriction\_pisec\] Let $X= {\mathbb P^1_{k} - \{0,1,\infty \}}$, and base $X$ at $0 + 1 \epsilon$, where $0 + 1 \epsilon$ denotes the tangent vector $ k[\epsilon]/\langle \epsilon^2 \rangle \rightarrow {\operatorname{Spec}}k[z] $ given by $z \mapsto \epsilon$ (cf. \[tgtvecA1\]). Fix a positive integer $n \geq 2$ and let $\pi= \pi_1(X_{\overline{k}})^{\Sigma}$, where $\Sigma$ denotes the set of primes not dividing $n!$. By (\[kappaabcomp\]), the presentation of $\pi_1(X_{\overline{k}})$ given in \[Gkaction\_puncturedP1\_bpoint\] with its $G_k$ action is of the form (\[G\_actionpiP1-\]). Thus, the calculation of $\mu_J \delta_n$ given in \[monodromy\_deltan\_bpoint\] places restrictions on the sections of $\pi_1 (X) \rightarrow G_k$.
The ${\mathbb{Z}}^{\Sigma}(\chi)$ basis $\{ \gamma_1, \gamma_2 \}$ of $\pi^{ab}$ determines an isomorphism $$H^1(G_k, \pi^{ab}) \cong H^1(G_k, {\mathbb{Z}}^{\Sigma}(\chi))^2.$$ The quotient map $\pi/[\pi]_{n+1} \rightarrow \pi^{ab}$ therefore defines a map $$H^1(G_k, \pi/[\pi]_{n+1}) \rightarrow H^1(G_k, {\mathbb{Z}}^{\Sigma}(\chi))^2.$$ Note that the sections of $\pi_1 (X) \rightarrow G_k$ are in natural bijection with $H^1(G_k, \pi)$. These sections map to $H^1(G_k, {\mathbb{Z}}^{\Sigma}(\chi))^2$ and the image is restricted by the following corollary of Proposition \[muJdeltanwithmonodromy\].
\[pmkpi1restcor\][*The image of $H^1(G_k, \pi/[\pi]_{n+1}) \rightarrow H^1(G_k, {\mathbb{Z}}^{\Sigma}(\chi))^2$ lies in the subset of elements $x_1 \times x_2$ such that $$\langle -x_{J(1)}, -x_{J(2)}, \ldots, -x_{J(n)} \rangle = 0$$ for all $J: \{1,2,\ldots,n \} \rightarrow \{1,2\}$ which only assume the value $2$ once.*]{}
[[*Proof.* ]{}]{}An element of $H^1(G_k, \pi/[\pi]_{n+1})$ determines an element $s_n$ of $H^1(G_k, \pi/[\pi]_n)$ satisfying $\delta_n(s_n) = 0$. Applying Proposition \[muJdeltanwithmonodromy\] to $s_n$ shows the claim.
For $n=2,3$ these restrictions are studied in [@PIA].
Note that in the presentation of $$\pi_1({\mathbb P}^1_{\overline{k}} - \{\infty, a_1, a_2, \ldots, a_m \})$$ given in \[Gkaction\_puncturedP1\_bpoint\], it is only possible to arrange that one of the $\mathfrak{f}_i$ for $i>1$ has image contained in the commutator subgroup, so the restrictions on $\pi_1$ sections for ${\mathbb P}^1_{k} - \{\infty, a_1, a_2, \ldots, a_m \}$ placed by Proposition \[muJdeltanwithmonodromy\] will be pulled back from a map to ${\mathbb P^1_{k} - \{0,1,\infty \}}$.
\[pi1restremarks\] Sharifi [@Sharifi Thm 4.3] shows the vanishing of the $n^{th}$ order Massey products $\langle x,x, \ldots, x,y \rangle$ in $H^2(G_k, {\mathbb{Z}}/p^{m})$ for $x,y$ in $k^*$ such that $y$ is in the image of the norm $k(\sqrt[p^{M}]{x}) \rightarrow k$, assuming $k$ contains the $(p^M)^{th}$ roots of unity and $m \leq M-r_n$, where $r_n$ is the largest integer such that $p^{r_n}\leq n$. Furthermore, Sharifi’s methods should produce similar results under weaker hypotheses and with larger coefficient rings, although this has not been written down in detail. Sharifi’s result also implies the vanishing of the Massey product $\langle y,x,x, \ldots, x \rangle$ by formal properties of Massey products; namely, if $\langle x_1, x_2,\ldots,x_n \rangle$ is defined, then $\langle x_n, x_{n-1}, \ldots, x_1\rangle$ is defined and $$\langle x_1, x_2,\ldots,x_n\rangle = \pm \langle x_n, x_{n-1}, \ldots, x_1\rangle$$ (cf. [@Kraines Thm 8]). This suggests redundancy among the restrictions placed by Corollary \[pmkpi1restcor\] for $n=2$ and higher $n$.
Since rational base points produce sections of $\pi_1(X) \rightarrow G_k$, applying Corollary \[pmkpi1restcor\] to these sections produces Massey products of elements of $H^1(G_k, {\mathbb{Z}}^{\Sigma}(\chi))$ which vanish.
We identify the elements of $H^1(G_k, {\mathbb{Z}}^{\Sigma}(\chi))^2$ corresponding to the rational base points in order to determine these Massey products. Let $\kappa$ denote the map of \[kappadefn\] for $X = {\mathbb P^1_{k} - \{0,1,\infty \}}$ based at $0 + 1 \epsilon$ (cf. \[tgtvecA1\]), and let $\kappa^{ab}$ denote the composition of $\kappa$ with the projection $H^1(G_k, \pi) \rightarrow H^1(G_k, {\mathbb{Z}}^{\Sigma}(\chi))^2$. For an element $x$ of $k^*$, let $x$ also denote the image in $H^1(G_k, {\mathbb{Z}}^{\Sigma}(\chi))$ of $x$ under the Kummer map.
\[kabratlbasepoints\]
- For $x$ in ${\mathbb P^1_{k} - \{0,1,\infty \}}(k) = k - \{0,1\}$, $\kappa^{ab} (x) = (x, 1-x)$.
- For $v$ in $k^*$, $\kappa^{ab} (1+ v \epsilon) = (1, -v)$ and $\kappa^{ab} (0 + v \epsilon) = (v ,1)$.
- For $v$ in $k^*$, $\kappa^{ab} (\iota(0 + v \epsilon)) = (1/v ,-1/v)$, where $\iota: {\mathbb P^1_{k} - \{0,1,\infty \}}= {\operatorname{Spec}}k[z,\frac{1}{z}, \frac{1}{z-1}] \rightarrow {\mathbb P^1_{k} - \{0,1,\infty \}}$ is given by $z \mapsto \frac{1}{z}$.
[[*Proof.* ]{}]{}Lemma \[kabratlbasepoints\] follows directly from \[Gkaction\_puncturedP1\_bpoint\]. More specifically, applying (\[kappaabcomp\]) with $a_i = x$, $a_j = 1$, $a_1 + v_1 \epsilon = 0 + 1 \epsilon$, in the cases $j = 1$ and $j \neq 1,i$ gives the formula $\kappa^{ab} (x) = (x, 1-x)$ for any $x$ in ${\mathbb P^1_{k} - \{0,1,\infty \}}(k)$. By (\[kappaabcomp\]) with $a_i + v_i \epsilon= 1 + v \epsilon$, in the cases $j = 1$ and $j = i$, it follows that $\kappa^{ab} (1+ v \epsilon) = (1, -v)$ for any tangential base point $1+ v \epsilon$ at $1$. Similarly, $\kappa^{ab} (0 + v \epsilon) = (v ,1)$. Note that $\iota$ induces multiplication by $-1$ on $\pi_1({\mathbb{G}}_{m, \overline{k}},1)$. Let $K$ denote the map of \[kappadefn\] for ${\mathbb{G}}_m$ based at $1$. By functoriality of $\kappa$, $K(\iota(0 + v \epsilon)) = - K(0 + v \epsilon)$. By Lemma \[tgtl\_pts\_0\_Gm\], $K(0 + v \epsilon) = v$. The second coordinate of $\kappa^{ab} (\iota(0 + v \epsilon))$ minus the first is $-1$ by functoriality of $\kappa$ (cf. the argument producing equation (\[kappaabcomp\])). Thus $\kappa^{ab} (\iota(0 + v \epsilon)) = (1/v ,-1/v)$.
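The first two items of Lemma \[kabratlbasepoints\] can also be recovered mechanically from the formula for $\kappa_j(a + v\epsilon)$ preceding (\[kappaabcomp\]); a small symbolic sketch follows (sympy is assumed available; the third item of the lemma additionally uses functoriality under $z \mapsto 1/z$ and is not reproduced here).

```python
# Recover kappa^{ab} on P^1 - {0,1,infty}, based at 0 + 1*eps (punctures a_1 = 0,
# a_2 = 1), from kappa_j(a + v*eps) = a - a_j if a != a_j, and v if a = a_j,
# written multiplicatively in k^*.  A sketch only.
import sympy as sp

x, v = sp.symbols('x v')
punctures = (sp.Integer(0), sp.Integer(1))          # a_1 = 0, a_2 = 1

def kappa_j(a_j, a, vv):
    """kappa_j(a + v*eps) in k^*."""
    return sp.S(vv) if a == a_j else sp.S(a - a_j)

def kappa_ab(a, vv):
    """Both coordinates of kappa^{ab}, relative to the base point 0 + 1*eps."""
    base = [kappa_j(a_j, sp.Integer(0), sp.Integer(1)) for a_j in punctures]
    return [sp.simplify(kappa_j(a_j, a, vv) / b) for a_j, b in zip(punctures, base)]

def agrees(got, expected):
    return all(sp.simplify(g - e) == 0 for g, e in zip(got, expected))

assert agrees(kappa_ab(x, None), (x, 1 - x))        # rational point x
assert agrees(kappa_ab(sp.Integer(1), v), (1, -v))  # tangential point 1 + v*eps
assert agrees(kappa_ab(sp.Integer(0), v), (v, 1))   # tangential point 0 + v*eps
print("Lemma [kabratlbasepoints], first two items, recovered from (kappaabcomp)")
```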
\[vanishMasseyCor\][*Let $(X,Y)$ in $H^1(G_k, {\mathbb{Z}}^{\Sigma}(\chi))^2$ be $(x^{-1}, (1-x)^{-1})$ for $x$ in $k^* -\{1\}$, or $(x,-x)$ for $x$ in $k^*$. Then the $n^{th}$ order Massey products $$\langle X,\ldots, X, Y, X, \ldots, X \rangle$$ vanish in $H^2(G_k, {\mathbb{Z}}^{\Sigma}(\chi^n))$. Here, the Massey products have $(n-1)$ factors of $X$ and one factor of $Y$. The $Y$ can occur in any position.*]{}
[[*Proof.* ]{}]{}By Lemma \[kabratlbasepoints\], $-(X,Y)$ is in the image of $$H^1(G_k, \pi) \rightarrow H^1(G_k, {\mathbb{Z}}^{\Sigma}(\chi))^2.$$ (Note that $-(X,Y)= (x,1-x)$ or $(x^{-1},(-x)^{-1})$.) Applying Corollary \[pmkpi1restcor\] gives the result.
The vanishing of these Massey products occurs with the defining systems determined by Proposition \[muJdeltanwithmonodromy\] and $\kappa$ applied to $x \in {\mathbb P^1_{k} - \{0,1,\infty \}}(k)$ or $\iota(0+ x \epsilon)$ for $x$ in $k^*$.
Corollary \[vanishMasseyCor\] is also true for $(X,Y) = (x,1)$ or $(1,x)$ with $x \in k^*$ by the same proof, but this result is a formal consequence of the linearity of the Massey product [@Fenn Lemma 6.2.4], since $1$ vanishes under the Kummer map.
\[Sharifivanishing\] The result of Sharifi discussed in Remark \[pi1restremarks\] gives a different proof of the vanishing of $\langle X, X, \ldots, X, Y \rangle$ and $\langle Y, X, \ldots, X, X \rangle$ reduced mod $p^m$ when $k$ contains the $(p^M)^{th}$ roots of unity for $M \geq m + r_n$, and his methods should produce more general results as well. They also show vanishing mod $p^m$ for more general $(X,Y)$ under his hypotheses– see \[pi1restremarks\].
---
abstract: 'We have observed a quadratic x-ray magneto-optical effect in near-normal incidence reflection at the $M$ edges of iron. The effect appears as the magnetically induced rotation of $\sim$0.1$^\circ$ of the polarization plane of linearly polarized x-ray radiation upon reflection. [A comparison of the measured rotation spectrum with results from x-ray magnetic linear dichroism data demonstrates that this is the first observation of the Schäfer-Hubert effect in the x-ray regime. *Ab initio* density-functional theory calculations reveal that hybridization effects of the $3p$ core states necessarily need to be considered when interpreting experimental data. The discovered magneto-x-ray effect holds promise for future ultrafast and element-selective studies of ferromagnetic as well as antiferromagnetic materials.]{}'
author:
- 'S. Valencia'
- 'A. Kleibert'
- 'A. Gaupp'
- 'J. Rusz'
- 'D. Legut'
- 'J. Bansmann'
- 'W. Gudat'
- 'P.M. Oppeneer'
title: 'Quadratic X-ray Magneto-Optical Effect in Near-normal Incidence Reflection'
---
X-ray magneto-optical spectroscopy techniques are widespread, sensitive methods for element-selective characterization of magnetic systems [@StohrSiegmann]. In particular the great sensitivity of resonant magnetic scattering methods has been demonstrated in many experiments exciting the $L_{3,2}$ edges [($2p \rightarrow 3d$ transitions)]{} of $3d$ transition metals (TM) in magnetic nanostructures with system sizes down to the atomic scale [@Gambardella2003]. These experiments have recently been extended from static investigations towards magnetization dynamics [@Stamm2007]. While the temporal structure of synchrotron radiation restricted the time resolution to nanoseconds in the past, studies on the ultrafast magnetization dynamics have nowadays become feasible using femtosecond x-ray pulses provided, e.g., by novel femtoslicing facilities at third generation synchrotron radiation sources such as the ALS (Berkeley, USA), BESSY (Berlin, Germany), and the SLS (Villigen, Switzerland) [@schoenlein00]. Much higher photon flux and thus improved experimental sensitivity in magneto-optical experiments will become available with the advent of soft x-ray free electron lasers (FEL). However, the existing FEL facility FLASH (Hamburg, Germany) is designed to provide photon energies of up to 200 eV, that is, the $L_{3,2}$ edges of $3d$ TM (in the range of 650 to 950 eV) are currently not accessible. An alternative is provided by resonant [3*p* $\rightarrow$ 3*d* transitions, i.e.,]{} the $M$ edges (at 50 to 65 eV), where the observable magneto-optical effects may possess almost the same order of magnitude when compared to the $L_{3,2}$ edges (see, e.g., [Refs. [@hecker2005; @ValenciaNJP]]{}). Moreover, the importance of the $M$ edges for the investigation of TM compounds might soon reach beyond large scale facilities. Berlasso [*et al.*]{} [@Berlasso] have recently demonstrated the feasibility of performing ultrafast, [*table-top*]{} experiments at the $M$ edges of TM through the higher order harmonic generation (HHG) of fs laser pulses. Consequently, ultrafast, element-selective magneto-optical techniques [exciting the $3p$ core level electrons]{} can become accessible to most laboratories. Despite these promising properties the $M$ edges have rarely been investigated so far and their capabilities for the above described experiments have not yet been explored. [To fully profit from current FEL capacities and future HHG possibilities for element-specific static and time-resolved magnetization studies it is then necessary to further explore magneto-optical techniques in this promising energy range.]{}
In this Letter we report the discovery of a novel quadratic x-ray magneto-optical effect at the $M$ edges of TM occurring upon reflection of linearly polarized radiation in near-normal incidence. By comparison with [additional x-ray magnetic linear dichroism (XMLD) measurements and]{} *ab initio* calculations we show that the reported effect is the x-ray analogue of a similar observation made by Sch[ä]{}fer and Hubert in the nineties using visible light [@Schafervoigt], which subsequently proved to be a valuable tool for the visualization of magnetic domains [@Schaferbook].
The Schäfer-Hubert effect results from the symmetry breaking that occurs due to the preferred magnetization axis in a magnetically ordered material. As a consequence, the indices of refraction are different for linearly polarized light propagating with electric polarization $\mbox{\boldmath$E$}$ parallel to $\mbox{\boldmath$M$}$ ($n_{||}$) and perpendicular to $\mbox{\boldmath$M$}$ ($n_{\perp}$), respectively. Light traversing the material with $\mbox{\boldmath$E$}$ and $\mbox{\boldmath$M$}$ at an angle of $45^{\circ}$ contains equal components $E_{||}$ and $E_{\perp}$. In near-normal incidence reflection the magnetic modification embodied in $n_{||}$ and $n_{\perp}$ leads to the magnetic Schäfer-Hubert rotation of the polarization plane upon reflection, which, using Fresnel theory, can be expressed as $$\theta_\mathrm{SH}\approx {\rm Re} \left[ \frac{(n_{||}-n_{\perp})n_0}{n_{||}n_{\perp} -n_0^2} \right]
\approx {\rm Re} \left[ \frac{(\epsilon_{||} -\epsilon_{\perp})n_0}{(n^2 -n_0^2)n}\right],
\label{eq1}$$ where $n = (n_{||}+n_{\perp})/2$, $\epsilon_{||}$ and $\epsilon_{\perp}$ are the permittivities for $\mbox{\boldmath$E$}||\mbox{\boldmath$M$}$ and $\mbox{\boldmath$E$}\perp \mbox{\boldmath$M$}$, respectively, and $n_0$ is the refractive index of the cap layer. The dominating quantity for the effect is $\Delta = \epsilon_{||}- \epsilon_{\perp}$, which is also essential to the XMLD and the x-ray Voigt effect that are both observable in transmission [@XMLDOppeneer03]. Earlier investigations proved that $\epsilon_{||}- \epsilon_{\perp}$ is to lowest order proportional to $\left<M^2\right>$ [@MertinsPRL]. Therefore the Schäfer-Hubert effect can be observed for ferromagnetic (FM) as well as antiferromagnetic (AFM) materials.
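To make the connection to measurable quantities concrete, the first form of Eq. (\[eq1\]) can be evaluated numerically. The following minimal sketch (in Python) uses purely illustrative complex refractive indices rather than measured optical constants, and simply sets $n_0=1$:

```python
import numpy as np

def schafer_hubert_rotation(n_par, n_perp, n0=1.0):
    """Near-normal-incidence Schaefer-Hubert rotation (radians),
    Re[(n_par - n_perp) n0 / (n_par n_perp - n0**2)]."""
    return np.real((n_par - n_perp) * n0 / (n_par * n_perp - n0**2))

# Hypothetical complex refractive indices for E || M and E perpendicular to M:
n_par  = 0.96 + 0.031j
n_perp = 0.96 + 0.030j
print(np.degrees(schafer_hubert_rotation(n_par, n_perp)))  # rotation in degrees
```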
![(Color online) (a) Experimental setup for the detection of the Schäfer-Hubert effect. (b) Reflectivity in the vicinity of the iron $M$ edges at near perpendicular incidence. (c) Normalized detector signal when rotating the analyzer by an angle of $\gamma$. []{data-label="exper"}](Figure1.pdf){width="1.0\columnwidth"}
The experiments were performed at the U125/PGM beamline of the synchrotron radiation source BESSY [@U1252]. The spectral resolution $E/\Delta E$ was set to 5000. The degree of linear polarization of the incident light was P$_{Lin}$ = 0.99 [@U1252]. The BESSY polarimeter chamber [@polarimeter] was used for acquisition of the data. In order to guarantee a solely magnetic origin for $\Delta$, i.e., $\Delta = \epsilon_{||}-\epsilon_{\perp}\propto \left<M^2\right>$ \[cf. Eq. (\[eq1\])\], the sample must necessarily be cubic or amorphous [@kunes03]. The investigated sample was a magnetron-sputtered 50 nm thick Fe layer deposited on a 100 nm thick Si$_3$N$_4$ membrane. A 3 nm Al cap layer was deposited to prevent oxidation. The incoming radiation was set at an angle of incidence of $\phi=10^{\circ}$ with respect to the sample normal, as shown in Fig. \[exper\](a). Two magnetic coils allowed magnetic saturation of $\mbox{\boldmath$M$}$ along two orthogonal directions in the sample plane. The polarization state of the reflected light was analyzed using a rotatable gold mirror analyzer and measuring its reflected intensity at the detector, a GaAs:P photodiode.
In the present experiments a differential detection scheme was employed in order to detect the rotation of the polarization plane of the radiation upon reflection, i.e., the Schäfer-Hubert effect. We profit from the general property of quadratic MO effects, which in the present case can be expressed by $\theta(\alpha) = \theta_\mathrm{SH}\sin2\alpha$, where $\alpha$ is the angle between the incident polarization and the magnetization. The measured rotation is thus maximized when setting the magnetization at an angle of $\pm\pi/4$ with respect to the polarization. The two experimental orientations are denoted by $m_\mathrm{T}$ and $m_\mathrm{L}$ in Fig. \[exper\](a). Subtracting the corresponding rotation angles of the polarization plane then yields $2\theta_\mathrm{SH}$. Note that due to the deviation of $10^\circ$ from normal incidence an additional small linear MO Kerr rotation may be present in the data. It is eliminated by reversing the magnetization at each orientation and averaging the measurements.
The near-normal reflectivity of the sample in the vicinity of the $M$ edges is depicted in Fig. \[exper\](b). It shows a peak at about 55 eV that accounts for the resonant Fe $3p \rightarrow 3d$ transitions. In contrast to the well studied $L_{3,2}$ edges, the core level spin-orbit interaction is much smaller and does not allow the $M_{2}$ and $M_{3}$ edges to be resolved separately. The reflectivity is of the order of $10^{-2}$. At the $L_{3,2}$ edges the reflectivity in near normal incidence would be several orders of magnitude smaller and, hence, below current detector capabilities. Therefore the $M$ edges are the only feasible choice for an element-selective detection of the Schäfer-Hubert effect. Fig. \[exper\](c) shows the intensity at the detector when rotating the analyzer by an angle of $\gamma$ from 0 to $2\pi$ (open circles) at an off-resonant energy. Any additional magnetically induced rotation $\theta$ of the polarization plane causes a shift of this curve according to $I(\gamma)=R_0\cdot \left[ 1+P\cdot \cos 2\left(\gamma +\theta\right) \right]$, where $R_0$ denotes the product of the reflectivity of the sample with that of the Au analyzer and *P* is the product of the polarizing power of the Au layer and the degree of linear polarization of the reflected radiation. Here we can set $P=1$ [@MertinsPRL]. Fitting the above equation to the data \[cf. the red line in Fig. \[exper\](c)\], we obtain $\theta$ for each photon energy and magnetization. The Schäfer-Hubert rotation is finally given by $\theta_\mathrm{SH}=[\theta(m_\mathrm{T})-\theta(m_\mathrm{L})]/2$.
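As an illustration of this analysis chain, the sketch below (Python; the array and variable names are assumptions, not the actual analysis code) fits the analyzer scan with $I(\gamma)=R_0[1+\cos 2(\gamma+\theta)]$ for $P=1$ and then combines the rotation angles obtained for the two orientations and both magnetization directions:

```python
import numpy as np
from scipy.optimize import curve_fit

def analyzer_signal(gamma, R0, theta):
    # I(gamma) = R0 [1 + P cos 2(gamma + theta)] with P = 1
    return R0 * (1.0 + np.cos(2.0 * (gamma + theta)))

def fit_rotation(gamma, counts):
    popt, _ = curve_fit(analyzer_signal, gamma, counts, p0=[counts.mean(), 0.0])
    return popt[1]  # rotation angle theta (radians) for one energy and magnetization

def schafer_hubert(theta_T_plus, theta_T_minus, theta_L_plus, theta_L_minus):
    # Averaging over reversed magnetization removes the small linear Kerr term;
    # differencing the two orientations then yields theta_SH.
    theta_T = 0.5 * (theta_T_plus + theta_T_minus)
    theta_L = 0.5 * (theta_L_plus + theta_L_minus)
    return 0.5 * (theta_T - theta_L)
```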
![Photon energy dependent Schäfer-Hubert rotation of the polarization plane of linearly polarized x-rays in the vicinity of the Fe *M* edges.[]{data-label="fig:SHrotation"}](Figure2.pdf){width="43.00000%"}
The resulting Schäfer-Hubert rotation spectrum $\theta_\mathrm{SH}$ is given in Fig. \[fig:SHrotation\]. It shows a resonant behavior with a twofold sign reversal close to the $M$ edges and maximum values of about $\pm0.1^\circ$. For comparison we have measured the corresponding XMLD effect in transmission geometry [@MertinsPRL]. Figure \[fig:comparisonXMLD\](a) shows $\rm{Im}\,\Delta$ (open circles), directly deduced from the transmission data, and $\rm{Re}\,\Delta$ (solid circles), obtained from a Kramers-Kronig transformation. Using Eq. (\[eq1\]), the experimental $\Delta$ values, and reported data [@Berlasso] for the permittivity $\epsilon$ yields the theoretically expected Schäfer-Hubert rotation. As depicted in Fig. \[fig:comparisonXMLD\](b), the calculated (upper white triangles) and measured $\theta_\mathrm{SH}$ (solid circles) spectra agree nicely. It is worth mentioning that the XMLD data also allow us to deduce a maximum x-ray Voigt rotation in transmission of 8$^\circ/\mu$m at the $M$ edges, which is remarkably similar to that measured at the Co $L$ edges (7.5$^\circ /\mu$m) [@MertinsPRL]. This is surprising since, in the conventional understanding, it is the larger spin-orbit splitting of the core $j_{3/2}$ and $j_{1/2}$ levels, being nearly a factor of ten larger at the $L$ edges than at the $M$ edges, that is believed to be responsible for the large $L$ edge magneto-x-ray effects. As we will show below through *ab initio* calculations, the microscopic mechanism leading to the XMLD at the $M$ edges is actually quite different from that at the $L$ edges. The size of the rotation demonstrates that quadratic magneto-optical effects at the $M$ edges can serve as an equal alternative to the respective experiments at the $L$ edges.
![(Color online) (a) Experimental $\rm{Im}\,\Delta$ (open circles) and $\rm{Re}\,\Delta$ (solid circles) deduced from the XMLD measurements in transmission. (b) Schäfer-Hubert rotation computed from (i) the XMLD transmission data via Eq. (\[eq1\]) (open triangles) and (ii) the XMLD asymmetry measured in reflection (red triangles). Both data are compared to the experimentally determined $\theta_\mathrm{SH}$ values (solid circles, cf. Fig. \[fig:SHrotation\]).[]{data-label="fig:comparisonXMLD"}](Figure3.pdf){width="43.00000%"}
As an additional cross-check we have also measured the XMLD asymmetry in reflection, $A_{\rm{R}} = (R_{\perp} - R_{||}) / (R_{\perp} + R_{||})$, with $R_{\perp}$ and $R_{||}$ being the reflectivity for the magnetization perpendicular or parallel to the polarization plane. At near-normal incidence it has been shown that $A_{\rm{R}}=2 \theta_{\rm SH}$ [@oppeneer03]. The corresponding $\theta_\mathrm{SH}$ spectrum computed from the experimentally determined $A_{\rm{R}}$ data is given in Fig. \[fig:comparisonXMLD\](b) (red triangles). The agreement with the measured $\theta_\mathrm{SH}$ rotation is again excellent, both in shape and in magnitude.
![(Color online) (a) $j, j_z$ resolved density of states (DOS) of the $3p$ core levels of Fe. (b) Calculated Schäfer-Hubert rotation spectrum at the Fe $M$ edge, with $j, j_z$ hybridization included (full curve) or without (dashed curve). []{data-label="fig:calculations"}](test-v6.pdf){width="43.00000%"}
*Ab initio* density functional theory calculations have been performed using a full-potential linearized augmented plane wave (FLAPW) method in the WIEN2k implementation [@wien2k]. We may note that a particular difficulty for the theoretical description of the $3p$ semi-core states is related to the relative sizes of the exchange splitting and spin-orbit splitting of the $3p$ levels. Whereas at the $L$ edges the exchange splitting of the $2p$ states is quite small and, consequently, can be treated as a perturbation to the spin-orbit split $j_{3/2}$ and $j_{1/2}$ levels, this can no longer be done for the $3p$ states. In our relativistic calculations exchange and spin-orbit splitting were therefore included on an equal footing. Also, a considerable hybridization of the $j_{1/2}$ and $j_{3/2}$ states can be expected at the $3p$ level. To allow for this, the $3p$ states of iron have been treated as valence states in our calculations. The combined effect of the exchange and spin-orbit interaction as well as of hybridization on the $3p$ states is illustrated in Fig. \[fig:calculations\](a), where we show the computed $3p$ density of states (DOS). Clearly, the $3p$ states are no longer separate $j_{1/2}$ and $j_{3/2}$ levels, but are mixtures of all $|jj_z\rangle$ components, a situation which is markedly different from that of the $2p$ levels. Our relativistically calculated energies of the $3p$ levels are in good agreement with a previous calculation [@bansmann], which, however, did not consider the hybridization of the $|jj_z\rangle$ components. To obtain the Schäfer-Hubert rotation spectrum we first calculated the complex dielectric tensor of bcc Fe and subsequently applied the four-vector Yeh formalism [@yeh] to obtain $\theta_{\rm SH}$. The theoretically derived $\theta_{\rm SH}$ spectrum shown in Fig. \[fig:calculations\](b) agrees well with the experimental one. Respective simulations show that the smaller magnitude of the experimental data relative to the theory is due to the Al capping layer, which has been neglected in our calculations given in Fig. \[fig:calculations\](b). Note that in the experimental procedure of reversing the magnetizations $m_{\rm T}$ and $m_{\rm L}$ we cancel out a constant background signal. The corresponding background has accordingly been subtracted for the theoretical $\theta_{\rm SH}$ rotation. To evaluate the influence of $j,j_z$ mixing in the $3p$ states we have computed $\theta_{\rm SH}$ also without including the $j,j_z$ hybridization. As shown in Fig. \[fig:calculations\](b), this leads to significant deviations from both the experimental as well as the calculated data with hybridization. Apart from the different spectral shape, the magnitude of $\theta_{\rm SH}$ is about five times larger than the one with $j,j_z$ mixing included. This demonstrates that for a proper description and interpretation of x-ray magneto-optical effects at the $M$ edges it is essential to take the hybridization of the $j, j_z$ levels into account.

We gratefully acknowledge financial support by the German Federal Ministry of Education and Research (BMBF) Grant No. 05 KS4HR2/4, the Swedish Research Council (VR), STINT, the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 214810, “FANTOMAS”, and the Swedish National Infrastructure for Computing (SNIC).
[99]{} J. Stöhr and H. C. Siegmann, [*Magnetism: From Fundamentals to Nanoscale Dynamics*]{} (Springer, Berlin, 2006).
P. Gambardella, S. Rusponi, M. Veronese, S. S. Dhesi, C. Grazioli, A. Dallmeyer, I. Cabria, R. Zeller, P. H. Dederichs, K. Kern, C. Carbone, and H. Brune, Science **300**, 1130 (2003).
C. Stamm, T. Kachel, N. Pontius, R. Mitzner, T. Quast, K. Holldack, S. Khan, C. Lupulescu, E. F. Aziz, M. Wietstruk, H. A. Dürr, and W. Eberhardt, Nature Mater. **6**, 740 (2007).
R. W. Schoenlein, S. Chattopadhyay, H. H. W. Chong, T. E. Glover, P. A. Heimann, C. V. Shank, A. A. Zholents, and M. S. Zolotorev, Science [**287**]{}, 2237 (2000); S. Khan, K. Holldack, T. Kachel, R. Mitzner, and T. Quast, Phys. Rev. Lett. [**97**]{}, 074801 (2006).
M. Hecker, P. M. Oppeneer, S. Valencia, H.-Ch. Mertins, and C. M. Schneider, J. Electron Spectrosc. Relat. Phenom. [**144**]{}, 881 (2005).
S. Valencia, A. Gaupp, W. Gudat, H.-Ch. Mertins, P. M. Oppeneer, D. Abramsohn, and C. M. Schneider, New J. Phys. **8**, 254 (2006).
R. Berlasso, C. Dallera, F. Borgatti, C. Vozzi, G. Sansone, S. Stagira, M. Nisoli, G. Ghiringhelli, P. Villoresi, L. Poletto, M. Pascolini, S. Nannarone, S. De Silvestri, and L. Braicovich, Phys. Rev. B **73**, 115101 (2006).
R. Schäfer and A. Hubert, Phys. Status Solidi (a) **118**, 271 (1990). R. Schäfer, J. Magn. Magn. Mater. **148**, 226 (1995).
A. Hubert and R. Schäfer, [*Magnetic Domains*]{} (Springer, Berlin, 1998).
P. M. Oppeneer, H.-Ch. Mertins, D. Abramsohn, A. Gaupp, W. Gudat, J. Kunes, and C. M. Schneider, Phys. Rev. B **67**, 052401 (2003).
H.-Ch. Mertins, P. M. Oppeneer, J. Kunes, A. Gaupp, D. Abramsohn, and F. Sch[ä]{}fers, Phys. Rev. Lett. **87**, 047401 (2001).
R. Follath, F. Senf, and W. Gudat, J. Synchrotron Radiat. **5**, 769 (1998).
F. Schäfers, H.-Ch. Mertins, A. Gaupp, W. Gudat, M. Mertin, I. Packe, F. Schmolla, S. Di Fonzo, G. Soullie, W. Jark, R. Walker, X. Le Cann, M. Eriksson, and R. Nyholm, Appl. Opt. **38**, 4074 (1999).
J. Kunes and P. M. Oppeneer, Phys. Rev. B **67**, 024431 (2003).
P. M. Oppeneer, H.-Ch. Mertins, and O. Zaharko, J. Phys.: Condens. Matter **15**, 7803 (2003).
P. Blaha, K. Schwarz, G. K. H. Madsen, D. Kvasnicka, and J. Luitz, 2001 *WIEN2k*, Vienna University of Technology (ISBN 3-9501031-1-2).
J. Bansmann, L. Lu, K. H. Meiwes-Broer, T. Schlath[ö]{}lter, and J. Braun, Phys. Rev. B [**60**]{}, 13860 (1999).
P. Yeh, J. Opt. Soc. Am. **69**, 742 (1979).
M. Osugi, K. Tanaka, N. Sakaya, K. Hamamoto, T. Watanabe, and H. Kinoshita, Jpn. J. Appl. Phys. **47**, 4872 (2008); J. Gautier, F. Delmotte, M. Roulliay, F. Bridou, M.-F. Ravet, and A. J[è]{}rome, Appl. Opt. **44**, 384 (2005).
A.-S. Morlens, R. Lopez-Martens, O. Boyko, P. Zeitoun, P. Balcou, K. Varjú, E. Gustafsson, T. Remetter, A. L’Huillier, S. Kazamias, J. Gautier, F. Delmotte, and M.-F. Ravet, Opt. Lett. **31**, 1558 (2006).
---
abstract: 'The resonance lines of Si IV formed at $\lambda$1394 and 1403 [Å]{} are the most critical for the diagnostics of the solar transition region in the observations of the Interface Region Imaging Spectrograph (IRIS). Studying the intensity ratio of these lines (1394 [Å]{}/1403 [Å]{}), which under optically thin conditions is predicted to be two, helps us to diagnose the optical thickness of the plasma being observed. Here we study the evolution of the distribution of intensity ratios in 31 IRIS rasters recorded over four days during the emergence of an active region. We found that during the early phase of the development, the majority of the pixels show intensity ratios smaller than two. However, as the active region evolves, more and more pixels show ratios closer to two. In addition, there is a substantial number of pixels with ratio values larger than 2. At the evolved stage of the active region, the pixels with ratios smaller than two were located on the periphery, whereas those with values larger than 2 were in the core. However, for quiet Sun regions, the obtained intensity ratios were close to two irrespective of the location on the disk. Our findings suggest that the Si IV lines observed in active regions are affected by opacity during the early phase of the flux emergence. The results obtained here could have important implications for the modelling of the solar atmosphere, including the initial stage of the emergence of an active region as well as the quiet Sun.'
author:
- Durgesh Tripathi
- Nived V N
- Hiroaki Isobe
- 'J. Gerry Doyle'
bibliography:
- 'ref.bib'
title: 'On the ratios of Si IV lines ($\lambda$1394/$\lambda$1403) in an emerging flux region'
---
INTRODUCTION {#sec:intro}
============
The transition region is the atmospheric layer that separates the cooler chromosphere from the hotter corona. Studying the structure and dynamics of the transition region requires greater attention as these provide critical information on the supply of mass and energy from the lower atmosphere to the corona and the solar wind. Emission lines originating in the transition region mostly fall in the ultraviolet part of the spectrum. These UV photons can propagate through the upper atmosphere without significant absorption, re-emission and scattering. Thus transition region emission lines are generally considered to be optically thin. However, some transition region lines are affected by opacity effects; emission lines of several ions have been shown to display the effects of optical thickness [@keenan; @doyle; @brook; @mason].
How do we determine if the emission line under study is optically thin or thick? This problem has been studied in great detail by [@jordan], [@doyle], [@keenan]. According to [@jordan], under the optically thin condition, the ratio of intensities of two lines originating from the same upper level is simply the ratio of their transition probabilities. For such lines, any deviation from this theoretical ratio would be an effect of opacity. For resonance lines of Li-like ions, this ratio is exactly two [@yan]. Similarly, as theoretically predicted [e.g. by CHIANTI; @chianti_15; @chianti_1], in optically thin conditions, the intensity of the Si IV 1394 [Å]{} line is twice the intensity of the 1403 [Å]{} line, though we note that the two lines originate from two different upper levels, namely $^2P_{3/2}$ and $^2P_{1/2}$, respectively. However, in optically thick conditions, the intensity of the stronger line decreases and the ratio of these lines can be less than two and in some instances greater than two (see the work of [@keenan_2014] on the Li-like ion O VI). The intensity of the stronger line changes because the line with the largest oscillator strength is most strongly affected by opacity [@mason; @keenan_2014]. In other words, the deviation of the ratio of resonance lines from its theoretical value provides potentially useful information about the physical environment of the transition region and the atmosphere above. For example, [@yan] confirmed the presence of self-absorption features using line ratios. During an extreme ultraviolet (EUV) brightening event, they found a decrease in the ratio from 2 to 1.7. It was suggested that the self-absorption is either due to pre-existing ions in the upper atmosphere or arises within the bright region itself. Similarly, [@resonant_sct] observed pixels with intensity ratios greater than 2 in an active region and suggested that they are due to resonant scattering.
The main aim of this paper is to study the opacity effects on the two Si IV lines in an emerging flux region (EFR) and how these effects vary during the evolution of the active region. Solar magnetic flux is generated by the dynamo process in the tachocline – the region that separates the radiative zone from the convection zone. Due to magnetic buoyancy [@parker], the flux then rises and emerges through the photosphere, forming active regions and sunspots. These are called emerging flux regions [@zirin]. Interaction of this emerging flux with the pre-existing coronal magnetic field gives rise to a wide variety of phenomena such as X-ray jets, explosive events, flares, and coronal mass ejections (CMEs) [see, e.g., @chifor2; @chifor1]. In H$\alpha$, EFRs show the presence of both bright and dark loops [@zirin_filaments; @bruzek_filaments]. The dark and thick threads of loops are known as arch filaments; they are considered to be traces of rising flux tubes [@chou]. The rising magnetic flux lifts cool and dense chromospheric plasma into the corona, forming the so-called arch filament system [@isobe]. Due to the intermittent nature of the heating, hot and cool plasmas often coexist in EFRs [@isobe06]. Moreover, below the emerging loops there usually exist small scale brightenings in the chromosphere, called Ellerman bombs, produced by magnetic reconnection between the newly emerging magnetic loops [@pariat; @IsoTA]. Recent observations from IRIS found that similar “bombs” are also seen as UV brightenings [@peter; @GupT]. Therefore, an EFR is a likely place to have large densities along the line of sight, where opacity effects may play a significant role in the formation of transition region lines.
For the above-described purpose, we use observations recorded by the Interface Region Imaging Spectrograph (IRIS, [@iris]). The very high spatial and spectral resolution of IRIS provides us with an excellent opportunity to study the transition region in great detail. The rest of the paper is structured as follows. We provide the details of the observations and data analysis in §\[sec:od\]. The obtained results are presented in §\[sec:results\]. Finally, we discuss the results and conclude in §\[sec:conclusion\].
![ Top panels: A portion of Sun’s disk observed by AIA using 193 [Å]{} channel during the evolution of the active region. Middle panels: HMI magnetograms corresponding to the FOV of coronal images shown in the top rows. The over-plotted rectangles display the IRIS raster field of view. Bottom panels: IRIS raster images obtained in 1394 [Å]{}.[]{data-label="fig:fig1"}](fig_1.eps){width="80.00000%"}
Observations and Data Analysis {#sec:od}
==============================
The primary aim of this work is to study the ratios and the evolution of the two Si IV lines formed at 1394 [Å]{} and 1403 [Å]{} observed by IRIS in an emerging flux region. For this purpose, we have analysed 31 IRIS rasters obtained over the emerging active region NOAA 12711, which eventually evolves into an old active region. IRIS obtains UV spectra in the far UV (1332 to 1407 [Å]{}) and in the near UV (2783 to 2835 [Å]{}) with an effective spectral resolution of 26 m[Å]{} and 53 m[Å]{}, respectively [@iris]. The effective spatial resolution of the far UV spectra is 0.33$''$ and that of the near UV spectra is 0.4$''$. The rasters were recorded from 11:24 UT on 2018 May 22 to 02:00 UT on 2018 May 25. In this study, we have also used the coronal observations recorded with the Atmospheric Imaging Assembly [AIA @aia; @aia2; @aia3] and line-of-sight (LOS) magnetic field maps obtained by the Helioseismic and Magnetic Imager [HMI; @hmi], both on board the Solar Dynamics Observatory.
Fig. \[fig:fig1\] displays three snapshots of the emerging active region as was observed with the AIA in 193 [Å]{} passband (top panels) and HMI (middle panels). The over-plotted white boxes in the top and middle panels show the IRIS raster FOV. The corresponding intensity maps are shown in the bottom panels. Note that we have applied standard software for data processing and analysis provided in *SolarSoft* [@solarsoft].
All IRIS observations were taken in dense raster mode. The exposure time for most of the rasters was 4 s. However, exposure times of 8 and 15 s were used for a few rasters. We note that since we are interested in studying the ratio of the lines, the varying exposure times do not affect our results. We provide the details of the rasters in Table \[table:tab1\] with their corresponding $\mu$ values, where $\mu$ is defined as the cosine of the heliocentric angle. Thus, $\mu$ values close to 1 represent disk centre observations, whereas those close to 0 represent limb observations.
IRIS provides intensity values in units of counts (DN). To convert the IRIS spectra to physical units (ergs cm$^{-2}$ s$^{-1}$ sr$^{-1}$), we performed the radiometric calibration[^1]. We then obtained the intensity maps for the 1394 and 1403 [Å]{} lines by fitting a Gaussian to the spectrum at every pixel. The ratio maps were subsequently generated by dividing the 1394 [Å]{} maps by the 1403 [Å]{} maps. As mentioned earlier, ideally, under optically thin conditions, the intensity ratios should be precisely 2. However, we found that the ratios lay within a large range of values around 2. This deviation from the theoretical line ratio could be attributed to opacity effects as well as to poor fitting of the spectra. Therefore, it is necessary to remove all poorly fitted and noisy data before studying the statistics of the ratio values and their evolution.
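A simplified outline of this per-pixel step is sketched below (Python, assuming a calibrated spectral window `profile` on a wavelength grid `wave` around one of the Si IV lines; the function and variable names are illustrative rather than the actual pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(w, peak, w0, sigma, bg):
    return peak * np.exp(-0.5 * ((w - w0) / sigma) ** 2) + bg

def fit_line(wave, profile, w0_guess):
    p0 = [profile.max() - profile.min(), w0_guess, 0.05, profile.min()]
    popt, _ = curve_fit(gauss, wave, profile, p0=p0)
    peak, w0, sigma, bg = popt
    fwhm = 2.3548 * abs(sigma)                            # FWHM in Angstrom
    intensity = peak * abs(sigma) * np.sqrt(2.0 * np.pi)  # integrated line intensity
    return intensity, fwhm, peak

# The ratio map is then the per-pixel I(1394) / I(1403) of the fitted intensities.
```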
![Scatter plot of intensity as a function of full width at half maxima for 1394 [Å]{} (left column) and 1403 [Å]{} (right column) after applying the filtering condition given in §\[sec:od\], step wise. Numbers on the top left corner of each plot correspond to the filtering process applied.[]{data-label="fig:fig3"}](fig_3.eps){width="80.00000%"}
To remove poorly fitted and noisy spectra, which could potentially affect the intensities and thereby the ratios, we apply the following filtering conditions (similar to @resonant_sct).
1. We applied the Gaussian fitting at each pixel within the raster.
2. We consider only those pixels for which the full width at half maximum (FWHM) of the fitted Gaussian lies between 70 m[Å]{} and 350 m[Å]{} for both spectral lines of Si IV. The purpose of this condition is to identify badly fitted spectra. Moreover, pixels with cosmic ray hits, which could not be removed by de-spiking, could also lead to incorrect Gaussian fitting. Such pixels are also removed by this condition.
3. In addition, for each raster, we demanded that the peak of the fitted Gaussian should be more than 50 ergs cm$^{-2}$ s$^{-1}$ sr$^{-1}$ for 1394 [Å]{} and 38 ergs cm$^{-2}$ s$^{-1}$ sr$^{-1}$ for 1403 [Å]{}. Note that we found these threshold values by trial and error. This additional condition takes care of the dark regions in the raster where the signals are very weak.
4. Finally, we perform our statistics only on those pixels that showed ratios between 1.5 and 2.5. Ratios outside this range are likely to be unphysical and are therefore removed from our analysis (a minimal sketch of these filtering steps is given after this list).
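Putting the conditions above together, the resulting pixel selection can be sketched as follows (assuming per-pixel arrays of the fitted FWHM in Å, the Gaussian peak amplitude in ergs cm$^{-2}$ s$^{-1}$ sr$^{-1}$, and the ratio map; the array names are illustrative):

```python
import numpy as np

def good_pixel_mask(fwhm_1394, fwhm_1403, peak_1394, peak_1403, ratio):
    fwhm_ok  = ((fwhm_1394 >= 0.070) & (fwhm_1394 <= 0.350) &
                (fwhm_1403 >= 0.070) & (fwhm_1403 <= 0.350))   # condition 2
    peak_ok  = (peak_1394 > 50.0) & (peak_1403 > 38.0)         # condition 3
    ratio_ok = (ratio > 1.5) & (ratio < 2.5)                   # condition 4
    return fwhm_ok & peak_ok & ratio_ok                        # True = pixel kept
```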
The results of the filtering process applied to the IRIS raster recorded on 2018 May 22 at 11:24:00 UT are shown after each step in Figure \[fig:fig3\]. The scatter plots of intensity in the 1394 and 1403 [Å]{} lines as a function of FWHM are displayed in the left and right panels, respectively. The number in the top left corner of each plot corresponds to the filtering step applied. The scatter plots in the first row are obtained after removing pixels polluted by cosmic rays. The scatter plots shown in the fourth row are obtained by applying all four filtering conditions mentioned above. We continue our analysis with these data. We note that by using such criteria we may have missed some genuine pixels. However, this does not significantly affect the statistical results.
Results {#sec:results}
=======
Intensity Ratio Maps
--------------------
![IRIS raster images obtained in 1394 [Å]{} (panel a) and 1403 [Å]{} (panel b) on 2018 May 23 at 18:28 UT. The obtained ratio map and the distribution of the ratio are shown in panels c and d, respectively.[]{data-label="fig:fig4"}](fig_4.eps){width="\textwidth"}
Figure \[fig:fig4\] presents the intensity maps obtained on May 23 at 18:28 UT for the 1394 [Å]{} (top left) and 1403 [Å]{} (top right) lines. The bottom left panel shows the obtained ratio map. The white coloured pixels in the ratio map represent line ratio values between 1.9 and 2.1. The pixels in black are missing pixels. The red, orange and yellow coloured pixels correspond to ratio values less than 1.9, and the blue, violet and sky blue colours represent pixels with a ratio above 2.1, as shown in the colour bar. From the ratio map, it is clear that the low ratio values are located at the periphery of the active region and that a large fraction of the active region has ratio values between 1.9 and 2.1. We plot the histogram of the corresponding ratio map in the bottom right panel. The y-axis of the histogram corresponds to the fraction of pixels that fall in each bin. As can be seen, the peak of the histogram is at 1.9. However, there are a significant number of pixels with ratios smaller as well as larger than 2. Note that this raster was taken while the active region was still evolving.
![Evolution of the intensity ratio maps. The date and time is labelled.[]{data-label="fig:fig5"}](fig_5.eps){width="\textwidth"}
![Panel a: 1394 [Å]{} intensity image recorded on 23 May 2018 18:28 UT. Location of pixels with line ratio above 2 (panel:b), 2.1 (panel:c), 2.2 (panel:d), 2.3 (panel:e), 2.4 (panel:f) []{data-label="fig:fig6"}](fig_6.eps){width="80.00000%"}
![Scatter plot for intensity ratio (Panel a: for ratios $>$2; Panel b: ratios $<$ 2) as function of distance to nearest bright point. The corresponding histograms are shown in panel c and d, respectively.[]{data-label="fig:distance"}](distance.eps){width="80.00000%"}
To study the temporal evolution of the intensity ratios, in Fig. \[fig:fig5\] we plot the intensity ratio maps of the active region. The dates and times (in UT) of the rasters are labelled. The observations recorded on May 22 represent the initial stage of flux emergence, when no bright structures are yet observed in AIA 193 [Å]{} images (see Fig. \[fig:fig1\]), whereas those taken on May 23, 24 and 25 show the development of bright structures with time and thus represent the growth phase of the active region. In the ratio maps, all the pixels with ratios lying between 1.9 and 2.1 are shown in white (see the colour bar in Fig. \[fig:fig5\]). As can be seen from the ratio maps, during the initial phase of the emergence a significant fraction of pixels have ratios outside the range 1.9–2.1. With passing time, the active region further evolves, and more and more white pixels appear. The ratio maps also show that pixels with lower ratios are present throughout the evolution of the active region. However, during the later stages of the development, the white pixels representing ratios close to two are dominant, and the pixels with ratios less than two are observed primarily in the peripheral regions (similar to the behaviour reported by [@resonant_sct]).
We further note that there are a significant number of pixels showing intensity ratios higher than 2. Such observations have also been reported by [@resonant_sct] and are attributed to resonant scattering. In the present study, we find that $\sim$20% of the pixels show ratios significantly greater than two. This fraction is significantly larger than the 2.4% obtained by [@resonant_sct], which may be attributed to the fact that the active region under study is an emerging active region. The distributions shown in Fig. \[fig:fig7\] show that pixels with ratios larger than 2 are present throughout the course of the evolution of the active region. In Fig. \[fig:fig6\], we locate the pixels with ratios larger than 2 (panel b), 2.1 (panel c), 2.2 (panel d), 2.3 (panel e) and 2.4 (panel f) on top of the intensity map. The figure reveals that the pixels with ratios larger than 2 are primarily located within the core of the active region when it is fully evolved.
In solar spectral data of two resonance lines, [@RosMM_2008] found that this ratio was greater than that predicted by theory. In another paper, [@keenan_2014] showed that the ratio of the two resonance lines can in some instances be greater than two, which is the optically thin value. These authors suggested that this could be explained by a geometry in which the emitters and absorbers are not spatially distinct, and in which resonant pumping of the upper level has a greater effect on the observed line intensity than resonant absorption along the line of sight. [@raymond_2000] suggested an alternative explanation, i.e. that the brightness of a low-density region next to a bright region could be significantly enhanced by scattered photons.
In order to check whether such an explanation is possible for the results obtained here, in Fig. \[fig:distance\] we show scatter plots of the intensity ratio (R$>$2 in panel a and R$<$2 in panel b) as a function of the distance to the nearest bright point. Bright points are defined as pixels with intensities larger than 3000 ergs cm$^{-2}$ s$^{-1}$ sr$^{-1}$. Note that we have only considered the pixels with a statistically significant deviation of the line ratio from the optically thin value (see Section 3.4). The corresponding histograms are shown in panels c and d. Our analysis shows that $\sim$78% ($\sim$79%) of pixels with ratio greater (smaller) than 2 are close to a bright region. In such a scenario, the locations with R$>$2.0 may be explained in a manner similar to that proposed by [@raymond_2000]. The remainder can be interpreted via the mechanism suggested by [@RosMM_2008; @keenan_2014]. However, the locations with R$<$2.0 require further detailed modelling.
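The distance used here can be computed with a standard Euclidean distance transform; a short sketch (Python, with `intensity` denoting the line intensity map on the same pixel grid as the ratio map; distances are returned in pixels, and the conversion to physical units is left aside) could read:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_to_bright(intensity, threshold=3000.0):
    bright = intensity > threshold
    # distance of every pixel to the nearest bright pixel (zero inside bright regions)
    return distance_transform_edt(~bright)

# dist = distance_to_bright(intensity); dist is then compared against the ratio map
```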
Distribution of the Intensity Ratios
------------------------------------
![Evolution of the distribution of intensity ratios.[]{data-label="fig:fig7"}](fig_7.eps){width="80.00000%"}
To understand the statistical behaviour, we plot the histograms of the intensity ratios for all 31 rasters. A subset of those, depicting different phases of the evolution, is shown in Fig. \[fig:fig7\]. Note that all the distributions are plotted with precisely the same bin size of 0.1. The histograms demonstrate a clear pattern in the distribution. During the initial stages of flux emergence, the histograms are asymmetric, and they become more symmetric as the active region evolves. The peaks are close to 1.7 and 1.8 during the initial stage and remain constant at 1.9 during the growth phase. There is a significantly large number of pixels showing ratios smaller than 2. We further note that, though smaller in number, there is also a significant number of pixels showing ratios larger than 2. With the evolution, the number of pixels showing both smaller and larger intensity ratios decreases and becomes minimal when the active region is fully evolved.
![Temporal evolution of skewness (panel a) and kurtosis (panel b) of the distribution of intensity ratio distribution.[]{data-label="fig:fig9"}](fig_9.eps){width="80.00000%"}
![Temporal evolution of the median and mean of the distribution of intensity ratios.[]{data-label="fig:fig8"}](fig_8.eps){width="80.00000%"}
In order to confirm the asymmetry of the ratio distribution during the initial stage of flux emergence, we calculated the skewness and kurtosis of the ratio distribution in each raster and plotted them as a function of time in Fig. \[fig:fig9\]. The skewness, which is a measure of the asymmetry of the distribution, is shown in panel (a), whereas the kurtosis, which is a measure of the peakedness of the distribution, is plotted in panel (b). As can be noted, the skewness is larger at the beginning and decreases as the active region grows, indicating that the distribution is asymmetric during the initial phase of flux emergence and becomes symmetric during the evolution of the active region. The kurtosis plot further confirms that the distribution is more peaked during the later phase of the evolution, when the active region is older.
In Fig. \[fig:fig8\], we plot the median (left panel) and mean (right panel) of the ratio distributions from all 31 observations as a function of time. In each plot, time zero represents the initial observation time, 11:24 UT on May 22, and the time difference is calculated by taking this observation time as the reference. The left panel of Fig. \[fig:fig8\] demonstrates a clear trend of increasing median with time. During the early stage of the flux emergence, the median of the ratio distribution is lower than 2, and it increases with the evolution of the active region. Similarly, the mean of the ratio distribution increases with time (right panel of Fig. \[fig:fig8\]). The larger error bars in the initial observations are due to the smaller number of data points. The gradual increase in the median and mean of the ratio distribution suggests that opacity effects are much larger during the early stages of flux emergence and decrease with the age of the active region.
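The quantities shown in Figs. \[fig:fig9\] and \[fig:fig8\] are standard moments of the per-raster ratio distribution; a minimal sketch of their computation (Python, with `ratios` the 1-D array of filtered line ratios of one raster) is:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def ratio_statistics(ratios):
    return {"mean":     np.mean(ratios),
            "median":   np.median(ratios),
            "skewness": skew(ratios),
            "kurtosis": kurtosis(ratios)}  # Fisher definition: 0 for a Gaussian
```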
Intensity Ratios in Quiet Sun
-----------------------------
![Top panels (a,b,c): QS ratio maps obtained using observations from different locations on the disk of the Sun. Middle panels (d,e,f): Corresponding ratio histograms. Bottom panels (g,h,i): Average spectra of the rasters.[]{data-label="fig:fig10"}](fig_10.eps){width="80.00000%"}
It is well known that the intensities of optically thick spectral lines show centre-to-limb variation (CLV). Since the 31 IRIS rasters studied here were observed at different spatial locations on the Sun, the CLV may affect our results if the Si IV lines are optically thick in all kinds of spatial structures on the Sun. Therefore, we turn our attention to quiet Sun observations. For this purpose, we have studied three different IRIS rasters recorded at different locations on the Sun to obtain the intensity ratios of the Si IV lines. We note that we have applied the same criteria to the quiet Sun spectra as detailed in §\[sec:od\].
We plot the obtained ratio maps and the corresponding distributions for the three quiet Sun rasters in Fig. \[fig:fig10\]. The colour scheme shown here is exactly the same as for the active region, shown in Fig. \[fig:fig4\]. The mean and median of the distributions are also labelled. The plots clearly show that the ratio maps are dominated by white pixels, i.e. ratios lying between 1.9 and 2.1. The distribution of the ratios is symmetric with a peak at 2.0 for all three rasters, irrespective of their spatial location. For both limb and disk centre observations the median of the ratios is very close to 2. The spectra shown in the bottom panels of Fig. \[fig:fig10\] also confirm that the intensity in the quiet Sun does not change significantly as a function of location.
![Error in line ratio as a function of line ratio for nine different observations taken during the evolution of the active region.[]{data-label="fig:fig11"}](fig_11.eps){width="90.00000%"}
![Error in line ratio as a function of line ratio for the quiet Sun rasters observed at different $\mu$ values.[]{data-label="fig:fig12"}](fig_12.eps){width="90.00000%"}
![Location of pixels with significantly reduced line ratios ($r+\delta r < 2$) during the evolution of the active region. Pixels with $r+\delta r < 2$ are colour coded in black on the 1394 [Å]{} intensity maps.[]{data-label="fig:fig13"}](fig_13.eps){width="\textwidth"}

![Location of pixels with significantly enhanced line ratios ($r-\delta r > 2$) during the evolution of the active region. Pixels with $r-\delta r > 2$ are colour coded in black on the 1394 [Å]{} intensity maps.[]{data-label="fig:fig14"}](fig_14.eps){width="\textwidth"}
Error in line ratio {#sec:ratioerror}
-------------------
Here we calculate the uncertainty in the observed line ratio to identify the pixels which show a statistically significant deviation from the theoretical line ratio. Assuming Poisson statistics for the photons, we first calculated the errors in the integrated line intensities of the Si IV lines and then propagated the errors to obtain the uncertainty in the line ratios (see Appendix \[error\] for details). In Fig. \[fig:fig11\] we plot the error in the line ratio ($\delta r$) as a function of the line ratio ($r$) for nine different observations, spread over the duration of the flux emergence. On each plot, the line $r-\delta r = 2$ is shown in blue and the line $r+\delta r = 2$ is plotted in black. These lines are drawn to identify the pixels with statistically significant deviations. Pixels on the left side of the black line have a ratio plus its measurement error satisfying $r + \delta r <2 $, while pixels on the right side of the blue line have $r-\delta r>2 $. These pixels can therefore be considered as pixels with a significant deviation in the line ratio. There is a large fraction of pixels on the left side of the black line, which indicates opacity effects in the active region. Similar plots for the quiet Sun are shown in Fig. \[fig:fig12\]. From Fig. \[fig:fig12\] it is clear that pixels with a statistically significant ratio above and below 2.0 are equally present in the quiet Sun.
The spatial locations of pixels with a statistically significant reduction in line ratio are shown in Fig. \[fig:fig13\]. Here we display these pixels on the 1394 [Å]{} intensity maps observed during the flux emergence; pixels with significantly reduced line ratios are colour coded in black. From Fig. \[fig:fig13\] it is clear that most of the opacity-affected pixels are located at the periphery of the active region. Similarly, the locations of pixels with a statistically significant ratio above 2 are shown in Fig. \[fig:fig14\]. These pixels are densely populated close to the active region core.
DISCUSSION AND CONCLUSIONS {#sec:conclusion}
==========================
In this paper, we study the temporal evolution of the Si IV line ratio in an EFR. The obtained results are summarised as follows. Throughout the development of the EFR, the histograms of the ratio are asymmetric (skewed as well as shifted) towards low ratio values. This tendency is stronger during the initial stage of flux emergence. Both the average and median of the distribution were lower than 2 in the initial phase, and they gradually approached 2, which is the theoretically expected value for optically thin plasmas. On the other hand, in the quiet Sun, the distribution is highly symmetric and peaks at 2. There is also a significant number of pixels with ratios larger than 2. Our analysis shows that the pixels with ratios smaller than 2 are predominantly located at the periphery of the active region, whereas those with ratios larger than 2 are in the core.
The reduced ratio is usually attributed to opacity, as the line with the largest oscillator strength is most strongly affected by it. In EFRs, particularly during the early phase, dense chromospheric plasma is lifted to the corona by emerging loops. Heating of the plasma often occurs intermittently due to the filamentary structure in the rising magnetic flux [@isobe]. The magnetic reconnection between the neighbouring rising loops also causes small scale brightenings in and above the chromosphere [@IsoTA; @peter; @GupT]. Such dynamic events increase the densities along the line of sight, resulting in the reabsorption of photons by Si IV ions. Since the probability of reabsorption for the 1394 [Å]{} line is a factor of 2 greater than for the 1403 [Å]{} line, its intensity is reduced more, and ratios less than 2 are observed.
Our results show that the fraction of pixels with a ratio less than 2 is larger in the emerging active region than in the quiet Sun. Moreover, the distribution is more skewed and shifted toward lower values in the early phase of the emergence than in the later phase. Also, from the measured line ratio errors, it is clear that a large fraction of active region pixels shows a statistically significant reduction in the line ratio. These results support the interpretation that larger opacity may be the primary reason for the smaller values of the ratio.
A correlation study of pixels with ratios smaller (larger) than 2 shows that about 79% (78%) of these pixels are close to a bright region ($>$3000 ergs cm$^{-2}$ s$^{-1}$ sr$^{-1}$). The result for R$>$2 is in line with the suggestion made by [@raymond_2000] that the brightness of a low-density region next to a bright region could be significantly enhanced by scattered photons. However, the dichotomy persists for locations with R$<$2, and further modelling is required to understand the emission in the transition region in greater detail.
Finally, the observation that the line intensity ratios are significantly smaller than two and progressively change towards 2 with the evolution of the active region may hold potential diagnostic power for understanding the physics of flux emergence. Such an analysis will require full-scale magneto-hydrodynamic simulations of emerging flux regions and forward modelling to shed further light, which is beyond the scope of this paper.
We thank an anonymous referee for the insightful comments that have helped improve the paper. DT thanks J. A. Klimchuk for initial discussions on this subject. DT and NVN acknowledge the support from the Max-Planck India Partner Group of MPS at IUCAA. HI is supported by JSPS KAKENHI Grant Number 18H01254. Armagh Observatory and Planetarium is core funded by the Northern Ireland Government through the Department for Communities. IRIS is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at NASA Ames Research Center and major contributions to downlink communications funded by ESA and the Norwegian Space Centre. We would like to thank the AIA, HMI, and IRIS teams for providing valuable data. CHIANTI is a collaborative project involving George Mason University, the University of Michigan (USA), University of Cambridge (UK) and NASA Goddard Space Flight Center (USA). NVN is funded via a studentship from Armagh Observatory and Planetarium.
Error Analysis {#error}
==============
The ratio of the Si IV lines is defined as $$\begin{aligned}
R &=\frac{I_{1394}}{I_{1403}},\end{aligned}$$ where $I_{1394}$ and $I_{1403}$ are the integrated line intensities of the 1394 and 1403 [Å]{} lines, respectively. The error in the line ratio can be obtained using the formula $$\begin{aligned}
\frac{\delta R}{R} &=\sqrt{\left(\frac{\delta I_{1394}}{I_{1394}}\right)^{2}+\left(\frac{\delta I_{1403}}{I_{1403}}\right)^{2}}\end{aligned}$$
where $\delta R$ is the error in the line ratio, and $\delta I_{1394}$ and $\delta I_{1403}$ are the errors in the integrated line intensities of the 1394 and 1403 [Å]{} lines, respectively. The error in the intensity measurement can be calculated using photon statistics. To do this, we first convert the intensities from DN units to photon counts. This can be done using the formula $$\begin{aligned}
I(photons) &=\frac{g}{yield}\times I(DN),\end{aligned}$$ where $g$ is the gain, i.e. the number of electrons released in the detector that yield 1 DN, and the yield is the number of electrons released by one incident photon. For the FUV spectra the gain is 6 and the yield is 1.5 [@iris]. Therefore the above equation can be rewritten as $$\begin{aligned}
I(photons) &=4\times I(DN).\end{aligned}$$ According to photon statistics, the error in the photon counts is given by $$\begin{aligned}
\delta I(photons) &=\sqrt{I(photons)}.\end{aligned}$$ Using the above equations, the error in the line ratio can be written as $$\begin{aligned}
\delta R &=R \times \sqrt{\frac{1}{I_{1394}(photons)}+\frac{1}{I_{1403}(photons)}}\end{aligned}$$
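For reference, the propagation above can be written compactly as follows (a sketch in Python; `I1394_dn` and `I1403_dn` denote the integrated line intensities in DN):

```python
import numpy as np

def ratio_with_error(I1394_dn, I1403_dn, photons_per_dn=4.0):
    # DN -> photon counts using gain/yield = 6/1.5 = 4 for the FUV channel
    n1394 = photons_per_dn * I1394_dn
    n1403 = photons_per_dn * I1403_dn
    r = I1394_dn / I1403_dn          # the ratio itself is unchanged by the conversion
    dr = r * np.sqrt(1.0 / n1394 + 1.0 / n1403)
    return r, dr
```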
Details of the Active Region and Quiet Sun Observations
=======================================================
-------- ------------- ------------------ ------------------------ ---------- -------------
Data Date of Time of FOV Exposure $\mu$-value
Observation Observation (UT) (arcsec) time (S)
Set 1 22-May-2018 11:24:59 $112''\,\times175\,''$ 4 0.99
Set 2 22-May-2018 12:02:33 $67''\,\times119\,''$ 4 0.98
Set 3 22-May-2018 12:19:11 $67''\,\times119\,''$ 4 0.98
Set 4 22-May-2018 12:35:48 $67''\,\times119\,''$ 4 0.98
Set 5 22-May-2018 13:25:40 $67''\,\times119\,''$ 4 0.98
Set 6 22-May-2018 13:58:55 $67''\,\times119\,''$ 4 0.98
Set 7 22-May-2018 18:06:37 $112''\,\times175\,''$ 4 0.98
Set 8 22-May-2018 18:44:11 $67''\,\times119\,''$ 4 0.98
Set 9 22-May-2018 19:00:49 $67''\,\times119\,''$ 4 0.98
Set 10 22-May-2018 19:38:30 $67''\,\times119\,''$ 4 0.98
Set 11 22-May-2018 19:55:08 $67''\,\times119\,''$ 4 0.98
Set 12 22-May-2018 20:11:45 $67''\,\times119\,''$ 4 0.97
Set 13 22-May-2018 20:28:23 $67''\,\times119\,''$ 4 0.97
Set 14 23-May-2018 00:36:37 $112''\,\times175\,''$ 4 0.97
Set 15 23-May-2018 01:14:11 $67''\,\times119\,''$ 4 0.97
Set 16 23-May-2018 15:01:20 $112''\,\times175\,''$ 4 0.93
Set 17 23-May-2018 15:38:54 $67''\,\times119\,''$ 4 0.92
Set 18 23-May-2018 15:55:32 $67''\,\times119\,''$ 4 0.92
Set 19 23-May-2018 18:28:17 $112''\,\times175\,''$ 4 0.91
Set 20 23-May-2018 19:05:51 $67''\,\times119\,''$ 4 0.91
Set 21 23-May-2018 19:22:29 $67''\,\times119\,''$ 4 0.91
Set 22 23-May-2018 19:59:30 $67''\,\times119\,''$ 4 0.91
Set 23 23-May-2018 20:16:08 $67''\,\times119\,''$ 4 0.91
Set 24 23-May-2018 20:49:23 $67''\,\times119\,''$ 4 0.90
Set 25 24-May-2018 12:45:22 $112''\,\times175\,''$ 15 0.81
Set 26 24-May-2018 17:03:22 $112''\,\times175\,''$ 4 0.79
Set 27 24-May-2018 17:58:37 $112''\,\times175\,''$ 4 0.79
Set 28 24-May-2018 20:19:59 $112''\,\times175\,''$ 15 0.77
Set 29 24-May-2018 23:32:53 $112''\,\times175\,''$ 8 0.76
Set 30 25-May-2018 00:21:51 $112''\,\times175\,''$ 8 0.75
Set 31 25-May-2018 01:59:45 $112''\,\times175\,''$ 8 0.74
-------- ------------- ------------------ ------------------------ ---------- -------------
: Details of EFR Observations.[]{data-label="table:tab1"}
------- --------------- ------------------ ------------------------ ---------- -------------
Data Date of Time of FOV Exposure $\mu$-value
Observation Observation (UT) (arcsec) time (S)
Set 1 07-June-2014 23:09:36 $141''\,\times175\,''$ 15 0.67
Set 2 07-June-2014 12:09:34 $141''\,\times175\,''$ 15 0.96
Set 3 06-March-2014 10:04:51 $141''\,\times175\,''$ 30 0.48
------- --------------- ------------------ ------------------------ ---------- -------------
: Details of quiet Sun Observations.[]{data-label="table:tab2"}
Distribution of intensity ratios by defining the domain of emerging flux
========================================================================
The analysis in this paper has been performed for pixels that belong to a rectangular region surrounding the EFR. Therefore, it is possible that the neighbouring QS pixels included in the selected rectangular region could affect our results. To test this, we performed a similar analysis after defining a boundary for the EFR. The photospheric magnetic field strength recorded by HMI on board SDO can be used to define the boundary between the EFR and the neighbouring quiet Sun. To do this, we co-aligned the LOS magnetogram with the IRIS raster using SJI 1400 and AIA 1600 images. Then, by manual inspection, we defined the EFR as the region with an absolute magnetic field strength greater than 20 G and performed a similar analysis for the pixels that fall within this region.
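A minimal sketch of this selection (Python; `blos` is the LOS magnetogram reprojected onto the IRIS raster grid and `ratio_map` the corresponding ratio map; the co-alignment step itself is not shown) could read:

```python
import numpy as np

def efr_ratio_values(ratio_map, blos, threshold=20.0):
    mask = np.abs(blos) > threshold               # pixels assigned to the EFR
    return ratio_map[mask & np.isfinite(ratio_map)]
```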
The evolution of the distribution of the line ratio in the EFR is shown in Fig. \[fig:figc1\]. Similar to the previous analysis, the histograms are asymmetric during the initial stages of flux emergence, and they become more symmetric as the active region evolves. This observation is further confirmed by the skewness plot shown in Fig. \[fig:figc2\], panel (a), where we see a decrease in the skewness of the ratio distribution as the active region evolves. The peak ratios in Fig. \[fig:figc1\] are also close to 1.7 and 1.8 during the initial stage and remain constant at 1.9 during the growth phase, as seen in the previous analysis. The kurtosis plot shown in panel (b) of Fig. \[fig:figc2\] confirms that the distribution is more peaked during the later phase of the evolution, when the active region is older.
Finally, we plot the evolution of the median and average values of the intensity ratios in Fig. \[fig:figc3\]. From the figure, it is clear that both the median and the average value of the intensity ratios are smaller than 2 during the initial stage of flux emergence, and both gradually increase as the active region evolves. The large error bars in the average at the beginning are due to the smaller number of data points with an absolute magnetic field strength greater than 20 G. The above results confirm that the opacity effects are stronger during the initial stage of flux emergence and decrease as the active region evolves. Therefore, we conclude that the results obtained by defining a rectangular boundary around the EFR and those obtained with a boundary based on the magnetic field strength are very similar.
![Evolution of the distribution of intensity ratios.[]{data-label="fig:figc1"}](fig_c1.eps){width="80.00000%"}
![Temporal evolution of skewness (panel a) and kurtosis (panel b) of the distribution of intensity ratio distribution.[]{data-label="fig:figc2"}](fig_c2.eps){width="80.00000%"}
![Temporal evolution of the median and mean of the distribution of intensity ratios.[]{data-label="fig:figc3"}](fig_c3.eps){width="80.00000%"}
[^1]: IRIS technical note 24
---
abstract: 'We report on recent results on nucleon structure that are helping guide the search for new physics at the precision frontier. Results discussed include the electroweak elastic form factors, charge symmetry breaking in parton distributions and the strangeness content of the nucleon.'
author:
- 'R. D. Young'
bibliography:
- 'references2.bib'
title: Nucleon structure in the search for new physics
---
[ address=[Special Research Centre for the Subatomic Structure of Matter (CSSM)\
and Centre of Excellence in Particle Physics at the Tera-Scale (CoEPP),\
School of Chemistry and Physics, University of Adelaide, SA 5005, Australia]{} ]{}
Introduction
============
The Standard Model has been enormously successful at describing experiments in nuclear and particle physics. The search for new physical phenomena beyond the Standard Model is primarily driven by two complementary experimental strategies. The first is to build high-energy colliders, such as the Large Hadron Collider (LHC) at CERN, which aim to excite a new form of matter from the vacuum. The second, more subtle approach is to perform precision measurements at moderate energies, where an observed discrepancy can signify the existence of such new forms of matter.
The significance of measurements at the precision frontier depends on careful experimental techniques, in conjunction with robust theoretical predictions of contributing Standard Model phenomena. Here we report on some recent progress in nucleon structure that is contributing to the low-energy search for new physics. In particular, the nucleon electroweak elastic form factors and charge symmetry breaking in parton distributions both play significant roles in precision tests of the weak interaction. In the context of ongoing dark matter searches, improved knowledge of the strangeness scalar content of the nucleon is leading to better-constrained predicted cross sections.
Quark weak charges
==================
At low energies, the weak interaction is manifest in the effective current–current correlators $${\cal L}_{PV}=-\frac{G_F}{\sqrt 2}\sum_q\left[C_{1q}\,\overline{e}\gamma^\mu \gamma_5 e\, \overline{q}\gamma_\mu q
+C_{2q}\,\overline{e}\gamma^\mu e\,\overline{q}\gamma_\mu \gamma_5 q
\right]$$ where $G_F$ is the weak coupling constant, and the $C_{iq}$ denote the flavour dependence of the effective neutral current interaction — at tree level they are simply $C_{1(2)q}\sim g_e^{A(V)} g_q^{V(A)}$. The full couplings are determined within the Standard Model by combining precision $Z$-pole measurements [@ALEPH:2005ema] with the scale evolution to the low-energy domain [@Marciano:1983ss; @Erler:2003yk].
Experimental constraints on the weak neutral current at low energies have been rather limited. One celebrated result is the precision measurement of atomic cesium’s $6s\to 7s$ transition polarizability, and the resulting extraction of the weak nuclear charge of cesium [@Bennett:1999pd]. The weak charge extraction depends crucially on the precision calculation of the atomic wave functions, where the latest theoretical update gives $Q_w^{Cs}\equiv
-376C_{1u}-422C_{1d}=-73.16(29)_{\rm exp}(20)_{\rm th}$ [@Porsev:2009pr] — in complete agreement with the Standard Model value $-73.15(2)$ [@Nakamura:2010zzi]. This agreement with the Standard Model is depicted by the narrow, almost horizontal (orange) band in Figure \[fig:c1q\].
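As a rough tree-level cross-check of these numbers (neglecting the electroweak radiative corrections that are included in the quoted Standard Model value), one may evaluate the weak charge directly from the tree-level couplings; the short Python sketch below assumes $\sin^2\theta_W \simeq 0.231$ as input:

```python
sin2w = 0.231                       # assumed low-energy value of sin^2(theta_W)
C1u = -0.5 + (4.0 / 3.0) * sin2w    # tree level: 2 g_A^e g_V^u
C1d =  0.5 - (2.0 / 3.0) * sin2w    # tree level: 2 g_A^e g_V^d
Qw_Cs = -376.0 * C1u - 422.0 * C1d
print(Qw_Cs)                        # ~ -73.9, close to the corrected SM value -73.15(2)
```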
The cesium measurement places very restrictive bounds on the form of parity-violation interactions within new physics scenarios. In terms of a generic contact interaction describing new physics [@Erler:2003yk] $${\cal L}_{PV}^{\rm new}=-\frac{g^2}{4\Lambda^2}\overline{e}\gamma^\mu \gamma_5 e\sum_q h_V^q \overline{q}\gamma_\mu q\,,$$ the cesium measurement, at 1-sigma, restricts the magnitude of any new physics contribution to be less than $$\frac{g^2}{\Lambda^2}\left(0.67 h_V^u+0.75 h_V^d\right)\sim \left[7{\,\rm TeV}\right]^{-2}.$$
The atomic measurements are mostly insensitive to hadronic or nuclear structure because of the small energy transfers involved[^1]. In electron scattering, the neutral current can be probed by measuring parity-violating asymmetries. Given the typical energy scales involved, the extraction of the weak interaction parameters also requires knowledge of nucleon structure. Measurements of this sort date back to the pioneering work of Prescott [*et al.*]{} [@Prescott:1979dh] at SLAC, where a parity-violating asymmetry in deep inelastic scattering was measured (see the almost vertical band in Figure \[fig:c1q\]).
More recently, measurements of the parity-violating [*elastic*]{} scattering asymmetries have now been carried out by a number of experiments, including: SAMPLE at MIT-Bates [@Spayde:2003nr]; PVA4 at Mainz [@Maas:2004ta; @Maas:2004dh; @Baunack:2009gy]; and G0 [@Armstrong:2005hs] and HAPPEX [@Aniol:2004hp; @Aniol:2005zg; @Acha:2006my] at Jefferson Lab. The principal focus of these programs was the study of the electroweak form factors of the nucleon, and particularly, the determination of the strange quark component of these form factors.
In addition to the study of the electroweak structure, the kinematic coverage of these measurements, together with the standard electromagnetic form factors, provides a reliable extrapolation to the $Q^2\to 0$ limit, and thereby an extraction of the proton’s weak charge [@Young:2007zs]. Figure \[fig:extrap\] displays this extrapolation, where the observed scattering asymmetries (projected onto the forward limit) are shown. The displayed asymmetry has been normalised to give the weak charge of the proton at $Q^2=0$.
The slope of the line encodes the knowledge of the neutral current form factors.
The extraction of the proton’s weak charge from this modern data improves on the earlier results by about a factor of 5 — see the ellipse in Figure \[fig:c1q\]. Following the generic contact interaction described above, the observed agreement with the Standard Model sets the characteristic mass scale to above $\sim 2{\,{\rm TeV}}$ (at 1-sigma).
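For orientation, the corresponding proton combination is $Q_w^p=-2\left(2C_{1u}+C_{1d}\right)$, which at tree level collapses to $1-4\sin^2\theta_W$ and is therefore numerically suppressed; this is what makes the proton weak charge such a sensitive probe of new physics. A minimal check, with the same assumed value of $\sin^2\theta_W$ as above (the full Standard Model value, including radiative corrections, is close to $0.071$):

```python
s2w = 0.231                                              # assumed low-energy value
C1u, C1d = -0.5 + 4.0 * s2w / 3.0, 0.5 - 2.0 * s2w / 3.0
print(round(-2.0 * (2.0 * C1u + C1d), 3), round(1.0 - 4.0 * s2w, 3))  # both ~ 0.076
```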
Charge symmetry breaking in parton distributions
================================================
With the improved technology and expertise gained in performing the precision measurements of the electroweak elastic form factors, there are now plans to revisit parity-violation in DIS [@PVDIS]. This program is proposing to improve the precision of the early SLAC measurements of Prescott et al. by roughly an order of magnitude.
The new Jefferson Lab program is aiming at a sub-1% measurement of the PVDIS asymmetry from deuterium. With possible contributions from supersymmetry, for example, estimated to be as large as $\sim 1\%$ [@Kurylov:2003xa], this program is just at the threshold of a Standard Model test[^2] — provided Standard Model corrections are well understood.
One of the potentially largest hadronic corrections to the physics asymmetry is that arising from charge symmetry violation (CSV). Based on the phenomenological extraction by Martin et al. [@Martin:2003sk], the 90% confidence level bounds on CSV lead to $\sim$1.5–2% variations in the PVDIS asymmetry [@Hobbs:2008mm]. At typical kinematics of the JLab program, such fluctuations appear to be more significant than other possible corrections, such as higher twist [@Hobbs:2008mm; @Mantry:2010ki] or target-mass corrections [@Hobbs:2011dy].
With CSV (potentially) at the scale of $\pm$1.5–2% of the PVDIS asymmetry, a precision measurement could provide the best direct measurement of charge symmetry violation in parton distributions. While such a measurement would be of great interest for hadronic physics [@Londergan:2006he; @londergan:2009kj], it would disguise any signature of new physics. Fortunately, lattice QCD offers the opportunity to constrain this hadronic physics independently. In a recent study, lattice calculations of the hyperon quark momentum fractions have been used to extract charge symmetry breaking in nucleon parton distributions [@Horsley:2010th]. These results suggest CSV in the quark momentum fractions of $\sim 0.20\pm0.06\%$, corresponding to a $\sim 0.4$–$0.6\%$ correction to the PVDIS asymmetry. Importantly, the statistical precision represents an order of magnitude improvement on the bounds reported in Ref. [@Martin:2003sk].
With future work to constrain the systematics of the lattice calculation of CSV and continued theoretical development in higher-twist and target mass corrections, mentioned above, there is a strong case that the PVDIS program at JLab will be able to provide an important new low-energy test of the Standard Model.
We note that the lattice result of [@Horsley:2010th] also makes an important contribution to the famous NuTeV anomaly [@Zeller:2001hh]. Whereas the original report of a 3-sigma discrepancy with the Standard Model assumed CSV to be negligible, the value extracted from the lattice acts to reduce this discrepancy by 1-sigma. The remaining 2-sigma discrepancy also appears to be naturally described within the Standard Model as a nuclear medium modification effect [@Cloet:2009qs; @Bentz:2009yy].
Strangeness scalar content
==========================
The strange quark condensate in the nucleon is of particular significance in the current search for dark matter. The relatively large coupling of strange quarks to candidate dark matter, combined with a typically large uncertainty in the strangeness sigma term, has led to considerable variation in the predicted cross sections for direct detection measurements [@Bottino:1999ei; @Ellis:2008hf].
The traditional method for extracting the strangeness sigma term in the nucleon, $\sigma_s$, uses the observed hyperon spectrum in conjunction with the pion-nucleon sigma term [@Gasser:1980sb; @Nelson:1987dg]. Even with a perfect extraction of the light-quark sigma term and best estimates of higher-order corrections [@Borasoy:1996bx], this method is limited to an uncertainty in $\sigma_s$ of $\sim 90{\,{\rm MeV}}$ [@Young:2009ps].
Advances in lattice QCD calculations now provide significantly better constraints on the strangeness sigma term [@Young:2009ps]. There is general consensus that the strangeness sigma term is on the small side of early estimates [@Ohki:2008ff; @Young:2009zb; @Toussaint:2009pz; @Ohki:2009mt; @MartinCamalich:2010fp; @Takeda:2010cw] — with a couple of recent hints that it may not be quite so small [@Collins:2010gr; @Babich:2010at].
A small strange quark sigma term leads to a dramatic reduction in the uncertainties of dark matter cross sections [@Giedt:2009mr]. For a range of candidate supersymmetric models of dark matter, the predicted cross sections are found to be substantially smaller than previously suggested.
Acknowledgements
================
This work was supported by the Australian Research Council.
[^1]: Though such effects will become increasingly significant as higher-precision measurements are performed; see Ref. [@Brown:2008ib], for instance.
[^2]: Of course, in conjunction with other low-energy measurements, correlations can enhance the significance of possible new physics limits.
---
abstract: 'In a distributed quantum computer scalability is accomplished by networking together many elementary nodes. Typically the network is optical and inter-node entanglement involves photon detection. In complex networks the entanglement fidelity may be degraded by the twin problems of photon loss and dark counts. Here we describe an entanglement protocol which can achieve high fidelity even when these issues are arbitrarily severe; indeed the method succeeds with finite probability even if the detectors are entirely removed from the network. An experimental demonstration should be possible with existing technologies.'
author:
- Yuichiro Matsuzaki
- 'Simon C. Benjamin[^1]'
- Joseph Fitzsimons
title: Distributed quantum computation with arbitrarily poor photon detection
---
A key challenge in the field of quantum information processing (QIP) is scaling from few-qubit systems to large scale devices. One approach is [*distributed*]{} QIP, where small devices (‘nodes’), comparable in complexity to systems already achieved experimentally, are networked together to constitute a full scale machine. The nodes may be trapped atoms or solid state nanostructures such as NV centres [@Benjamin:2009p374] and can be presumed to be under good control. Given such an architecture the challenging task is then to entangle the physically remote nodes. Various protocols have been advanced since the first ideas in 1999 [@Cabrillo:1999p339; @Bose:1999p326]; typically these involve the use of optical measurements that simultaneously observe two, or more [@Benjamin:2005p362], such systems. Experimental demonstrations of this type of approach have already been achieved both with ensemble systems [@Chou:2005p333] and with individual atoms [@MMOYMDM1a].
A remote entangling operation (EO) may fail. The consequences depend on the level of complexity within each node. If each node contains multiple qubits then we can nominate a logical qubit and insulate it from failures using the other qubits [@BriegelDur03; @BBFM01a]. Unfortunately, many physical systems may have only very limited complexity. If the logical qubit at each node cannot be protected from failure, then it is inevitable that any large scale entangled state will be damaged repeatedly during its creation. Every time we wish to entangle two specific qubits, there is a significant risk that the EO will fail and therefore the two qubits in question will need to be reset, losing any prior entanglement with other qubits. Given heralding, i.e. we know when a failure has occurred, it is established that a ‘divide and conquer’ approach can still yield positive growth [*on average*]{} for any finite success probability [@Lim:2005p364; @Barrett:2005p363; @BK_comment; @Nielsen:2004p371; @Duan:2005p369; @Rohde:2007p370; @UTprl]. Generally the solution involves generating small resource states and subsequently connecting them.
In order to make efficient use of such strategies, it is desirable to be able to directly perform EOs between arbitrarily chosen nodes. However this implies that the optical network must be complex, with a considerable number of switches; such complexity will compound the inherent imperfections of photon detectors, leading to high photon loss rates and potentially also aggravating the problem of dark counts. Therefore one should look for an EO scheme that is very robust against such failings. Previous proposals for EOs are vulnerable to dark counts and/or photon losses, in the sense that the entanglement fidelity is reduced by one, or both, of these effects. Here we present an analysis of a protocol in which the eventual fidelity does not depend on these effects. Indeed, one can completely remove the detectors and yet achieve high fidelity with finite probability.
The basic idea of our scheme is to revisit an old concept, that of “single particle entanglement” suggested by S. J. Van Enk [@SJvan01a]. We may introduce the idea as follows: suppose that one sends a single photon to a half mirror to split it into two paths, and in each path there is a two-level atom in free space, prepared in its ground state. By means of an appropriately shaped lens one can focus the photon in each path onto a small area so that it can be absorbed by the atom. Let us make the (highly unrealistic) assumption that the absorption probability is unity. As a result one of the atoms will be excited and, since one cannot distinguish which atom is excited, one obtains a Bell state represented as $|\Psi _e^{(+)}\rangle
=\frac{1}{\sqrt{2}}(|0\rangle _1|e\rangle _2
+|e\rangle _1|0\rangle _2)$ [@SJvan01a] where $|0\rangle _i$ and $|e\rangle _i$ $(i=1,2)$ denote the ground state and the excited state of the $i$th atom, respectively.
There are of course a number of difficulties with this simple picture. Firstly, the lifetime of the excited state is usually very short [@NKJ01a] and so it is difficult to maintain the coherence of the state. Therefore, instead of the two-level system, we adopt a lambda-system having a ground state $|0\rangle
$, an excited state $|e\rangle $, and a metastable state $|1\rangle $ as shown in Fig. \[streem\]. In the lambda system, after obtaining the state $|\Psi_e ^{(+)}\rangle
=\frac{1}{\sqrt{2}}(|0\rangle _1|e\rangle _2
+|e\rangle _1|0\rangle _2)$, one can use a $\pi $ laser pulse to perform a unitary operation $U_{\pi}=|e\rangle \langle 1|+|1\rangle \langle
e|$ to both of the qubits so as to obtain the stable state $|\Psi ^{(+)}\rangle =\frac{1}{\sqrt{2}}(|0\rangle _1|1\rangle _2
+|1\rangle _1|0\rangle _2)$.
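The level manipulation just described is simple enough to verify explicitly. The following minimal NumPy sketch (an illustration only, not tied to a particular atomic species) prepares $|\Psi_e^{(+)}\rangle$ for two three-level systems, applies the $\pi$ pulse $U_{\pi}$ to each (extended to act as the identity on $|0\rangle$, so that it is unitary on the full lambda system), and confirms that the result is the stable Bell state $|\Psi^{(+)}\rangle$.

```python
import numpy as np

# single-atom basis ordering: |0>, |1>, |e>
ket0, ket1, kete = np.eye(3)

# |Psi_e^(+)> = (|0>_1|e>_2 + |e>_1|0>_2)/sqrt(2)
psi_e = (np.kron(ket0, kete) + np.kron(kete, ket0)) / np.sqrt(2.0)

# pi pulse: swaps |e> and |1>, and (for unitarity) acts trivially on |0>
U_pi = np.outer(kete, ket1) + np.outer(ket1, kete) + np.outer(ket0, ket0)
U = np.kron(U_pi, U_pi)

bell = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2.0)
print(abs(bell @ (U @ psi_e)))   # 1.0: excited-state entanglement mapped onto stable levels
```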
![ Schematic of an apparatus for the basic entanglement operation (EO). A half mirror splits a single photon into two paths; in each path there is a trapped atom (or other suitable nanostructure). The relevant optical transitions in the atom are shown; not shown is the additional level structure corresponding to a second qubit in each path. Photons are focused into the regions of the trapped atoms by using a lens, and may be absorbed by the atom with probability $p_{abs}$. A photon which is not absorbed may be collected by the second lens and directed to the photodetectors. The probabilities of failing to detect an incident photon, or of losing the photon at any stage in the process, or of registering a dark count, can be high without impairing the entanglement fidelity. []{data-label="streem"}](fig_schematic.pdf){width="7.0cm"}
In a literal implementation of the simplistic scheme described above, only very weak entanglement would be induced, because the interaction between the photon and an atom in free space is weak, so that the photon usually passes the atom without absorption. The atoms would be left in a mixed state involving (primarily) their ground state and (weakly) the desired Bell state. Ways to increase the absorption probability to nearly unity by using an appropriate lens have been suggested by several authors [@TML01a; @SMKL01a]. However, this goal would be very challenging with existing technology. A recent experimental paper has reported nearly $10$ percent photon absorption probability for an atom in free space [@TCA01a], but this impressive result would still generate only very weak entanglement.
Atomic ensembles offer one solution for obtaining a higher absorption probability, because they can enhance the effective coupling between atoms and photons [@DLCZ01a; @BPT01a]. Furthermore, experimentally one can generate a Bell state between two atomic ensembles through optical absorption [@CDLK01a]. However, the drawback of atomic ensembles is that local qubit operations are difficult: one (or both) of the qubit basis states involves a collective excitation, and so unitary rotations cannot be performed in a direct fashion.
Therefore, we pursue the idea of single atoms (or equivalent small nanostructures) as nodes of our distributed computer. We adopt a two-step protocol in order to surmount the difficulty of weak entanglement alluded to above. The protocol requires two qubits at each node. This is a modest requirement, achievable with certain species of atom and with nanostructures such as the nitrogen-vacancy (NV) defect centre in diamond [@DCJTMJZHL01a; @NMRHWYJGJW01a]. We will assume that high-fidelity [*local*]{} operations are possible within each node, although we remark on the impact of errors presently. We take it that the primary error sources are associated with the inter-node EO, including the limited absorption probability, photon loss, asymmetry of photon-absorption probability of the atoms, path-length variation between alternative routes of the photons, and dark counts.
Initially we describe the scheme without any photon detectors involved in the remote entanglement generation, and plot the success probability in this case. We then proceed to introduce detectors and determine how they would improve the efficiency of the EO (see setup as shown in Fig. \[streem\]). We find that even highly imperfect detection can significantly improve the performance of our EO.
In the following, we refer to the optically active three-level system at each node as the [*optical qubit*]{}, and the secondary two-level system at each node as the [*logical qubit*]{}. Obviously these need not be physically separate systems; for example the electron and nuclear spins in a single atom or NV centre can provide an appropriate level structure. After a single photon split by the half mirror is focused onto the optical qubits, and a $\pi $-pulse is applied to both of these qubits, they are in the following state: $$\begin{aligned}
\rho_{op}=\frac{P^{(1)}_{\text{abs}} +P^{(2)}_{\text{abs}}}{2}\hat{\mathcal{Z}}_{1}^{\phi ,\Delta}
|\Psi
^{(+)}\rangle _{1,2}\langle \Psi ^{(+)}|\hat{\mathcal{Z}}_{1}^{\phi
,-\Delta} \nonumber\\
+(1-\frac{P^{(1)}_{\text{abs}} +P^{(2)}_{\text{abs}}}{2})|00\rangle
_{1,2}\langle 00|
\label{optical_state}\end{aligned}$$ where $P^{(i)}_{\text{abs}}$ $(i=1,2)$ is an absorption probability of the $i$th atom and $\hat{\mathcal{Z}}_{1}^{\phi ,\Delta}$ represents the effect of the asymmetry of the absorption probability and the path-length variation of photons defined as $\hat{\mathcal{Z}}^{\phi ,\Delta}_1=[\cos (\phi) \openone +\sin (\phi) \hat{\sigma }_z^{(1)}]
[\cos (\Delta ) \openone +i\sin (\Delta ) \hat{\sigma }_z^{(1)}] $ where $$\sin 2\phi =\frac{P^{(2)}_{\text{abs}}-P^{(1)}_{\text{abs}}}{P^{(1)}_{\text{abs}}+P^{(2)}_{\text{abs}}}.$$ Here, $\phi $ denotes the asymmetry rate of the photon absorption probability, $\Delta $ denotes a phase shift caused by the path-length variation, and $\hat{\sigma }_z^{(1)}$ denotes a Pauli operator.
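The structure of this mixed state, and in particular the fact that the asymmetry and path-length factors do not degrade the entangled branch itself, can be checked with a short numerical sketch. The snippet below (illustrative parameter values only; the optical qubits are represented in the $\{|0\rangle,|1\rangle\}$ basis after the $\pi$ pulse) builds the state for chosen $P^{(1)}_{\text{abs}}$, $P^{(2)}_{\text{abs}}$ and $\Delta$, and verifies that it is correctly normalised.

```python
import numpy as np

def rho_op(P1, P2, Delta):
    """State of the two optical qubits, basis |00>, |01>, |10>, |11> (illustrative)."""
    ket00 = np.array([1, 0, 0, 0], dtype=complex)
    bell = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2.0)

    phi = 0.5 * np.arcsin((P2 - P1) / (P1 + P2))      # absorption asymmetry
    sz1 = np.diag([1.0, 1.0, -1.0, -1.0])             # sigma_z on the first qubit
    Z = ((np.cos(phi) * np.eye(4) + np.sin(phi) * sz1)
         @ (np.cos(Delta) * np.eye(4) + 1j * np.sin(Delta) * sz1))

    p = 0.5 * (P1 + P2)
    psi = Z @ bell
    return p * np.outer(psi, psi.conj()) + (1.0 - p) * np.outer(ket00, ket00)

rho = rho_op(P1=0.08, P2=0.12, Delta=0.3)
bell = np.array([0, 1, 1, 0]) / np.sqrt(2.0)
print(np.trace(rho).real)            # 1.0: the Z factor preserves the norm
print((bell @ rho @ bell).real)      # overlap with the ideal Bell state
```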
Fortunately this state is of the same basic form as the key state considered in Ref. [@CB01a], and therefore we can adapt the technique described there in order to accomplish high fidelity entanglement. In essence, we employ $ \rho_{op}$ as a resource to perform a parity projection on the two logical qubits (i.e. a projector of two qubits into a subspace of a specific parity). Since $ \rho_{op}$ is mixed, this parity projection is also impure. However, the protocol has a second step: we generate a new state $ \rho_{op}$ on the optical qubits (by reinitialising them to the ground state and sending a new photon) and use this to perform a second parity projection. If the results of the parity projections concur in a specific fashion, then one concludes that a pure parity projection has indeed occurred. We refer to this two-round process as a parity projection protocol (PPP); it is our particular choice of entanglement operation (EO).
We successfully perform a parity projection between the logical qubits with a probability of $$p_{\text{s}}=\frac{\cos ^2 2\phi }{2}(\frac{
P^{(1)}_{\text{abs} }+P^{(2)}_{\text{abs} }
}{2})^2$$ while with probability $(1-p_s)$ the logical qubits are projected into a separable state [@CB01a]. Importantly, the effect of path-length variation is canceled out as long as the discrepancy has not drifted during the protocol, while photon loss and asymmetry of the absorption probability only affect the success probability of the entanglement operation and do not decrease the fidelity. Here we are assuming that the local operations within each node required during the PPP are high fidelity. Errors here will lead to imperfections in the parity projection, but since only a few operations are necessary this does not represent a major issue [@CB01a].
One knows whether the PPP succeeds (performing an EO) or fails (projecting the client qubits into separable states) from the results of single-qubit measurements performed locally within each node. Physically the measurement system may be optical or, for example, electronic via a mapping to an electron current [@K01a]. If indeed it is optical then obviously local photon detectors are required; however, note that the high fidelity measurement of a single qubit is straightforward even with limited detector efficiency, because one can generate a stream of photons rather than relying on a single detection event [@PhysRevLett.Lucas.readout]. We emphasise that, regardless of how local measurement of the single qubits is performed, we can in principle accomplish inter-node entanglement (generation of $\rho_{op}$) without the need for photon detectors in the network.
In the scenario described so far, while the fidelity of the entangling operation performed by the PPP is high, the probability of actually achieving this entanglement is low. It is bounded by $p_s=1/8$ in the limit of high absorption probability, and it falls quadratically as the absorption probability decreases. Given such a failure rate the time and resource cost for obtaining a large scale entangled state may be impractical [@Duan:2005p369; @UTprl]. (As an aside, we note that the introduction of a third qubit at each node would resolve this problem by “brokered entanglement” [@BBFM01a].) Therefore we now consider introducing detectors which watch for photons passing through the network without absorption; i.e. if a detector clicks, then we know that an entangled state has not been generated. This information is always useful: it tells us not to attempt a round of the PPP.
We now require a modified form of Eqn. (\[optical\_state\]) describing the state of the optical qubits given that the ‘no click’ criterion is satisfied. The following Kraus operators describe the presence of detectors watching for photons that pass through (fail to be absorbed) at the $i^{th}$ node $(i=1,2)$, predicated on ‘no click’. $$\hat{V}^{(i)}= \sqrt{1-d }(|\text{vac}\rangle _i
\langle \text{vac}|+\sqrt{1-\eta }\hat{a}^{\dagger }|\text{vac}\rangle _i\langle \text{vac}|\hat{a})$$ The state of the photon will be traced out because we are interested in the atomic state. Here, $d$, $\eta $, $|\text{vac}\rangle $, and $\hat{a} ^{\dagger}$ ($\hat{a}$) denote the dark count rate, the detector efficiency, the vacuum state and the creation (annihilation) operator of a photon, respectively.
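To get a feeling for how the ‘no click’ condition reweights the state, consider the following back-of-the-envelope sketch. It assumes a symmetric beam splitter and a single photon that is either absorbed (probability $P_{\text{abs}}$) or travels on to the detectors; following the Kraus factors above, a surviving photon escapes detection with probability $1-\eta$ and each detector contributes a no-dark-count factor $1-d$. The numbers below are purely illustrative and are not those used for the figures.

```python
def no_click_filtering(P_abs, eta, d):
    """Toy bookkeeping for the heralded-by-silence step (simplified assumptions)."""
    # probability that neither detector fires in a single attempt
    p_noclick = (1.0 - d) ** 2 * (P_abs + (1.0 - P_abs) * (1.0 - eta))
    # conditioned on 'no click', weight of the entangled branch of the optical qubits
    bell_weight = P_abs / (P_abs + (1.0 - P_abs) * (1.0 - eta))
    # attempts are independent, so the mean number of photons sent is geometric
    mean_trials = 1.0 / p_noclick
    return p_noclick, bell_weight, mean_trials

for eta in (0.0, 0.2, 0.6):           # eta = 0 reproduces the detector-free case
    print(eta, no_click_filtering(P_abs=0.1, eta=eta, d=0.05))
```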
Given ‘no click’ the resulting state $\rho _{op}^\prime$ is employed in a round of the PPP. In Fig. \[eo-detector\] we show the performance of this system against the parameters of absorption and detector efficiency (which of course includes all photon losses within the network as well as actual detector failure). This graph shows that even very imperfect detectors can increase the success probability.
![Success probability $p_s$ of our parity projection protocol. The $x$ axis denotes the absorption probability of the photon $P_{\text{abs}}$ and the $y$ axis denotes the detector efficiency $\eta $. Here, we assume a symmetric absorption probability for the two atoms. The horizontal surface is at $p_s=1/16$ which is representative of the probability below which the growth of large scale entangled states is impractical [@UTprl][]{data-label="eo-detector"}](fig_detectorEfficiency_bitmap.pdf){width="8.0cm"}
Dark counts are a primary error source in most previous remote EO schemes. For example, for a typical path erasure scheme, even when the photon capture probability is unity the dark count rate should be less than $0.1$ percent to obtain a minimum acceptable fidelity [@CB01a]. Also, in the first experimental realization of an EO between macroscopically distant atoms using a path-erasure scheme, the fidelity of the entangled state is only around $0.63$, and this limitation is mainly caused by dark counts [@MMOYMDM1a]. However, in our scheme neither the fidelity nor the success probability of the EO is affected by the dark counts of the photodetectors, because the optical qubits will be reset and no operation will be performed on the logical qubits when a dark count occurs. Thus dark counts only increase the necessary number of instances when we send a single photon; if the dark count rate were very high we might need many such trials before seeing a ‘no click’ event. We plot the number of trials against both dark counts and finite absorption in Fig. \[time-dark\]. The graph shows that, except near unit dark count rate and near zero absorption probability, where the number of trials goes to infinity, the number of trials remains within a reasonable range. For example, for an absorption probability of $10$ percent, the necessary number of trials is less than $40$ as long as the dark count rate is less than $0.5$. Thus the present scheme is highly robust against dark counts.
![The average number of trials (emitted single photons) is plotted when performing the PPP with imperfect photodetectors. The $x$ axis denotes the absorption probability $P_{\text{abs}} $ and the $y$ axis denotes the dark count rate $d$. Here, we assume a symmetric absorption probability for the two atoms. []{data-label="time-dark"}](fig_darkCounts_bitmap.pdf){width="8.0cm"}
We have assumed a perfect single photon source in the above discussion. The ideal single photon source should emit one and only one photon when the device is triggered, which can be realized in principle by the photon antibunching effect [@KDM01a]. However, with current technology it is inevitable that the pulse generated by a source may contain either no photons, or multiple photons, with finite probability. Suppose that $P_m$ denotes the probability of sending $m$ photons; since $P_m\ll 1$ $(m\geq 3)$ is satisfied for most single photon sources, we consider only $P_m$ for $m=0,1,2$. We have calculated and plotted the concurrence of the Bell pair after performing the PPP on the logical qubits prepared in $|++\rangle $ when the absorption probability of the photon is $10$ percent.
![The effect of an imperfect single photon source. The plot shows the concurrence of the Bell pair after performing a PPP on the logical qubits $|++\rangle $, given finite probabilities of having erroneously emitted zero ($P_0$) or two ($P_2$) photons. Here, we assume that the absorption probability of the photon at the atom is $10$ percent and also assume no photodetectors. []{data-label="2-f-two-zero"}](fig_imperfectSource_bitmap.pdf){width="7.0cm"}
As we show in Fig. \[2-f-two-zero\], even when $P_0$ is large, one can obtain high fidelity entanglement provided that $P_2$ is very small. In a recent experiment [@LM01a], a single photon source whose $P_0$ and $P_2$ are $14$ percent and $0.08$ percent, respectively, was realized; using these values, one can obtain a Bell pair whose fidelity is more than $0.996$, which is above the threshold for fault tolerant quantum computation [@RHG01a].
In conclusion, we have suggested a novel scheme to perform an entanglement operation between distant atoms (or other optically active nanostructures). Our scheme is designed to minimise the impact of photon losses, dark counts, and other issues that will be significant in distributed QIP architectures. Indeed, in principle our scheme can be performed without any photodetectors. The introduction of photon detection, even on a highly imperfect basis, is beneficial: issues such as dark counts have no impact on entanglement fidelity or success probability. Our results indicate that currently available technologies can support high-fidelity remote entanglement operations, the crucial ingredient in scalable quantum computation.
[^1]: [email protected]
---
abstract: 'We construct a simple accretion model of a rotating gas sphere onto a Schwarzschild black hole. We show how to build analytic solutions in terms of Jacobi elliptic functions. This construction represents a general relativistic generalisation of the Newtonian accretion model first proposed by @ulrich. In exactly the same form as it occurs for the Newtonian case, the flow naturally predicts the existence of an equatorial rotating accretion disc about the hole. However, the radius of the disc increases monotonically without limit as the flow reaches its minimum allowed angular momentum for this particular model.'
author:
- 'E. A. Huerta & S. Mendoza'
bibliography:
- 'acc.bib'
title: A simple accretion model of a rotating gas sphere onto a Schwarzschild black hole
---
Introduction
============
Steady spherically symmetric accretion onto a central gravitational potential (e.g. a star) was first investigated by @bondi52. This pioneering work turned out to have many different applications to astrophysical phenomena (see e.g. @frank02), despite the fact that it was constructed only out of curiosity, rather than as a realistic description of a particular astrophysical situation [@bondi05]. A general relativistic generalisation of the work of @bondi52 was made by @michel72. Both models can be seen as astrophysical examples of transonic flows that naturally occur in the Universe.
Realistic models of spherical accretion require an extra ingredient that seems inevitable in many astronomical situations. This is so because the gas clouds in which compact objects are embedded have a certain degree of rotation. This rotation enables the formation of an equatorial accretion disc in which gas particles rotate about the central object. The first steady accretion model in which a rotating gas sphere of infinite extent is accreted onto a central object was investigated by @ulrich. In his model, @ulrich considered a gas cloud rotating as a rigid body and took no account of pressure gradients associated with the infalling gas. In other words, his analysis is approximately ballistic. This is because the initial specific angular momentum of an infalling particle is small, and heating by radiation as well as viscosity effects are negligible. In addition, pressure gradients and internal energy changes along the streamlines of a supersonic flow provide negligible contributions to the momentum and energy balances respectively [cf. @ulrich; @cassen; @mendoza].
A first order general relativistic approximation of a rotating gas sphere was made by @beloborodov01. In their model, they used approximate solutions for the integration of the geodesic equation, and their boundary conditions are such that the specific angular momentum of a single particle satisfies $ h \leq 2 r_\text{g} $, where $ r_\text{g} $ is the Schwarzschild radius. Here and in what follows we use a system of units for which $ G = c = 1 $, where $ G $ is the gravitational constant and $ c $ the speed of light. In this article we show that such a model is not a general relativistic @ulrich flow, since its appropriate generalisation must satisfy the inequality $ h \geq 2 r_\text{g} $. A pseudo–Newtonian @paczynsky80 numerical approximation of the extreme hyperbolic $
h = 2 r_\text{g} $ case was discussed by @lee05. We show that this pseudo–Newtonian numerical approach differs in a significant way when compared with the complete general relativistic solution.
In this article we develop a full general relativistic model of a rotating gas sphere of infinite extent that accretes matter onto a centrally symmetric Schwarzschild space–time. We assume that heating by radiation and viscosity effects are small so that the flow can be treated as ideal. Since pressure gradients and internal energy changes along the streamlines of a supersonic flow provide negligible contributions to the momentum and energy balances respectively, the flow is well approximated by ballistic trajectories. We also assume that the self–gravity of the accreting gas does not change the structure of the Schwarzschild space–time. This is of course true if the mass of the central object that shapes the space–time is much greater than the mass of the rotating cloud. With these assumptions, we find velocity and particle number density fields as well as the streamlines of the flow in an exact analytic form using Jacobi elliptic functions. The remaining thermodynamic quantities are easily found by assuming a polytropic flow, for which the pressure is proportional to a power of the particle number density [see e.g. @stanyukovich]. In section \[celestial\] we state the main results from general relativity used to solve the model introduced in section \[accretion-model\]. We show in section \[convergence\] that the general solution converges to the accretion model considered by @ulrich and that, in the case of vanishing specific angular momentum, the velocity field converges to the one described by @michel72 with vanishing pressure gradients in the fluid. The particular case of a minimum specific angular momentum $
h = 2 r_\text{g} $ is calculated in section \[ultrarelativistic\], and it is shown that the solutions can be found with the aid of simple hyperbolic functions. Finally, in section \[discussion\] we discuss the physical consequences implied by this general relativistic model.
Background in celestial mechanics for general relativity {#celestial}
========================================================
The main results from relativistic gravity to be used throughout the article are stated in this section. The reader is referred to the general relativity textbooks by @MTW [@chandra; @daufields; @novi] and @wald for further details.
It is well known that the vacuum Schwarzschild solution describing the final product of gravitational collapse contains a singularity which is hidden by a horizon. The solution corresponding to the exterior gravitational field of a static, spherically symmetric body is given by the Schwarzschild metric:
$$\mathrm{d}s^{2} = - \left( 1 - \frac{ 2M } { r }
\right) \mathrm{d}
t^{2} + \left( 1 - \frac{ 2M }{ r } \right)^{-1} \mathrm{d} r^{2}+
r^{2} \mathrm{d} \Omega^{2},
\label{eq.1}$$
where $ \mathrm{d} \Omega^{2} = \mathrm{d} \theta^{2} +
\sin^{2}\theta \, \mathrm{d}\varphi^{2} $ represents the square of an angular displacement. The total mass of the Schwarzschild field is represented by $ M $. The temporal, radial, polar and azimuthal coordinates are represented respectively by $ t $, $ r $, $ \theta $ and $
\varphi $. In equation , we have chosen a signature $
(-, +,+,+) $ for the metric. In what follows, Greek indices such as $
\alpha $, $ \beta $, etc., are used to denote space–time components, taking values $ 0,\ 1,\ 2 $ and $ 3 $.
@birkhoff showed that it is possible to solve the vacuum Einstein field equations for a general spherically symmetric space–time, without the static field assumption. It follows from his calculations that the Schwarzschild solution remains the only solution of this more general space–time.
The behaviour of light rays and test bodies in the exterior gravitational field of a spherical body is described by analysing both, timelike and null geodesics. In order to do that, we first note that the Schwarzschild metric has a parity reflection symmetry, i.e. the transformation $
\theta \rightarrow \pi - \theta $ leaves the metric unchanged. Under these considerations it follows that if the initial position and tangent vector of a geodesic lies in the equatorial plane $ \theta= \pi /
2 $, then the entire geodesic must lie in that particular plane. Every geodesic can be brought to an initially equatorial plane by a rotational isometry and so, without loss of generality, it is possible to restrict our attention to the study of equatorial geodesics only.
In what follows we denote the coordinate basis components by $ x^{\mu}
$ and the tangent vector to a curve by $ u^\alpha = \mathrm{d} x^\alpha /
\mathrm{d}\tau $. For timelike geodesics the parameter $ \tau$ can be made to coincide with the proper time and for null geodesics it only represents an affine parameter. Under the above circumstances, the geodesics take the following form (cf. @wald):
$$- \kappa = \ g_{\alpha \beta} \, u^{\alpha} u^{\beta} =- \left(
1-\frac{ 2M }{r} \right ) \dot{t}^2 + \left( 1 - \frac{ 2M }{ r }
\right)^{-1} \dot{r}^2+ r^2 \dot{\varphi}^2,
\label{eq.2}$$
where
$$\kappa := \left\{
\begin{array}{ll}
1 & \text{for timelike geodesics,} \\
0 & \text{for null geodesics.}
\end{array}\right.$$
In the derivation of the geodesic equation , there are two important constants of motion that must be taken into account. The first of them is
$$E := - g_{\alpha \beta} \, \xi^{\alpha} u^{\beta}
= \left( 1 - \frac{ 2M } { r }
\right)\frac{\mathrm{d}t}{\mathrm{d}\tau},
\label{eq.3}$$
where $ \xi^{\alpha}$ represents the static Killing vector and $E$ is a constant of motion. For timelike geodesics $E$ represents the specific energy of a single particle following a given geodesic, relative to a static observer at infinity.
The second constant of motion $ h $ is related to the rotational Killing field $ \psi^{\alpha}$ by the following relation:
$$h := g_{\alpha \beta} \psi^{\alpha} u^{\beta}= r^{2} \sin^{2}\theta
\frac{\mathrm{d}\varphi}{\mathrm{d}\tau}.
\label{eq.4}$$
Since we have chosen $ \theta = \pi / 2 $ without loss of generalisation, the previous equation takes the form
$$h= r ^{2} \frac{ \mathrm{d} \varphi }{ \mathrm{d} \tau }.
\label{eq.5}$$
For timelike geodesics $ h $ is the specific angular momentum. The final equation for the geodesics is found by direct substitution of equations and into relation . From now on, we restrict the analysis to timelike geodesics only and so, the equation of motion takes the following form:
$$\left( \frac{ \mathrm{d}r }{ \mathrm{d} \tau } \right)^2 +
\left( 1 - \frac{ 2M }{ r } \right) \left( 1 + \frac{ h^{2} }{ r^{2} }
\right) = E^{2}.
\label{eq.6}$$
This equation shows that the radial motion of a geodesic is very similar to that of a unit mass particle of energy $ E^{2} $ in ordinary one dimensional non-relativistic mechanics. The feature provided by general relativity in equation is that, apart from a Newtonian gravitational term $ - 2M / r $ and the centrifugal barrier $ h^{2} / r^{2} $, there is a new attractive potential term $
-2 M h^{2} / r^{3}$, that dominates over the centrifugal barrier for sufficiently small $r$.
As it is done in the analysis of the Keplerian orbit for Newtonian gravity (see for example @daumec), it is useful to consider $r$ as a function of $ \varphi $ instead of $ \tau $. Therefore, equation takes the form
$$\left( \frac{ \mathrm{d} r }{ \mathrm{d} \varphi } \right)^2 = \frac{
2 M r^3 }{ h^2 } - r^2 + 2 M r + \left( E^2 - 1 \right ) \left(
\frac{ r^4 }{ h^2 } \right)
\label{eq.7}.$$
Now, letting $ E^{2}-1 :=2 E_\text{tot} $, where $ E_\text{tot} $ is the total energy of the particle, and $ u = r^{-1} $, equation takes the final form
$$\left( \frac{ \mathrm{d} u }{\mathrm{d} \varphi } \right)^2 =
2 M u^3 - u^2 + \frac{ 2 M u }{ h^2 } + \frac{ 2 E_\text{tot} }{
h^2 }.
\label{eq.8}$$
Let us define $ u := M v / h^2 $ so that the previous equation simplifies to
$$\left( \frac{ \mathrm{d} v }{ \mathrm{d} \phi } \right)^2 =
\alpha v^3 -v^2 + 2 v + \epsilon,
\label{eq.9}$$
where
$$\alpha :=2 \left( \frac{ M }{ h } \right)^2 ,\qquad \epsilon := \frac{2
E_\text{tot} h^2 }{ M^2 }.
\label{eq.10}$$
This equation determines the geometry of the geodesics on the invariant plane labelled by $ \theta = \pi / 2 $. In fact, the geometry of the orbits described on this plane is determined by the roots of the cubic equation
$$f(v) = \alpha v^{3} -v^{2} +2v + \epsilon.
\label{eq.10a}$$
The parameter $ \alpha $ provides the difference between the general relativistic and the Newtonian case. In fact, $ \alpha \rightarrow 0 $ in the Newtonian limit. Finally, the eccentricity $ \boldsymbol{e} $ of the Newtonian orbit is related to $ \epsilon $ through the relation
$$\boldsymbol{e}^{2} =1+ \epsilon.$$
Accretion model
===============
The model first proposed by @ulrich describes a non–relativistic steady accretion flow in which fluid particles fall onto a central object due to its gravitational potential. Their initial angular momentum $ h_\infty $ at infinity is considered small, in such a way that this model is a small perturbation of @bondi52’s spherical accretion model. The specific initial conditions far away from the origin, combined with the assumption that radiative processes and viscosity play no important role in the flow, imply that the streamlines have a parabolic shape. When fluid particles arrive at the equator they thermalise their velocity component normal to the equator. Since the angular momentum of a particular fluid particle is conserved, it follows that particles orbit about the central object once they reach the equator. The radius $
r_\text{dN} $ of the Newtonian accretion disc, where particles orbit about the central object, is given by [@ulrich; @mendoza]
$$r_\text{dN} = h_\infty^2 / M.
\label{eq.10b}$$
The velocity field and the density profiles are calculated by energy and mass conservation arguments.
We consider now a general relativistic @ulrich situation in which rotating fluid particles fall onto a central object that generates a Schwarzschild space–time. As described in section \[introduction\], our analysis is well described by a ballistic approximation. The equation of motion for each fluid particle is thus described by relation .
In order to get quantitative results it is important to establish the boundary conditions at infinity. The angular momentum is given by equation , so if a particle that falls onto the black hole has an initial velocity $ v_{0} $ at an initial polar angle $ \theta_{0}
$ and the radial distance between the particle and the black hole is $
r_{0} $, then the angular momentum is given by
$$h_{\infty * } = r_0^2 \frac{ \mathrm{d} \varphi }{ \mathrm{d} \tau } =
r_0\, \gamma_0\, v_0\, \sin\theta_0,
\label{eq.11}$$
where $ \gamma_0:= \left( 1- v_{0} ^{2} \right)^{-1/2} $ is the Lorentz factor for the velocity $ v_0 $.
In addition, $h_{\infty *}$ is related to the angular momentum $ h $ perpendicular to the invariant plane through the relation
$$h = h_{\infty *} \sin\theta_{0}.
\label{eq.12a}$$
In the Newtonian case, the specific angular momentum $h_{\infty
*}$ converges to the value calculated by @ulrich.
With the above relations it is then possible to calculate the equation for a given fluid particle falling onto the central object. First of all, equation states that, if $f(v)$ is a cubic polynomial in $ v $, then either all of its roots are real or one of them is real and the remaining two form a complex–conjugate pair. The fact that the particle’s energy is insufficient to permit its escape from the black hole’s gravitational field requires that $ \epsilon <
0$. This implies that the roots $ v_1,\ v_2,\ $ and $ v_3 $ of $
f(v) $ are all real and satisfy the inequality $ v_{1} < v_{2} <
v_{3} $. Thus, $ f(v) $ can be written as
$$f(v) = \alpha \left( v - v_1 \right) \left( v_2 - v \right)
\left( v_{3} - v \right).
\label{eq.14}$$
Direct substitution of this relation in equation yields the integration
$$- \frac{ 2 }{ \left[ \left( v_2 - v_1 \right) \left( v_3 - v_1 \right)
\right]^{1/2} } \int \frac{ \mathrm{d} w }{ \left[ \left( w^2 - w_1^2
\right) \left( w^{2} - w_{2} ^{2} \right) \right]^{1/2} } =
\alpha^{1/2}\varphi,
\label{eq.16}$$
where $ w_1^2 = 1 / \left( v_{2} - v_{1} \right) $, $ w_{2} ^{2} = 1 / \left( v_{3} - v_{1} \right) $ and $ v =
v_{1} + w^{-2} $. This elliptic integral can be calculated in terms of Jacobi elliptic functions (see for example @cayley [@hancock]) yielding the following result
$$\frac{ 1 }{ \left( v_{3} - v_{1} \right)^{ 1/2 } }
\text{ns}^{-1} \left\{ w \left( v_{2} - v_{1} \right)^{1/2}
\right\} = \alpha^{ 1/2 } \varphi.
\label{eq.17}$$
The modulus $ k $ of the Jacobi elliptic function for this particular problem is given by
$$k^{2}= \frac{ v_{2} - v_{1} }{ v_{3} - v_{1} }.
\label{eq.17a}$$
With the aid of relation , the equation of the orbit is now obtained:
$$v =v_{1} + \left( v_{2} - v_{1} \right) \ \text{sn}^{2} \Big\{ \frac{
\varphi }{ 2 } \left[ \alpha \left( v_{3} - v_{1} \right) \right]^{
1/2 } \Big\}.
\label{eq.18}$$
This is a general equation for the orbit. For the particular case we are interested in, it must resemble the orbit proposed by Ulrich when $ \alpha = 0$. Thus, the equation of the orbit must converge to a parabola in this limit. This is possible if and only if the eccentricity $\boldsymbol{e} = 1$, which in turn implies $ \epsilon =
0$. All these conditions mean that the roots of equation are given by
$$v_{1} = 0, \quad v_{2} = \frac { 1 - \left( 1 - 8 \alpha \right)^{ 1/2 }
}{ 2 \alpha }, \quad v_{3} = \frac{ 1 + \left( 1 - 8 \alpha \right)^{ 1/2
} }{ 2 \alpha },
\label{eq.18a}$$
and so, the modulus $ k $ of the Jacobi elliptic functions in equation takes the form
$$k^2 = \frac{ 1 - ( 1 - 8 \alpha )^{ 1/2 } }{ 1 + ( 1 - 8 \alpha )^{
1/2 } }.
\label{eq.18b}$$
Note that the previous equations restrict the value of $ \alpha
$ in such a way that
$$0 \leq \alpha \leq 1/8.
\label{eq.18ba}$$
When $ \alpha = 0 $, the Ulrich solutions are recovered, and $ \alpha = 1 / 8 $ corresponds to the case in which the angular momentum $ h = 4 M = 2 r_\text{g} $ reaches its minimum allowed value.
The orbit followed by a single particle falling onto a Schwarzschild black hole with the Ulrich prescription is then given by
$$\begin{gathered}
v = \frac{ p }{ r } = v_{2} \, \text{sn}^{2}
\varphi \beta,
\label{eq.21} \\
\intertext{where}
\beta := \frac{ \left( \alpha v_{3} \right)^{ 1/2 } }{ 2 } = \left(
\frac{ 1 + \left( 1 - 8 \alpha \right)^{ 1/2 } }{ 8 }\right)^{ 1/2 },
\quad p:= \frac{ h^{2} }{ M } = \frac{ h^{2}_{\infty
*} }{ M } \sin^{2} \theta_0 = r_* \sin^{2} \theta_0,
\label{eq.21a}\end{gathered}$$
and $p$ is the *latus rectum* of the generalised conic. Note that in the Newtonian limit, the length $r_*$ defined by equation converges to the radius of the Newtonian disc $ r_{\text{dN}} $ as shown by relation .
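The orbit written above is straightforward to evaluate numerically. The sketch below (Python, using `scipy.special.ellipj`, which takes the parameter $m = k^2$) computes $r(\varphi)$ in units of the *latus rectum* for a few values of $\alpha$ and checks that, for $\alpha \ll 1$, the curve collapses onto the Newtonian parabola $r = p / (1 - \cos\varphi)$. The value $\alpha = 0$ itself has to be treated as a limit, since $v_2$ becomes an indeterminate $0/0$ there.

```python
import numpy as np
from scipy.special import ellipj

def r_orbit(phi, alpha, p=1.0):
    """Parabolic-like relativistic orbit r(phi); alpha = 2(M/h)^2 with 0 < alpha <= 1/8."""
    s = np.sqrt(1.0 - 8.0 * alpha)
    v2 = (1.0 - s) / (2.0 * alpha)           # the root v_2
    k2 = (1.0 - s) / (1.0 + s)               # squared modulus of the elliptic functions
    beta = np.sqrt((1.0 + s) / 8.0)
    sn = ellipj(beta * phi, k2)[0]           # ellipj returns (sn, cn, dn, ph)
    return p / (v2 * sn ** 2)

phi = np.linspace(0.5, 3.0, 5)
print("parabola:", 1.0 / (1.0 - np.cos(phi)))
for alpha in (1e-6, 1e-2, 0.12):
    print("alpha =", alpha, "->", r_orbit(phi, alpha))
```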
Before using the equation of the orbit to find out the velocity field and the particle number density, it is useful to mention some important properties of the Jacobi elliptic functions, such as [@cayley; @hancock]
$$\begin{gathered}
\text{sn}^{2}(z,k) + \text{cn}^{2}(z,k) = 1
\notag \\
\text{sn}(z,k) \rightarrow \sin(z), \quad
\text{cn}(z,k) \rightarrow \cos(z), \quad
\text{dn}(z,k) \rightarrow 1, \quad
\text{as} \quad k \rightarrow 0,
\notag \\
\frac{ \mathrm{d}}{\mathrm{d}z} \text{sn}(z,k) = \text{cn}(z,k) \
\text{dn}(z,k), \qquad
\frac{\mathrm{d}}{\mathrm{d}z} \text{cn}(z,k) = -\text{sn}(z,k) \
\text{dn}(z,k).
\label{eq.22t}\end{gathered}$$
The relativistic conic equation is obtained by direct substitution of these relations onto equation , giving
$$r = \frac{ p }{ v_{2} \left( 1- \text{cn}^{2}\varphi \beta \right) }
\label{eq.22}.$$
This orbit lies on the invariant plane $ \theta = \pi /
2 $. We now obtain an equation of motion in terms of the polar coordinate $ \theta $ and the initial polar angle $ \theta_{0}$ of a particle when it starts falling onto the black hole. To do so, we note that in order to recover the geometry of spherical 3D space as $\alpha \rightarrow 0 $, the following relation must be fulfilled[^1]
$$\text{cn}^{2} \varphi \beta = \frac { \text{cn}^{2} \theta_{0} \beta +
\text{cn}^{2} \theta \beta - 1 }{ 2 \text{cn}^{2} \theta_{0} \beta
- 1 }.
\label{eq.23}$$
Since the invariant plane passes through the origin of coordinates, the radial coordinate $ r $ remains the same if another plane is taken instead of the invariant one. Therefore, the angle $
\theta_{0}$ is the same as the one related to the value of the angular momentum of the particle at infinity (cf. equation ). Thus, the equation of the orbit is found by direct substitution of equations and into , and is given by
$$r = \frac{ r_* \sin^{2}\theta_0 \left( 2 \, \text{cn}^{2} \theta_{0}
\beta -1 \right) }{ v_{2} \left( \text{cn}^{2}\theta_{0} \beta-
\text{cn}^{2}\theta \beta \right) }.
\label{eq.24}$$
In order to work with dimensionless variables, let us make the following transformations
$$\frac{ r }{ r_* } \rightarrow r, \qquad
\frac{ v_{i} }{ v_\text{k} } \rightarrow v_{i} \qquad
(i = r, \ \theta,\ \varphi), \qquad
\frac{ n }{ n_{0} } \rightarrow n,$$
where
$$v_{i} := \frac{ \mathrm{d} x^i }{ \mathrm{d}\tau }, \qquad
n_{0} := \frac{ \dot{ M } }{ 4 \pi v_\text{k} r^{2}_* }, \qquad
v_\text{k} := \left( \frac{ M }{ r_* } \right)^{ 1/2 }.$$
In the previous relations, the mass accretion rate onto the black hole is represented by $ \dot M $. The velocity $ v_\text{k} $ converges to the Keplerian velocity of a single particle orbiting about the central object in a circular orbit when $ \alpha = 0 $. The particle number density $ n_{0} $ converges to the one calculated by @bondi52 in the Newtonian limit for the same null value of $
\alpha $.
Under the above considerations, the equations for the streamlines $ r(\theta) $, the velocity field $ v_r,\ v_\theta, v_\varphi $ and the proper particle number density $ n $ are given by
$$\begin{gathered}
r = \frac{ \sin^2\theta_0 \left( 2 \text{cn}^{2}\theta_{0} \beta
-1 \right) }{ v_{2 } \left( \text{cn}^{2}\theta_{0} \beta -
\text{cn}^{2}\theta \beta \right) },
\label{eq.25} \\
v_r = -2 r^{-1/2} \beta \ \frac{ \text{cn} \beta\theta \
\text{sn}\beta\theta \ \text{dn}\beta\theta }{ \sin\theta } \ f^{
1/2 }_1 \left( \theta,\, \theta_{0},\, v_{2},\, \beta \right),
\label{eq.28} \\
v_{\theta} = r^{ -1/2 } \frac{ \text{cn}^{2}\theta_{0} \beta -
\text{cn}^{2}\theta \beta }{ \sin\theta } \ f^{ 1/2 }_{1} \left(
\theta, \theta_{0}, v_{2}, \beta \right),
\label{eq.27} \\
v_{\varphi} = r^{ -1/2 } \frac{ \sin\theta_{0} }{ \sin\theta } \left(
\frac{v_{2} \ \left( \text{cn}^{2} \theta_{0} \beta- \text{cn}^{2}
\theta \beta \right) }{ 2 \text{cn}^2 \theta_{0} \beta -1 } \right)^{
1/2 },
\label{eq.26} \\
n = \frac { r^{ -3/2 } \sin\theta_{0} }{ 2 f^{ 1/2 }_1 \left( \theta,\,
\theta_{0},\, v_{2},\, \beta \right) \ f_{2} \left( \theta,\,
\theta_{0},\, v_{2},\, \beta \right) },
\label{eq.28a}\end{gathered}$$
where the functions $f_{1}$ and $f_{2}$ are defined by the following relations:
$$\begin{gathered}
f_{1} \left( \theta, \theta_{0}, v_{2}, \beta \right) :=
\frac{ 2 \ \sin^{2}\theta \ \left( 2 \text{cn}^{2}\theta_{0}
\beta -1 \right) - v_{2} \ \sin^{2}\theta_{0} \left( \text{cn}^2
\theta_{0} \beta - \text{cn}^2 \theta \beta \right) }{
\left( 2 \text{cn}^{2}\theta_{0} \beta - 1 \right)
\left\{ \left( \text{cn}^{2} \theta_{0} \beta - \text{cn}^{2} \theta
\beta \right)^2 + \left( 2 \ \beta \text{cn} \beta \theta \
\text{sn} \beta \theta \ \text{dn} \beta \theta \right)^2
\right\} },
\\
\begin{split}
f_{2} \left( \theta, \theta_{0}, v_{2}, \beta \right) := \ & \beta \text{cn}
\beta \theta_{0} \ \text{sn} \beta \theta_{0} \ \text{dn} \beta
\theta_{0} + \left\{ \sin\theta_{0} \ \cos\theta_{0} \ \left(
2 \text{cn}^{2} \theta_{0} \beta -1 \right) - \right.
\\
& \left. - 2 \beta \text{cn}
\beta \theta_{0} \ \text{sn} \beta \theta_{0} \ \text{dn} \beta
\theta_{0} \ \sin^{2} \theta_{0} \right\} / v_{2} r.
\end{split}\end{gathered}$$
Equations - are the solutions to the problem of the accretion of a rotating gas sphere onto a Schwarzschild black hole, i.e. they represent a relativistic generalisation of the accretion model first proposed by @ulrich.
Convergence to known accretion models {#convergence}
=====================================
We have mentioned before (cf. section \[accretion-model\]) that the analytical solution must converge to the Ulrich accretion model when $ \alpha \rightarrow 0 $. In order to prove this, note that three very important conditions are fulfilled when $ \alpha \rightarrow 0 $: (a) the modulus $k$ of the Jacobi elliptic functions vanishes, (b) the root $ v_{2} \rightarrow 2 $, and (c) the parameter $\beta \rightarrow
1 / 2 $. These conditions together with equation imply that relations - naturally converge to the non-relativistic Ulrich model (see for example @mendoza), that is:
$$\begin{gathered}
r = \frac { \sin^{2} \theta_{0} }{ 1 - \cos\theta / \cos\theta_{0} },
\label{eq.29} \\
v_{r} = -r^{ -1/2 } \left( 1 + \frac{ \cos\theta }{ \cos \theta_{0}}
\right)^{ 1/2 },
\label{eq.32} \\
v_\theta = r^{ -1/2 } \frac{ \cos\theta_0 - \cos\theta }{ \sin\theta }
\left( 1 + \frac{ \cos \theta }{ \cos \theta_0 } \right)^{ 1/2 },
\label{eq.31} \\
v_\varphi = r^{ -1/2 } \frac{ \sin \theta_0 }{ \sin \theta } \left(
1 - \frac{ \cos \theta }{ \cos \theta_0 } \right)^{ 1/2 },
\label{eq.30} \\
\rho = r^{ -3/2 } \left( 1 + \frac{ \cos \theta }{ \cos \theta_0 }
\right)^{- 1/2 } \left\{ 1 + 2 r^{ -1 } P_2 \left( \cos \theta_{0}
\right) \right\}^{ -1 },
\label{eq.33} \end{gathered}$$
where $ P_2(\chi) $ is the second order Legendre polynomial, given by $ P_{2}( \chi ) := \left( 3 \chi^{2} - 1 \right) / 2 $.
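This convergence can also be verified numerically, by evaluating the relativistic streamline at a small value of $ \alpha $ and comparing with the Newtonian expression above. A short sketch (with $ r $ in units of $ r_* $ and the same conventions as in the previous sketch) is

```python
import numpy as np
from scipy.special import ellipj

def r_rel(theta, theta0, alpha):
    """Relativistic streamline r(theta; theta0) in units of r_*."""
    s = np.sqrt(1.0 - 8.0 * alpha)
    v2 = (1.0 - s) / (2.0 * alpha)
    k2 = (1.0 - s) / (1.0 + s)
    beta = np.sqrt((1.0 + s) / 8.0)
    cn0 = ellipj(beta * theta0, k2)[1]       # cn(theta0 * beta)
    cn = ellipj(beta * theta, k2)[1]         # cn(theta * beta)
    return np.sin(theta0) ** 2 * (2.0 * cn0 ** 2 - 1.0) / (v2 * (cn0 ** 2 - cn ** 2))

def r_newtonian(theta, theta0):
    return np.sin(theta0) ** 2 / (1.0 - np.cos(theta) / np.cos(theta0))

theta, theta0 = 1.2, 0.4                     # a streamline between theta0 and the equator
for alpha in (1e-6, 1e-3, 1e-1):
    print(alpha, r_rel(theta, theta0, alpha), r_newtonian(theta, theta0))
```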
On the other hand, if we consider a particular case for which the angular momentum of the fluid particles is null, then equations - converge to
$$v_r = - \left( 2 M / r \right)^{ 1/2 },
\qquad v_\theta = 0, \qquad
v_\varphi = 0, \qquad
n = 2^{ -1/2 } r^{ -3/2 }.
\label{eq.34}$$
These equations describe a radial accretion model onto a Schwarzschild black hole. They correspond to the model first constructed by @michel72 when pressure gradients in his calculations are negligible.
The extreme hyperbolic model {#ultrarelativistic}
============================
As mentioned in section \[accretion-model\], the parameter $ \alpha $ reaches its maximum value when $ \alpha = 1/8 $, which corresponds to a minimum angular momentum $ h = 2 r_\text{g} $. In this limit the modulus $ k $ of the Jacobi elliptic functions is such that $ k = 1 $, $ v_2 = 4 $ and $ \beta = \sqrt{2}/4 $, as can be seen from equations , and . Also, when $ k \rightarrow 1 $, the following identities are valid [@lawden]:
$$\text{sn} \, w \rightarrow \text{tanh} \, w, \quad
\text{cn} \, w \rightarrow \text{sech} \, w, \quad
\text{dn} \, w \rightarrow \text{sech} \, w.
\label{eq.34a}$$
Using all these relations it follows that solutions - converge to
$$\begin{gathered}
r = \frac{ \sin^{2} \theta_{0} \left( 2 \textrm{sech}^{2} \frac{ \sqrt{
2 } }{ 4 } \theta_{0} -1 \right) }{ 4 \left( \textrm{sech}^{2} \frac{
\sqrt{ 2 } }{ 4 } \theta_{0}- \textrm{sech}^{2} \frac{ \sqrt{ 2 }
}{ 4 } \theta
\right ) },
\label{eq.35} \\
v_{r} = - r^{ -1/2 } \frac{ \sqrt 2 }{ 2 } \ \frac{ \textrm{sech}^{2}
\frac{ \sqrt{ 2 } }{ 4 } \theta \ \textrm{tanh} \frac{ \sqrt 2 }{
4 } \theta }{ \sin\theta } \ f^{1/2}_{1\text{H}} \left( \theta,\,
\theta_{0} \right),
\label{eq.36} \\
v_{\theta} = r^{-1/2} \frac{ \textrm{sech}^{2} \frac{ \sqrt 2 }{
4 } \theta_{0} - \textrm{sech}^{2} \frac{ \sqrt 2 }{ 4 } \theta }{
\sin\theta } \ f^{1/2}_{1\text{H}} \left( \theta, \theta_{0} \right),
\label{eq.37}\\
v_{\varphi} = 2 r^{-1/2} \frac{ \sin\theta_{0} }{ \sin\theta }
\left( \frac{ \ \left( \textrm{sech}^{2} \frac{ \sqrt 2 }{ 4 }
\theta_{0} - \textrm{sech}^{2} \frac{ \sqrt 2 }{ 4 } \theta \right)
}{ 2 \textrm{sech}^{ 2 } \frac{ \sqrt 2 }{ 4 } \theta_{0} - 1 }
\right)^{ 1/2 },
\label{eq.38}\\
n = \frac { r^{ -3/2 } \sin\theta_{0} }{ 2 f^{1/2}_{1\text{H}} \left(\theta,
\, \theta_{0} \right) \ f_{2\text{H}} \left( \theta, \,\theta_{0}
\right) },
\label{eq.39} \\
\intertext{where}
f_{1\text{H}} \left( \theta, \theta_{0} \right) := \frac { 2 \ \sin^{2}\theta
\ \left( 2 \textrm{sech}^{2} \frac{ \sqrt 2 }{ 4 } \theta_{0} -
1 \right) - 4 \ \sin^{2} \theta_{0} \left( \textrm{sech}^{2} \frac{
\sqrt 2 }{ 4 }\theta_{0} - \textrm{sech}^{2} \frac{ \sqrt 2 }{
4 } \theta \right) }{ \left( 2 \textrm{sech}^{2} \frac{ \sqrt 2 }{
4 } \theta_{0} - 1 \right) \left\{ \left( \textrm{sech}^{2} \frac{
\sqrt 2 }{ 4 } \theta_{0} - \textrm{sech}^{2} \frac{ \sqrt 2 }{ 4
}\theta \right)^{ 2 }+ \left( \frac{ \sqrt 2 }{ 2 } \textrm{sech}^{2}
\frac{ \sqrt 2 }{ 4 } \theta \ \textrm{tanh} \frac{ \sqrt 2 }{ 4 }
\theta \right)^2 \right\} },
\notag \\
\begin{split}
f_{2\text{H}} \left( \theta, \theta_{0} \right) &:= \frac{ \sqrt 2 }{ 4 }
\textrm{sech}^{2} \frac{ \sqrt 2 }{ 4 } \theta_{0} \ \textrm{tanh}
\frac{ \sqrt 2 }{ 4 }\theta_{0} + \left\{ \sin\theta_{0} \
\cos\theta_{0} \ \left( 2 \textrm{sech}^{2} \frac{ \sqrt 2 }{ 4 }
\theta_{0} - 1 \right) - \right.
\\
& \left. - \frac{ \sqrt 2 }{ 2 } \textrm{sech}^{2} \frac{ \sqrt 2 }{ 4 }
\theta_{0} \ \textrm{tanh} \frac{ \sqrt 2 }{ 4 } \theta_{0} \
\sin^{2}\theta_{0} \right\} / 4 r.
\notag
\end{split}\end{gathered}$$
This model does not formally represent a relativistic Ulrich solution, since the orbit followed by a particular fluid particle has a hyperbolic Newtonian counterpart. The solutions described by equations - are the exact relativistic solutions to the numerical problem discussed by @lee05 who used a @paczynsky80 pseudo–Newtonian potential.
Discussion
==========
@ulrich’s Newtonian accretion model predicts the existence of an accretion disc of radius $ r_\text{dN} $. This is a natural property of an accreting flow with rotation and has to be valid in the relativistic case as well. In order to see the modifications that a full relativistic model imposes on the structure of the accretion disc, let us start by observing what happens to a fluid particle when it reaches the equator. First, in the @ulrich accretion model, when any particle reaches the equator $ \theta = \pi / 2 $, it does so at a radius $ r = h^2 / M $ according to the dimensional form of equation . This corresponds to a stable circular orbit about the central object only in the case of a particle whose velocity is purely azimuthal and lies on the equatorial plane. For the relativistic model we have discussed so far, if this were the case, then particles would arrive at the equator at a radius [@wald]
$$r_\text{circ} = \frac{ h^2 }{ 2 M } \bigg\{ 1 + \left( 1 - 6 \alpha
\right)^{1/2} \bigg\},
\label{eq.50}$$
which corresponds to the radius of stable circular orbits. However, when $ \theta = \pi / 2 $, equation implies that the value of $ r $ differs considerably from the one expected for a stable circular orbit according to equation . In fact, fluid particles arrive at a radius greater than $ r_\text{circ}
$.
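As a quick illustration (not part of the original analysis), the expression above for $ r_\text{circ} $ depends only on the combination $ h^2 / M $ and on $ \alpha $, so its behaviour is easy to tabulate; the following short Python sketch does this:

```python
# Illustrative evaluation of r_circ = (h^2 / 2M) * (1 + sqrt(1 - 6*alpha)),
# expressed in units of h^2 / M.  The values of alpha are chosen for illustration only.
import numpy as np

def r_circ_over_h2M(alpha):
    """Stable circular orbit radius in units of h^2 / M."""
    return 0.5 * (1.0 + np.sqrt(1.0 - 6.0 * alpha))

for alpha in (0.0, 1.0e-3, 1.0e-2, 0.1, 1.0 / 8.0):
    print(f"alpha = {alpha:6.4f}  ->  r_circ = {r_circ_over_h2M(alpha):.4f} h^2/M")
```

At $ \alpha = 0 $ this reproduces the Newtonian arrival radius $ h^2 / M $, while for $ \alpha = 1/8 $ it gives $ 0.75\, h^2 / M $, which makes explicit the gap quoted in the text between $ r_\text{circ} $ and the larger radius at which the fluid particles actually arrive.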
We can also discuss what happens to the radius of the disc $ r_\text{d}
$ for any $ \alpha $. This radius is obtained by taking a particle that arrives from a streamline just above the equator, i.e. $ \theta_0
= \pi / 2 - \eta $, where the positive quantity $ \eta \ll 1 $. Figure \[fig01\] shows how this radius varies as a function of $
\alpha $. As can be seen, the radius $ r_\text{d} $ grows monotonically from the value $ r_\text{dN} $ when $ \alpha = 0 $ to infinity when $ \alpha = 1 / 8 $. This behaviour strongly modifies the traditional view, particularly since the disc occupies the entire equatorial plane in the extreme hyperbolic model. The fact that the disc radius diverges when $ \alpha = 1/8 $ can be proved directly using the results obtained in section \[ultrarelativistic\]. Indeed, evaluating equation for $ \theta = \pi / 2 $ and then taking the limit as $ \theta_0 \rightarrow \pi / 2 $, it follows that $ r \rightarrow \infty $.
![ The figure shows a plot of the radius of the disc $ r' $ measured in arbitrary units, as a function of the parameter $ \alpha $. In the non–relativistic case, for which $ \alpha = 0 $, the radius of the disc is exactly the same as the one predicted by @ulrich. For the extreme hyperbolic model, when $ \alpha \rightarrow 1 / 8 $, the radius of the disc grows without limit.[]{data-label="fig01"}](fig01.eps)
The fact that the disc radius grows monotonically as $ \alpha $ approaches the value $ 1/8 $ means that the density of the disc should be distributed in a more homogeneous way. Figure \[fig02\] shows density profiles evaluated in the equatorial plane $ \theta
= \pi / 2 $ as a function of the distance to the central object. In all cases the particle number density diverges at the origin because it represents a point of accumulated material. The case $ \alpha = 0 $ corresponds to the non–relativistic Ulrich model; apart from the divergence of the particle number density at $ r = 0 $, the density also grows without limit at the radius of the disc $ r_\text{dN} $. This is generally attributed to border effects that appear because the disc has been assumed to be thin [see e.g. @mendoza and references therein]. However, as Figure \[fig02\] shows, the divergence of the particle number density at the border of the disc disappears as soon as $ \alpha $ moves away from zero. Furthermore, it does so in such a way that the density varies smoothly throughout the disc as $ \alpha \rightarrow 1 / 8 $.
![ The plots represent different particle number densities $ n $ measured in units of $ n_0
$, as a function of the radial distance $ R $ (measured in units of $r_{*}$) evaluated in the equator, i.e. for which the polar angle $ \theta = \pi
/ 2 $. From bottom to top the models correspond to values $
\alpha $ of $ 1/8,\ 10^{-1},\ 10^{-2},\ 10^{-3},\ 10^{-4}
$ and $ 10^{-5} $. All profiles diverge at the origin because of accumulated material at that point. The particle number density diverges in the Newtonian limit (for which $
\alpha \rightarrow 0 $) at the border of the disc, which corresponds for that particular case to $ R \rightarrow 1
$ [@mendoza]. However, this singularity disappears and softens the density profile in the disc as $ \alpha
\rightarrow 1/8 $. []{data-label="fig02"}](fig02.eps)
The results of section \[ultrarelativistic\] can be compared with the pseudo–Newtonian @paczynsky80 approximation used by @lee05. Figure \[fig03\] shows a comparison between the fully relativistic solution and the pseudo–Newtonian approximation. It is clear from the images that the solution differs not only at small radii, near the Schwarzschild radius, but also at large scales. This is due to the strength of the gravitational field produced by the central source, which makes particles approach the equator quite rapidly. For instance, near the event horizon there are fluid particles that appear to be swallowed by the hole when described by a pseudo–Newtonian potential. However, the complete relativistic solution shows that for this particular case some of those particles are not swallowed directly by the hole, but are injected into the accretion disc.
![ The figure shows a comparison between the fully relativistic solution presented in this article (continuous lines) with the Newtonian @paczynsky80 numerical approximations made by @lee05 (dotted lines). Distances in the plot are measured in units of the Schwarzschild radius. The plot is a projection at an azimuthal angle $ \varphi = \text{const}
$. The length $ R $ is the radial distance measured in the equator. In both cases, the streamlines were calculated in the extreme hyperbolic case for which $ \alpha = 1 / 8
$, i.e. the specific angular momentum for a particular fluid particle is twice the Schwarzschild radius. Particles were considered to be uniformly rotating at a distance of $ 50
$ Schwarzschild radius measured from the origin. Both, the small and large scale panels show that the complete relativistic solution differs significantly from their calculations. The pseudo–Newtonian @paczynsky80 approximations were kindly provided by W. H. Lee. []{data-label="fig03"}](fig03a.eps "fig:") ![ The figure shows a comparison between the fully relativistic solution presented in this article (continuous lines) with the Newtonian @paczynsky80 numerical approximations made by @lee05 (dotted lines). Distances in the plot are measured in units of the Schwarzschild radius. The plot is a projection at an azimuthal angle $ \varphi = \text{const}
$. The length $ R $ is the radial distance measured in the equator. In both cases, the streamlines were calculated in the extreme hyperbolic case for which $ \alpha = 1 / 8
$, i.e. the specific angular momentum for a particular fluid particle is twice the Schwarzschild radius. Particles were considered to be uniformly rotating at a distance of $ 50
$ Schwarzschild radius measured from the origin. Both, the small and large scale panels show that the complete relativistic solution differs significantly from their calculations. The pseudo–Newtonian @paczynsky80 approximations were kindly provided by W. H. Lee. []{data-label="fig03"}](fig03b.eps "fig:")
The work presented in this article provides a general relativistic approach to the Newtonian accretion flow first proposed by @ulrich. The main features of the accretion flow are still valid, with the important consequence that the radius of the equatorial accretion disc grows from its Newtonian value for the Ulrich case up to infinity in the extreme hyperbolic situation, for which the angular momentum is twice the Schwarzschild radius. As a consequence, the particle number density diverges on the border of the disc only for the Newtonian case described by @ulrich. This is due to the fact that, as the radius of the disc grows, the particle number density on it rearranges in such a way that it smoothly softens as the extreme hyperbolic case is approached. Figures \[fig04\] and \[fig05\] show streamlines and density isocontours for different values of the parameter $ \alpha $.
![Streamlines for values of the parameter $ \alpha = 10^{-5},\ 0.12 $ from left to right are shown in the figure. Lengths are measured in units of the radius $ r_* $. The equatorial radius is labelled by $ R $. The case $ \alpha = 10^{-5} $ is very close to the Newtonian one (see for example @mendoza). This particular case shows that the streamlines are accumulated at $ R = 1
$, which corresponds to the Newtonian radius $ r_\text{dN} $. However, the right panel shows that as $ \alpha $ approaches the value $ 1/8 $ the streamlines are not packed together any longer.[]{data-label="fig04"}](fig04a.eps "fig:") ![Streamlines for values of the parameter $ \alpha = 10^{-5},\ 0.12 $ from left to right are shown in the figure. Lengths are measured in units of the radius $ r_* $. The equatorial radius is labelled by $ R $. The case $ \alpha = 10^{-5} $ is very close to the Newtonian one (see for example @mendoza). This particular case shows that the streamlines are accumulated at $ R = 1
$, which corresponds to the Newtonian radius $ r_\text{dN} $. However, the right panel shows that as $ \alpha $ approaches the value $ 1/8 $ the streamlines are not packed together any longer.[]{data-label="fig04"}](fig04b.eps "fig:")
![ Particle number density isocontours for $ \alpha = 10^{-5},\ 0.05,\ 0.12 $ are shown in each diagram. The left panel roughly corresponds to the non–relativistic case as described by @mendoza. All models show a density divergence at the origin. However, only the Newtonian case exhibits another divergence at the border of the disc $ R =
1 $. Lengths in the plot are measured in units of the radius $ r_*
$ and the density isocontours correspond to values of $ n / n_0 =
0.1,\ 0.6,\ 1.1,\ 1.6,\ 2.1,\ 2.6 $. []{data-label="fig05"}](fig05a.eps "fig:") ![ Particle number density isocontours for $ \alpha = 10^{-5},\ 0.05,\ 0.12 $ are shown in each diagram. The left panel roughly corresponds to the non–relativistic case as described by @mendoza. All models show a density divergence at the origin. However, only the Newtonian case exhibits another divergence at the border of the disc $ R =
1 $. Lengths in the plot are measured in units of the radius $ r_*
$ and the density isocontours correspond to values of $ n / n_0 =
0.1,\ 0.6,\ 1.1,\ 1.6,\ 2.1,\ 2.6 $. []{data-label="fig05"}](fig05b.eps "fig:") ![ Particle number density isocontours for $ \alpha = 10^{-5},\ 0.05,\ 0.12 $ are shown in each diagram. The left panel roughly corresponds to the non–relativistic case as described by @mendoza. All models show a density divergence at the origin. However, only the Newtonian case exhibits another divergence at the border of the disc $ R =
1 $. Lengths in the plot are measured in units of the radius $ r_*
$ and the density isocontours correspond to values of $ n / n_0 =
0.1,\ 0.6,\ 1.1,\ 1.6,\ 2.1,\ 2.6 $. []{data-label="fig05"}](fig05c.eps "fig:")
Acknowledgements
================
We dedicate the present article to the vivid memory of Sir Hermann Bondi who pioneered the studies of spherical accretion. We would like to thank William Lee for providing his numerical @paczynsky80 pseudo–Newtonian results in order to make comparisons with the exact analytic solution presented in this article. The authors gratefully acknowledge financial support from DGAPA–UNAM (IN119203).
[^1]: @ulrich showed that $ \cos \varphi = \cos \theta
/ \cos \theta_0 $ using geometrical arguments. For the general relativistic limit, one is tempted to generalise this result to $
\text{cn} \varphi \beta = \text{cn} \theta \beta / \text{cn} \theta_0
\beta $. However, this very simple analogy does not reproduce the velocity and particle number density fields for the Newtonian limit.
---
abstract: 'Only some special open surfaces satisfying the shape equation of lipid membranes can be compatible with the boundary conditions. As a result of this compatibility, the first integral of the shape equation should vanish for axisymmetric lipid membranes, from which two theorems of non-existence are verified: (i) There is no axisymmetric open membrane being a part of torus satisfying the shape equation; (ii) There is no axisymmetric open membrane being a part of a biconcave discodal surface satisfying the shape equation. Additionally, the shape equation is reduced to a second-order differential equation while the boundary conditions are reduced to two equations due to this compatibility. Numerical solutions to the reduced shape equation and boundary conditions agree well with the experimental data \[A. Saitoh *et al.*, Proc. Natl. Acad. Sci. USA **95**, 1026 (1998)\].'
author:
- 'Z. C. Tu'
title: Compatibility between shape equation and boundary conditions of lipid membranes with free edges
---
Introduction
============
The elasticity and configuration of lipid vesicles have attracted much theoretical attention from physicists [@Lipowsky91; @Seifert97; @oybook; @tuoyjctn2008] since Helfrich proposed the spontaneous curvature model of lipid bilayers in his seminal work [@helfrich]. The shape equation to describe equilibrium configurations of lipid vesicles was derived in 1987 [@oy87; @oy89] based on Helfrich’s model. There are two typical analytical solutions to the shape equation. One is a torus with a ratio ($\sqrt{2}$) of its two generation radii [@oytorus; @Seiferttorus]. Another is a vesicle with biconcave discoidal shape [@Naitooy]. In fact, the latter solution does not correspond to a vesicle free of external force because of a logarithmic singularity in the solution [@Podgornikpre95; @Guvenjpa07; @Guvenpre07].
The opening-up process of lipid vesicles by talin was observed by Saitoh *et al.* [@Hotani], which prompts us to investigate the shape equation and boundary conditions of lipid membranes with free exposed edges. This topic was discussed theoretically and numerically by several researchers [@Capovilla; @Capovilla2; @tzcpre; @tzcjpa04; @yinyjjmb; @WangDu06; @Umeda05]. Based on Helfrich’s model, the shape equation and boundary conditions were derived by Capovilla *et al.* [@Capovilla; @Capovilla2], Tu *et al.* [@tzcpre; @tzcjpa04], and Yin *et al.* [@yinyjjmb] in different forms. Wang and Du obtained various shapes of open membranes through numerical simulations by the phase field method [@WangDu06]. Using the area difference elasticity model, Umeda *et al.* derived the shape equation and boundary conditions and then compared their numerical results with the experiment [@Umeda05]. They found that the line tension of the free edge of the open lipid membrane increases with decreasing concentration of talin [@Umeda05]. The above theoretical and numerical results can be generalized to investigate adhesion between lipid vesicles [@DesernoCM07; @Lv2009], configurations of lipid vesicles with different lipid domains [@Lipowskypre96; @tzcjpa04; @WangDu06; @Baumgart05], and vesicle formation [@WangHeJCP09]. However, the above theoretical studies [@Capovilla; @Capovilla2; @tzcpre; @tzcjpa04; @yinyjjmb] do not contain sufficient discussion of analytical solutions to the shape equation with the boundary conditions. Additionally, in the numerical work [@WangDu06; @Umeda05] the numerical results were compared with the experimental ones only qualitatively; a quantitative comparison between numerical and experimental results is still lacking. This leads to two natural questions: Can we find analytical solutions? At least, it is instructive to investigate the possibility of finding analytical solutions. Can we use the numerical results to fit the experimental data quantitatively? We hope to do so with as few parameters as possible.
Generally speaking, the shape equation derived from Helfrich’s model is a fourth-order nonlinear differential equation, while the boundary conditions include three nonlinear equations describing the shapes of the free edges of lipid membranes. In principle, one can obtain the general solution, with unknown constants, to a linear differential equation, and then determine the unknown constants from the linear boundary conditions. Thus there is no mathematical difficulty in finding a solution satisfying both a linear differential equation and linear boundary conditions. However, the problem becomes more complicated if both the differential equation and the boundary conditions are nonlinear. There is no general solution to a nonlinear differential equation, so one can only conjecture special solutions in a few cases. If we further consider the boundary conditions, only a few of these known solutions can satisfy them. Therefore, it is quite helpful to investigate the features of the special solutions that can satisfy both the nonlinear differential equation and the boundary conditions. Since it is very difficult to obtain solutions to the shape equation with the boundary conditions, we may first conjecture a surface satisfying the shape equation, and then look for a curve in the surface satisfying the boundary conditions as an edge of the surface. However, one might not find any curve satisfying the boundary conditions for a given surface satisfying the shape equation. Only some special surfaces satisfying the shape equation can admit the boundary conditions. The profound reason is that the points in the boundary curve should satisfy not only the boundary conditions, but also the shape equation, because they also lie on the surface. In other words, there exist additional constraints between the shape equation and the boundary conditions. These constraints, which have not been addressed in Refs. [@Capovilla; @Capovilla2; @tzcpre; @tzcjpa04; @yinyjjmb; @WangDu06; @Umeda05], are called the compatibility condition in this paper.
It is not a straightforward task to find the compatibility condition in the general case. The axisymmetric lipid membranes with edges will give us some clues. The shape equation is reduced to a third-order differential equation in the axisymmetric case [@seifertpra91; @jghupre93]. Zheng and Liu proved that it is integrable [@zhengliu93], and can be further transformed into a second-order differential equation with an integral constant. In this paper, we will show that the compatibility condition is that this integral constant vanishes for axisymmetric membranes. Due to this compatibility, the shape equation is reduced to a second-order differential equation while the boundary conditions are reduced to two equations. The rest of this paper is organized as follows: In Sec. \[shap-bcs\], we present the general shape equation and boundary conditions of lipid membranes with free edges. In Sec. \[sec-const\], we discuss the compatibility between the shape equation and boundary conditions in the axisymmetric case, and then verify two theorems of non-existence. In Sec.\[sec-axisym\], we find some axisymmetric numerical solutions and compare them with experimental data quantitatively. A brief summary is given in the last section.
Shape equation and boundary conditions \[shap-bcs\]
===================================================
A lipid membrane with a free edge is represented as an open surface with a boundary curve $C$. As shown in Fig. \[figframe\], we can construct an orthogonal right-handed frame $\{\mathbf{e}_1
,\mathbf{e}_2 ,\mathbf{e}_3\}$ at each point of the surface such that $\mathbf{e}_3$ is the normal vector of the surface. For each point in the boundary curve, we take $\mathbf{e}_2$ to be perpendicular to the tangent direction $\mathbf{e}_1$ of the boundary curve and to point towards the side on which the surface lies.
![\[figframe\] Right-handed frame of an open surface with a boundary curve $C$.](fig1.eps){width="7.5cm"}
The free energy of the membrane may be expressed as $$F=\int [(k_c/2)(2H+c_0)^2 +\bar{k}K] dA + \lambda A +\gamma L, \label{eq-frenergy}$$ where the first and second terms are the bending energy [@helfrich] and the surface energy of the membrane, respectively, while the third term is the line energy of the free exposed edge. $H$ and $K$ are the mean curvature and Gaussian curvature of the surface, respectively. $dA$ is the area element of the surface. $A$ and $L$ are the total area of the surface and the total length of the boundary curve, respectively. $k_c$ and $\bar{k}$ are the bending moduli. $c_0$ is the spontaneous curvature. $\lambda$ and $\gamma$ are the surface tension and line tension, respectively.
By calculating the variation of free energy (\[eq-frenergy\]), we can obtain [@tzcpre] $$(2H+c_{0})(2H^{2}-c_{0}H-2K)-2\tilde\lambda H+\nabla
^{2}(2H) =0, \label{eq-shape}$$ and $$\begin{aligned}
&&\left. \lbrack (2H+c_{0})+\tilde{k}\kappa_n]\right\vert _{C} =0,\label{bound1} \\
&&\left. \lbrack -2{\partial H}/{\partial\mathbf{e}_2}+\tilde\gamma
\kappa_n+\tilde{k} {d\tau_g}/{ds}]\right\vert _{C} =0,\label{bound2}\\
&&\left. \lbrack (1/{2})(2H+c_{0})^{2}+\tilde{k}K+\tilde\lambda
+\tilde\gamma \kappa_{g}]\right\vert _{C}=0,\label{bound3}\end{aligned}$$ where $\tilde{\lambda}\equiv\lambda/k_c$, $\tilde{k}\equiv\bar{k}/k_c$, $\tilde{\gamma}\equiv\gamma/k_c$ are the reduced surface tension, reduced bending modulus, and reduced line tension, respectively. $\kappa_n$, $\kappa_g$, $\tau_g$, and $ds$ are the normal curvature, geodesic curvature, geodesic torsion, and arc length element of the boundary curve, respectively. Equation (\[eq-shape\]) determines the equilibrium shape of the membrane, thus we call it shape equation. For a given surface satisfying the shape equation, Eqs. (\[bound1\])-(\[bound3\]) determine the shape of the boundary curve and its position in the surface, thus we call them boundary conditions. Equation (\[eq-shape\]) expresses the normal force balance of the membrane. Equation (\[bound1\]) is the moment balance equation around $\mathbf{e}_1$ at each point in curve $C$. Equations (\[bound2\]) and (\[bound3\]) are the force balance equations along $\mathbf{e}_3$ and $\mathbf{e}_2$ at each point in curve $C$, respectively. Thus, in general, the above four equations are independent of each other.
compatibility between the shape equation and boundary conditions\[sec-const\]
=============================================================================
We have mentioned that only some special surfaces satisfying the shape equation (\[eq-shape\]) can admit the boundary conditions (\[bound1\])-(\[bound3\]). What is the common feature of these special surfaces? We will find this feature for axisymmetric surfaces.
![\[figoutline\] Outline of an open surface. Each open surface can be generated by a planar curve AC rotating around the $z$ axis. $\psi$ is the angle between the tangent line and the horizontal plane.](fig2.eps){width="7.5cm"}
When the planar curve AC shown in Fig.\[figoutline\] revolves around the $z$ axis, an axisymmetric surface is generated. Let $\psi$ represent the angle between the tangent line and the horizontal plane. Each point in the surface can be expressed in the vector form $\mathbf{r}=\{\rho\cos \phi,\rho\sin \phi,z(\rho)\}$, where $\rho$ and $\phi$ are the radius and azimuthal angle of the point. Introduce a notation $\sigma$ such that $\sigma =1$ if $\mathbf{e}_1$ is parallel to $\partial\mathbf{r} /\partial \phi$, and $\sigma =-1$ if $\mathbf{e}_1$ is antiparallel to $\partial\mathbf{r} /\partial \phi$ in the boundary curve generated by point C. The above equations (\[eq-shape\])-(\[bound3\]) are transformed into $$\begin{aligned}
(h-c_{0})\left(\frac{h^{2}}{2}+\frac{c_{0}h}{2}-2K\right)-\tilde{\lambda}
h+\frac{\cos \psi }{\rho}(\rho\cos \psi h')'=0,\label{sequilib}
\\
\left[h-c_{0}+\tilde{k}{\sin \psi }/{\rho}\right]_C=0,\label{sbound1}\\
\left[-\sigma\cos \psi h'+\tilde{\gamma}{\sin \psi
}/{\rho}\right]_C=0,\label{sbound2}\\
\left[\frac{\tilde{k}^2}{2}\left(\frac{\sin \psi
}{\rho}\right)^2+\tilde{k}K+\tilde{\lambda}-\sigma\tilde{\gamma}
\frac{\cos \psi }{\rho}\right]_C=0,\label{sbound3}\end{aligned}$$ where $h\equiv {\sin \psi }/{\rho}+(\sin\psi)'$ and $K\equiv{\sin \psi
}(\sin\psi)'/{\rho}$. The ‘prime’ represents the derivative with respect to $\rho$.
The shape equation (\[sequilib\]) is a third-order differential equation. Following Zheng and Liu’s work [@zhengliu93], we can transform it into a second-order differential equation $$\begin{aligned}
\cos\psi h'
&+&(h-c_{0}) \sin\psi\psi^{\prime}
-\tilde{\lambda} \tan\psi\nonumber\\&+&\frac{\eta_{0}}{\rho\cos\psi}-\frac{\tan\psi}%
{2}(h-c_{0})^{2} =0\label{firstintg}\end{aligned}$$ with an integral constant $\eta_{0}$ (the so-called first integral). The configuration of an axisymmetric open lipid membrane should satisfy the shape equation (\[sequilib\]) or (\[firstintg\]) and the boundary conditions (\[sbound1\])-(\[sbound3\]). In particular, the points in the boundary curve should satisfy not only the boundary conditions, but also the shape equation (\[firstintg\]), because they also lie on the surface. That is, Eqs. (\[sbound1\])-(\[sbound3\]) and (\[firstintg\]) should be compatible with each other at the edge. Substituting Eqs. (\[sbound1\])-(\[sbound3\]) into (\[firstintg\]), we derive the compatibility condition to be $$\eta_{0}=0.\label{const-condit}$$
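To make this explicit (a brief verification not spelled out in the original derivation), note that at the edge Eq. (\[sbound1\]) gives $h-c_{0}=-\tilde{k}\sin\psi/\rho$, Eq. (\[sbound2\]) gives $\cos\psi\, h'=\sigma\tilde{\gamma}\sin\psi/\rho$, and Eq. (\[sbound3\]) gives $\tilde{\lambda}=-\frac{\tilde{k}^2}{2}\left(\frac{\sin\psi}{\rho}\right)^2-\tilde{k}K+\sigma\tilde{\gamma}\frac{\cos\psi}{\rho}$ with $K=\sin\psi\cos\psi\,\psi'/\rho$. Inserting these expressions into Eq. (\[firstintg\]), the terms proportional to $\tilde{\gamma}$, to $\tilde{k}$ and to $\tilde{k}^2$ cancel pairwise, leaving $$\frac{\eta_{0}}{\rho\cos\psi}=0,$$ so that $\eta_{0}$ must vanish at the edge, and hence everywhere, since it is a constant.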
Now we discuss two examples and verify two theorems of non-existence by considering the above compatibility condition.
![(Color online)\[figtorus\] Two non-existent axisymmetric open membranes: (a) A part of a torus; (b) A part of biconcave discodal surface.](fig23ab.eps){width="7.5cm"}
First, let us consider a part of a torus shown in Fig. \[figtorus\]a generated by an arc expressed by $\sin\psi=\alpha\rho+\beta$ with two non-vanishing constants $\alpha$ and $\beta$. Substituting it into the shape equation (\[firstintg\]), we obtain $c_0 =0$, $\beta=\sqrt{2}$, $\tilde{\lambda}=0$, and $\eta_0 = - \alpha$. That is, the torus can be a solution to the shape equation. However, $\eta_0 = - \alpha
\neq 0$ contradicts the compatibility condition (\[const-condit\]). Thus we arrive at:
*Theorem 1*. There is no axisymmetric open membrane being a part of torus generated by a circle expressed by $\sin\psi=\alpha\rho+\sqrt{2}$.
Secondly, we consider a biconcave discodal surface [@Naitooy] generated by a planar curve expressed by $\sin\psi=\alpha \rho
\ln(\rho/\beta)$ with two non-vanishing constants $\alpha$ and $\beta$. To avoid the logarithmic singularity at the two poles, we may dig two holes around the poles in the surface as shown in Fig. \[figtorus\]b. Substituting $\sin\psi=\alpha \rho
\ln(\rho/\beta)$ into the shape equation (\[firstintg\]), we obtain $\tilde{\lambda}=0$, $\alpha=c_0$, and $\eta_0 = -2 c_0$. That is, the biconcave discodal surface can be a solution to the shape equation. However, $\eta_0 = - 2c_0 \neq 0$ contradicts the compatibility condition (\[const-condit\]). Thus we arrive at:
*Theorem 2*. There is no axisymmetric open membrane being a part of a biconcave discodal surface generated by a planar curve expressed by $\sin\psi=c_0 \rho \ln(\rho/\beta)$.
In the above discussion, the theorems of non-existence are deduced as natural corollaries of the compatibility condition. This does not mean that these proofs are unique; other proofs are presented in Appendix \[app-proof\]. In Ref. [@tzc09rome], the present author has proved that there is no open lipid membrane being a part of a constant mean curvature surface. These theorems reveal that it is almost hopeless to find analytical solutions to the shape equation with the boundary conditions. Thus we need to seek numerical solutions.
Axisymmetric numerical solutions\[sec-axisym\]
==============================================
The compatibility condition leads to a more important result: the shape equation can be simplified as $$\begin{aligned}
\cos\psi h'
+(h-c_{0}) \sin\psi\psi^{\prime}
-\tilde{\lambda} \tan\psi-\frac{\tan\psi}%
{2}(h-c_{0})^{2} =0,\label{newshapeq}\end{aligned}$$ while the boundary conditions reduce to only two equations, (\[sbound1\]) and (\[sbound3\]), because Eq. (\[sbound2\]) is not independent of Eqs. (\[sbound1\]), (\[sbound3\]), and (\[newshapeq\]). However, it is still very difficult to obtain analytical solutions to Eq. (\[newshapeq\]) with boundary conditions (\[sbound1\]) and (\[sbound3\]). We will find axisymmetric numerical solutions and compare them with experimental data [@Hotani] in this section.
Because $\psi$ might be a multi-valued function of the independent variable $\rho$, the above equations are unsuitable for numerical solutions. Here we take the arc-length of curve AC in Fig. \[figoutline\] as an independent variable. Then we have $\dot{\rho}=\cos\psi$ and $\dot{z}=\sin\psi$, where the ‘dot’ represents the derivative with respect to the arc-length. The shape equation can be transformed into $$\ddot{\psi}=-\frac{\tan\psi}{2}\dot{\psi}^{2}-\frac{\cos\psi\dot{\psi}}%
{\rho}+\frac{\sin2\psi}{2\rho^{2}}+\tilde{\lambda}\tan\psi+\frac{\tan\psi}{2}\left(
\frac{\sin\psi}{\rho}-c_{0}\right) ^{2},\label{neweq2}$$ while the boundary conditions become $$\left[\dot{\psi}-c_{0}+\left( 1+\tilde{k}\right) \frac{\sin\psi}{\rho}\right]_C=0,\label{newbcs1}$$ and $$\left[\tilde{k}c_{0}\frac{\sin\psi}{\rho}-\left(
1+\frac{\tilde{k}}{2}\right) \tilde{k}\left(
\frac{\sin\psi}{\rho}\right) ^{2}+\tilde{\lambda}+\tilde{\gamma
}\frac{\cos\psi}{\rho}\right]_C =0.\label{newbcs2}$$ In fact, these equations can also be derived from the Lagrangian method as shown in Appendix \[app-deriv\]. In addition, we impose the initial conditions $z(0)=\rho(0)=0~\mu$m, and $\psi(0)=0$. We can use the shooting method to find numerical solutions to Eq. (\[neweq2\]) with boundary conditions \[Eqs. (\[newbcs1\]) and (\[newbcs2\])\] and these initial conditions, and then fit the parameters ($\tilde{k}$, $c_0$, $\tilde\lambda$, $\tilde\gamma$) to the experimental data. The basic idea is as follows. For given values of $\tilde{k}$, $c_0$, $\tilde\lambda$, $\tilde\gamma$ and $\dot{\psi}(0)$, we solve Eq. (\[neweq2\]) with boundary conditions (\[newbcs1\]) and (\[newbcs2\]); a minimal sketch of this step is given below. Then we compare the graph of the solution with the outline of the open membrane in the experiment, and tune the values of the parameters until the two nearly coincide. Thus we obtain a set of appropriate values of the parameters ($\tilde{k}$, $c_0$, $\tilde\lambda$, $\tilde\gamma$).
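The following is a minimal sketch of this shooting step (written in Python with SciPy; it is not taken from the original work, and the parameter values, the small starting offset near $\rho=0$, and the crude scan over $\dot{\psi}(0)$ are illustrative assumptions):

```python
# Minimal shooting-method sketch for Eq. (neweq2) with the edge conditions
# (newbcs1) and (newbcs2).  All parameter values below are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

k_t, c0, lam_t, gam_t = -0.122, 0.4, -0.1, 0.7   # \tilde{k}, c_0, \tilde{\lambda}, \tilde{\gamma}

def rhs(s, y):
    """y = (psi, dpsi/ds, rho); Eq. (neweq2) together with drho/ds = cos(psi)."""
    psi, dpsi, rho = y
    t = np.tan(psi)
    ddpsi = (-0.5 * t * dpsi**2 - np.cos(psi) * dpsi / rho
             + np.sin(2.0 * psi) / (2.0 * rho**2) + lam_t * t
             + 0.5 * t * (np.sin(psi) / rho - c0)**2)
    return [dpsi, ddpsi, np.cos(psi)]

def edge(s, y):
    """First edge condition (newbcs1); the free edge is where this vanishes."""
    psi, dpsi, rho = y
    return dpsi - c0 + (1.0 + k_t) * np.sin(psi) / rho
edge.terminal, edge.direction = True, 0

def second_bc_residual(dpsi0, eps=1e-4, s_max=10.0):
    """Integrate from s = eps with trial dpsi/ds(0) = dpsi0 and return the
    residual of (newbcs2) at the point where (newbcs1) is satisfied."""
    y0 = [dpsi0 * eps, dpsi0, eps]           # regular series start near rho = 0
    sol = solve_ivp(rhs, (eps, s_max), y0, events=edge, rtol=1e-8, atol=1e-10)
    if len(sol.t_events[0]) == 0:
        return np.nan                        # no edge found for this trial slope
    psi, dpsi, rho = sol.y_events[0][0]
    u = np.sin(psi) / rho
    return k_t * c0 * u - (1.0 + 0.5 * k_t) * k_t * u**2 + lam_t + gam_t * np.cos(psi) / rho

# Crude scan over the trial initial slope; a bisection or Newton step on the
# sign change of the residual would then pin down dpsi/ds(0).
for dpsi0 in np.linspace(0.2, 2.0, 10):
    print(f"dpsi/ds(0) = {dpsi0:4.2f}   residual of (newbcs2) = {second_bc_residual(dpsi0)}")
```

For fixed material parameters, a root of this residual in $\dot{\psi}(0)$ yields a profile $(\rho(s), z(s))$ satisfying both edge conditions, which can then be compared with the measured outline.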
![\[fig3i-k\] Numerical results (solid, dash, and dot lines) and experimental data (squares, circles, and triangles extracted from Fig. 3.I to K in Ref. [@Hotani]) of the outlines of an axisymmetric lipid membrane at different concentrations of talin.](fig3i-k.eps){width="7.7cm"}
In the experiment [@Hotani], the hole in the lipid membrane is enlarged as the concentration of talin increases, and vice versa. Talin molecules adhere to the edge of the membrane. Thus it is reasonable to assume that the line tension of the edge depends on the concentration of talin, while the bending moduli and spontaneous curvature of the membrane do not. That is, we should use common values of $\tilde{k}$ and $c_0$ for a membrane at different concentrations of talin. This gives a constraint in our fitting. As shown in Fig. \[fig3i-k\], our numerical results (solid, dash, and dot lines) obtained from Eqs. (\[neweq2\])-(\[newbcs2\]) agree well with the experimental data (squares, circles, and triangles extracted from the outlines of the membrane with decreasing concentration of talin). The common parameters are fitted as $\tilde{k}= -0.122$ and $c_0 = 0.4~\mu$m$^{-1}$. The negative $\tilde{k}$ reveals that a surface with positive Gaussian curvature is more energetically favorable than one with negative Gaussian curvature. The positive $c_0$ reflects the asymmetry of the bilayer prepared in the experiment [@Hotani], which makes the membrane bend like a standing upright cup. The other parameters are shown in the figure. The reduced line tension $\tilde\gamma$ increases from 0.66 $\mu$m$^{-1}$ to 0.79 $\mu$m$^{-1}$ with decreasing concentration of talin, in agreement with the conclusion of Ref. [@Umeda05]. The surface tension depends on the shapes and the line tension. Intuitively, the line tension of the edge induces compression stress in membranes with shapes similar to those generated by the solid line in Fig. \[fig3i-k\], thus the surface tension is negative. By contrast, the tension of the edge induces stretching stress in membranes with shapes similar to those generated by the dash or dot lines in Fig. \[fig3i-k\], thus the surface tension is positive. The variation of surface tension can be understood comprehensively from Eq. (\[constraitg\]). For the membrane in Fig. \[fig3i-k\], $2H=-({\sin\psi}/{\rho}+\dot{\psi})<0$ and decreases according to the sequence of the solid, dash and dot lines. Therefore, considering Eq. (\[constraitg\]), the surface tension of the membrane increases according to the same sequence. Furthermore, we have also verified that our numerical results indeed satisfy the constraint (\[constraitg\]).
Conclusion
==========
In the above discussion, we investigate the compatibility between shape equation and boundary conditions of lipid membranes with free edges. The main results obtained in this paper are as follows.
\(i) The compatibility condition for axisymmetric lipid membranes with free edges is that the first integral of the shape equation (\[sequilib\]) should vanish, i.e., Eq. (\[const-condit\]).
\(ii) Two theorems (*Theorem 1* and *Theorem 2* in Sec. \[sec-const\]) of non-existence are verified as natural corollaries of the compatibility condition, which give two examples revealing that one indeed might not find any curve satisfying the boundary conditions on a given surface satisfying the shape equation. These theorems also correct two flaws concerning analytical solutions in Ref. [@tzcpre].
\(iii) The shape equation of axisymmetric lipid membranes is reduced to Eq. (\[newshapeq\]). Then only two of the boundary conditions are independent. This conclusion is the same as that of Ref. [@Capovilla] for the case of vanishing $\bar{k}$ and $c_0$.
\(iv) As shown in Fig. \[fig3i-k\], the numerical solutions to the reduced shape equation (\[newshapeq\]) with boundary conditions (\[sbound1\]) and (\[sbound3\]) agree well with the experimental data [@Hotani].
Finally, we would like to point out two difficulties that we have not fully overcome yet: (i) The compatibility condition between the shape equation and boundary conditions for asymmetric (not axisymmetric) lipid membranes with edges is unclear. We do not even know whether it exists, much less what it is. (ii) We use the shooting method to find numerical solutions. However, this method is not very efficient here because of the complicated boundary conditions. A much more efficient method is desirable. The above challenges should be addressed in future work.
Acknowledgement {#acknowledgement .unnumbered}
===============
The author is grateful to X. H. Zhou and M. Li for their instructive discussions, and to the Nature Science Foundation of China (grant no. 10704009) and the Foundation of National Excellent Doctoral Dissertation of China (grant no. 2007B17) for financial support.
Other proofs of the theorems of non-existence\[app-proof\]
===================================================
The proofs can be divided into two classes in terms of their starting points. One is based on stress analysis, the other on a scaling argument.
Stress analysis
---------------
Capovilla *et al.* proposed the stress tensor in a lipid membrane and then derived the shape equation and boundary conditions from it [@Capovilla; @Capovilla2]. Recently, they found [@Guvenjpa07; @Guvenpre07] that the line integral $$\oint_\Gamma ds l^a \mathbf{f}_a \cdot \hat{z}=c,$$ where $\Gamma$ is any circle perpendicular to the symmetry axis of an axisymmetric membrane. $l^a$ and $\mathbf{f}_a$ represent the normal of $\Gamma$ tangent to the membrane surface and the stress in the membrane, respectively. $\hat{z}$ is the unit vector along the symmetry axis. $c$ is a constant dependent on the topology and the curvature singularity of the membrane.
First, the constant $c$ is non-vanishing for an axisymmetric torus free of curvature singularities, which implies that the stress in each circle $\Gamma$ perpendicular to the symmetry axis in the torus surface cannot be zero. However, the stress at a free edge must vanish. Thus, we cannot find any $\Gamma$ serving as a free edge of an axisymmetric open membrane being a part of the torus, i.e., theorem 1 follows.
Secondly, there exist singular points at the two poles of the biconcave discodal surface generated by a planar curve expressed by $\sin\psi=\alpha \rho
\ln(\rho/\beta)$. The singularity results in a non-vanishing $c$ [@Guvenjpa07; @Guvenpre07], and hence a non-zero stress in each circle $\Gamma$ in the biconcave discodal surface. Thus, we cannot find any $\Gamma$ serving as a free edge of an axisymmetric open membrane being a part of the biconcave discodal surface, i.e., theorem 2 follows.
Scaling argument
----------------
The free energy (\[eq-frenergy\]) can be written in another form $$\begin{aligned}
F&=&\int [(k_c/2)(2H)^2 +\bar{k}K] dA\nonumber\\ &+& 2k_c c_0\int H dA+(\lambda+k_c c_0^2 /2) A +\gamma L. \label{eq-frenergyn2}\end{aligned}$$ Let us consider the scaling transformation $\mathbf{r}\rightarrow \Lambda\mathbf{r}$, where the vector $\mathbf{r}$ represents the position of each point in the membrane and $\Lambda$ is a scaling parameter [@Capovilla2]. Under this transformation, we have $A\rightarrow \Lambda^2 A$, $L\rightarrow \Lambda L$, $H\rightarrow \Lambda^{-1} H$, and $K\rightarrow \Lambda^{-2} K$. Thus, Eq. (\[eq-frenergyn2\]) is transformed into $$\begin{aligned}
F(\Lambda)&=&\int [(k_c/2)(2H)^2 +\bar{k}K] dA\nonumber\\ &+& 2k_c c_0 \Lambda \int H dA+(\lambda+k_c c_0^2 /2)\Lambda^2 A +\gamma \Lambda L. \label{eq-frenergyn3}\end{aligned}$$
The equilibrium configuration should satisfy $\partial F/\partial\Lambda =0$ when $\Lambda=1$ [@Capovilla2]. Thus we obtain $$2 c_0 \int H dA+(2\tilde\lambda+c_0^2) A +\tilde\gamma L=0.\label{constraitg}$$ This equation is an additional constraint for open membranes.
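For clarity, the differentiation leading to Eq. (\[constraitg\]) reads (only the last three terms of Eq. (\[eq-frenergyn3\]) depend on $\Lambda$) $$\left.\frac{\partial F}{\partial\Lambda}\right|_{\Lambda=1}=2k_c c_0\int H dA+2\left(\lambda+\frac{k_c c_0^2}{2}\right)A+\gamma L=0,$$ which, divided by $k_c$, gives Eq. (\[constraitg\]).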
As shown in Sec. \[sec-const\], if there exists an open membrane being a part of a torus, then the shape equation (\[eq-shape\]) requires $\tilde\lambda =0$ and $c_0 =0$, which contradicts the constraint (\[constraitg\]) because $\tilde\gamma L>0$. Thus we arrive at theorem 1.
Because Willmore surfaces satisfy the special form of Eq. (\[eq-shape\]) with vanishing $\tilde\lambda$ and $c_0$ [@Willmore82], as a byproduct of the constraint (\[constraitg\]), we obtain a much stronger theorem of non-existence: There is no open membrane being a part of a Willmore surface.
Next, we turn to the biconcave discodal surface. If there exists an open membrane being a part of a biconcave discodal surface generated by a planar curve expressed by $\sin\psi=c_0 \rho \ln(\rho/\beta)$, the shape equation (\[eq-shape\]) requires $\tilde\lambda =0$. Substituting $2H=-c_0[1+2 \ln(\rho/\beta)]$ into Eq. (\[constraitg\]), we will not obtain a contradiction. Thus theorem 2 cannot be deduced from the scaling argument.
Derivation of the reduced shape equation and boundary conditions by using the Lagrange method \[app-deriv\]
===========================================================================================================
For the revolving surface generated by the planar curve shown in Fig. \[figoutline\], Eq. (\[eq-frenergy\]) can be transformed into $${F}/{2\pi
k_c}=\int_0^{s_2}[\rho
f^{2}/2+\tilde{k}\sin\psi\dot{\psi}+\tilde{\lambda} \rho
+\tilde{\gamma} \dot{\rho}]ds,$$ with $f={\sin\psi}/{\rho} +\dot{\psi}-c_{0}$. We should minimize ${F}/{2\pi k_c}$ with the constraints $\dot{\rho}=\cos\psi$ and $\dot{z}=\sin\psi$, thus we construct an action $S=\int_0^{s_2}
\mathcal{L} ds$ with a Lagrangian [@oybook] $$\mathcal{L}=\rho f^{2}/2+\tilde{k}\sin\psi\dot{\psi}+\tilde{\lambda}
\rho +\tilde{\gamma}
\dot{\rho}+\zeta(\dot{\rho}-\cos\psi)+\eta(\dot{z}-\sin\psi),\label{lagrangL}$$ where $\zeta$ and $\eta$ are two Lagrange multipliers. In terms of the variational theory, we can derive $$\begin{aligned}
\delta S & =&\int_{0}^{s_{2}}\delta \mathcal{L}ds-\mathcal{H}\delta s_{2}\nonumber\\
&=&\int_{0}^{s_{2}}\left( \frac{\partial \mathcal{L}}{\partial\psi}-\frac{d}%
{ds}\frac{\partial \mathcal{L}}{\partial\dot{\psi}}\right)
\delta\psi ds+\left.
\frac{\partial \mathcal{L}}{\partial\dot{\psi}}\delta\psi\right\vert _{0}^{s_2}\nonumber\\
&+&\int_{0}^{s_{2}}\left( \frac{\partial \mathcal{L}}{\partial \rho}-\frac{d}%
{ds}\frac{\partial \mathcal{L}}{\partial\dot{\rho}}\right) \delta
\rho ds+\left.
\frac{\partial \mathcal{L}}{\partial\dot{\rho}}\delta \rho\right\vert _{0}^{s_2}\nonumber\\
&+&\int_{0}^{s_{2}}\left( 0-\frac{d}{ds}\frac{\partial
\mathcal{L}}{\partial \dot{z}}\right) \delta zds+\left.
\frac{\partial \mathcal{L}}{\partial\dot{z}}\delta
z\right\vert _{0}^{s_2}\nonumber\\
&+&\int_{0}^{s_{2}}\left( \dot{\rho}-\cos\psi\right) \delta\zeta
ds+\int_{0}^{s_{2}}\left( \dot{z}-\sin\psi\right) \delta\eta ds\nonumber\\
&-&\mathcal{H}\vert_C\delta s_{2} =0,\label{deltaS}\end{aligned}$$ where the Hamiltonian $\mathcal{H}=\dot{\psi}\frac{\partial
\mathcal{L}}{\partial\dot{\psi}}+\dot{\rho}\frac{\partial
\mathcal{L}}{\partial\dot{\rho}}+\dot{z}\frac{\partial
\mathcal{L}}{\partial\dot{z}}-\mathcal{L}$. Imposing $\psi(0)=0$, $\rho(0)=z(0)=0~\mu$m, and substituting Eq. (\[lagrangL\]) into Eq. (\[deltaS\]), we can obtain $$\begin{aligned}
&&\zeta\sin\psi-\rho \dot{f} =0,\label{tmpeq1}\\
&&\eta=\mathrm{constant},\label{tmpeq2}\\
&&\dot{\rho}=\cos\psi,\\
&&\dot{z}=\sin\psi,\end{aligned}$$ with boundary conditions $$\begin{aligned}
&&\left[f +\tilde{k}\frac{\sin\psi}{\rho}\right]_C=0,\label{tmpbc1}\\
&&\zeta\vert _{C}+\tilde{\gamma}=0,\label{tmpbc2}\\
&&\eta\vert_{C}=0,\label{tmpbc3}\\
&&\mathcal{H}\vert_C=0.\label{tmpbc4}\end{aligned}$$ Eq. (\[tmpbc1\]) is equivalent to boundary condition (\[newbcs1\]).
Because $\mathcal{L}$ does not explicitly contain $s$, $\mathcal{H}$ is a constant. Combining Eqs. (\[tmpeq2\]), (\[tmpbc3\]), (\[tmpbc4\]) and the definition of $\mathcal{H}$, we derive $$\zeta\cos\psi=\rho [f (f-2\dot{\psi})/2 +\tilde{\lambda} ].\label{tmpeq3}$$ From Eqs. (\[tmpeq1\]) and (\[tmpeq3\]) we can obtain the shape equation (\[neweq2\]). Equation (\[tmpbc4\]) can be transformed into the boundary condition (\[newbcs2\]) with Eq. (\[tmpbc1\]). From Eqs. (\[tmpeq1\]) and (\[tmpbc2\]) we can also obtain the other boundary condition, which is not independent of the shape equation (\[neweq2\]) and boundary conditions (\[newbcs1\]) and (\[newbcs2\]).
R. Lipowsky, Nature **349**, 475 (1991).\
U. Seifert, Adv. Phys. **46**, 13 (1997).\
Z. C. Ou-Yang, J. X. Liu and Y. Z. Xie, *Geometric Methods in the Elastic Theory of Membranes in Liquid Crystal Phases* (World Scientific, Singapore, 1999).\
Z. C. Tu and Z. C. Ou-Yang, J. Comput. Theor. Nanosci. **5**, 422 (2008).\
W. Helfrich, Z. Naturforsch. **28C**, 693 (1973).\
O. Y. Zhongcan and W. Helfrich, Phys. Rev. Lett. **59**, 2486 (1987).\
O. Y. Zhongcan and W. Helfrich, Phys. Rev. A **39**, 5280 (1989).\
Z. C. Ou-Yang, Phys. Rev. A **41**, 4517 (1990).\
U. Seifert, Phys. Rev. Lett. **66**, 2404 (1991).\
H. Naito, M. Okuda, and Z. C. Ou-Yang, Phys. Rev. E **48**, 2304 (1993).\
R. Podgornik, S. Svetina, and B. Žekš, Phys. Rev. E **51**, 544 (1995).\
P. Castro-Villarreal and J. Guven, J. Phys. A **40**, 4273 (2007).\
P. Castro-Villarreal and J. Guven, Phys. Rev. E **76**, 011922 (2007).\
A. Saitoh, K. Takiguchi, Y. Tanaka, and H. Hotani, Proc. Natl. Acad. Sci. **95**, 1026 (1998).\
R. Capovilla, J. Guven, and J. A. Santiago, Phys. Rev. E **66**, 021607 (2002).\
R. Capovilla and J. Guven, J. Phys. A: Math. Gen. **35**, 6233 (2002).\
Z. C. Tu and Z. C. Ou-Yang, Phys. Rev. E **68**, 061915 (2003).\
Z. C. Tu and Z. C. Ou-Yang, J. Phys. A **37**, 11407 (2004).\
Y. Yin, J. Yin, and D. Ni, J. Math. Biol. **51**, 403 (2005).\
X. Wang and Q. Du, J. Math. Biol. **56**, 347 (2008).\
T. Umeda, Y. Suezaki, K. Takiguchi, and H. Hotani, Phys. Rev. E **71**, 011913 (2005).\
M. Deserno, M. M. Müller, and J. Guven, Phys. Rev. E **76**, 011605 (2007).\
C. Lv, Y. Yin, and J. Yin, Colloids Surf. B **74**, 380 (2009).\
F. Jülicher and R. Lipowsky, Phys. Rev. E **53**, 2670 (1996).\
T. Baumgart, S. Das, W. W. Webb, and J. T. Jenkins, Biophys. J. **89**, 1067 (2005).\
Z. Wang and X. He, J. Chem. Phys. **130**, 094905 (2009).\
U. Seifert, K. Berndl and R. Lipowsky, Phys. Rev. A **44**, 1182 (1991).\
J. G. Hu and Z. C. Ou-Yang, Phys. Rev. E **47**, 461 (1993).\
W. M. Zheng and J. X. Liu, Phys. Rev. E **48**, 2856 (1993).\
Z. C. Tu, Proceedings of the Fifth China-Italy Joint Conference on Computational and Applied Mathematics Mathematical Models in Life Science: Theory and Simulation, Roma, November 9-12, 2009, in press.\
T. J. Willmore, *Total Curvature in Riemannian Geometry* (John Wiley & Sons, New York, 1982).
---
abstract: |
The three different helicity states of W bosons produced in the reaction $\mathrm{e}^{+}\mathrm{e}^{-}~\rightarrow
\mathrm{W}^{+}\mathrm{W}^{-}~\rightarrow~\mathrm{\ell\nu{}q\bar{q}'}$ at LEP are studied using leptonic and hadronic W decays. Data at centre-of-mass energies $\sqrt{s}$ = $183-209$ GeV are used to measure the polarisation of W bosons, and its dependence on the W boson production angle. The fraction of longitudinally polarised W bosons is measured to be 0.218 $\pm$ 0.027 $\pm$ 0.016, where the first uncertainty is statistical and the second systematic, in agreement with the Standard Model expectation.
author:
- The L3 Collaboration
date: 'December 17, 2002'
title: Measurement of W Polarisation at LEP
---
Introduction {#introduction .unnumbered}
============
The existence of all three W boson helicity states, $+1$, $-1$ and $0$, is a consequence of the non-vanishing mass of the W boson, which, in the Standard Model [@standard_model], is generated by the Higgs mechanism of electroweak symmetry breaking. The measurement of the fractions of longitudinally and transversely polarised W bosons constitutes a test of the Standard Model predictions for the triple gauge boson couplings $\gamma$WW and ZWW.
To determine the W helicity fractions, events of the type $\ell\nu\text{q}\bar{\text{q}}'$ are used, with $\ell$ denoting either an electron or a muon. These events are essentially background free and allow a measurement, with good accuracy, of the W momentum vector, the W charge and the polar decay angles. The W helicity states are accessible in a model independent way through the shape of the distributions of the polar decay angle, [[$\theta^*_\ell$]{}]{}, between the charged lepton and the W direction in the W rest frame. Transversely polarised W bosons have angular distributions $(1 \mp \cos {{\ensuremath{\theta^*_\ell}}})^{2}$ for a W$^-$ with helicity $\pm1$, and $(1 \pm \cos {{\ensuremath{\theta^*_\ell}}})^{2}$ for a W$^+$ with helicity $\pm1$. For longitudinally polarised W bosons, a $\sin^{2} {{\ensuremath{\theta^*_\ell}}}$ dependence is expected. For simplicity, we refer in the following only to the fractions [[$f_{-}$]{}]{}, [[$f_{+}$]{}]{} and [[$f_{0}$]{}]{} of the helicity states $-1$, $+1$ and 0 of the W$^-$ boson, respectively. Assuming CP invariance these equal the fractions of the corresponding helicity states $+1$, $-1$ and 0 of the W$^+$ boson.
The differential distribution of leptonic decays at Born level is: $$\frac{1}{N}\frac{dN}{d\cos{{\ensuremath{\theta^*_\ell}}}{}} = {{\ensuremath{f_{-}}}}{} \frac{3}{8}~(1+\cos{{\ensuremath{\theta^*_\ell}}}{})^{2} +
{{\ensuremath{f_{+}}}}{} \frac{3}{8}~(1-\cos{{\ensuremath{\theta^*_\ell}}}{})^{2} + {{\ensuremath{f_{0}}}}{} \frac{3}{4} \sin^{2} {{\ensuremath{\theta^*_\ell}}}{}.$$
For hadronic W decays, the quark charge is difficult to reconstruct experimentally and only the absolute value of the cosine of the decay angle, [$|\cos{{\ensuremath{\theta^*_{\text{q}}}}}|$]{}, is used: $$\frac{1}{N}\frac{dN}{d{\ensuremath{|\cos{{\ensuremath{\theta^*_{\text{q}}}}}|}}} = {{\ensuremath{f_{\pm}}}}{} \frac{3}{4} (1+{\ensuremath{|\cos{{\ensuremath{\theta^*_{\text{q}}}}}|}}{}^{2})
+ {{\ensuremath{f_{0}}}}{} \frac{3}{2} (1-{\ensuremath{|\cos{{\ensuremath{\theta^*_{\text{q}}}}}|}}{}^2),$$ with [[$f_{\pm}$]{}]{}=[[$f_{+}$]{}]{}+[[$f_{-}$]{}]{}.
After correcting the data for selection efficiencies and background, the different fractions of W helicity states are obtained from a fit to these distributions. The fractions [[$f_{-}$]{}]{}, [[$f_{+}$]{}]{} and [[$f_{0}$]{}]{} are also determined as a function of the production angle [$\Theta_\Wm$]{} in the laboratory frame. The helicity composition of the W bosons depends strongly on the centre-of-mass energy, $\sqrt{s}$.
Data and Monte Carlo {#data-and-monte-carlo .unnumbered}
====================
The analysis presented in this Letter is based on the whole data set collected with the L3 detector [@l3_00] and supersedes our previous results [@wwlong] based on about one third of the data. An integrated luminosity of 684.8 pb$^{-1}$, collected at different centre-of-mass energies between 183 and 209 GeV, as shown in Table \[tab:table1\], is analysed.
The [[$\text{e}\nu\text{q}\bar{\text{q}}'$]{}]{}, [[$\mu\nu\text{q}\bar{\text{q}}'$]{}]{} Monte Carlo events are generated using KORALW [@KORALW1]. The Standard Model predictions for [[$f_{-}$]{}]{}, [[$f_{+}$]{}]{} and [[$f_{0}$]{}]{} are obtained from these samples by fitting the generated decay angular distributions for each value of $\sqrt{s}$. As an example, the expected fraction of longitudinally polarised W bosons changes from 0.271 at $\sqrt{s}$ = 183 GeV to 0.223 at $\sqrt{s}$ = 206 GeV. The luminosity averaged Standard Model expectations for [[$f_{-}$]{}]{}, [[$f_{+}$]{}]{} and [[$f_{0}$]{}]{} are 0.590, 0.169 and 0.241, respectively.
Background processes are generated using KORALW for W pair production decaying to other final states, and PYTHIA [@PYTHIA] and KK2F [@KK2F] for $\text{q}\bar{\text{q}}(\gamma)$. For studies of systematic effects, signal events are also generated using EEWW [@EEWW] and EXCALIBUR [@excalibur]. The L3 detector response is simulated with the GEANT [@geant] and GHEISHA [@gheisha] packages. Detector inefficiencies, as monitored during the data taking period, are included.
A large sample of signal events is generated using the EEWW Monte Carlo program. This program assigns, differently from KORALW, W helicities on an event-by-event basis but uses the zero-width approximation for the W boson and does not include higher order radiative corrections and interference terms. The helicity fractions obtained from a fit to the generated decay angle distributions agree with the input values. A comparison of the fractions obtained from EEWW and YFSWW [@YFSWW], which includes improved $O(\alpha)$ corrections, with those obtained from KORALW also shows good agreement. Therefore the Born level formulæ (1) and (2) are applicable after radiative corrections.
Selection of W$^{+}$W$^- \rightarrow$ e$\nu\text{q}\bar{\text{q}}'$ and $\mu\nu\text{q}\bar{\text{q}}'$ events {#selection-of-ww--rightarrow-e-events .unnumbered}
===============================================
Only events which contain exactly one electron or one muon candidate are accepted [@wwlong]. Electrons are identified as isolated energy depositions in the electromagnetic calorimeter with an electromagnetic shower shape. A match in azimuthal angle with a track reconstructed in the central tracking chamber is required. Muons are identified and measured as tracks reconstructed in the muon chambers which point back to the interaction vertex. All other energy depositions in the calorimeters are assumed to originate from the hadronically decaying W. The neutrino momentum vector is assumed to be the missing momentum vector of the event. The following additional criteria are applied:
- The reconstructed momentum must be greater than 20 GeV for electrons and 15 GeV for muons.
- The neutrino momentum must be greater than 10 GeV and its polar angle, $\theta_{\nu}$, has to satisfy $|\cos \theta_{\nu}| < 0.95$.
- The invariant mass of the lepton-neutrino system has to be greater than 60 GeV.
- The invariant mass of the hadronic system has to be between 50 and 110 GeV.
Figure \[fig:figure4\] shows some distributions of those variables for data and Monte Carlo.
The numbers of events selected by these criteria are listed in Table \[tab:table1\]. In total, 2010 events are selected with an efficiency of 65.7% and a purity of 96.3%. The contamination from [[$\tau\nu\text{q}\bar{\text{q}}'$]{}]{} and $\text{q}\bar{\text{q}}(\gamma)$ is 2.4% and 1.3%, respectively, independent of $\sqrt{s}$ and the W production angle.
Analysis of the W helicity states {#analysis-of-the-w-helicity-states .unnumbered}
=========================================
For the selected events, the rest frames of the W bosons are calculated from the lepton and neutrino momenta, and the decay angles [[$\theta^*_\ell$]{}]{} and [[$\theta^*_{\text{q}}$]{}]{} of the lepton and the quarks are determined. The angle [[$\theta^*_{\text{q}}$]{}]{} is approximated by the polar angle of the thrust axis with respect to the W direction in the rest frame of the hadronically decaying W.
The fractions of the W helicity states are obtained from the event distributions, $dN/d{\ensuremath{\cos{{\ensuremath{\theta^*_\ell}}}}}{}$ and $dN/d{\ensuremath{|\cos{{\ensuremath{\theta^*_{\text{q}}}}}|}}{}$. For each energy point, the background, as obtained from Monte Carlo simulations, is subtracted from the data, and the resulting distributions are corrected for selection efficiencies as obtained from large samples of KORALW Monte Carlo events. The corrected decay angle distributions at the different centre-of-mass energies are combined into single distributions for leptonic and hadronic decays, which are then fitted to the functions (1) and (2), respectively. A binned fit is performed on the normalised distributions, shown in Figure \[fig:figure1\], using [[$f_{-}$]{}]{} and [[$f_{0}$]{}]{} as the fit parameters. The fraction [[$f_{+}$]{}]{} is obtained by constraining the sum of all three parameters to unity.
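As an illustration of this fitting step (a schematic sketch in Python, not the fitting code used in the analysis; the binning, the toy pseudo-data and the minimiser below are assumptions made here), the leptonic distribution of equation (1) can be fitted as follows:

```python
# Schematic binned chi-square fit of the corrected, normalised cos(theta*_l)
# distribution to Eq. (1), with f- and f0 free and f+ = 1 - f- - f0.
import numpy as np
from scipy.optimize import minimize

edges = np.linspace(-1.0, 1.0, 11)                 # 10 bins, illustrative choice
centres = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

def born_shape(c, f_minus, f_zero):
    """Normalised Born-level distribution of Eq. (1)."""
    f_plus = 1.0 - f_minus - f_zero
    return (f_minus * 0.375 * (1.0 + c)**2
            + f_plus * 0.375 * (1.0 - c)**2
            + f_zero * 0.75 * (1.0 - c**2))

def chi2(pars, y, dy):
    f_minus, f_zero = pars
    expected = born_shape(centres, f_minus, f_zero) * width
    return np.sum(((y - expected) / dy) ** 2)

# y would be the background-subtracted, efficiency-corrected and normalised bin
# contents with uncertainties dy; toy values are generated here for illustration.
rng = np.random.default_rng(1)
y = rng.normal(born_shape(centres, 0.59, 0.24) * width, 0.02)
dy = np.full_like(y, 0.02)

fit = minimize(chi2, x0=(0.6, 0.2), args=(y, dy), method="Nelder-Mead")
f_minus, f_zero = fit.x
print(f"f- = {f_minus:.3f},  f0 = {f_zero:.3f},  f+ = {1.0 - f_minus - f_zero:.3f}")
```

The hadronic distribution of equation (2) can be treated in the same way, with bins in $|\cos\theta^*_{\text{q}}|$ and $f_{\pm}$ and $f_{0}$ as parameters.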
Detector resolution introduces migration effects that bias the fitted parameters. For example, purely longitudinally polarised leptonically decaying W bosons at $\sqrt{s}$ = 206 GeV would be measured to have a helicity composition: [[$f_{0}$]{}]{} = 0.945, [[$f_{-}$]{}]{} = 0.043 and [[$f_{+}$]{}]{} = 0.012. The magnitude of these effects depends on the helicity fractions and on $\sqrt{s}$. Corrections for this bias as a function of the helicity fractions are determined from EEWW Monte Carlo samples. If the ratio of two helicity fractions is constant the bias correction function of the third fraction is linear to a good approximation. For the correction of [[$f_{0}$]{}]{} in the hadronic W decay, the ratio [[$f_{-}$]{}]{}/[[$f_{+}$]{}]{} is taken from the measurement in the leptonic W decay, as only the sum of [[$f_{+}$]{}]{} and [[$f_{-}$]{}]{} is known from hadronic decays. Bias correction functions are determined for the analysis of the complete data sample, separately for the and events and in bins of the production angle.
Results {#results .unnumbered}
=======
The results of the fits to the decay angle distributions for leptonic and hadronic W decays are shown in Figure \[fig:figure1\]. The data are well described only if all three W helicity states are used. Fits omitting the helicity 0 state fail to describe the data. For leptonic W decays, the $\chi^2$ increases from 12.7 for eight degrees of freedom if all helicity states are included to 56.2 for nine degrees of freedom if only the helicities $+1$ and $-1$ are used in the fit. For hadronic W decays, the $\chi^2$ increases from 6.6 for four degrees of freedom if all helicity states are included to 59.1 for four degrees of freedom if only the helicities $\pm1$ are used.
The measured fractions of the W helicity states in data, at an average centre-of-mass energy $\sqrt{s}$ = 196.7 GeV, are presented together with the Standard Model expectation in Tables \[tab:table2a\], \[tab:table2b\] and \[tab:table3\]. The parameters [[$f_{-}$]{}]{} and [[$f_{0}$]{}]{} derived from the fit are about 90% anti-correlated. These results include a bias correction of 0.005 on [[$f_{0}$]{}]{} for leptonic decays and 0.044 for hadronic decays. The bias correction adds 0.003 to the statistical uncertainty on [[$f_{0}$]{}]{} for leptonic decays and 0.007 to the one for hadronic decays. The measured W helicity fractions agree with the Standard Model expectations for the leptonic and hadronic decays, as well as for the combined sample. Longitudinal W polarisation is observed with a significance of seven standard deviations, including systematic uncertainties.
A number of systematic uncertainties are considered. These include selection criteria, binning effects, bias corrections, the contamination due to non double resonant four fermion processes, background levels, and efficiencies. Selection cuts are varied over a range of one standard deviation of the corresponding reconstruction accuracy. Fits are repeated with one bin more or one bin less in the decay angle distributions. Uncertainties on the bias and efficiency corrections are determined with large Monte Carlo samples, the latter being negligible. The contamination due to non double resonant four fermion processes is studied by using the EXCALIBUR Monte Carlo. Background levels are varied according to Monte Carlo statistics for both the [[$\tau\nu\text{q}\bar{\text{q}}'$]{}]{} and $(\gamma)$ processes. The largest uncertainties arise from selection criteria and binning effects. As an example, Table \[tab:table5\] summarises those effects on [[$f_{0}$]{}]{}.
Within the Standard Model, CP symmetry is conserved in the reaction e$^{+}$e$^{-}\rightarrow$W$^{+}$W$^{-}$ and the helicity fractions $f_{+}$, $f_{-}$ and $f_{0}$ for the W$^{-}$ are expected to be identical to the fractions $f_{-}$, $f_{+}$ and $f_{0}$ for the W$^{+}$, respectively. CP invariance is tested by measuring the helicity fractions for W$^{-}$ and W$^{+}$ bosons separately. The charge of the W bosons is obtained from the charge of the lepton. We select 1020 $\ell^+\nu$ and 990 $\ell^-\bar{\nu}$ events. Results of separate fits for the helicity fractions are given in Tables \[tab:table2a\], \[tab:table2b\] and \[tab:table3\] for leptonic, hadronic and combined fits. Good agreement is found, consistent with CP invariance.
To test the variation of the helicity fractions with the production angle, $\Theta_{\mathrm{W}^{-}}$, the data are grouped in four bins of $\cos\Theta_{\mathrm{W}^{-}}$. The ranges have been chosen such that large and statistically significant variations of the different helicity fractions are expected. Figure \[fig:figure2\] shows the four decay angle distributions for the leptonic W decays. The corrected distributions are fitted for leptonic and hadronic W decays separately in each bin of $\cos\Theta_{\mathrm{W}^{-}}$. The fit results, combining leptonic and hadronic W decays, are shown in Table \[tab:table4\] and Figure \[fig:figure3\], together with the Standard Model expectations from the KORALW Monte Carlo. The results agree with the Standard Model expectation and demonstrate a strong variation of the W helicity fractions with the production angle.
In conclusion, all three helicity states of the W boson are required in order to describe the data. Their fractions and their variations as a function of $\cos\Theta_{\mathrm{W}^{-}}$ are in agreement with the Standard Model expectation. The fraction of longitudinally polarised W bosons at $\sqrt{s}$ = $183-209$ GeV is measured as 0.218 $\pm$ 0.027 $\pm$ 0.016. Separate analyses of the W$^{+}$ and W$^{-}$ events are consistent with CP conservation.
[99]{}
S.L. Glashow, Nucl. Phys. [**22**]{} (1961) 579;\
S. Weinberg, Phys. Rev. Lett. [**19**]{} (1967) 1264;\
A. Salam, “Elementary Particle Theory”, Ed. N. Svartholm, Almqvist and Wiksell, Stockholm (1968), 367
L3 Collab., B. Adeva et al., Nucl. Inst. Meth. [**A 289**]{} (1990) 35;\
L3 Collab., O. Adriani et al., Physics Reports [**236**]{} (1993) 1;\
I. C. Brock et al., Nucl. Instr. and Meth. [**A 381**]{} (1996) 236;\
M. Chemarin et al., Nucl. Inst. Meth. [**A 349**]{} (1994) 345;\
M. Acciarri et al., Nucl. Inst. Meth. [**A 351**]{} (1994) 300;\
A. Adam et al., Nucl. Inst. Meth. [**A 383**]{} (1996) 342;\
G. Basti et al., Nucl. Inst. Meth. [**A 374**]{} (1996) 293
L3 Collab., M. Acciarri et al., Phys. Lett. [**B 474**]{} (2000) 194
KORALW version 1.33 is used; M. Skrzypek et al., Comp. Phys. Comm. [**94**]{} (1996) 216; M. Skrzypek et al., Phys. Lett. [**B 372**]{} (1996) 289
PYTHIA version 5.722 is used; T. Sjöstrand, preprint CERN-TH/7112/93 (1993), revised 1995; T. Sjöstrand, Comp. Phys. Comm. [**82**]{} (1994) 74
KK2F version 4.12 is used; S. Jadach, B. F. L. Ward and Z. Was, Comp. Phys. Comm. [**130**]{} (2000) 260
EEWW version 1.1 is used; J. Fleischer et al., Comput. Phys. Commun. [**85**]{} (1995) 29
EXCALIBUR version 1.11 is used; F. A. Berends, R. Pittau and R. Kleiss, Comp. Phys. Comm. [**85**]{} (1995) 437
GEANT version 3.21 is used; R. Brun et al., preprint CERN DD/EE/84-1 (1984), revised 1987
H. Fesefeldt, RWTH Aachen report PITHA 85/02 (1985)
YFSWW3 version 1.14 is used: S. Jadach et al., (1996) 5434; Phys. Lett. [**B 417**]{} (1998) 326; (2000) 113010; (2002) 093010
[**The L3 Collaboration:**]{}
P.Achard O.Adriani M.Aguilar-Benitez J.Alcaraz G.AlemanniJ.AllabyA.Aloisio M.G.AlviggiH.Anderhub V.P.AndreevF.AnselmoA.Arefiev T.Azemoon T.Aziz P.BagnaiaA.Bajo G.BaksayL.BaksayS.V.Baldew S.Banerjee Sw.Banerjee A.Barczyk R.Barillère P.Bartalini M.BasileN.BatalovaR.BattistonA.Bay F.BecattiniU.BeckerF.BehnerL.Bellucci R.Berbeco J.Berdugo P.Berges B.BertucciB.L.BetevM.BiasiniM.BigliettiA.Biland J.J.Blaising S.C.Blyth G.J.Bobbink A.BöhmL.BoldizsarB.Borgia S.BottaiD.BourilkovM.BourquinS.BracciniJ.G.BransonF.Brochu J.D.BurgerW.J.BurgerX.D.Cai M.CapellG.Cara RomeoG.CarlinoA.Cartacci J.CasausF.CavallariN.Cavallo C.Cecchi M.CerradaM.ChamizoY.H.Chang M.ChemarinA.Chen G.Chen G.M.Chen H.F.Chen H.S.ChenG.Chiefari L.CifarelliF.CindoloI.ClareR.Clare G.Coignet N.Colino S.Costantini B.de la CruzS.Cucciarelli J.A.van Dalen R.de AsmundisP.Déglon J.DebreczeniA.Degré K.DehmeltK.Deiters D.della Volpe E.Delmeire P.Denes F.DeNotaristefaniA.De Salvo M.Diemoz M.Dierckxsens C.Dionisi M.DittmarA.DoriaM.T.DovaD.Duchesneau M.DudaB.EchenardA.ElineA.El HageH.El MamouniA.Engler F.J.Eppling P.Extermann M.A.FalaganS.FalcianoA.FavaraJ.Fay O.FedinM.FelciniT.Ferguson H.Fesefeldt E.FiandriniJ.H.Field F.FilthautP.H.FisherW.FisherI.FiskG.Forconi K.FreudenreichC.FurettaYu.GalaktionovS.N.Ganguli P.Garcia-AbiaM.GataullinS.GentileS.GiaguZ.F.GongG.Grenier O.Grimm M.W.Gruenewald M.Guida R.van GulikV.K.Gupta A.GurtuL.J.GutayD.HaasR.Sh.HakobyanD.HatzifotiadouT.HebbekerA.Hervé J.HirschfelderH.Hofer M.HohlmannG.Holzner S.R.HouY.Hu B.N.Jin L.W.JonesP.de JongI.Josa-Mutuberr[í]{}aD.KäferM.KaurM.N.Kienzle-FocacciJ.K.KimJ.KirkbyW.KittelA.Klimentov A.C.K[ö]{}nigM.KopalV.Koutsenko M.Kr[ä]{}ber R.W.KraemerA.Kr[ü]{}ger A.Kunin P.Ladron de GuevaraI.LaktinehG.LandiM.LebeauA.LebedevP.LebrunP.Lecomte P.Lecoq P.Le Coultre J.M.Le GoffR.Leiste M.LevtchenkoP.LevtchenkoC.Li S.Likhoded C.H.LinW.T.LinF.L.LindeL.ListaZ.A.LiuW.LohmannE.Longo Y.S.Lu C.Luci L.LuminariW.LustermannW.G.Ma L.MalgeriA.Malinin C.MañaD.MangeolJ.Mans J.P.Martin F.Marzano K.MazumdarR.R.McNeil S.MeleL.Merola M.Meschini W.J.MetzgerA.MihulH.MilcentG.Mirabelli J.MnichG.B.Mohanty G.S.MuanzaA.J.M.MuijsB.Musicar M.Musy S.NagyS.NataleM.NapolitanoF.Nessi-TedaldiH.Newman A.NisatiH.Nowak R.Ofierzynski G.OrgantiniC.PalomaresP.PaolucciR.Paramatti G.PassalevaS.Patricelli T.PaulM.PauluzziC.PausF.PaussM.PedaceS.PensottiD.Perret-Gallix B.PetersenD.Piccolo F.Pierella M.PioppiP.A.Piroué E.PistolesiV.Plyaskin M.Pohl V.PojidaevJ.PothierD.O.Prokofiev D.Prokofiev J.QuartieriG.Rahal-CallotM.A.Rahaman P.Raics N.RajaR.Ramelli P.G.RancoitaR.Ranieri A.Raspereza P.RazisD.Ren M.RescignoS.ReucroftS.RiemannK.RilesB.P.RoeL.Romero A.Rosca S.Rosier-LeesS.RothC.RosenbleckB.RouxJ.A.Rubio G.Ruggiero H.Rykaczewski A.SakharovS.Saremi S.SarkarJ.Salicio E.SanchezM.P.SandersC.Sch[ä]{}ferV.SchegelskyH.SchopperD.J.SchotanusC.SciaccaL.ServoliS.ShevchenkoN.ShivarovV.Shoutko E.Shumilov A.ShvorobD.SonC.SougaP.Spillantini M.SteuerD.P.Stickland B.StoyanovA.StraessnerK.SudhakarG.SultanovL.Z.SunS.SushkovH.Suter J.D.SwainZ.SzillasiX.W.TangP.TarjanL.TauscherL.TaylorB.Tellili D.Teyssier C.TimmermansSamuel C.C.Ting S.M.Ting S.C.Tonwar J.Tóth C.TullyK.L.TungJ.Ulbricht E.Valente R.T.Van de WalleR.VasquezV.VeszpremiG.VesztergombiI.Vetlitsky D.Vicinanza G.Viertel S.VillaM.Vivargent S.VlachosI.Vodopianov H.VogelH.Vogt I.Vorobiev A.A.Vorobyov M.WadhwaX.L.Wang Z.M.WangM.WeberP.WienemannH.WilkensS.Wynhoff L.Xia Z.Z.Xu J.Yamamoto B.Z.Yang C.G.Yang H.J.YangM.YangS.C.Yeh An.ZaliteYu.ZaliteZ.P.Zhang J.ZhaoG.Y.ZhuR.Y.ZhuH.L.ZhuangA.ZichichiB.Zimmermann 
M.Zöller.
III\. Physikalisches Institut, RWTH, D-52056 Aachen, Germany$^{\S}$
National Institute for High Energy Physics, NIKHEF, and University of Amsterdam, NL-1009 DB Amsterdam, The Netherlands
University of Michigan, Ann Arbor, MI 48109, USA
Laboratoire d’Annecy-le-Vieux de Physique des Particules, LAPP,IN2P3-CNRS, BP 110, F-74941 Annecy-le-Vieux CEDEX, France
Institute of Physics, University of Basel, CH-4056 Basel, Switzerland
Louisiana State University, Baton Rouge, LA 70803, USA
Institute of High Energy Physics, IHEP, 100039 Beijing, China$^{\triangle}$
University of Bologna and INFN-Sezione di Bologna, I-40126 Bologna, Italy
Tata Institute of Fundamental Research, Mumbai (Bombay) 400 005, India
Northeastern University, Boston, MA 02115, USA
Institute of Atomic Physics and University of Bucharest, R-76900 Bucharest, Romania
Central Research Institute for Physics of the Hungarian Academy of Sciences, H-1525 Budapest 114, Hungary$^{\ddag}$
Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Panjab University, Chandigarh 160 014, India.
KLTE-ATOMKI, H-4010 Debrecen, Hungary$^\P$
Department of Experimental Physics, University College Dublin, Belfield, Dublin 4, Ireland
INFN Sezione di Firenze and University of Florence, I-50125 Florence, Italy
European Laboratory for Particle Physics, CERN, CH-1211 Geneva 23, Switzerland
World Laboratory, FBLJA Project, CH-1211 Geneva 23, Switzerland
University of Geneva, CH-1211 Geneva 4, Switzerland
Chinese University of Science and Technology, USTC, Hefei, Anhui 230 029, China$^{\triangle}$
University of Lausanne, CH-1015 Lausanne, Switzerland
Institut de Physique Nucléaire de Lyon, IN2P3-CNRS,Université Claude Bernard, F-69622 Villeurbanne, France
Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, CIEMAT, E-28040 Madrid, Spain${\flat}$
Florida Institute of Technology, Melbourne, FL 32901, USA
INFN-Sezione di Milano, I-20133 Milan, Italy
Institute of Theoretical and Experimental Physics, ITEP, Moscow, Russia
INFN-Sezione di Napoli and University of Naples, I-80125 Naples, Italy
Department of Physics, University of Cyprus, Nicosia, Cyprus
University of Nijmegen and NIKHEF, NL-6525 ED Nijmegen, The Netherlands
California Institute of Technology, Pasadena, CA 91125, USA
INFN-Sezione di Perugia and Università Degli Studi di Perugia, I-06100 Perugia, Italy
Nuclear Physics Institute, St. Petersburg, Russia
Carnegie Mellon University, Pittsburgh, PA 15213, USA
INFN-Sezione di Napoli and University of Potenza, I-85100 Potenza, Italy
Princeton University, Princeton, NJ 08544, USA
University of Californa, Riverside, CA 92521, USA
INFN-Sezione di Roma and University of Rome, “La Sapienza", I-00185 Rome, Italy
University and INFN, Salerno, I-84100 Salerno, Italy
University of California, San Diego, CA 92093, USA
Bulgarian Academy of Sciences, Central Lab. of Mechatronics and Instrumentation, BU-1113 Sofia, Bulgaria
The Center for High Energy Physics, Kyungpook National University, 702-701 Taegu, Republic of Korea
Purdue University, West Lafayette, IN 47907, USA
Paul Scherrer Institut, PSI, CH-5232 Villigen, Switzerland
DESY, D-15738 Zeuthen, Germany
Eidgenössische Technische Hochschule, ETH Zürich, CH-8093 Zürich, Switzerland
University of Hamburg, D-22761 Hamburg, Germany
National Central University, Chung-Li, Taiwan, China
Department of Physics, National Tsing Hua University, Taiwan, China
Supported by the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie
Supported by the Hungarian OTKA fund under contract numbers T019181, F023259 and T037350.
Also supported by the Hungarian OTKA fund under contract number T026178.
Supported also by the Comisión Interministerial de Ciencia y Tecnología.
Also supported by CONICET and Universidad Nacional de La Plata, CC 67, 1900 La Plata, Argentina.
Supported by the National Natural Science Foundation of China.
$\langle\sqrt{s}\rangle$ \[GeV\]                                  182.7   188.6   191.6   195.5   199.5   201.8   205.9
-------------------------------------------------------------- ------- ------- ------- ------- ------- ------- -------
Integrated luminosity \[pb$^{-1}$\]                                55.5   176.8    29.8    84.1    83.3    37.2   218.1
Selected $\text{e}\nu\text{q}\bar{\text{q}}'$ events                 82     293      59     133     110      56     355
Selected $\mu\nu\text{q}\bar{\text{q}}'$ events                      67     255      43     110      99      59     289
Sample               $f_{-}$                         $f_{+}$                         $f_{0}$
-------------------- ------------------------------- ------------------------------- -------------------------------
$\ell^{-}\nu$ Data 0.559 $\pm$ 0.038 $\pm$ 0.016 0.201 $\pm$ 0.026 $\pm$ 0.015 0.240 $\pm$ 0.051 $\pm$ 0.017
$\ell^{+}\nu$ Data 0.625 $\pm$ 0.037 $\pm$ 0.016 0.179 $\pm$ 0.023 $\pm$ 0.015 0.196 $\pm$ 0.050 $\pm$ 0.017
$\ell^\pm\nu$ Data 0.589 $\pm$ 0.027 $\pm$ 0.016 0.189 $\pm$ 0.017 $\pm$ 0.015 0.221 $\pm$ 0.036 $\pm$ 0.017
Monte Carlo 0.592 $\pm$ 0.003 0.170 $\pm$ 0.002 0.238 $\pm$ 0.004
Sample         $f_{\pm}$                       $f_{0}$
-------------- ------------------------------- -------------------------------
hadrons Data 0.750 $\pm$ 0.056 $\pm$ 0.039 0.250 $\pm$ 0.056 $\pm$ 0.039
hadrons Data 0.833 $\pm$ 0.062 $\pm$ 0.039 0.167 $\pm$ 0.062 $\pm$ 0.039
hadrons Data 0.785 $\pm$ 0.042 $\pm$ 0.039 0.215 $\pm$ 0.042 $\pm$ 0.039
Monte Carlo 0.757 $\pm$ 0.004 0.243 $\pm$ 0.004
              $f_{-}$                         $f_{+}$                         $f_{0}$
------------- ------------------------------- ------------------------------- -------------------------------
Data 0.555 $\pm$ 0.037 $\pm$ 0.016 0.200 $\pm$ 0.026 $\pm$ 0.015 0.245 $\pm$ 0.038 $\pm$ 0.016
Data 0.634 $\pm$ 0.038 $\pm$ 0.016 0.181 $\pm$ 0.024 $\pm$ 0.015 0.185 $\pm$ 0.039 $\pm$ 0.016
Data 0.592 $\pm$ 0.027 $\pm$ 0.016 0.190 $\pm$ 0.017 $\pm$ 0.015 0.218 $\pm$ 0.027 $\pm$ 0.016
Monte Carlo 0.590 $\pm$ 0.003 0.169 $\pm$ 0.002 0.241 $\pm$ 0.003
                             W$\to\ell\nu$   W$\to$ hadrons
---------------------------- ------------ ----------
Selection 0.013 0.024
Binning effects 0.007 0.029
Bias correction 0.006 0.011
Four fermion contamination 0.005 0.001
Background corrections 0.004 0.001
Total 0.017 0.039
$\cos \Theta_{\mathrm{W}^{-}}$ Fraction Data Monte Carlo
------------------------------------ ----------------- ---------------------------------- -------------------
                                     $f_{-}$           0.173 $\pm$ 0.041 $\pm$ 0.033      0.156 $\pm$ 0.006
$[-1.0, -0.3]$                       $f_{+}$           0.418 $\pm$ 0.060 $\pm$ 0.043      0.431 $\pm$ 0.008
                                     $f_{0}$           0.409 $\pm$ 0.082 $\pm$ 0.051      0.413 $\pm$ 0.008
                                     $f_{-}$           0.509 $\pm$ 0.055 $\pm$ 0.029      0.446 $\pm$ 0.006
$[-0.3, \phantom{-}0.3]$             $f_{+}$           0.303 $\pm$ 0.040 $\pm$ 0.032      0.282 $\pm$ 0.005
                                     $f_{0}$           0.188 $\pm$ 0.060 $\pm$ 0.043      0.272 $\pm$ 0.006
                                     $f_{-}$           0.683 $\pm$ 0.042 $\pm$ 0.026      0.723 $\pm$ 0.004
$[\phantom{-}0.3, \phantom{-}0.9]$   $f_{+}$           0.135 $\pm$ 0.027 $\pm$ 0.030      0.119 $\pm$ 0.003
                                     $f_{0}$           0.182 $\pm$ 0.039 $\pm$ 0.027      0.158 $\pm$ 0.004
                                     $f_{-}$           0.708 $\pm$ 0.093 $\pm$ 0.056      0.647 $\pm$ 0.007
$[\phantom{-}0.9, \phantom{-}1.0]$   $f_{+}$           $-$0.010 $\pm$ 0.055 $\pm$ 0.028   0.029 $\pm$ 0.004
                                     $f_{0}$           0.302 $\pm$ 0.082 $\pm$ 0.059      0.324 $\pm$ 0.007
---
author:
- 'T. Beckert, W. J. Duschl'
date:
- 'Received …; accepted …'
- 'Received / Accepted'
title: 'Where have all the black holes gone?'
---
Evidence for Black Holes and Advection-Dominated Accretion
==========================================================
The existence of black holes (BH) in the centers of galaxies is now widely accepted and the best mass determinations are known for Sgr A$^*$ in the Galactic Center ($M_\mathrm{BH} = 2.6\,10^6
M_\odot$) from the stellar velocity dispersion, and for NGC4258 from Keplerian rotation of maser spots in an accretion disk. Besides these very low luminosity AGNs, the masses of BHs in quasars have been estimated from their continuum luminosity and the H$\beta$ line width (Laor [@L00]). It turns out that radio-loud quasars and radio galaxies, which show powerful jets, are found in large elliptical galaxies with the most massive BHs $\sim 10^9 M_\odot$.
Seyfert Galaxies, on the other hand, are spiral galaxies which host an AGN. Activity in the nucleus, which is powered by accretion into a BH, can be distinguished from starbursts in Seyferts using radio and X-ray observations. Radio cores and jets with brightness temperatures above $10^8$K have been detected in some Seyferts (Ulvestad et al. [@U99]; Mundell et al. [@M00]; Falcke et al. [@F00]). Their flux stability over several years excludes radio supernovae as the power supply. The X-ray emission shows rapid variability and in some cases a redshifted Fe K$\alpha$ line, which is an indicator of relativistic motion in the accretion disk around the BH. The masses of BHs in some Seyfert 1 galaxies have been measured by reverberation mapping of variable and correlated continuum and line emission (Peterson & Wandel [@PW00]). These measurements are in reasonable agreement with the $M_\mathrm{BH}$–$\sigma$ relation of enclosed mass ($\sim M_\mathrm{BH}$) versus velocity dispersion $\sigma$ in bulges of normal galaxies (Gebhardt et al. [@Geb00]). It is therefore reasonable to assume the existence of supermassive BHs in most elliptical galaxies and spirals with bulges.
While in the high luminosity objects both jet and accretion disk can be identified in the spectrum, the situation is different in less luminous AGNs like weak Seyfert Galaxies and LINERs. But even here small scale jets are commonly found (Falcke et al. [@F00]) and argue for the existence of BHs. For instance NGC4258 is an interesting transition object showing both an outer irradiated thin accretion disk and a small scale radio jet. A geometrically thin standard accretion disk close to the black hole cannot be identified, but the ionizing X-rays may be produced at the base of the jet, which can be identical with the proposed advection-dominated accretion flow (ADAF) within $100
R_\mathrm{S}$ (Gammie, Narayan & Blandford [@GNB99]; $R_\mathrm{S}$: Schwarzschild radius). At even lower luminosities the Galactic Center (Sgr A$^*$) with a BH mass of $2.6 \,10^6
M_\odot$ (Genzel et al. [@G97]) is the only visible AGN with a power output of $\approx 2\,10^{-10} L_\mathrm{Edd}$. A comparable object in any other galaxy (spiral or elliptical) would not be seen as an AGN. Assuming a spherical and adiabatic Bondi inflow, cooled only by Bremsstrahlung, gives a radiation efficiency $\eta
= 9\,10^{-3}(L/L_\mathrm{Edd})$ (Frank, King & Raine [@FKR92]). Bremsstrahlung will be emitted in X-rays and the [*Chandra*]{} detection (Baganoff et al. [@BBB01]) of $L_\mathrm{X} \sim 2\,10^{33}$ erg s$^{-1}$ is consistent with a mass accretion rate of $\dot{M} = 1.6\,10^{-6} M_\odot$ yr$^{-1}$ and an efficiency of $\eta = 2.2\,10^{-8}$. The sub-mm luminosity of Sgr A$^*$ is about 30 times larger than the X-ray flux and makes Sgr A$^*$ a unique object. We will discuss a specific ADAF model for Sgr A$^*$ in Sec. \[SGR\_Sec\]. The Bondi flow faces at least two problems: it does not allow for any possible angular momentum of the inflow and does not include magnetic fields, which lead to synchrotron emission at radio frequencies and synchrotron self-compton cooling. Both can be accounted for in ADAF models. They provide a reasonable explanation for the spectral energy distribution (SED) of Sgr A$^*$ with a mass accretion of $\approx 10^{-6} M_\odot$ yr$^{-1}$ and a radiative efficiency of $10^{-5}$. Beside the basically unresolved radio core of Sgr A$^*$, it is not possible to identify a jet in the Galactic Center.
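A quick arithmetic cross-check of the quoted numbers, assuming the efficiency is simply $\eta = L_\mathrm{X}/(\dot M c^2)$:

```python
# Cross-check of the quoted Sgr A* efficiency, assuming eta = L_X / (Mdot c^2).
M_SUN = 1.989e33               # g
YEAR = 3.156e7                 # s
C = 2.998e10                   # cm/s

L_X = 2e33                     # erg/s, Chandra X-ray luminosity quoted above
Mdot = 1.6e-6 * M_SUN / YEAR   # g/s, accretion rate quoted above

print(f"Mdot = {Mdot:.1e} g/s, eta = {L_X / (Mdot * C**2):.1e}")   # eta ~ 2e-8
```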
In this paper we will explore the hypothesis that most of the normal galaxies without substantial AGN activity contain supermassive black holes $M > 10^6 M_\odot$, some of which have been active during the quasar phase $ 0.3 < z < 5$ ($z$ being the cosmological redshift) and are quietly accreting in an ADAF mode today. Spectral properties of ADAFs with rather large mass accretion rates are explored in Sec. 2. The total luminosities and spectral energy distributions are of interest for weak AGNs (Ho [@H99]). We investigate the transition from standard thin disk accretion to ADAFs and vice versa as an upper limit in the mass accretion rate for ADAFs in Sec. \[limits\]. The combined consequences for accretion in normal galaxies are discussed in Sec. \[disc\_cons\].
Radiation characteristics of ADAFs
==================================
The calculation of emission spectra from ADAFs are based on self-similar solutions for density, inflow velocity, rotation, and ion temperature presented in Beckert ([@Beckert00]), which are a generalisation of the Narayan & Yi ([@NY94]) solution in the Newtonian limit. Gravitational redshift and the small volume of the general-relativistic region makes it reasonable to extend the solution down to the black hole horizon at $R_\mathrm{S}$. Aspects of the Kerr metrics are not considered. The electron temperature is calculated from thermal balance between Coulomb heating by ions, adiabatic compression, advective energy transport, and radiation cooling. The primary radiation mechanisms are Bremsstrahlung and synchrotron emission. Both are local in the sense, that they only depend on the local state of the flow. At large mass accretion rates multiple inverse Compton scattering (IC) of synchrotron photons becomes the dominant cooling mechanism. The photon flux to be scattered is produced at different radii in the flow and is highly anisotropic, which makes it a non-local process. Inverse Compton scattering and the resulting cooling rate modify the density and temperature of the ADAF and are treated iteratively. For calculating the IC radiation, we used a method proposed by Poutanen & Svensson ([@PS96]) for thermal electrons, which we have modified to allow for the quasi-spherical geometry of ADAFs. From the total cooling we take the global radiation efficiency $\varepsilon_\mathrm{Rad} = Q^-/Q^+$ to recalculate the self-similar ADAF structure and electron temperature. We then seek convergence of the assumed and posteriori calculated $\varepsilon_{\mathrm{Rad}}$ to get globally consistent ADAF models with correct radiation spectra. For the ADAF we assume energy equipartition between ions and magnetic field. This determines the synchrotron emission and provides additional pressure to support the flow and lower the adiabatic index below $5/3$ to make ADAFs possible at all. For the viscosity we use the standard $\alpha$-prescription with $\alpha = 0.1$ well above the critical value of $\alpha_{\mathrm{crit}} \approx 10^{-2}$ for the transition to convection-dominated accretion flows (CDAFs) described by Narayan, Igumenshchev & Abramowicz ([@NIA00]). No outflows, jets or wind infall as investigated in Beckert ([@Beckert00]) are assumed, even though a wind infall is suggested for the Galactic Center (Melia & Coker [@MC99]). The radial viscous break due to bulk and shear viscosity is included, but it does not dominate the dynamical state of the flow for $\alpha \le 0.1$.
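The iteration between the assumed and the a posteriori computed radiative efficiency can be summarised schematically as a damped fixed-point loop; the cooling function below is a placeholder, not the actual radiative transfer calculation.

```python
def total_cooling(eps_assumed):
    """Placeholder for the full calculation: rebuild the self-similar ADAF
    structure and electron temperature for an assumed global efficiency and
    return Q^-/Q^+ from synchrotron, bremsstrahlung and IC cooling.  Replaced
    here by an arbitrary smooth function purely for illustration."""
    return 0.05 + 0.3 * eps_assumed

def converge_efficiency(eps0=0.1, damping=0.5, tol=1e-6, max_iter=100):
    """Damped fixed-point iteration until the assumed and the a posteriori
    computed radiative efficiencies agree."""
    eps = eps0
    for _ in range(max_iter):
        eps_new = total_cooling(eps)
        if abs(eps_new - eps) < tol:
            return eps_new
        eps = (1.0 - damping) * eps + damping * eps_new
    raise RuntimeError("efficiency iteration did not converge")

print(converge_efficiency())
```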
From the described model we can construct spectral energy distributions (SED) for different mass accretion rates. The mass of the central BH has only a weak influence on the SED, which is not included in the scaling of $\dot{M}$ to the Eddington accretion rate, and is not considered here. We assume a mass of $2.6\,10^6 M_\odot$ for the SED in Fig.\[nulnu\], appropriate for the Galactic Center, with a corresponding Eddington limit of $\dot{M}_\mathrm{Edd} = 0.0577
M_\odot$ yr$^{-1}$ (in this paper we define the Eddington accretion rate with an efficiency $\eta =
L_\mathrm{Edd}/(\dot{M}_\mathrm{Edd} c^2) = 0.1$) and use the scale free accretion rate $\dot{m} = \dot{M}/\dot{M}_\mathrm{Edd}$ in the following discussion. The spectral luminosity in Fig. \[nulnu\] scales as the black hole mass, $\nu L_\nu \propto
M_\mathrm{BH}$. For comparison we show in Fig.\[fig1a\] the SED for an ADAF around a $10^9 M_\odot$ black hole with $\dot{m} = 3.6\,10^{-4}$. Fig.\[figeffic\] demonstrates that the radiation efficiency for this flow is independent of the black hole mass.
The SEDs for $\dot{m}$ between $1.7\,10^{-5}$ and $2.2\,10^{-3}$ are shown in Fig.\[nulnu\]. The presented model spectra are accurate above 30 GHz, and they show that the synchrotron emission rises in flux from $10^{33}$ to $10^{37}$ erg s$^{-1}$ and shifts in frequency from $2\,10^{11}$ Hz to $2\,10^{12}$ Hz at $\dot{m} = 3.5\,10^{-4}$ and back to smaller frequency for larger $\dot{m}$. Above $\dot{m} = 1.5\,10^{-4}$ the Thomson optical depth for synchrotron photons from central regions around $3
R_\mathrm{S}$ is significant, and Compton scattering broadens the synchrotron peak and make it less prominent, compared to the IC emission. The dominating peak in the IC part of the spectrum, which can be identified between the synchrotron and the Bremsstrahlung peaks at $10^{19 \ldots 20}$ Hz, is the second Compton peak of twice scattered synchrotron photons. The synchrotron seed photons are produced in a region closer to the BH than the Compton scattered radiation. So the seed photon flux for first Compton scattering is anisotropic, and most synchrotron photons are scattered back into a high density and high temperature region with the largest optical depth. The second Compton peak is therefore the dominant one, and the asymmetry between even and odd scattering order decreases thereafter, because the photon field to be scattered becomes more and more isotropic. The SED becomes flat or inverted due to multiple IC scattering above $10^{14}$ Hz for $\dot{m} \ge 3\,10^{-4}$. The Bremsstrahlung peak will only be recognized below $\dot{m} <
1.5\,10^{-4}$. The peak does not shift very much in frequency as the maximum of electron temperature only varies between $2.5\,10^9$K and $8\,10^9$K, where the highest $T_{\mathrm{e}}$ are achieved at $\dot{m} \approx 2\,10^{-4}$ very close to the horizon and the photons from that radius are significantly redshifted.
One major prediction of ADAF models is the different evolution of observable flux in different frequency bands. This is expressed in the scale-free spectral luminosity $l_\nu$. We scale the radiation flux $L_\nu$ to the total, frequency integrated luminosity of the specific model from Eq. (\[effic\]) and define $$\label{fluxL}
% L_\nu = 4 \pi d^2 S_\nu , \quad
l_\nu =
\frac{L_\nu}{\eta \dot{M}_\mathrm{Edd} c^2} \left[\frac{\dot{m}}
{10^{-3}}\right]^{-2.3} \left[\frac{ h\nu_\mathrm{max}}
{ 100\,\mathrm{keV}}\right] \quad .$$ We see in Fig.\[fig2\] that the importance of synchrotron emission at 86 GHz decreases with rising $\dot{m}$, the X-rays follow the total luminosity at first, and become more important when they are dominated by IC emission above $\dot{m} \ge 2\,10^{-4}$. The importance of IC emission is most prominently seen in K and V band. The flux is rapidly rising with $\dot{m}$ and saturates above $\dot{m} = 2\,10^{-4}$ when IC is dominating the cooling and therefore IC follows the total luminosity. The described spectra show only a weak dependence on the absolute mass of the BH between $10^4$ and $10^9 M_\odot$, for which we have tested the models. Larger BH masses imply smaller densities in the accretion flow at the same $\dot{m}$. Consequently the electron temperature also becomes smaller due to weaker Coulomb coupling. As the only significant consequence for the SED, the position of the synchrotron peak is anticorrelated with $M_\mathrm{BH}$, and it shifts to smaller frequencies for larger BH masses.
A Note on the Galactic Center Source Sgr A$^*$ {#SGR_Sec}
==============================================
The enigmatic radio source Sgr A$^*$ in the Galactic Center is coincident with the center of gravity of an enclosed mass of $2.6\,10^6 M_\odot$. It was considered to be one of the test cases for ADAF models (Narayan et al. [@N98]), but the SED of the source poses three problems for standard ADAFs. (1) The observed radio spectrum is much flatter than predicted, so that only the sub-mm bump (Falcke [@F99]) is nowadays attributed to the accretion flow. Most of the radio emission at cm-wavelength must then be produced by an outflow or jet (Falcke & Markoff [@FM00]). (2) The X-ray spectrum as derived from [*Chandra*]{} observations (Baganoff et al. [@BMM01]) has a different slope than expected from thermal bremsstrahlung coming from the ADAF. (3) The observed rapid variability in X-rays (Baganoff et al. [@BBB01]) restricts the size of the variable emitting region to less than $10 R_\mathrm{S}$. The spectrum at high X-ray fluxes is harder than at low flux levels. This can be explained by inverse Compton emission of relativistic electrons in a jet (Markoff et al. [@M01]) but even a jet has to be powered by an accretion process and bremsstrahlung emission of the accreting gas is unavoidable. In contrast to these recent scenarios, here we present an ADAF-wind infall model, where the gas in the accretion flow is heated by wind infall at all radii (Beckert, [@Beckert00]) with a steeper density profile $\Sigma \propto
r^{-1/2-\beta}$ than normal ADAFs. The synchrotron emission dominates the SED (Fig. \[SGRSED\]) due to a strongly magnetised ADAF $\beta_P = P_{\mathrm{Gas}}/P_{\mathrm{total}} < 0.5$.
The wind infall is assumed to be strong, $\beta = 0.24$, and to rotate with half the angular velocity, $\Omega /2$, of the ADAF. The flow is magnetically dominated with $\beta_P = 0.35$ and the radiative efficiency $\epsilon =
9\,10^{-5}$ is larger than in the other models presented in this paper due to the larger electron temperature and the stronger magnetic fields, which leads to increased synchrotron emission. Viscosity is described by an $\alpha$-parametrisation with $\alpha
= 0.08$, lower than for the other ADAF models in Sec. 2, but convection is still expected to be unimportant (Narayan, Igumenshchev & Abramowicz [@NIA00]). This model gives a good fit to the radio spectrum but the problem with the X-ray observation persists.
Limits on the mass accretion rate\[limits\]
===========================================
The luminosity of the synchrotron peak relative to Bremsstrahlung and IC emission depends only on $\dot{m}$, and it is unaffected by the absolute mass of the BH. The global radiative efficiency $\varepsilon_\mathrm{Rad}$, defined in the previous section, gives the cooling rate $Q^-$ with respect to the viscous heating rate. The heating rate itself depends on $\varepsilon_\mathrm{Rad}$ in the ADAF model, and it is therefore not useful for comparing different models. In describing the efficiency of an accretion flow, it is more reasonable to use $\varepsilon = Q^-/(\eta \dot{M} c^2)$. In the following we assume a standard efficiency of $\eta = 0.1$ . The quantity $\varepsilon$ gives the ratio between the actual luminosity of the accretion flow and the value expected for a thin and effectively radiating accretion disk at the same mass accretion rate. The radiative efficiency $\varepsilon$ is shown for our model calculations in Fig. \[figeffic\]. As described in the previous section, different emission mechanisms dominate at different accretion rates, and we do not expect $\varepsilon$ to follow a simple power law in $\dot{m}$. The fit $$\label{effic}
% \varepsilon_{\mathrm Rad} \approx \left(336.3\ \dot{m}\right)^{2.209}
\varepsilon = \left(340\ \dot{m}\right)^{2.2}$$ therefore does not represent a theoretically derived physical law, and has to be taken with caution when used outside the scope of the calculated models. Nonetheless the results allow us to conclude that no consistent ADAF models with mass accretion rates larger than $\dot{m}_\mathrm{crit} = 2.95\,10^{-3}$ are feasible. The reason is that IC emission increases rapidly with electron temperature and density. For increasing mass accretion rates both density and surface density increase linearly as long as the accretion velocity stays the same (this is true for ADAFs with $\varepsilon \le 0.3$, which are consequently advection-dominated). The maximum $T_\mathrm{e}$ in the flow depends only weakly on $\dot{m}$ for IC dominated cooling. The Thomson optical depth is $$\tau = \frac{3 \dot{m}}{\eta \alpha} \sqrt{\frac{R_\mathrm{S}}{2 r}}\quad ,$$ and the mean energy which a photon gains from one scattering is $2
\gamma^2 h \nu \approx 6.5 h\nu$. The seed photons have $\nu
\approx 10^{12}$Hz, and upscattering stops if $(2\gamma^2)^n =
\gamma m_{\mathrm{e}}c^2/(h\nu)$, which results in photons gaining energy in up to $n \approx 10$ consecutive scatterings. Starting from a synchrotron luminosity $L_{\mathrm{s}}$ the IC luminosity will be $ L_{\mathrm{IC}} \approx (2 \gamma^2 \tau)^n
L_\mathrm{s}$, if $2 \gamma^2 \tau > 1$, and the spectrum will be flat for $2 \gamma^2 \tau = 1$. In the case of $\eta = \alpha =
0.1$ and $r \sim 3 R_\mathrm{S}$ this criterion is fulfilled, if $\dot{m} = 1.3\,10^{-3}$. This is a reasonable order of magnitude estimate, when compared to the model spectra described above. With a constant synchrotron luminosity of $5\,10^{-7} L_\mathrm{Edd}$, we find in the IC dominated regime $$L_{\mathrm{IC}} = 5\,10^{-7}\,(8\frac{\dot{m}}{\eta \alpha})^n\,
L_\mathrm{Edd}\quad .$$ An upper limit for the mass accretion rate is $$\dot{m}_\mathrm{c} = 0.53\,\eta\,\alpha ,$$ which is in rough agreement with the upper limit found by fitting the total luminosity in Eq. (\[effic\]). It must be noted that this limit $\dot{m}_\mathrm{c}$ has a different $\alpha$-dependence than other limits $\dot{m}_\mathrm{crit} = \xi \alpha^2$ (Narayan [@N96]); Esin, McClintock, Narayan [@E97]) which give a wide range for $\xi$ between 0.1 and 1.3 depending on details of the models.
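These order-of-magnitude estimates are easy to reproduce numerically; the snippet below only re-evaluates the formulas quoted above for $\eta = \alpha = 0.1$.

```python
import numpy as np

eta, alpha = 0.1, 0.1
two_gamma2 = 6.5        # mean energy gain factor 2*gamma^2 per scattering
n = 10                  # number of consecutive scatterings, as estimated above
L_syn = 5e-7            # synchrotron luminosity in units of L_Edd

# Thomson optical depth at r ~ 3 R_S: tau = 3 mdot/(eta alpha) * sqrt(R_S/(2r))
tau_coeff = 3.0 * np.sqrt(1.0 / 6.0) / (eta * alpha)   # tau = tau_coeff * mdot

# (i) flat spectrum when 2 gamma^2 tau = 1
print("mdot (flat spectrum) ~", 1.0 / (two_gamma2 * tau_coeff))      # ~ 1.3e-3

# (ii) L_IC = L_syn (2 gamma^2 tau)^n reaches L_Edd
amp = (1.0 / L_syn) ** (1.0 / n)            # required value of 2 gamma^2 tau
mdot_c = amp / (two_gamma2 * tau_coeff)
print("mdot_c ~", mdot_c, "(compare 0.53 eta alpha =", 0.53 * eta * alpha, ")")
```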
Another local criterion for the existence of ADAFs is given by the imbalance of Bremsstrahlung cooling and viscous heating. At large radii the electron cooling rate, which is coupled to the ion heating rate by the radiation efficiency, decreases faster than Coulomb coupling between thermal ions and electrons. The electron temperature is therefore close to the ion temperature at these radii. The ions are close to the virial temperature as long as the radiative efficiency is significantly smaller than 1, which is the case for all calculated models here. In the region where $T_\mathrm{e}$ and $T_\mathrm{ion}$ are equal, the Bremsstrahlung cooling, which dominates at large radii, decreases as $r^{-5/2}$, while the viscous heating falls off as $r^{-3}$. At a critical outer radius, the radiative efficiency is 1 and no ADAF is possible at larger radii. This outer radius is found to be $$\label{BremsR}
R_\mathrm{out} = 5\,10^2 \alpha^4\,\dot{m}^{-1}\,R_\mathrm{S} \quad ,$$ which is valid for $\alpha \le 0.1$ and means that for $\dot{m} >
0.13$ Bremsstrahlung from an ADAF is more efficient than the viscous heating outside the marginally stable orbit at $R =
3R_\mathrm{S}$. From studies of the Galactic Center we know that no standard thin accretion disk exists within $10^5
R_\mathrm{S}$ and the mass accretion rate from spectral fitting of an ADAF model gives $\dot{m} \approx 1.6\,10^{-5}$. From Eq. (\[BremsR\]) this results in a lower limit estimate of $\alpha >
0.01$.
Discussion – Consequences for AGN Evolution\[disc\_cons\]
=========================================================
We find that the radiation efficiency depends strongly on the mass accretion rate ($\propto {\dot m}^{2.3}$) below a certain critical value $\dot{m}_\mathrm{crit}$ of the mass accretion rate (in units of its Eddington value). Above $\dot{m}_\mathrm{crit}$, $\varepsilon = 1$. In other words, below the critical mass accretion rate, the efficiency decreases so rapidly that one can discern two regimes: $$\label{eq:approxeps}
\varepsilon = \left\{ \begin{array}{ll} 1 & \mathrm{for\ }\dot m >
{\dot m}_\mathrm{crit}\\ 0 & \mathrm{for\ }\dot m < {\dot
m}_\mathrm{crit}\\ \end{array} \right.,$$ with only a small transition zone in $\dot{m}$ below ${\dot m}_\mathrm{crit}$ where $\varepsilon$ differs significantly from 0. For the purpose of our present discussion, however, the approximation of Eq.(\[eq:approxeps\]) suffices.
Furthermore, our numerical models show that the relation outlined by Eq. (\[eq:approxeps\]) holds for the entire range of $M_\mathrm{BH}$ investigated. In the following we will use the value ${\dot m}_\mathrm{crit}
= 0.003$ as derived by interpolating our numerical models (Eq. \[effic\]).
The onset of nuclear activity
-----------------------------
The existence of ${\dot m}_\mathrm{crit}$ with the properties discussed above translates into a fairly sharp transition between an active and an inactive state of a galaxy. As soon as $\dot m$ falls below ${\dot m}_\mathrm{crit}$ the radiation efficiency of the accretion decreases dramatically. In other words, already a relatively small change in the mass flow rate around ${\dot
m}_\mathrm{crit}$ suffices to “switch off" an AGN, and vice versa. The difference between a [*normal*]{} and an [*active*]{} galaxy is then not due to a difference in $\dot m$ which is as large as the difference in luminosities between the two classes. A much more important reason is the steep decline in the radiation efficiency for the accretion rates below which the disk turns advection-dominated.
Our numerical models predict ${\dot m}_\mathrm{crit} \sim 0.003$. This is in good agreement with observations (e.g., Peterson & Wandel [@PW00], who find AGN only in the luminosity range between $\sim 10^{-3}$ and 1 of the Eddington luminosity, or—equivalently—the Eddington mass accretion rate). In our interpretation the lack of galactic nuclei below $10^{-3}\,L_\mathrm{Edd}$ is not due to a lack of galaxies with mass accretion rates below $10^{-3}\,{\dot M}_\mathrm{Edd}$ but rather due to the steep decline of radiation efficiency below this critical value.
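The 'switch' behaviour can be illustrated with the fitted efficiency of Eq. (\[effic\]), capped at unity; the numbers below are only meant to show how steeply the luminosity responds to the accretion rate near ${\dot m}_\mathrm{crit}$.

```python
import numpy as np

def luminosity(mdot):
    """L/L_Edd using the fitted ADAF efficiency eps = (340 mdot)^2.2, capped
    at 1 above the critical accretion rate; L = eps * mdot * L_Edd since
    L_Edd = eta Mdot_Edd c^2 with the definition used in the text."""
    eps = np.minimum((340.0 * mdot) ** 2.2, 1.0)
    return eps * mdot

for mdot in [1e-4, 3e-4, 1e-3, 3e-3, 1e-2]:
    print(f"mdot = {mdot:.0e}  ->  L/L_Edd = {luminosity(mdot):.2e}")
# Below mdot_crit the luminosity scales roughly as mdot^3.2, so a factor ~3
# drop in mdot dims the nucleus by a factor ~30: the 'switch' discussed above.
```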
The general properties of the evolution of AGN and of their black holes
-----------------------------------------------------------------------
For a supermassive black hole of mass $M_\mathrm{BH}$, one can give an average accretion rate $\overline{\dot M}$ over its age $\tau_\mathrm{BH}$ of $$\overline{\dot M} \le \frac{M_\mathrm{BH}}{\tau_\mathrm{BH}}$$ We can only give an upper limit of $\overline{\dot M}$, because there is the possibility of a non-negligible seed mass of the black hole which is not due to this type of accretion processes.
If, in addition, we assume that the age of the BH is not very much shorter than the age of its host galaxy and thus the Hubble time (at the location of the black hole), $\tau_\mathrm{H}$, we can write $$\overline{\dot M} \le \frac{M_\mathrm{BH}}{\tau_\mathrm{H}}$$ In the following, we express the time in units of $10^{10}\,$yr, and the mass flow rates in units of the Eddington rate. We introduce the abbreviations ${\dot m} = \dot M / {\dot M}_\mathrm{Edd}$, and $\tau_{10} = \tau / 10^{10}\,$yrs, and get an average mass accretion rate of $$\overline{\dot m} \le 0.04 \eta \frac{1}{\tau_{10}}.$$ In terms of the Eddington accretion rate, the average accretion rate $\overline{\dot m}$ is independent of the accreting black hole’s mass. For a constant accretion efficiency $\eta$, the average rate $\overline{\dot m} \propto \tau^{-1}_\mathrm{BH}$ is a function of the age of the black hole, or—for that matter—the Universe. This leads to two interesting consequences:
- In terms of the Eddington rate, the accretion rate declines as the black hole, the galaxy, and the Universe as a whole evolve;
- The present-day average accretion rate is around $10^{-3}$ of its Eddington value.
The present-day average accretion rate, $\overline{\dot m}_{(0)}$ is fairly close to the critical accretion rate ${\dot
m}_\mathrm{crit} \sim 0.003$ given by $\varepsilon = 1$ (Eq.\[effic\]). For $\dot m > {\dot m}_\mathrm{crit}$ the radiation efficiency is unity, while for smaller $\dot m$ it decreases sharply $\propto \dot m^{2.3}$. This means that relatively small changes in the momentary accretion rate are capable of transferring a galaxy from an inactive to an active state and vice versa. In the course of the further evolution of the Universe, it will become harder and harder for galaxies to turn active as the average accretion rate (in Eddington units) becomes smaller and smaller. This is compounded with and strengthened by a diminishing supply of material available for accretion, i.e., by an additional decrease of the absolute value of the average accretion rate. Extrapolating back to earlier cosmological epochs, we find (because of smaller $\tau_\mathrm{BH}$ and higher $\overline{\dot m}$) that the likelihood for a galaxy to be active was higher on two grounds: [*(1)*]{} The larger $\overline{\dot m}$, the smaller the fluctuation that suffices to turn a galaxy active. [*(2)*]{} The further we go back in the evolution of the Universe, the larger was the supply of gas available for accretion.
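The coefficient in the estimate for $\overline{\dot m}$ can be checked with a few lines of arithmetic (standard values for the solar mass and the Eddington luminosity are assumed):

```python
# Check of <mdot> ~ 0.04 eta / tau_10 (in Eddington units), using standard
# values for the solar mass and the Eddington luminosity.
M_SUN = 1.989e33                 # g
C = 2.998e10                     # cm/s
YEAR = 3.156e7                   # s
L_EDD_PER_MSUN = 1.26e38         # erg/s per solar mass

eta = 0.1
tau_H = 1e10 * YEAR              # tau_10 = 1

M_bh = 1e8 * M_SUN               # the ratio below is independent of M_bh
mdot_avg = (M_bh / tau_H) / (L_EDD_PER_MSUN * 1e8 / (eta * C**2))
print(mdot_avg)                  # ~ 4e-3, close to 0.04 * eta for tau_10 = 1
```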
Constraints on the AGN duty cycle
---------------------------------
As discussed above, the mass $M_\mathrm{BH}$ and age $\tau_\mathrm{BH}$ of a black hole define an upper limit for its average mass accretion rate. During phases of activity, the mass flow rate must be larger than $\overline{\dot m}$. Let us—for the purpose of a crude estimate—assume that we have only two states, namely the [*AGN*]{} phase, characterized by a mass flow rate ${\dot m}_\mathrm{AGN} > {\dot m}_\mathrm{crit}$ lasting for a period of time of $\theta \tau_\mathrm{H}$, and a [*normal galaxy*]{} phase for which the mass flow rate ${\dot
m}_\mathrm{normal} < {\dot m}_\mathrm{crit}$ is correspondingly smaller so as to maintain the average value $\overline{\dot m}$. Let us, moreover, assume[^1] that $(1-\theta) {\dot m}_\mathrm{normal} \ll \theta {\dot
m}_\mathrm{AGN}$, then we get for the duty cycle $$\theta = \frac{\overline{\dot m}-{\dot m}_\mathrm{normal}}
{{\dot m}_\mathrm{AGN}-{\dot m}_\mathrm{normal}} \geq
\frac{\overline{\dot m}}{2{\dot m}_\mathrm{AGN}}.$$
${\dot m}_\mathrm{AGN}\ge {\dot m}_\mathrm{crit}$ needs to be fulfilled for an activity phase to occur at all. The stronger the activity of a galaxy, the shorter it can be maintained. For instance, for an AGN operating at its Eddington limit, i.e., at ${\dot m}_\mathrm{AGN} = 1$, this means that it cannot stay at this level of activity—integrated over all individual phases of activity—for longer than a fraction $\theta$ of its entire evolution, i.e., some $10^7\,$yr in the present-day Universe. A super-Eddington activity level can be maintained only for an even shorter period of time.
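For an Eddington-limited AGN this duty-cycle bound evaluates as follows (with $\tau_{10}=1$ and $\eta=0.1$):

```python
# Duty-cycle bound theta >= <mdot> / (2 mdot_AGN) for an Eddington-limited AGN.
eta = 0.1
tau_10 = 1.0
mdot_avg = 0.04 * eta / tau_10   # average accretion rate in Eddington units
mdot_agn = 1.0                   # activity at the Eddington rate

theta = mdot_avg / (2.0 * mdot_agn)
print(theta, theta * 1e10, "yr")  # ~ 2e-3 of the lifetime, i.e. some 10^7 yr
```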
Derivation of a more detailed luminosity evolution of an AGN sample requires, however, a treatment more detailed than the above order-of-magnitude estimates. In particular it has to be investigated whether real-world galaxies can maintain a sufficiently high supply of matter for the accretion process over a long enough period of time. This then involves, for instance, questions about the accretion time scales and the long-term development of mass reservoirs. This topic, however, is beyond the scope of the present paper and will be addressed separately (Duschl & Strittmatter, in prep.).
Conclusions
===========
We have shown that advection-dominated accretion flows into black holes display changing spectral energy distributions for different mass accretion rates. At very low mass accretion rates below $\dot{M} = 5\,10^{-5} \dot{M}_\mathrm{Edd}$ synchrotron emission and Bremsstrahlung dominate the SED. Above $\dot{M} = 2\,10^{-4}
\dot{M}_\mathrm{Edd}$ the inverse Compton radiation from synchrotron seed photons produces flat to inverted SEDs from the radio to X-ray bands. Multiple inverse Compton scattering is the most relevant cooling process for mildly relativistic electrons in two-temperature ADAFs, whenever the actual radiation efficiency in eq. (\[effic\]) is larger than 0.07 or the total luminosity larger than $4\,10^{-7} L_\mathrm{Edd}$. The rapidly increasing cooling efficiency of the inverse Compton process sets an upper limit for the mass accretion rate of ${\dot m}_\mathrm{crit} =
3\,10^{-3}$ for ADAFs. At larger mass accretion rates radiative cooling balances the local viscous heating and advective energy transport is unimportant. The resulting flows are less hot, disk-like, and therefore much denser than ADAFs. Below the critical accretion rate, the radiation efficiency declines rapidly $\varepsilon \propto \dot{m}^{2.3}$ and the total luminosity is $$L_\mathrm{ADAF} = 3.45\,10^{-3}\,(\dot{m}/0.003)^{3.3}\,L_\mathrm{Edd}
\quad .$$
In addition to the changing spectral behavior, the combination of a transition value between ADAFs and normal accretion disks (${\dot m}_\mathrm{crit} \sim 3\,10^{-3})$, and the very steep decline of the ADAF radiation efficiency below this transition value towards smaller accretion rates ($\varepsilon \propto {\dot
m}^{2.3}$) leads to an apparent dichotomy in what one observes: Above ${\dot m}_\mathrm{crit}$, one will recognize the sources as AGN; while below ${\dot m}_\mathrm{crit}$, practically no nuclear activity is observable. In other words, accretion into a super-massive black hole should be prominently visible only at mass flow rates above ${\dot m}_\mathrm{crit}$, which is in good agreement with observed AGN distributions. The major difference between AGN and normal galaxies is not so much a difference in mass flow rate—at least not by as much as the difference in luminosities may make us think—but rather one in the radiative efficiency.
At the same time, this means that—at accretion rates around ${\dot m}_\mathrm{crit}$—small changes in the mass flow rate are sufficient to cause a strong difference in radiation efficiency and thus nuclear luminosity. In other words, the crossing of ${\dot
m}_\mathrm{crit}$ acts almost like a [*switch*]{} which turns AGNs on and off.
Finally, the combination of the black holes’ masses and the mass accretion rate allowed us to put constraints on the duty cycle of AGN. It turned out that the most active AGN can maintain this level of activity only for rather short time scales (of the order of some $10^7$years).
We wish to thank the referee, Dr. Suzy Collin, for her very helpful report on this paper. This work was in part supported by the [*Deutsche Forschungsgemeinschaft, DFG,*]{} through grant SFB439/C2.
Beckert T. 2000, ApJ 539, 223
Baganoff F.K., Bautz M.W., Brandt W.N., et al., 2001a, Nature 413, 45
Baganoff F.K., Maeda Y., Morris M., et al., 2001b, ApJ in press ([*astro-ph/0102151*]{})
Esin A.A., McClintock J.E., Narayan R. 1997, ApJ 489, 865
Falcke H., 1999, in [The Central Parsecs of the Galaxy]{} ASP Conference Series, Vol. 186, p.148
Falcke H., Markoff S., 2000, A&A 362, 113
Falcke H., Nagar N.M., Wilson A.S., Ulvestad J.S., 2000, ApJ 542, 197
Frank J., King A.R., Raine D.J., 1992, [*Accretion Power in Astrophysics 2nd ed.*]{}, Cambridge University Press, Cambridge, UK
Gammie C.F., Narayan R., Blandford R., 1999, ApJ 516, 177
Gebhardt K. et al., 2000, ApJ 539, L13
Genzel R., Eckart A., Ott T., Eisenhauer F., 1997, MNRAS 291, 219
Ho L.C., 1999, ApJ 516, 672
Laor A., 2000, ApJ 543, L111
Markoff S., Falcke H., Yuan F., Biermann P.L., 2001, A&A, in press
Melia F., Coker R., 1999, ApJ 511, 750
Mundell C.G., Wilson A.S., Ulvestad J.S., Roy A.L., 2000, ApJ 529, 816
Narayan R., Yi I., 1994, ApJ 428, L13
Narayan R., 1996, ApJ 462, 136
Narayan R., Mahadevan R., Grindlay J.E., Popham R.G., Gammie C., 1998, ApJ 492, 554
Narayan R., Igumenshchev I.V., Abramowicz M.A., 2000, ApJ 539, 798
Peterson B.M., Wandel A., 2000, ApJ 540, L13
Poutanen J., Svensson R., 1996, ApJ 470, 249
Ulvestad J.S., Wrobel J.M., Roy A.L., Wilson A.S., Falcke H., Krichbaum T.P., 1999, ApJ 517, L81
[^1]: This assumption is not necessary for activity to set in, as we have seen in the previous Sect. For the following, however, it is a handy assumption which does not influence the results of our order-of-magnitude estimates.
---
author:
- |
Jaume Gomis[^1] and Christian Römelsberger[^2]\
*Perimeter Institute\
*31 Caroline St. N.\
*Waterloo, ON N2L 2Y5 , Canada\
***
bibliography:
- 'wall.bib'
date: April 2006
title: |
**Bubbling Defect CFT’s\
**
---
Introduction
============
Local gauge invariant operators are labeled by a point in spacetime. Such operators can be constructed by combining in a gauge invariant way the fields appearing in the action. This is, however, not the only way to define local operators. Operators which cannot be written in a local way in terms of the fields appearing in the action are quite ubiquitous in quantum field theory. Examples of this class of operators include twist operators in conformal field theory and “soliton" operators in gauge theories (see e.g. [@Kapustin:2005py] for a recent discussion). The insertion of an operator in the path integral has the effect of introducing at the location of the operator a very specific singularity for the fields that appear in the action.
There is no essential limitation in quantum field theory restricting the class of admissible singularities of the fundamental fields to be point-like; in principle they can be defined on any defect in spacetime. In four dimensional field theories one may consider line, surface and domain wall operators on top of the more familiar local operators labeled by a point in spacetime. In favorable circumstances – e.g. Wilson loops – these operators can be written down using the fields appearing in the action while in others – e.g. ’t Hooft loops – the operators are defined by the singularity they produce for the fundamental fields in the action at the location of the defect.
A convenient way to construct a defect operator is to introduce additional degrees of freedom localized on the defect. The extra degrees of freedom encode the type of singularity produced by the defect operator. The study of defect operators can then by mapped to the problem of studying the defect field theory describing the coupling of a four dimensional field theory to the degrees of freedom living on the defect. Demanding invariance of the defect operator under some symmetry constraints the geometry of the defect as well as the allowed degrees of freedom that can be added to the defect. In particular, demanding invariance under the conformal group on the defect leads to defect conformal field theories [@Cardy:1984bb; @McAvity:1995zd].
The AdS/CFT conjecture [@Maldacena:1997re; @Gubser:1998bc; @Witten:1998qj] requires that all gauge invariant operators in ${\cal N}=4$ SYM have a realization in the bulk description. This program has been successfully carried out for the half-BPS local operators in ${\cal N}=4$ SYM [@Witten:1998qj; @Aharony:1999ti], where the operators can be identified with D-branes in the bulk [@Corley:2001zk; @Berenstein:2004kk]. Recently, the dictionary has been enlarged [@Gomis:2006sb] (see also [@Drukker:2005kx; @Yamaguchi:2006tq]) to include all the half-BPS Wilson loop operators in ${\cal N}=4$ SYM, which have also been identified with D-branes in the bulk. The half-BPS Wilson loop operators are constructed [@Gomis:2006sb] by integrating out in the defect conformal field theory the localized degrees of freedom living on the loop that are introduced by the bulk D-branes.
In this paper we study half-BPS domain wall operators in ${\cal N}=4$ SYM and their corresponding bulk description. Domain wall operators can be identified with a defect conformal field theory describing the coupling of ${\cal N}=4$ SYM to additional degrees of freedom localized in an $\BR^{1,2}\subset \BR^{1,3}$ defect. Supersymmetry requires that the degrees of freedom living on the defect fill three dimensional hypermultiplets. These new localized hypermultiplet degrees of freedom arise from the presence of branes (D5 and NS5-branes) in $AdS_5\times S^5$ that end on a common $\BR^{1,2}$ defect at the boundary. For each half-BPS configuration of five-branes in $AdS_5\times S^5$, we may associate a defect conformal field theory or equivalently a half-BPS domain wall operator.
The defect conformal field theory can be derived by studying the low energy effective field theory[^3] on $N$ D3-branes in the presence of D5 and NS5-branes intersecting the D3-branes along an $\BR^{1,2}\subset \BR^{1,3}$ defect. The data which determines the defect conformal field theory under study is the number of D3-branes which end on each of the D5-branes and NS5-branes. In the decoupling/near horizon limit the five-branes span a family of $AdS_4\times\tilde{S}^2$ and $AdS_4\times S^2$ geometries in $AdS_5\times S^5$ respectively, of the type found in [@Karch:2000ct]. Each such array of five-branes in the bulk corresponds to a half-BPS domain wall operator.
We study the backreaction on the $AdS_5\times S^5$ background due to the configuration of five-branes dual to a specific domain wall operator. We show that the solution of the supergravity BPS equations is determined by specifying boundary conditions on a two dimensional surface in the ten dimensional geometry. These boundary conditions encode the location where either an $S^2$ or an ${\tilde S }^2$ shrinks to zero size in a smooth manner. This result generalizes the work of LLM [@Lin:2004nb] – which applies to the half-BPS local operators – to the geometries dual to half-BPS domain wall[^4] operators.
The $AdS_5\times S^5$ vacuum solution corresponds to an infinite strip, where at the bottom of the strip ${\tilde S }^2$ shrinks to zero size while at the top of the strip $S^2$ shrinks to zero size. In the probe approximation, a D5-brane is located on the top boundary of the strip while an NS5-brane is located at the bottom. The precise location of each brane is determined by the amount of D3-brane charge carried by the five-branes.
In the backreacted geometry we expect these five-branes to be replaced by bubbles of flux [@Gopakumar:1998vy][@Klebanov:2000hb][@Maldacena:2000yy][@Chamseddine:1997nm][@Chamseddine:1997mc], i.e. by smooth non-contractible three-cycles supporting the appropriate amount of three-form flux. There are two ways for this to happen. In one the geometry stays finite and there appears a change in the coloring of the boundary. For a D5-brane this corresponds to having a finite size $S^2$ and $\tilde S^2$ shrinking in a finite segment of the upper boundary. The second possibility is an infinite throat developing on the upper boundary with the $S^2$ shrunk on both boundaries of the throat. In both cases there appears a smooth three-sphere which can support the three-form flux.
It would be very interesting to get a better understanding of the behavior of the solutions near the defects, how the boundary conditions work and how the fluxes, brane charges and changes in the rank of the gauge group are related to the boundary conditions as we understand them.
The plan of the rest of the paper is as follows. In section \[secdomainwalls\] we study the brane configuration whose low energy effective field theory yields the defect conformal field theories we are interested in. In section \[secprobe\] we consider the bulk description of the half-BPS domain wall operators in the probe approximation. In section \[secsugra\] we derive the BPS equations and some important normalization conditions for spinor bilinears by realizing the (super) symmetry algebra in type IIB supergravity. In section \[secwallgeometries\] we first discuss the $AdS_5\times S^5$ solution of the BPS equations, then we ’bootstrap’ the general BPS equations to get a second order PDE for one remaining spinor variable and finally we discuss the supersymmetries of probe branes, boundary conditions and the general structure of solutions.
While this paper was in preparation, a paper [@Lunin:2006xr] appeared which overlaps with ours.
Half-BPS Domain Wall Operators and Defect Field Theory {#secdomainwalls}
======================================================
Defect operators can be defined by introducing degrees of freedom localized on the defect. The theory that captures the interactions of the localized degrees of freedom with those of ${\cal N}=4$ SYM is a defect conformal field theory if we impose that the defect preserves conformal invariance.
The description of defect operators in terms of defect conformal field theories naturally suggests the construction of such theories as low energy limits of branes in string theory. The strategy is to consider brane configurations involving D3-branes together with other branes intersecting the D3-branes along a defect. The low energy effective field theory is described by ${\cal N}=4$ SYM coupled to the degrees of freedom localized on the defect introduced by the other branes.
Here we are interested in half-BPS domain wall operators in ${\cal N}=4$ SYM. These operators arise by considering the following brane configuration:
      0   1   2   3   4   5   6   7   8   9
---- --- --- --- --- --- --- --- --- --- ---
D3    x   x   x   x
D5    x   x   x       x   x   x
NS    x   x   x                   x   x   x
The degrees of freedom associated with this brane configuration are the ${\cal N}=4$ SYM multiplet together with hypermultiplets [@Hanany:1996ie] localized on $\BR^{1,2}\subset \BR^{1,3}$.
The action for this defect field theory in the absence of NS5-branes and when none of the D3-branes end on the D5-branes has been constructed in [@Karch:2000gx; @DeWolfe:2001pq; @Erdmenger:2002ex]. We refer the reader to these references for the detailed form of the action.
In order to obtain more general domain wall operators, one can allow for configurations where a number of D3-branes end on a D5-brane or NS5-brane. We label by D5$_k$/NS5$_k$ a D5/NS5-brane where $k$ D3-branes end. Each choice of partitioning the $N$ D3-branes among the five-branes corresponds to a different half-BPS domain wall operator. This is similar to the construction of half-BPS Wilson loops [@Gomis:2006sb] using D-branes.
The D3-branes ending on the five-branes have the effect of introducing magnetic charge on the five-brane worldvolume. It would be very interesting to construct this general class of defect field theories explicitly and study in detail the singularities produced on the ${\cal N}=4$ SYM fields by the corresponding half-BPS domain wall operator.
We conclude this section with the analysis of the symmetries of these defect conformal field theories, which play a crucial role in the following sections, where the supergravity description of these operators is studied.
The bosonic symmetry algebra is $SO(2,3)\times SO(3)\times SO(3)=Sp(4,\BR)\otimes SO(4)$, which is generated by generators $M^A$ and $M^L$. In the $(4,4)$ those generators $(M^A)^\alpha{}_\beta$ and $(M^L)^{\dot\alpha}{}_{\dot\beta}$ can be chosen real and the supersymmetry generators[^5] $Q_{\alpha\dot\alpha}$ are Hermitean Q\_\^=Q\_, they transform as = -(M\^A)\^\_Q\_= -(M\^L)\^\_Q\_. Their anti commutation relation is {Q\_,Q\_}= i(M\_AJ)\_I\_M\^A- iJ\_(M\_LI)\_M\^L, where $J_{\alpha\beta}$ is the real, invariant, antisymmetric matrix of $Sp(4,\BR)$ and $I_{\dot\alpha\dot\beta}$ is the real, invariant, symmetric matrix of $SO(4)$. Therefore, these defect conformal field theories are invariant under an $OSp(4|4)$ subalgebra of the $SU(2,2|4)$ algebra of ${\cal N}=4$ SYM.
The supersymmetry generators can be contracted with real Grassmann variables $\hat\epsilon_{\alpha\dot\alpha}$ to form Hermitean generators \^ Q\_= \^\^Q\^\_.
Probe Branes in $AdS_5\times S^5$ {#secprobe}
=================================
In this section we study in the probe approximation the branes in $AdS_5\times S^5$ which correspond to the half-BPS domain wall operators described in the previous section. In the next sections we study the backreaction produced by these branes and find the equations which determine the asymptotically AdS geometries dual to the defect operators.
In order to make manifest the $SO(2,3)\otimes SO(3)\otimes SO(3)$ symmetry of the half-BPS domain wall operators we foliate the $AdS_5$ geometry by $AdS_4$ slicings $$ds^2=R^2\left(\cosh^2(x)\,ds^2_{AdS_4}+dx^2\right),$$ where $R$ is the radius of curvature of $AdS_5$ and $S^5$. We also foliate $S^5$ by $S^2\times\tilde{S}^2$ slicings $$ds^2=R^2\left(dy^2+\cos^2(y)\,d\Omega_2+\sin^2(y)\,d\tilde{\Omega}_2\right),$$ where $d\Omega_2$ ($d\tilde{\Omega}_2$) is the metric on a unit $S^2$ ($\tilde{S}^2$).
We note that in this parametrization the $AdS_5\times S^5$ metric can be represented by an $AdS_4\times S^2\times\tilde{S}^2$ fibration over a strip, whose length is parametrized by $x$ and width by $y$. At the $y=0$ boundary of the strip $\tilde{S}^2$ shrinks smoothly to zero size while at the $y=\pi/2$ boundary $S^2$ shrinks smoothly.
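For orientation, the two slicings can be combined into a single line element for this fibration over the strip; the following is a sketch written using the warp factors of the undeformed $AdS_5\times S^5$ solution discussed in section \[secwallgeometries\] (for general solutions the warp factors are the unknown functions on the strip):
$$ds^2_{10}=R^2\Big(\cosh^2(x)\,ds^2_{AdS_4}+dx^2+dy^2+\cos^2(y)\,d\Omega_2+\sin^2(y)\,d\tilde{\Omega}_2\Big),\qquad -\infty<x<\infty,\quad 0\le y\le\tfrac{\pi}{2},$$
so that $\tilde{S}^2$ shrinks at $y=0$ and $S^2$ shrinks at $y=\pi/2$, as stated above.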
The D5$_k$-brane in the previous section becomes in the near horizon limit a D5-brane in $AdS_5\times S^5$ with an $AdS_4\times\tilde{S}^2$ worldvolume and with $k$ units of magnetic flux dissolved on the D5-brane. The details of this solution can be found in [@Karch:2000gx] and the analysis of supersymmetry in [@Skenderis:2002vf]. A D5$_k$-brane sits at $y=\pi/2$ and at $x(k)=\sinh^{-1}(\pi k/R)$.
Similarly, the solution for the NS5$_k$-brane can also be found. It spans an $AdS_4\times S^2$ geometry in $AdS_5\times S^5$ and has $k$ units of magnetic flux dissolved in it. Now, the NS5$_k$-brane sits at $y=0$ and at $x(k)=\sinh^{-1}(\pi k/R)$.
We will show in section \[secprobebc\] that all these five-branes preserve exactly the same supersymmetries and coincide with the supersymmetries preserved by the defect conformal field theory.
We therefore see that any half-BPS domain wall operator in the probe approximation can be characterized by a collection of points on the appropriate boundary of the strip which characterizes $AdS_5\times S^5$. To each D5$_k$-brane of the microscopic description of the defect conformal field theory we associate a point at the $y=\pi/2$ boundary of the strip located at $x(k)$, where $k$ is the number of D3-branes ending on the D5-brane. Similarly, to each NS5$_k$-brane of the microscopic description of the defect conformal field theory we associate a point at the $y=0$ boundary of the strip located at $x(k)$, where $k$ is the number of D3-branes ending on the NS5-brane. Therefore, to a given half-BPS domain wall operator we can associate the following strip
The goal of the rest of the paper is to find the BPS equations in Type IIB supergravity which determine the backreaction[^6] produced by a collection of five-branes, dual to a given defect conformal field theory.
The supergravity solution {#secsugra}
=========================
In this section we derive the BPS equations by reproducing the $OSp(4|4)$ supersymmetry algebra of the half-BPS domain wall operators in Type IIB supergravity. This consists of two parts, the invariance of the background under the (super) symmetry transformations as well as the closure of the (super) symmetry algebra. This section is very technical. Readers who do not want to go into technical details can just read \[secansatz\]. The result of this section is the BPS equations (\[bpsdv\]), (\[bpsgv1\]) and (\[bpsgv2\]) together with the normalization conditions (\[normcond\]). We are using the type IIB supergravity conventions of [@Schwarz:1983qr; @Gauntlett:2005ww] with a mostly + signature. The gamma matrix conventions are summarized in Appendix \[appclifford\].
The Ansatz {#secansatz}
----------
The bosonic symmetry group is $SO(2,3)\times SO(3)\times SO(3)$ and the 10 dimensional space time is a $AdS_4\times S^2\times S^2$ fibration over a two dimensional base space $M_2$. The most general vielbein ansatz is
[l]{} e\^=A\_1e\^,=0,1,2,3,\
e\^m=A\_2e\^[m]{}, m=4,5,\
e\^i=A\_3e\^[i]{}, i=6,7,\
e\^a, a=8,9,
where $e^{\hat \mu}$ is a vielbein on a unit $AdS_4$, $e^{\tilde m}$ and $e^i$ are vielbeins on the unit $S^2$ and $\tilde{S}^2$ respectively and $e^a$ is a vielbein on $M_2$. The most general self dual 5-form flux has the form $$F=f_a\left(e^{0123a}+\epsilon_{ab}\,e^{4567b}\right),$$ where $f_a\,e^a$ is a real 2-form on $M_2$. The most general dilaton-axion $P$ and 3-form fluxes $G$ are given in terms of the complex 1-forms $p^{(4)}=p_a\,e^a$, $g^{(4)}=g_a\,e^a$ and $h^{(4)}=h_a\,e^a$ on $M_2$ $$P=p_a\,e^a\qquad G=g_a\,e^{45a}+ih_a\,e^{67a}.$$ The most general $U(1)$-R connection is given by the two dimensional connection $q^{(4)}=q_a\,e^a$ on $M_2$ $$Q=q_a\,e^a.$$
A specific basis for $OSp(4|4)$
-------------------------------
We will derive the BPS equations by heavily using the explicit form of the symmetry algebra $OSp(4|4)$ in terms of the Clifford algebras of $SO(2,3)\times SO(3)\times SO(3)$. We use the Clifford algebra conventions of Appendix \[appclifford\]. A basis of generators in the 4 of $Sp(4,\BR)$ is \[rotmatrix1\] M\^\~\^,M\^\~\^. The matrix J=iD\^[(1)]{}\^[(1)]{} is invariant and antisymmetric. One can choose a basis of Majorana spinors \_\^=B\^[(1)]{}\_ and a dual basis of spinors $\chi^\alpha$ such that \^\^t\_=\^\_. Then \[rotmatrix2\] (M\^)\^\_=\^\^tM\^\_ (M\^)\^\_=\^\^tM\^\_ are real and J\_=\_\^tJ\_ is real, invariant and antisymmetric.
Similarly,
[ll]{} M\^[mn]{}\~(\^[mn]{}),& M\^m\~(\^m),\
M\^[ij]{}\~(\^[ij]{}),& M\^i\~(\^i)
are a basis of generators of the 4 of $SO(4)$, the matrix I=D\^[(2)]{}D\^[(3)]{} is invariant and symmetric. One can choose a basis of Majorana spinors \_\^=(B\^[(2)]{}B\^[(3)]{})\_ and a dual basis of spinors $\chi^{\dot\alpha}$ such that \^\^t\_=\^\_. Then $(M^{mn})^{\dot\alpha}{}_{\dot\beta}$,$(M^m)^{\dot\alpha}{}_{\dot\beta}$, $(M^{ij})^{\dot\alpha}{}_{\dot\beta}$ and $(M^i)^{\dot\alpha}{}_{\dot\beta}$ are real and $I_{\dot\alpha\dot\beta}$ is real, invariant and symmetric.
The anticommutation relation of the supercharges is then \[anticommutator\]
[rl]{} {Q\_,Q\_}=& (|\_(\^[(1)]{}\_)\_)M\^+ (|\_(\^[(1)]{}\_)\_)M\^-\
&(|\_(\^[(1)]{}\_[mn]{})\_)M\^[mn]{}- (|\_(\^[(1)]{}\_m)\_)M\^m-\
&(|\_(\^[(1)]{}\_[ij]{})\_)M\^[ij]{}- (|\_(\^[(1)]{}\_i)\_)M\^i.
Symmetries and the Killing spinor equations
-------------------------------------------
The Killing spinors $\epsilon$ have to transform in the $(4,2,2)$ representation of the Bosonic symmetry group $SO(2,3)\times SO(3)\times SO(3)$. The Bosonic symmetries are realized by Killing vector fields. Those act through the Lie derivative on the Killing spinors.
For a given point $Q$ on $M_{10}$ there is a $SO(1,3)\times SO(2)\times SO(2)$ stabilizer group. The Lie derivative for this stabilizer group acts by rotations \[stabilizeraction\] \^, \^[mn]{}\^[ij]{} on the Killing spinor $\epsilon$ at $Q$. For the tangent vectors $e_{\hat\mu}$, $e_{\tilde m}$ and $e_{\check i}$ at $Q$ there are unique Killing vector fields which generate a geodesic through $Q$ in the fiber. The Lie derivative along those Killing vector fields at $Q$ are given by the covariant derivatives £\_=\_,£\_[m]{}=\_[m]{}£\_[i]{}=\_[i]{}.
The fact that the Killing spinors $\epsilon$ have to transform in the $(4,2,2)$ representation of the Bosonic symmetry group $SO(2,3)\times SO(3)\times SO(3)$ implies that the Lie derivative action on a Killing spinor can be reproduced by a matrix action $N_\mu$, $N_m$ and $N_i$ which is consistent with (\[stabilizeraction\]), (\[rotmatrix1\]) and (\[rotmatrix2\]). Consistency and the 10-dimensional chirality condition then imply that \[definen\]
[l]{} N\_=n(\_\_8) n\^[-1]{},\
N\_m= n(\_m\_8) n\^[-1]{}\
N\_i= n(\_i\_8) n\^[-1]{},
where $n$ is a unitary matrix of the form n= f\_[\_1\_2\_3\_4]{}(\^[(1)]{})\^ (\^[(2)]{})\^(\^[(3)]{})\^ (\^[(4)]{})\^. Note that (\[definen\]) does not fix $n$ uniquely. The Killing spinor equations then have the form \[killingeq\]
[c]{} (\_-N\_)=0,\
(\_[m]{}-N\_m)=0,\
(\_[i]{}-N\_i)=0.
To solve those 10-dimensional Killing spinor equations let us first have a look at the simplified 8-dimensional and 2-dimensional Killing spinor equations
[c]{} (\_-(\_)) \^[(\_1,\_2,\_3)]{}\_=0,\
(\_[m]{}-(\_m)) \^[(\_1,\_2,\_3)]{}\_=0,\
(\_[i]{}-(\_i)) \^[(\_1,\_2,\_3)]{}\_=0,\
(-\_4\^8)\^[(\_4)]{}=0.
The solutions to those equations transform in the $(4,2,2)$ of $SO(1,3)\times SO(2)\times SO(2)$. This representation allows for a reality condition. A basis of the real representation is labelled by $(\alpha,\dot\alpha)$ (B\^[(1)]{}B\^[(2)]{}B\^[(3)]{})\^[-1]{} (\^[(1,1,1)]{}\_)\^= \^[(1,1,1)]{}\_.
Define \_0= \^\^[(1,1,1)]{}\_\^[(1)]{}, where $\hat\epsilon^{\alpha\dot\alpha}$ is a real Grassman variable defined in the last section. It is not hard to see that \[killingspinor\] = n\_0= \^f\_[\_1\_2\_3\_4]{} \^[(\_1,\_2,\_3)]{}\_\^[(\_4)]{}= \^ \^[(\_1,\_2,\_3)]{}\_\_[\_1\_2\_3]{} is the general solution of (\[killingeq\]).
The 10-dimensional chirality condition imposes \[chiralitycond\] \^[(4)]{}\_[-]{}=\_, whereas the reality condition implies =-\_1\_2\_3\^ \^[(\_1,\_2,\_3)]{}\_(\_[-\_1,-\_2,\_3]{}). This allows to translate the dilatino and gravitino variation equations into equations which depend linearly on \^\^[(\_1,\_2,\_3)]{}\_.
The Dilatino and Gravitino variation equations
----------------------------------------------
The dilatino variation equation P\_M\^M+G\_[MNP]{}\^[MNP]{}=0 turns into[^7] \[dv\] ip\_a\^[(1,1,0)]{}\^a- \^[(1,0,1)]{}\^a+ \^[(1,1,0)]{}\^a=0.
To calculate the gravitino variation equations \_M+F\_[PQRST]{}\^[PQRST]{}\_M-G\_[PQR]{}(\_M\^[PQR]{}-9\^P\_M\^[QR]{})=0, we need the spin connection
[l]{} \_=\_, \_[a]{}=\_,\
\_[mnp]{}=\_[mnp]{}, \_[mna]{}=\_[mn]{},\
\_[ijk]{}=\_[ijk]{}, \_[ija]{}=\_[ij]{},\
\_[aab]{}.
The gravitino variation equations turn into \^[(2,1,1)]{}+ \^a+\^a\^[(1,0,0)]{}-\^a\^[(0,1,1)]{}-\^a=0,\
-\^[(0,2,1)]{}+ \^a-\^a\^[(1,0,0)]{}+\^a\^[(0,1,1)]{}-\^a=0,\
-\^[(0,0,2)]{}+ \^a-\^a\^[(1,0,0)]{}-\^a\^[(0,1,1)]{}+\^a=0,\
\_a+\^b\_a\^[(1,0,0)]{}- \_a\^b\^[(0,1,1)]{}+ \^[(0,1,1)]{}- \_a\^b+ =0
Killing vectors, Lorentz rotations and constraints on Spinor bilinears
----------------------------------------------------------------------
The supersymmetry algebra implies that the spinor bilinears \^M=(|\_1\^M\_2) are Killing vectors which generate the Bosonic symmetries that appear in the anticommutator (\[anticommutator\]) of the two supersymmetries generated by $\epsilon_1$ and $\epsilon_2$ [@Halmagyi:2005pn]. This implies the equations \[bilinearcond1\]
[rcl]{} -2(|\_1\^\_2)&=& \_1\^\_2\^ (|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^)) \^[(1,1,1)]{}\_),\
-2(|\_1\^m\_2)&=& \_1\^\_2\^ (|\^[(1,1,1)]{}\_ (\^[(1)]{}\^m) \^[(1,1,1)]{}\_),\
-2(|\_1\^i\_2)&=& \_1\^\_2\^ (|\^[(1,1,1)]{}\_ (\^[(1)]{}\^i) \^[(1,1,1)]{}\_),\
-2(|\_1\^a\_2)&=&0.
Furthermore the Lorentz rotation that appears in the anticommutator of the two supersymmetries generated by $\epsilon_1$ and $\epsilon_2$ is \[bilinearcond2\]
[rl]{} l\^[MN]{}=&\_P\^[MN]{}\^P- F\^[MNPQR]{}(|\_1\_[PQR]{}\_2)-\
&(G\^[MNP]{}|\_1\_P\_2- G\_[PQR]{}|\_1\^[MNPQR]{}\_2).
Comparison with (\[anticommutator\]) leads to
[rcl]{} l\^&=&\_1\^\_2\^ (|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^)) \^[(1,1,1)]{}\_),\
l\^[mn]{}&=&-\_1\^\_2\^ (|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^[mn]{})) \^[(1,1,1)]{}\_),\
l\^[ij]{}&=&-\_1\^\_2\^ (|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^[ij]{})) \^[(1,1,1)]{}\_)
with all other Lorentz rotations vanishing.
The left hand sides of (\[bilinearcond1\]) can be expanded using (\[killingspinor\]), the identities (\[reducebilinears\]) and the symmetry properties (\[bilinearsymmetries\]). This implies[^8]
[ll]{} |=A\_1,& |\^[(1,1,0)]{}=|\^[(1,0,1)]{}= |\^[(0,1,1)]{}=0,\
|\^[(2,3,0)]{}=A\_2,& |\^[(3,3,0)]{}=|\^[(2,2,0)]{}= |\^[(3,2,0)]{}=0,\
|\^[(2,1,3)]{}=A\_3,& |\^[(3,1,3)]{}=|\^[(2,1,2)]{}= |\^[(3,1,2)]{}=0,\
Using the chirality condition (\[chiralitycond\]) one can see that in the last set of equations one can set $a=8$. This leaves 16 real equations for the 16 real components of $\zeta$. The overall phase of $\zeta$ cannot be determined from those equations. This means that the system of equations is overdetermined by one equation. Doing a cyclic permutation of the Pauli matrices $(0,1,2,3)\rightarrow(0,3,1,2)$ the above equations are solved by
[ll]{} f\_[1,1,1,1]{}=e\^[i]{},&f\_[1,1,-1,1]{}=i\_1e\^[i]{}\^,\
f\_[1,-1,1,1]{}=-\_1\_2e\^[i]{}\^,&f\_[1,-1,-1,1]{}=i\_2e\^[i]{},\
f\_[-1,1,1,1]{}=-i\_2e\^[i]{},&f\_[-1,1,-1,1]{}=-\_1\_2e\^[i]{}\^,\
f\_[-1,-1,1,1]{}=-i\_1e\^[i]{}\^,&f\_[-1,-1,-1,1]{}=e\^[i]{}
with \[normcond\] ||\^2+||\^2=,=. This leaves $\phi$ and the relative phase of $\alpha$ and $\beta$ undetermined. From now on we will continue working in the basis with cyclically permuted Pauli matrices.
The conditions (\[bilinearcond2\]) for $(MN)=(\mu m)$ and $(MN)=(\mu i)$ are (g\_a|\_1\^[4 5 i a]{}\_2)= (h\_a|\_1\^[m 6 7 a]{}\_2)=0. These imply the reality conditions (e\^[-2i]{}g\_a)=(e\^[-2i]{}h\_a)=0 or g\_a\^=-e\^[-4i]{}g\_ah\_a\^=-e\^[-4i]{}h\_a. The other conditions from the closure of the supersymmetry algebra are more involved and we do not need them.
The BPS equations
-----------------
We can insert the results of the last section into the Gravitino and Dilatino variation equations. The dilatino variation equations impose a reality condition on $P$ (p\_8\^-ip\_9\^)=e\^[-8i]{}(p\_8-ip\_9) one can gauge fix the $U(1)$ R-symmetry of type IIB supergravity by demanding $e^{2i\phi}=i$. Then the reality conditions read g\_a\^=g\_a,h\_a\^=h\_a(p\_8\^-ip\_9\^)=(p\_8-ip\_9).
To derive the remaining BPS equations it is useful to fix reparametrization invariance by going to the conformally flat metric on $M_2$ e\^8=A\_4dxe\^9=A\_4dy, and introducing the complex coordinate $z$ by dz=dx+idy. It is useful to combine the real 1-forms $f_a$, $g_a$, $h_a$ and $p_a$ into
[ll]{} f=(f\_8-if\_9),& g=(f\_8-if\_9),\
h=(h\_8-ih\_9),& p=(p\_8-ip\_9).
The dilatino variation equations then give rise to the BPS equations \[bpsdv\]
[l]{} p\^+(g+ih)=0,\
p-(g-ih)\^=0.
The Gravitino variation equations in the $\mu$, $m$ and $i$ directions give rise to the BPS equations \[bpsgv1\]
[l]{} +++\^=0,\
\^-\^+\^+=0,\
-\^+--\^=0,\
--\^-\^-=0,\
\^+-+\^=0,\
--\^-\^+=0.
Finally, the gravitino variation equations in the $a$-direction give rise to the reality condition q\_a=\_a, which reduces to $q_a=0$ in the chosen gauge, together with the BPS equations \[bpsgv2\]
[l]{} \_z+-\^=0,\
\_z\^+\^+=0,\
\_z\^-\^+\^-=0,\
\_z--+\^=0.
Bianchi Identities
------------------
The Bianchi identities
[c]{} P=0,\
G=-PG\^,\
dQ=-i PP\^,\
dF=GG\^
turn into the equations
[c]{} dp\^[(4)]{}=0,\
d(A\_2\^2g\^[(4)]{})+p\^[(4)]{}(A\_2\^2g\^[(4)]{})=0,\
d(A\_3\^2h\^[(4)]{})-p\^[(4)]{}(A\_3\^2h\^[(4)]{})=0,\
p\^[(4)]{}p\^[(4)]{}\^=0,\
d(A\_1\^4f\^[(4)]{})=0,\
d(A\_2\^2A\_3\^2f\^[(4)]{})=A\_2\^2A\_3\^2g\^[(4)]{}h\^[(4)]{}.
The first four identities can be solved by introducing the functions $\rho$, $l$, $m$ and $n$ \[bianchi2\]
[c]{} p\^[(4)]{}=d,\
g\^[(4)]{}=dm,\
h\^[(4)]{}=dn,\
f\^[(4)]{}=dl,
the last equation leads to a harmonic equation for $l$.
The domain wall geometries {#secwallgeometries}
==========================
In this section we will attempt to ’solve’ the system (\[bpsdv\]), (\[bpsgv1\]) and (\[bpsgv2\]) of BPS equations. Those equations are real linear in $\alpha$ and $\beta$. For this reason we can rescale $\alpha$ and $\beta$ such that the normalization conditions (\[normcond\]) are nicer \[normcond2\] A\_1=\^+\^,A\_2=\_1(+\^\^)A\_3=i\_1\_2(-\^\^). Those conditions will turn out to be crucial for “bootstrapping" the system.
In order to get a better understanding of what to expect from the general solution, let us first start by verifying that $AdS_5\times S^5$ is a solution and where supersymmetric brane probes are sitting.
$AdS_5\times S^5$
-----------------
The pure $AdS_5\times S^5$ solution is an $AdS_4\times S^2\times S^2$ fibration over an infinite strip. On the one boundary of the strip one $S^2$ is shrinking to zero size, whereas on the other boundary the other $S^2$ is shrinking to zero size. This can be seen by embedding $AdS_5$ into $\BR^6$ spanned by $X_{-1}, X_0\cdots,X_4$ and $S^5$ into $\BR^6$ spanned by $Y_1,\cdots,Y_6$ -X\_[-1]{}\^2-X\_0\^2+X\_1\^2++X\_4\^2=-R\^2Y\_1\^2++Y\_6\^2=R\^2. Then the strip can be parametrized by $-\infty<X_4<\infty$ and $0<r<R$ such that Y\_1\^2+Y\_2\^2+Y\_3\^2=r\^2Y\_4\^2+Y\_5\^2+Y\_6\^2=R\^2-r\^2.
The solution has no 3-form flux, i.e. $g=h=0$ and the dilatino variation equations (\[bpsdv\]) imply that the dilaton is constant $p=0$. The gravitino variation equations (\[bpsgv2\]) lead to the holomorphicity conditions \_[|z]{}(\^\^2A\_4)=\_[|z]{}(\^2A\_4)= \_[|z]{}=0. This implies that $|\alpha|^2|\beta|^2$ is holomorphic and real, i.e. ||\^2||\^2=c\^4. Furthermore, $\alpha\beta|\beta|^2$ is holomorphic and (\[normcond2\]) implies that it is real on one boundary of $M_2$ and imaginary on the other one. This determines[^9] ||\^2=c\^4e\^[z]{}, which implies =ce\^[-+i\_]{}=ce\^[+iy-i\_]{}. Using the equations (\[normcond2\]) we can determine A\_1=2c\^2(x),A\_2=2c\^2\_1(y)A\_3=-2c\^2\_1\_2(y). For the range of $y\in[0,\frac{\pi}{2}]$ to make sense, we have to set $\nu_1=1$ and $\nu_2=-1$. The gravitino variation equations (\[bpsgv1\]) then imply =ce\^[-]{},=ce\^, A\_4=2c\^2f=60. From this we conclude that 2c\^2=R.
The general solutions that we are looking for have to have this asymptotic form, i.e. they have to have semi-infinite strips where $A_2=0$ on one side and $A_3=0$ on the other side with a constant dilaton and no 3-form fluxes.
The general ’bootstrap’ {#secbootstr}
-----------------------
Let us start by using (\[bpsdv\]) =p(-),=-p(+) and (\[normcond2\]) to eliminate $g$, $h$, $A_1$, $A_2$ and $A_3$ from the BPS equations. The equations (\[bpsgv1\]) allow to solve for $A_4$, $f$, $p$ in terms of $\alpha$ and $\beta$
[l]{} = \_z||- \_z(),\
p=- \_z||,\
=\_z()+ 2 \_z||
and lead to one more independent equation for $\alpha$ and $\beta$ \[bootstr1\] - 2\_z(||\^2+||\^2)+ (()\^2+()\^2)=0.
The difference of the first two equations (\[bpsgv2\]) leads to the identity \[bootstr2\] \_z()=-2\_z|| and the sum of the first two equations (\[bpsgv2\]) leads to \_z()=0, which implies that \[bootstr3\] =a(z\^), where $a(z^\ast)$ is an antiholomorphic function. The other two equations (\[bpsgv2\]) are redundant.
We can reexpress $\frac{\nu_2A_4}{2\alpha\beta^\ast}$ and $p$ in terms of the ratio $\frac{\alpha}{\beta^\ast}$ \[a4p\]
[l]{} = -2 \_z()- \_z(),\
p= \_z().
The equations (\[bootstr1\]), (\[bootstr2\]) and (\[bootstr3\]) are then the remaining system of equations for $\alpha$ and $\beta$. Actually, (\[bootstr1\]) only depends on the ratio $\frac{\alpha}{\beta^\ast}$ and turns out to be the last equation that is trivially satisfied. The other two equations form a second order system for $\alpha$ and $\beta$.
Those equations are algebraic in the phase of $\alpha\beta^\ast$. Equation (\[bootstr3\]) can be solved for $|\alpha\beta^\ast|$ in terms of $\frac{\alpha}{\beta^\ast}$, this can be inserted into (\[bootstr2\]) to give a single second order differential equation for $\frac{\alpha}{\beta^\ast}$.
Probe branes and boundary conditions {#secprobebc}
------------------------------------
To start understanding the general solution let us first look at the probe branes again. The projector equation for a supersymmetric NS5-brane around $S^2$ with $k$ units of magnetic flux is [@Gauntlett:1997cv] \^[(1)]{}\^[(2)]{}- \^[(1)]{}=, and the projector equation for a supersymmetric D5-brane around $\tilde S^2$ with $k$ units of magnetic flux is -\^[(1)]{}\^[(3)]{}- \^[(1)]{}=. Those equations turn into e\^[z\_[NS5\_k]{}\^]{}==+ e\^[z\_[D5\_k]{}\^]{}==-i(+) which is (x(k))=,y\_[NS5]{}=0y\_[D5]{}= in agreement with the predictions of section \[secprobe\]. From this it is easy to see that the NS5-branes are sitting in a place where $\frac{\alpha}{\beta^\ast}$ is real, whereas the D5-branes are sitting in a place where $\frac{\alpha}{\beta^\ast}$ is imaginary. The absolute value $\left|\frac{\alpha}{\beta^\ast}\right|$ determines the magnetic flux on the brane.
For the $AdS_5\times S^5$ solution this means that the NS5-branes are sitting on the boundary of the strip where $S^2$ has maximal size, call it the ’black’ boundary and the D5-branes are sitting on the boundary of the strip where $\tilde S^2$ has maximal size, call it the ’white’ boundary.
In the regions where the 5-branes are sitting, we expect a backreaction of the geometry, which generates the throat of a 5-brane. This means that there is a 3-sphere which supports the appropriate 3-form flux. This is done by switching the shrunk 2-sphere in the respective region of the boundary. To understand this better, we need to work out the boundary conditions in the different regions.
The two dimensional geometry can be conformally mapped to a region in the complex plane. This region has a boundary on which one of the two 2-spheres is shrinking to zero size. In order to parametrize the boundary in a more invariant way, we impose the boundary condition A\_4|\_[M\_2]{}=1. The boundary is divided into (colored) segments on which either one or the other 2-sphere is shrinking to zero size A\_2|\_[M\_2,w]{}=0A\_3|\_[M\_2,b]{}=0. Using (\[normcond2\]) this leads to the same conditions on the phase of $\frac{\alpha}{\beta^\ast}$ as the five-brane projectors do. Furthermore, in order for the geometry to be smooth, one has to require either \_nA\_1|\_[M\_2,w]{}=0,\_nA\_2|\_[M\_2,w]{}=1,\_nA\_3|\_[M\_2,w]{}=0,\_nA\_4|\_[M\_2,w]{}=0, or alternatively \_nA\_1|\_[M\_2,b]{}=0,\_nA\_2|\_[M\_2,b]{}=0,\_nA\_3|\_[M\_2,b]{}=1,\_nA\_4|\_[M\_2,b]{}=0, where $\partial_n$ is the normal derivative to the boundary. In order for the fluxes to be regular, we either need to require p\_n|\_[M\_2,w]{}=f\_n|\_[M\_2,w]{}=g\_t|\_[M\_2,w]{}=h\_n|\_[M\_2,w]{}=0 or p\_n|\_[M\_2,b]{}=f\_n|\_[M\_2,b]{}=g\_n|\_[M\_2,b]{}=h\_t|\_[M\_2,b]{}=0. It is not difficult to see that the boundary conditions on $g$ and $h$ are satisfied, once the boundary condition on $p$ is satisfied.
Let us concentrate on the ’white’ boundary, where $A_2|_{\partial M_2,w}=0$. We assume that it is along the $x$-axis and that the strip is on the upper half plane. There the boundary conditions imply for the spinor variables $\frac{\alpha}{\beta^\ast}$ and $\alpha\beta^\ast$
[lcl]{} |\_[M\_2,w]{}i,\^|\_[M\_2,w]{},&& (|| )\_[M\_2,w]{}=i\_1\_2,\
\_y()\_[M\_2,w]{}= (||)\_[M\_2,w]{},&& \_y|\^|\_[M\_2,w]{}=0.
Note that all the normal derivatives of the spinor variables are determined, except for $\partial_y\arg(\alpha\beta^\ast)|_{\partial M_2,w}$. This phase only appears in the expression for $A_4$ in terms of the spinor variables.
The antiholomorphic function $a(z^\ast)$ has to be real on this boundary. Given $a(z^\ast)$ and using both boundary conditions on $A_4$, (\[bootstr3\]) can be solved for $|\alpha\beta^\ast|$ |\^|\_[M\_2,w]{}= |a||()\^2-()\^2|. This can be inserted into (\[bootstr2\]) to give a second order ODE for $\frac{\alpha}{\beta^\ast}$. The solutions of that ODE are determined by the values of $\frac{\alpha}{\beta^\ast}$ at the ’ends’ of the ’white’ boundary. The above boundary conditions are then enough for the second order PDE of \[secbootstr\]. The antiholomorphic function presumably has to be determined by the reality condition above and its asymptotic behavior.
Similarly the boundary conditions on the spinor variables for a ’black’ boundary along the y-axis, where the strip is the right half plane are
[lcl]{} |\_[M\_2,b]{},\^|\_[M\_2,b]{}i,&& (|| )\_[M\_2,b]{}=i\_1,\
\_x()\_[M\_2,b]{}= - (||)\_[M\_2,b]{},&& \_x|\^|\_[M\_2,b]{}=0.
Let us try to see how this story fits in with the probe brane picture. We expect that the five-branes get replaced by geometry with fluxes. At the position of the defect we expect that the value of $\frac{\alpha}{\beta^\ast}$ agrees with the one from the probe brane calculation. There are two possibilities that can happen: The defect is either a finite or an infinite distance along the boundary away from a given reference point. Furthermore the geometry at the defect has to have a 3-cycle that supports the flux.
For a defect at finite distance this can be done by a change of coloring, i.e. by inserting a finite interval of ’white’ boundary into the ’black’ boundary or vice versa. This creates a 3-sphere which can support the flux. On the ’black’ side of the interface, the value of $\frac{\alpha}{\beta^\ast}$ is given by the probe brane value.
In order for the geometry to be smooth at the interface of a ’black’ and a ’white’ boundary, it needs to have a right angle, such that the strip turns locally into a quadrant. Unlike the cases of chiral operators or Wilson lines [@Lin:2004nb; @Yamaguchi:2006te; @Lunin:2006xr], there are no such interfaces in our vacuum ($AdS_5\times S^5$) solution. Actually, closer examination reveals that not all the regularity conditions can hold at the same time. For example, if the boundary conditions on $A_2$ and $A_3$ hold at the same time, then $\partial_z\log|\alpha\beta|$ diverges at the interface. This implies that the dilaton $p$ diverges or $\frac{\alpha}{\beta^\ast}$ diverges.
On the other hand a defect at infinite distance produces an infinite throat with the same color on both sides. This also creates a three-sphere to support the three-form flux. We expect that $\frac{\alpha}{\beta^\ast}$ asymptotes to the value given by the probe brane picture.
In the asymptotic region of such a throat we expect $\frac{\alpha}{\beta^\ast}$ to be almost constant at the boundary. This implies that $\alpha\beta^\ast$ is linearly growing at the boundary, i.e. the warp factors $A_1$ and $A_3$ are growing linearly and the dilaton is growing logarithmically along the boundary. In the other direction this is of course bounded and cannot continue forever, it has to connect to an asymptotically $AdS_5\times S^5$ region.
As of now we haven’t found any convincing argument for either scenario, but those seem to be the only possibilities for the backreacted geometry of a five-brane.
We believe that those difficulties are arising due to the fact that we are trying to describe the backreacted geometry of five-branes instead of D3-branes [@Lin:2004nb] or strings [@Yamaguchi:2006te; @Lunin:2006xr], where the geometry seems to be really well behaved at the locations of the ’defects’.
The gravity discussion also leaves open the possibility of more than two asymptotic $AdS_5\times S^5$ regions. This would correspond to several $\CN=4$ super Yang-Mills theories that interact on a defect. In the case of only a single asymptotic $AdS_5\times S^5$ region one would get a $\CN=4$ super Yang-Mills theory with a boundary. The latter case requires an interface between a ’black’ and a ’white’ boundary.
There is more work to be done in order to understand those outstanding issues better. To complete the story, one also needs to calculate the fluxes through all the three-cycles as well as change of the rank of the gauge group. Those impose the true physical boundary conditions and might be calculable even if the full solution is not known (see e.g. [@Halmagyi:2005pn]). We leave a closer examination of all those issues for future work [@wip].
We would like to thank Sujay Ashok, Eleonora Dell’Aquila, Jerome Gauntlett, Roberto Emparan, Anton Kapustin and Rob Myers for useful discussions. Research at the Perimeter Institute is supported in part by funds from NSERC of Canada and by MEDT of Ontario. JG is further supported by an NSERC Discovery grant.
Clifford algebra conventions {#appclifford}
============================
Generalities
------------
The Clifford algebra is defined by the anticommutation relations {\^m,\^n}=2\^[mn]{}, where $\eta^{mn}=\eta^m\delta^{mn}$. We choose a representation in which $\sqrt{\eta^m}\gamma^m$ is Hermitean[^10]. Given a complex structure, one can define the raising and lowering operators \^m=\^[2m]{}+i\^[2m+1]{}, (\^m)\^= \^[2m]{}-i\^[2m+1]{}. Then the raising and lowering operators satisfy the following anticommutation relations: {\^m,\^n}={(\^m)\^,(\^n)\^}=0 {\^m,(\^n)\^}=4\^[mn]{}. One can then define the fermion number operators F\^m=i\^[2m]{}\^[2m+1]{}= 1-\^m(\^m)\^=-1+(\^m)\^\^m. The chirality operator is then the product of all the Fermion number operators $\gamma=F^1\cdots F^n$.
The Fermion number operators have eigenvalues $\pm 1$. The eigenvalues of the Fermion number operators can be used to label a basis of states. One can define a ground state $\ket{0}$ which is annihilated by all the lowering operators. It has Fermion number $-1$ for all Fermion number operators. All other states can be obtained by applying raising operators. If one labels a state by $\ket{\nu_1,\cdots,\nu_n}$, then the raising and lowering operators act as follows: &=& \_1\_[m-1]{}(\^m)\^,\
&=& \_1\_[m-1]{}\^m . This defines the matrix elements of the gamma matrices. One can see that in this basis $\Gamma^m$ is real. From this follows that
- the matrices $\sqrt{\eta^m}\gamma^m$ are Hermitean,
- the matrices $\sqrt{\eta^{2m}}\gamma^{2m}$ are symmetric and real and
- the matrices $\sqrt{\eta^{2m+1}}\gamma^{2m+1}$ are antisymmetric and imaginary.
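As a quick numerical sanity check of the relations above, one can build an explicit (hypothetical) four-dimensional Euclidean example from Pauli matrices and verify the anticommutators of the raising and lowering operators; this particular representation is just one convenient choice and is not the one used in the body of the paper.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# four Euclidean gamma matrices built as tensor products (one possible choice)
gammas = [np.kron(s1, I2), np.kron(s2, I2), np.kron(s3, s1), np.kron(s3, s2)]

# {gamma^m, gamma^n} = 2 delta^{mn}
for m, gm in enumerate(gammas):
    for n, gn in enumerate(gammas):
        assert np.allclose(gm @ gn + gn @ gm, 2 * (m == n) * np.eye(4))

# raising operators Gamma^m = gamma^{2m} + i gamma^{2m+1}
G = [gammas[0] + 1j * gammas[1], gammas[2] + 1j * gammas[3]]
for m, a in enumerate(G):
    for n, b in enumerate(G):
        # {Gamma^m, Gamma^n} = 0 and {Gamma^m, (Gamma^n)^dagger} = 4 delta^{mn}
        assert np.allclose(a @ b + b @ a, np.zeros((4, 4)))
        assert np.allclose(a @ b.conj().T + b.conj().T @ a, 4 * (m == n) * np.eye(4))
```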
In general there are matrices $B$, $C$ and $D$ such that (\^m)\^&=&\_BB\^m B\^[-1]{},\
(\^m)\^&=&\_CC\^m C\^[-1]{},\
(\^m)\^t&=&\_DD\^m D\^[-1]{}, where $\eta_B,\eta_C,\eta_D=\pm 1$ is a constant. Given a spinor $\epsilon$, $\ast\epsilon=B^{-1}\epsilon^\ast$, $\bar\epsilon=\epsilon^\dagger C$ and $\tilde\epsilon=\epsilon^t D$ transform covariantly.
If $BB^\ast=\Bid$ one can impose the Majorana condition $\epsilon=\ast\epsilon$. And if $B$ commutes with the chirality operator $\gamma$, one can impose the Majorana-Weyl condition.
$Spin(1,9)$
-----------
Chirality operator: \^[(10)]{}=\^[09]{}
Complex conjugation: B\^[(10)]{}=\^[013579]{}
B\^[(10)]{}\^M(B\^[(10)]{})\^[-1]{}=(\^M)\^
Hermitean conjugation: C\^[(10)]{}=\^0
C\^[(10)]{}\^M(C\^[(10)]{})\^[-1]{}=-(\^M)\^
Transpose: D\^[(10)]{}=\^[13579]{}
D\^[(10)]{}\^M(D\^[(10)]{})\^[-1]{}=-(\^M)\^t
$Spin(1,3)$ – $AdS_4$
---------------------
\^[(1)]{}=i\^[0123]{}
Complex Conjugation: B\^[(1)]{}=\^[2]{}
B\^[(1)]{}\^(B\^[(1)]{})\^[-1]{}=(\^)\^ B\^[(1)]{}(i\^[(1)]{})(B\^[(1)]{})\^[-1]{}=(i\^[(1)]{})\^
B\^[(1)]{}(B\^[(1)]{})\^=
Hermitean conjugation: C\^[(1)]{}=\^[123]{}
C\^[(1)]{}\^(C\^[(1)]{})\^[-1]{}=(\^)\^ C\^[(1)]{}(i\^[(1)]{})(C\^[(1)]{})\^[-1]{}=(i\^[(1)]{})\^
Transpose: D\^[(1)]{}=\^[13]{}
D\^[(1)]{}\^(D\^[(1)]{})\^[-1]{}=(\^)\^t D\^[(1)]{}(i\^[(1)]{})(D\^[(1)]{})\^[-1]{}=(i\^[(1)]{})\^t
$Spin(2)$ – $S^2$
-----------------
\^[(2)]{}=i\^[45]{}
Complex Conjugation: B\^[(2)]{}=\^[5]{}
B\^[(2)]{}\^m (B\^[(2)]{})\^[-1]{}=-(\^m)\^ B\^[(2)]{}\^[(2)]{}(B\^[(2)]{})\^[-1]{}=-(\^[(2)]{})\^
B\^[(2)]{}(B\^[(2)]{})\^\*=-
Hermitean conjugation: C\^[(2)]{}=
C\^[(2)]{}\^m(C\^[(2)]{})\^[-1]{}=(\^m)\^ C\^[(2)]{}\^[(2)]{}(C\^[(2)]{})\^[-1]{}=(\^[(2)]{})\^
Transpose: D\^[(2)]{}=\^[5]{}
D\^[(2)]{}\^m(D\^[(2)]{})\^[-1]{}=-(\^m)\^t D\^[(2)]{}\^[(2)]{}(D\^[(2)]{})\^[-1]{}=-(\^[(2)]{})\^t
Similarly \^[(3)]{}=i\^[67]{}, B\^[(3)]{}=\^7,C\^[(3)]{}=D\^[(3)]{}=\^7.
$Spin(2)$ – $M_2$
-----------------
\^[(2)]{}=i\^[89]{}
Complex Conjugation: B\^[(4)]{}=\^[8]{}
B\^[(4)]{}\^a (B\^[(4)]{})\^[-1]{}=(\^a)\^
B\^[(4)]{}(B\^[(4)]{})\^\*=
Hermitean conjugation: C\^[(4)]{}=
C\^[(4)]{}\^a(C\^[(4)]{})\^[-1]{}=(\^m)\^
Transpose: D\^[(4)]{}=\^[8]{}
D\^[(4)]{}\^a(D\^[(4)]{})\^[-1]{}=(\^a)\^t
Decomposition of a 10-dimensional Spinor
----------------------------------------
The ten dimensional gamma matrix algebra can be decomposed in the following way
[l]{} \^=\^,\
\^m=\^[(1)]{}\^m,\
\^i=\^[(1)]{}\^[(2)]{}\^i,\
\^a=\^[(1)]{}\^[(2)]{}\^[(3)]{}\^a,\
\^[(10)]{}= \^[(1)]{}\^[(2)]{}\^[(3)]{}\^[(4)]{},\
B\^[(10)]{}=-B\^[(1)]{}B\^[(2)]{}(B\^[(3)]{}\^[(3)]{}) (B\^[(4)]{}\^[(4)]{}),\
C\^[(10)]{}= -i(C\^[(1)]{}\^[(1)]{})C\^[(2)]{}C\^[(3)]{}C\^[(4)]{},\
D\^[(10)]{}=-i(D\^[(1)]{}\^[(1)]{})D\^[(2)]{}(D\^[(3)]{}\^[(3)]{}) (D\^[(4)]{}\^[(4)]{}).
Relations for spinor bilinears
==============================
In this appendix we summarize some properties of spinor bilinears. (|\_2\^M\_1)\^&=& |\_1\^M\_2,\
\_1\^M\_2&=& -\_2\^M\_1,\
\^M\_2&=& |\_2\^M\_1,\
\^M\_2&=& \_1\^M\_2. The following table summarizes the symmetry properties of 8-dimensional spinor bilinears under transposition (exchange of $(\alpha\dot\alpha)$ and $(\beta\dot\beta)$) and complex conjugation \[bilinearsymmetries\]
[|l|l|l|]{} &t&\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^\^) (\^[(2)]{}\^) (\^[(3)]{}\^)) \^[(1,1,1)]{}\_& \_1\_1\^\_2\_2\^\_3\_3\^& -\^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^) (\^[(2)]{}\^\^m) (\^[(3)]{}\^)) \^[(1,1,1)]{}\_& \_3\_3\^& -\^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^) (\^[(2)]{}\^) (\^[(3)]{}\^\^i)) \^[(1,1,1)]{}\_& -\_2\_2\^& \^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^) (\^[(2)]{}\^) (\^[(3)]{}\^)) \^[(1,1,1)]{}\_& -\_2\_2\^\_3\_3\^& \^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^\^) (\^[(2)]{}\^) (\^[(3)]{}\^)) \^[(1,1,1)]{}\_& -\_1\_1\^\_2\_2\^\_3\_3\^& -\^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^\^) (\^[(2)]{}\^) (\^[(3)]{}\^)) \^[(1,1,1)]{}\_& \_2\_2\^\_3\_3\^& \^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^) (\^[(2)]{}\^\^[mn]{}) (\^[(3)]{}\^)) \^[(1,1,1)]{}\_& \_2\_2\^\_3\_3\^& \^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^) (\^[(2)]{}\^) (\^[(3)]{}\^\^[ij]{})) \^[(1,1,1)]{}\_& \_2\_2\^\_3\_3\^& \^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^) (\^[(2)]{}\^\^m) (\^[(3)]{}\^\^i)) \^[(1,1,1)]{}\_& -1& -\^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^) (\^[(2)]{}\^\^[mn]{}) (\^[(3)]{}\^\^i)) \^[(1,1,1)]{}\_& \_2\_2\^& \^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^) (\^[(2)]{}\^\^m) (\^[(3)]{}\^\^[ij]{})) \^[(1,1,1)]{}\_& -\_3\_3\^& -\^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^\^) (\^[(2)]{}\^\^[mn]{}) (\^[(3)]{}\^\^i)) \^[(1,1,1)]{}\_& -\_2\_2\^& \^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^\^) (\^[(2)]{}\^\^m) (\^[(3)]{}\^\^[ij]{})) \^[(1,1,1)]{}\_& \_3\_3\^& -\^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^\^) (\^[(2)]{}\^\^[mn]{}) (\^[(3)]{}\^\^[ij]{})) \^[(1,1,1)]{}\_& \_1\_1\^\_2\_2\^\_3\_3\^& -\^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^\^) (\^[(2)]{}\^\^[mn]{}) (\^[(3)]{}\^)) \^[(1,1,1)]{}\_& -\_2\_2\^\_3\_3\^& \^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^\^) (\^[(2)]{}\^) (\^[(3)]{}\^\^[ij]{})) \^[(1,1,1)]{}\_& -\_2\_2\^\_3\_3\^& \^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^\^) (\^[(2)]{}\^\^[mn]{}) (\^[(3)]{}\^\^i)) \^[(1,1,1)]{}\_& \_1\_1\^\_2\_2\^& -\^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^\^) (\^[(2)]{}\^\^m) (\^[(3)]{}\^\^[ij]{})) \^[(1,1,1)]{}\_& -\_1\_1\^\_3\_3\^& \^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^\^) (\^[(2)]{}\^\^[mn]{}) (\^[(3)]{}\^)) \^[(1,1,1)]{}\_& -\_1\_1\^\_2\_2\^\_3\_3\^& -\^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^\^) (\^[(2)]{}\^) (\^[(3)]{}\^\^[ij]{})) \^[(1,1,1)]{}\_& -\_1\_1\^\_2\_2\^\_3\_3\^& -\^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^) (\^[(2)]{}\^\^[mn]{}) (\^[(3)]{}\^\^i)) \^[(1,1,1)]{}\_& \_2\_2\^& \^\
|\^[(1,1,1)]{}\_ ((\^[(1)]{}\^) (\^[(2)]{}\^\^m) (\^[(3)]{}\^\^[ij]{})) \^[(1,1,1)]{}\_& -\_3\_3\^& -\^\
where $\eta\eta^\prime=\eta_1\eta_1^\prime\eta_2\eta_2^\prime\eta_3\eta_3^\prime$.
[^1]:
[^2]:
[^3]: This brane configuration is a generalization of the brane construction in [@Hanany:1996ie] studied in the context of three dimensional mirror symmetry.
[^4]: Recently, Yamaguchi [@Yamaguchi:2006te] has made an analogous ansatz relevant for Wilson loops.
[^5]: The undotted index $\alpha=1,\cdots,4$ is an index in the real fundamental representation of $Sp(4,\BR)$, whereas the dotted index $\dot\alpha=1,\cdots,4$ is an index in the real fundamental representation of $SO(4)$.
[^6]: In [@Fayyazuddin:2002bm] the supergravity equations for intersecting D3/D5 branes were analyzed.
[^7]: Note that $\ast\zeta=\gamma^8\sigma^{(2,2,2)}\zeta^\ast$ is the covariant complex conjugation.
[^8]: The $\sigma$-s are Pauli matrices acting on the $\eta$-indices. Here $\sigma^0=\Bid$ and $\sigma^{(i,j,k)}=\sigma^i\otimes\sigma^j\otimes\sigma^k$.
[^9]: One could choose a different holomorphic function, but the solution would still locally be $AdS_5\times S^5$.
[^10]: By the square root we mean $\sqrt{1}=1$ and $\sqrt{-1}=i$.
---
abstract: 'Protests and agitations are an integral part of every democratic civil society. In recent years, South Africa has seen a large increase in its protests. The objective of this paper is to provide an early prediction of the duration of protests from its free flowing English text description. Free flowing descriptions of the protests help us in capturing its various nuances such as multiple causes, courses of actions etc. Next we use a combination of unsupervised learning (topic modeling) and supervised learning (decision trees) to predict the duration of the protests. Our results show a high degree (close to 90$\%$) of accuracy in early prediction of the duration of protests. We expect the work to help police and other security services in planning and managing their resources in better handling protests in future.'
author:
-
title: Early prediction of the duration of protests using probabilistic Latent Dirichlet Allocation and Decision Trees
---
Introduction
============
Protests and agitations are an integral part of any democratic civil society. Not to be left behind by the rest of the world, South Africa in recent years has also seen a massive increase in public protests. The causes of these protests are varied and have ranged from service delivery, labor related issues, crime, and education to environmental issues.\
While in the past multiple studies and news articles have analyzed the nature and causes of such protests, this research uses a combination of unsupervised (topic modeling using probabilistic Latent Dirichlet Allocation (pLDA)) and supervised (single and ensemble decision trees) learning to predict the duration of future protests. We develop an approach in which a user inputs a description of a protest in free flowing English text and the system predicts the duration of the protest to a high (close to 90$\%$) degree of accuracy. We expect that an early correct prediction of the duration of a protest by the system will allow police and other security services to better plan and allocate resources to manage the protest.
Problem Statement
=================
The objective of this research is to provide an early prediction of the duration of a protest based on South African protest data. The master dataset is obtained from the website of Code for South Africa [@IEEEhowto:CodeforSA]. It consists of 20 features (columns) describing 876 instances (rows) of protests over the period 1$^{st}$ February 2013 to 3$^{rd}$ March 2014. Among the 20 features, the statistically important and hence selected ones are shown in Table \[table:imp.stat.features\]. The rest are repeated codifications of the important features and convey the same statistical information; hence they are ignored. Also, detailed addresses of the location of the protests (town or city name, first street, cross street, suburb area place name etc.) are not considered. Instead, the more accurate measures - coordinates (latitudes and longitudes) - are used.\
Whether Metro or not Police station
------------------------------------ --------------------------------
Coordinates Start date
End date Cause of protest
Status of protest (Violent or not) Reason for protest (text data)
: Important Statistical Features[]{data-label="table:imp.stat.features"}
**Provinces** **$\%$ of protests** **Provinces** **$\%$ of protests**
--------------- ---------------------- --------------- ----------------------
Gauteng 37 Limpopo 6
Western Cape 18 Mpumalanga 5
Kwazulu Natal 14 Free State 3
Eastern Cape 9 Northern Cape 2
North West 6
: Percentage of protests vis-a-vis provinces[]{data-label="table:prov.of.protests"}
Table \[table:prov.of.protests\], \[table:issues.of.protests\], \[table:state.of.protests\] and \[table:duration.of.protests\] show the overall descriptive statistics of the protests during the above mentioned period. From the 876 rows, three are removed for which one or more columns are missing[^1]. Thus our modeling exercise is based upon 873 instances of protests. Table \[table:prov.of.protests\] shows that Gauteng[^2], as the seat of commerce and of the President and Cabinet, and Western Cape[^3], as the legislative capital, have the largest concentrations of protests, followed by the others. Table \[table:issues.of.protests\] shows that the three largest issues of protests are service delivery, labor related issues, and crime related issues at respectively 31$\%$, 30$\%$ and 12$\%$ of the total. From table \[table:state.of.protests\] it can be seen that the difference between peaceful and violent protests is small, at 55$\%$ and 45$\%$ respectively. However the most interesting insight comes from the duration of protests[^4]. From table \[table:duration.of.protests\] it can be observed that the majority of protests (74.34$\%$ of the total) last for less than 24 hours. Thus this feature is highly skewed.\
**Issue** **$\%$ of protests** **Issue** **$\%$ of protests**
------------------ ---------------------- ------------------- ----------------------
Service delivery 31 Political 4
Labour 30 Transport 3
Crime 12 Xenophobia 2
Election 6 Individual causes 1
Vigilantism 5 Environment 1
Education 5
: Percentage of protests vis-a-vis issues[]{data-label="table:issues.of.protests"}
**State** **$\%$ of protests**
----------- ----------------------
Peaceful 55
Violent 45
: Percentage of protests vis-a-vis state []{data-label="table:state.of.protests"}
**Duration** **$\%$ of protests**
---------------------------------------- ----------------------
0 (less than 24 hrs) 74.34
1 11.34
2 4.58
3 2.06
4 2.17
5-13,19, 21-23, 31, 34, 37, 39, 57, 65 less than 1$\%$
: Percentage of protests vis-a-vis duration in days []{data-label="table:duration.of.protests"}
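As a concrete illustration of how the duration in Table \[table:duration.of.protests\] is derived (End date minus Start date, cf. footnote 4), a minimal pandas sketch is given below; the file name and column labels are hypothetical stand-ins for the actual headers of the Code for South Africa export.

```python
import pandas as pd

# "protest_data.csv", "Start date" and "End date" are assumed names; the actual
# export from Code for South Africa may label these columns differently.
df = pd.read_csv("protest_data.csv", parse_dates=["Start date", "End date"])
df = df.dropna()  # drops the 3 incomplete rows, leaving 873 protests

df["duration_days"] = (df["End date"] - df["Start date"]).dt.days
print(df["duration_days"].value_counts(normalize=True))  # ~74% last less than one day

# binary response used later in the paper: less than one day vs. one or more days
y = (df["duration_days"] >= 1).astype(int)
```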
The idea behind this work is to use only the text description of the protests (predictor variable) to predict the duration of future protest(s) (response variable). Some typical examples of the descriptions of protests are as follows. Flagged as service delivery and violent - “Residents of both towns Butterworths and Centane blockaded the R-47 between the two towns, accusing the Mnquma Municipality of ignoring their request for repairs to the road.” Eyeballing the text does not indicate any violence. A second example flagged as service delivery and peaceful protest is “ANGRY community leaders in four North West villages under the Royal Bafokeng Nations jurisdiction protested this week against poor services and widespread unemployment among the youth. Now they not only demand their land back, but want a 30$\%$ stake in the mines which are said to employ labour from outside the villages.”. While the first line of the text referred to the cause of the protest as service delivery, the second line referred to political (demand for return of land) and labor issues (30$\%$ stake in mines and a corresponding increase in employment). In this sense, we believe that strict flagging of protests into one category or another restricts the knowledge of the protests. We also believe that it is normal for human social concerns to spill from one area into another during protests, which is not well captured by a single restrictive flag attached to one protest. Thus we drop the categorical features (with strict class labels) and consider only the text descriptions of the protests as a predictor of their duration. Another important advantage of using text descriptions is that they give the flow/progress of events that occurred during a protest and other relevant details. Figure \[figure:wordcloud\] shows the word cloud of the entire protest corpus. Prior to building the word cloud, the usual preprocessing of the text corpus, such as removal of punctuation, numbers, common English stopwords, white spaces etc., has been carried out. The cloud consists of 75 words[^5] and only words that occur at least 25 times are included. The higher the frequency of a word in the text corpus, the bigger its font size. Lastly, due to word stemming, “resid” is created from words like residence and residing.
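The preprocessing summarized above can be sketched as follows (the paper's own code is in R; this is a Python analogue, where the NLTK stopword list and Porter stemmer are stand-ins for whichever tools were actually used, and `protest_descriptions` is a hypothetical list holding the 873 raw text descriptions).

```python
import re
from nltk.corpus import stopwords   # requires the nltk stopword data to be downloaded
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def preprocess(text):
    """Lower-case, strip punctuation and numbers, drop stopwords, stem."""
    text = re.sub(r"[^a-z\s]", " ", text.lower())
    tokens = [t for t in text.split() if t not in stop_words]
    return [stemmer.stem(t) for t in tokens]   # e.g. "residents" -> "resid"

docs = [preprocess(d) for d in protest_descriptions]
```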
![Word cloud of the entire text courpus of the protests[]{data-label="figure:wordcloud"}](Rplot_fig1 "fig:"){width="3.5in"} .
THEORETICAL CONCEPT - PROBABILISTIC LATENT DIRICHLET ALLOCATION
===============================================================
In this subsection we provide a short introduction to pLDA. LDA is an unsupervised generative probabilistic model primarily used for topic discovery. It states that, for a collection of words in a large number of documents, each document is a mixture of a number of “latent topics”[^6] and each word in a document is generated by a latent topic. Following [@IEEEhowto:BleiNgJordan2003], [@IEEEhowto:Reed], a document is a random mixture of latent topics and each topic[^7] in turn is characterized by a distribution over words. Mathematically the LDA model can be stated as follows. For each document **w** in a corpus *D*
1. Choose $N \sim Poisson(\xi)$
2. Choose $\theta \sim Dir(\alpha)$
3. For each of the *N* words $w_{n}$:
1. Choose a topic $z_{n} \sim Multinomial(\theta)$
2. Choose a word $w_{n}$ from $p(w_{n}|z_{n}, \beta)$, a multinomial probability conditioned on the topic $z_{n}$
where a document **w** is a combination of *N* words, for example $\textbf{w} = (w_{1}, w_{2}, ...., w_{N})$. A corpus *D* is a collection of *M* documents such that $D = (\textbf{w}_{1}, \textbf{w}_{2}, ...., \textbf{w}_{M})$. $\alpha$ and $\beta$ are parameters of the Dirichlet priors on the per-document topic distribution and the per-topic word distribution respectively. *z* is a vector of topics. The central idea of LDA is to find the posterior distribution of the hidden variables ($\theta$, *z*) given the document (**w**), i.e.
$$p(\theta, z|\textbf{w}, \alpha, \beta) = \dfrac{p(\theta, z, \textbf{w}|\alpha, \beta)}{p(\textbf{w}|\alpha, \beta)}$$
Since it is beyond the scope of this paper to derive the detailed formula, we summarize the two other important results that will subsequently be required in our analysis. The marginal distribution of a document is:
$$p(\textbf{w}|\alpha, \beta) = \int p(\theta|\alpha)\left(\prod_{n=1}^{N}\sum_{z_{n}} p(z_{n}|\theta)\,p(w_{n}|z_{n}, \beta)\right)d\theta$$
The probability of a corpus is:
$$p(D|\alpha, \beta) = \prod_{d=1}^{M}\int p(\theta_{d}|\alpha)\left(\prod_{n=1}^{N_{d}}\sum_{z_{dn}} p(z_{dn}|\theta_{d})\,p(w_{dn}|z_{dn}, \beta)\right)d\theta_{d}$$
Experimental Setup
==================
As seen in table \[table:duration.of.protests\], our response variable - the duration of protests - is highly skewed. Protests lasting less than 24 hours make up 74.34$\%$ of the total number of protests, and each duration above one day individually accounts for less than 5$\%$ of the total. In effect it means that in reality, South Africa rarely experiences protests that stretch beyond one day. So for practical purposes, we group protests lasting one day or more into a single class and compare it against protests lasting less than one day. The right panel of figure \[figure:twoclass\] shows the percentage of protests falling in less than one day (74.34$\%$) and one or more days (25.66$\%$). Thus the binary classification problem now is to correctly predict the response variable (less than one day against one or more days) from the text corpus of the protest descriptions.\
![Setting of two class classification problem[]{data-label="figure:twoclass"}](Rplot_fig2 "fig:"){width="3.5in"} .
However, before getting into the classification exercise we need to perform two tasks. First, we need to find the optimal number of latent topics in the text corpus. Second, since classification algorithms per se cannot take text documents as input, we need to extract a set of latent topics for each text description of a protest. In the next section we discuss the above tasks and their results.
RESULTS AND DISCUSSIONS
=======================
Finding the optimal number of hidden topics from the text corpus is an important task. If the chosen number of topics is too low, then the LDA model is unable to identify sufficiently accurate topics. However if the number is too high, the model becomes increasingly complex and thus less interpretable [@IEEEhowto:Zhao2015] [@IEEEhowto:Zhao2014]. In contrast to the often used procedure of intelligently guessing the optimal number of topics in a text corpus, following [@IEEEhowto:BleiNgJordan2003] we use the perplexity approach to find the optimal number of topics. Often used in language/text based models, perplexity is a measure of “on average how many different equally most probable words can follow any given word” [@IEEEhowto:NClab]. In other words, it is a measure of how well a statistical model (in our case LDA) describes a dataset [@IEEEhowto:Zhao2015], where a lower perplexity denotes a better model. It is mathematically denoted by:
$$perplexity(D_{test}) = exp \{- \dfrac{\sum_{d=1}^{M}logp(\textbf{w}_{d})}{\sum_{d=1}^{M} N_{d}} \}$$
where the symbols have the same meaning as in section III. Using trial numbers of topics ranging from 2 to 30[^8], we use perplexity to find the optimal number of topics for our text corpus. The text corpus is broken into a training and a test set, and a ten-fold cross validation is carried out using 1000 iterations. The black dots show the perplexity score for each fold of cross validation on the test set for various numbers of topics. The average perplexity score is shown by the blue line in figure \[figure:opt.no.topics\], and the score is lowest for 24 latent topics. Thus the optimal number of topics for our text corpus is 24.\
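The topic-number selection by held-out perplexity can be sketched as follows (the paper's computation was presumably done in R; this is a Python/scikit-learn analogue, with `docs` the preprocessed documents from the earlier sketch, and the iteration count of the variational fit only loosely mirroring the 1000 sampler iterations quoted in the text).

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import KFold

X = CountVectorizer().fit_transform([" ".join(d) for d in docs])  # document-term matrix

mean_perplexity = {}
for k in range(2, 31):                                  # trial numbers of topics
    scores = []
    for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        lda = LatentDirichletAllocation(n_components=k, max_iter=100, random_state=0)
        lda.fit(X[train_idx])
        scores.append(lda.perplexity(X[test_idx]))      # lower is better
    mean_perplexity[k] = np.mean(scores)

best_k = min(mean_perplexity, key=mean_perplexity.get)  # the paper finds 24
```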
![Finding the optimal number of topics[]{data-label="figure:opt.no.topics"}](Rplot_fig3 "fig:"){width="3.5in"} .
Next we perform an LDA with 24 topics on our text corpus. Table \[table:text.desc.assoc.topic.prob\] shows an example of a text description and the probabilities associated with the various topics.\
**Text** **P(T$_{1}$)..** **..P(T$_{8}$)** **...** **..P(T$_{20}$)** **..P(T$_{24}$)**
----------------------- ------------------ ------------------ --------- ------------------- -------------------
The residents wanted 0.0027 0.15 ... 0.79 0.0028
Sterkspruit to be
moved from the
Senqu municipality
and be a municipality
on its own
: AN EXAMPLE OF A TEXT DESCRIPTION AND ITS ASSOCIATED TOPIC PROBABILITIES[]{data-label="table:text.desc.assoc.topic.prob"}
\*\*\* **P(T$_{n}$)** refers to probability of the n$^{th}$ topic where n varies from 1 to 24. Here T$_{1}$ = shop, T$_{8}$ = march, T$_{20}$ = municip and T$_{24}$ = anc.
The topic names are given at the bottom of the table. The topics municip and march have the highest probabilities for this text. It might also be noted that, since the topics are multinomially distributed over the entire text corpus and the words in the corpus, there is no direct or visible relationship that connects the topics with the text descriptions. We can only assume that the topics are related to the text descriptions in a complex way.\
Table \[table:text.desc.assoc.highest.topic.prob\] shows the four topics with the largest probabilities (largest, 2nd largest, ..., 4th largest) associated with a text description. With computational cost in mind, we restrict ourselves to the four most probable topics.\
**Text** **LP** **2$^{nd}$ LP** **3$^{rd}$ LP** **4$^{th}$ LP**
----------------------- --------- ----------------- ----------------- -----------------
The residents wanted municip march resid hospit
Sterkspruit to be
moved from the
Senqu municipality
and be a municipality
on its own
: AN EXAMPLE OF TEXT DESCRIPTION AND ITS ASSOCIATED HIGHEST PROBABILITY TOPICS[]{data-label="table:text.desc.assoc.highest.topic.prob"}
\*\*\* **P(T$_{n}$)** refers to probability of the n$^{th}$ topic where n varies from 1 to 24. Here T$_{1}$ = shop, T$_{8}$ = march, T$_{20}$ = municip and T$_{24}$ = anc.
It might be recalled from the last paragraph of section IV that classification algorithms per se cannot take text documents as a predictor variable. Table \[table:text.desc.assoc.highest.topic.prob\] shows a way in which a single text description can be represented as a set of its most relevant[^9] topics. Thus for the entire corpus of 873 text documents, the predictor side of the classification model consists of a topic matrix of dimensions 873x4. Including the response variable, our modeling data has dimensions 873x5.\
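The construction of this 873x4 topic matrix from the fitted model can be sketched as follows (again in Python rather than the R used for the paper; `lda` and `X` are from the previous sketch and `topic_names` is a hypothetical array holding the 24 topic labels such as "shop", "march", "municip", ...).

```python
doc_topic = lda.transform(X)                          # 873 x 24 matrix of topic probabilities
top4 = np.argsort(doc_topic, axis=1)[:, ::-1][:, :4]  # indices of the 4 most probable topics
topic_matrix = np.asarray(topic_names)[top4]          # 873 x 4 matrix of topic labels

# stacking the binary duration class y next to topic_matrix gives the 873 x 5 modeling data
```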
In addition, it might also be recalled that our response variable, the duration of a protest, is unbalanced, with 74.34$\%$ of the values falling in the class less than one day and 25.66$\%$ in the class one or more days. Since unbalanced classes are not learned well by decision trees, we use a both-ways (up- and down-) sampling strategy to balance the modeling data. Next the modeling data is split into a training and a test set in the 7:3 ratio. The dimensions of the training and test sets are 910x5 and 388x5 respectively.\
In the next step we use decision tree based classification algorithms to model the relationship between the topic matrix and the duration of the protests. Specifically, we use three algorithms - C5.0, treebag, and random forest - and carry out a ten-fold cross validation with five repeats. The prediction results for the class one or more days on the test dataset are shown in table \[table:perf.metric\].\
**** **C5.0** **Treebag** **Random forest**
------------------- ----------- ------------- -------------------
Balanced accuracy 79.38$\%$ 88.40$\%$ 89.69$\%$
Kappa 0.590 0.769 0.795
Sensitivity 87.03$\%$ 94.59$\%$ 95.68$\%$
Specificity 72.41$\%$ 82.76$\%$ 84.24$\%$
: PERFORMANCE METRICS OF VARIOUS ALGORITHMS ON THE TEST SET[]{data-label="table:perf.metric"}
Treebag, with a number of decision trees, is able to predict the response variable better than the single tree in C5.0 because multiple trees help in reducing variance without increasing bias. Again, random forest performs better than treebag because, in addition to using multiple trees, it also randomly selects a subset of the total number of features at each node; the best splitting feature from that subset is then used to split the node. This salient feature of random forest is absent in treebag. The combination of a number of trees and random selection of a subset of features at each node helps in further reducing the variance of the model. Thus, in effect, random forest performs best in predicting the duration of the protests on the test dataset.\
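For concreteness, a Python analogue of this modeling step might look like the sketch below; the actual analysis was done in R (caret's C5.0, treebag and rf, see the link that follows), here a plain decision tree, bagged trees and a random forest from scikit-learn stand in for them, and the both-ways class balancing step is omitted for brevity.

```python
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# topic_matrix and y are assumed from the previous sketches
X_feat = OrdinalEncoder().fit_transform(topic_matrix)  # encode topic labels as integers

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
models = {
    "single tree (C5.0 stand-in)": DecisionTreeClassifier(random_state=0),
    "bagged trees (treebag)": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X_feat, y, cv=cv, scoring="balanced_accuracy")
    print(f"{name}: balanced accuracy = {acc.mean():.3f}")
```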
The codes for the analysis are in `https://www.dropbox.com/s/dzyj2lviqnlgk5x/dropbox_ieee_la_cci_2017.zip?dl=0`
LIMITATIONS
===========
The first limitation of this research is that LDA does not allow for the evolution of topics over time. This means that, as long as the nature and scope of the protests remain fairly stable over time, our model is expected to perform fairly well. Second, since it is a bag-of-words model, sentence structures are not considered. Third, topics are not independent of one another, so the same word may represent two topics, creating a problem of interpretability.
Conclusion
==========
This paper combines unsupervised and supervised learning to predict the duration of protests in the South African context. Protests and agitations are social issues with many nuances in terms of causes, courses of action, etc. that cannot be captured well by restrictive tags. We therefore discard the approach of restrictive characterization of protests and instead use free-flowing English text to understand their nature, cause(s), course(s) of action, etc. Topic discovery (unsupervised learning using pLDA) followed by classification (supervised learning using various decision tree algorithms) provides promising results in predicting the duration of protests. We expect that an implementation of this framework can help police and other security services to better allocate resources for managing future protests.
[1]{}
Code for South Africa, *Protest Data*, <https://data.code4sa.org/dataset/Protest-Data/7y3u-atvk>.
D.M. Blei, A.Y. Ng and M.I. Jordan, *Latent Dirichlet Allocation*, Journal of Machine Learning Research, Vol. 3, pp. 993-1022, 2003.
C. Reed, *Latent Dirichlet Allocation: Towards a Deeper Understanding*, [http://obphio.us/pdfs/lda tutorial.pdf](http://obphio.us/pdfs/lda tutorial.pdf).
W. Zhao, J.J. Chen, R. Perkins, Z. Liu, W. Ge, Y. Ding and W. Zou, *A heuristic approach to determine an appropriate number of topics in topic modeling*, 12th Annual MCBIOS Conference, March 2015.
W. Zhao, W. Zou and J.J. Chen, *Topic modeling for cluster analysis of large biological and medical datasets*, BMC Bioinformatics, Vol. 15 (Suppl 11), 2014.
NCLab, *Perplexity*, <http://nclab.kaist.ac.kr/twpark/htkbook/node218_ct.html>.
[^1]: Since an insignificant percentage (0.34$\%$) of our data is missing we conveniently remove them without doing missing value imputation.
[^2]: Gauteng has in it Johannesburg and Pretoria. Johannesburg is the commercial hub of South Africa, and the office of President and Cabinet is in Pretoria.
[^3]: Cape Town as the seat of South African parliament is situated in Western Cape.
[^4]: Duration of protests is the difference between the End day and the Start Day of the protest.
[^5]: The number of words is kept low for better visibility.
[^6]: Note that the topics are latent because they are neither defined semantically nor epistemologically.
[^7]: Topic and latent topic mean the same thing and are used interchangeably.
[^8]: The x axis of fig \[figure:opt.no.topics\] shows the trial number of topics.
[^9]: The most relevant topics are the ones with highest probabilities.
---
author:
- 'R. Hainich'
- 'L.M. Oskinova'
- 'J.M. Torrejón'
- 'F. Fuerst'
- 'A. Bodaghee'
- 'T. Shenar'
- 'A.A.C. Sander'
- 'H. Todt'
- 'K. Spetzer'
- 'W.-R. Hamann'
bibliography:
- 'paper.bib'
date: 'Received <date> / Accepted <date>'
title: 'The stellar and wind parameters of six prototypical HMXBs and their evolutionary status[^1] '
---
[ High-mass X-ray binaries (HMXBs) are exceptional astrophysical laboratories that offer a rare glimpse into the physical processes that govern accretion on compact objects, massive-star winds, and stellar evolution. In a subset of the HMXBs, the compact objects accrete matter solely from winds of massive donor stars. These so-called wind-fed HMXBs are divided in persistent (classical) HMXBs and supergiant fast X-ray transients (SFXTs) according to their X-ray properties. While it has been suggested that this dichotomy depends on the characteristics of stellar winds, they have been poorly studied. ]{} [ With this investigation, we aim to remedy this situation by systematically analyzing donor stars of wind-fed HMXBs that are observable in the UV, concentrating on those with neutron star (NS) companions. ]{} [ We obtained *Swift* X-ray data, HST UV spectra, and additional optical spectra for all our targets. The spectral analysis of our program stars was carried out with the Potsdam Wolf-Rayet (PoWR) model atmosphere code. ]{} [ Our multi-wavelength approach allows us to provide stellar and wind parameters for six donor stars (four wind-fed systems and two OBe X-ray binaries). The wind properties are in line with the predictions of the line-driven wind theory. Based on the abundances, three of the donor stars are in an advanced evolutionary stage, while for some of the stars, the abundance pattern indicates that processed material might have been accreted. When passing by the NS in its tight orbit, the donor star wind has not yet reached its terminal velocity but it is still significantly slower; its speed is comparable with the orbital velocity of the NS companion. There are no systematic differences between the two types of wind-fed HMXBs (persistent versus transients) with respect to the donor stars. For the SFXTs in our sample, the orbital eccentricity is decisive for their transient X-ray nature. The dichotomy of wind-fed HMXBs studied in this work is primarily a result of the orbital configuration, while in general it is likely that it reflects a complex interplay between the donor-star parameters, the orbital configuration, and the NS properties. Based on the orbital parameters and the further evolution of the donor stars, the investigated HMXBs will presumably form Thorne–Żytkow objects in the future. ]{}
Introduction {#sect:intro}
============
High-mass X-ray binaries (HMXBs) are binary systems consisting of a massive star, also denoted as the donor star, and a compact object, which is either a neutron star (NS) or a black hole (BH). These systems are characterized by high X-ray luminosities ($L_\mathrm{X} \approx 10^{36}\,\mathrm{erg/s}$) emitted by stellar material accreted onto the compact object. Multi-wavelength studies of HMXBs offer the opportunity to contribute to a variety of physical and astrophysical research areas, including but not limited to accretion physics, stellar evolution, and the precursors of gravitational wave events.
The HMXB population encompasses different types of binary systems. Depending on the orbital configuration, the evolution state of the donor star, and how the matter is channeled to the compact object, one can distinguish between different types: in Roche-lobe overflow (RLOF) systems, the compact object directly accretes matter via the inner Lagrangian point (L1). In wind-fed HMXBs, the compact object accretes from the wind of the donor star. In OBe X-ray binary systems, the donor stars are usually OB-type dwarfs with a decretion disk. The compact object in these kind of systems either accretes matter from these disks or from the donor-star winds.
The wind-fed HMXBs are of particular interest (for recent reviews see @Martinez-Nunez2017 and @Sander2018b). Here the compact object is situated in and accretes solely from the wind of the massive star, usually a supergiant. Therefore, these objects are also denoted as SgXBs. @Ostriker1973 realized that accretion from a stellar wind onto a compact object is sufficient to power the high X-ray luminosities observed for these objects. Depending on their X-ray properties, wind-fed HMXBs are divided into classical (or persistent) HMXBs and the so-called supergiant fast X-ray transients (SFXTs). While the former always exhibit an X-ray luminosity on the order of $L_\mathrm{X} \approx 10^{36}\,\mathrm{erg/s}$, the latter are characterized by quiescent X-ray phases with $L_\mathrm{X} \approx 10^{32}-10^{34}\,\mathrm{erg/s}$, which are interrupted by sporadic X-ray flares ($L_\mathrm{X} \ge 10^{36}\,\mathrm{erg/s}$). Although the origin of this dichotomy is hitherto not understood, it is assumed that the donor stars play an important role in this picture [e.g., @intZand2007; @Oskinova2012; @Krticka2015; @Gimenez-Garcia2016; @Sidoli2018].
The Bondi-Hoyle-Lyttleton accretion mechanism [@Hoyle1939; @Bondi1944] predicts that the X-ray luminosity ($L_\mathrm{X}$) of a wind-fed HMXB is very sensitive to the mass-loss rate ($\dot{M}$) and the wind velocity ($\vec{v}_\mathrm{wind}$) of the donor star, $$\label{eq:lx}
L_\mathrm{X} \propto \dot{M}/v_\mathrm{rel}^4~,$$ where $v_\mathrm{rel} = |\,\vec{v}_\mathrm{wind} + \vec{v}_\mathrm{orb}\,|$ is the relative velocity of the wind matter captured by the compact object. The orbital velocity ($\vec{v}_\mathrm{orb}$) is often neglected while evaluating Eq.(\[eq:lx\]), since stellar winds of OB-type stars have high terminal velocities, sometimes in excess of $2000\,\mathrm{km}\,\mathrm{s}^{-1}$. However, most HMXBs are compact systems with orbital periods of a few days [@Walter2015]. This implies that the distances between the compact objects and the donor stars are relatively small, which means that the donor star winds will not have reached their terminal velocities at the positions of the compact objects. On the other hand, $v_\mathrm{orb}$ can be quite high, especially during periastron in eccentric systems. Therefore, it is of high importance to reliably quantify the role of $v_\mathrm{wind}$ and $v_\mathrm{orb}$ in these kinds of systems, especially because of the strong dependence of $L_\mathrm{X}$ on $v_\mathrm{rel}$.
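To illustrate how strongly this matters, the following back-of-the-envelope sketch compares the scaling of Eq.(\[eq:lx\]) with and without the orbital velocity. The velocities are made-up but representative values, and the assumption of roughly perpendicular wind and orbital velocity vectors is an illustration, not a statement about any particular system.

```python
import numpy as np

v_wind = 400.0   # wind speed at the NS orbit in km/s (illustrative value)
v_orb = 300.0    # orbital speed of the NS in km/s (illustrative value)

# assuming the wind and orbital velocity vectors are roughly perpendicular,
# their magnitudes add in quadrature
v_rel = np.hypot(v_wind, v_orb)

# L_X scales as 1/v_rel^4, so neglecting v_orb overestimates L_X by (v_rel/v_wind)^4
factor = (v_rel / v_wind) ** 4
print(f"v_rel = {v_rel:.0f} km/s; neglecting v_orb overestimates L_X by a factor of {factor:.1f}")
```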
The X-rays emitted by the compact object can, in turn, have a significant impact on the donor star’s atmosphere and wind. These X-rays strongly ionize a certain part of the donor star wind, which can lead to significant changes in the observed spectra of these sources. This is demonstrated by @vanLoon2001 for important UV wind-lines using phase-resolved spectroscopy of several donor stars. The underlying mechanism was first discussed by @Hatchett1977 and is therefore also known as the Hatchett-McCray effect. Depending on the wind density, the orbital configuration, and the amount of X-rays emitted by the compact object, its influence on the donor wind can be quite diverse [e.g., @Blondin1990; @Blondin1994].
For high X-ray luminosities, @Krticka2015 and @Sander2018a show that the donor wind velocity field in the direction of the compact object can be significantly altered. This is because the radiation of the compact object changes the ionization balance in the donor star wind, leading to a modification of the radiative acceleration of the wind matter. In extreme cases the donor star winds can be virtually stopped or even disrupted.
In this work, we concentrate on wind-fed HMXBs with NS companions and moderate X-ray luminosities, where the Hatchett-McCray effect is of modest importance and the winds are not significantly disturbed. However, even for those systems, the X-rays need to be accounted for during the spectral analysis, since they might have a noticeable effect on the ionization balance in the donor star wind and consequently on the spectra.
Despite the strong connection between the X-ray properties of wind-fed HMXBs and the properties of the donor stars, only a few of these stars have been studied so far. One reason for this deficiency is that most of the wind-fed HMXBs are highly obscured. Therefore, the most important wavelength range for the analysis of OB-star winds, the UV that provides essential wind diagnostics, is often not accessible. In this work, we analyze four wind-fed HMXBs and two OBe X-ray binaries that are observable in the UV.
The paper is organized as follows: in Sect.\[sect:sample\], we introduce our sample, while the data used in this work are described in Sect.\[sect:data\]. The atmosphere models and the fitting process are outlined in Sect.\[sect:modelling\]. Our results are presented in Sect.\[sect:parameters\], and discussed in Sects.\[sect:evolution\] and \[sect:x-rays\]. The summary and conclusions can be found in Sect.\[sect:conclusions\]. Additional tables, comments on the individual objects, and the complete spectral fits are presented in Appendices\[sec:addtables\], \[sec:comments\], and \[sect:spectra\], respectively.
The sample {#sect:sample}
==========
[lllccl]{} Name & HMXB type & Spectral type of donor & Reference & Distance (kpc) & Alias names
------------------------------------------------------------------------
\
HD153919 & persistent & O6Iafpe & 1 & $1.7^{+0.3}_{-0.2}$ & 4U1700-37
------------------------------------------------------------------------
\
BD+6073 & intermediate & BN0.7Ib & 2 & $3.4^{+0.3}_{-0.2}$ & IGRJ00370+6122
------------------------------------------------------------------------
\
LMVel & SFXT & O8.5Ib-II(f)p & 3 & $2.2^{+0.2}_{-0.1}$ & IGRJ08408-4503
------------------------------------------------------------------------
\
HD306414 & SFXT & B0.5Ia & 4 & $6.5^{+1.4}_{-1.1}$ & IGRJ11215-5952
------------------------------------------------------------------------
\
BD+532790 & persistent/Oe X-ray & O9.5Vep & 5 & $3.3^{+0.4}_{-0.3}$ & 4U2206+54
------------------------------------------------------------------------
\
HD100199 & Be X-ray & B0IIIne & 6 & $1.3^{+0.1}_{-0.1}$ & IGRJ11305-6256
------------------------------------------------------------------------
\
While about 30 wind-fed HMXBs are known in our Galaxy (see @Martinez-Nunez2017 for a recent compilation), most of these objects are located in the Galactic plane [@Chaty2008]. Therefore, they are often highly obscured and are not observable in the UV. However, ultraviolet resonance lines make it possible to characterize even the relatively weak winds of B-type stars [e.g., @Prinja1989; @Oskinova2011]. Since the determination of wind parameters is the main objective of this study, we restrict our sample to those wind-fed HMXBs that are observable in the UV.
In addition to Vela X-1, which has been analyzed previously by @Gimenez-Garcia2016, only four more wind-fed HMXBs meet the above condition, namely HD153919 (4U1700-37), BD+6073 (IGRJ00370+6122), LMVel (IGRJ08408-4503), and HD306414 (IGRJ11215-5952). The latter two systems are SFXTs, while the first one is a persistent HMXB, and BD+6073 (IGRJ00370+6122) exhibits properties of both types. Our sample also includes the Be X-ray binary HD100199 (IGRJ11305-6256) and BD+532790 (4U2206+54), which is classified as an Oe X-ray binary or as a persistent wind-fed binary with a non-evolved donor. The latter classification is based on its X-ray properties, while the former is a result of the prominent hydrogen emission lines that are visible in optical spectra of this object. These lines are most likely formed in a decretion disk of the donor star. Thus, this system might actually be intermediate between the classical wind-fed HMXBs and the OBe X-ray binaries. The HMXB type, the spectral classification of the donor, and common alias names of the investigated systems are given in Table\[table:sample\].
The orbital parameters of the investigated systems and the spin period of the neutron stars are compiled from the literature and listed in Table\[table:orbital\]. The only exception is HD100199 because neither the orbit nor the properties of its NS are known.
The data {#sect:data}
========
[llclcllSc]{} Identifier & $P_\mathrm{orb}$ (d) & Ref. & $e$ & Ref. & $T_{0}$ (MJD) & Ref. & $P_\mathrm{spin}$ (s) & Ref.
------------------------------------------------------------------------
\
HD153919 & $3.411660 \pm 0.000004$ & 1 & $0.008-0.22$ & 1,2 & $49149.412 \pm 0.006$ & 1 & &
------------------------------------------------------------------------
\
BD+6073 & $15.661 \pm 0.0017$ & 3 & $0.56 \pm 0.07$ & 3 & $55084.0 \quad\pm 0.4$ & 3 & 346.0 & 4\
LMVel & $9.5436 \pm 0.0002$ & 5 & $0.63 \pm 0.03$ & 5 & $54634.45 \,\,\,\pm 0.04$ & 5 & &\
HD306414 & $\sim164.6$ & 6 & $\sim0.8$ & 7 & & & 186.78 & 6,8\
BD+532790 & $\sim 9.568$ & 9,10,11,12 & $0.30 \pm 0.02$ & 12 & & & 5750. & 13\
Spectroscopy
------------
For our UV survey of wind-fed HMXBs, we made use of the *Space Telescope Imaging Spectrograph* [STIS, @Woodgate1998; @Kimble1998] aboard the *HST*. These high resolution, high S/N spectra (Proposal ID: 13703, PI: L.M. Oskinova) cover important wind diagnostics in the range 1150-1700Å. In this paper, we use the automatically reduced data that are provided by the *HST* archive. For three of our program stars, far UV data obtained with the *Far Ultraviolet Spectroscopic Explorer* [FUSE, @Moos2000] were retrieved from the MAST archive.
These data are complemented by optical spectroscopy from different sources. For HD100199, HD306414, LMVel, and HD153919, we use data taken with the *Fiber-fed Extended Range Optical Spectrograph* [FEROS, @Kaufer1999] mounted at the 2.2m telescope operated at the European Southern Observatory (ESO) in La Silla. These data sets were downloaded from the ESO archive. From the same repository, we also retrieved *FOcal Reducer and low dispersion Spectrograph* [FORS, @Appenzeller1998] spectra for HD153919. Optical spectra for BD+6073 were kindly provided by A.González-Galán. These spectra were taken with the *high-resolution FIbre-fed Echelle Spectrograph* [FIES, @Telting2014] mounted on the Nordic Optical Telescope (NOT) and published in @Gonzalez-Galan2014. For BD+532790, we downloaded a low resolution spectrum from the VizieR archive that was taken by @Munari2002 with a *Boller & Chivens Spectrograph* of the Asiago observatory. In addition, we obtained an optical spectrum of BD+532790 with a *DADOS spectrograph* in combination with two different SBIG cameras (SFT8300M & ST-8XME) mounted to the Overwhelmingly Small Telescope (OST) of the student observatory at the University of Potsdam. Default data reduction steps were performed for this data set using calibration data (dome flats, dark frames, -lamp spectrum) taken immediately after the science exposures. Finally Near-IR spectroscopy was obtained during the night of 2014 September 1, using the *Near Infrared Camera and Spectrograph* (NICS) mounted at the 3.5-m Telescopio Nazionale Galileo (TNG) telescope (La Palma island). Medium-resolution spectra (3.5 Åpixel$^{-1}$) were taken with the $H$ and $K_{\rm b}$ grisms under good seeing conditions. Details on the reduction process can be seen in @Rodes-Roca2018. The individual spectral exposures used in this work are listed in Table\[table:spectra\]. In this Table, we also give the phase at which the observations were taken for those systems where ephemerides are available (see Table\[table:orbital\] and references therein).
Photometry
----------
We compiled $UBVRI$ photometry from various sources [@Zacharias2004b; @DENIS2005; @Mermilliod2006; @Anderson2012; @Reig2015] for all our program stars. G-band photometry was retrieved from the Gaia DR1 release [@Gaia2016]. Near-infrared photometry ($J, H, K_S$) was obtained from @Cutri2003, while WISE photometry is available from @Cutri2012a for all our targets. Moreover, we made use of MSX infrared photometry [@Egan2003] for HD153919. The complete list of photometric measurements used for the individual objects is compiled in Table\[table:photometry\].
X-ray data {#subsect:xray_data}
----------
For all our *HST* observations, we obtained quasi-simultaneous X-ray data with the Neil Gehrels Swift Observatory [*Swift*, @Gehrels2004]. In addition, strictly simultaneous [*Chandra*]{} X-ray and *HST* UV observations were performed for HD153919 ([*Chandra*]{} ObsID 17630, exposure time 14.6ks).
The data obtained with the X-ray telescope [XRT, @Burrows2005] aboard *Swift* are reduced using the standard XRT pipeline as part of HEASOFT v6.23. To extract the source spectra from data gathered while the XRT was in the photon counting (PC) mode, we used a circular region centered at its J2000 coordinates with a 25 radius or 80 radius depending on the source characteristics. Background counts were extracted from an annulus encompassing the source extraction region. When XRT was in the window timing mode (WT), the source extraction region consisted of a square with a width of 40 pixels while background counts were extracted from a similar-sized region situated away from the source.
The observed spectra were fitted using a suite of X-ray spectral fitting software packages. For all objects, the photoionization cross-sections from @Verner1996 and abundances from @Wilms2000 were employed. The goal of X-ray spectral fitting was to provide the parameters describing the X-ray radiation field in the format required for the stellar atmosphere modeling (see Sect.\[sect:models\]). X-ray spectra of HMXBs are typically well represented by power law spectral models, which are not yet implemented in our stellar atmosphere model. Therefore, we decided to fit the observed spectra using a fiducial black body spectral model. The fitting returns a “temperature” parameter $T_\mathrm{X}$, which is not physical but is employed to describe the spectral hardness and X-ray photon flux.
The *Swift* XRT observation of HD153919 was taken in the WT mode. We extracted 43,640 net source counts during 5270s of exposure time. After rebinning the spectral data to contain a minimum of 20 net counts per bin, we fit an absorbed ($N_{\mathrm{H}} = 15_{-2}^{+4} \times 10^{22}$cm$^{-2}$) blackbody ($k_{\mathrm{B}}T = 2.1\pm0.1$keV) plus a power-law component ($\Gamma = 4 \pm 1$). The observed X-ray flux is $1.2_{-0.1}^{+0.2}\times 10^{-9}$ergcm$^{-2}$s$^{-1}$. [*Chandra*]{} observations of HD153919 are presented by @Martinez-Chicharro2018. Towards the end of the observation, which lasted about 4h, the source experienced a flare with the X-ray flux increasing by a factor of three. Our *HST* observations were partially obtained during the end of this flare.
Nineteen source ($+$background) counts were gathered during the XRT observation of BD+6073 taken on the same day as the *HST* observation (ObsID 00032620025). Without rebinning the data, and assuming C-statistics [@Cash1979], we fit an absorbed blackbody model and obtained spectral parameters that were poorly constrained ($N_{\mathrm{H}} = 4_{-3}^{+28}\times 10^{22}$cm$^{-2}$ and $k_{\mathrm{B}}T \sim 1$keV) with an observed flux of $6.2\times 10^{-14}$ergcm$^{-2}$s$^{-1}$.
HD306414 was not detected in any of the contemporaneous *Swift* observations, and therefore we could not measure its X-ray flux. HD100199 was marginally detected with $12\pm4$ photons in an observation one day before the *HST* observation (ObsID 00035224007). We estimate a flux of $2.7^{+3.9}_{-1.8}\times10^{-13}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ from these data.
LMVel was also very X-ray faint during the *HST* observation (ObsID 00037881107). We therefore use *Swift* data taken a few days earlier (ObsID 00037881103) to measure the spectral shape. We find that a thermal blackbody model describes the data well, and use this model to fit the simultaneous data. There we find a flux of $5.8\times10^{-13}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ between 3–10keV.
For BD+532790 (4U2206+54), we extracted 2511 net source counts during a 1106s XRT observation in WT mode. The spectral data were arranged in order to contain at least 20 counts per bin, and were then fit with an absorbed blackbody model ($N_{\mathrm{H}} \leq 8\times 10^{21}$cm$^{-2}$ and $k_{\mathrm{B}}T = 1.3 \pm 0.1 $keV). The model-derived flux is $1.3\pm0.1 \times 10^{-11}$ergcm$^{-2}$s$^{-1}$.
A compilation of the X-ray data used in this work can be found in Table\[table:xray\_data\], while the derived X-ray luminosities are listed in Table\[table:Lxray\].
[llc]{} Name & Alias & $\log L_\mathrm{X}$\[erg/s\]
------------------------------------------------------------------------
\
HD153919 & 4U1700-37 & 36.03
------------------------------------------------------------------------
\
BD+6073 & IGRJ00370+6122 & 31.90\
LMVel & IGRJ08408-4503 & 32.50\
HD306414 & IGRJ11215-5952 &\
BD+532790 & 4U2206+54 & 34.24\
HD100199 & IGRJ11305-6256 &\
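For orientation, the tabulated luminosities follow from the observed fluxes and the distances in Table\[table:sample\] under the assumption of isotropic emission; a minimal sketch for one example source (the flux and distance are the values quoted above, the isotropy assumption is ours):

```python
import numpy as np

KPC_IN_CM = 3.086e21

# example: BD+532790 (4U2206+54), observed flux and adopted distance from the text
flux = 1.3e-11            # erg cm^-2 s^-1
d = 3.3 * KPC_IN_CM       # cm

L_x = 4.0 * np.pi * d**2 * flux   # isotropic X-ray luminosity
print(f"log L_X ~ {np.log10(L_x):.2f} erg/s")   # ~34.2, cf. Table [table:Lxray]
```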
Spectral modeling {#sect:modelling}
=================
Stellar atmosphere models {#sect:models}
-------------------------
The spectral analyses presented in this paper were carried out with the Potsdam Wolf-Rayet (PoWR) models. PoWR is a state-of-the-art code for expanding stellar atmospheres. The main assumption of this code is a spherically symmetric outflow. The code accounts for deviations from local thermodynamic equilibrium (non-LTE), iron line blanketing, wind inhomogeneities, a consistent stratification in the quasi hydrostatic part, and optionally also for irradiation by X-rays. The rate equations for the statistical equilibrium are solved simultaneously with the radiative transfer in the comoving frame, while energy conservation is ensured. Details on the code can be found in @Graefener2002, @Hamann2003, @Todt2015, and @Sander2015.
The inner boundary of the models is set to a Rosseland continuum optical depth $\tau_\mathrm{ross}$ of 20, defining the stellar radius $R_*$. The stellar temperature $T_*$ is the effective temperature that corresponds to $R_*$ via the Stefan-Boltzmann law, $$\label{eq:sblaw}
L = 4 \pi \sigma_\mathrm{SB} R_\ast^2 T_\ast^4~\,,$$ with $L$ being the luminosity. The outer boundary is set to $R_\mathrm{max} = 100\,R_*$, which proved to be sufficient for our program stars.
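As a quick consistency check of Eq.(\[eq:sblaw\]), the stellar radius follows from the luminosity and the stellar temperature; a minimal sketch using the HD153919 values from Table\[table:parameters\] (the constants are standard cgs values, and the choice of example star is ours):

```python
import numpy as np

SIGMA_SB = 5.670e-5                  # erg cm^-2 s^-1 K^-4
L_SUN, R_SUN = 3.828e33, 6.957e10    # erg/s, cm

L = 10**5.7 * L_SUN                  # luminosity of HD153919
T = 35.0e3                           # stellar temperature in K

R = np.sqrt(L / (4.0 * np.pi * SIGMA_SB * T**4)) / R_SUN
print(f"R_* ~ {R:.0f} R_sun")        # ~19 R_sun, consistent with the tabulated radius
```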
In the subsonic part of the stellar atmosphere, the velocity field $v(r)$ is calculated consistently such that the quasi-hydrostatic density stratification is fulfilled. In the wind, corresponding to the supersonic part of the atmosphere, a $\beta$-law [@Castor1979; @Pauldrach1986] is assumed. A double-$\beta$ law [@Hillier1999; @Graefener2005] in the form described by @Todt2015 is used for those objects where $\beta$ values larger than unity are required to achieve detailed fits. For the first exponent we always assume 0.8, while the second exponent is adjusted during the spectral fitting procedure. The gradient of such a double-$\beta$ law is steeper at the bottom of the wind than for a single $\beta$-law with a large exponent.
In the main iteration, line broadening due to natural broadening, thermal broadening, pressure broadening, neglected multiplet splitting, and turbulence is approximately accounted for by assuming Gaussian line profiles with a Doppler width of $30\,\mathrm{km}\,\mathrm{s}^{-1}$. The turbulent pressure is accounted for in the quasi hydrostatic equation (see @Sander2015 for details). In the formal integral, line broadening is treated in all detail. For the microturbulence we set $\xi = 10\,\mathrm{km}\,\mathrm{s}^{-1}$ in the photosphere, growing proportional with the wind velocity up to a value of $\xi(R_\mathrm{max}) = 0.1\,v_\infty$. The only exceptions are the supergiants HD306414 and BD+6073 where higher $\xi$ values are necessary to reproduce the observation (see Appendix\[sec:comments\] for details). The atmospheric structures (e.g., the density and the velocity stratification) of the final models for the donor stars are listed in Tables\[table:struct\_hd153919\] – \[table:struct\_hd100199\].
Wind inhomogeneities are accounted for in the “microclumping” approach that assumes optically thin clumps [@Hillier1991; @Hamann1998]. The density contrast between the clumps of an inhomogeneous model and a homogeneous one (with the same mass-loss rate $\dot{M}$) is described by the clumping factor $D$. Since the interclump medium is assumed to be void, $D$ is the inverse of the clump’s volume filling factor $f_\mathrm{V} = D^{-1}$. According to hydrodynamical simulations [e.g., @Runacres2002; @Sundqvist2018], a radial dependency is expected for the clumping factor. Here, we use the clumping prescription suggested by @Martins2009. The clumping onset (parameterized by $v_\mathrm{cl}$), where the clumping becomes significant, is set to 10$\mathrm{km}\,\mathrm{s}^{-1}$, since this results in the best fits for all objects where this property could be constrained. The clumping factor is adjusted for each individual object.
The PoWR code accounts for ionization due to X-rays. The X-ray emission is modeled as described by @Baum1992, assuming that the only contribution to the X-ray flux is coming from free-free transitions. Since the current generation of PoWR models is limited to spherical symmetry, the X-rays are assumed to arise from an optically-thin spherical shell around the star. The X-ray emission is specified by three free parameters, which are the fiducial temperature of the X-ray emitting plasma $T_\mathrm{X}$, the onset radius of the X-ray emission $R_0$ ($R_0 > R_\ast$), and a filling factor $X_\mathrm{fill}$, describing the ratio of shocked to non-shocked plasma. For our HMXBs, the onset radius is set to the orbital distance between the donor star and the NS companion. The temperature of the X-ray emitting plasma is obtained from fits of the observed X-ray spectra (see Sect.\[subsect:xray\_data\]). The X-ray filling factor is adjusted such that the wavelength-integrated X-ray flux from the observations is reproduced by the model.
The effects of the X-ray field on the emergent spectra are illustrated in Fig.\[fig:xray\_comp\]. While the photospheric absorption lines are not affected at all, certain wind lines change significantly. Whether the lines become stronger or weaker depends on the individual combination of the wind density at the position of the NS, the ionization balance in the wind, and the hardness and intensity of the X-rays injected. There is some parameter degeneracy as, for some models, nearly identical line profiles are obtained when reducing $\dot{M}$ and instead increasing the X-ray filling factor. Fortunately, this ambiguity can be avoided in the analysis of most of our targets because the X-ray field is constrained from observations (see Sect.\[subsect:xray\_data\]). The injected X-ray radiation is often needed to reproduce the wind lines in the UV and, hence, to measure the terminal wind velocity and mass-loss rate.
![ Comparison between two model spectra calculated for BD+6073 to illustrate the effect of the X-rays (red: with X-rays, black: without X-rays). []{data-label="fig:xray_comp"}](BD+6073_xray_comp.pdf){width="\hsize"}
Complex model atoms (see Table\[table:model\_atoms\] for details) are considered in the non-LTE calculations. The multitude of levels and line transitions inherent to the iron group elements are treated in a superlevel approach [see @Graefener2002].
Applicability of the models
---------------------------
One of the main assumptions of the PoWR models, as well as of all other stellar atmosphere codes with the exception of the PHOENIX/3D code [@Hauschildt2014], is spherical symmetry. In HMXBs, however, the spherical symmetry is broken by the presence of the compact object. On the other hand, the X-ray luminosities are often quite modest in HMXBs with NS companions. This is also the case for the systems studied in this work, as illustrated by the values given in Table\[table:Lxray\]. For all but one source, the X-ray luminosities are below $\log L_\mathrm{X} = 35$\[erg/s\]. For those X-ray luminosities, we expect that the disruptive effect of the X-rays emitted by the NS on the donor star wind is relatively limited. The only exception might be HD153919 (4U1700-37), which exhibited an X-ray luminosity of $\log L_\mathrm{X} = 36.03$\[erg/s\] during our *HST* observation. The *HST* spectrum of this source was taken during the end of an X-ray outburst described in @Martinez-Chicharro2018. However, only minor variations are present in the *HST* spectrum compared to earlier observations with the *IUE* satellite. This is illustrated in Fig.\[fig:hd153919-iue\], where we compare our *HST* spectrum with an averaged *IUE* spectrum constructed from observations in the high resolution mode that were taken between 1978 and 1989 with the large aperture. We used all available data sets with the exception of one exposure (Data ID: SWP36947) that exhibits a significantly lower flux compared to all other observations. The wind of HD153919 does not show any sign of inhibition, suggesting that the volume significantly affected by the X-ray emission of the NS is rather small. This is consistent with the findings by @vanLoon2001.
This gives us confidence that the winds of the donor stars in the studied systems are not disrupted by the X-ray emission of the NSs and that the applied models are valid within their limitations. However, for the individual objects, observational time series are necessary to confirm this.
Spectral analysis
-----------------
An in-depth spectral analysis of a massive star with non-LTE model spectra is an iterative process. Our goal is to achieve an overall best model fit to the observed data, while weighting the diagnostics according to their sensitivity to the stellar parameters as described below. Starting from an estimate of the stellar parameters based on the spectral type of the target, a first stellar atmosphere model is calculated and its emergent spectrum is compared to the observations. This and the subsequent comparisons are performed “by eye” without any automatic minimization procedures. Based on the outcome of the initial comparison, the model parameters are adjusted, and a new atmosphere model is calculated. This procedure is repeated until satisfactory fits of the observations with the normalized line spectrum and the spectral energy distribution (SED) are achieved. As an example, the final fit of the normalized line spectrum of HD306414 is presented in Fig.\[fig:spec\].
For those objects in our sample with $T_\ast > 30\,\mathrm{kK}$, the stellar temperature is primarily derived from the equivalent-width ratio between He I and He II lines, such as $\lambda\lambda$4026, 4144, 4388, 4713, 4922, 5015, 6678 and $\lambda\lambda$4200, 4542, 5412, 6683. For stars with lower stellar temperatures, we additionally used the line ratios of Si III to Si IV ($\lambda\lambda$4553, 4568, 4575, 5740 and $\lambda\lambda$4089, 4116) and of N II to N III ($\lambda\lambda$4237, 4242, 5667, 5676, 5680, 5686, and $\lambda\lambda$4035, 4097).
The surface gravity $\log g_\mathrm{grav}$ is derived from the pressure-broadened wings of the Balmer lines, focusing on the H$\gamma$ and H$\delta$ lines, since H$\beta$ and H$\alpha$ are often affected by emission lines from the stellar wind.
The luminosities of all our targets together with the color excess $E_{B-V}$ and the extinction-law parameter $R_V$ for the individual lines of sight are obtained from a fit of the corresponding model SED to photometry and flux calibrated spectra. For this purpose, different reddening laws are applied to the synthetic SEDs. The finally adopted reddening prescriptions are given in Table\[table:parameters\]. Moreover, the model flux is scaled to the distance of the corresponding star, using the values compiled in Table\[table:sample\]. For example, the SED fit of HD306414 is shown in Fig.\[fig:sed\].
The projected rotational velocity and the microturbulence velocity in the photosphere are derived from the line profiles and the equivalent width of metal lines, such as Si IV $\lambda\lambda$4089, 4116; Si III $\lambda\lambda$4553, 4568, 4575; Mg II $\lambda$4481; and C IV $\lambda\lambda$5801, 5812. Macroturbulence is not considered in this approach, and thus the $v \sin i$ values reported in Table\[table:parameters\] must be considered as upper limits. With the `iacob-broad` tool [@Simon-Diaz2014], which separately determines a possible macroturbulent contribution to the line broadening, we obtained similar $v \sin i$ values within their error margins. The terminal wind velocity and the radial dependence of the microturbulence velocity are simultaneously estimated from the extent and shape of the P Cygni absorption troughs of the UV resonance lines. The $\beta$ parameter of the velocity law is adjusted such that the synthetic spectrum can reproduce the profiles of the UV resonance lines and the full-width at half-maximum (FWHM) of the H$\alpha$ emission. For the objects presented in this work, a double-$\beta$ law with a second $\beta$ exponent in the range of 1.2–3.0 results in slightly better spectral fits compared to the canonical $\beta$-law with $\beta = 0.8$ for O-type stars [@Kudritzki1989; @Puls1996]. Note that the mass-loss rate derived from a spectral fit also depends slightly on the adopted $\beta$ value.
The mass-loss rate and the clumping parameters are derived by fitting the wind lines in the UV and the optical. The main diagnostics for determining the mass-loss rates are the UV resonance lines exhibiting P Cygni line profiles, namely, C IV $\lambda\lambda$1548, 1551 and Si IV $\lambda\lambda$1394, 1403. The clumping factor and the onset of the clumping are adjusted such that a consistent fit of unsaturated UV lines and H$\alpha$ could be achieved, utilizing the different dependency of those lines on density (linear for the resonance lines and quadratic for recombination lines, such as H$\alpha$).
The abundances of the individual elements are adjusted such that the observed strength of the spectral lines belonging to the corresponding element are reproduced best by the model.
Stellar and wind parameters {#sect:parameters}
===========================
The stellar and wind parameters of the investigated donor stars are listed in Table\[table:parameters\] together with the corresponding error margins. For those physical quantities that are directly obtained from the spectral fit ($T_*, \log g, \log L, v_\infty, \beta, \dot{M}, E_{B-V}, R_V, v \sin i$, abundances), the corresponding errors are estimated by fixing all parameters but one and varying this parameter until the fit becomes significantly worse. For those quantities that follow from the fit parameters, the errors are estimated by linear error propagation. We do not account for uncertainties in the orbital parameters, since they are often not known. Moreover, the quoted errors do not account for systematic uncertainties, e.g., because of the simplifying assumptions of the models such as spherical symmetry.
[lcccccc]{} HMXB type & persistent & intermediate & & persistent/ & Be X-ray
------------------------------------------------------------------------
\
& & & & & Oe X-ray &\
Name & HD153919 & BD+6073 & LMVel & HD306414 & BD+532790 & HD100199\
Spectral type & [O6Iafpe]{} & [BN0.7Ib]{} & [O8.5Ib-II(f)p]{} & [B0.5Ia]{} & [O9.5Vep]{} & [B0IIIne]{}\
Alias name & [4U1700-37 ]{} & [IGRJ00370+6122 ]{} & [IGRJ08408-4503 ]{} & [IGRJ11215-5952 ]{} & [4U2206+54]{} & [IGRJ11305-6256 ]{}\
$T_{\ast}$ (kK) & $35^{+2}_{-3}$ & $24^{+1}_{-1}$ & $30^{+3}_{-3}$ & $25^{+1}_{-1}$ & $30^{+3}_{-3}$ & $30^{+2}_{-3}$
------------------------------------------------------------------------
\
$T_{2/3}$ (kK) & $34$ & $23$ & $29$ & $24$ & $30$ & $30$\
$\log g_\ast$ (cms$^{-2}$) & $3.4^{+0.4}_{-0.4}$ & $2.9^{+0.1}_{-0.1}$ & $3.2^{+0.2}_{-0.2}$ & $2.8^{+0.2}_{-0.2}$ & $3.8^{+0.3}_{-0.5}$ & $3.6^{+0.2}_{-0.2}$\
$\log L$ ($L_\odot$) & $5.7^{+0.1}_{-0.1}$ & $4.9^{+0.1}_{-0.1}$ & $5.3^{+0.1}_{-0.1}$ & $5.4^{+0.1}_{-0.1}$ & $4.9^{+0.1}_{-0.1}$ & $4.4^{+0.1}_{-0.1}$\
$v_{\infty}/10^3$ (kms$^{-1}$)& $1.9^{+0.1}_{-0.1}$ & $1.1^{+0.1}_{-0.2}$ & $1.9^{+0.1}_{-0.1}$ & $0.8^{+0.2}_{-0.1}$ & $0.4^{+0.1}_{-0.1}$ & $1.5^{+0.3}_{-0.3}$\
$\beta$ & $2^{+1}_{-1}$ & $1.2^{+0.6}_{-0.4}$ & $1.4^{+0.4}_{-0.4}$ & $3^{+1}_{-1}$ & $1.0$ & $0.8$\
$R_\ast$ ($R_\odot$) & $19^{+5}_{-6}$ & $17^{+4}_{-4}$ & $17^{+6}_{-5}$ & $28^{+6}_{-5}$ & $11^{+4}_{-4}$ & $6^{+2}_{-2}$\
$R_{2/3}$ ($R_\odot$) & $20$ & $18$ & $17$ & $31$ & $11$ & $6$\
$D$ & $20^{+50}_{-15}$ & $20^{+50}_{-16}$ & $20^{+10}_{-5}$ & $20^{+10}_{-10}$ & 10 & 10\
$\log \dot{M}$ ($M_\odot \mathrm{yr}^{-1}$)& $-5.6^{+0.2}_{-0.3}$ & $-7.5^{+0.1}_{-0.2}$ & $-6.1^{+0.2}_{-0.2}$ & $-6.5^{+0.2}_{-0.2}$ & $-7.5^{+0.3}_{-0.3}$ & $-8.5^{+0.5}_{-0.5}$\
$v \sin i$ (kms$^{-1}$) & $110^{+30}_{-50}$ & $120^{+20}_{-20}$ & $150^{+20}_{-20}$ & $60^{+20}_{-20}$ & $200^{+50}_{-50}$ & $230^{+60}_{-60}$\
$M_{V,\mathrm{John}}$ (mag) & $-6.4$ & $-5.3$ & $-5.8$ & $-6.6$ & $-4.7$ & $-3.5$\
$X_{\rm H}$ (mass fr.)& $0.65^{+0.1}_{-0.2}$ & $0.45^{+0.1}_{-0.1}$& $0.5^{+0.1}_{-0.1}$ & $0.6^{+0.13}_{-0.2}$ & $0.7375$ & $0.7375$\
$X_{\rm C}/10^{-3}$ (mass fr.) & $2.5^{+2}_{-1}$ & $0.5^{+0.2}_{-0.2}$ & $2.5^{+1.5}_{-1.0}$ & $0.25^{+0.15}_{-0.10}$ & 2.37 & 2.37\
$X_{\rm N}/10^{-3}$ (mass fr.) & $2.0^{+2}_{-1}$ & $2.5^{+1.5}_{-1.0}$ & $2.0^{+1.0}_{-1.0}$ & $4.0^{+2}_{-2}$ & 0.69 & 0.69\
$X_{\rm O}/10^{-3}$ (mass fr.) & $3^{+2}_{-1}$ & $3^{+1}_{-1}$ & $6^{+2}_{-2}$ & $6^{+4.0}_{-2.5}$ & 5.73 & 5.73\
$X_{\rm Si}/10^{-4}$ (mass fr.)& $3^{+3}_{-2}$ & $4^{+1}_{-2}$ & $6^{+3}_{-3}$ & $10^{+5}_{-3}$ & 6.65 & 6.65\
$X_{\rm Mg}/10^{-4}$ (mass fr.)& $6.92$ & $9^{+3}_{-3}$ & $5^{+2}_{-2}$ & $5^{+4}_{-2}$ & 6.92 & 6.92\
$E_{B-V}$ (mag) & $0.50^{+0.01}_{-0.01}$ & $0.85^{+0.01}_{-0.01}$& $0.44^{+0.01}_{-0.01}$ & $0.83^{+0.01}_{-0.01}$ & $0.595^{+0.015}_{-0.01}$ & $0.34^{+0.01}_{-0.01}$\
$R_V$ (reddening law) & $3.1$ (Seaton) & $2.8^{+0.1}_{-0.1}$ (Cardelli)& $3.1$ (Seaton) & $3.0^{+0.1}_{-0.1}$ (Cardelli) & $3.1$ (Seaton) & $3.1$ (Seaton)\
$M_\mathrm{spec}$ ($M_\odot$) & $34^{+100}_{-28}$ & $8^{+8}_{-4}$ & $16^{+29}_{-11}$ & $18^{+24}_{-11}$ & $27^{+67}_{-23}$ & $6^{+9}_{-4}$\
$a_2$ ($R_\ast$) & $1.6^{+1.5}_{-0.4}$ & $2.9^{+3.2}_{-2.8}$ & $2.9^{+1.6}_{-0.6}$ & $12^{+5}_{-3}$ & $5.4^{+4.3}_{-1}$ & -\
$v_\mathrm{orb, apa}$ (kms$^{-1}$)& $500^{+900}_{-300}$& $90^{+50}_{-30}$ & $120^{+200}_{-60}$ & $30^{+30}_{-20}$ & $200^{+400}_{-200}$ & -\
$v_\mathrm{orb, peri}$ (kms$^{-1}$)& $500^{+900}_{-300}$& $300^{+200}_{-90}$ & $500^{+500}_{-200}$ & $300^{+200}_{-100}$ & $400^{+600}_{-200}$ & -\
$v_\mathrm{wind, apa}$ (kms$^{-1}$)& $400^{+600}_{-300}$& $850^{+50}_{-40}$ & $1400^{+200}_{-200}$ & $730^{+30}_{-20}$ & $350^{+30}_{-30}$ & -\
$v_\mathrm{wind, peri}$ (kms$^{-1}$)& $400^{+600}_{-300}$& $200^{+200}_{-200}$ & $30^{+600}_{-30}$ & $220^{+200}_{-70}$ & $300^{+50}_{-40}$ & -\
$R_\mathrm{rl, apa}\,(R_\ast)$& $1.1^{+0.5}_{-0.2}$& $1.5^{+0.3}_{-0.2}$& $1.6^{+0.6}_{-0.3}$ & $2.6^{+0.7}_{-0.3}$ & $1.9^{+0.9}_{-0.3}$ & -\
$R_\mathrm{rl, peri}\,(R_\ast)$& $1.1^{+0.5}_{-0.2}$& $0.83^{+0.12}_{-0.07}$& $0.70^{+0.19}_{-0.08}$& $1.6^{+0.3}_{-0.2}$ & $1.8^{+0.5}_{-0.2}$ & -\
Comparison with single OB-type stars {#subsect:ob_comp}
------------------------------------
The winds of massive stars are characterized by a number of quantities, such as $\dot{M}$, $v_\infty$, or $D$. Since only a low number of donor-star winds have been analyzed by means of sophisticated atmosphere models, it is statistically unfavorable to pursue comparisons for individual wind parameters. Therefore, we use the so-called modified wind momentum $D_\mathrm{mom}$ to evaluate the winds of the donor stars. The modified wind momentum is defined as $$\label{eq:dmom}
D_\mathrm{mom} = \dot{M} v_\infty R_*^{1/2}~.$$
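As a small worked example of Eq.(\[eq:dmom\]), the HD153919 values from Table\[table:parameters\] give the following; the cgs unit convention with $R_\ast$ in solar radii is an assumption, but it is the one commonly used for the WLR:

```python
import numpy as np

MSUN_YR_TO_G_S = 1.989e33 / 3.156e7   # Msun/yr -> g/s

# HD153919 values from Table [table:parameters]
mdot = 10**-5.6        # Msun/yr
v_inf = 1900.0e5       # cm/s
r_star = 19.0          # Rsun

d_mom = mdot * MSUN_YR_TO_G_S * v_inf * np.sqrt(r_star)
print(f"log D_mom ~ {np.log10(d_mom):.1f}")   # in g cm s^-2 (R_sun)^0.5
```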
In Fig.\[fig:dmom\], we plot $D_\mathrm{mom}$ over the luminosity. A tight linear relation between the luminosity and the modified wind momentum is predicted by the line-driven wind theory [@Kudritzki1995; @Puls1996; @Kudritzki1999]. This so-called wind-momentum luminosity relation (WLR) is observationally confirmed for a variety of massive stars [e.g., @Kudritzki1999; @Kudritzki2002; @Massey2005; @Mokiem2007]. Exceptions are certain categories of objects such as the so-called weak-wind stars [@Bouret2003; @Martins2005; @Marcolino2009; @Shenar2017], where most of the wind mass-loss might be hidden from spectral analyses based on optical and UV data [see e.g., @Oskinova2011; @Huenemoerder2012]. In addition to the stars analyzed in this work, we also plot in Fig.\[fig:dmom\] the results obtained by @Gimenez-Garcia2016 and @Martinez-Nunez2015 for the donor stars in one SFXT (IGRJ17544-2619) and two persistent HMXBs (Vela X-1 and 4U1909+07) as well as the values compiled by @Mokiem2007 for Galactic O and B-type stars.
The donor stars in the investigated HMXBs fall in the same parameter regime as observed for other Galactic OB-type stars. Moreover, Fig.\[fig:dmom\] shows that these donors also follow the same WLRs as other massive stars in the Galaxy, indicating that the fundamental wind properties of the donor stars in wind-fed HMXBs are well within the range of what is expected and observed for this kind of massive star.
![ Modified wind momentum over the luminosity. The SFXTs and the persistent HMXBs are shown by green and red asterisks, respectively. In addition to the objects investigated in this work, we also show the results obtained by @Martinez-Nunez2015 and @Gimenez-Garcia2016. The dark blue triangles and light blue squares depict the analyses compiled by @Mokiem2007 for O and B-type stars, respectively. []{data-label="fig:dmom"}](dmom-l.pdf){width="\hsize"}
Wind parameters of SFXTs versus those of HMXBs {#subsect:sfxts-persistent}
----------------------------------------------
A comparison between the wind parameters of the donor stars in SFXTs with those in persistent HMXBs reveals that there is no general distinction (see Table\[table:parameters\]). For example, HD153919 and LMVel both have winds with a high terminal velocity of 1900$\mathrm{km}\,\mathrm{s}^{-1}$, but the former is a persistent HMXB, while the latter is a SFXT. Moreover, we find SFXTs with quite different wind properties: while LMVel exhibits a fast stellar wind and a relatively high mass-loss rate, HD306414 has a significantly slower wind ($v_\infty = 800\,\mathrm{km}\,\mathrm{s}^{-1}$) and a low mass-loss rate. In fact, the parameters of HD306414 are quite similar to those of VelaX-1 [@Gimenez-Garcia2016], while VelaX-1 is a persistent source in contrast to HD306414.
The wind properties are important for characterizing the donor stars. However, the accretion onto the compact object and, consequently, the X-ray properties of a system depend on the wind conditions at the position of the compact object. Based on the orbital parameters (Table\[table:orbital\]), we determine the wind velocity at the apastron and periastron positions of the NS (see Table\[table:parameters\]). As described in Sect.\[sect:models\], we assume a double-$\beta$ law (with the second $\beta$ exponent given in Table\[table:parameters\]) for the wind velocity in the supersonic regime. However, the double-$\beta$ law as well as the single $\beta$-law might not be a perfect representation of the wind structure in some HMXBs [@Sander2018a]. Moreover, the wind velocity in the direction to the NS might be reduced because of the influence of the X-rays on the wind structure [@Krticka2015; @Sander2018a]. Thus, the real wind velocity could be slightly lower than what we constrain here. However, we do not expect that this effect is significant for the objects in our sample because of the relatively low X-ray luminosities of the NSs (see Table\[table:Lxray\]). A more detailed investigation will be presented in a forthcoming publication using hydrodynamic atmosphere models.
From Table\[table:parameters\], we clearly see that the velocities of the donor star winds at the position of the NSs ($v_\mathrm{wind, peri}$ & $v_\mathrm{wind, apa}$) are significantly lower than the corresponding terminal wind velocities ($v_\infty$). We note that the value of the $\beta$ velocity-law derived in this work defines the wind structure and as such has an influence on the wind velocity determined for the position of the NS. Low $\beta$ values result in higher velocities compared to high $\beta$ values. For the extreme case of HD306414, the uncertainty from the spectral fit is $\pm 1$ for the second exponent of the double-$\beta$ law. This uncertainty results in an error of less than 5% for the wind velocity at the position of the NS during apastron, while it is about 30% during periastron. These errors are significantly smaller than those resulting from the orbital configuration, which are the main source of the errors quoted in Table\[table:parameters\].
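A minimal sketch of how such velocities follow from a $\beta$-type velocity law is given below; for simplicity it uses the standard single-$\beta$ form rather than the double-$\beta$ law adopted in the fits, and the numbers (a wind with $v_\infty = 1900$ km/s evaluated at $r = 1.6\,R_\ast$) are illustrative, not fitted values.

```python
import numpy as np

def beta_law(r_over_rstar, v_inf, beta, v0_over_vinf=1e-3):
    """Standard beta law v(r) = v_inf * (1 - b R*/r)**beta, with b chosen so that
    v(R*) equals a small photospheric value v0."""
    b = 1.0 - v0_over_vinf**(1.0 / beta)
    return v_inf * (1.0 - b / r_over_rstar)**beta

# illustrative: a wind with v_inf = 1900 km/s evaluated at r = 1.6 R*
for beta in (0.8, 2.0):
    print(f"beta = {beta}: v(1.6 R*) ~ {beta_law(1.6, 1900.0, beta):.0f} km/s")
```

Close to the star the wind speed is thus only a modest fraction of $v_\infty$, and the exact fraction is sensitive to the adopted $\beta$, in line with the discussion above.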
The wind velocities at the position of the NSs are modulated with the orbital configurations of the systems. An intriguing example is LMVel: while the wind velocity at apastron is 1400$\mathrm{km}\,\mathrm{s}^{-1}$, it is as low as 30$\mathrm{km}\,\mathrm{s}^{-1}$ at periastron. In contrast, the system harboring HD153919 (4U1700-37), the only truly persistent source in our sample, exhibits a negligible eccentricity and, therefore, a stable wind velocity at the position of the NS. This velocity is about 20% of its terminal value, while the wind velocity at apastron in the SFXTs is $>70\,\%$ of $v_\infty$. In general, it seems that in SFXTs, the velocity of the donor star winds at the periastron position of the NSs is lower than in the persistent sources. During apastron passage this situation appears to be reversed. Hence, we can conclude that the wind velocities at the position of the NS are significantly modulated by the orbital configuration in the SFXTs. This suggests that the orbits might play an important role in the dichotomy of wind-fed HMXBs as already proposed by @Negueruela2006. In general, this dichotomy likely reflects a complex interplay between the donor-star parameters, the orbital configuration, and the NS properties.
Relative velocities and constraints on the formation of temporary accretion disks {#subsect:velos}
---------------------------------------------------------------------------------
Another interesting finding is that, in all studied systems, the donor star wind velocity at periastron is within the uncertainties indistinguishable from the orbital velocity of the NS. According to @Wang1981 these conditions are favorable for the formation of an accretion disk around the NS. Such a disk would act as a reservoir and might allow for X-ray outbursts peaking after periastron passage, and should also modify the X-ray light curve [see e.g., @Motch1991]. The regular formation of accretion disks during periastron could potentially also influence the evolution of the spin period of the neutron star.
To check whether an accretion disk can form, we adopt the prescription from @Wang1981 in the formulation given by @Waters1989. According to these studies, an accretion disk can form if $$\begin{aligned}
\begin{aligned}
\label{eq:disk}
v_\mathrm{rel} \leq 304\,\eta^{1/4}
\left( \frac{P_\mathrm{orb}}{10\,\mathrm{d}} \right)^{-1/4}
\left( \frac{M_\mathrm{NS}}{M_\odot} \right)^{5/14}
\left( \frac{R_\mathrm{NS}}{10^6\,\mathrm{cm}} \right)^{-5/28}\\
\left( \frac{B_0}{10^{12}\,\mathrm{G}} \right)^{-1/14}
\left( \frac{L_\mathrm{X}}{10^{36}\,\mathrm{erg/s}} \right)^{1/28}\,\mathrm{km}\,\mathrm{s}^{-1}\,,
\end{aligned}\end{aligned}$$ where $P_\mathrm{orb}$ is the orbital period in days and $\eta$ describes the efficiency of the angular momentum capture. The NS properties enter with the magnetic flux density $B_0$, the X-ray luminosity $L_\mathrm{X}$, the NS mass $M_\mathrm{NS}$, and radius $R_\mathrm{NS}$.
With the help of Eq.(\[eq:disk\]), we can thus estimate whether an accretion disk would form around the NSs in our target systems. For $R_\mathrm{NS}$, we assume $1.1\times10^6\,\mathrm{cm}$ based on the estimates by @Oezel2016. We assume a magnetic flux density of $B_0 = 10^{12}\,\mathrm{G}$ for all NSs in our sample. The only exception is BD+532790, where @Torrejon2018 constrain the magnetic field of the NS to $B_0 > 2 \times 10^{13}\,\mathrm{G}$. We also assume the canonical NS mass $M_\mathrm{NS} = 1.4\,M_\odot$ [@Thorsett1999]. The only exception is the NS companion of HD153919, for which @Falanga2015 derive a mass of $1.96\,M_\odot$. Moreover, we set the efficiency factor to $\eta = 1$, as expected in the presence of an accretion disk [@Waters1989]. Based on these assumptions, we find that no disks are predicted to form in any of our target systems. Note that Eq.(\[eq:disk\]) is strictly valid only for circular systems. Moreover, if the X-ray luminosity of the SFXTs is higher during an outburst than during our *Swift* observations, we might obtain a different result. However, even for $L_\mathrm{X} = 10^{38}\,\mathrm{erg}\,\mathrm{s}^{-1}$ no accretion disks are predicted to form.
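A small sketch of this estimate, evaluating Eq.(\[eq:disk\]) with the canonical NS parameters adopted above ($M_\mathrm{NS} = 1.4\,M_\odot$, $R_\mathrm{NS} = 1.1\times10^6$ cm, $B_0 = 10^{12}$ G, $\eta = 1$); the 9.5d orbital period is taken from Table\[table:orbital\] as an example, and the choice of $L_\mathrm{X} = 10^{36}$ erg/s is illustrative:

```python
def v_crit(P_orb_d, M_ns=1.4, R_ns=1.1e6, B0=1e12, L_x=1e36, eta=1.0):
    """Critical relative velocity (km/s) below which an accretion disk may form,
    following the prescription of Eq. (eq:disk)."""
    return (304.0 * eta**0.25
            * (P_orb_d / 10.0)**(-0.25)
            * M_ns**(5.0 / 14.0)
            * (R_ns / 1e6)**(-5.0 / 28.0)
            * (B0 / 1e12)**(-1.0 / 14.0)
            * (L_x / 1e36)**(1.0 / 28.0))

# example: a 9.5 d orbit with the canonical NS parameters adopted in the text
print(f"critical v_rel ~ {v_crit(9.5):.0f} km/s")
```

For the parameters adopted in the text, the estimated relative velocities exceed this threshold, which is why no disks are predicted to form.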
Recent detailed studies of wind dynamics in the vicinity of an accreting NS have been performed by @ElMellah2019b. Their 3-D simulations show that when orbital effects are dynamically important, the wind dramatically departs from a radial outflow in the NS vicinity and the net angular momentum of the accreted flow could be sufficient to form a persistent disk-like structure. On the other hand, the 3-D hydrodynamic models by @Xu2019 show that in flows that are prone to instability, such as stellar winds, disks are not likely to form. In support of this, observations do not indicate the presence of stable accretion disks in HMXBs with NS components [e.g., @Bozzo2008]. Thus, the question of persistent disk formation remains open. Our spectral models, which rely on spherical symmetry, are capable of reproducing the line shapes formed in the stellar wind (e.g., lines with P Cygni profiles); this argues in favor of models in which the wind flow is strongly bent only in a limited volume close to the NS.
We highlight that the orbital velocity cannot be neglected, since it is comparable to the wind velocity at the position of the NS. Thus, it needs to be accounted for when estimating the mass accretion rate from the donor-star wind according to the Bondi-Hoyle-Lyttleton mechanism. Consequently, the orbital velocity is important for predicting the X-ray luminosity (see also Sect.\[sect:intro\] and \[sect:x-rays\]).
Abundances {#subsect:abundances}
----------
In Table\[table:parameters\], we also list the chemical abundances for our program stars. Abundances that are derived from the spectral fits are given with the estimated errors. For those elements where only insufficient diagnostics are available, the abundances are fixed to the solar values, and the corresponding entries in Table\[table:parameters\] are given without errors.
Two thirds of our sample (HD153919, BD+6073, HD306414, and LMVel) shows a significant depletion of hydrogen compared to the primordial abundance. For BD+532790 and HD100199 no deviation from this value could be detected. Nitrogen is enriched with respect to the solar value [@Asplund2009] in all investigated wind-fed HMXBs. HD153919 and LMVel exhibit a carbon abundance that is approximately solar, while carbon is subsolar in all other objects. The same applies to oxygen, which is depleted in all investigated objects with the exception of HD306414 and LMVel, which shows an oxygen abundance of about $X_\mathrm{O} = X_{\mathrm{O}, \odot}$ and $1.1\,X_{\mathrm{O}, \odot}$, respectively.
@Crowther2006c determine CNO abundances for 25 Galactic OB-type supergiants. They constrain mean \[N/C\], \[N/O\], and \[C/O\] logarithmic number ratios (relative to solar) of +1.10, +0.79, and -0.31, respectively. Only BD+6073 appears to be fully consistent with these results, while the other objects in our sample exhibit conspicuous abundance patterns. The \[C/O\] ratio of HD153919 and LMVel (0.31 and 0.01) is significantly higher than the average values derived by @Crowther2006c, while it is substantially lower for HD306414 (\[C/O\] = -1.0).
In general, silicon and magnesium seem to be depleted in our program stars, with the exception of HD306414 and BD+6073. The former shows a supersolar silicon abundance, while the latter exhibits a slightly supersolar magnesium abundance. However, we note that the uncertainties for these abundance measurements are quite high. Hence, the results have to be interpreted with caution. In the next section, we will discuss these abundance patterns in an evolutionary context.
Stellar evolutionary status {#sect:evolution}
===========================
The detailed investigation of HMXBs offers the possibility to constrain open questions of massive star evolution, SN kicks, and common envelope (CE) phases.
Common envelope evolution and NS natal kicks {#subsect:evo_cce}
--------------------------------------------
The formation of a HMXB is a complex process. In the standard scenario, a massive binary system initiates RLOF from the primary to the secondary. This mass transfer becomes dynamically unstable, if the secondary cannot accrete all of the material. This often results in a CE phase that either leads to a merger or to the ejection of the primary’s envelope, entailing a significant shrinkage of the binary orbit [e.g., @Paczynski1967; @Taam2000; @Taam2010; @Ivanova2013 and references therein]. In the latter case, the stripped primary will undergo a core collapse forming a compact object, which can accrete matter from the rejuvenated secondary. These systems then emerge as HMXB.
If the mass transfer is stable, or in the case of large initial orbital separations, a CE phase can be avoided. To form a HMXB, however, this evolutionary path requires fortuitous SN kicks to reduce the orbital separation to the small values observed for the majority of these systems [@Walter2015].
With the exception of HD306414, all investigated wind-fed HMXBs have tight orbits with periods of less than 16d and semi-major axes of less than $64\,R_\odot$. These separations are significantly smaller than the maximum extension of the NS progenitor. Therefore, each of these systems could indeed have already passed through a CE phase. Alternatively, the core collapse that leads to the formation of the NS was asymmetric and imparted a natal kick on the new-born NS. This reduced the orbital separations and hardened the system. A third possibility, in principle, is that the binary was in a close configuration from the beginning, and this has not changed because the components evolved quasi-homogeneously [e.g., @Maeder1987; @Langer1992; @Heger2000; @Yoon2005; @Woosley2006]. This prevents a significant expansion of the stars, so that the system never entered a CE phase. However, there is no reason to suspect quasi homogeneous evolution (QHE) in our studied donor stars.
No significant eccentricity is expected for post-CE systems, which is in strong contrast to most HMXBs in our sample. Yet, the current eccentricity of these systems might be a result of the core-collapse event, suggesting that relatively large natal kicks are associated with the formation of NSs. This appears to be consistent with the results presented by @Tauris2017, who find evidence that the kicks of the first SN in binaries evolving towards double neutron stars (DNSs) are on average larger than those of the second SN. In our sample, only HD153919 does not show any substantial eccentricity. This is either a result of tidal circularization after the first SN, which appears plausible considering the advanced evolutionary status of HD153919, or of a CE phase, which however implies that the SN kick was negligible in this case. Although the presence of a NS in this system is strongly favored [@Martinez-Chicharro2018], a BH cannot be excluded. Thus, a third possibility exists for HD153919. Since the formation of a BH is not necessarily associated with a SN and a corresponding kick, the virtually circular orbit of HD153919 might be a result of a CE phase.
Abundance pattern {#subsect:evo_abund}
-----------------
The atmospheric abundance pattern of evolved massive stars, such as OB-type supergiants, is often affected by CNO burning products. Those are mixed to the surface due to processes such as rotationally induced mixing [@Heger2000]. Accordingly, it is expected that the oxygen and carbon abundances decrease in favor of the nitrogen abundance in the course of the evolution. As stated in Sect.\[sect:parameters\], most of our program stars are not compatible with this scenario. On the one hand, HD153919, LMVel, HD306414, and BD+6073 show hydrogen depletion and nitrogen enrichment, which point to an advanced evolutionary state. On the other hand, HD153919 and LMVel have about solar carbon abundances, and HD306414 has a supersolar oxygen abundance. Only for BD+6073 are the hydrogen and CNO abundances consistent with an advanced evolutionary state according to single-star evolution.
For HD153919, @Clark2002 point out that its carbon overabundance has to be a result of accretion from the NS progenitor during its Wolf-Rayet (WR) stage, more precisely during the carbon-sequence WR (WC) phase. This can also serve as an explanation for the high carbon and oxygen abundances of LMVel and HD306414, respectively. In the latter case, the NS progenitor had to reach the oxygen-rich WR (WO) phase before exploding as a SN. For this scenario to work, the masses of the corresponding WC and WO stars had to be below a certain limit to form NSs at the end. @Woosley2019 estimate that most stars with final masses up to $6\,M_\odot$, corresponding to $9\,M_\odot$ helium core masses or $30\,M_\odot$ on the ZAMS, will leave neutron star remnants. This constraint is compatible with a few Galactic WC stars [@Sander2019]. If this scenario is true, it would prove that low-mass WC/WO stars indeed explode as Type Ib/c SNe instead of directly collapsing to a BH.
Alternatively, the high carbon and oxygen abundances might be explained by pollution with material ejected during the SN explosion. In this case, significant enrichment in other elements such as silicon and magnesium is expected as well, based on calculations of nucleosynthesis yields from core-collapse SNe [e.g., @Rauscher2002; @Nomoto2006; @Woosley2007; @Nomoto2013]. This contradicts our spectral analyses, which yield a low silicon abundance in the atmosphere of HD153919 as well as a slightly subsolar magnesium abundance in LMVel and HD306414. However, we note the supersolar magnesium abundance of HD306414 and BD+6073.
Angular momentum transfer and projected rotational velocities {#subsect:evo_vrot}
-------------------------------------------------------------
Interacting binary stars exchange not only mass but also angular momentum. Mass transfer due to RLOF often spins up the accreting star until this mass gainer rotates nearly critically [@Packet1981; @deMink2013]. As mentioned earlier, the orbital parameters of all objects in our sample suggest that mass transfer has occurred in these systems in the past. However, in subsequent phases (especially the present HMXB stage) the remaining OB-type star could lose angular momentum via its wind. It is therefore interesting to check whether the donor stars exhibit rapid rotation.
We derive projected rotational velocities in the range from 60 to 230$\mathrm{km}\,\mathrm{s}^{-1}$. Interestingly, the smallest $v \sin i$ is found for HD306414, which might have avoided strong binary interactions in the past. The two OBe stars in our sample exhibit the largest projected rotational velocities (200$\mathrm{km}\,\mathrm{s}^{-1}$ and 230$\mathrm{km}\,\mathrm{s}^{-1}$). Nonetheless, we can rule out very rapid rotation for all donor stars in our sample. Using a rough approximation for the critical velocity $v_\mathrm{crit} = \sqrt{G M_\ast R_\ast^{-1}}$ (neglecting, for example, effects due to oblateness) and adopting the mean statistical inclination of $57\degr$, all donor stars are found to rotate far below critical.
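For illustration, the following minimal Python sketch evaluates this rough estimate; the stellar mass, radius, and $v \sin i$ in the example call are placeholders rather than the values derived for a specific donor star.

```python
# Minimal sketch: compare a de-projected v sin(i) with the approximate
# critical rotation velocity v_crit = sqrt(G M_* / R_*), neglecting
# oblateness, as in the rough estimate above. All stellar values below
# are illustrative placeholders.
import math

G = 6.674e-8          # gravitational constant [cgs]
M_SUN = 1.989e33      # solar mass [g]
R_SUN = 6.957e10      # solar radius [cm]

def v_crit_kms(mass_msun, radius_rsun):
    """Approximate critical velocity sqrt(G M / R) in km/s."""
    return math.sqrt(G * mass_msun * M_SUN / (radius_rsun * R_SUN)) / 1.0e5

def equatorial_velocity_kms(v_sin_i_kms, incl_deg=57.0):
    """De-project v sin(i) with an assumed (mean statistical) inclination."""
    return v_sin_i_kms / math.sin(math.radians(incl_deg))

# Placeholder donor star: 25 Msun, 20 Rsun, v sin(i) = 230 km/s
v_crit = v_crit_kms(25.0, 20.0)
v_eq = equatorial_velocity_kms(230.0)
print(f"v_eq / v_crit ~ {v_eq / v_crit:.2f}")
```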
The $v \sin i$ distribution of Galactic OB-type stars has been investigated in many studies [e.g., @Dufton2006; @Fraser2010; @Braganca2012; @Simon-Diaz2010; @Simon-Diaz2014; @Garmany2015]. These studies often find evidence of a bimodal distribution, showing a low $v \sin i$ peak and a group of fast rotators that extends to very high $v \sin i$ [e.g., @Ramirez-Agudelo2013; @Simon-Diaz2014; @Garmany2015]. A similar result is obtained by @Ramachandran2018 for $> 200$ OB-type stars in the Large Magellanic Cloud (LMC). @deMink2013 predict that the high $v \sin i$ peak predominantly results from massive stars that were spun up because of binary interactions, while the low-velocity peak consists of single stars and binary systems that have not interacted yet.
Based on a study of about 200 northern Galactic OB-type stars, which also accounts for the effects of macroturbulence and microturbulence, @Simon-Diaz2014 find that the $v \sin i$ distribution for O and B-type supergiants peaks at 70$\mathrm{km}\,\mathrm{s}^{-1}$ and 50$\mathrm{km}\,\mathrm{s}^{-1}$, respectively. Comparing this with the projected rotational velocities of our sample, it appears that our program stars rotate on average more rapidly than single OB-type stars. This is in accordance with mass accretion in the past. The only exception might be HD306414.
For the O-type components in six Galactic WR+O binaries, @Shara2017 derive rotational velocities. Those are expected to be nearly critical, since the O-type stars are spun up by RLOF from the WR progenitor. However, @Shara2017 find that these stars spin with a mean rotational velocity of $350\,\mathrm{km}\,\mathrm{s}^{-1}$, which is about 65% of their critical value. They argue that a significant spin-down even on the short timescales of the WR phase (a few hundred thousand years) must have taken place. The rotational velocities derived for our donor stars are substantially lower than those of the O-type components in the WR binaries. Compared to these objects, the evolutionary timescales of our stars are significantly larger (a few million years). Thus, our donor stars might have had more time to spin down, which would be consistent with their lower rotational velocities. In this picture, our results and those by @Shara2017 coincide nicely.
Mass-luminosity relation {#subsect:evo_mass}
------------------------
In binary systems it is expected that the mass gainer is internally mixed because of angular momentum transfer. Therefore, the mass gainer should be overluminous compared to single stars of the same mass [e.g., @Vanbeveren1994]. To investigate whether this is the case for our program stars, we compare the spectroscopic masses constrained in this work with masses from stellar-evolution tracks. The latter are obtained with the BONNSAI Bayesian statistics tool [@Schneider2014]. Using stellar and wind parameters ($T_\ast, \log L, \log g, v \sin i, X_\mathrm{H}, \dot{M}$) and their corresponding errors as input, the BONNSAI tool interpolates between evolutionary tracks calculated by @Brott2011. Based on this set of single-star evolution tracks, the tool predicts the current mass that an object with these parameters would have if it had evolved like a single star. The corresponding parameters are listed and compared in Table\[table:bonnsai\].
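BONNSAI itself performs a full Bayesian analysis against the @Brott2011 model grid; purely as a schematic illustration of the underlying idea (and not of the actual tool), the following sketch weights a made-up toy grid of single-star models by a Gaussian likelihood in $T_\ast$ and $\log L$ and returns the likelihood-weighted current mass.

```python
# Schematic illustration only (not the BONNSAI tool): weight a toy grid of
# single-star models by a Gaussian likelihood in the observables and return
# the likelihood-weighted "evolutionary" mass. Grid and observed values are
# placeholders.
import numpy as np

# toy model grid: (T_eff [kK], log L/Lsun, current mass [Msun])
grid = np.array([
    [28.0, 5.1, 18.0],
    [30.0, 5.3, 22.0],
    [32.0, 5.5, 28.0],
    [35.0, 5.7, 40.0],
])

def evolution_mass(t_obs, dt, logl_obs, dlogl):
    """Likelihood-weighted current mass over the toy grid."""
    chi2 = ((grid[:, 0] - t_obs) / dt) ** 2 + ((grid[:, 1] - logl_obs) / dlogl) ** 2
    weights = np.exp(-0.5 * chi2)
    return float(np.sum(weights * grid[:, 2]) / np.sum(weights))

print(evolution_mass(t_obs=30.0, dt=3.0, logl_obs=5.3, dlogl=0.1))
```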
For BD+6073 and HD306414, evolution masses could not be derived in this way, since the parameters of these stars are not reproduced by any of the underlying stellar evolution models. BD+532790 exhibits a spectroscopic mass that is 35% larger than its evolution mass. HD153919 and LMVel seem to be overluminous for their current mass.
Future evolution {#subsect:evo_future}
----------------
Unfortunately, binary evolution tracks that would be applicable to the HMXBs investigated in this work are not available. Nevertheless, the future evolution of our targets can be discussed based on their current orbital configuration and the stellar properties of the donor stars. All investigated systems are compact enough that the donor stars, in the course of their further evolution, will expand sufficiently to eventually fill their Roche lobe, initiating direct mass transfer to their NS companions. Whether or not this mass transfer is stable will significantly influence the further evolution and the final fate of these HMXBs.
The stability of the mass transfer in such systems has recently received increased attention. @vandenHeuvel2017 study whether this mass transfer would lead to a (second) CE phase and whether this would result in a merger. They conclude that the mass transfer is indeed unstable for a broad parameter range, and that the vast majority (>95%) of the known HMXBs consisting of supergiants with NS companions would not survive the spiral-in within a CE phase. Applying their findings to our results, and assuming a NS mass of $1.4\,M_\odot$, suggests that all systems investigated in this work will also enter a CE phase that leads to a merger. The same can be concluded from a comparison of the stellar and orbital parameters of our HMXBs with the CE-ejection solutions calculated by @Kruckow2016 for massive binary systems. For all our objects, the minimal orbital separation is significantly lower than $100\,R_\odot$, while the spectroscopic masses are higher than $8\,M_\odot$. Comparing these constraints with the solutions presented by @Kruckow2016 [see their figure 2] suggests that the systems studied in this work are not able to eject the CE in the upcoming CE phase. These findings are consistent with conclusions by previous studies [e.g., @Podsiadlowski1994; @vandenHeuvel2017; @Tauris2017].
If the systems studied in this work merge, they will form so-called Thorne–Żytkow objects [TŻO, @Thorne1975; @Thorne1977]. @Cannon1992 already discuss HMXBs as a potential source of TŻOs, identifying this as one of two possible channels. @Podsiadlowski1995 estimate the number of TŻOs in the Galaxy to be 20-200. Thorne–Żytkow objects will likely appear as red supergiants (RSGs) [@Biehle1991; @Cannon1993], which are only distinguishable from normal RSGs by means of specific abundance patterns. These abundances are a result of the extremely hot non-equilibrium burning processes that allow for interrupted rapid proton addition [@Thorne1977; @Cannon1993]. The first promising candidate for a TŻO was identified by @Levesque2014. According to @Tauris2017, a few to ten percent of the luminous red supergiants ($L \ge 10^{5}\,L_\odot$) in the Galaxy are expected to harbor a NS in their core.
Alternatively, TŻOs might appear as WN8 stars. This is suggested by @Foellmi2002 because of the peculiar properties of this class of objects, such as the low binary fraction, strong variability, and the high percentage of runaways. Recently, this has been proposed to be a valid scenario for WR124 [@Toala2018]. Based on population synthesis models, @deDonder1997 have already proposed that WR stars with compact objects at their center should exist. They denote these objects as “weird” WR stars. In view of the above results, we are inclined to conclude that the binaries examined in this work will presumably form some kind of TŻO in the future.
However, a certain fraction of the HMXB population obviously survives, since we see compact DNS systems. If the HMXBs can avoid a merger in the imminent CE phase, they will likely undergo an additional phase of mass transfer according to the Case BB scenario [@Tauris2015; @Tauris2017]. This will lead to an ultra-stripped star, which will explode as a Type Ib/Ic SN, leaving a NS. Since the associated kick will likely be small [@Tauris2017], the binary system will presumably stay intact, forming a DNS.
Independent of the future evolution of the HMXBs investigated in this study, we highlight that HMXBs and their properties offer the possibility to falsify stellar evolution scenarios and population synthesis models predicting event rates of double degenerate mergers. These simulations often also include some kind of HMXB evolution phase. Thus, the properties of the HMXB population can be used to constrain these models. Therefore, further studies analyzing a large fraction of the HMXB population are imperative.
Efficiency of the accretion mechanism {#sect:x-rays}
=====================================
The X-ray luminosity $L_\mathrm{X}$ of the accreting NS in our HMXBs is related to the accretion rate $S_\mathrm{accr}$ via the accretion efficiency parameter $\epsilon$: $$\label{eq:lx2}
L_\mathrm{X} = \epsilon S_\mathrm{accr} c^{2}\,.$$ The actual value of the accretion efficiency depends on the detailed physics of the accretion mechanism. Comparing the X-ray luminosities measured with *Swift* during our *HST* observation (see Table\[table:Lxray\]) with theoretical expectations, we are able to put observational constraints on $\epsilon$ in some of the systems in our sample.
In the Bondi-Hoyle-Lyttleton formalism [e.g., @Davidson1973; @Martinez-Nunez2017], the stellar wind accretion rate, $S_\mathrm{accr}$, can be estimated as $$\begin{aligned}
\begin{aligned}
\label{eq:accr}
S_\mathrm{accr} \approx 1.5 \times 10^{7} \left( \frac{\dot{M}}{M_\odot\,\mathrm{yr}^{-1}} \right) \left( \frac{v_\mathrm{rel}}{10^{8} \mathrm{cm}\,\mathrm{s}^{-1}} \right)^{-4} \\ \left( \frac{M_\mathrm{NS}}{M_\odot} \right) \left( \frac{d_\mathrm{NS}}{R_\odot} \right)^{-2} S_\mathrm{Edd}\,,
\end{aligned}\end{aligned}$$ where $d_\mathrm{NS}$ is the orbital separation and $S_\mathrm{Edd}$ is the Eddington accretion rate, which is defined as $$\label{eq:Sedd}
S_\mathrm{Edd} = \frac{L_\mathrm{Edd}}{c^{2}}\,,$$ with $L_\mathrm{Edd}$ being the Eddington luminosity. For a fully ionized plasma that only consists of helium and hydrogen, $L_\mathrm{Edd}$ can be approximated as $$\label{eq:Ledd}
L_\mathrm{Edd} \approx 2.55 \times 10^{38} \frac{M_\mathrm{NS}/M_\odot}{1 + X_\mathrm{H}}\,\mathrm{erg}\,\mathrm{s}^{-1}\,.$$
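As a minimal numerical illustration, the following Python sketch combines Eq.(\[eq:lx2\]) to (\[eq:Ledd\]) into an estimate of the accretion efficiency; all values in the example call are placeholders rather than parameters derived for a specific target.

```python
# Minimal sketch combining the relations above: Eddington luminosity,
# Eddington accretion rate, Bondi-Hoyle-Lyttleton accretion rate, and the
# accretion efficiency epsilon = L_X / (S_accr * c^2). All example values
# are placeholders.
C_LIGHT = 2.998e10  # speed of light [cm/s]

def l_edd(m_ns_msun, x_h):
    """Eddington luminosity [erg/s] of a fully ionized H/He plasma."""
    return 2.55e38 * m_ns_msun / (1.0 + x_h)

def s_accr(mdot_msun_yr, v_rel_kms, m_ns_msun, d_ns_rsun, x_h):
    """Bondi-Hoyle-Lyttleton accretion rate [g/s] onto the NS."""
    s_edd = l_edd(m_ns_msun, x_h) / C_LIGHT ** 2  # Eddington accretion rate [g/s]
    return (1.5e7 * mdot_msun_yr
            * (v_rel_kms * 1.0e5 / 1.0e8) ** -4
            * m_ns_msun
            * d_ns_rsun ** -2
            * s_edd)

def accretion_efficiency(l_x, mdot_msun_yr, v_rel_kms, m_ns_msun, d_ns_rsun, x_h):
    """Accretion efficiency epsilon for an observed X-ray luminosity [erg/s]."""
    return l_x / (s_accr(mdot_msun_yr, v_rel_kms, m_ns_msun, d_ns_rsun, x_h) * C_LIGHT ** 2)

# Placeholder example call (not the parameters of a specific target):
print(accretion_efficiency(l_x=1.0e34, mdot_msun_yr=1.0e-6, v_rel_kms=600.0,
                           m_ns_msun=1.4, d_ns_rsun=50.0, x_h=0.6))
```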
The hydrogen mass fraction $X_\mathrm{H}$ of the accreted material is obtained from our spectral analyses. The orbital separation between the donor star and the NS as well as the relative velocity of the matter passing by the NS are phase dependent. To allow for a meaningful comparison between the observed and the predicted X-ray flux, these parameters need to be calculated for the specific phase of our simultaneous *HST* and *Swift* observation. This is possible for only two systems in our sample (BD+6073 and LMVel) because an estimate of the inclination $i$ is prerequisite for these calculations. To derive $i$, we make use of the mass function $$\label{eq:massfunc}
f(M) = \frac{M_\mathrm{NS}^{3} \sin^{3} i}{(M_\mathrm{spec} + M_\mathrm{NS})^{2}}\,.$$ For BD+6073 a mass function of $f(M) = 0.0069\,M_\odot$ is derived by @Gonzalez-Galan2014, while @Gamen2015 determine $f(M) = 0.004\,M_\odot$ for LMVel. Using the spectroscopic mass of the donor stars as derived from our spectral analyses, we are able to estimate the inclination to about $38\degr$ and $50\degr$ for BD+6073 and LMVel, respectively.
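This step amounts to solving Eq.(\[eq:massfunc\]) for $\sin i$ under an assumed NS mass; a minimal sketch with placeholder input values reads:

```python
# Sketch: invert the mass function for the orbital inclination, assuming a
# neutron-star mass of 1.4 Msun. The donor mass and f(M) below are
# placeholders, not the values adopted for a specific system.
import math

def inclination_deg(f_m, m_spec, m_ns=1.4):
    """Inclination [deg] from f(M) = M_NS^3 sin^3(i) / (M_spec + M_NS)^2."""
    sin3_i = f_m * (m_spec + m_ns) ** 2 / m_ns ** 3
    sin_i = sin3_i ** (1.0 / 3.0)
    if sin_i > 1.0:
        raise ValueError("no solution: f(M) too large for the assumed masses")
    return math.degrees(math.asin(sin_i))

print(inclination_deg(f_m=0.005, m_spec=20.0))
```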
With the inclination at hand and the orbital period as well as the eccentricity from Table\[table:orbital\], we solve the Kepler equation numerically. This allows us to derive the phase dependent distance between the NS and the donor star $d_\mathrm{NS}$. The wind velocity at the position of the NSs during our *HST* and *Swift* observations can then be derived from the atmosphere models.
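A minimal sketch of this step, solving Kepler's equation for the eccentric anomaly by Newton iteration and evaluating the separation $d_\mathrm{NS} = a\,(1 - e\cos E)$ (phase zero at periastron in this illustration; the orbital elements are placeholders):

```python
# Sketch: solve Kepler's equation E - e*sin(E) = 2*pi*phase by Newton
# iteration and return the instantaneous separation a*(1 - e*cos(E)).
# Phase zero is taken at periastron; a and e below are placeholders.
import math

def separation(phase, a, e, tol=1.0e-10, max_iter=50):
    """Donor-NS separation (same units as a) at orbital phase in [0, 1)."""
    mean_anom = 2.0 * math.pi * (phase % 1.0)
    ecc_anom = mean_anom if e < 0.8 else math.pi  # standard starting guess
    for _ in range(max_iter):
        delta = ((ecc_anom - e * math.sin(ecc_anom) - mean_anom)
                 / (1.0 - e * math.cos(ecc_anom)))
        ecc_anom -= delta
        if abs(delta) < tol:
            break
    return a * (1.0 - e * math.cos(ecc_anom))

# Placeholder orbit: a = 60 Rsun, e = 0.5
for phase in (0.0, 0.25, 0.5):
    print(phase, round(separation(phase, a=60.0, e=0.5), 1))
```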
With these properties, we are able to derive the accretion efficiencies using Eq.(\[eq:lx2\]) to (\[eq:Ledd\]). For BD+6073, we obtain $\epsilon = 1.1 \times 10^{-3}$, while it is approximately a factor of two higher for LMVel ($\epsilon = 2.1 \times 10^{-3}$). Although all stellar and wind parameters of the donor stars are constrained well, these results must be treated with some caution because of the discrepancies of the spectral fits described in Appendix\[sec:comments\]. @Shakura2014 suggest that at low-luminosity states, SFXTs can be at the stage of quasi-spherical settling accretion when the accretion rate on to the NS is suppressed by a factor of $\sim 30$ relative to the Bondi-Hoyle-Lyttleton value. This might be sufficient to explain the low accretion efficiency deduced for LMVel and BD+6073. Alternatively, @Grebenev2007 and @Bozzo2008 suggest that a magnetic gating or a propeller mechanism could strongly inhibit the accretion in SFXTs.
Wind accretion vs. Roche-lobe overflow {#sect:roche}
======================================
All HMXBs in our sample are thought to accrete matter only from the donor-star wind or from its decretion disk. This picture is called into question by our analyses. For a subset of our sample, RLOF during periastron passage seems plausible.
The Roche-lobe radii of the donor stars in our sample are estimated using a generalization of the fitting formula by @Eggleton1983 for nonsynchronous, eccentric binary systems provided by @Sepinsky2007. For BD+6073 and LMVel, the Roche-lobe radius at periastron, $R_\mathrm{rl, peri}$, is smaller than the stellar radius (see Table\[table:parameters\]). During this orbital phase, matter can be directly transferred to the NS via the inner Lagrangian point. We note that this finding has no influence on the estimates performed in the previous section since the *HST* and corresponding *Swift* observations of these sources were performed during a quiescent X-ray phase.
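For orientation, the following sketch evaluates the classical fitting formula of @Eggleton1983 at periastron, i.e., with $a(1-e)$ in place of $a$; this is only a crude stand-in for the nonsynchronous, eccentric generalization of @Sepinsky2007 that is actually used for the values quoted above, and the system parameters in the example call are placeholders.

```python
# Rough sketch: classical Eggleton (1983) Roche-lobe radius, evaluated with
# the periastron distance a*(1 - e) as a crude stand-in for the eccentric,
# nonsynchronous generalization of Sepinsky et al. (2007). Parameters in
# the example call are placeholders.
import math

def roche_lobe_periastron(a_rsun, e, m_donor_msun, m_ns_msun=1.4):
    """Approximate donor Roche-lobe radius at periastron [Rsun]."""
    q = m_donor_msun / m_ns_msun
    f_q = 0.49 * q ** (2.0 / 3.0) / (0.6 * q ** (2.0 / 3.0)
                                     + math.log(1.0 + q ** (1.0 / 3.0)))
    return a_rsun * (1.0 - e) * f_q

print(round(roche_lobe_periastron(a_rsun=60.0, e=0.5, m_donor_msun=25.0), 1))
```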
Interestingly, both of these sources are classified as X-ray transients. For BD+6073, the X-ray light curve folded with the orbital phase peaks around $\phi \approx 0.2$, corresponding to 3 to 4d after periastron [@Gonzalez-Galan2014]. This behavior is usually attributed to an increased wind accretion-rate during periastron passage because of the lower wind velocity and higher wind density during this phase. However, for BD+6073, the reason could be direct overflow of matter, which follows the gravitational potential. The delay between periastron passage and outburst might be due to inhibition of direct accretion onto the NS because of magnetic and centrifugal gating mechanisms [@Illarionov1975; @Grebenev2007; @Bozzo2008].
For LMVel, the X-ray outbursts cluster around periastron as well [@Gamen2015]. In contrast to BD+6073, however, the outbursts are also observed prior to periastron passage ($\phi = 0.84-0.07$), suggesting that in this case a combination of donor-wind capture and RLOF might feed the accretion.
For these two systems, the amount of mass transfer via RLOF needs to be relatively limited, since otherwise these systems are expected to quickly enter a CE phase. Moreover, we note that our estimates of the Roche-lobe radius should be treated with caution, since some of the orbital parameters of our binary systems, such as the inclination, are not well constrained.
Hydrodynamical simulations [@Mohamed2007] suggest that a further mode of mass transfer plays a role in certain binary systems. This so-called wind Roche-lobe overflow (WRLOF) invokes a focusing of the primary's stellar wind towards the secondary. Recently, @ElMellah2019a have suggested that this mechanism is chiefly responsible for the formation of so-called ultra-luminous X-ray sources (ULXs), and that it also plays a role in certain HMXBs. Wind Roche-lobe overflow becomes important when the radius where the wind is accelerated beyond the escape velocity is comparable to the Roche-lobe radius [@Mohamed2007; @Abate2013]. This condition is fulfilled for all wind-fed HMXBs in our sample (HD153919, HD306414, BD+6073, and LMVel). However, the detailed calculations by @ElMellah2019a suggest that this criterion might be too crude. Their scenario for NSs is roughly applicable to HD306414. For this object, their model predicts WRLOF at periastron, but not at apastron.
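A minimal sketch of this simple criterion, assuming a $\beta$-velocity law for the wind and placeholder stellar parameters:

```python
# Sketch of the simple WRLOF criterion quoted above: find the radius at
# which a beta-law wind first exceeds the local escape velocity and compare
# it with the Roche-lobe radius. All stellar/wind values are placeholders.
import math

G, M_SUN, R_SUN = 6.674e-8, 1.989e33, 6.957e10  # cgs constants

def wind_speed_kms(r_rsun, r_star_rsun, v_inf_kms, beta=1.0):
    """Beta-velocity law v(r) = v_inf * (1 - R*/r)**beta."""
    return v_inf_kms * (1.0 - r_star_rsun / r_rsun) ** beta

def escape_speed_kms(r_rsun, m_star_msun):
    """Local escape velocity sqrt(2 G M / r)."""
    return math.sqrt(2.0 * G * m_star_msun * M_SUN / (r_rsun * R_SUN)) / 1.0e5

def wind_escape_radius_rsun(r_star_rsun, m_star_msun, v_inf_kms, beta=1.0):
    """Smallest radius at which the wind exceeds the local escape speed."""
    r = 1.001 * r_star_rsun
    while (wind_speed_kms(r, r_star_rsun, v_inf_kms, beta)
           < escape_speed_kms(r, m_star_msun)
           and r < 1.0e3 * r_star_rsun):
        r *= 1.01
    return r

# Placeholder donor: R* = 20 Rsun, M* = 25 Msun, v_inf = 1500 km/s;
# compare the result with the Roche-lobe radius from the previous section.
print(round(wind_escape_radius_rsun(20.0, 25.0, 1500.0), 1))
```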
WRLOF seems to be a possible mass-transfer mechanism in wind-fed HMXBs, but presumably not during all orbital phases. Mass-transfer in these systems can be significantly higher compared to the classical Bondi-Hoyle-Lyttleton mechanism [@Podsiadlowski2007]. However, this is not directly reflected in the X-ray luminosities of these objects, which are moderate (see e.g., Table\[table:Lxray\]). So, an effective gating mechanism seems to be at work in these systems that hampers the accretion of the transferred material (see also discussion in @ElMellah2019a on VelaX-1).
Summary and Conclusions {#sect:conclusions}
=======================
For this study, we observed six HMXBs with the *HST* STIS and secured high S/N, high-resolution UV spectra. Simultaneously with these *HST* observations, we obtained *Swift* X-ray data to characterize the X-ray emission of the NSs. These data sets were used to determine the wind and stellar parameters of the donor stars in these HMXBs by means of state-of-the-art model atmospheres, accounting for the influence of the X-rays on the donor-star atmosphere. The wind parameters of these objects were deduced for the first time. Based on these analyses, we draw the following conclusions:
1\) The donor stars occupy the same parameter space as the putatively single OB-type stars from the Galaxy. Thus, the winds of these stars do not appear to be peculiar, in contrast to earlier suggestions.
2\) There is no systematic difference between the wind parameters of the donor stars in SFXTs compared to persistent HMXBs.
3\) All SFXTs in our sample are characterized by high orbital eccentricities. Thus, the wind velocities at the position of the NS and, consequently, the accretion rates are strongly phase dependent. This leads us to conclude that the orbital eccentricity is decisive for the distinction between SFXTs and persistent HMXBs.
4\) In all investigated systems, the orbital velocities of the NSs are comparable to the wind velocity at their position. Therefore, the orbital velocity is important and can not be neglected in modeling the accretion or in estimating the accretion rate.
5\) Since all systems in our study have very tight orbits, the donor-star wind has not yet reached its terminal velocity when passing the position of the NS. While this has been reported earlier, it is in strong contrast to what is often implicitly assumed in the wider literature.
6\) For BD+6073 and LMVel, RLOF potentially occurs during periastron passage. Moreover, WRLOF seems plausible in a variety of HMXBs.
7\) The donor stars of HD153919, BD+6073, and LMVel are in advanced evolutionary stages, as indicated by their abundance patterns. They are on the way to become red supergiants and will thus engulf their NS companion soon.
8\) The carbon and oxygen abundances of HD153919, LMVel, and HD306414 suggest that their atmospheres were polluted by material accreted from the wind of the NS progenitor or SN ejecta.
9\) The donor stars of HD153919 and LMVel are overluminous for their current masses.
10\) Statistically, the donor stars in our sample rotate faster than single OB-type stars typically do, suggesting mass accretion because of RLOF in the past. This is consistent with the orbital parameters of these systems.
11\) Most likely, the donor stars and the NSs of the HMXBs studied in this work will merge in an upcoming CE phase, forming some kind of Thorne–Żytkow objects.
12\) The accretion efficiency parameters $\epsilon$ of the NS in our sample are quite low, suggesting that either spherical settling accretion or a gated accretion mechanism was at work during our observations.
We thank the anonymous referee for their constructive comments. The first author of this work (R.H.) is supported by the Deutsche Forschungsgemeinschaft (DFG) under grant HA 1455/28-1. L.M.O. acknowledges support from the Verbundforschung grant 50 OR 1809. J.M.T. acknowledges the research grant ESP2017-85691-P. A.A.C.S. is supported by the Deutsche Forschungsgemeinschaft (DFG) under grant HA 1455/26. F.F., K.S., and A.B. are grateful for support from STScI Grant HST-GO-13703.002-A. T.S. acknowledges support from the European Research Council (ERC) under the European Union’s DLV-772225-MULTIPLES Horizon 2020 research and innovation programme. Some/all of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-*HST* data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. This research has made use of the VizieR catalogue access tool, Strasbourg, France. The original description of the VizieR service was published in A&AS 143, 23.
\[onlinematerial\]
Additional tables {#sec:addtables}
=================
[lllccSS]{} Identifier & Wavelength & Instrument & Resolving power & Observation date & &
------------------------------------------------------------------------
\
& & & & & &\
HD153919 & 905-1187 & FUSE & 20000 & 2003-07-30 & 52850.91896991 & 0.945
------------------------------------------------------------------------
\
& 905-1187 & FUSE & 20000 & 2003-07-31 & 52851.76329861 & 0.193\
& 905-1187 & FUSE & 20000 & 2003-04-07 & 52736.60747685 & 0.439\
& 905-1187 & FUSE & 20000 & 2003-08-02 & 52853.3525463 & 0.658\
& 1150-1700 & STIS/*HST* & 45800 & 2015-02-22 & 57075.26050776 & 0.138\
& 3630-7170 &FEROS/ESO-2.2m&48000 & 2005-06-25 & 53546.31882176 & 0.773\
& 3630-7170 &FEROS/ESO-2.2m&48000 & 2009-05-03 & 54954.27310296 & 0.457\
& 3630-7170 &FEROS/ESO-2.2m&48000 & 2011-05-18 & 55699.24895024 & 0.816\
BD+6073 & 1150-1700 & STIS/*HST* & 45800 & 2015-01-01 & 57023.49188341 & 0.841\
& 3630-7170 & FIES/NOT & 25000 & 2013-01-29 & 56321.83944792 & 0.038\
LMVel & 905-1187 & FUSE & 20000 & 1999-12-26 & 51538.83758102 & 0.366\
& 1150-1700 & STIS/*HST* & 45800 & 2015-07-16 & 57219.38183042 & 0.855\
& 1150-1980 & SWP/IUE & 10000 & 1994-12-09 & 49607.75356481 & 0.709\
& 1150-1980 & SWP/IUE & 10000 & 1994-12-09 & 49607.88165509 & 0.695\
& 1150-1980 & SWP/IUE & 10000 & 1994-12-09 & 49607.99155093 & 0.684\
& 1850-3350 & LWP/IUE & 15000 & 1994-12-09 & 49607.84041667 & 0.7\
& 1850-3350 & LWP/IUE & 15000 & 1994-12-09 & 49607.95228009 & 0.688\
& 3630-7170 &FEROS/ESO-2.2m&48000 & 2006-01-04 & 53739.20960784 & 0.806\
& 3630-7170 &FEROS/ESO-2.2m&48000 & 2007-04-18 & 54208.99984617 & 0.58\
HD306414 & 1150-1700 & STIS/*HST* & 45800 & 2015-08-16 & 57250.78585894 &\
& 3630-7170 &FEROS/ESO-2.2m&48000 & 2007-01-17 & 54117.14642524 &\
& 3630-7170 &FEROS/ESO-2.2m&48000 & 2007-02-13 & 54144.06818366 &\
BD+532790 & 1150-1700 & STIS/*HST* & 45800 & 2015-08-15 & 57249.56224783 &\
& 3230-7530 & B&C/Asiago & 150-400 & - & &\
& 3950-5780 & DADOS/OST & 3500 & 2016-04-20 & 57498.94453704 &\
& 5290-7140 & DADOS/OST & 3500 & 2016-03-04 & 57451.02402778 &\
& 14800-17800 & NICS/TNG & 1150 & 2014-09-01 & 56901 &\
& 19500-23400 & NICS/TNG & 1250 & 2014-09-01 & 56901 &\
HD100199 & 905-1187 & FUSE & 20000 & 2000-03-24 & 51627.24236111 &\
& 1150-1700 & STIS/*HST* & 45800 & 2015-01-16 & 57038.99923304 &\
& 3630-7170 &FEROS/ESO-2.2m&48000 & 2007-06-27 & 54278.00155806 &\
& 3630-7170 &FEROS/ESO-2.2m&48000 & 2007-06-29 & 54280.96512831 &\
[lSSSSSS]{} & & & & & &
------------------------------------------------------------------------
\
$U$(mag) & 6.06 & 9.79 & 7.053 & 10.12 & 9.42 & 7.351
------------------------------------------------------------------------
\
$B$(mag) & 6.724 & 10.21 & 7.722 & 10.52 & 10.11 & 8.19\
$V$(mag) & 6.543 & 9.64 & 7.558 & 10.11 & 9.84 & 8.187\
$R$(mag) & 6.43 & 9.31 & 7.47 & 9.84 & 9.64 & 8.18\
$G$(mag) & 6.38 & 9.4 & 7.449 & 9.703 & 9.726 & 8.176\
$I$(mag) & 5.93 & 9.072 & & 9.41 & 9.43 & 8.22\
$J$(mag) & 5.744 & 8.389 & 6.935 & 8.548 & 9.218 & 8.048\
$H$(mag) & 5.639 & 8.265 & 6.887 & 8.340 & 9.116 & 8.067\
$K_S$(mag) & 5.496 & 8.166 & 6.808 & 8.185 & 9.038 & 8.009\
$W1$(mag) & 5.36 & 8.104 & 6.756 & 8.043 & 8.7 & 8.063\
$W2$(mag) & 5.109 & 8.085 & 6.687 & 7.982 & 8.562 & 8.012\
$W3$(mag) & 4.927 & 7.994 & 6.585 & 7.807 & 8.191 & 7.625\
$W4$(mag) & 4.273 & 7.521 & 6.207 & 7.412 & 7.9 & 7.041\
MSX6C A(Jy)& 0.6344& & & & &\
[lcccSSS]{} Identifier & ObsIDs & Observation mode & Observation date & &
------------------------------------------------------------------------
\
& & & & &\
HD153919 & 00033631008 & WT & 2015-02-22 & 57075.17872458 & 0.141
------------------------------------------------------------------------
\
BD+6073 & 00032620025 & PC & 2015-01-01 & 57023.68082204 & 0.853\
LMVel & 00037881103 & PC & 2015-07-08 & 57211.14901227 & 0.008\
& 00037881107 & PC & 2015-07-16 & 57219.12357639 & 0.901\
HD306414 & 00030881043 & PC & 2015-08-16 & 57250.84636196 &\
BD+532790 & 00033914003 & WT & 2015-08-15 & 57249.21537077 &\
HD100199 & 00035224007 & PC & 2015-01-15 & 57037.81825268 &\
[lSS|lSS]{} Ion & & & Ion & &
------------------------------------------------------------------------
\
& 22 & 231 & & 43 & 903
------------------------------------------------------------------------
\
& 1 & 0 & & 17 & 136\
& 35 & 595 & & 0 & 0\
& 26 & 325 & & 0 & 0\
& 1 & 0 & & 1 & 0\
& 38 & 703 & & 24 & 276\
& 36 & 630 & & 23 & 253\
& 38 & 703 & & 1 & 0\
& 20 & 190 & & 12 & 66\
& 14 & 91 & & 11 & 55\
& 32 & 496 & & 1 & 0\
& 40 & 780 & & 1 & 0\
& 25 & 300 & & 13 & 40\
& 29 & 406 & & 18 & 77\
& 15 & 105 & & 22 & 107\
& 37 & 666 & & 29 & 194\
& 33 & 528 & & 19 & 87\
& 29 & 406 & & 14 & 49\
& 36 & 630 & & 15 & 56\
& 16 & 120 & & 1 & 0\
& 0 & 0 & & 0 & 0\
& 0 & 0 & & 0 & 0\
& 23 & 253 & & 0 & 0\
& 11 & 55 & & 0 & 0\
& 10 & 45 & & 0 & 0\
& 1 & 0 & & 0 & 0\
& 1 & 0 & & 0 & 0\
& 32 & 496 & & &\
[lcccccc]{} & & &
------------------------------------------------------------------------
\
& This study & BONNSAI & This study & BONNSAI & This study & BONNSAI\
$T_{\ast}$ (kK) & $35^{+2}_{-3}$ & $35^{+3}_{-3}$ & $30^{+3}_{-3}$ & $29^{+3}_{-3}$ & $30^{+3}_{-3}$ & $31^{+3}_{-3}$
------------------------------------------------------------------------
\
$\log L$ ($L_\odot$) & $5.7^{+0.1}_{-0.1}$ & $5.68^{+0.09}_{-0.08}$ & $5.3^{+0.1}_{-0.1}$ & $5.34^{+0.09}_{-0.09}$ & $4.9^{+0.1}_{-0.1}$ & $4.9^{+0.1}_{-0.1}$
------------------------------------------------------------------------
\
$\log g_\ast$ (cms$^{-2}$) & $3.4^{+0.4}_{-0.4}$ & $3.5^{+0.2}_{-0.2}$ & $3.2^{+0.2}_{-0.2}$ & $3.3^{+0.2}_{-0.3}$ & $3.8^{+0.3}_{-0.5}$ & $3.8^{+0.2}_{-0.3}$
------------------------------------------------------------------------
\
$v \sin i$ (kms$^{-1}$) & $110^{+30}_{-50}$ & $100^{+40}_{-40}$ & $150^{+20}_{-20}$ & $150^{+20}_{-20}$ & $200^{+50}_{-50}$ & $200^{+45}_{-57}$
------------------------------------------------------------------------
\
$X_{\rm H}$ (mass fr.) & $0.65^{+0.1}_{-0.2}$ & $0.72^{+0.00}_{-0.01}$ & $0.5^{+0.1}_{-0.1}$ & $0.72^{+0.00}_{-0.2}$ &
------------------------------------------------------------------------
\
$\log \dot{M}$ ($M_\odot \mathrm{yr}^{-1}$) & $-5.6^{+0.2}_{-0.3}$ & $-5.7^{+0.2}_{-0.2}$ & $-6.1^{+0.2}_{-0.2}$ & $-6.2^{+0.2}_{-0.2}$ &
------------------------------------------------------------------------
\
$M$ ($M_\odot$) & $34^{+100}_{-28}$ & $43^{+5}_{-6}$ & $16^{+29}_{-11}$ & $24^{+6}_{-2}$ & $27^{+67}_{-23}$ & $20^{+2}_{-2}$
------------------------------------------------------------------------
\
Comments on the individual stars {#sec:comments}
================================
#### HD153919
(4U1700-37): the donor star in this persistent HMXB has the earliest spectral type in our sample, exhibiting prominent emission lines in its spectrum. Based on our spectra and our spectral analysis we would classify this donor star as O6If/WN9, in contrast to the O6Iafpe classification assigned by @Sota2014. We would assign this different spectral type, since from our perspective this object is actually evolving from an Of to a WN star, in contrast to what is discussed by @Sota2014 for the O6Iafpe classification. In this sense, the O6If/WN9 category would be an extension of the Of/WN class to cooler temperatures, reminiscent of the old “cool slash” category. From our perspective, an O6If/WN9 classification would also better represent the wind parameters of this object, which point to a star that is on its way to the WR stage. The derived mass-loss rate is compatible with that of other Of/WN stars [@Hainich2014].
The basic stellar parameters derived in this work are in good agreement with the previous results obtained by @Clark2002. The mass-loss rate derived by means of our models accounting for wind inhomogeneities is almost a factor of four lower than the value obtained by @Clark2002 with unclumped models. The latter authors already have noted that moderate wind clumping would reduce their derived mass-loss rate. Taking into account the uncertainties of the individual studies, this brings the two works into agreement. We note that the terminal velocity determined from our *HST* spectrum is slightly higher (by 150$\mathrm{km}\,\mathrm{s}^{-1}$) than obtained by @Clark2002.
Interestingly, the hydrogen abundance deduced from our spectral fit coincides (within the uncertainties) with the value assumed by @Clark2002. While we also derived a supersolar nitrogen abundance, it is a factor of three lower than the value determined by @Clark2002. The carbon and oxygen abundances are in better agreement. Like @Clark2002, we determine a solar carbon abundance and an oxygen abundance of about $0.5\,X_{\mathrm{O}, \odot}$.
#### BD+6073
(IGRJ00370+6122): according to @Gonzalez-Galan2014, this system is intermediate between a persistent HMXB and an “intermediate” SFXT because of its exceptional X-ray properties. In contrast to almost all other donor stars in our sample, a microturbulence velocity of $\xi = 17^{+2}_{-2}\,\mathrm{km}\,\mathrm{s}^{-1}$ is required to achieve a satisfying fit. Most of the stellar parameters we deduce for BD+6073 agree very well with the results by @Gonzalez-Galan2014. While these authors assume a wind-strength of , our detailed wind analysis results in a value that is almost a factor of three higher. The derived abundances also partly differ. The carbon and nitrogen abundances are a factor of about 1.5 higher in our study than the results presented by @Gonzalez-Galan2014, while our oxygen abundance is lower by the same factor. The deviation is largest for the magnesium abundance, which is twice as high in our study compared to their value. The derived silicon abundances are approximately compatible. The same holds for the hydrogen abundance, which is only a few percent lower in this work.
In the fit shown in Fig.\[fig:bd+6073\], the model obviously falls short of reproducing the N[v]{} $\lambda\lambda$1239, 1243 and C[iv]{} $\lambda\lambda$5801, 5812 resonance doublets with the observed strength. This model has been calculated with an X-ray irradiation that is consistent with the *Swift* observation. However, if we adopt an approximately 70 times higher X-ray irradiation, those resonance doublets perfectly match the observation, as demonstrated in Fig.\[fig:bd+6073\_2\]. Obviously, the stronger X-ray field causes sufficient photo- and Auger ionization to populate the N[v]{} and C[iv]{} ground states.
At this point, we have to realize that the X-ray measurement with *Swift* was not strictly simultaneous to our HST exposure, but was taken 4.5h later for technical reasons. Thus, given the X-ray variability of this target, we conclude that at the exact time of the HST observation the X-ray irradiation was somewhat enhanced due to some kind of flare.
#### LMVel
(IGRJ08408-4503): to our knowledge, the spectral analysis presented here is the first one for LMVel. Although the overall spectral fit represents the observed spectrum very well, we are not able to achieve a satisfactory fit of the line at 1245Å, which is stronger in the model than in the observation. This might be a result of the neglect of macroclumping in our analysis, which in turn might imply an underestimation of the mass-loss rate [@Oskinova2007].
Similar to BD+6073, the model calculated with an X-ray irradiation consistent with the *Swift* data falls short of reproducing the N[v]{} $\lambda\lambda$1239, 1243 doublet (see Fig.\[fig:lm\_vel\]). Those models that are able to reproduce this doublet to a satisfactory level (see Fig.\[fig:lm\_vel\_2\]) require an X-ray flux that is roughly 300 times higher than measured by the *Swift* observations. For technical reasons, the *Swift* data was taken 6.2h earlier than the HST data. Thus, this X-ray transient might have experienced an X-ray outburst during our HST observations.
As for HD153919, we find that this donor star is hydrogen and oxygen depleted, while the carbon abundance is solar and the nitrogen abundance is supersolar.
#### BD+532790
(4U2206+54): unfortunately, we only have low-resolution optical spectra with a low S/N at hand for this object, which is one reason for the relatively large error margins for some of the stellar parameters listed in Table\[table:parameters\]. Nevertheless, these spectra clearly show a double-peaked H$\alpha$ emission line, as is typical for the decretion disks of Be- and Oe-type stars. The same spectral characteristic is exhibited by the hydrogen lines in the H- and K-band spectra shown in Fig.\[fig:bd+532790\]. However, @Blay2006 [see also @Negueruela2001] argue that this star does not fulfill all criteria of a classical Be-type star, but is rather a peculiar O9.5V star. While the donor star in this system is analyzed in this work, it is not considered in the discussion section of this paper because of its unclear HMXB type.
Since our atmosphere models are restricted to spherical symmetry, we cannot account for asymmetries caused by the high rotational velocities of Be- and Oe-type stars, such as oblateness or decretion disks. Nevertheless, an adequate spectral fit can be achieved for most parts of the observed spectrum, with the exception of the hydrogen lines that are filled in by the emission from the decretion disk. We also note that the width of the emission peaks of the UV resonance lines cannot be reproduced completely by our model, most likely because of asymmetries in the wind of this star. Since the H$\alpha$ line is dominated by emission from the decretion disk, this line cannot be used to constrain the clumping within the donor-star atmosphere and wind. Therefore, we assume a clumping factor of $D = 10$.
#### HD306414
(IGRJ11215-5952): this system is one of the SFXTs in our sample. Since it was not detected in our *Swift* observations, we had to assume a certain X-ray flux to proceed with the atmosphere model fits.
Massive stars are inherent X-ray sources because of their winds that exhibit an intrinsic instability. This so-called line-driven wind instability [@Lucy1970] gives rise to wind inhomogeneities as well as shocks that can produce X-rays [e.g., @Feldmeier1997; @Runacres2002]. The intrinsic X-ray flux of massive stars is proportional to their stellar luminosity with $L_X\,/\,L \approx 10^{-7}$ [@Pallavicini1981]. In the atmosphere model for this source, we therefore approximated the X-ray flux by two components. For the first one, we used a relatively soft X-ray continuum corresponding to an X-ray temperature of $T_X = 3 \times 10^{6}\,\mathrm{K}$. This component was inserted at a radius of $1.5\,R_\ast$, while the corresponding filling factor was adjusted such that $L_X \approx 10^{-7}\,L$ is produced. To model the contribution of the NS to the X-ray emission, a second X-ray continuum with an X-ray temperature of $T_X = 3 \times 10^{7}\,\mathrm{K}$ was injected at the position of the NS. The filling factor for this component was chosen such that the UV observations are reproduced best by the model, while ensuring that the total X-ray flux is below the detection limit of *Swift*.
The donor star has previously been analyzed by @Lorenzo2014. While we obtain a slightly higher stellar temperature and surface gravity than the latter authors, the luminosity derived in our analysis is 0.2dex lower even after accounting for the difference in the assumed distance. The reason for this discrepancy might be the different reddening estimates. While in our case the reddening is derived from an SED fit spanning from UV to infrared data, the estimate conducted by @Lorenzo2014 is solely based on optical and IR photometry, leading to a significantly higher $R_V$ value of 4.2 and a slightly lower $E_{B-V} = 0.7$. Assuming these values for our model SED does not result in a satisfactory fit, lending confidence to our solution. The lower luminosity obtained from our analysis in comparison to that derived by @Lorenzo2014 also entails a spectroscopic mass that is about 30% lower.
Our spectral analysis based on UV and optical data also results in a significantly lower mass-loss rate than determined by @Lorenzo2014 solely on the basis of optical spectra. This discrepancy in the derived mass-loss rate can be in large part attributed to the neglect of wind inhomogeneities in the spectral analysis by @Lorenzo2014. If we scale the mass-loss rate determined by @Lorenzo2014 according to the clumping factor ($D = 20$) derived in this work, the discrepancy nearly vanishes.
The hydrogen, oxygen, and magnesium abundances determined by our analysis agree very well with the ones obtained by @Lorenzo2014. The carbon abundances coincide at the 20% level, while the nitrogen and silicon abundances are higher by 30% and 40%, respectively, in our study compared to those derived by @Lorenzo2014. These deviations might be a result of different microturbulence velocities assumed in the spectral analyses. Unfortunately, @Lorenzo2014 do not specify the microturbulence velocity they assume. However, a value slightly different from the $\xi = 20^{+5}_{-5}\,\mathrm{km}\,\mathrm{s}^{-1}$ required by our analysis might explain the differences in the abundance measurements.
#### HD100199
(IGRJ11305-6256): in this work we present the first spectral analysis of this Be X-ray binary. The same restrictions as outlined for BD+532790 apply to the spectral modeling of HD100199. Overall, an excellent fit quality could be achieved with the exception of the line cores of the hydrogen lines in the optical. As for BD+532790, we are not able to constrain the clumping and assume $D = 10$.
Spectral fits {#sect:spectra}
=============
{width="90.00000%"}
{width="92.00000%"}
{width="92.00000%"}
{width="92.00000%"}
{width="92.00000%"}
{width="92.00000%"}
{width="92.00000%"}
{width="92.00000%"}
{width="92.00000%"}
[^1]: Based on observations made with the NASA/ESA Hubble Space Telescope (HST), obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'In these short notes we characterize the loxodromic unit vector fields on antipodally punctured Euclidean spheres as the only ones achieving a lower bound for the volume functional depending on the Poincaré indexes around their singularities.'
address:
- 'Dpto. de Matemática, Instituto de Matemática e Estatística, Universidade de Sāo Paulo, R. do Matāo 1010, Sāo Paulo-SP 05508-900, Brazil.'
- 'Dpto de Matemática e Estatística, Instituto de Matemática e Física, Universidade Federal de Pelotas, Rua Gomes Carneiro 1, Pelotas - RS 96001-970, Brazil'
author:
- Jackeline Conrado$^1$
- 'Adriana V. Nicoli$^2$'
- 'Giovanni S. Nunes$^3$'
title: Loxodromic unit vector field on punctured spheres
---
[^1]
Introduction and statement of the results
=========================================
Let $M$ be a closed oriented Riemannian manifold and $\nabla$ the Levi Civita connection. Let $\left\{e_a\right\}^n_{a=1}$ be an orthonormal local frame in $M$ and $\vec{v}:M\to T^1M$ a unit vector field on $M$, where $T^1M$ is equipped with the Sasaki metric. The volume of $\vec{v}$ is defined on [@GW] and [@Jhonson] as $$\begin{aligned}
\label{defvolume}
{{\rm vol}}(\vec{v}) &=& \int_M \Big(1+\sum_{a=1}^{n} \|\nabla_{e_a} \vec{v} \|^2 + \sum_{a_1<a_2} \| \nabla_{e_{a_1}} \vec{v}\wedge \nabla_{e_{a_2}} \vec{v} \|^2 + \cdots \nonumber \\
&\cdots& + \sum_{a_1<\cdots <a_{n-1}}\|\nabla_{e_{a_1}} \vec{v}\wedge \cdots \wedge \nabla_{e_{a_{n-1}}} \vec{v} \|^2
\Big)^{\frac{1}{2}}\nu, \end{aligned}$$ where $\nu$ denote the volume form for $\left\{e_a\right\}^n_{a=1}$.
Clearly ${{\rm vol}}(\vec{v}) \geq {{\rm vol}}(M)$ and ${{\rm vol}}(\vec{v})={{\rm vol}}(M)$ if and only if $\vec{v}$ is a parallel field. Such vector fields are rare because if $M$ admits a unit parallel vector field, then $M$ is locally a Riemannian product. In spheres of even dimension, vector fields with isolated singularities arise naturally in the study of the volume functional. When $M$ is an antipodally punctured sphere, a relation between the volume and the Poincaré index of the vector field was established in [@BCJ].
\[thmBCJ\](see Brito, Chacón, Johnson [@BCJ]) Let $M = \mathbb{S}^{n} \backslash \left\{N,S\right\}$, $n = 2$ or $3$, be the standard Euclidean sphere where two antipodal points $N$ and $S$ are removed. Let $\vec{v}$ be a unit vector field defined on $M$. Then,
\(a) for $n=2$, ${{\rm vol}}(\vec{v})\geq \frac{1}{2} (\pi + |I_{\vec{v}}(N)| + |I_{\vec{v}}(S)| -2){{\rm vol}}(\mathbb{S}^2)$,
\(b) for $n=3$, ${{\rm vol}}(\vec{v})\geq (|I_{\vec{v}}(N)| + |I_{\vec{v}}(S)|){{\rm vol}}(\mathbb{S}^3)$,
where $I_{\vec{v}}(P)$ stands for the Poincaré index of $\vec{v}$ around $P$.
It follows from the proof of Theorem \[thmBCJ\] that the north-south unit vector field realizes this lower bound. A natural question arises: Is this the only one with this property? The answer is no. We prove that the loxodromic unit vector fields are exactly the ones with this property. Precisely:
\[principal\] The lower bound for the volume functional on $\mathbb{S}^2\backslash \{ N, S\}$ is realized if, and only if, $\vec{v}$ is a loxodromic unit vector field.
Preliminaries
=============
Let $M = \mathbb{S}^{2} \backslash \left\{N,S\right\}$ be the standard Euclidean sphere from which two antipodal points $N$ and $S$ are removed. Denote by $g$ the usual metric of $\mathbb{S}^2$ induced from $\mathbb{R}^3$, and by $\nabla$ the Levi-Civita connection associated to $g$. Let $\vec{v}$ be a unit vector field in $M$. We compute the volume of $\vec{v}$ making use of a special global orthonormal frame and the Levi-Civita connection $\nabla$ of $M$. Consider the oriented orthonormal local frame $\left\{ e_1 = \vec{v}^{\perp} , e_2 = \vec{v} \right\}$ on $M$ and its dual basis $\left\{ \omega_1, \omega_2 \right\}$. The connection $1$-forms of $\nabla$ are $\omega_{ij}(X) = g(\nabla_{X}e_j, e_i)$ for $i,j = 1,2$ where $X$ is a vector in the corresponding tangent space. In dimension $2$, the volume (\[defvolume\]) reduces to $$\begin{aligned}
\label{volreduces}
{{\rm vol}}(\vec{v}) = \int_{\mathbb{S}^2}{\sqrt{1+ \kappa^2 + \tau^2}}\nu,\end{aligned}$$ where $\kappa=g(\nabla_{\vec{v}}\vec{v}, \vec{v}^{\perp})$ is the geodesic curvature of the integral curves tangent to $\vec{v}$ and $\tau = g(\nabla_{v^{\perp}}\vec{v}, \vec{v}^{\perp})$ is the geodesic curvature of the curves orthogonal to $\vec{v}$. Also, $$\omega_{12}= \tau \omega_1 + \kappa\omega_2.$$
On a sphere, a *loxodrome* (or *rhumb line*) is a curve crossing all parallels at the same angle. Because of their nature, these curves spiral towards the poles, as can be seen in Figure \[loxodromic\]. Let us define a *loxodromic vector field*.
A **loxodromic unit vector field in $M$** is a unit vector field that forms a constant angle with each parallel in $\mathbb{S}^2$.
Observe that the north-south unit vector field is a loxodromic unit vector field in $M$.
![Loxodromic curve on $\mathbb{S}^2$[]{data-label="loxodromic"}](loxodromica){width="8cm"}
Let $\mathbb{S}^1_{\varphi}$ be the parallel of $\mathbb{S}^2$ at latitude $\varphi \in (-\frac{\pi}{2}, \frac{\pi}{2})$ (see Figure \[Svarphi\]). Let $\{\vec{u},\vec{n}\}$ be an oriented frame where $\vec{u}$ is tangent to $\mathbb{S}^1_{\varphi}$ and $\vec{n}$ is parallel to a south-north meridian. Let $\theta \in [0,\pi/2]$ be the oriented angle from $\vec{u}$ to $\vec{v}$. Then ${\vec{u}=\sin \theta \vec{v}^{\perp} + \cos \theta \vec{v} }$.
![$\mathbb{S}^1_{\varphi}$ be the parallel of $\mathbb{S}^2$ at latitude $\varphi \in (-\frac{\pi}{2}, \frac{\pi}{2})$.[]{data-label="Svarphi"}](Svarphi){width="8cm"}
Proof of the theorem
====================
Consider $\mathbb{S}^2 = \mathbb{S}^+ \cup \mathbb{S}^-$, where $\mathbb{S}^+$ and $\mathbb{S}^-$ are the northern and southern hemisphere, respectively.
\[lemI\] Let $\vec{v}$ be a unit vector field. If $\vec{v}$ satisfies $\left| \sin \varphi \right| = \sqrt{\kappa^2+\tau^2} \cos \varphi$, then $\vec{v}$ realizes the lower bound for the volume on $\mathbb{S}^2$. On the other hand, if $\vec{v}$ realizes the lower bound for the volume on $\mathbb{S}^2$, then $\vec{v}$ satisfies $$\begin{aligned}
\label{cond_sharpness}
i)\left| \sin \varphi \right| = \sqrt{\kappa^2+\tau^2} \cos \varphi \hspace{0.4cm} \mbox{and} \hspace{0.4cm} ii) \kappa\sin \theta = \tau \cos \theta.\end{aligned}$$
Suppose that $\vec{v}$ satisfies $\left| \sin \varphi \right| = \sqrt{\kappa^2+\tau^2} \cos \varphi$.\
We analyze two cases: north and south.
In the first case consider the northern hemisphere $\mathbb{S}^+$.
Recall the general inequality $\sqrt{a^2 + b^2} \geq |a\cos \beta + b \sin \beta|$, valid for any $a$, $b$, $\beta \in \mathbb{R}$. If $b\cos \beta = a \sin \beta$, then $$\label{igualdade}\sqrt{a^2 + b^2} = \left|a\cos \beta + b \sin \beta \right|.$$
Considering the positive part of $\left| \sin \varphi \right| = \sqrt{\kappa^2+\tau^2} \cos \varphi$ and (\[igualdade\]), we have $$\sqrt{1+\kappa^2+\tau^2}=|\cos{\varphi}+\sqrt{\kappa^2+\tau^2}\sin{\varphi}|.$$
Since we are in $\mathbb{S}^+$, i.e., $0 \leq \varphi < \pi/2$, this implies that $$\sqrt{1+\kappa^2+\tau^2}=\cos{\varphi}+\sqrt{\kappa^2+\tau^2}\sin{\varphi},$$ and by the hypothesis, we get $$\label{cond_sharpnessI}
\sqrt{\kappa^2+\tau^2} = \tan{\varphi}.$$
Then $$\label{igualdadevolume}
\sqrt{1 + \kappa^2+\tau^2} = \cos \varphi + \sqrt{\kappa^2+\tau^2} \sin \varphi = \cos \varphi + \tan \varphi \sin \varphi$$ where $0 \leq \varphi < \pi/2.$
Denote by $\nu'$ the induced volume form to ${\mathbb{S}^1}_{\varphi}$. From (\[volreduces\]) and (\[igualdadevolume\]) we conclude $$\label{vol_hemis_norte}
{{\rm vol}}(\vec{v})_{|_{\mathbb{S}^+}} = \int_{\mathbb{S}^+}{\cos \varphi + \sqrt{\kappa^2 + \tau^2} \sin \varphi \nu}$$ $$\hspace{2.3cm}= \int_0^{\frac{\pi}{2}}{ \int_{\mathbb{S}^1_{\varphi}} \cos \varphi + \tan \varphi \sin \varphi \nu'd\varphi}$$ $$\hspace{2.1cm}= 2\pi \int_0^{\frac{\pi}{2}}{ \cos^2 \varphi + \sin^2 \varphi d\varphi} = \pi^2.$$
For the second case, consider the southern hemisphere $\mathbb{S}^{-}.$
In this case $-\pi/2 < \varphi \leq 0$, and (\[igualdade\]) together with the negative part of $\left| \sin \varphi \right| = \sqrt{\kappa^2+\tau^2} \cos \varphi$ gives $$\sqrt{1+\kappa^2+\tau^2}= \left| \cos{\varphi}-\sqrt{\kappa^2+\tau^2}\sin{\varphi}\right|.$$ Observe that $\sin \varphi \leq 0$ and $\cos \varphi \geq 0$ imply $$\label{}
\left| \cos{\varphi}-\sqrt{\kappa^2+\tau^2}\sin{\varphi}\right| = \cos \varphi -\sqrt{\kappa^2+\tau^2}\sin{\varphi}.$$ and $$\sqrt{\kappa^2+\tau^2} = \left| \tan \varphi\right| = - \tan \varphi.$$ We attain $$\label{vol_hemis_sul}
{{\rm vol}}(\vec{v})_{|_{\mathbb{S}^-}} = \int_{\mathbb{S}^-}{\cos \varphi - \sqrt{\kappa^2 + \tau^2} \sin \varphi \nu}$$ $$\hspace{2.6cm}= \int_{\frac{-\pi}{2}}^0{ \int_{\mathbb{S}^1_{\varphi}} \cos \varphi - \left|\tan \varphi\right| \sin \varphi \nu'd\varphi}$$ $$\hspace{3.1cm}= \int_{\frac{-\pi}{2}}^0{ \int_{\mathbb{S}^1_{\varphi}} \cos \varphi - (-\tan \varphi) \sin \varphi \nu'd\varphi}$$ $$\hspace{3.9cm}= 2\pi \int_{\frac{-\pi}{2}}^0{ \cos^2 \varphi + \tan \varphi \sin \varphi \cos\varphi d\varphi} = \pi^2.$$
From (\[vol\_hemis\_norte\]) and (\[vol\_hemis\_sul\]) we get the volume of $\vec{v}$ is $
{{\rm vol}}(\vec{v})_{|_{\mathbb{S}^2}} = 2\pi^2 = \frac{\pi}{2} {{\rm vol}}(\mathbb{S}^2).
$ We conclude that $\vec{v}$ realizes the lower bound for the volume on $\mathbb{S}^2$.
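As a quick independent check of the two integrals above (recall that $\mathbb{S}^1_{\varphi}$ has length $2\pi\cos\varphi$), one may verify symbolically, for instance with SymPy:

```python
# Quick symbolic check of the hemisphere integrals: each parallel S^1_phi
# has length 2*pi*cos(phi), each hemisphere contributes pi**2, and hence
# vol(v) = 2*pi**2 = (pi/2)*vol(S^2) since vol(S^2) = 4*pi.
import sympy as sp

phi = sp.symbols('phi')
integrand = sp.simplify(
    (sp.cos(phi) + sp.tan(phi) * sp.sin(phi)) * 2 * sp.pi * sp.cos(phi))
north = sp.integrate(integrand, (phi, 0, sp.pi / 2))
south = sp.integrate(integrand, (phi, -sp.pi / 2, 0))
print(north, south)  # pi**2 each, i.e. 2*pi**2 = (pi/2)*(4*pi) in total
```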
Notice that the integral of $i^*\omega_{12}$, where $i$ is the inclusion of $\mathbb{S}^1_{\varphi}$ in $\mathbb{S}^2$, gives us the indices of the singularities (see [@Manfredo]) and, in our case, $I_{\vec{v}}(N)=I_{\vec{v}}(S)=1$.
We now show that $\vec{v}$ satisfies conditions i) and ii) if $\vec{v}$ realizes the lower bound for the volume on $\mathbb{S}^2$.
It follows from the proof of Theorem \[thmBCJ\] that the condition $$\sqrt{1 + \kappa^2 + \tau^2} = \left|\cos \varphi + \sqrt{\kappa^2 + \tau^2} \sin \varphi \right| = \left|\cos \varphi + \left|\kappa \cos \theta + \tau \sin \theta \right| \sin \varphi \right|$$ is a consequence of ${{\rm vol}}(\vec{v})=\frac{\pi}{2}{{\rm vol}}(\mathbb{S}^2)$.
In the case $0 \leq \varphi < \pi/2$, the first equality implies $\sin \varphi \ = \sqrt{\kappa^2+\tau^2} \cos \varphi $. If $\ -\pi/2 < \varphi \leq 0$, in $\mathbb{S}^-$, then $0 \leq \ -\varphi < \pi/2$ so $ \sin(-\varphi) = \sqrt{\kappa^2+\tau^2} \cos(-\varphi) $ or $\ -\sin \varphi \ = \sqrt{\kappa^2+\tau^2} \cos \varphi$, therefore $\left| \sin \varphi \right| = \sqrt{\kappa^2+\tau^2} \cos \varphi$ for $-\pi/2< \varphi < \pi/2$. From second equality we get the condition ii).
\[lemII\] Let $M$ be a $2$-dimensional Riemannian manifold, let $\vec{v}$ be a unit vector field in $M$, and let $\{\vec{u},\vec{n}\}$ be an oriented frame. Then $$\kappa = -\theta_{\vec{v}} - \cos \theta g(\nabla_{\vec{u}}{\vec{u}}, \vec{n}) \hspace{0.4cm}\mbox{and}\hspace{0.4cm} \tau = -\theta_{\vec{v}^{\perp}} + \sin \theta g(\nabla_{\vec{u}}{\vec{n}}, \vec{n}).$$
Computing $\kappa = g(\nabla_{\vec{v}}\vec{v},\vec{v}^{\perp})$. Without loss of generality we may assume $\theta \in (0, \pi/2]$, so $$\vec{v} = \cos \theta \vec{u} + \sin \theta \vec{n} \hspace{0.4cm}\mbox{and}\hspace{0.4cm} \vec{v}^{\perp} = \sin \theta\vec{u} - \cos \theta\vec{n}.$$ Thus, $$\nabla_{\vec{v}} \vec{v} = \left[ \cos^2 \theta\nabla_{\vec{u}} \vec{u} + \cos \theta\sin \theta\nabla_{\vec{n}} \vec{u} + \vec{v}(\cos \theta)\vec{u}\right] + \left[ \cos \theta\sin \theta\nabla_{\vec{u}} \vec{n} + \sin^2 \theta\nabla_{\vec{n}} \vec{n} + \vec{v}(\sin \theta)\vec{n} \right]$$ and $$g\left( \nabla_{\vec{v}} \vec{v}, \vec{v}^{\perp} \right) = g\left( \left[ \cos^2 \theta \nabla_{\vec{u}} \vec{u} + \cos \theta \sin \theta \nabla_{\vec{n}} \vec{u} + \vec{v}(\cos \theta)\vec{u}\right], \vec{v}^{\perp} \right)$$ $$+ g\left( \left[ \cos \theta\sin \theta \nabla_{\vec{u}} \vec{n} + \sin^2 \theta \nabla_{\vec{n}} \vec{n} + \vec{v}(\sin \theta)\vec{n} \right], \vec{v}^{\perp} \right)$$ $$= \underbrace{g\left( \cos^2 \theta \nabla_{\vec{u}} \vec{u}, \vec{v}^{\perp} \right)}_{a} + \underbrace{g\left( \cos \theta \sin \theta \nabla_{\vec{n}} \vec{u}, \vec{v}^{\perp} \right)}_{b} + \underbrace{g\left( \vec{v}(\cos \theta)\vec{u}, \vec{v}^{\perp} \right)}_{c} + \underbrace{g\left( \cos \theta\sin \theta \nabla_{\vec{u}} \vec{n}, \vec{v}^{\perp} \right)}_{d}$$ $$+ \underbrace{g\left( \sin^2 \theta \nabla_{\vec{n}} \vec{n}, \vec{v}^{\perp} \right)}_{e} + \underbrace{g\left( \vec{v}(\sin \theta)\vec{n}, \vec{v}^{\perp} \right)}_{f},$$ where $$a = g\left( \cos^2\theta \nabla_{\vec{u}} \vec{u}, \vec{v}^{\perp} \right) = \cos^2\theta\sin \theta g\left( \nabla_{\vec{u}}\vec{u}, \vec{u}\right) - \cos^3\theta g\left( \nabla_{\vec{u}}\vec{u}, \vec{n}\right) = - \cos^3\theta g\left( \nabla_{\vec{u}}\vec{u}, \vec{n}\right) .$$
$$b = g\left( \cos\theta\sin \theta \nabla_{\vec{n}} \vec{u}, \vec{v}^{\perp} \right) = \cos\theta\sin^2 \theta g\left( \nabla_{\vec{n}}\vec{u}, \vec{u}\right) - \cos^2\theta\sin \theta g\left( \nabla_{\vec{n}}\vec{u}, \vec{n}\right).$$
$$c = g\left( \vec{v}(\cos\theta)\vec{u}, \vec{v}^{\perp} \right) = \vec{v}(\cos\theta)\sin \theta = - \theta_{\vec{v}} \sin^2 \theta.$$
$$d = g\left( \cos\theta\sin \theta \nabla_{\vec{u}} \vec{n}, \vec{v}^{\perp} \right) = \cos\theta\sin^2 \theta g\left( \nabla_{\vec{u}}\vec{n}, \vec{u}\right) - \cos^2\theta\sin \theta g\left( \nabla_{\vec{u}}\vec{n}, \vec{n}\right).$$
$$e = g\left( \sin^2 \theta \nabla_{\vec{n}} \vec{n}, \vec{v}^{\perp} \right) = \sin^3 \theta g\left( \nabla_{\vec{n}}\vec{n}, \vec{u}\right) - \cos\theta\sin^2 \theta g\left( \nabla_{\vec{n}}\vec{n}, \vec{n}\right) = \sin^3 \theta g\left( \nabla_{\vec{n}}\vec{n}, \vec{u}\right).$$
$$f = g\left( \vec{v}(\sin \theta)\vec{n}, \vec{v}^{\perp} \right) = \vec{v}(\sin \theta)\cos\theta = - \theta_{\vec{v}}\cos^2 \theta.$$ By orthogonality between $\vec{u}$ and $\vec{n}$, $$g(\nabla_{\vec{n}}\vec{n},\vec{u}) = g(\nabla_{\vec{n}}\vec{u},\vec{u})=0.$$ Then, $$g\left( \nabla_{\vec{v}} \vec{v}, \vec{v}^{\perp} \right) = - \cos^3\theta g\left( \nabla_{\vec{u}}\vec{u}, \vec{n}\right) -\theta_{\vec{v}}\sin^2 \theta + \cos \theta\sin^2 \theta g\left( \nabla_{\vec{u}}\vec{n}, \vec{u}\right) - \theta_{\vec{v}}\cos^2 \theta,$$ and, since $g\left( \nabla_{\vec{u}}\vec{n}, \vec{u}\right) = -g\left( \nabla_{\vec{u}}\vec{u}, \vec{n}\right)$, $$= -\theta_{\vec{v}} + \left[ (-\cos^3 \theta - \cos \theta \sin^2 \theta) g\left( \nabla_{\vec{u}}\vec{u}, \vec{n} \right)\right] = -\theta_{\vec{v}} + \left[ -\cos \theta g\left( \nabla_{\vec{u}}\vec{u}, \vec{n} \right)\right].$$ Therefore, $$\kappa = -\theta_{\vec{v}} - \cos \theta g \left( \nabla_{\vec{u}}\vec{u}, \vec{n} \right).$$
A similar computation leads to $$\tau = -\theta_{\vec{v}^{\perp}} + \sin \theta g(\nabla_{\vec{u}}{\vec{n}}, \vec{n}).$$
**The proof of Theorem \[principal\]**: We begin by proving that if $\vec{v}$ realizes the lower bound for the volume on $\mathbb{S}^2$, then $\vec{v}$ is a loxodromic unit vector field. If $0 \leq \varphi < \pi/2$, from Lemma \[lemI\] we have $$i) \sin \varphi = \sqrt{\kappa^2+\tau^2} \cos \varphi \hspace{0.4cm} \mbox{and} \hspace{0.4cm} ii) \kappa\sin \theta = \tau \cos \theta.$$ By condition ii), $$\hspace{0.9cm}\theta \neq \pi/2 \hspace{0.2cm} \Rightarrow \hspace{0.2cm} \tau = \kappa \tan{\theta},$$ $$\theta = \pi/2 \hspace{0.2cm} \Rightarrow \hspace{0.2cm} \kappa = 0.$$ Considering $\theta \neq \pi/2 $, we have $$\label{secante}
\sqrt{\kappa^2+\tau^2}=\sqrt{\kappa^2 + \kappa^2\tan^2{\theta}}=\sqrt{\kappa^2(1+ \tan^2 \theta)}= \left|\kappa\right| \sqrt{\sec^2{\theta}} = \left|\kappa\sec{\theta}\right| = \frac{\left|\kappa\right|}{\left|\cos \theta\right|}.$$ Finally, from (\[cond\_sharpnessI\]) and (\[secante\]) we write the geodesic curvature in the integral curves of $\vec{v}$ as $$\label{cond_sharpnessII}
\left|\kappa \right|= \cos{\theta}\tan{\varphi}.$$ Observe that, if $\theta = \pi/2$ then $|\kappa|=0$ and $\tau =|\tan{\varphi}|$. For $\theta \neq \pi/2 $, $\kappa = \cos{\theta}\tan{\varphi}$. So $$\label{kappatau}
\kappa = \cos{\theta} \tan{\varphi} \hspace{0.3cm}\Rightarrow\hspace{0.3cm} \tau = \sin{\theta} \tan{\varphi}$$ Note that the case is similar if we consider $\theta \in (\pi/2,\pi].$
By Lemma \[lemII\], the geodesic curvature of the integral curves of $\vec{v}$ is $-\theta_{\vec{v}}-\cos{\theta}g(\nabla_{\vec{u}}\vec{u}, \vec{n})$.
Observe that $g(\nabla_{\vec{u}}\vec{u}, \vec{n}) = -\tan{\varphi}$, so that $$\label{curgeodesica}
\kappa = -\theta_{\vec{v}}+\cos{\theta}\tan{\varphi}.$$ Using (\[kappatau\]) in (\[curgeodesica\]) we conclude that $\theta_{\vec{v}}=0$. So the unit vector field $\vec{v}$ is a loxodromic unit vector field.
Now, consider $\vec{v}$ as a loxodromic unit vector field. From Lemma \[lemII\] we have $$\kappa = -\theta_{\vec{v}} - \cos \theta g \left( \nabla_{\vec{u}}\vec{u}, \vec{n} \right) \hspace{1cm}\mbox{and}\hspace{1cm} \tau = -\theta_{\vec{v}^{\perp}} + \sin \theta g(\nabla_{\vec{u}}{\vec{n}}, \vec{n}).$$ Observe that, $$g(\nabla_{\vec{u}}\vec{u}, \vec{n}) = -\tan{\varphi}\hspace{1cm}\mbox{and}\hspace{1cm}g(\nabla_{\vec{u}}\vec{n}, \vec{n}) = \tan{\varphi},$$ in that way $$\kappa = -\theta_{\vec{v}}+\cos{\theta}\tan{\varphi}\hspace{1cm}\mbox{and}\hspace{1cm}\tau = -\theta_{\vec{v}^{\perp}} + \sin \theta \tan{\varphi},$$ since $\vec{v}$ is a loxodromic unit vector field, then $-\theta_{\vec{v}} = -\theta_{\vec{v}^{\perp}} = 0.$ Therefore, $$\label{curvaturasI}
\kappa = \cos{\theta}\tan{\varphi}\hspace{2cm}\mbox{and}\hspace{2cm}\tau = \sin \theta \tan{\varphi}.$$ Substituting (\[curvaturasI\]) in $\sqrt{\kappa^2 + \tau^2}$ we obtain $$\sqrt{\tan^2{\varphi}} = \left|\tan{\varphi} \right|.$$ It implies, $$\left|\sin \varphi \right| = \sqrt{\kappa^2 + \tau^2}\cos \varphi,$$ for $ - \pi/2 < \varphi < \pi/2.$ From Lemma \[lemI\], we conclude that $\vec{v}$ realizes the lower bound for the volume on $\mathbb{S}^2$.
**Acknowledgements** We would like to thank Fabiano Brito for attracting our attention to the subject of this paper. His comments, suggestions, and encouragements helped shape this article. The first and second authors were financed by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code $001$.
[100]{}
V. Borrelli and O. Gil-Medrano - [*Area minimizing vector fields on round 2-spheres*]{}, J. für die reine und angewandte Mathematik (Crelle’s Journal) 640 (2010), p. 85–99.
F. G. B. Brito and P. M. Chacón - [ *A topological minorization for the volume of vector fields on 5-manifolds*]{}, Arch. Math. (Basel) 85 (2005), p. 283–292.
F. G. B. Brito, P. M. Chacón and D. L. Johnson - [*Unit field on punctured spheres*]{}, Bulletin de la Société Mathématique de France, 136(1):147-157, (2008)
F. G. B. Brito, P. M. Chacón and A. M. Naveira - [*On the volume of unit vector fields on spaces of constant sectional curvature*]{}, Comment. Math. Helv. 79 (2004), p. 300–316.
P. M. Chacón - [ *Sobre a energia e energia corrigida de campos unitários e distribuições. Volume de campos unitários*]{}, Ph.D. Thesis, Universidade de São Paulo, Brazil, 2000, and Universidad de Valencia, Spain, 2001.
M. P. do Carmo - [*Differential Forms and Applications*]{}, Springer-Verlag Berlin Heidelberg, 1994.
H. Gluck and W. Ziller - [*On the volume of a unit vector field on the three-sphere*]{}, Comment. Math. Helv., 61:177-192, (1986)
D. L. Johnson – [*Chern-Simons forms on associated bundles, and boundary terms*]{}, Geometriae Dedicata 120 (2007), p. 23–24.
D. L. Johnson - [*Volumes of flows*]{}, Proc. of the Amer. Math. Soc. (3) 104 (1988)
S. S. Chern – [*On the curvatura integra in a Riemannian manifold*]{}, Ann. of Math. (2) 46 (1945), p. 674–684.
S. S. Chern and J. Simons – [*Characteristic forms and geometric invariants*]{}, Ann. of Math. (2) 99 (1974), p. 48–69.
S. L. Pedersen [*Volumes of vector fields on spheres*]{}, Trans. Amer. Math. Soc. 336 (1993), p. 69–78.
[^1]: The first and second authors are supported by a scholarship from the National Doctoral Program, CAPES-PROEX
---
abstract: |
This paper tackles the problems of generating concrete test cases for testing whether an application is vulnerable to attacks, and of checking whether security solutions are correctly implemented. The approach proposed in the paper aims at guiding developers towards the implementation of secure applications, from the threat modelling stage up to the testing one. This approach relies on a knowledge base integrating varied security data, e.g., attacks, attack steps, and security patterns that are generic and re-usable solutions to design secure applications. The first stage of the approach consists in assisting developers in the design of Attack Defense Trees expressing the attacker possibilities to compromise an application and the defenses that may be implemented. These defenses are given under the form of security pattern combinations. In the second stage, these trees are used to guide developers in the test case generation. After the test case execution, test verdicts show whether an application is vulnerable to the threats modelled by an ADTree. The last stage of the approach checks whether behavioural properties of security patterns hold in the application traces collected during the test case execution. These properties are formalised with LTL properties, which are generated from the knowledge base. Developers neither have to write LTL properties nor need to be experts in formal models. We applied the approach to 10 Web applications to evaluate its testing effectiveness and its performance.
*Security Pattern; Security Testing; Attack-Defense Tree; Test Case Generation.*
author:
-
bibliography:
- 'doc.bib'
title: An Advanced Approach for Choosing Security Patterns and Checking their Implementation
---
Introduction {#sec:intro}
============
Today’s developers are no longer just expected to code and build applications. They also have to ensure that applications meet minimum reliability guarantees and security requirements. Unfortunately, choosing security solutions or testing software security are not known to be simple or effortless activities. Developers are indeed overloaded with new trends, frameworks, security issues, documents, etc. Furthermore, they sometimes lack skills and experience for choosing security solutions or writing concrete test cases. They need to be guided on how to design or implement secure applications and test them, in order to contribute to a solid quality assurance process.
This work focuses on this need and proposes an approach that guides developers in devising more secure applications, from the threat modelling stage, which is a process consisting in identifying the potential threats of an application, up to the testing stage. The present paper is an extended version of [@RS18], which provides additional details on the security test case generation, the formalisation of behavioural properties of security patterns with Linear Temporal Logic (LTL) properties, and on their automatic generation. We also provide an evaluation of the approach and discuss the threats to validity.
In order to guide developers, our approach is based upon several digitalised security bases or documents gathered in a knowledge base. In particular, the latter includes security solutions under the form of security patterns, which can be chosen and applied as early as the application design stage. Security patterns are defined as *reusable elements to design secure applications, which will enable software architects and designers to produce a system that meets their security requirements and that is maintainable and extensible from the smallest to the largest systems* [@Rodriguez2003]. Our approach helps developers choose security patterns with regard to given security threats. Then, it builds security test cases to check whether an application is vulnerable, and to test whether security patterns are correctly implemented in the application. More precisely, the contributions of this work are summarised in the following points:
- the approach assists developers in the threat modelling stage by helping in the generation of Attack Defense Trees (ADTrees) [@kordy2012attack]. The latter express the attacker possibilities to compromise an application, and give the defenses that may be put in place to prevent attacks. Defenses are here expressed with security patterns. We have chosen this tree model because it offers the advantage of being easy to understand even for novices in security;
- the second part of the approach supports developers in writing concrete security test cases. A test suite is automatically extracted from an ADTree. The test suite is made up of test case stubs, which are completed with comments or blocks of code. Once completed, these are used to exercise an application under test (shortened $AUT$), seen as a black-box. The test case execution provides verdicts expressing whether the $AUT$ is vulnerable to the attack scenarios expressed in the ADTree $T_f$;
- the last part of the approach allows developers to check whether security patterns are correctly implemented in the application. Kobashi et al. dealt with this task by asking users to manually translate security pattern behaviours into formal properties [@Kobashi15]. Unfortunately, few developers have the required skills in formal modelling. We hence propose a practical way to generate them. After the security pattern choice, our approach provides generic UML sequence diagrams, which can be adapted to better match the application context. From these diagrams, the approach automatically generates LTL properties. After the test case execution, we check if these properties hold in the application traces. The developer is hence not aware of the LTL property generation.
We have implemented this approach in a tool prototype available in [@data]. This tool was used to conduct several experiments on 10 Web applications to evaluate the security testing and security pattern testing effectiveness of the tool as well as its performance.
### Paper Organisation {#paper-organisation .unnumbered}
Section \[sec:background\] outlines the context of this work. We recall some basic concepts and notations about security patterns and ADTrees. We also discuss about the related work and our motivations. Section \[sec:datastore\] briefly presents the architecture of the knowledge base used by our approach. The approach steps are described in Section \[sec:app\]. These steps are gathered into 3 stages called threat modelling, security testing, and security pattern testing. Subsequently, Section \[sec:impl\] describes our prototype implementation, and Section \[sec:eval\] evaluates the approach. Finally, Section \[sec:conclusion\] summarizes our contributions and presents future work.
Background {#sec:background}
==========
This section recalls the basic concepts related to security patterns and Attack Defense trees. The related work is presented thereafter.
Security Patterns
-----------------
Security patterns provide guidelines for secure system design and evaluation [@Yoder1998]. They also are considered as countermeasures to threats and attacks [@Schumacher2003]. Security patterns have to be selected in the design stage, integrated in application models, and eventually implemented. Their descriptions are usually given with texts or schema. But, they are often characterised by UML diagrams capturing structural or behavioural properties.
Several security pattern catalogues are available in the literature, e.g., [@patrepo; @Yskout2015], themselves extracted from other papers. In these catalogues, security patterns are systematically organised according to features and relationships among them. Among these features, we often find the solutions, called intents, or the interests, called forces. A security pattern may have different relationships with other patterns. These relations notably help combine patterns and avoid devising unsound composite patterns. Yskout et al. proposed the following annotations between two patterns [@yskout2006system]: “depend”, “benefit”, “impair” (the functioning of the pattern can be obstructed by the implementation of a second one), “alternative”, “conflict”.
![Class layout of the security pattern “Intercepting Validator”.[]{data-label="fig:1"}](figure/interceptingvalidator){width="1\linewidth"}
Figure \[fig:1\] depicts the UML structural diagram of the security pattern “Intercepting Validator”, which is taken as an example in the remainder of the paper. Its purpose is to provide the application with a centralized validation mechanism, which applies some filters (Validator classes) declaratively based on URL, allowing different requests to be mapped to different filter chains. This validation mechanism is decoupled from the other parts of the application, and each piece of data supplied by the client is validated before being used. The validation of input data prevents attackers from passing malformed input in order to inject malicious commands.
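To fix ideas, the following sketch shows one possible Java realisation of this pattern. The class and method names mirror the diagrams of Figures \[fig:1\] and \[fig:case\] (SecureBaseAction, InterceptingValidator, ValidatorURL, Controller), while the mapping structure and the filtering rule passed to the validator are illustrative assumptions, not part of the pattern definition.

```java
import java.util.Map;
import java.util.function.Predicate;

// One concrete validation rule per kind of request data.
interface Validator {
    boolean validate(String value);
}

// Validator created for URL parameters ("create" and "validate" in Figure [fig:case]).
class ValidatorURL implements Validator {
    private final Predicate<String> rule;
    ValidatorURL(Predicate<String> rule) { this.rule = rule; }
    public boolean validate(String value) { return rule.test(value); }
}

// Centralized validation mechanism, decoupled from the rest of the application.
class InterceptingValidator {
    private final Map<String, Validator> validators; // maps a request target to its validator
    InterceptingValidator(Map<String, Validator> validators) { this.validators = validators; }
    boolean validate(String target, String input) {
        return validators.getOrDefault(target, v -> false).validate(input); // unknown targets are rejected
    }
}

interface Controller { String call(String target, String input); }

// Entry point invoked by the client; only validated requests reach the application.
class SecureBaseAction {
    private final InterceptingValidator validator;
    private final Controller controller;
    SecureBaseAction(InterceptingValidator validator, Controller controller) {
        this.validator = validator;
        this.controller = controller;
    }
    String invokes(String target, String input) {
        return validator.validate(target, input) ? controller.call(target, input) : error();
    }
    String error() { return "400 - malformed input rejected"; }
}
```

The important design point is that SecureBaseAction never forwards a request to the Controller before the centralized validator has accepted it.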
Attack Defense Trees
--------------------
ADTrees *are graphical representations of possible measures an attacker might take in order to compromise a system and the defenses that a defender may employ to protect the system* [@kordy2012attack]. ADTrees have two different kinds of nodes: attack nodes (red circles) and defense nodes (green squares). A node can be *refined* with child nodes and can have one child of the opposite type (linked with a dashed line). Node refinements can be disjunctive or conjunctive. The former is recognisable by edges going from a node to its children. The latter is graphically distinguishable by connecting these edges with an arc. We extend these two refinements with the sequential conjunctive refinement of attack nodes, defined by the same authors in [@jhawar2015attack]. This operator expresses the execution order of child attack nodes. Graphically, a sequential conjunctive refinement is depicted by connecting the edges, going from a node to its children, with an arrow.
![ADTree example modelling injection attacks[]{data-label="fig:injection"}](figure/InjectWebAPP){width=".5\linewidth"}
For instance, the ADTree of Figure \[fig:injection\] identifies the objectives of an attacker or the possible vulnerabilities related to the supply of untrusted inputs to an application. The root node is here detailed with disjunctive refinements connecting three leaves, which are labelled by attacks referenced in a base called the Common Attack Pattern Enumeration and Classification (CAPEC) [@CAPEC]. The node CAPEC-66 refers to “SQL Injection”, CAPEC-250 refers to XML injections and CAPEC-244 to “Cross-Site Scripting via Encoded URI Schemes”.
An ADTree $T$ can be formulated with an algebraic expression called ADTerm and denoted $\iota(T)$. In short, the ADTerm syntax is composed of operators having types given as exponents in $\{o,p\}$ with $o$ modelling an opponent and $p$ a proponent. $\vee^s, \wedge^s, \overrightarrow{\wedge}^s$, with $s \in \{o,p\}$ respectively stand for the disjunctive refinement, the conjunctive refinement and the sequential conjunctive refinement of a node. A last operator $c$ expresses counteractions (dashed lines in the graphical tree). $c^s(a,d)$ intuitively means that there exists an action d (not of type s) that counteracts the action a (of type s). The ADTree of Figure \[fig:injection\] can be represented with the ADTerm $\vee^p ( \vee^p ($CAPEC-66, CAPEC-250$),$ CAPEC-244$)$.
Related Work
------------
The literature proposes several papers dealing with the test case generation from Attack trees (or related models) and some other ones about security pattern testing. As these topics are related to our work, we introduce them below and give some observations.
### Security Testing From Threat Models
the generation of concrete test cases from models has been widely studied in the last decade, in particular to test the security level of different kinds of systems, protocols or software. Most of the proposed approaches take specifications expressing the expected behaviours of the implementation. But, other authors preferred to bring security aspects out and used models describing attacker goals or vulnerability causes of the system. Such models are conceived during the threat modelling phase of the system [@threatmodeling], which is considered as a critical phase of the software life cycle since *“you cannot build a secure system until you understand your threats!”* [@securecode]. Schieferdecker et al. presented a survey paper referencing some approaches in this area [@MdlBST]. For instance, Xu et al. proposed to test the security of Web applications with models as Petri nets to describe attacks [@Xu12]. Attack scenarios are extracted from the Reachability graphs of the Petri nets. Then, test cases written for the Selenium tool are generated by means of a MIM (Model- Implementation Mapping) description, which maps each Petri net place and transition to a block of code. Bozic et al. proposed a security testing approach associating UML state diagrams to represent attacks, and combinatorial testing to generate input values used to make executable test cases derived from UML models [@Bozic2014].
Other authors adopted models as trees (Attack trees, vulnerability Cause Graphs, Security Activity Graphs, etc.) to represent the threats, attacks or vulnerability causes that should be prevented in an application. From these models, test cases are then written to check whether attacks can be successfully executed or whether vulnerabilities are detected in the implementation. Morai et al. introduced a security testing approach specialised for network protocols [@Morais2009]. Attack scenarios are extracted from an Attack tree and are converted to Attack patterns and UML specifications. From these, attack scripts are manually written and are completed with the injection of (network) faults. In the security testing method proposed in [@Marback09], data flow diagrams are converted into Attack trees from which sequences are extracted. These sequences are composed of events combined with parameters related to regular expressions. These events are then replaced with blocks of code to produce test cases. The work published in [@ElAriss2011] provides a manual process composed of eight steps. Given an Attack tree, these steps transform it into a State chart model, which is iteratively completed and transformed before using a model-based testing technique to generate test cases. In [@Marback2013], test cases are generated from Threat trees. The latter are previously completed with parameters associated to regular expressions to generate input values. Security scenarios are extracted from the Threat trees and are manually converted to executable test scripts. Shahmehri et al. proposed a passive testing approach, which monitors an $AUT$ to detect vulnerabilities [@Shahmehri2012]. The undesired vulnerabilities are modelled with security goal models, which are specialised directed acyclic graphs showing security goals, vulnerabilities and eventually mitigations. Detection conditions are then semi-automatically extracted and given to a monitoring tool.
We observed that the above methods either automatically generate abstract test cases from (formal) specifications or help write concrete test cases from detailed threat models. On the one hand, as abstract test cases cannot be directly used to experiment an $AUT$, some works proposed test case transformation techniques. However, this kind of technique is at the moment very limited. On the other hand, Only a few of developers have the required skills to write threat models or test cases, as a strong expertise on security is often required. Besides, the methods neither guide developers in the threat modelling phase nor provide any security solution. We focused on this problem and laid the first stone of the present approach in [@SR17a; @SR17b; @salva:hal-02019145]. We firstly presented a semi-automatic data integration method [@SR17a] to build security pattern classifications. This method extracts security data from various Web and publicly accessible sources and stores relationships among attacks, security principles and security patterns into a knowledge base. Section \[sec:datastore\] summarises the results of this work used in this paper, i.e., the first meta-model version of the data-store. In [@SR17b], we proposed an approach to help developers write ADTrees and concrete security test cases to check whether an application is vulnerable to these attacks. This work was extended in [@salva:hal-02019145] to support the generation of test suites composed of lists of ordered GWT test cases, a list being devoted to check whether an AUT is vulnerable to an attack, which is segmented into an ordered sequence of attack steps. This test suite organisation is used to reduce the test costs with the deduction of some test verdicts under certain conditions. However, it does not assist developers to ensure that security patterns have been correctly implemented in the application. This work supplements our early study by covering this part.
### Security Pattern Testing
the verification of patterns on models was studied in [@Dong2010; @Hamid2012; @Yoshizawa14; @Kobashi15; @RBS16]. In these papers, pattern goals or intents or structural properties are specified with UML sequence diagrams [@Dong2010] with expressions written with the Object Constraint Language (OCL) [@Hamid2012; @Yoshizawa14; @Kobashi15] or with LTL properties [@RBS16]. The pattern features are then checked on UML models.
Few works dealt with the testing of security patterns, which is the main topic of this paper. Yoshizawa et al. introduced a method for testing whether behavioural and structural properties of patterns may be observed in application traces [@Yoshizawa14]. Given a security pattern, two test templates (Object Constraint Language (OCL) expressions) are manually written, one to specify the pattern structure and another one to encode its behaviour. Then, developers have to make templates concrete by manually writing tests for experimenting the application. The latter returns traces on which the OCL expressions are verified.
We observed that these previous works require the modelling of security patterns or vulnerabilities with formal properties. Instead of assuming that developers are experts in the writing of formal properties, we propose a practical way to generate them. Intuitively, after the choice of security patterns, our approach provides generic UML sequence diagrams, which can be modified by a developer. From these diagrams, we automatically generate LTL properties, which capture the cause-effect relations among pairs of method calls. After the test case execution, we check if these properties hold in the application traces obtained during the test case execution. As stated in the introduction, this work provides more details on test case generation and on the formalisation of behavioural properties of security patterns with LTL properties. We also complete the transformation rules allowing to derive more LTL properties from UML sequence diagrams. We also provide an evaluation of the approach targeting the security pattern testing stage and discuss the threats to validity.
Knowledge Base Overview {#sec:datastore}
=======================
Our approach relies on a knowledge base, denoted KB in the remainder of the paper. It gathers information allowing to help or automate some steps of the testing process. We summarise its architecture in this section but we refer to [@SR17a] for a complete description of its associations and of the data integration process.
Knowledge Base Meta-Model
-------------------------
Figure \[fig:datastore1\] exposes the meta-model used to structure the knowledge base KB. The entities refer to security properties and the relations encode associations among them. The entities in white are used to generated ADTrees, while those in grey are specialised for testing. The meta-model firstly associates attacks, techniques, security principles and security patterns. This is the result of observations we made from the literature and some security documents, e.g., the CAPEC base or security pattern catalogues [@patrepo; @Yskout2015]: we consider that an attack can be documented with more concrete attacks, which can be segmented into ordered steps; an attack step provides information about the target or puts an application into a state, which are reused by a potential next step. Attack steps are performed with techniques and can be prevented with countermeasures. Security patterns are characterised with strong points, which are pattern features extractable from their descriptions. The meta-model also captures the inter-pattern relationships defined in [@yskout2006system], e.g., “depend” or “conflict”. Countermeasures and strong points refer to the same notion of attack prevention. But finding direct relations between countermeasures and strong points is tedious as these properties have different purposes. To solve this issue, we used a text mining and a clustering technique to group the countermeasures that refer to the same security principles, which are desirable security properties. To link clusters and strong points, we chose to focus on these security principles as mediators. We organised security principles into a hierarchy, from the most abstract to the most concrete principles. We provide a complete description of this hierarchy in [@SR17b]. In short, we collected and organised 66 security principles covering the security patterns of the catalogue given in [@Yskout2015]. The hierarchy has four levels, the first one being composed of elements labelled by the most abstract principles, e.g., “Access Control”, and the lower level exhibiting the most concrete principles, e.g., “File Authorization”.
{width=".7\linewidth"}
Furthermore, every attack step is associated to one test case structured with the Given When Then (GWT) pattern. We indeed consider in this paper that a test case is a piece of code that lists stimuli supplied to an AUT and responses checked by assertions assigning (local) verdicts. To make test cases readable and re-usable, we follow the behaviour-driven approach and use the pattern “Given When Then” (shortened GWT) to break up test cases into several sections:
- Given sections aim at putting the $AUT$ into a known state;
- When sections trigger some actions (stimuli);
- Then sections are used to check whether the conditions of success of the test case are met with assertions. In the paper, the Then sections are used to check whether an $AUT$ is vulnerable to an attack step $st$. In this case, the Then section returns the verdict “$Pass_{st}$”. Otherwise, it provides the verdict “$Fail_{st}$”. When an unexpected event occurs, we also assume that “$Inconclusive_{st}$” may be returned.
The meta-model of Figure \[fig:datastore1\] associates an attack step with a GWT test case by adding three entities (Given When and Then section) and relations. In addition, a test case section is linked to one procedure, which implements it. A section or a procedure can be reused with several attack steps or security patterns. The meta-model also reflects the fact that an attack step is associated with one “Test architecture” and with one “Application context”. The former refers to textual paragraphs explaining the points of observation and control, testers or tools required to execute the attack step on an $AUT$. An application context refers to a family, e.g., Android applications, or Web sites. As a consequence, a GWT test case section (and procedure) is classified according to one application context and one attack step or pattern consequence.
We finally updated the meta-model in such a way that a security pattern is also associated to generic UML sequence diagrams, themselves arranged in Application contexts. Security pattern catalogues often provide UML sequence diagrams expressing the security pattern behaviours or structures. These diagrams often help correctly implement a security pattern with regard to an application context.
Data Integration
----------------
We integrated data into KB by collecting them from heterogeneous sources: the CAPEC base, several papers dealing with security principles [@saltzer1975protection; @viega2001building; @Scambray2003; @dialani2002transparent; @meier2006web], the pattern catalogue given in [@Yskoutcatalog] and the inter-pattern relations given in [@yskout2006system]. We detail the data acquisition and integration steps in [@SR17a]. Six manual or automatic steps are required: Steps 1 to 5 give birth to databases that store security properties and establish the different relations presented in Figure \[fig:datastore1\]. Step 6 consolidates them so that every entity of the meta-model is related to the other ones as expected. Steps 1, 2 and 6 are automatically done with tools.
The current knowledge base KB includes information about 215 attacks (209 attack steps, 448 techniques), 26 security patterns, 66 security principles. We also generated 627 GWT test case sections (Given, When and Then sections) and 209 procedures. The latter are composed of comments explaining: which techniques can be used to execute an attack step and which observations reveal that the application is vulnerable. We manually completed 32 procedures, which cover 43 attack steps. Security patterns are associated to at least one UML diagram. This knowledge base is available in [@data].
It is worth noting that KB can be semi-automatically updated if new security data are available. If a new threat or type of attack is discovered and added to the CAPEC base, the steps 1, 2 and 5 have to be followed again. Likewise, if a new security pattern is proposed in the literature, the steps 3,4 and 5 have to be reapplied.
Security Testing and Security Pattern Verification {#sec:app}
==================================================
Approach Overview {#sec:adtgen}
-----------------
{width="0.8\linewidth"}
We present in this section our testing approach whose steps are illustrated in Figure \[fig:overview\]. As illustrated in the figure, the purpose of this approach is threefold:
1. **Threat modelling:** it firstly aims at guiding developers through the elaboration of a threat model (left side of the figure). The developer gives an initial ADTree expressing attacker capabilities (Step 1). By means of KB, this tree is automatically detailed and completed with security pattern combinations expressing security solutions that may be put in place in the application design (Step 2). The tree may be modified to match the developer's wishes (Step 3). The resulting ADTree, which is denoted $T_f$, captures possible attack scenarios and countermeasures given under the form of security pattern combinations. The set of security patterns chosen by the developer is denoted $SP(T_f)$.
2. **Security testing:** from $T_f$, the approach generates test case stubs, which are structured with the GWT pattern (Step 4). These stubs guide developers in the writing of concrete test cases (Step 8). The final test suite is executed on the $AUT$ to check whether the AUT is vulnerable to the attack scenarios expressed in the ADTree $T_f$ (Step 9).
3. **Security pattern verification:** the last part of the approach is devoted to checking whether security pattern behaviours hold in the $AUT$ traces. A set of generic UML sequence diagrams is extracted from KB for every security pattern in $SP(T_f)$ (Step 5). These show how security pattern classes or components should behave and help developers implement them in the application. These diagrams are usually adapted to match the application context (Step 6). The approach skims the UML sequence diagrams and automatically generates LTL properties encoding behavioural properties of the security patterns (Step 7). During the test case execution, the approach collects the $AUT$ method-call traces on which it checks whether the LTL properties are satisfied (Step 10).
The remaining of this section describes more formally the steps depicted in Figure \[fig:overview\].
Threat Modelling, Security Pattern Choice (Step 1 to 3)
-------------------------------------------------------
**Step 1: Initial ADTree Design**
The developer draws a first ADTree $T$ whose root node represents some attacker’s goals. This node may be refined with several layers of children detailing these goals. Different methods can be followed, e.g., DREAD [@owasp], to build this threat model. We here assume that the leaves of this ADTree are exclusively labelled by CAPEC attack identifiers, since our knowledge base KB is framed upon the CAPEC base. Figure \[fig:injection\] illustrates an example ADTree achieved for this step. The leaves of this tree are labelled by CAPEC attacks related to different kinds of injection-based attacks. It describes attacker goals in general terms, but this model is not sufficiently detailed to generate test cases or to choose security solutions.
**Step 2: ADTree Generation**
![Genenal form of the generated ADTrees[]{data-label="fig:genADTree"}](figure/generictree){width="0.7\linewidth"}
\[.9\][ {width=".7\linewidth"}]{}
\[1\][ {width="1\linewidth"}]{}
KB is now queried to complete $T$ with more details about the attack execution phase and with defense nodes labelled by security patterns. For every leaf of $T$ labelled by an attack $A$, an ADTree $T(A)$ is generated from KB. We refer to [@SR17b] for the description of the ADTree generation.
We have implemented the ADTree generation with a tool, which takes attacks of KB and yields XML files. These can be edited with the tool *ADTool* [@kordy2012attack]. For instance, Figures \[fig:adtree66\] and \[fig:adtree244\] show the ADTrees generated for the attacks CAPEC-66 and CAPEC-244. The ADTrees generated by this step are composed of several levels of attacks, having different levels of abstraction. The attack steps have child nodes referring to attack techniques, which indicate how to carry out the step. For instance the technique 1.1.1 is “Use a spidering tool to follow and record all links and analyze the web pages to find entry points. Make special note of any links that include parameters in the URL”. An attack step node is also linked to a defense node expressing security pattern combinations. Some nodes express inter-pattern relations. For instance, the node labelled by “Alternative” has children expressing several possible patterns to counter the attack step.
Figures \[fig:adtree66\] and \[fig:adtree244\] also reveal that our generated ADTrees follow the structure of our meta-model of Figure \[fig:datastore1\]. This structure has the generic form given in Figure \[fig:genADTree\]: ADTrees have a root attack node, which may be disjunctively refined with other attacks and so forth. The most concrete attack nodes are linked to defense nodes labelled by security patterns. We formulate in the next proposition that these nodes or sub-trees also are encoded with specific ADTerms, which shall be used for the test case generation:
An ADTree $T(A)$ achieved by the previous steps has an ADTerm $\iota(T(A))$ having one of these forms:
1. $\vee^p (t_1, \dots, t_n)$ with $t_i (1 \leq i \leq n)$ an ADTerm also having one of these forms;
2. $\overrightarrow{\wedge}^p(t_1,\dots, t_n)$ with $t_i (1 \leq i \leq n)$ an ADTerm having the form given in 2) or 3);
3. $c^p(st,sp)$, with $st$ an ADTerm expressing an attack step and $sp$ an ADTerm modelling a security pattern combination.
The first ADTerm expresses child nodes labelled by more concrete attacks. The second one represents sequences of attack steps. The last ADTerm is composed of an attack step $st$ refined with techniques, which can be counteracted by a security pattern combination $sp=\wedge^o(sp_1,\dots, sp_m)$. In the remainder of the paper, we denote the last expression $c^p(st,sp)$ a *Basic Attack Defence Step*, shortened as BADStep:
A BADStep $b$ is an ADTerm of the form $c^p(st, sp)$, where $st$ is a step only refined with techniques and $sp$ an ADTerm of the form:
1. $sp_1$, with $sp_1$ a security pattern,
2. $\wedge^o(sp_1,\dots,sp_m)$ modelling the conjunction of the security patterns $sp_1,\dots,sp_m$ $(m>1)$.
$\operatorname{defense}(b)=_{def} \{sp_1 \}$ iff $sp=sp_1$, or $\operatorname{defense}(b)=_{def} \{sp_1,\dots,sp_m \}$ iff $sp=\wedge^o(sp_1,\dots,sp_m)$.\
$\operatorname{BADStep}(T)$ denotes the set of BADSteps of the ADTree $T$.
**Step 3: Security Pattern Choice and ADTree Edition**
\[1\][ {width="1\linewidth"}]{}
The developer may now edit every ADTree $T(A)$ generated by the previous step and choose security patterns when several possibilities are available. We assume that the defense nodes linked to attack nodes have conjunctive refinements of nodes labelled by security patterns only. Figure \[fig:adtree244-2\] depicts an example of modified ADTree of the attack CAPEC-244.
Every attack node $A$ of the initial ADTree $T$ is now automatically replaced with the ADTree $T(A)$. This step is achieved by substituting every term $A$ in the ADTerm $\iota(T)$ by $\iota(T(A))$. We denote $\iota(T_f)$ the resulting ADTerm and $T_f$ the final ADTree. It depicts a logical breakdown of the options available to an attacker and the defences, materialised with security patterns, which have to be inserted into the application model and then implemented. The security pattern set found in $T_f$ is denoted $SP(T_f)$.
This step finally builds a report by extracting from KB the test architecture descriptions needed for executing the attacks on the $AUT$ and observing its reactions.
Security Testing {#sec:test}
----------------
We now extract attack-defense scenarios to later build test suites that will check whether attacks are effective on the $AUT$. An attack-defense scenario is a minimal combination of events leading to the root attack, minimal in the sense that, if any event of the attack-defense scenario is omitted, then the root goal will not be achieved.
The set of attack-defense scenarios of $T_f$ is extracted by means of the disjunctive decomposition of $\iota(T_f)$:
Let $T_f$ be an ADTree and $\iota(T_f)$ be its ADTerm. The set of attack scenarios of $T_f$, denoted $SC(T_f)$, is the set of clauses of the disjunctive normal form of $\iota(T_f)$ over $BADStep(T_f)$.
$BADStep(s)$ denotes the set of BADSteps of a scenario $s$.
An attack scenario $s$ is still an ADTerm. Its satisfiability means that the main goal of the ADTree $T_f$ is feasible by achieving the scenario formulated by $s$.\
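To illustrate what the clause extraction amounts to in practice, here is a minimal sketch assuming an ADTerm is stored as a small tree whose leaves are BADSteps and whose inner nodes carry the $\vee^p$, $\wedge^p$ or $\overrightarrow{\wedge}^p$ operators; this representation and the class names are ours, and the prototype in [@data] may organise the computation differently.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal ADTerm: a leaf is a BADStep c^p(st, sp); an inner node is a disjunctive (OR)
// or (sequential) conjunctive (AND, SAND) refinement of its children.
final class ADTerm {
    enum Op { OR, AND, SAND, BADSTEP }
    final Op op;
    final String badStep;          // label of the BADStep, e.g., "CAPEC-244 / step 1.1"
    final List<ADTerm> children;
    ADTerm(Op op, String badStep, List<ADTerm> children) {
        this.op = op; this.badStep = badStep; this.children = children;
    }
}

final class ScenarioExtractor {
    // SC(T): each scenario is returned as an ordered list of BADStep labels.
    static List<List<String>> scenarios(ADTerm t) {
        List<List<String>> clauses = new ArrayList<>();
        switch (t.op) {
            case BADSTEP:
                clauses.add(new ArrayList<>(List.of(t.badStep)));
                break;
            case OR:   // union of the children's clauses
                for (ADTerm c : t.children) clauses.addAll(scenarios(c));
                break;
            default:   // AND / SAND: distribute, i.e., concatenate every combination in order
                clauses.add(new ArrayList<>());
                for (ADTerm c : t.children) {
                    List<List<String>> next = new ArrayList<>();
                    for (List<String> prefix : clauses)
                        for (List<String> suffix : scenarios(c)) {
                            List<String> clause = new ArrayList<>(prefix);
                            clause.addAll(suffix);
                            next.add(clause);
                        }
                    clauses = next;
                }
        }
        return clauses;
    }
}
```

Disjunctions thus multiply the number of scenarios, while (sequential) conjunctions lengthen them.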
**Step 4: Test Suite Generation**
Let $s \in SC(T_f)$ be an attack-defense scenario and $b=c^p(st, sp) \in BADStep(s)$ a BADStep of $s$. Step 4 generates the GWT test case $TC(b)$ composed of 3 sections extracted from KB with the relations $testG$, $testW$ and $testT$: we have one Given section, one When section and one Then section, each related to one procedure. This Then section aims to assert whether the $AUT$ is vulnerable to the attack step $st$ executed by the When section.
The final test suite $TS$, derived from an ADTree $T_f$, is obtained after having iteratively applied this test case construction on the scenarios of $SC(T_f)$. This is captured by the following definition:
Let $T_f$ be an ADTree, $s \in SC(T_f)$ and $b \in BADStep(s)$.\
$TS= \{TC(b) \mid b=c^p(st, sp) \in BADStep(s) \text{ and } s \in SC(T_f) \}$.
@capec244
Feature: CAPEC-244: Cross-Site Scripting via Encoded URI Schemes
#1. Explore
Scenario: Step1.1 Survey the application
Given prepare to Survey the application
When Try to Survey the application
# assertion for attack step success
Then Assert the success of Survey the application
@When("Try to Survey the application for user -controllable inputs")
public void trysurvey(){
// Try one of the following techniques :
//1. Use a spidering tool to follow and record all links and analyze the web pages to find entry points. Make special note of any links that include parameters in the URL.
//2. Use a proxy tool to record all user input entry points visited during a manual traversal of the web application.
//3. Use a browser to manually explore the website and analyze how it is constructed. Many browsers' plugins are available to facilitate the analysis or automate the discovery.
String url ="";
ZAProxyScanner j = new ZAProxyScanner("localhost", 8080, "zap");
j.spider(url);
}
@Then("Assert the success of Survey the application for user-controllable inputs")
public void asssurvey(){
// Assert one of the following indications :
// -A list of URLs, with their corresponding parameters (POST, GET, COOKIE, etc.) is created by the attacker.
ZAProxyScanner j = new ZAProxyScanner("localhost", 8080, "zap");
int x = j.getSpiderResults(j.getLastSpiderScanId())
.size();
Assert.assertTrue(x>0);
}}
We have implemented these steps to yield GWT test case stubs compatible with the Cucumber framework [@cucumber], which supports a large number of languages. Figure \[fig:feature\] gives a test case stub example obtained with our tool from the first step of the attack CAPEC-244 depicted in Figure \[fig:adtree244\]. The test case lists the Given When Then sections in a readable manner. Every section is associated to a generic procedure stored into another file. The procedure related to the When and Then sections are given in Figure \[fig:proc\]. The comments come from KB and the CAPEC base. In this example, the procedure includes a generic block of code, which may be reused with several applications; the “getSpider()” method relates to the call of the ZAProxy[^1] tool, which crawls a Web application to get its URLs.\
**Step 8: Test Case Stub Completion**
At the beginning of this step, the test case procedures are generic, which means that they are composed of comments or generic blocks of code that help developers complete them. In the previous test case example, it only remains for the developer to write the initial URL of the Web application before testing whether it can be explored. Unfortunately, with other test cases, the developer might have to implement them completely.
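For the stub of Figure \[fig:proc\], this completion may be as small as supplying the entry point of the deployed application; the URL below is only a placeholder for a local deployment:

```java
// Completed body of the trysurvey() procedure of Figure [fig:proc];
// only the entry URL (hypothetical here) had to be provided by the developer.
String url = "http://localhost/myblog/index.php";
ZAProxyScanner j = new ZAProxyScanner("localhost", 8080, "zap");
j.spider(url);
```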
After this step, we assume that the test cases are correctly developed with assertions in Then sections as stated in Section \[sec:datastore\]: a Then section of a test case $TC(b)$ returns the verdict “$Pass_{st}$” if an attack step $st$ has been successfully applied on the $AUT$ and “$Fail_{st}$” otherwise; when $TC(b)$ returns an unexpected exception or fault, we get the verdict “$Inconclusive_{st}$”.\
**Step 9: Test Case Execution**
The experimentation of the $AUT$ with the test suite $TS$ is carried out in this step. A test case $TC(b)$ of $TS$, which aims at testing whether the $AUT$ is vulnerable to an attack step $st$ leads to a local verdict denoted $\operatorname{Verdict}(TC(b) \vert \vert AUT)$:
Let $AUT$ be an application under test, $b=c^p(st,sp) \in BADStep(T_f)$, and $TC(b)\in TS$ be a test case.\
$\operatorname{Verdict}(TC(b) \vert \vert AUT)=$
- $Pass_{st}$, which means $AUT$ is vulnerable to the attack step $st$;
- $Fail_{st}$, which means $AUT$ does not appear to be vulnerable to the attack step $st$;
- $Inconclusive_{st}$, which means that various problems occurred while the test case execution.
We now define the final verdicts of the security testing stage with regard to the ADTree $T_f$. These verdicts are given with the predicates $\operatorname{Vulnerable}(T_f)$ and $\operatorname{Inconclusive}(T_f)$ returning boolean values. The intermediate predicate $\operatorname{Vulnerable}(b)$ is also defined on a BADStep $b$ to evaluate a substitution $\sigma : BADStep(s) \rightarrow \{true,false\}$ on an attack-defense scenario $s$. A scenario $s$ holds if the evaluation of the substitution $\sigma$ applied to $s$, i.e., replacing every BADStep term $b$ with the evaluation of $\operatorname{Vulnerable}(b)$, returns true. The predicate $\operatorname{Vulnerable}(s)$ expresses whether an attack-defense scenario of $T_f$ holds. In that case, the threat modelled by $T_f$ can be achieved on $AUT$. This is defined with the predicate $\operatorname{Vulnerable}(T_f)$:
Let $AUT$ be an application under test, $T_f$ be an ADTree, $s \in SC(T_f)$ and $b=c^p(st,sp) \in BADStep(s)$.
1. $\operatorname{Vulnerable}(b)=_{def} true$ if $\operatorname{Verdict}(TC(b) \vert \vert AUT)= Pass_{st}$; otherwise, $\operatorname{Vulnerable}(b)=_{def} false$;
2. $\operatorname{Vulnerable}(s)=_{def} true$ if $eval(s\sigma)$ returns true, with $\sigma:BADStep(s)\rightarrow \{true,false\}$ the substitution $\{b_1 \rightarrow \operatorname{Vulnerable}(b_1), \dots, b_n \rightarrow \operatorname{Vulnerable}(b_n) \}$; otherwise, $\operatorname{Vulnerable}(s)=_{def} false$;
3. $\operatorname{Inconclusive}(s)=_{def}true$ if $\exists b \in \operatorname{BADStep}(s)$: $\operatorname{Verdict}(TC(b) \vert \vert$ $AUT)=Inconclusive_{st}$; otherwise, $\operatorname{Inconclusive}(s)=_{def}false$.
4. $\operatorname{Vulnerable}(T_f)=_{def} true$ if $\exists s \in SC(T_f): \operatorname{Vulnerable}(s)=true$; otherwise, $\operatorname{Vulnerable}($ $T_f)=_{def} false$;
5. $\operatorname{Inconclusive}(T_f)=_{def}true$ if $\exists s \in SC(T_f),$ $\operatorname{Inconclusive}(s)=true$; otherwise, $\operatorname{Inconclusive}(T_f)$ $=_{def}false$.
Security Pattern Verification
-----------------------------
Our approach also aims at checking whether security patterns are correctly implemented in the application. The security testing stage is indeed insufficient because the non-detection of vulnerability in the $AUT$ does not imply that a security pattern is correctly implemented. As stated earlier, we propose to generate LTL properties that express the behavioural properties of a security pattern. Then, these are used to check whether they hold on the $AUT$ traces. The originality of our approach resides in the fact that we do not ask developers for writing formal properties, we propose to generate them by means of KB.\
**Steps 5 and 6: UML Sequence Diagram Extraction and Modification**
After the threat modelling stage, this step starts by extracting from KB a list of generic UML sequence diagrams for each security pattern in $SP(T_f)$. These diagrams show how a security pattern should behave once it is correctly implemented, i.e., how objects interact in time. We now suppose that the developer implements every security pattern in the application. At the same time, he/she may adapt the behaviours illustrated in the UML sequence diagrams. In this case, we assume that the diagrams are updated accordingly.
Figure \[fig:case\] illustrates an example of UML sequence diagram for the security pattern “Intercepting Validator”. The diagram shows the interactions between an external Client, the pattern and the application, but also the interactions among the objects of the pattern. Here, the Intercepting Validator Object is called to validate requests. These are given to another object ValidatorURL, which filters the request with regard to the URL type. If the request is valid, it is processed by the application (Controller object), otherwise an error is returned to the client side.\
![UML sequence diag. of the pattern Intercepting Validator[]{data-label="fig:case"}](figure/interceptingvalidatorseq){width="1\linewidth"}
**Step 7: Security Pattern LTL Property Generation**
This step automatically generates LTL properties from UML sequence diagrams by detecting the cause-effect relations among method calls and expressing them in LTL. Initially, we took inspiration in the method of Muram et al. [@MuramTZ14], which transforms activity diagrams into LTL properties. Unfortunately, security patterns are not described with activity diagrams, but with sequence diagrams. This is why we devised 20 conversion schemas allowing to transform UML sequence diagram constructs, composed of two or three successive actions, into UML activity diagrams. Table \[tab:rules\] gives 6 of these schemas. Intuitively, these translate two consecutive method calls found in a sequence diagram by activity diagrams composed of action states. The other schemas (not all given in Table \[tab:rules\]) are the results of slight adaptations of the five first ones, where the number of objects or the guards have been modified. For instance, the last schema of Table \[tab:rules\] is an adaptation of the first one, which depicts interactions between two objects instead of three.
Then, we propose 20 rules to translate these activity diagrams into LTL properties. The last column of Table \[tab:rules\] lists 6 of these rules. Some of these rules are based on those proposed by Muram et al, but we devised other rules related to our own activity diagrams, which are more detailed. For instance, we take into account the condition state in the second rule to produce more precise LTL properties.
At the end of this step, we consider having a set of LTL properties $P(sp)$ for every security pattern $sp \in SP(T_f)$. Although the LTL properties of $P(sp)$ do not necessarily cover all the possible behavioural properties of a security pattern $sp$, this process offers the advantages of not asking developers for writing LTL formula or to instantiate generic LTL properties to match the application model or code.
[|m[3cm]{}|m[2.1cm]{}|m[2.3cm]{}|]{} Sequence Diag.&Activity Diag.&LTL properties\
{width="1\linewidth"} & {width=".3\linewidth"} &$\square (B.1 \longrightarrow \lozenge C.2)$\
{width="1\linewidth"} & {width=".8\linewidth"} &$\square ( B.1 \longrightarrow \lozenge $ $B.2 )$ $xor$ $(\neg B.1$ $\longrightarrow \lozenge C.3)) $\
{width="1\linewidth"} & {width="1\linewidth"} & $\square (B.1 \longrightarrow (\lozenge B.2) and (\lozenge C.3))$\
{width="1\linewidth"} & {width=".8\linewidth"} & $\square (B.1 xor C.3 \longrightarrow \lozenge B.3)$\
{width="1\linewidth"} & {width=".8\linewidth"} & $\square (B.1 and C.3 \longrightarrow \lozenge B.3)$\
{width=".7\linewidth"} & {width=".3\linewidth"} & $\square (B.1 \longrightarrow \lozenge B.2)$\
From the example of UML sequence diagram given in Figure \[fig:case\], 4 LTL properties are generated. Table \[prop\] lists them. These capture the cause-effect relations of every pair of methods found in the UML sequence diagram.
$p_1$ $\square (SecureBaseAction.invokes \longrightarrow \lozenge InterceptingValidator.validate)$
------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------
$p_2$ $\square (InterceptingValidator.validate \longrightarrow \lozenge ValidatorURL.create)$
$p_3$ $\square (ValidatorURL.create$ $\longrightarrow \lozenge ValidatorURL.validate)$
$p_4$ $\square ((ValidatorURL.validate \longrightarrow \lozenge Controller.call) xor (\neg ValidatorURL.validate \longrightarrow \lozenge SecureBaseAction.error))$
: LTL properties for the pattern Intercepting Validator.[]{data-label="prop"}
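As an illustration of how such rules can be mechanised, the simplest schema of Table \[tab:rules\] (two consecutive calls, last row) boils down to emitting one response property per pair of consecutive messages of the sequence diagram. The in-memory representation below is a simplification of the JSON diagrams stored in KB, and the ASCII syntax of the produced formulas may have to be adjusted to the model checker that consumes them.

```java
import java.util.ArrayList;
import java.util.List;

final class LtlGenerator {
    // A message of a sequence diagram, e.g., "InterceptingValidator.validate".
    record Call(String object, String method) {
        @Override public String toString() { return object + "." + method; }
    }

    // Last rule of Table [tab:rules]: one property "[](a -> <>b)" per pair of consecutive calls.
    static List<String> responseProperties(List<Call> orderedCalls) {
        List<String> properties = new ArrayList<>();
        for (int i = 0; i + 1 < orderedCalls.size(); i++) {
            properties.add("[](" + orderedCalls.get(i) + " -> <>" + orderedCalls.get(i + 1) + ")");
        }
        return properties;
    }

    public static void main(String[] args) {
        // The calls of Figure [fig:case] on the "valid request" path.
        List<Call> diagram = List.of(
                new Call("SecureBaseAction", "invokes"),
                new Call("InterceptingValidator", "validate"),
                new Call("ValidatorURL", "create"),
                new Call("ValidatorURL", "validate"));
        // Prints p1, p2 and p3 of Table [prop]; p4 needs the alt-fragment rule (second row of Table [tab:rules]).
        responseProperties(diagram).forEach(System.out::println);
    }
}
```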
**Step 10: Security Pattern Verification**
As stated earlier, we consider that the $AUT$ is instrumented with a debugger or similar tool to collect the methods called in the application during the execution of the test cases of $TS$. After the test case execution, we hence have a set of method call traces denoted $Traces(AUT)$.
A model-checking tool is now used to detect the non-satisfiability of LTL properties on $Traces(AUT)$. Given a security pattern $sp$, the predicate $Unsat^b(sp)$ formulates the non-satisfiability of a LTL property of $sp$ in $Traces(AUT)$. The final predicate $\operatorname{Unsat}^b(SP(T_f))$ expresses whether all the LTL properties of the security patterns given in $T_f$ hold.
Let $AUT$ be an application under test, $T_f$ be an ADTree, and $sp \in SP(T_f)$ be a security pattern.
1. $\operatorname{Unsat}^b(sp)=_{def} true$ if $\exists p \in P(sp), \exists t \in Traces(AUT), t \nvDash p$; otherwise, $\operatorname{Unsat}^b(sp)$ $=_{def}false$;
2. $\operatorname{Unsat}^b(SP(T_f))=_{def}true$ if $\exists sp \in SP(T_f), \operatorname{Unsat}^b(sp)= true$; otherwise, $\operatorname{Unsat}^b(SP(T_f))$ $=_{def}false$;
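The prototype delegates this verification to an off-the-shelf checker (see Section \[sec:impl\]), but for the response properties of the form $\square(a \longrightarrow \lozenge b)$ generated at Step 7, the finite-trace check essentially reduces to the following sketch:

```java
import java.util.List;

final class ResponseChecker {
    // Finite-trace check of [](a -> <>b): every occurrence of a must be followed by an occurrence of b.
    static boolean holds(List<String> trace, String a, String b) {
        int lastB = trace.lastIndexOf(b);
        for (int i = 0; i < trace.size(); i++) {
            if (trace.get(i).equals(a) && i > lastB) return false; // an occurrence of a left unanswered
        }
        return true;
    }

    // Unsat^b for a single property: it is violated as soon as one collected trace does not satisfy it.
    static boolean unsat(List<List<String>> traces, String a, String b) {
        return traces.stream().anyMatch(t -> !holds(t, a, b));
    }

    public static void main(String[] args) {
        List<String> ok  = List.of("SecureBaseAction.invokes", "InterceptingValidator.validate",
                                   "ValidatorURL.create", "ValidatorURL.validate", "Controller.call");
        List<String> bad = List.of("SecureBaseAction.invokes", "Controller.call");
        System.out.println(holds(ok,  "SecureBaseAction.invokes", "InterceptingValidator.validate")); // true
        System.out.println(holds(bad, "SecureBaseAction.invokes", "InterceptingValidator.validate")); // false
    }
}
```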
Table \[tab:table1\] informally summarises the meaning of some test verdicts and some corrections that may be followed in case of failure.
Vulnerable($T_f$) $\operatorname{Unsat}^b(SP(T_f))$ Incon.($T_f$) Corrective actions
-------------------- -------------------------------------- --------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------
False False False No issue detected
True False False At least one scenario is successfully applied on $AUT$. Fix the pattern implementation. Or the chosen patterns are unsuitable.
False True False Some pattern behavioural properties do not hold. Check the pattern implementations with the UML seq. diag. Or another pattern conceals the behaviour of the former.
True True False The chosen security patterns are useless or incorrectly implemented. Review the ADTree, fix $AUT$.
T/F T/F True The test case execution crashed or returned unexpected exceptions. Check the Test architecture and the test case codes.
: Test verdict Summary and Recommendations.[]{data-label="tab:table1"}
Implementation {#sec:impl}
==============
Our approach is implemented in Java and is released as open source in [@data]. At the moment, the $AUT$ must be a Web application developed with any kind of language provided that the $AUT$ may be instrumented to collect method call traces. The prototype tool consists of three main parts. The first one consists of a set of command-line tools for building the knowledge base KB. The data integration is mostly performed by calling the tool Talend, which is specialised in the extract, transform, load (ETL) procedure. An example of knowledge base is available in [@data].
A second software program semi-automatically generates ADTrees and GWT test cases. ADTrees are stored into XML files, and may be edited with *ADTool* [@kordy2012attack]. GWT test cases are written in Java with the Cucumber framework, which supports the GWT test case pattern. These test cases can be imported as an Eclipse project to be completed and executed. This software program also provides UML sequence diagrams stored in JSON files, which have to be modified to match the $AUT$ functioning. LTL properties are extracted from these UML sequence diagrams.
The last part of the tool is a tester that exercises Web applications with test cases and returns test verdicts. During test case execution, we collect log files including method call traces. The LTL property verification on these traces is manually done in two steps: 1) the log files usually have to be manually filtered to remove unnecessary events; 2) the tool Texada [@texada] is invoked to check the satisfiability of every LTL property on the log files. This tool takes as inputs a log file, an LTL property composed of variables, and a list of events specifying which variables in the formula are to be interpreted as constant events. Texada returns the number of times that a property holds in a log file. We have chosen the Texada tool as it offers good performance and can be used on large trace sets. But other tools could also be used, e.g., the LTL checker plugin of the ProM framework [@Maggi] or Eagle [@eagle].
Preliminary Evaluation {#sec:eval}
======================
First and foremost, it is worth noting that we carried out in [@SR17b] a first evaluation of the difficulty of using security patterns for designing secure applications. This evaluation was conducted on 24 participants and allowed us to conclude that the Threat modelling stage of our approach makes the security pattern choice and the test case development easier and makes users more effective on security testing. In this paper, we propose another evaluation of the security testing and security pattern testing parts of our approach. This evaluation addresses the following research questions:
- Q1: Can the generated test cases detect security issues?
- Q2: Can the generated LTL properties detect incorrect implementation of patterns?
- Q3: How long does it take to discover errors (Performance)?
Empirical Setup
---------------
We asked ten teams of two students to implement Web applications written in PHP as a part of their courses. They could choose to develop either a blog, a todo-list application, or an RSS reader. Among the requirements, the Web applications had to manage several kinds of users (visitors, administrators, etc.), to be implemented in object-oriented programming, to use the PHP Data Objects (PDO) extension to prevent SQL injections, and to validate all the user inputs. As a solution to filter inputs, we suggested that they apply the security pattern Intercepting Validator, but its use was not mandatory.
Then, we applied our tool on these 10 Web applications in order to:
- test whether these are vulnerable to both SQL and XSS injections (attacks CAPEC-66 and CAPEC-244). With our tool, we generated the ADTrees of Figures \[fig:adtree66\] and \[fig:adtree244\], along with GWT test cases. We completed them to call the penetration testing tool ZAProxy (as illustrated in Figure \[fig:proc\]). All the applications were vulnerable to the steps “Explore” of the ADTrees (application survey); therefore, we also tested them with the test cases related to the steps “Experiment” (attempting SQL or XSS injections);
- test whether the behaviours of the pattern Intercepting Validator are correctly implemented in the 10 Web applications. We took the UML sequence diagram of Figure \[fig:case\] and adapted it ten times to match the context of every application. Most of the time, we had to change the class or method names, and to add as many validator classes as there are in the application codes. When a class or method of the pattern was not implemented, we left the generic name in the UML diagram. Then, we generated LTL properties to verify whether they hold in the application traces.
Q1: Can the generated test cases detect security issues?
--------------------------------------------------------
### Procedure {#procedure .unnumbered}
To study Q1, we tested the 10 applications with the 4 GWT test cases of the two steps Explore and Experiment of the attacks CAPEC-66 and CAPEC-244. As these test cases call a penetration testing tool, which may report false positives, we manually checked the reported errors to keep only the real ones. We also inspected the application codes to examine the causes of the security flaws and to finally check whether the applications are vulnerable.
### Results {#results .unnumbered}
Table \[table:result1\] provides the number of tests for both attacks (columns 2 and 3), the number of security errors detected by these tests (columns 4 and 5) and execution times in seconds (column 6). As a penetration testing tool is called, a large amount of malicious HTTP requests are sent to the applications in order to test them. The test number often depends on the application structure (e.g., number of classes, of called libraries, of URLs, etc.) but also on the number of forms available in an application.
Table \[table:result1\] shows that errors are detected in half of the applications. After inspection, we observed that several inputs are not filtered in App. 1, 5 and 6. On the contrary, for App. 3 and 7 all the inputs are checked. However, the validation process is itself incorrectly performed or too straightforward. For example, in App. 3 the validation comes down to checking that the input exists, which is far from sufficient to block malicious code injections. For the other applications, we observed that they all include a correct validation process, which is called after every client request. After the code inspection and the testing process, we conclude that they seem to be protected against both XSS and SQL injections. These experiments tend to confirm that our approach can be used to test the security of Web applications.
App. \# XSS tests \# SQL tests \# XSS detection \# SQL detection time(s)
------ -------------- -------------- ------------------ ------------------ ---------
1 1610 199 1 0 14
2 12358 796 0 0 924
3 8209 398 10 4 29
4 7347 199 0 0 81
5 2527 398 3 0 1137
6 5884 597 1 1 30
7 9954 1194 1 0 49
8 2464 796 0 0 1478
9 1709 796 0 0 47
10 16441 796 0 0 93
: Results of the security testing stage: number of requests performed, number of detected security errors, and execution times in seconds[]{data-label="table:result1"}
Q2: Can the generated LTL properties detect incorrect implementation of patterns?
---------------------------------------------------------------------------------
### Procedure {#procedure-1 .unnumbered}
To investigate Q2, the PHP applications were instrumented with the debugger Xdebug, and we collected logs composed of method call traces during the test case execution. Then, we used the tool Texada to check whether every LTL property holds in these method call traces. When the pattern is strictly implemented as described in the UML sequence diagram of Figure \[fig:case\] (one Validator class), 4 LTL properties are generated, as in Table \[tab:rules\]. However, the number of LTL properties may differ from one application to the other, with regard to the number of classes used to implement the security pattern. When there are more than 4 LTL properties for an application, the additional ones capture the calls of supplementary Validator classes and only differ from the properties of Table \[tab:rules\] by the modification of the variable ValidatorURL, as illustrated by the sketch below. To keep our results comparable from one application to another, we denote with the notation $p_i$ the set of properties related to the property $p_i$ in Table \[tab:rules\].
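This instantiation amounts to substituting each Validator class name of an application into the generic templates. A possible sketch in Python is given below; the textual LTL syntax and the class names other than ValidatorURL are illustrative only, not the concrete input format of the model checker.

```python
# Sketch: instantiate the generic properties for every Validator class of an
# application. The textual LTL syntax and the extra class names are illustrative.
TEMPLATES = [
    "G(InterceptingValidator.validate -> F {v}.create)",    # family of p2
    "G({v}.create -> F {v}.validate)",                       # family of p3
]

def instantiate(validator_classes):
    return [t.format(v=v) for v in validator_classes for t in TEMPLATES]

for prop in instantiate(["ValidatorURL", "ValidatorEmail", "ValidatorText"]):
    print(prop)
```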
Furthermore, both authors independently checked the validation part in every application to assess how the security pattern is implemented, in order to ensure that a property violation implies that a security pattern behaviour is not correctly implemented.
### Results {#results-1 .unnumbered}
Table \[table:result2\] lists in columns 2-5 the violations of the properties derived from those given in Table \[tab:rules\] for the 10 applications. These results firstly show that our approach detects that the security pattern Intercepting Validator is never correctly implemented. The pattern seems to be almost implemented in App. 2 because only $p_4$ does not hold here. An inspection of the application code confirms that the pattern structure is correctly implemented as well as most of its method call sequences. But we observed that the application does not always return an error to the user when some inputs are not validated. This contradicts one of the pattern purposes.
App. 3, 4, 7-10 include some sorts of input filtering processes at least defined in one class. But, these do not respect the security pattern behaviours. Most of the time, we observed that the validation process is implemented in a single class instead of having an Intercepting Validator calling other Validator classes. This misbehaviour is detected by the violations of the properties $p_2$ and $p_3$. Besides, we observed that the input validation is not systematically performed in App. 1, 5 and 6. This is detected by our tool with the violation of $p_1$. As a consequence, it is not surprising to observe that these applications are vulnerable to malicious injections. We also observed that when App. 5 validates the inputs, it does not define the validation logic in a class. The fact that the security pattern is not invoked is detected by the violation of $p_1$. But, this property violation does not reveal that there is another validation process implemented. In summary, our application code inspections confirmed the results of Table \[table:result2\]. In addition to assessing whether the security pattern behaviours are correctly implemented, we observed that our approach may also help learn more information about the validation process, without inspecting the code. For instance, the properties based on $p_1$ check whether a validation method defined in a class is called every time a client request is received. The properties based on $p_4$ give information about the error management. Their violations express that users are not always warned when invalid inputs are provided to the applications.
App. $p_1$ $p_2$ $p_3$ $p_4$ Time(min)
------ ------- ------- ------- ------- -----------
1 X X 4,02
2 X 51,15
3 X X 19,12
4 X X 29,34
5 X X X X 6,5
6 X X X X 14,40
7 X X 24,77
8 X X X 7,24
9 X X X 5,56
10 X X 67,03
: Results of the security pattern testing stage: violation of the LTL properties and execution times in minutes[]{data-label="table:result2"}
Q3: How long does it take to discover errors (Performance)?
-----------------------------------------------------------
### Procedure {#procedure-2 .unnumbered}
We measured the time consumed by the tool to carry out security testing and security pattern verification for the 10 applications. Execution times are given in Tables \[table:result1\] and \[table:result2\]. Furthermore, we also measured the number of LTL properties that are generated for 11 security patterns, which are often used with Web applications, as the LTL property number influences execution times.
### Results {#results-2 .unnumbered}
The plot chart of Figure \[fig:exec\] shows that security testing requires less than 2 minutes for 7 applications independently of the number of tests, whereas it requires more than 15 minutes for the 3 others. The security testing stage depends on several external factors, which makes it difficult to draw consistent conclusions. It firstly depends on the test case implementation; in our evaluation, we chose to call a penetration testing tool, therefore execution times mostly depend on it. Another factor is the application structure (number of classes, calls to external URLs, etc.). Therefore, we can only conclude here that execution times are lower than 25 minutes, which remains reasonable with regard to the number of requests sent to applications.
![Execution times of the security testing stage for the ten applications[]{data-label="fig:exec"}](figure/exectimes){width="1\linewidth"}
The time required to detect property violations in method call traces is given in Column 6 of Table \[table:result2\]. Execution times vary here between 4 and 67 minutes according to the number of traces collected from the application and the number of generated LTL properties. For example, for App. 10, 17237 security tests have been executed, and 17237 traces of about 30 events have been stored in several log files. Furthermore, 7 LTL properties have been generated for this application. These results, and particularly the size of the trace set, explain the time required to check whether the LTL properties hold. In general terms, we consider that execution times remain reasonable with regard to the trace set sizes of the applications.
Table \[table:result3\] finally shows the number of LTL properties generated from generic UML sequence diagrams (without adapting them to application contexts) for 11 security patterns whose descriptions include UML sequence diagrams. For these patterns, the property number is lower than or equal to 13. For every pattern, the property number is in a range that seems reasonably well supported by model checkers. However, if several security patterns have to be tested, the property number might quickly exceed the model-checker limits. This is why we have chosen in our approach to check the satisfiability of each LTL property, one after the other, on method call traces.
Security pattern \# UML diag. \# LTL properties
---------------------------- -------------- -------------------
Authentication Enforcer 3 9
Authorization Enforcer 3 13
Intercepting Validator 2 4
Secure Base Action 2 5
Secure Logger 2 5
Secure Pipe 2 10
Secure Service Proxy 2 6
Intercepting Web Agent 2 9
Audit Interceptor 2 7
Container Managed Security 2 7
Obfuscated Transfer Object 2 10
: LTL property generation for some security patterns[]{data-label="table:result3"}
Threats to Validity
------------------
This preliminary experimental evaluation is applied to 10 Web applications, and not to other kinds of software or systems. This is a threat to external validity, and this is why we avoid drawing any general conclusion. But we believe that this threat is somewhat mitigated by our choice of application, as the Web application context is a rich field in great demand in the software industry. Web applications also expose a lot of well-known vulnerabilities, which helps in the experiment set-up. In addition, the number of security patterns considered in the evaluation was small. Hence, it is possible that our method is not applicable to all security patterns. In particular, we assume that generic UML sequence diagrams are provided in the security pattern descriptions. This is the case for the patterns available in the catalogue of Yskout et al. [@Yskoutcatalog], but not for all the patterns listed in [@patrepo]. To generalise the approach, we also need to consider more general patterns and employ large-scale examples.
The evaluation is based on the work of students, but this public is sometimes considered as a bias in evaluations. Students are usually not yet meticulous on the security solution implementation, and as we wished to experiment vulnerable applications to check that our approach can detect security flaws, we consider that applications developed by students meet our needs.
A threat to internal validity is related to the test case development. Our approach aims at guiding developers in the test case writing and security pattern choice. In the evaluation, we chose to complete test cases with the call of a penetration testing tool. The testing results would be different (better or worse) with other test cases. Significant advances have been made in these tools, which are more and more employed in the industry. Therefore, we believe that their use and the test cases considered in the experiments are close to real use cases. In the same way, we manually updated UML sequence diagrams to generate LTL properties that correspond to the application contexts. But, it is possible that we inadvertently made some mistakes, which led to false positives. To avoid this bias, we manually checked the correctness of the results by replaying the counterexamples returned by the model-checker and by inspecting the application codes.
Conclusion {#sec:conclusion}
==========
Securing software requires that developers acquire a lot of expertise in various stages of software engineering, e.g., in security design, or in testing. To help them in these tasks, we have proposed an approach based on the notion of knowledge base, which helps developers in the implementation of secure applications through steps covering threat modelling, security pattern choice, security testing and the verification of security pattern behavioural properties. This paper proposes two main contributions. It assists developers in the writing of concrete security test cases and ADTrees. It also checks whether security pattern properties are met in application traces by automatically generating LTL properties from the UML sequence diagrams that express the behaviours of patterns. Therefore, the approach does not require developers to have skills in (formal) modelling or in formal methods. We have implemented this approach in a tool prototype [@data]. We conducted an evaluation of our approach on ten Web applications, which suggests that it can be used in practice. Future work should complement the evaluation to confirm that the approach can be applied to more kinds of applications. We also mentioned that security pattern descriptions do not all include UML sequence diagrams, which are currently required by our approach. We will try to solve this lack of documentation by investigating whether security pattern behavioural properties could be expressed differently, e.g., with annotations added inside application codes. In addition, we intend to consider how our ADTree generation could support the teaching of security testing and security by design.
[^1]: https://www.owasp.org/index.php/OWASP\_Zed\_Attack\_Proxy\_Project
---
abstract: 'We have studied entropy, redundancy, complexity, and first passage times to notes for 804 pieces of 29 composers. The successful understanding of tonal music calls for an experienced listener, as entropy dominates over redundancy in musical messages. First passage times to notes resolve tonality and feature a composer. We also discuss the possible distances in space of musical dice games and introduced the geodesic distance based on the Riemann structure associated to the probability vectors (rows of the transition matrices).'
author:
- 'J. R. Dawin${}^{1}$, D. Volchenkov${}^{2}$ [^1]'
title: Markov Chain Analysis of Musical Dice Games
---
[**Keywords:**]{} Markov chains, entropy and complexity, musical distance, first passage time.
Musical dice game as a Markov chain {#sec:Introduction}
===================================
A system for using dice to compose music randomly, without having to know neither the techniques of composition, nor the rules of harmony, named [*Musikalisches Würfelspiel*]{} ([*Musical dice game*]{}) had become quite popular throughout Western Europe in the $18^{th}$ century [@Noguchi:1996]. Depending upon the results of dice throws, the certain pre-composed bars of music were patched together resulting in different, but similar, musical pieces. “The Ever Ready Composer of Polonaises and Minuets” was devised by Ph. Kirnberger, as early as in 1757. The famous chance music machine attributed to W.A. Mozart (`K 516f`) consisted of numerous two-bar fragments of music named after the different letters of the Latin alphabet and destined to be combined together either at random, or following an anagram of your beloved had been known since 1787. We can consider a note as an elementary event in the musical dice game, as notes provide a natural discretization of musical phenomena that facilitate their performance and analysis. Given the entire keyboard $\mathcal{K}$ of 128 notes corresponding to a pitch range of 10.5 octaves, each divided into 12 semitones, we regard a note as a discrete [*random variable*]{} $X$ that maps the musical event to a value of a $n$-set of pitches $\mathcal{P}=\{x_1, \ldots, x_n\}\subseteq \mathcal{K}.$ In the musical dice game, a piece is generated by patching notes $X_t$ taking values from the set of pitches $\mathcal{P}$ that [*sound good together*]{} into a temporal sequence $\left\{X_t\right\}_{t\geq 1}$. Herewith, two consecutive notes, in which the second pitch is a harmonic of the first one are considered to be pleasing to the ear, and therefore can be patched to the sequence. Thus tonal harmony sets up the [*Markov property*]{} for the sequence $\left\{X_t\right\}_{t\geq 1}$ that can be assessed in terms of the transition probabilities between consecutive notes, in the framework of a simple time – homogeneous model called [*Markov chain*]{} [@Markov:1906], $$\label{music_transition_matrix}
\begin{array}{l}
\Pr\left[ X_{t+1}=x\mid X_t=y,X_{t-1}=z,\ldots\right]\,=\,
\Pr\left[ X_{t+1}=x\mid X_t=y\right]\,=\,T_{yx}, \\
\sum_{x\in \mathcal{P}}T_{yx}=1,
\end{array}$$ where the stochastic transition matrix $T_{yx}$ weights the chance of a pitch $x$ going directly to another pitch $y$ independently of time. The model (\[music\_transition\_matrix\]) obviously does not impose a severe limitation on melodic variability, since there are many possible combinations of notes considered consonant, as sharing some harmonics and making a pleasant sound together. The relations between notes in (\[music\_transition\_matrix\]) are rather described in terms of probabilities and expected numbers of random steps than by physical time. Under such circumstances, the actual length $N$ of a composition is formally put $N\to\infty,$ or as long as you keep rolling the dice. Markov chains are widely used in algorithmic music composition, as being a standard tool, in music mix and production software. Interactions between humans via speech and music constitute the unifying theme of research in modern communication technologies. As with music, speech and written language also have the sets of rules (crucial for establishing effective communication) that govern which particular combinations of sounds and letters may or may not be produced. However, while communications by the spoken and written forms of human languages have been paid much attention from the very onset of information theory [@Shannon:1948; @Shannon:1951], not very much is known about the relevant information aspects of music [@Wolfe:2002]. Although we use the acoustic channel in both music and speech, the acoustical and structural features we implement to encode and perceive the signals in music and speech are dramatically different, as “speech is communication of world view as the intellection of reality while music is communication of world view as the feeling of reality” [@Seeger:1971]. With the Markov chain model (\[music\_transition\_matrix\]), we could precisely quantify this difference, since it allows to appraise tonal music as a generalized communication process, in which a composer sends a message transmitted by a performer to a listener. In our work, we report some results on the Markov chain analysis of the musical dice games encoded by the transition matrices between pitches in the MIDI representations
of the 804 musical compositions attributed to 29 composers: J.S. Bach (371), L.V. Beethoven (58), A.Berg (7), J. Brahms (8), D. Buxtehude (3), F. Chopin (26), C. Debussy (26), G. Fauré (5), C. Franck (7), G.F. Händel (45), F. Liszt (4), F. Mendelssohn Bartholdi (19), C. Monteverdi (13), W.A. Mozart (51), J. Pachelbel (2), S. Rachmaninoff (4), C. Saint-Saëns (2), E. Satie (3), A. Schönberg (2), F. Schubert (55), R. Schumann (30), A. Scriabin (7), D. Shostakovitch (12), J. Strauss (2), I. Stravinsky (5), P. Tchaikovsky (5), J. Titelouze (20), A. Vivaldi (4), R. Wagner (8). The MIDI representations of many musical pieces are freely available on the Web [@Mutopia].
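As a minimal illustration of the model (\[music\_transition\_matrix\]), the transition matrix of a single melodic line can be estimated by counting bigrams of consecutive pitches and normalising the rows. The Python sketch below uses a toy melody; the actual data processing for polyphonic MIDI scores is described in the next section.

```python
import numpy as np

def transition_matrix(pitches, n_notes=128):
    """Estimate T_{yx} = Pr[next pitch = x | current pitch = y] from bigram counts."""
    counts = np.zeros((n_notes, n_notes))
    for y, x in zip(pitches[:-1], pitches[1:]):
        counts[y, x] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows of pitches that never occur (or never lead anywhere) are left as zeros.
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Toy melody in MIDI note numbers (C4 D4 E4 C4 D4 E4 F4 E4 D4 C4), illustrative only.
melody = [60, 62, 64, 60, 62, 64, 65, 64, 62, 60]
T = transition_matrix(melody)
print(T[60, 62], T[62, 64])   # empirical probabilities of C4 -> D4 and D4 -> E4
```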
The paper is organized as follows. In Sec. \[sec:How\_did\_we\_collect\_the\_data\], we discuss the MIDI representations of music and the different methods to encode them into a Markov chain transition matrix. The encoding problem is not trivial, as ambiguities would arise provided a piece has more than one voice. We then consider a music as a generalized communication process in Sec. \[sec:Entropy\_redundancy\_complexity\]. While the elements of the transition matrix (\[music\_transition\_matrix\]) indicate the possibility to consequently find the two notes in the musical score, an infinite number of powers of the transition matrix must be considered to estimate the eventual distance between them with respect to the entire structure of the musical dice game. First passage times to notes and the classification of composers with respect to their tone scale preferences are discussed in Sec. \[sec:First\_Passage\_Times\_in\_music\]. The possible distances between the different musical dice games are discussed in Sec. \[sec:musical\_distances\]. We conclude in the last section.
Encoding of a discrete model of music (MIDI) into a transition matrix {#sec:How_did_we_collect_the_data}
=====================================================================
While analyzing the statistical structure of musical pieces, we used the MIDI representations providing a computer readable [*discrete time model*]{} of music by a sequence of the ’note’ events, `note_on` and `note_off`:
`event type` `time` `channel` `note` `velocity`
-------------- -------- ----------- -------- ------------
`note_on` `192` `0` `60` `127`
`note_off` `192` `0` `60` `64`
: The MIDI events for the note C4.\[Tab\_music\_01\]
In the MIDI representation, each event (like that one shown in Tab. \[Tab\_music\_01\]) is characterized by the four variables: ’time’, ’channel’, ’note’, and ’velocity’. A MIDI file has a specific value of discreteness ’ticks/quarter’ indicating the number of ’ticks’ that make up a quarter note. The value of ’time’ then gives the number of ’ticks’ between two consequent note events. In the example given in Tab. \[Tab\_music\_01\], the event of `C4` starts after 192 ’ticks’ have passed. The ’channel’ indicates one of the 16 channels (0 to 15) this event may belong to. Notes are not encoded by their names like `C` or `A`. Instead, the harmonic scale is mapped onto numbers from 0 to 127 with chromatic steps. For instance, the identification number 60 corresponds to `C4`, in musical notation. Then, note number 61 is `C4#`, 62 is `D4`, etc. (see Tab. \[Tab\_music\_02\] for some octaves and their MIDI note ID numbers).
`Octave` `C` `C#` `D` `D#` `E` `F` `F#` `G` `G#` `A` `A#` `B`
---------- ----- ------ ----- ------ ----- ----- ------ ----- ------ ----- ------ -----
3 48 49 50 51 52 53 54 55 56 57 58 59
4 60 61 62 63 64 65 66 67 68 69 70 71
5 72 73 74 75 76 77 78 79 80 81 82 83
: MIDI note ID numbers corresponding to musical notation. \[Tab\_music\_02\]
Finally, the ’velocity’ (0 – 127) describes the strength with which the note is played. As MIDI files contain all musically relevant data, it is possible to determine the probabilities of getting from one note to another for all notes in a musical composition by analyzing its MIDI file with a computer program. To get transition matrices (\[music\_transition\_matrix\]) for tonal sequences, we need only ’time’, ’channel’, and ’note’ to be considered.
The MIDI files of 804 musical compositions were processed by a program written in `Perl`; the MIDI parsing was done using the module `Perl::MIDI` [@Perl_MIDI], which allowed the conversion of the MIDI data into a more convenient form called `MIDI::Score` where each two consequent `note_on` and `note_off` events are combined to a single `note` event. Each `note` event contains an absolute `time`, the starting time of the event, and a `duration` which gives the duration of the event in ticks.
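A possible sketch of this conversion step in Python is shown below: each `note_on` is paired with the following `note_off` of the same note and channel, yielding score-like events with an absolute time and a duration. The event tuples reproduce the values of Tab. \[Tab\_music\_03\]; the common MIDI convention of encoding a note-off as a `note_on` with zero velocity is not handled in this sketch.

```python
# Sketch: turn (delta_time, type, channel, note) MIDI events into score-like
# (abs_time, duration, channel, note) events, as the MIDI::Score form above.

def to_score(events):
    score, pending, now = [], {}, 0
    for delta, kind, channel, note in events:
        now += delta
        key = (channel, note)
        if kind == "note_on":
            pending[key] = now
        elif kind == "note_off" and key in pending:
            start = pending.pop(key)
            score.append((start, now - start, channel, note))
    return sorted(score)

# First notes of Fugue I BWV846, as in Tab. [Tab_music_03].
events = [(192, "note_on", 0, 60), (192, "note_off", 0, 60),
          (0, "note_on", 0, 62), (192, "note_off", 0, 62),
          (0, "note_on", 0, 64), (192, "note_off", 0, 64)]
print(to_score(events))
# [(192, 192, 0, 60), (384, 192, 0, 62), (576, 192, 0, 64)]
```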
To give an example of the process of getting to a transition matrix from a musical score, we consider the first bars of the Fugue `BWV846` of J.S. Bach shown in Fig. \[Fig\_music\_02\]. The numbers below the first notes in Fig. \[Fig\_music\_02\] indicate the corresponding MIDI ID note numbers. In Tab. \[Tab\_music\_03\], we show the representation of these notes in MIDI and in `MIDI::Score` format. Here, the value of velocity is omitted.
-------------- -------- ------ -------- -------------- -------- ------- ------ --------
`event type` `time` `ch` `note` `event type` `time` `dur` `ch` `note`
`note on` 192 0 60 `note` 192 192 0 60
`note off` 192 0 60
`note on` 0 0 62 `note` 384 192 0 62
`note off` 192 0 62
`note on` 0 0 64 `note` 576 192 0 64
`note off` 192 0 64
-------------- -------- ------ -------- -------------- -------- ------- ------ --------
: MIDI and MIDI::Score data from the beginning of Fugue I BWV846.[]{data-label="Tab_music_03"}
For the first notes shown in Fig. \[Fig\_music\_02\], the definition of a transition is easy as there is only one voice. In particular, from Tab. \[Tab\_music\_03\], we can conclude that there would be the consequent transitions $60 \to 62$ and $62 \to 64$. However, like most musical pieces, this Fugue then contains several voices that play simultaneously, so that an additional convention is required to define a transition from note to note.
`time` `dur` `ch` `note` `name` `voice`
-------- ------- ------ -------- -------- ---------
2496 192 0 67 `G4` upper
2496 96 0 64 `E4` lower
2592 96 0 62 `D4` lower
2688 192 0 69 `A4` upper
2688 96 0 60 `C4` lower
2784 96 0 62 `D4` lower
2880 192 0 71 `B4` upper
2880 96 0 60 `C4` lower
: MIDI::Score data from the middle of the second bar of Fugue I BWV846 where the second voice starts playing. The note names and the voices of the events are also shown. \[Tab\_music\_04\]
In the middle of the second bar shown in Fig. \[Fig\_music\_02\], a second voice is starting. Some note events starting from there are given in Tab. \[Tab\_music\_04\] in `MIDI::Score` form. From Tab. \[Tab\_music\_04\], it is clear that in principle it is not necessary to put the upper voice into a different channel than that of the lower voice. In the example shown in Tab. \[Tab\_music\_04\], the notes 67 and 64 both start at time 2496. As note 64 has a duration of 96 ticks, it is obvious that note 62 at time 2592 belongs to the same voice as note 64. However, for the notes 69 and 60 starting at 2688, it is unclear to which voice each note belongs, and how they might be encoded into a transition matrix. It is important to note that such an ambiguity is not a problem of the MIDI representation itself, but rather of music. It depends upon the experience of a listener how she distinguishes voices while listening to a musical composition that contains several simultaneous voices. Even if the musical score explicitly separates those voices by placing them atop each other, our personal impression of them might not coincide with the one notated, rather arising from live audio mixing of all simultaneous voices during the performance. Thus, to get transition matrices from MIDI files, we have to answer the following important question: “Which transitions between which note events have to be accounted?”
In our approach, we sort note events in ascending order by time and channel. Surfing over the list of events, a transition between two subsequent occurrences is accounted when the moment of time of the second event is greater than that of the first one. When several events occur simultaneously, we give priority to the event belonging to the channel with the smallest number. Let us emphasize that under this method not all possible transitions between note events contribute to the transition matrix.
time dur ch note [name]{}
------- ----- ---- ------ -------------- ----
13056 288 0 76 [(`E5`)]{} \*
13056 288 0 72 [(`C5`)]{}
13056 192 1 60 [(`C4`)]{}
13152 96 1 59 [(`B3`)]{} \*
13248 96 1 60 [(`C4`)]{} \*
13344 96 0 78 [ (`F#5`)]{} \*
13344 48 0 74 [ (`D5`)]{}
13344 96 1 57 [ (`A3`)]{}
13392 48 0 72 [ (`C5`)]{} \*
\[Tab\_music\_05\]
For example, let us consider the musical bar shown in Fig. \[Fig\_music\_02a\] and its list of events given in the adjacent table. The resulting transitions accounted in the matrix would be those between events marked with ’\*’: $76 \to 59$, $59 \to 60$, $60 \to 78$, $78 \to 72.$ Note events with small channel values are favored over those with higher values. For simultaneous note events occurring in the same channel, only the first one is considered, which mostly means the topmost voice, in musical notation.
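A Python sketch of this convention is given below; it reproduces the transitions marked with ’\*’ above. At every onset time the first event of the lowest-numbered channel is kept as the representative, and transitions are recorded between the representatives of consecutive onset times. Accumulating such transitions into counts and normalising the rows then yields transition matrices of the kind shown in Fig. \[Fig\_music\_03\].

```python
from itertools import groupby

# Events are (time, duration, channel, note), already sorted by time and channel,
# as in the table above.
events = [(13056, 288, 0, 76), (13056, 288, 0, 72), (13056, 192, 1, 60),
          (13152,  96, 1, 59), (13248,  96, 1, 60),
          (13344,  96, 0, 78), (13344,  48, 0, 74), (13344,  96, 1, 57),
          (13392,  48, 0, 72)]

def accounted_transitions(events):
    representatives = []
    for _, group in groupby(events, key=lambda e: e[0]):     # group by onset time
        group = list(group)
        lowest = min(e[2] for e in group)                    # lowest channel wins
        first = next(e for e in group if e[2] == lowest)     # first event = topmost voice
        representatives.append(first[3])
    return list(zip(representatives[:-1], representatives[1:]))

print(accounted_transitions(events))
# [(76, 59), (59, 60), (60, 78), (78, 72)]
```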
We believe that the encoding method we use is quite efficient for unveiling the individual melodic lines and identifying the creative character of a composer from musical compositions because of the appearance of the resulting transition matrices. The matrices generated with the chosen encoding method look different, from piece to piece and from composer to composer (see the examples shown in Fig. \[Fig\_music\_03\]). However, if we treated each voice in a musical composition separately (the transitions of the upper voice and those of the lower voice might be accounted independently while computing the probabilistic vector forming a row of the transition matrix), the transition matrices would be clearly dominated by a region along the main diagonal, similarly for all compositions.
It is important to mention that no matter which encoding method is used, the resulting transition matrices appear to be essentially non-symmetric: if $T_{xy}>0,$ for some $x,y,$ it might be that $T_{yx}=0.$ A musical composition can be represented by a weighted directed graph, in which vertices are associated with pitches and the directed edges connecting them are weighted according to the probabilities of the immediate transitions between those pitches. The Markov chains determining random walks on such graphs are not ergodic: it may be impossible to go from every note to every other note following the score of the musical piece.
Musical dice game as a communication process {#sec:Entropy_redundancy_complexity}
============================================
Contrary to the alphabets used in human languages, the sets of pitches underlying the different musical compositions can be very distinct and may not overlap (even under chromatic transposition). The cardinality of the set of pitches $\mathcal{P}$ also changes from piece to piece demonstrating a tendency of slow growth, with the length of composition $N$. In Fig. \[Fig\_music\_01\], we have sketched how the number of different pitches $n$ used to compose a piece depends upon the size of composition $N$. The data collected over 610 pieces created by the six classical music composers show that the growth can be well approximated by a logarithmic curve indicating that $n\,(\sim \log N)$ can be used as the simplest parameter assessing complexity of a classical musical composition.
Let us suppose that a musical piece generated as an output of the musical dice game (\[music\_transition\_matrix\]) can be encoded by a sequence of independent and identically-distributed random variables representing notes which can take values of different pitches. To measure the uncertainty associated with a pitch, we can then use the Shannon entropy [@Schurmann:1996], $$\label{musical_entropy}
H\,\,=\,\,-\sum_{x\in\mathcal{P}}
\pi_{x}\log_n\pi_{x}$$ where $\pi_{x}$ is the probability to find the note $x\in \mathcal{P}$ in the musical score, and the base of the logarithm is $n =|\mathcal{P}|$. Since the entropy of a musical piece defined by (\[musical\_entropy\]) is affected by the number of used pitches $n$, the parameter of information redundancy, $$\label{music_redundancy}
R\,\,=\,\,1- {H}/{\max H}, \quad \max H\, =\,\log n,$$ where $ \max H$ is the theoretical maximum entropy, might be used for comparing different musical compositions. According to information theory [@Cover:1991], redundancy quantifies the predictability of a pitch in the piece, being a natural counterpart of entropy. As we have mentioned above, a Markov chain encoding the musical dice game is not ergodic, and therefore the probability of finding a pitch in the musical score cannot be found simply as the entry in the left eigenvector of the transition matrix ${\bf T}$ belonging to the maximal eigenvalue $\mu=1.$ In order to find the probability of observing a note in the musical score, we can use the method of the group generalized inverse [@Meyer:1975; @Meyer:1982], which can be applied to analyze any Markov chain regardless of its structure. As the Laplace operator corresponding to the Markov chain (\[music\_transition\_matrix\]), $$\label{music_laplace_opeartor}
{\bf L}\,\,=\,\, {\bf 1}-{\bf T},$$ where ${\bf 1}$ is a unit matrix, is always a member of a multiplicative matrix group, it always possesses a group inverse ${\bf L}^\sharp,$ a special case of the Drazin generalized inverse [@Drazin:1958; @Ben-Israel:2003; @Meyer:1975] satisfying the Erdélyi conditions [@Erdelyi:1967]: $$\label{Drazin_inverse}
{\bf LL}^\sharp {\bf L}\,\,=\,\,{\bf L}, \quad
{\bf L}^\sharp {\bf LL}^\sharp\,\,=\,\,{\bf L}^\sharp,
\quad \left[{\bf L},{\bf L}^\sharp\right]\,\,=\,\,0$$ where $[{\bf A},{\bf B}]={\bf AB}-{\bf BA}$ denotes the commutator of the two matrices. The role of group inverses (\[Drazin\_inverse\]) in the analysis of Markov chains have been discussed in details in [@Meyer:1975; @Meyer:1982; @Campbell:1979]. Here, we only mention that the stationary vector of a Markov chain can be calculated as $$\label{music_stationary_vector}
\pi_{x_j}\,\,=\,\,\left({\bf 1}-{\bf LL}^\sharp\right)_{x_ix_j};$$ the rows of (\[music\_stationary\_vector\]) are all equal to the stationary vector $\pi$. Determining the entropy of texts written in a natural language is an important problem of language processing. The entropy of current written and spoken languages (English, Spanish) has been estimated experimentally as ranging from 0.5 to 1.3 bit per character [@Shannon:1951; @Lin:1973]. An approximately even balance (50:50) of entropy and redundancy is supposed to be necessary to achieve effective communication in transmitting a message, as it makes it easier for humans to perceive information [@Lin:1973].
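The group inverse and the stationary vector (\[music\_stationary\_vector\]) can be approximated numerically in a few lines. The sketch below uses the identity ${\bf L}^\sharp = ({\bf L}+{\bf W})^{-1}-{\bf W}$, where ${\bf W}={\bf 1}-{\bf LL}^\sharp$ is the eigenprojector of ${\bf T}$ onto the eigenvalue $1$, approximated here by repeatedly squaring the lazy chain $({\bf 1}+{\bf T})/2$; the $3\times 3$ matrix is a toy example rather than a real score.

```python
import numpy as np

def eigenprojector(T, n_squarings=30):
    """Approximate W = 1 - L L^#, the eigenprojector of T onto the eigenvalue 1.

    Powers of the lazy chain (1 + T)/2 converge to this projector, so repeated
    squaring is used -- a purely numerical shortcut for this sketch.
    """
    W = 0.5 * (np.eye(T.shape[0]) + T)
    for _ in range(n_squarings):
        W = W @ W
    return W

def group_inverse(T):
    """Group (Drazin) inverse of L = 1 - T, via L^# = (L + W)^{-1} - W,
    which holds because W projects onto ker L along the range of L."""
    L = np.eye(T.shape[0]) - T
    W = eigenprojector(T)
    return np.linalg.inv(L + W) - W, W

# Toy 3-pitch game; when the chain has a unique stationary vector pi,
# every row of W = 1 - L L^# coincides with pi.
T = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
L_sharp, W = group_inverse(T)
print(W[0])          # ~ [0.25, 0.5, 0.25]
```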
For all musical compositions we studied, the magnitudes of entropy fluctuate in a range between 0.7 and 1.1 bit per note, fitting well with the entropy range of usual languages. In classical music, where the tonal method of composition is widely used, pieces involving more pitches appear to have lower magnitudes of entropy but higher values of redundancy (predictability). In Fig. \[Fig\_music\_04\], we have presented the statistics of entropy and redundancy vs. the number of pitches through their five-number summaries, for 371 chorales of J.S. Bach. A central line of each box in the box plot (Fig. \[Fig\_music\_04\]) shows the median (not the mean), the value separating the higher half of the data sample from the lower half, which is found by arranging all the observations from lowest value to highest value and picking the middle one. Other lines of the box plot indicate the quartile values, which divide the sorted data set into four equal parts, so that each part represents one fourth of the sample. A lower line in each box shows the first quartile, and an upper line shows the third quartile. Two lines extend from the central box; their maximal length is 3/2 the interquartile range, but they do not extend past the range of the data. The outliers are those points lying outside the extent of the previous elements.
The entropy and redundancy statistics suggest that compositions in classical music might contain some repeated patterns, or motives, in which certain combinations of notes are more likely to occur than others. In particular, the dramatic increase of redundancy as the range of pitches expands up to 7.5 octaves implies that musical compositions involving many pitches might convey mostly conventional, predictable blocks of information to a listener. However, in contrast to human languages, where entropy and redundancy are approximately equally balanced [@Lin:1973], in classical music entropy clearly dominates over redundancy. While decoding a musical message requires the listener to invest nearly as much effort as in everyday decoding of speech, the successful understanding of the composition would call for an experienced listener ready to invest his or her full attention to a communication process that would span across cultures and epochs.
Another possible information measure that can be applied to the analysis of musical dice games is the past-future mutual information ([*complexity*]{}) introduced in studies of the symbolic sequences generated by dynamical systems [@Shaw:1984] (see also [@Cover:1991]). It estimates the information content of the blocks of notes and can be formally derived as the limiting excess of the block entropy $$H(S^N)\,\,=\,\,-\sum_{S^N}P(S^N)\log_n P(S^N),$$ in which $P(S^N)$ is the probability to find a patch $S^N$ of $N$ notes, over the $N$ times Shannon entropy $H$, as the size of the block $N\to\infty,$ $$\label{musical_complexity_past_future}
C\,\,=\,\,\lim_{N\to \infty}
\left( H(S^N)-H\cdot N\right).$$ Following [@Li:1991], we use the fact that the transition probability between states in a Markov chain determined by the matrix (\[music\_transition\_matrix\]) is independent of $N$, so that complexity (\[musical\_complexity\_past\_future\]) can be computed simply as $$\label{musical_complexity_past_future_02}
C\,\,=\,\,-\sum_{x\in \mathcal{P}}\pi_{x}\,\log_n
\frac{\pi_x}{\,\, \prod_{y\in\mathcal{P}}T_{xy}^{T_{xy}}\,\,}.$$ In Fig. \[Fig\_music\_05\] (left), we have presented the statistics of complexity values for Bach’s chorales. The main trend (shown in Fig. \[Fig\_music\_05\] (left) by a solid line) is given by a cubic spline interpolating between the mean (not the median) complexity values. Complexity decreases rapidly with the number of pitches used in a composition, suggesting that the musical pieces might contain a few types of melodic lines translated over the entire diapason of pitches by chromatic transposition. Finally, in Fig. \[Fig\_music\_05\] (right), we have sketched a scatter plot showing the pace of complexity with entropy in 480 compositions of classical music, which implies that a strong positive correlation exists between the value of entropy and the logarithm of complexity in compositions of classical music.
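Given a transition matrix and its stationary vector, the quantities (\[musical\_entropy\]), (\[music\_redundancy\]) and (\[musical\_complexity\_past\_future\_02\]) reduce to a few array operations. In the sketch below the complexity is evaluated as $C=H-h$, with $h$ the entropy rate of the chain, which is (\[musical\_complexity\_past\_future\_02\]) after expanding the logarithm; the $3\times 3$ matrices are toy examples.

```python
import numpy as np

def xlogx(p):
    """Elementwise p * log(p) with the convention 0 * log 0 = 0."""
    return np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)

def entropy_redundancy_complexity(T, pi):
    """Entropy H, redundancy R and complexity C of a musical dice game,
    with logarithms taken to the base n = number of pitches."""
    log_n = np.log(len(pi))
    H = -xlogx(pi).sum() / log_n                    # Shannon entropy, base n
    R = 1.0 - H                                     # redundancy: max H = 1 in base n
    h = -(pi[:, None] * xlogx(T)).sum() / log_n     # entropy rate of the chain
    C = H - h                                       # past-future mutual information
    return H, R, C

T = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
pi = np.array([0.25, 0.5, 0.25])
print(entropy_redundancy_complexity(T, pi))
```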
First passage times to notes resolve tonality and feature a composer {#sec:First_Passage_Times_in_music}
====================================================================
Statistics of entropy, redundancy, and complexity in the discrete time models of classical musical compositions suggests that tonal music generated by the musical dice game (\[music\_transition\_matrix\]) constitutes the well structured data that contain conventional patterns of information. Obviously, some notes might be more “important” than others, with respect to such a structure.
In music theory [@Thomson:1999], the hierarchical pitch relationships are introduced based on a [*tonic*]{} key, a pitch which is the lowest degree of a scale and toward which all other notes in a musical composition gravitate. A successful tonal piece of music gives a listener the feeling that a particular (tonic) chord is the most stable and final. The regular method to establish a tonic through a cadence, a succession of several chords which ends a musical section giving a feeling of closure, may be difficult to apply without listening to the piece.
In a musical dice game, the intuitive vision of musicians describing the tonic triad as the “center of gravity” to which other chords are to lead acquires a quantitative expression. Namely, every pitch in a musical piece is characterized with respect to the entire structure of the Markov chain by its level of accessibility estimated by the first passage time to it [@Blanchard:2008; @Volchenkov:2010], that is the average length of the shortest random path toward the pitch from any other one randomly chosen in the musical score. Analyzing the first passage times in scores of tonal musical compositions, we have found that they can help in resolving the tonality of a piece, as they precisely render the hierarchical relationships between pitches.
The majority of tonal music assumes that notes spaced over several octaves are perceived the same way as if they were played in one octave [@Burns:1999]. Using this assumption of octave equivalency, we can chromatically transpose each musical piece into a single octave getting the $12\times 12$ transition matrices, uniformly for all musical pieces, independently of the actual number of pitches used in composition. Given a stochastic matrix ${\bf T}$ describing transitions between notes within a single octave $\mathcal{O}$, the first passage time to the note $i\in\mathcal{O}$ is computed [@Volchenkov:2010] as the ratio of the diagonal elements, $$\label{music_first_passage_time}
\mathcal{F}_i\,\,=\,\,\left({\bf L}^{\#}\right)_{ii}/
\left({\bf 1}-{\bf LL}^{\#}\right)_{ii},$$ where ${\bf L}$ is the Laplace operator corresponding to the transition matrix ${\bf T}$, and ${\bf L}^{\#}$ is its group generalized inverse. Let us note that in the case of ergodic Markov chains the result (\[music\_first\_passage\_time\]) coincides with the classical one on the first passage times of random walks defined on undirected graphs [@Lovasz:1993].
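A self-contained numerical sketch of (\[music\_first\_passage\_time\]) is given below; the eigenprojector ${\bf 1}-{\bf LL}^{\#}$ is approximated as in the earlier sketch, and the $3\times 3$ matrix is again a toy example rather than a real octave-transposed score.

```python
import numpy as np

def first_passage_times(T, n_squarings=30):
    """F_i = L#_{ii} / (1 - L L#)_{ii} for a row-stochastic matrix T."""
    n = T.shape[0]
    L = np.eye(n) - T
    W = 0.5 * (np.eye(n) + T)          # lazy chain; its powers tend to 1 - L L#
    for _ in range(n_squarings):
        W = W @ W
    L_sharp = np.linalg.inv(L + W) - W
    return np.diag(L_sharp) / np.diag(W)

# Toy 3-note 'octave'; on a real score transposed into one octave, the note
# with the smallest value of F would be read as the tonic key.
T = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
print(first_passage_times(T))
```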
In Fig. \[Fig\_music\_06a\], we have shown two examples of the arrangements of first passage times to notes in one octave, for the E minor scale (left) and the E major and A major scales (right). The basic pitches for the E minor scale are `E`, `F#`, `G`, `A`, `B`, `C`, and `D`. The E major scale is based on `E`, `F#`, `G#`, `A`, `B`, `C#`, and `D#`. Finally, the A major scale consists of `A`, `B`, `C#`, `D`, `E`, `F#`, and `G#`. The values of first passage times are strictly ordered in accordance with their roles in the tone scale of the musical composition. Herewith, the tonic key is characterized by the shortest first passage time (usually ranging from 5 to 7 random steps), and the values of first passage times to other notes collected in ascending order reveal the entire hierarchy of their relationships in the musical scale.
By analyzing the typical magnitudes of first passage times to notes in one octave, we can discover an individual music language of a composer and track out the stylistic influences between different composers. The box plots shown in Fig. \[Fig\_music\_07\] depict the data on first passage times to notes in a number of compositions written by J.S. Bach, A. Berg, F. Chopin, and C.Franck through their five-number summaries: 3/2 the interquartile ranges, the lower quartile, the third quartile, and the median. In tonal music, the magnitudes of first passage times to the notes are completely determined by their roles in the hierarchy of tone scales. Therefore, a low median in the box plot (Fig. \[Fig\_music\_07\]) indicates that the note was often chosen as a tonic key in many compositions.
Correlation and covariance matrices calculated for the medians of the first passage times in a single octave provide the basis for the classification of composers, with respect to their tonality preferences. For our analysis, we have selected only those musical compositions, in which all 12 pitches of the octave were used. The tone scale symmetrical correlation matrix has been calculated for 23 composers, with the elements equal to the Pearson correlation coefficients between the medians of the first passage times. For exploratory visualization of the tone scale correlation matrix, we arranged the “similar” composers contiguously. Following [@Friendly:2002], while ordering the composers, we considered the eigenvectors (principal components) of the correlation matrix associated with its three largest eigenvalues. Since the cosines of angles between the principal components approximate the correlations between the tonal preferences, we used an ordering based on the angular positions of the three major eigenvectors to place the most similar composers contiguously, as it is shown in Fig. \[Fig\_music\_09\].
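A simplified sketch of this seriation step is given below: items are ordered by their angular position in the plane of the two leading principal components (the ordering above uses three eigenvectors; two are used here for simplicity). The $4\times 4$ correlation matrix and the composer labels are placeholders, not the actual data.

```python
import numpy as np

def angular_order(corr, labels):
    """Order items by the angle of their coordinates on the two leading
    principal components of a symmetric correlation matrix (a simplified
    variant of the three-component ordering used in the text)."""
    eigvals, eigvecs = np.linalg.eigh(corr)          # eigenvalues in ascending order
    v1, v2 = eigvecs[:, -1], eigvecs[:, -2]          # two leading components
    angles = np.arctan2(v2, v1)
    return [labels[i] for i in np.argsort(angles)]

# Placeholder 4x4 correlation matrix of median first passage times.
corr = np.array([[1.0, 0.9, 0.1, 0.0],
                 [0.9, 1.0, 0.2, 0.1],
                 [0.1, 0.2, 1.0, 0.8],
                 [0.0, 0.1, 0.8, 1.0]])
print(angular_order(corr, ["composer A", "composer B", "composer C", "composer D"]))
```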
The correlogram presented in Fig. \[Fig\_music\_09\] allows for identifying three groups of composers exhibiting similar preferences in the use of tone scales, as correlations are positive and strong within each tone group while being weak or even negative between the different groups. Smaller subgroups might be seen within the first, largest group (from J. Strauss to G. Fauré), in the upper left corner of the matrix in Fig. \[Fig\_music\_09\]. Most of the composers that appeared in the largest group are traditionally attributed to the Classical Period of music. The strongest positive correlation we observed in the choice of a tonic key (about 97%) is between the compositions of J. Strauss and A. Vivaldi, who led the way to a more individualistic assertion of imaginative music. The tonality statistics in the masterpieces of R. Wagner appear quite similar to them as well. Other subgroups are formed by G.F. Händel and D. Shostakovitch, and by J.S. Bach and R. Schumann. The Classical Period boasted L.V. Beethoven and W.A. Mozart, who led the way further to the Romantic period in classical music. F. Mendelssohn Bartholdi was deeply influenced by the music of J.S. Bach, L.V. Beethoven, and W.A. Mozart, as often reflected by his biographers [@Brown:2003] – not surprisingly, he found his place next to them. Furthermore, the piano concertos of C. Saint-Saëns were known to be strongly influenced by those of W.A. Mozart, and, in turn, appear to have influenced those of S. Rachmaninoff, which receives full exposure in the correlogram (Fig. \[Fig\_music\_09\]). Moreover, we also get evidence of an affinity between I. Stravinsky and A. Berg, F. Schubert, F. Chopin, and G. Fauré, as well as of the strong correlation between the tonality styles of A. Scriabin and F. Liszt. The last group, in the lower right corner of the matrix, is occupied by the Middle and Late Romantic era composers: P. Tchaikovsky, J. Brahms, C. Debussy, and C. Franck. Interestingly, the names of composers that are contiguous in the correlogram (Fig. \[Fig\_music\_09\]) are often found together in musical concerts and on records performed by commercial musicians.
On possible distances in space of musical dice games {#sec:musical_distances}
====================================================
Most music is written for playing on standard keyboards and involves mostly overlapping sets of pitches. Given two pieces modeled by the different musical dice games but defined on the same set of pitches, a natural idea arises to compare their Markov chains in order to estimate their similitude.
Let us note that the Kullback–Leibler divergence [@Cover:1991], a measure of the difference between two probability distributions playing an important role in information theory, cannot help us much with the Markov transition matrices, since the transition probability vectors (rows of the transition matrices) are in general not strictly positive, as many of their components might be equal to zero (even for quite a long composition mapped into one octave), thus prohibiting transitions between some states of the Markov chain. Nevertheless, the Kullback–Leibler divergence can be used in order to compare two different musical dice games defined on the same set of pitches by means of their stationary vectors (\[music\_stationary\_vector\]), $$\label{musical_Kullback_leibler}
D_{KL}\left(\pi^{(1)}\mid\pi^{(2)}\right)\,\,=\,\,
\sum_{i=1}^n \pi^{(1)}_i\log\left(\frac{\pi^{(1)}_i}{\pi^{(2)}_i}\right).$$ The Kullback–Leibler divergence (\[musical\_Kullback\_leibler\]) is neither symmetric, nor satisfies the triangle inequality.
The Euclidean distance between the two transition matrices, ${\bf T}_A$ and ${\bf T}_B,$ is defined by $$\label{music_Euclidean_distance}
\mathcal{D}^{(E)}_{AB}\,\,=\,\,\left\|
{\bf T}_A -{\bf T}_B
\right\|_F$$ where $\|\ldots\|$ is the Frobenius norm induced by the Euclidean inner product for matrices, $
\left({\bf T}_A,{\bf T}_B\right)\,\,=\,\,\mathrm{Tr}
\left({\bf T}_A^\top{\bf T}_B\right),
$ in which ${\bf T}^\top$ denotes the transposed matrix. However, there is no indication that the probabilistic space of musical dice games possesses the structure of a Euclidean space.
Another possibility to compare the musical dice games by their transition matrices is to use the Riemann structure associated to the probability vectors (rows of the transition matrices) instead of (\[music\_Euclidean\_distance\]). Let us discuss such a distance in more detail, for the case of $12\times 12$ transition matrices $\bf T$.
First, let us introduce the new matrix $Q_{ij}=\sqrt{T_{ij}}$ and note that the 12 rows of ${\bf Q}$ define the 12 points on the surface of a unit sphere $S_1^{11}.$ It is obvious that under any change to the musical dice game the rows of the matrix ${\bf Q}$ remain on the surface of $S_1^{11},$ and therefore the difference between a pair of musical compositions chromatically transposed into one octave is always described by a set of 12 rotations $\left\{\omega_1,\ldots,\omega_{12}\right\}\in \mathrm{SO}(12)$ relating the two sets of 12 points on $S_1^{11}.$
Second, given ${\bf Q}_A$ and ${\bf Q}_B,$ representing the two different musical dice games, $A$ and $B,$ on $S_1^{11}$, we can approximate the set of rotations $\left\{\omega_1,\ldots,\omega_{12}\right\}$ by a single one $\Omega_{AB}\in \mathrm{SO}(12)$ that minimizes the Frobenius norm of a possible discrepancy, $$\min_{\Omega\in\mathrm{SO}(12)}\left\|
{\bf Q}_A\Omega_{AB}-{\bf Q}_B
\right\|_F.$$ Indeed, such a minimization is nothing else but the orthogonal Procrustes problem [@Gower:2004], which is equivalent to the singular value decomposition of the matrix ${\bf Q}_A^\top{\bf Q}_B$, $$\label{music_SVD}
{\bf Q}_A^\top{\bf Q}_B\,\,=\,\,
{\bf U}\Sigma{\bf V}^\top,\quad
\Omega_{AB}\,\,=\,\,{\bf U}
{\bf V}^\top.$$ The matrix $\Omega_{AB}\in\mathrm{SO}(12)$ defined in (\[music\_SVD\]) describes the optimal rotation (with respect to the Frobenius norm) translating ${\bf Q}_A$ to ${\bf Q}_B,$ while the transposed matrix, $\Omega_{AB}^\top,$ makes the backward translation, ${\bf Q}_B$ to ${\bf Q}_A$. Obviously, $\Omega_{AB} ={\bf 1}$ if and only if ${\bf Q}_A={\bf Q}_B.$
We define the Riemann distance between the two musical dice games, $A$ and $B$, as the length of a geodesic curve connecting ${\bf Q}_A$ and ${\bf Q}_B$ on the surface of $S_1^{11}$ $$\label{music_geodesic_distance}
\mathcal{D}^{(R)}_{AB}\,\,=\,\,
\left\|\log\,\, \Omega_{AB} \right\|_F.$$ It follows from the definition that the metric (\[music\_geodesic\_distance\]) satisfies the conditions of non-negativity, identity of indiscernibles, symmetry, and subadditivity. The triangle inequality is satisfied, as the length of the geodesic curve on the unit sphere, $\exp\left(t\,\,\log\,\Omega_{AB}\right)$, $0\leq t \leq 1,$ is a strictly positive function.
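The distances discussed in this section can be sketched numerically as follows. The $3\times 3$ matrices are toy examples; `scipy.linalg.logm` is used for the matrix logarithm, and the Kullback–Leibler term assumes strictly positive stationary vectors (which would be obtained as in the earlier sketches).

```python
import numpy as np
from scipy.linalg import logm

def kl_divergence(pi1, pi2):
    """Kullback-Leibler divergence of two stationary vectors (both assumed > 0)."""
    return float(np.sum(pi1 * np.log(pi1 / pi2)))

def euclidean_distance(TA, TB):
    """Frobenius distance between two transition matrices."""
    return float(np.linalg.norm(TA - TB, "fro"))

def riemann_distance(TA, TB):
    """Geodesic distance: Procrustes rotation of Q_A onto Q_B, then ||log Omega||_F.
    (If the optimal orthogonal matrix has determinant -1, the geodesic reading is
    only approximate; this sketch does not treat that case specially.)"""
    QA, QB = np.sqrt(TA), np.sqrt(TB)
    U, _, Vt = np.linalg.svd(QA.T @ QB)
    Omega = U @ Vt
    return float(np.linalg.norm(logm(Omega), "fro"))

TA = np.array([[0.0, 1.0, 0.0], [0.5, 0.0, 0.5], [0.0, 1.0, 0.0]])
TB = np.array([[0.2, 0.8, 0.0], [0.4, 0.2, 0.4], [0.0, 0.7, 0.3]])
print(euclidean_distance(TA, TB), riemann_distance(TA, TB))
```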
Conclusions {#sec:music_conclusions}
===========
We have studied the musical dice games encoded by the transition matrices between pitches of the 804 musical compositions. Contrary to language, where the alphabet is independent of the message, musical compositions might involve different sets of pitches; the number of pitches used to compose a piece grows approximately logarithmically with its size.
Entropy dominates over redundancy in the musical dice games derived from the compositions of classical music. Thus the successful understanding of a musical composition requires much attention and experience from a listener. Statistics of complexity measured by the past-future mutual information suggests that pieces in classical music might contain a few melodic lines translated over the diapason of pitches by chromatic transposition. The hierarchical relations between pitches in tonal music can be rendered by means of first passage time to them, in musical dice games. Correlations between the medians of the first passage times to the notes of one octave provide the basis for the classification of composers, with respect to their tonality preferences. Finally, we have discussed the possible distances in space of musical dice games and introduced the geodesic distance based on the Riemann structure associated to the probability vectors (rows of the transition matrices).
Acknowledgments {#sec:Acknowledgments}
===============
We thank Ph. Blanchard, T. Krüger, and J. Loviscach for the inspiring discussions.
[000]{}
H. Noguchi, “Mozart: Musical game in C K.516f”. Available at [*http://www.asahi-net.or.jp/ rb5h-ngc/e/k516f.htm*]{} (1996).
A.A. Markov. “Extension of the limit theorems of probability theory to a sum of variables connected in a chain”. reprinted in Appendix B of: R. Howard. [*Dynamic Probabilistic Systems*]{}, vol. [**1**]{}: Markov Chains. John Wiley and Sons (1971).
C.E. Shannon, “A mathematical theory of communication,” [*Bell System Technical Journal*]{} [**27**]{}, pp. 379-423; 623-656 (1948).
C.E. Shannon, “Prediction and entropy of printed English”, [*The Bell System Technical Journal*]{} [**30**]{}, p. 50 (1951).
J. Wolfe, “Speech and music, acoustics and coding, and what music might be ’for’.” [*Proc. the 7th International Conference on Music Perception and Cognition*]{}, Sydney, 2002; C. Stevens, D. Burnham, G. McPherson, E. Schubert, J. Renwick (Eds.). Adelaide: Causal Productions.
Ch. Seeger, “Reflections upon a Given Topic: Music in Universal Perspective”. [*Ethnomusicology*]{} [**15**]{}(3), p. 385 (1971).
All music in the Mutopia Project free to download, print out, perform and distribute is available at [*http://www.mutopiaproject.org*]{}. While collecting the data, we have also used the following free resources: [*http://windy.vis.ne.jp/art/englib/berg.htm*]{} (for Alban Berg), [*http://www.classicalmidi.co.uk/page7.htm*]{}, [*http://www.jacksirulnikoff.com/*]{}.
This software is freely available at [ *http://search.cpan.org/ sburke/MIDI-Perl-0.8.*]{}
T. Schürmann, P. Grassberger, “Entropy Estimation of Symbol Sequences”, [*CHAOS*]{} [**6**]{}(3), 414-427 (1996).
T.M. Cover, J.A. Thomas, [*Elements of Information Theory*]{}. London: Wiley (1991).
C.D. Meyer, “The role of the group generalized inverse in the theory of finite Markov chains”, [*SIAM Rev.*]{} [**17**]{}, p. 443 (1975).
C.D. Meyer, “Analysis of finite Markov chains by group inversion techniques. Recent Applications of Generalized Inverses”, in: S.L. Campbell (Ed.), [*Research Notes in Mathematics*]{} [**66**]{}, Pitman, Boston, p. 50 (1982). M.P. Drazin, “Pseudo-inverses in associative rings and semigroups”. [*The American Mathematical Monthly*]{} [**65**]{}, 506-514 (1958).
A. Ben-Israel, Th.N.E. Greville, [*Generalized inverses: theory and applications.*]{} Springer; 2nd edition (2003).
I. Erdélyi, “On the matrix equation $Ax=\lambda Bx$.” [*J. Math. Anal. Appl.*]{} [**17**]{}, 119-132 (1967). S.L. Campbell, C.D. Meyer, [*Generalized Inverses of Linear transformations.*]{} New York: Dover Publications (1979).
N. Lin, [*The Study of Human Communication*]{}, The Bobbs-Merrill Company, Indianapolis, (1973). R. Shaw, [*The dripping faucet as a model chaotic system*]{}, CA Aerial Press, Santa Cruz (1984).
W. Li, “On the relationship Between Complexity and Entropy for Markov Chains and Regular Languages”, [*Complex systems*]{} [**5**]{}, 381 (1991).
W. Thomson, [*Tonality in Music: A General Theory.*]{}, San Marino, Calif.: Everett Books (1999).
Ph. Blanchard, D. Volchenkov, “Intelligibility and first passage times in complex urban networks”, [*Proc. R. Soc. A*]{} [**464**]{}, 2153 (2008).
D. Volchenkov, “Random walks and flights over connected graphs and complex networks”, [*Commun. Nonlinear Sci. Numer. Simulat*]{}, doi:10.1016/j.cnsns.2010.02.016 (2010).
E.M. Burns, [*Intervals, Scales, and Tuning. The Psychology of Music*]{}. Second edition, Deutsch, Diana, ed. San Diego: Academic Press (1999).
Lovász, L. 1993 Random Walks On Graphs: A Survey. [*Bolyai Society Mathematical Studies*]{} [**2**]{}: [*Combinatorics, Paul Erdös is Eighty*]{}, Keszthely (Hungary), p. 1-46.
M. Friendly, “Corrgrams: Exploratory Displays for Correlation Matrices”. [*The American Statistician*]{} [**56**]{}(4), 316 (2002).
C. Brown, [*A Portrait of Mendelssohn*]{}, New Haven and London (2003).
J.C. Gower, G.B. Dijksterhuis, [*Procrustes Problems*]{}. Oxford University Press (2004).
[^1]: E-Mail:[*[email protected]*]{}
---
abstract: 'We give a crystal structure on the set of Gelfand-Tsetlin patterns which parametrize bases for finite-dimensional irreducible representations of the general linear Lie algebra. The crystal data are given in closed form, expressed using tropical polynomial functions of the entries of the patterns. We prove that with this crystal structure, the natural bijection between Gelfand-Tsetlin patterns and semistandard Young tableaux is a crystal isomorphism.'
address:
- 'Department of Mathematics, Iowa State University, Ames, IA-50011, USA'
- 'Department of Mathematics, Iowa State University, Ames, IA-50011, USA'
author:
- 'Jonas T. Hartwig'
- 'O’Neill Kingston'
title: 'Gelfand-Tsetlin Crystals'
---
Introduction
============
The introduction of crystal bases in the 1990’s by Kashiwara ([@Kashiwara1990], [@Kashiwara1991], [@KN1994]) and Lusztig [@lusztig] was a breakthrough in the representation theory of Lie algebras and quantum groups, structures that have become ubiquitous in modern physics, algebra and geometry. A crystal basis can be combinatorially identified with certain directed graphs whose edges are labeled by simple root vectors. At the same time, the vertex set of the graph is a basis for a highest-weight representation of a quantum group (as described, for instance, by Hong and Kang in [@hongkang]), and its combinatorial structure is naturally compatible with taking tensor products, describing branching rules, and much more. Crystal bases for all classical Lie algebras can be realized in terms of Kashiwara-Nakashima tableaux. The semistandard Young tableaux that they generalize have been used extensively in representation theory over the last century.
Another long-running undercurrent to this area of study comes from a more purely combinatorial perspective. When the utility of applying Young tableaux to problems in representation theory became clear, many more generalizations were made than those discussed above. The era of computers accelerated the growth of interest in this area, and today there are robust communities of mathematicians whose work is focused on coding efficient representations (in the colloquial sense) of these structures, often in Python and Sage, so that these may then be used to attack problems in algebra, combinatorics, geometry and beyond. With a high level of research output surrounding tableaux and tableau-like structures, an active sub-discipline is the effort to identify when two seemingly-different types of structure are in fact equivalent. As described by Sheats in [@sheats], these enumeration problems can frequently be difficult, but they can also illuminate surprising connections between areas of mathematics that appeared to have little in common.
It is well-known that semistandard Young tableaux (SSYT) are in bijection with Gelfand-Tsetlin patterns (GTPs) ([@geltse1950]), arrays of numbers that were introduced using branching rules for the general linear Lie algebra and which have found many subsequent applications. We therefore have two different combinatorial descriptions of the same algebraic objects, one in terms of tableaux and the other in terms of patterns.
Any set in bijection with the vertices of a crystal graph itself acquires the structure of a crystal graph by requiring that the bijection is a crystal isomorphism. This trivial fact makes it obvious that the set of Gelfand-Tsetlin patterns has the structure of a crystal. However to the best of our knowledge this crystal structure has not been made explicit. That is the goal of this paper.
A related result from [@WatYam2019] expresses the string length functions in terms of the entries of the Gelfand-Tsetlin patterns, and this data is enough to theoretically determine the other crystal data, see Remark \[rem:watyam\].
Our formulas (see Definition \[def:GT-crystal-data\]) for the crystal data (raising and lowering operators, weight and string length functions) defined on Gelfand-Tsetlin patterns have several advantages over the description in terms of semistandard Young tableaux. First of all, they are given by simple arithmetic expressions involving taking the maximum over a set of integers computed directly from the entries of the pattern at hand. There is no need to apply a row- or column-reading function, and subsequently apply a cumbersome signature rule to the result, before the actual operation can be performed.
Secondly, particularly interesting is that the expressions are in fact given by tropical (Laurent) polynomials in the pattern entries. It is already well-known that crystal basis theory has deep connections to tropical mathematics, see e.g. [@bumpschilling]. It would be interesting to better understand the appearance of tropical polynomials in these expressions.
And lastly, it may be conjectured that similar tropical expressions can be written down in other types, in particular for representations of the symplectic and orthogonal Lie algebras, for which analogs of Gelfand-Tsetlin bases exist [@geltse1950b][@zelobenko1973][@molev]. Moreover, we expect that analogs of these formulas can be used to construct crystal bases for representations of some associative algebras analogous to enveloping algebras of Lie algebras, such as certain Galois orders [@FutOvs2010].
This paper is organized as follows. In Section \[sec:preliminaries\] we fix some notation and terminology and recall some well-known results regarding semistandard Young tableaux and Gelfand-Tsetlin patterns. In Section \[sec:crystal-structure\] we state and prove the first theorem, in which we give explicit formulae for the crystal operators on ${\mathrm{GTP}}(n, \lambda)$ and prove that these equip the set with a crystal basis structure. In Section \[sec:crystal-isomorphism\] we prove the second theorem, stating that the natural bijection between the sets ${\mathrm{SSYT}}(n,\lambda)$ and ${\mathrm{GTP}}(n, \lambda)$ is an isomorphism of crystals. The proof relies on the fact that, by the nature of the bijection, we may obtain various useful combinatorial data about a given semistandard tableau by examining certain sums and differences of the corresponding pattern entries, see Lemma \[lem:main2\]. In Section \[sec:example\] we give an example to illustrate how to apply the formulas.
Acknowledgements {#acknowledgements .unnumbered}
================
The first author gratefully acknowledges support from Simons Collaboration Grant for Mathematicians, award number 637600. This work was inspired by a question of G. Benkart.
Preliminaries {#sec:preliminaries}
=============
In this section we recall some well-known definitions and results from the literature that we will use.
Crystals
--------
We follow [@hongkang]. Let $X=\left(A,\, \Pi=\left\{{\alpha}_i\right\}_{i\in I},\, \Pi^\vee=\left\{{\alpha}_i^\vee\right\}_{i\in I},\, P,\, P^\vee\right)$ be a Cartan datum with finite index set $I$.
The only Cartan datum we will use in this paper is $A_{n-1}$, where $I=\{1,2,\ldots,n-1\}$, the weight lattice $P$ is the abelian group generated by $\{{\boldsymbol{e}}_i\}_{i=1}^n$ subject to ${\boldsymbol{e}}_1+{\boldsymbol{e}}_2+\cdots+{\boldsymbol{e}}_n=0$, ${\alpha}_i={\boldsymbol{e}}_i-{\boldsymbol{e}}_{i+1}$, and ${\alpha}_j^\vee\in\operatorname{Hom}_{\mathbb{Z}}(P,{\mathbb{Z}})$ are given by $\langle {\boldsymbol{e}}_i,{\alpha}_j^\vee\rangle = \delta_{ij}-\delta_{i,j+1}$.
A *crystal* of type $X$ is a non-empty set $\mathcal{B}$ together with maps $$\begin{aligned}
{\mathrm{wt}}: \mathcal{B} &\rightarrow P, \\
{{\widetilde}{e}}_i, {{\widetilde}{f}}_i: \mathcal{B} &\rightarrow \mathcal{B} \sqcup \{ 0 \}, \quad i\in I,\\
\varepsilon_i, \varphi_i: \mathcal{B} &\rightarrow \mathbb{Z} \sqcup \{ -\infty \},\quad i\in I, \end{aligned}$$ satisfying for all $b,b'\in\mathcal{B}$ and $i\in I$:
1. ${{\widetilde}{f}}_i(b)=b'$ if and only if $b={{\widetilde}{e}}_i(b')$, in which case $${\mathrm{wt}}(b') = {\mathrm{wt}}(b) - \alpha_i, \qquad
{\varepsilon}_i(b') = {\varepsilon}_i(b) + 1, \qquad
\varphi_i(b') = \varphi_i(b) - 1$$ and we write $$b\overset{i}{\longrightarrow} b'.$$
2. $\varphi_i(b) = {\varepsilon}_i(b) + \langle {\mathrm{wt}}(b), \alpha_i^\vee \rangle$. In particular, $\varphi_i(b) = -\infty$ if and only if ${\varepsilon}_i(b) = -\infty$.
3. If $\varphi_i(b) = {\varepsilon}_i(b)=-\infty$, then ${{\widetilde}{e}}_i(b) = {{\widetilde}{f}}_i(b) = 0$.
The cardinality of $\mathcal{B}$ is the *degree* of the crystal, ${\operatorname}{wt}$ is called the *weight map*, ${{\widetilde}{e}}_i$ and ${{\widetilde}{f}}_i$ are called *crystal operators*, and $\varphi_i$ and $\varepsilon_i$ are called *string length functions*.
Let $\mathcal{B}_1$ and $\mathcal{B}_2$ be crystals of type $X$. A *morphism* $\Psi: \mathcal{B}_1 \rightarrow \mathcal{B}_2$ is a map $\Psi:\mathcal{B}_1\sqcup\{0\}\to \mathcal{B}_2\sqcup\{0\}$ such that $\Psi(0)=0$ and for all $b,b'\in\Psi^{-1}(\mathcal{B}_2)$ and $i\in I$:
1. ${\operatorname}{wt}(\Psi(b))={\operatorname}{wt}(b),\quad {\varepsilon}_i(\Psi(b))={\varepsilon}_i(b),\quad \varphi_i(\Psi(b))=\varphi_i(b)$.
2. If $b\overset{i}{\longrightarrow} b'$ then $\Psi(b)\overset{i}{\longrightarrow}\Psi(b')$.
If moreover $\Psi$ is bijective as a function $\mathcal{B}_1\sqcup\{0\}\to\mathcal{B}_2\sqcup\{0\}$, then $\Psi$ is an *isomorphism*.
Partitions, Young diagrams and semistandard Young tableaux
----------------------------------------------------------
#### 1
For a fixed integer $N \geq 0$, a *partition* of $N$ is a sequence $\lambda = (\lambda_1, \lambda_2, \dots)$ of integers $\lambda_i$ such that $\lambda_1 \geq \lambda_2 \geq \dots \geq 0$ and $|\lambda|:=\sum_{i\geq 1} \lambda_i = N$. The *length* of a partition $\lambda$, $\ell(\lambda)$, is equal to the highest index $i$ for which $\lambda_i > 0$. For $1 \leq i \leq \ell(\lambda)$, the $\lambda_i$ are called the *parts* of $\lambda$. Let $\mathcal{P}(N)$ be the set of partitions of $N$ and put $\mathcal{P} = \bigcup_{N=0}^\infty \mathcal{P}(N)$.
#### 2
If $\lambda$ is a partition of $N$, the *Young diagram* ${\mathrm{YD}}(\lambda)$ is a left-justified collection of boxes where the $i$th row has $\lambda_i$ boxes. The *shape* of a Young diagram is its partition $\lambda$. A *tableau* is a Young diagram whose boxes are filled with elements from an alphabet. $${\mathrm{YD}}((3,2,2,1)) = \text{ }
\ydiagram{3,2,2,1}$$
#### 3
A *subdiagram* is a Young diagram ${\mathrm{YD}}(\lambda')$ that is contained in Young diagram ${\mathrm{YD}}(\lambda)$. A *skew diagram* ${\mathrm{YD}}(\lambda/\lambda')$ is the diagram obtained by subtracting a subdiagram $\lambda'$ from $\lambda$. $${\mathrm{YD}}((4,2,2,1)/(2,2)) = \text{ }
\begin{ytableau}
\none & \none & & \\
\none & \none \\
& \\
\\
\end{ytableau}$$
#### 4
A *semistandard Young tableau* (SSYT) of shape $\lambda$ and rank $n$ is a tableau of shape $\lambda$ where the boxes are filled with entries from the alphabet $[n]=\{1,2,\dots, n\}$ so that each row is weakly increasing from left to right and each column is strictly increasing from top to bottom. $$T = \text{ }
\begin{ytableau}
1 & 1 & 3 \\
2 & 3 \\
3 & 4 \\
4
\end{ytableau}$$ Let ${\mathrm{SSYT}}(n,\lambda)$ denote the set of SSYTs of rank $n$ and shape $\lambda$.
Crystal structure on ${\mathrm{SSYT}}(n,\lambda)$
-------------------------------------------------
We recall the crystal structure on semistandard Young tableaux. For details, see e.g. [@hongkang],[@bumpschilling].
Let $n$ be a positive integer and $\lambda$ be a partition of length at most $n$. Let $T\in{\mathrm{SSYT}}(n,\lambda)$. The *far-eastern reading* of $T$, denoted $\mathrm{FarEast}(T)$ is the $|{\lambda}|$-tuple of letters read off from $T$, reading columns from right to left and each column top to bottom. The map $\mathrm{FarEast}:{\mathrm{SSYT}}(n,{\lambda})\to\{1,2,\ldots,n\}^{|{\lambda}|}$ is injective and we denote the inverse map by $\mathrm{FarEast}^{-1}$, defined on the image of $\mathrm{FarEast}$.
For $i\in\{1,2,\ldots,n-1\}$, the *$i$-bracketing* of a tuple of letters $x=(x_1,x_2,\ldots,x_{|{\lambda}|})$, denoted in this paper by $[x]_i$ is obtained by crossing out the right-most $i$ having at least one $i+1$ to the right of it, in which case we also cross out the leftmost of those $i+1$’s, and repeating this recursively (ignoring crossed out entries) until $(i,i+1)$ is not a subsequence. The crossed out $i$’s and $(i+1)$’s in $x$ are said to be ($i$-)*bracketed*. Any remaining $i$’s or $(i+1)$’s in $x$ are ($i$-)*unbracketed*.
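As an illustration (not part of the original text), here is a small Python sketch of the far-eastern reading and of the $i$-bracketing just described. A tableau is assumed to be given as a list of rows of integers, and all function names are ours.

```python
def far_east(T):
    """Far-eastern reading of a tableau T (a list of rows): columns are read
    from right to left, each column from top to bottom."""
    ncols = len(T[0])
    return [row[c] for c in reversed(range(ncols)) for row in T if c < len(row)]

def i_bracketing(word, i):
    """Return a list of booleans marking the i-bracketed (crossed out) letters,
    following the recursive rule described in the text."""
    crossed = [False] * len(word)
    while True:
        pos = None
        for p in reversed(range(len(word))):    # rightmost uncrossed i ...
            if word[p] == i and not crossed[p] and any(
                    word[q] == i + 1 and not crossed[q] for q in range(p + 1, len(word))):
                pos = p                          # ... having an uncrossed i+1 to its right
                break
        if pos is None:
            return crossed
        crossed[pos] = True
        q = next(q for q in range(pos + 1, len(word))
                 if word[q] == i + 1 and not crossed[q])
        crossed[q] = True                        # leftmost such i+1

# Reproduces the worked example below:
# far_east([[1, 2, 2, 2, 3], [3, 3], [4, 4]]) == [3, 2, 2, 2, 3, 4, 1, 3, 4]
```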
Let $n$ be a positive integer and ${\lambda}$ a partition with $n$ or fewer parts. Let $P$ be the weight lattice of type $A_{n-1}$. Define for $i\in\{1,2,\ldots,n-1\}$ and $T\in{\mathrm{SSYT}}(n,{\lambda})$: $$\begin{aligned}
{\mathrm{wt}}(T)&=N_1(T)\boldsymbol{e}_1+N_2(T)\boldsymbol{e}_2+\cdots+N_n(T)\boldsymbol{e}_n,\text{ where $N_i(T)=\#$boxes in $T$ containing $i$,} \\
\varphi_i(T)&=\text{number of $i$-unbracketed $i$'s in $[\mathrm{FarEast}(T)]_i$,}\\
\varepsilon_i(T)&=\text{number of $i$-unbracketed $(i+1)$'s in $[\mathrm{FarEast}(T)]_i$,}\\
\widetilde{f}_i(T) &=
\begin{cases}
\mathrm{FarEast}^{-1}\big(\text{change leftmost $i$ in $[\mathrm{FarEast}(T)]_i$ to $i+1$}\big), &
\text{if $\varphi_i(T)>0$,}
\\
0,& \text{otherwise,}
\end{cases}\\
\widetilde{e}_i(T) &=
\begin{cases}
\mathrm{FarEast}^{-1}\big(\text{change rightmost $i+1$ in $[\mathrm{FarEast}(T)]_i$ to $i$}\big), &
\text{if ${\varepsilon}_i(T)>0$,}\\
0,& \text{otherwise.}
\end{cases}\end{aligned}$$
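Continuing the illustration, the following hedged Python sketch (ours) computes $\varphi_i(T)$ and $\widetilde{f}_i(T)$ from the $i$-bracketing of the far-eastern word; it reuses the `i_bracketing` helper from the previous sketch. The pair $(\varepsilon_i,\widetilde{e}_i)$ is obtained the same way, working with the unbracketed $(i+1)$'s instead.

```python
def far_east_cells(T):
    """Cells of T in far-eastern reading order, as (row, column) pairs."""
    ncols = len(T[0])
    return [(r, c) for c in reversed(range(ncols))
            for r in range(len(T)) if c < len(T[r])]

def phi_and_f(T, i):
    """Return (phi_i(T), f~_i(T)); the second entry is None when phi_i(T) = 0."""
    cells = far_east_cells(T)
    word = [T[r][c] for (r, c) in cells]
    crossed = i_bracketing(word, i)              # from the previous sketch
    free = [p for p, x in enumerate(word) if x == i and not crossed[p]]
    if not free:
        return 0, None
    r, c = cells[free[0]]                        # leftmost unbracketed i in the word
    new_T = [row[:] for row in T]
    new_T[r][c] = i + 1
    return len(free), new_T
```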
Let $n$ be a positive integer and $\lambda$ be a partition with $n$ or fewer parts. The set ${\mathrm{SSYT}}(n,{\lambda})$ equipped with the above maps ${\mathrm{wt}},\varphi_i,\varepsilon_i,\widetilde{f}_i,\widetilde{e}_i$ constitutes a crystal of type $A_{n-1}$.
Let $n=4$ and ${\lambda}=(5,2,2)$ and consider $$T =
\begin{ytableau}
1 & 2 & 2 & 2 & 3 \\
3 & 3 \\
4 & 4
\end{ytableau}$$ Let us compute $\varphi_2(T)$ and ${\widetilde}{f}_2(T)$. The far-eastern reading of $T$ is $$\mathrm{FarEast}(T)=(3,2,2,2,3,4,1,3,4).$$ To find the $2$-bracketing of this, first cross out the rightmost $2$ having a $3$ somewhere to its right, and also cross out the leftmost of those $3$’s: $(3,2,2,\xcancel{2},\xcancel{3},4,1,3,4)$. Repeating this step once more, ignoring crossed out entries, we obtain $(3,2,\xcancel{2},\xcancel{2},\xcancel{3},4,1,\xcancel{3},4)$ at which point the $2$-bracketing is finished, as $(2,3)$ is not a subsequence anymore (ignoring crossed out entries). So $$[(3,2,2,2,3,4,1,3,4)]_2=(3,2,\xcancel{2},\xcancel{2},\xcancel{3},4,1,\xcancel{3},4).$$ Now we can compute: $$\varphi_2(T)
=\text{number of $2$-unbracketed $2$'s in
$(3,2,\xcancel{2},\xcancel{2},\xcancel{3},4,1,\xcancel{3},4)$}=1$$
$$\begin{aligned}
\widetilde{f}_2(T) &= \mathrm{FarEast}^{-1}\big(\text{change leftmost $2$ in $(3,2,\xcancel{2},\xcancel{2},\xcancel{3},4,1,\xcancel{3},4)$
to $3$}\big) \\
&= \mathrm{FarEast}^{-1}\big( (3,3,\xcancel{2},\xcancel{2},\xcancel{3},4,1,\xcancel{3},4)\big)
=\begin{ytableau}
1 & 2 & 2 & 3 & 3 \\
3 & 3 \\
4 & 4
\end{ytableau}\end{aligned}$$
Note that when applying $\mathrm{FarEast}^{-1}$ we ignore the bracketing.
\[def:tbrac\] The $i$-bracketing can be described directly on the tableaux $T$ as follows. Go through all the columns of $T$ from left to right and do the following. If the column contains an $i$ and there is a thus-far-unbracketed $i+1$ in the same column, or in a column further to the left, then cross out that $i$ along with the rightmost of those $i+1$’s. Then $\varphi_i(T)$ is the number of $i$-unbracketed $i$’s in $T$; ${\varepsilon}_i(T)$ is the number of $i$-unbracketed $i+1$’s in $T$; ${{\widetilde}{f}}_i(T)$ is obtained from $T$ by changing the rightmost $i$-unbracketed $i$ in $T$ to $i+1$; ${{\widetilde}{e}}_i(T)$ is obtained from $T$ by changing the leftmost $i$-unbracketed $i+1$ in $T$ to $i$.
Gelfand-Tsetlin patterns
------------------------
Let $n$ be a positive integers and $\lambda$ be a partition with $n$ or fewer parts. For us, a *Gelfand-Tsetlin pattern* (GTP) with $n$ rows and top row $\lambda$ is a triangular array of integers $$\Lambda =
\begin{Bmatrix}
\lambda^{(n)}_1 & & \lambda^{(n)}_2 & & \lambda^{(n)}_3 & & \cdots & & \lambda^{(n)}_n \\
& \lambda^{(n-1)}_1 & & \lambda^{(n-1)}_2 & & \cdots & & \lambda^{(n-1)}_{n-1} \\
& & \lambda^{(n-2)}_1 & & \cdots & & \lambda^{(n-2)}_{n-2} \\
& & & \ddots & & \adots \\
& & & & \lambda^{(1)}_{1}
\end{Bmatrix}$$ where
1. $\lambda^{(i)}_j \in \mathbb{Z}_{\geq 0}$ for $1\le j\le i\le n$,
2. $\lambda^{(i+1)}_j \geq \lambda^{(i)}_j \geq \lambda^{(i+1)}_{j+1}$ for $1\le j\le i\le n-1$,
3. $\lambda^{(n)}_i = \lambda_i$ for $1\le i\le n$.
Condition (ii) is known as the *interleaving condition*. By convention we set $\lambda^{(i)}_j=0$ if not $1\le j\le i\le n$. Let ${\mathrm{GTP}}(n,\lambda)$ denote the set of Gelfand-Tsetlin patterns with $n$ rows and top row $\lambda$.
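For concreteness, here is a small Python check of the defining conditions (ours, not from the text); a pattern is represented as a list of rows, with the top row $\lambda^{(n)}$ first, exactly as printed above.

```python
def is_gtp(L):
    """Check conditions (i)-(iii) for a triangular array L = [lambda^{(n)}, ..., lambda^{(1)}]
    (top row first) against its own top row."""
    n = len(L)
    if any(len(L[n - i]) != i for i in range(1, n + 1)):
        return False                                    # not triangular
    if any(x < 0 for row in L for x in row):
        return False                                    # condition (i)
    for i in range(1, n):                               # interleaving of rows i+1 and i
        upper, lower = L[n - i - 1], L[n - i]
        if not all(upper[j] >= lower[j] >= upper[j + 1] for j in range(i)):
            return False
    return True
```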
Bijection between tableaux and patterns {#sec:natural-bijection}
---------------------------------------
There is a well-known and natural bijection $\mathcal{T}$ between ${\mathrm{GTP}}(n,\lambda)$ and ${\mathrm{SSYT}}(n,\lambda)$. Given a Gelfand-Tsetlin pattern $\Lambda \in {\mathrm{GTP}}(n,\lambda)$, we obtain a tableau $T=\mathcal{T}({\Lambda}) \in {\mathrm{SSYT}}(n,\lambda)$ by inserting $i$ into the squares of the skew diagram ${\mathrm{YD}}(\lambda^{(i)} / \lambda^{(i-1)})$, for $i=1,2,\dots,n$, where by convention $\lambda^{(0)}$ is the empty partition. Conversely, given $T \in {\mathrm{SSYT}}(n,\lambda)$, we obtain a pattern $\Lambda=\mathcal{T}^{-1}(T) \in {\mathrm{GTP}}(n,\lambda)$ as follows. Define the top row $\lambda^{(n)}$ of $\Lambda$ to be the shape of $T$. That is, ${\lambda}^{(n)}=\lambda$. Then, delete all boxes from $T$ containing the symbol $n$ to obtain tableau $T^{(n-1)}$ and define the next row ${\lambda}^{(n-1)}$ of $\Lambda$ to be the shape of $T^{(n-1)}$. Continue in this fashion until all the boxes of $T$ have been deleted. Then all the rows of $\Lambda$ have been specified.
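The bijection $\mathcal{T}$ and its inverse are easy to implement. The following Python sketch (ours, not from the text) uses the same representation of a pattern as a list of rows, top row first, and returns a tableau as a list of rows.

```python
def gtp_to_ssyt(L):
    """The bijection T: fill the skew shape lambda^{(i)}/lambda^{(i-1)} with i's."""
    n = len(L)
    T, prev = [[] for _ in range(n)], [0] * n
    for i in range(1, n + 1):
        lam = list(L[n - i]) + [0] * (n - i)        # lambda^{(i)}, padded with zeros
        for k in range(n):
            T[k].extend([i] * (lam[k] - prev[k]))
        prev = lam
    return [row for row in T if row]

def ssyt_to_gtp(T, n):
    """Inverse bijection: lambda^{(i)} is the shape left after deleting all
    letters larger than i; rows are returned with the top row lambda^{(n)} first."""
    L = []
    for i in range(n, 0, -1):
        shape = [sum(1 for x in row if x <= i) for row in T]
        L.append((shape + [0] * i)[:i])
    return L

# For example, for the pattern of Section [sec:example]:
# gtp_to_ssyt([[3, 1, 0], [3, 1], [2]]) == [[1, 1, 2], [2]]
```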
Crystal structure on Gelfand-Tsetlin patterns {#sec:crystal-structure}
=============================================
In this section we prove the first main result of the paper. We equip the set of Gelfand-Tsetlin patterns with explicit crystal data, and prove that this makes ${\mathrm{GTP}}(n,{\lambda})$ into a crystal of type $A_{n-1}$. To define the crystal data we will need some notation.
$$\begin{matrix}
& {\lambda}^{(i+1)}_j & & {\lambda}^{(i+1)}_{j+1} & \\
{\lambda}^{(i)}_{j-1} & & {\lambda}^{(i)}_j & & {\lambda}^{(i)}_{j+1}\\
& {\lambda}^{(i-1)}_{j-1} & & {\lambda}^{(i-1)}_j &
\end{matrix}$$
We introduce the following *diamond numbers*, which are alternating sums around a diamond shape in ${\Lambda}$ starting at ${\lambda}^{(i)}_j$:
\[eq:diamonds\] $$\begin{aligned}
a^{(i)}_j(\Lambda) &:= \lambda^{(i)}_j - \lambda^{(i-1)}_j + \lambda^{(i)}_{j+1} - \lambda^{(i+1)}_{j+1}, \quad 0\le j\le i, \\
b^{(i)}_j(\Lambda) &:= - \lambda^{(i)}_j + \lambda^{(i-1)}_{j-1} - \lambda^{(i)}_{j-1} + \lambda^{(i+1)}_j, \quad 1\le j\le i+1,\end{aligned}$$
where by convention ${\lambda}^{(i)}_j=0$ if not $1\le j\le i$. Note that, $$b_j^{(i)}(\Lambda)=-a_{j-1}^{(i)}({\Lambda}),\quad 1\le j\le i+1,$$ and, by the interleaving conditions, $$\label{eq:ab-ineqs}
a_0^{(i)}({\Lambda})\le 0,\quad b^{(i)}_{i+1}({\Lambda})\le 0.$$ For notational convenience we put $$a_j^{(i)}(\Lambda)=0 \;\; \forall j>i, \qquad
b_j^{(i)}(\Lambda)=0 \;\; \forall j>i+1.$$ Next, define these *diamond-sums*: $$\begin{aligned}
\label{eq:diamond-sumA}
A^{(i)}_j({\Lambda}) &:= \sum_{k=j}^{i} a_k^{(i)}(\Lambda),\quad 0\le j\le i, \\
\label{eq:diamond-sumB}
B^{(i)}_j({\Lambda}) &:= \sum_{k=1}^j b_k^{(i)}(\Lambda), \quad 1\le j\le i+1.\end{aligned}$$ Note that the inequalities (\[eq:ab-ineqs\]) imply $$A_0^{(i)}({\Lambda})\le A_1^{(i)}({\Lambda}),\quad B_{i+1}^{(i)}({\Lambda})\le B^{(i)}_i({\Lambda}).$$ The following relation will be useful: $$\label{eq:AB-relation}
A_0^{(i)}({\Lambda})=A_j^{(i)}({\Lambda})-B_j^{(i)}({\Lambda})=-B_{i+1}^{(i)}({\Lambda})\qquad
\forall j\in\{0,1,\ldots,i+1\}.$$
\[rem:watyam\] In [@WatYam2019] the authors give the formula for the $\mathbf{i}_A$-string datum, where $\mathbf{i}_A$ is the reduced long word $(1,2,1,3,2,1,\ldots,n-1,n-2,\ldots,1)$. Converting their notation (their $a_{ij}$ is our ${\lambda}^{(n+i-j)}_i$) gives the formula $d_{i,j}(\Lambda)=\sum_{m=1}^{j-i}({\lambda}^{(j)}_m-{\lambda}^{(j-1)}_m),\qquad 1\le i<j\le n$.
If ${\Lambda}\in{\mathrm{GTP}}(n,{\lambda})$, let ${\Lambda}\pm\Delta^{(i)}_j$ denote the array of integers obtained from ${\Lambda}$ by replacing ${\lambda}^{(i)}_j$ by ${\lambda}^{(i)}_j\pm 1$. (In general, the resulting array is not a valid Gelfand-Tsetlin pattern.)
\[def:GT-crystal-data\] Let $P$ be the weight lattice of type $A_{n-1}$. Put $\omega_i=\sum_{j=1}^i\boldsymbol{e}_j$. Define for any $\Lambda\in{\mathrm{GTP}}(n, \lambda)$ and $i\in\{1,2,\ldots,n-1\}$: $$\begin{aligned}
{\mathrm{wt}}({\Lambda}) &
=\sum_{j=1}^n \big(\sum_{k=1}^j \lambda^{(j)}_k - \sum_{k=1}^{j-1} {\lambda}^{(j-1)}_k\big)\boldsymbol{e}_j \label{eq:wt} \\
&=A_0^{(1)}({\Lambda})\omega_1+A_0^{(2)}({\Lambda})\omega_2+\cdots+A_0^{(n)}({\Lambda})\omega_n \nonumber \\
&=-\big(B_{2}^{(1)}({\Lambda})\omega_1+B_{3}^{(2)}({\Lambda})\omega_2+\cdots+B_{n+1}^{(n)}({\Lambda})\omega_n\big); \nonumber \\
\varphi_i(\Lambda) &= \max \big\{ A_1^{(i)}(\Lambda), A_2^{(i)}(\Lambda),\ldots, A_i^{(i)}(\Lambda) \big\};\\
\varepsilon_i(\Lambda) &= \max \big\{ B_1^{(i)}(\Lambda), B_2^{(i)}(\Lambda), \dots, B_i^{(i)}(\Lambda) \big\};\\
\widetilde{f}_i (\Lambda) &= \begin{cases}
\Lambda - \Delta^{(i)}_\ell, &\text{if $\varphi_i({\Lambda})>0$,} \\
0, &\text{if $\varphi_i(\Lambda)=0$,}
\end{cases}\\
& \quad \text{where }\ell = \max\big\{j\in\{1,2,\ldots,i\}\mid A^{(i)}_j(\Lambda)=\varphi_i(\Lambda) \big\};
\nonumber \\
\widetilde{e}_i (\Lambda) &=
\begin{cases}
\Lambda + \Delta^{(i)}_\ell, &\text{if $\varepsilon_i({\Lambda})>0$,}\\
0, &\text{if $\varepsilon_i(\Lambda)=0$,}
\end{cases} \\
& \quad \text{where } \ell = \min\big\{j\in\{1,2,\ldots,i\}\mid B^{(i)}_j(\Lambda)=\varepsilon_i(\Lambda) \big\}. \nonumber\end{aligned}$$
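As an illustration of Definition \[def:GT-crystal-data\] (not part of the original text), here is a Python sketch that evaluates the diamond-sums and the crystal data directly from the entries of a pattern. We again assume a pattern is stored as a list of rows with the top row $\lambda^{(n)}$ first, and we let `None` play the role of $0$ for the crystal operators; all function names are ours.

```python
def _entry(L, i, j):
    """lambda^{(i)}_j, with the convention that it is 0 unless 1 <= j <= i <= n."""
    n = len(L)
    return L[n - i][j - 1] if 1 <= j <= i <= n else 0

def A(L, i, j):
    """Diamond-sum A^{(i)}_j = sum_{k=j}^{i} a^{(i)}_k."""
    return sum(_entry(L, i, k) - _entry(L, i - 1, k)
               + _entry(L, i, k + 1) - _entry(L, i + 1, k + 1)
               for k in range(j, i + 1))

def B(L, i, j):
    """Diamond-sum B^{(i)}_j = sum_{k=1}^{j} b^{(i)}_k."""
    return sum(-_entry(L, i, k) + _entry(L, i - 1, k - 1)
               - _entry(L, i, k - 1) + _entry(L, i + 1, k)
               for k in range(1, j + 1))

def wt(L):
    """Coefficients of wt(L) on e_1, ..., e_n."""
    n = len(L)
    sums = [0] + [sum(L[n - i]) for i in range(1, n + 1)]   # sums[i] = |lambda^{(i)}|
    return [sums[j] - sums[j - 1] for j in range(1, n + 1)]

def phi(L, i):
    return max(A(L, i, j) for j in range(1, i + 1))

def epsilon(L, i):
    return max(B(L, i, j) for j in range(1, i + 1))

def f_tilde(L, i):
    """Decrement lambda^{(i)}_l for the largest l with A^{(i)}_l = phi_i(L)."""
    if phi(L, i) == 0:
        return None
    l = max(j for j in range(1, i + 1) if A(L, i, j) == phi(L, i))
    new_L = [list(row) for row in L]
    new_L[len(L) - i][l - 1] -= 1
    return new_L

def e_tilde(L, i):
    """Increment lambda^{(i)}_l for the smallest l with B^{(i)}_l = epsilon_i(L)."""
    if epsilon(L, i) == 0:
        return None
    l = min(j for j in range(1, i + 1) if B(L, i, j) == epsilon(L, i))
    new_L = [list(row) for row in L]
    new_L[len(L) - i][l - 1] += 1
    return new_L
```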
Let $n$ be any positive integer and ${\lambda}$ be a partition with $n$ or fewer parts. Then the set ${\mathrm{GTP}}(n,{\lambda})$ of all Gelfand-Tsetlin patterns with $n$ rows and top row ${\lambda}$, equipped with the crystal data ${\mathrm{wt}},{{\widetilde}{f}}_i,{{\widetilde}{e}}_i,\varphi_i,\varepsilon_i$ as in Definition \[def:GT-crystal-data\], is a crystal of type $A_{n-1}$.
Let $i\in\{1,2,\ldots,n-1\}$ be arbitrary. First we show that if ${\Lambda}\in{\mathrm{GTP}}(n,{\lambda})$ is such that $\varphi_i({\Lambda})>0$, then ${{\widetilde}{f}}_i(\Lambda)$ is a valid Gelfand-Tsetlin pattern. Let $\ell=\max\{j\in\{1,2,\ldots,i\}\mid A_j^{(i)}({\Lambda})=\varphi_i({\Lambda})\}$. Then by definition, ${{\widetilde}{f}}_i({\Lambda})=\Lambda-\Delta^{(i)}_\ell$ which has integer entries, the top row still equals ${\lambda}$ (since $i<n$), and the interleaving conditions hold everywhere except possibly near the ${\lambda}_\ell^{(i)}$ entry. More precisely, we must show that the following inequalities hold:
$$\label{eq:crystal-pf-inequalities}
\begin{matrix}
{\lambda}_\ell^{(i+1)} & & & & {\lambda}_{\ell+1}^{(i+1)} \\
& \mathbin{\rotatebox[origin=c]{-45}{$\ge$}} & & \mathbin{\rotatebox[origin=c]{45}{$\ge$}} & \\
& &{\lambda}_\ell^{(i)}-1 & & \\
& \mathbin{\rotatebox[origin=c]{45}{$\ge$}} & & \mathbin{\rotatebox[origin=c]{-45}{$\ge$}} & \\
{\lambda}_{\ell-1}^{(i-1)} & & & & {\lambda}_\ell^{(i-1)}
\end{matrix}$$
We have that $\varphi_i({\Lambda})=A_\ell^{(i)}({\Lambda}) = a_{\ell}^{(i)}({\Lambda})+a_{\ell+1}^{(i)}({\Lambda})+\cdots+a_i^{(i)}({\Lambda})$. Note that $a_\ell^{(i)}({\Lambda})>0$, otherwise $j=\ell+1$ would satisfy $A_j^{(i)}({\Lambda})=\varphi_i({\Lambda})$ (we can’t have $A_{\ell+1}^{(i)}({\Lambda})>\varphi_i({\Lambda})$ by the definitions of $\varphi_i$ and $\ell$) contradicting maximality of $\ell$. Now, $a_\ell^{(i)}>0$ is equivalent to $$\label{eq:crystal-pf-eq0}
{\lambda}_\ell^{(i)}-{\lambda}_\ell^{(i-1)}+{\lambda}_{\ell+1}^{(i)}-{\lambda}_{\ell+1}^{(i+1)}>0$$ by definition of $a_\ell^{(i)}$. Since all entries of ${\Lambda}$ are integers, (\[eq:crystal-pf-eq0\]) implies that $$\label{eq:crystal-pf-eq1}
{\lambda}_\ell^{(i)}-1 \ge {\lambda}_\ell^{(i-1)}+{\lambda}_{\ell+1}^{(i+1)}-{\lambda}_{\ell+1}^{(i)}.$$ By the interleaving condition for ${\Lambda}$, $$\label{eq:crystal-pf-eq2}
{\lambda}_{\ell+1}^{(i+1)}\ge {\lambda}_{\ell+1}^{(i)},
\quad\text{ and }\quad {\lambda}_{\ell}^{(i-1)}\ge {\lambda}_{\ell+1}^{(i)}.$$ Combining (\[eq:crystal-pf-eq1\]) and (\[eq:crystal-pf-eq2\]) we obtain $${\lambda}_{\ell}^{(i)}-1\ge{\lambda}_\ell^{(i-1)}\quad\text{and}\quad {\lambda}_\ell^{(i)}-1\ge {\lambda}_{\ell+1}^{(i+1)}$$ which are the two rightmost inequalities in (\[eq:crystal-pf-inequalities\]). The two leftmost inequalities in (\[eq:crystal-pf-inequalities\]) are trivial since ${\lambda}_\ell^{(i+1)}\ge {\lambda}^{(i)}_\ell$ and ${\lambda}_{\ell-1}^{(i-1)}\ge {\lambda}_\ell^{(i)}$ by the interleaving conditions for ${\Lambda}$. This shows that if $\varphi_i({\Lambda})>0$ then ${{\widetilde}{f}}_i({\Lambda})\in{\mathrm{GTP}}(n,{\lambda})$.
Next, suppose that ${\varepsilon}_i({\Lambda})>0$. We must show that ${{\widetilde}{e}}_i({\Lambda})\in{\mathrm{GTP}}(n,{\lambda})$. We have ${\varepsilon}_i({\Lambda})=\max\{B_1^{(i)}({\Lambda}),\ldots,B_i^{(i)}({\Lambda})\}$. Let $\ell=\min\{j\in\{1,2,\ldots,i\}\mid B_j^{(i)}({\Lambda})={\varepsilon}_i({\Lambda})\}$. Then ${\varepsilon}_i({\Lambda})=B_{\ell}^{(i)}=b_1^{(i)}({\Lambda})+b_2^{(i)}({\Lambda})+\cdots+b_\ell^{(i)}({\Lambda})$. As before, $b_\ell^{(i)}({\Lambda})>0$ by the minimality of $\ell$. So $$\label{eq:crystal-pf-eq5}
-{\lambda}_{\ell-1}^{(i)}+{\lambda}_{\ell-1}^{(i-1)}-{\lambda}_{\ell}^{(i)}+{\lambda}_{\ell}^{(i+1)}>0.$$ We have ${{\widetilde}{e}}_i({\Lambda})={\Lambda}+\Delta^{(i)}_\ell$ and hence we must show that $$\label{eq:crystal-pf-inequalities-e}
\begin{matrix}
{\lambda}_\ell^{(i+1)} & & & & {\lambda}_{\ell+1}^{(i+1)} \\
& \mathbin{\rotatebox[origin=c]{-45}{$\ge$}} & & \mathbin{\rotatebox[origin=c]{45}{$\ge$}} & \\
& &{\lambda}_\ell^{(i)}+1 & & \\
& \mathbin{\rotatebox[origin=c]{45}{$\ge$}} & & \mathbin{\rotatebox[origin=c]{-45}{$\ge$}} & \\
{\lambda}_{\ell-1}^{(i-1)} & & & & {\lambda}_\ell^{(i-1)}
\end{matrix}$$ Analogously to the previous case, the rightmost two inequalities ${\lambda}_\ell^{(i)}+1\ge {\lambda}_{\ell+1}^{(i+1)}$ and ${\lambda}_\ell^{(i)}+1\ge {\lambda}_{\ell}^{(i-1)}$ hold trivially by the interleaving conditions for ${\Lambda}$. By (\[eq:crystal-pf-eq5\]) we have $${\lambda}_\ell^{(i)}+1\le -{\lambda}_{\ell-1}^{(i)}+{\lambda}_{\ell-1}^{(i-1)}+{\lambda}_{\ell}^{(i+1)}$$ which, together with ${\lambda}_{\ell-1}^{(i)}\ge{\lambda}_{\ell-1}^{(i-1)}$ and ${\lambda}_{\ell-1}^{(i)}\ge{\lambda}_{\ell}^{(i+1)}$ (both of which hold by the interleaving condition for ${\Lambda}$), yields the leftmost two inequalities in (\[eq:crystal-pf-inequalities-e\]). This shows that if ${\varepsilon}_i({\Lambda})>0$ then ${{\widetilde}{e}}_i({\Lambda})\in{\mathrm{GTP}}(n,{\lambda})$.
Next we show that property (i) in the definition of crystal holds. First we show that ${{\widetilde}{f}}_i(\Lambda)=\Lambda'$ iff ${{\widetilde}{e}}_i(\Lambda')=\Lambda$. Suppose ${{\widetilde}{f}}_i(\Lambda)=\Lambda'$. In particular $\varphi_i({\Lambda})>0$. Then we need to prove ${{\widetilde}{e}}_i(\Lambda')=\Lambda$. We have $\Lambda'=\Lambda-\Delta^{(i)}_\ell$ where $\ell$ is defined by $$\label{eq:GT-crystal-pf-fe-ell}
\ell=\max\{j\in\{1,2,\ldots,i\}\mid A^{(i)}_j({\Lambda})=\varphi_i({\Lambda})\}.$$ First we show that ${\varepsilon}_i(\Lambda')>0$. By definition, ${\varepsilon}_i(\Lambda')=\max\{B_1^{(i)}({\Lambda}'),B_2^{(i)}({\Lambda}'),\ldots,B_i^{(i)}({\Lambda}')\}$. So it suffices to show that $B_j^{(i)}({\Lambda}')>0$ for some $j$. For $j=\ell$ we have: $$\label{eq:GT-crystal-pf-fe1}
B_\ell^{(i)}({\Lambda}')=b_1^{(i)}({\Lambda}')+b_2^{(i)}({\Lambda}')+\cdots+b_\ell^{(i)}({\Lambda}')=B^{(i)}_\ell({\Lambda})+1$$ since $b_j^{(i)}({\Lambda}')=b_j^{(i)}({\Lambda})$ for $j=1,\ldots,\ell-1$ while $b_{\ell}^{(i)}({\Lambda}')=b_{\ell}^{(i)}({\Lambda})+1$ by definition of $b_j^{(i)}({\Lambda})$. By (\[eq:AB-relation\]), $$\label{eq:GT-crystal-pf-fe2}
B_\ell^{(i)}({\Lambda})+1 = A_\ell^{(i)}({\Lambda})-A_0^{(i)}({\Lambda})+1 = \varphi_i({\Lambda})-A_0^{(i)}({\Lambda})+1$$ By definition of $\varphi_i$ we have $$\label{eq:GT-crystal-pf-fe3}
\varphi_i({\Lambda})-A_0^{(i)}({\Lambda})\ge 0.$$ Now (\[eq:GT-crystal-pf-fe1\]), (\[eq:GT-crystal-pf-fe2\]) and (\[eq:GT-crystal-pf-fe3\]) imply $B_\ell^{(i)}({\Lambda}')>0$, hence ${\varepsilon}_i({\Lambda}')>0$. It remains to be shown that ${{\widetilde}{e}}_i({\Lambda}')={\Lambda}$. Since ${\Lambda}={\Lambda}'+\Delta^{(i)}_\ell$, we have to show that $$\label{eq:GT-crystal-pf-fe-mustshow}
\ell=\min\{j\in\{1,2,\ldots,i\}\mid B_j^{(i)}({\Lambda}')={\varepsilon}_i({\Lambda}')\}.$$ For $1\le j<\ell$ we saw that $B_j^{(i)}({\Lambda}')=B_j^{(i)}({\Lambda})$, and (\[eq:AB-relation\]) together with the definition (\[eq:GT-crystal-pf-fe-ell\]) of $\ell$ implies that $B_j^{(i)}({\Lambda})\le B_\ell^{(i)}({\Lambda})$, while $B_\ell^{(i)}({\Lambda}')=1+B_\ell^{(i)}({\Lambda})$. So ${\varepsilon}_i({\Lambda}')\ge B_\ell^{(i)}({\Lambda}')$ and we will show equality. For $\ell<j\le i$ we have, by definition of $b_j^{(i)}({\Lambda})$, $$\label{eq:GT-crystal-pf-fe4}
B_j^{(i)}({\Lambda}')=2+B_j^{(i)}({\Lambda}),$$ and by (\[eq:AB-relation\]), $$\label{eq:GT-crystal-pf-fe5}
B_j^{(i)}({\Lambda})=A_j^{(i)}({\Lambda})-A_0^{(i)}({\Lambda}),$$ while by the definition (\[eq:GT-crystal-pf-fe-ell\]) of $\ell$ we have $$\label{eq:GT-crystal-pf-fe6}
A_j^{(i)}({\Lambda})-A_0^{(i)}({\Lambda}) < A_\ell^{(i)}({\Lambda})-A_0^{(i)}({\Lambda}).$$ Thus (\[eq:GT-crystal-pf-fe4\]), (\[eq:GT-crystal-pf-fe5\]) and (\[eq:GT-crystal-pf-fe6\]) imply that $$\label{eq:GT-crystal-pf-fe7}
B_j^{(i)}({\Lambda}')\le 1+B_\ell^{(i)}({\Lambda})=B_\ell^{(i)}({\Lambda}').$$ Therefore ${\varepsilon}_i({\Lambda}')=B_{\ell}^{(i)}({\Lambda}')$ and (\[eq:GT-crystal-pf-fe-mustshow\]) holds.
The converse is analogous but we provide some details for the sake of completeness. Suppose that ${{\widetilde}{e}}_i(\Lambda')=\Lambda$. We need to show that ${{\widetilde}{f}}_i(\Lambda)=\Lambda'$. We have ${\varepsilon}_i({\Lambda}')>0$ and ${\Lambda}={\Lambda}'+\Delta_{\ell}^{(i)}$ where $\ell=\min\{j\in\{1,2,\ldots,i\}\mid B_j^{(i)}({\Lambda}')={\varepsilon}_i({\Lambda}')\}$. First we show $\varphi_i({\Lambda})>0$ by showing $A_\ell^{(i)}({\Lambda})>0$. We have $A_\ell^{(i)}({\Lambda})=A_\ell^{(i)}({\Lambda}')+1$ and $A_\ell^{(i)}({\Lambda}')={\varepsilon}_i({\Lambda}')-B_{i+1}^{(i)}({\Lambda}')\ge 0$, since $B_{i+1}^{(i)}({\Lambda}')\le B_i^{(i)}({\Lambda}')\le{\varepsilon}_i({\Lambda}')$ by (\[eq:ab-ineqs\]). It remains to show ${{\widetilde}{f}}_i({\Lambda})={\Lambda}'$. Since ${\Lambda}'={\Lambda}-\Delta_\ell^{(i)}$, this is equivalent to showing that $\ell=\max\{j\in\{1,2,\ldots,i\}\mid A_j^{(i)}({\Lambda})=\varphi_i({\Lambda})\}$. For $\ell<j\le i$ we have $A_j^{(i)}({\Lambda})=A_j^{(i)}({\Lambda}')=B_j^{(i)}({\Lambda}')-B_{i+1}^{(i)}({\Lambda}')\le B_{\ell}^{(i)}({\Lambda}')-B_{i+1}^{(i)}({\Lambda}')=A_\ell^{(i)}({\Lambda}')=A_\ell^{(i)}({\Lambda})-1<A_\ell^{(i)}({\Lambda})$. So $\ell\le\max\{j\in\{1,2,\ldots,i\}\mid A_j^{(i)}({\Lambda})=\varphi_i({\Lambda})\}$. For $1\le j<\ell$ we have $A_j^{(i)}({\Lambda})=2+A_j^{(i)}({\Lambda}')=2+B_j^{(i)}({\Lambda}')-B_{i+1}^{(i)}({\Lambda}')\le 1+B_\ell^{(i)}({\Lambda}')-B_{i+1}^{(i)}({\Lambda}')=1+A_\ell^{(i)}({\Lambda}')=A_\ell^{(i)}({\Lambda})$. This proves the desired equality.
Suppose now that ${{\widetilde}{f}}_i({\Lambda})={\Lambda}'$ and ${{\widetilde}{e}}_i({\Lambda}')={\Lambda}$ hold. In this case, all the entries of $\Lambda'$ equal those of $\Lambda$, except for one entry ${\lambda}'^{(i)}_\ell$ in the $i$th row which equals ${\lambda}^{(i)}_\ell-1$. Therefore $$\begin{aligned}
{\mathrm{wt}}(\Lambda') &=\sum_{j=1}^n \big(\sum_{k=1}^j \lambda'^{(j)}_k - \sum_{k=1}^{j-1} {\lambda}'^{(j-1)}_k\big)\boldsymbol{e}_j \\
&=-\boldsymbol{e}_i+\boldsymbol{e}_{i+1}+
\sum_{j=1}^n \big(\sum_{k=1}^j \lambda^{(j)}_k
- \sum_{k=1}^{j-1} {\lambda}^{(j-1)}_k\big)\boldsymbol{e}_j \\
&={\mathrm{wt}}({\Lambda})-{\alpha}_i\end{aligned}$$ which is equivalent to ${\mathrm{wt}}({\Lambda})={\mathrm{wt}}({\Lambda}')+{\alpha}_i$.
To conclude the proof of (i) we need to show $\varepsilon_i(\Lambda')=\varepsilon_i(\Lambda)+1$ and $\varphi_i(\Lambda')=\varphi_i(\Lambda)-1$. For $1\le j,\ell\le i$ and any ${\Lambda}\in{\mathrm{GTP}}(n,{\lambda})$ we have $$A^{(i)}_j({\Lambda}-\Delta_\ell^{(i)}) =
\begin{cases}
A_j^{(i)}({\Lambda})-2 , & 0\le j<\ell, \\
A_j^{(i)}({\Lambda})-1, & j=\ell, \\
A_j^{(i)}({\Lambda}), & \ell<j\le i.
\end{cases}$$ Suppose $\varphi_i({\Lambda})>0$ and let $\ell=\max\{j\in\{1,2,\ldots,i\}\mid A_j^{(i)}({\Lambda})=\varphi_i({\Lambda})\}$. Then, for all $1\le j\le i$, $A_j^{(i)}({\Lambda}-\Delta_\ell^{(i)})\le \varphi_i({\Lambda})-1$, with equality for $j=\ell$. Therefore $\varphi_i({{\widetilde}{f}}_i({\Lambda}))=\varphi_i({\Lambda})-1$.
Let $1\le j,\ell\le i$. Then for any ${\Lambda}\in{\mathrm{GTP}}(n,{\lambda})$ we have $$B^{(i)}_j({\Lambda}+\Delta_\ell^{(i)})=
\begin{cases}
B_j^{(i)}({\Lambda}) & 1\le j < \ell,\\
B_j^{(i)}({\Lambda})-1 & j=\ell,\\
B_j^{(i)}({\Lambda})-2 & \ell < j \le i.
\end{cases}$$ Suppose ${\varepsilon}_i({\Lambda})>0$ and let $\ell=\min\{j\in\{1,2,\ldots,i\}\mid B_j^{(i)}({\Lambda})={\varepsilon}_i({\Lambda}) \}$. Then $B_j^{(i)}({\Lambda}+\Delta_\ell^{(i)})\le {\varepsilon}_i({\Lambda})-1$ with equality for $j=\ell$. Thus ${\varepsilon}_i({{\widetilde}{e}}_i({\Lambda}))={\varepsilon}_i({\Lambda})-1$.
For property (ii), we verify that for all $\Lambda\in{\mathrm{GTP}}(n,\lambda)$ we have $$\varphi_i(\Lambda)-\varepsilon_i(\Lambda) = \langle {\mathrm{wt}}(\Lambda), {\alpha}_i^\vee \rangle.$$ We have $\varphi_i(\Lambda)=\max\{A_1^{(i)},A_2^{(i)},\ldots,A_i^{(i)}\}$ and $\varepsilon_i(\Lambda)=\max\{B_1^{(i)},B_2^{(i)},\ldots,B_i^{(i)}\}$. We will use relation . Writing $A^{(i)}_k=A_k^{(i)}(\Lambda)$ for brevity we have $$\begin{aligned}
\varphi_i(\Lambda)-\varepsilon_i(\Lambda) &=
\max\{A^{(i)}_1,A^{(i)}_2,\ldots,A^{(i)}_i\}-
\max\{ A^{(i)}_1-A^{(i)}_0, A^{(i)}_2-A^{(i)}_0,\ldots, A^{(i)}_i-A^{(i)}_0\} \\
&=\max\{A^{(i)}_1,A^{(i)}_2,\ldots,A^{(i)}_i\}-
\max\{ A^{(i)}_1, A^{(i)}_2,\ldots, A^{(i)}_i\} + A^{(i)}_0 \\
&=A^{(i)}_0 \\
&=\sum_{k=0}^i ({\lambda}^{(i)}_k+{\lambda}^{(i)}_{k+1}-{\lambda}^{(i-1)}_k-{\lambda}^{(i+1)}_{k+1}) \\
&=2\sum_{k=1}^i {\lambda}^{(i)}_k-\sum_{k=1}^{i-1}{\lambda}^{(i-1)}_k - \sum_{k=1}^{i+1}{\lambda}^{(i+1)}_k.\end{aligned}$$ On the other hand, using the first expression for the weight function, we have $${\mathrm{wt}}(\Lambda)=\sum_{j=1}^n \big(\sum_{k=1}^j \lambda^{(j)}_k - \sum_{k=1}^{j-1} {\lambda}^{(j-1)}_k\big)\boldsymbol{e}_j$$ so using $$\langle \boldsymbol{e}_j, {\alpha}_i^\vee \rangle
= \langle \omega_j-\omega_{j-1}, {\alpha}_i^\vee \rangle
= \delta_{ji}-\delta_{j-1,i}$$ we get $$\begin{aligned}
\langle {\mathrm{wt}}(\Lambda), {\alpha}_i^\vee \rangle &=
\langle \sum_{j=1}^n \big(\sum_{k=1}^j \lambda^{(j)}_k
- \sum_{k=1}^{j-1} {\lambda}^{(j-1)}_k\big)\boldsymbol{e}_j , {\alpha}_i^\vee \rangle \\
&=
\sum_{j=1}^n \big(\sum_{k=1}^j {\lambda}^{(j)}_k -\sum_{k=1}^{j-1} {\lambda}^{(j-1)}_k \big)(\delta_{ji}-\delta_{j-1,i}) \\
&=2\sum_{k=1}^i {\lambda}^{(i)}_k - \sum_{k=1}^{i-1} {\lambda}^{(i-1)}_k -\sum_{k=1}^{i+1} {\lambda}^{(i+1)}_k\end{aligned}$$ This shows that $\varphi_i(\Lambda)-\varepsilon_i(\Lambda) = \langle {\mathrm{wt}}(\Lambda), {\alpha}_i^\vee \rangle$, which also equals the coefficient of $\omega_i$ in ${\mathrm{wt}}({\Lambda})$, proving the second and third equality in (\[eq:wt\]).
Lastly, since $\varphi_i(\Lambda)$ and $\varepsilon_i(\Lambda)$ are never $-\infty$, condition (iii) in the definition of a crystal is void.
Crystal isomorphism {#sec:crystal-isomorphism}
===================
In this section we prove our second main result which says that the natural bijection $\mathcal{T}$ from ${\mathrm{SSYT}}(n,\lambda)$ to ${\mathrm{GTP}}(n,\lambda)$ described in Section \[sec:natural-bijection\] is an isomorphism of crystals.
We will let $T_i$ denote the $i$th row of a semistandard Young tableau $T$, and $T_{\ge \ell}$ the subtableau obtained by deleting the first $\ell-1$ rows, and similarly for $T_{\le \ell}$: $$T = \begin{matrix} T_1 \\ T_2 \\ \vdots \\ T_n \end{matrix}
\qquad \qquad
T_{\ge \ell} = \begin{matrix} T_\ell \\ T_{\ell+1} \\ \vdots \\ T_n \end{matrix}
\qquad \qquad
T_{\le \ell} = \begin{matrix} T_1 \\ T_{2} \\ \vdots \\ T_\ell \end{matrix}$$
The following counting lemma will be useful.
\[lem:main2\] Let ${\Lambda}\in {\mathrm{GTP}}(n,\lambda)$ and $T=\mathcal{T}({\Lambda})$.
1. \[it:lem-main2-N\] For all integers $k$ with $1\le k\le n$, the number of letters $i$ in $T_k$ is equal to $\lambda^{(i)}_k - \lambda^{(i-1)}_k$.
2. \[it:lem-main2-a\] $a^{(i)}_j(\Lambda)$ counts the number of $i$’s in $T_j$ minus the number of $(i+1)$’s in $T_{j+1}$.
3. \[it:lem-main2-b\] $b^{(i)}_j(\Lambda)$ counts the number of $(i+1)$’s in $T_j$ minus the number of $i$’s in $T_{j-1}$.
4. \[it:lem-main2-A\] $A^{(i)}_\ell(\Lambda)$ counts the number of $i$’s in $T_{\ge \ell}$ minus the number of $(i+1)$’s in $T_{\ge \ell+1}$.
5. \[it:lem-main2-B\] $B_{\ell}^{(i)}(\Lambda)$ counts the number of $(i+1)$’s in $T_{\le \ell}$ minus the number of $i$’s in $T_{\le \ell-1}$.
\(a) The number of boxes in $T_k$ containing a letter from $\{ 1, 2, \dots, i \}$ is $\lambda^{(i)}_k$. Then (b) and (c) are immediate by part (a) and the definitions (\[eq:diamonds\]) of the diamond numbers. Now (d) and (e) follow from parts (b) and (c).
\[thm:main2\] Let $n$ be a positive integer and ${\lambda}$ a partition with $n$ or fewer parts. The bijection $\mathcal{T}$ from ${\mathrm{GTP}}(n,\lambda)$ to ${\mathrm{SSYT}}(n,\lambda)$ given in Section \[sec:natural-bijection\] is an isomorphism of crystals.
Let ${\Lambda}\in{\mathrm{GTP}}(n,\lambda)$, and let $T=\mathcal{T}({\Lambda})$.
${\mathrm{wt}}$: For each $i\in\{1,2,\ldots,n-1\}$, by Lemma \[lem:main2\], $\sum_{j=1}^i {\lambda}^{(i)}_j-\sum_{j=1}^{i-1} {\lambda}^{(i-1)}_j$ equals $N_i(T)$, since the letter $i$ cannot occur below the $i$th row in an SSYT.
Let $i\in \{1,2,\ldots,n-1\}$ be arbitrary. *In the rest of the proof, “bracketing” refers to $i$-bracketing.* Put $A^{(i)}_k=A_k^{(i)}(\Lambda)$ and $B^{(i)}_k=B^{(i)}_k({\Lambda})$ for brevity.
By definition, $\varphi_i(T)$ is the number of unbracketed $i$'s in $T$. So $\varphi_i(T)\ge \varphi_i(T_{\ge j})$ for any $j\in\{1,2,\ldots,i\}$. Let $j_1\ge j_2\ge\cdots\ge j_k$ be all the rows of $T$ containing at least one unbracketed $i$. Then $\varphi_i(T_{\ge j_1}) = A_{j_1}^{(i)}$ by Lemma \[lem:main2\]. Furthermore, $A_{j_1}^{(i)}>A_{k}^{(i)}$ for $k=i, i-1, \ldots, j_1+1$. Next, $\varphi_i(T_{\ge j_2}) = \varphi_i(T_{\ge{j_1}})+\varphi_i(T_{\ge {j_2}}/T_{\ge {j_1}}) = A_{j_1}^{(i)}+(A_{j_2}^{(i)}-A_{j_1}^{(i)})=A_{j_2}^{(i)}$. (Here $T_{\ge {j_2}}/T_{\ge {j_1}}$ denotes the subtableau of $T$ consisting of row $j_2$ through row $j_1-1$.) And $A_{j_2}^{(i)}> A_k^{(i)}$ for $k=j_1, j_1-1,\ldots, j_2+1$. Continuing recursively, we eventually obtain that $\varphi_i(T)=\varphi_i(T_{\ge {j_k}}) = A_{j_k}^{(i)}>A_{j}^{(i)}$ for $j>j_k$. It remains to be shown that $A^{(i)}_j\le A^{(i)}_{j_k}$ for $j=j_k-1,j_k-2,\ldots, 1$. Since $j_k$ is the top row having unbracketed $i$'s, we have $A_j^{(i)}({\Lambda}_{\le j_k-1}) \le 0$ for $j=j_k-1, j_k-2,\ldots, 1$, where ${\Lambda}_{\le r}$ is defined to be $\mathcal{T}^{-1}(T_{\le r})$ for all $r$. Since $A_j^{(i)}({\Lambda}) - A_{j_k}^{(i)}({\Lambda}) = A_j^{(i)}({\Lambda}_{\le j_k-1})$, this shows the required inequality.
${\varepsilon}_i$: This part can be proved completely analogously to the case of $\varphi_i$. But it also follows from the case of $\varphi_i$ and the fact that we already know that ${\mathrm{GTP}}(n,{\lambda})$ and ${\mathrm{SSYT}}(n,{\lambda})$ are crystals, and hence by property (ii) in the definition of crystal and that ${\mathrm{wt}}({\Lambda})={\mathrm{wt}}(T)$, $${\varepsilon}_i({\Lambda}) = \varphi_i({\Lambda})-\langle{\mathrm{wt}}({\Lambda}),{\alpha}_i^\vee\rangle = \varphi_i(T)-\langle{\mathrm{wt}}(T),{\alpha}_i^\vee\rangle = {\varepsilon}_i(T).$$
${{\widetilde}{f}}_i$: We have seen already that $\varphi_i({\Lambda})=\varphi_i(T)$. Thus ${{\widetilde}{f}}_i({\Lambda})\neq 0$ iff ${{\widetilde}{f}}_i(T)\neq 0$. Suppose ${{\widetilde}{f}}_i({\Lambda})\neq 0$. Put ${\Lambda}'={{\widetilde}{f}}_i({\Lambda})={\Lambda}-\Delta^{(i)}_\ell$, where $\ell=\max\{j\in\{1,2,\ldots,i\}\mid A_j^{(i)}({\Lambda})=\varphi_i({\Lambda})\}$. By definition of the bijection $\mathcal{T}$, the SSYT $\mathcal{T}({\Lambda}')$ is obtained from $T$ by changing the rightmost $i$ in row $\ell$ to $i+1$. On the other hand, ${{\widetilde}{f}}_i(T)$ is obtained by changing the rightmost unbracketed $i$ in $T$ to $i+1$. So we must show that $\ell$ equals the row index of the rightmost unbracketed $i$ in $T$. First we show that there is an unbracketed $i$ in row $\ell$ of $T$. To do this we derive a series of equivalences. Let $j\in\{1,2,\ldots,i\}$ be arbitrary. Then: $$\begin{aligned}
&\text{$T$ has an unbracketed $i$ in row $j$}\\
\Leftrightarrow \; & \varphi_i(T_{\ge j})>\varphi_i(T_{\ge j+1}) \\
\Leftrightarrow \; & \varphi_i({\Lambda}_{\ge j})>\varphi_i({\Lambda}_{\ge j+1}) \quad\text{where ${\Lambda}_{\ge k}:=\mathcal{T}^{-1}(T_{\ge k})$} \\
\Leftrightarrow \; & \max\{A^{(i)}_k({\Lambda}_{\ge j})\mid k=1,2,\ldots,i\} > \max\{A^{(i)}_k({\Lambda}_{\ge j+1})\mid k=1,2,\ldots,i\} \\
\Leftrightarrow \; & \max\{A^{(i)}_k({\Lambda})\mid k=j,j+1,\ldots,i\} > \max\{A^{(i)}_k({\Lambda})\mid k=j+1,j+2,\ldots,i\} \\
\Leftrightarrow \; & A^{(i)}_j({\Lambda})> A^{(i)}_k({\Lambda}) \quad \text{ for all $k\in\{j+1,\, j+2,\,\ldots,\, i\}$.}\end{aligned}$$ The penultimate equivalence holds by the counting lemma, Lemma \[lem:main2\], and that the first row of $T_{\ge j}$ is the $j$th row of $T$ and so on. Now, by definition of $\ell$ we do indeed have $$A^{(i)}_\ell({\Lambda})>A^{(i)}_k({\Lambda})\quad\text{for all $k\in\{\ell+1,\, \ell+2,\, \ldots,\, i\}$}$$ and therefore by the above series of equivalences there is at least one unbracketed $i$ in row $\ell$ of $T$.
It remains to show that $\ell$ is the row of the rightmost unbracketed $i$ in $T$. Since any $i$ directly to the right of an unbracketed $i$ is itself unbracketed, any unbracketed $i$ further to the right would have to occur among the top $\ell-1$ rows of $T$. Any unbracketed $i$ among the top $\ell-1$ rows of $T$ would remain unbracketed when considered as an entry of the truncated tableau $T_{\le\ell-1}$. So it suffices to show that $T_{\le \ell-1}$ has no unbracketed $i$’s, or equivalently, that $\varphi_i(T_{\le\ell-1})=0$. Let ${\Lambda}_{\le\ell-1}=\mathcal{T}^{-1}(T_{\le\ell-1})$. As previously shown, $\varphi_i(T_{\le\ell-1})=\varphi_i({\Lambda}_{\le\ell-1})$. By Lemma \[lem:main2\], for all $1\le j\le i$: $$A^{(i)}_j({\Lambda}_{\le\ell-1}) = A^{(i)}_j({\Lambda})-A^{(i)}_\ell({\Lambda})$$ which is less than or equal to zero by definition of $\ell$. Hence $\varphi_i({\Lambda}_{\le\ell-1})=0$.
We know that ${\varepsilon}_i({\Lambda})={\varepsilon}_i(T)$. Thus ${{\widetilde}{e}}_i({\Lambda})=0$ iff ${{\widetilde}{e}}_i(T)=0$. Suppose that ${{\widetilde}{e}}_i({\Lambda})\neq 0$. Put ${\Lambda}'={{\widetilde}{e}}_i({\Lambda})={\Lambda}+\Delta^{(i)}_\ell$, where $$\ell=\min\big\{j\in\{1,2,\ldots,i\}\mid B^{(i)}_j({\Lambda})={\varepsilon}_i({\Lambda})\big\}.$$ Also recall that $${\varepsilon}_i({\Lambda})=\max\{B^{(i)}_1({\Lambda}),\, B^{(i)}_2({\Lambda}),\, \ldots,\, B^{(i)}_i({\Lambda})\}.$$ By definiton of the bijection $\mathcal{T}$, the SSYT $\mathcal{T}({\Lambda}')$ is obtained from $T$ by changing the leftmost $i+1$ in row $\ell$ of $T$ to $i$. On the other hand, ${{\widetilde}{e}}_i(T)$ is the SSYT obtained from $T$ by changing the leftmost unbracketed $i+1$ to $i$. So we must show that $\ell$ equals the row index of the row in $T$ which contains the leftmost unbracketed $i+1$.
First we show that row $\ell$ of $T$ contains an unbracketed $i+1$. For this, we derive an equivalent condition. For all $j\in\{1,2,\ldots,i\}$ we have: $$\begin{aligned}
&\text{$T$ contains an unbracketed $i+1$ in row $j+1$}\\
\Leftrightarrow \; & {\varepsilon}_i(T_{\le j+1}) > {\varepsilon}_i(T_{\le j}) \\
\Leftrightarrow \; & {\varepsilon}_i({\Lambda}_{\le j+1}) > {\varepsilon}_i({\Lambda}_{\le j}) \quad\text{where ${\Lambda}_{\le k}:=\mathcal{T}^{-1}(T_{\le k})$}\\
\Leftrightarrow \; & \max\{B^{(i)}_k({\Lambda}_{\le j+1})\mid k=1,2,\ldots,i\}>\max\{B^{(i)}_k({\Lambda}_{\le j})\mid k=1,2,\ldots,i\}\\
\Leftrightarrow \; & B^{(i)}_{j+1}>B^{(i)}_k\quad\text{for all $k\in\{1,2,\ldots,j\}$}\end{aligned}$$ This condition holds for $j+1=\ell$ by definition of $\ell$. Thus $T$ contains an unbracketed $i+1$ in row $\ell$.
Next we show that no row of $T$ contains an unbracketed $i+1$ further to the left. Such a row $j$ would have to be below $\ell$, i.e. $j\ge \ell+1$. By the above equivalences we would get $$B^{(i)}_j({\Lambda})> B^{(i)}_k({\Lambda})\quad\text{for all $k\in\{1,2,\ldots,j-1\}$}.$$ In particular, $B^{(i)}_j({\Lambda})>B^{(i)}_\ell({\Lambda})$ which contradicts the definition of $\ell$. This finishes the proof that $\mathcal{T}({{\widetilde}{e}}_i({\Lambda}))={{\widetilde}{e}}_i(\mathcal{T}({\Lambda}))$.
As is well-known, if a function between crystals preserves the string length functions and intertwines the ${{\widetilde}{f}}_i$ crystal operators, then it automatically intertwines the ${{\widetilde}{e}}_i$ crystal operators. We illustrate this for the convenience of the reader. We know that ${\varepsilon}_i({\Lambda})={\varepsilon}_i(T)$. Thus ${{\widetilde}{e}}_i({\Lambda})=0$ iff ${{\widetilde}{e}}_i(T)=0$. Suppose that ${{\widetilde}{e}}_i({\Lambda})\neq 0$. Note that $\varphi_i(\mathcal{T}({{\widetilde}{e}}_i({\Lambda})))=\varphi_i({{\widetilde}{e}}_i({\Lambda}))\ge 1$. Thus we have $$\begin{aligned}
\mathcal{T}\big({{\widetilde}{e}}_i(\Lambda)\big) &=
{{\widetilde}{e}}_i{{\widetilde}{f}}_i\big(\mathcal{T}({{\widetilde}{e}}_i(\Lambda))\big)\\
&={{\widetilde}{e}}_i\mathcal{T}\big({{\widetilde}{f}}_i{{\widetilde}{e}}_i({\Lambda})\big) \quad\text{by $\mathcal{T}{{\widetilde}{f}}_i={{\widetilde}{f}}_i\mathcal{T}$}\\
&={{\widetilde}{e}}_i\big(\mathcal{T}(\Lambda)\big).\end{aligned}$$
Example {#sec:example}
=======
(Figure \[fig:crystals\]: the crystal graphs of ${\mathrm{GTP}}(3,(3,1,0))$ (left) and of ${\mathrm{SSYT}}(3,(3,1,0))$ (right). Each graph has $15$ vertices, and the arrows are labeled by $i\in\{1,2\}$, indicating the action of the crystal operator ${{\widetilde}{f}}_i$.)
Figure \[fig:crystals\] shows the respective crystal graphs of two isomorphic crystals of type $A_2$. To illustrate, consider the Gelfand-Tsetlin pattern $${\Lambda}= \left\{\begin{array}{*{5}c} {\lambda}^{(3)}_1 & & {\lambda}^{(3)}_2 & & {\lambda}^{(3)}_3 \\ & {\lambda}^{(2)}_1 & & {\lambda}^{(2)}_2 & \\ & & {\lambda}^{(1)}_1 & & \end{array}\right\}
= \left\{\begin{array}{*{5}c} 3 & & 1 & & 0 \\ & 3 & & 1 & \\ & & 2 & & \end{array}\right\}$$ in the crystal ${\mathrm{GTP}}(3,(3,1,0))$ (in Figure \[fig:crystals\] it is in the second vertex row from the top). Let us compute ${\widetilde}f_1({\Lambda})$. First we need $\varphi_1({\Lambda})$. The relevant diamond sum (see \[eq:diamond-sumA\]) for ${\Lambda}$ is (recall that entries outside the array are zero by convention) $$A_1^{(1)}({\Lambda}) = a_1^{(1)}({\Lambda}) = {\lambda}_1^{(1)} - {\lambda}_1^{(0)} + {\lambda}_2^{(1)} - {\lambda}_2^{(2)} = 2 - 0 + 0 - 1 = 1,$$ which gives $$\varphi_1({\Lambda}) = \max\big\{ A_1^{(1)}({\Lambda})\big\} = 1.$$ By Definition \[def:GT-crystal-data\], the only value for $\ell$ here is $\ell=1$, hence applying ${{\widetilde}{f}}_1$ on ${\Lambda}$ has the effect of decrementing the entry ${\lambda}^{(1)}_1$: $${\widetilde}{f}_1({\Lambda}) = {\Lambda}-\Delta^{(1)}_1= \left\{\begin{array}{*{5}c} 3 & & 1 & & 0 \\ & 3 & & 1 & \\ & & 1 & & \end{array}\right\}$$ as is visible in Figure \[fig:crystals\]. Let us also compute ${{\widetilde}{f}}_2({\Lambda})$. The diamond sums we need are $$\begin{aligned}
A_1^{(2)}({\Lambda}) &= a_1^{(2)}({\Lambda}) + a_2^{(2)}({\Lambda}) = (3 - 2 + 1 - 1) + (1 - 0 + 0 - 0) = 1 + 1 = 2 \\
A_2^{(2)}({\Lambda}) &= a_2^{(2)}({\Lambda}) = 1 - 0 + 0 - 0 = 1,\end{aligned}$$ which gives $$\varphi_2({\Lambda}) = \max\big\{ A_1^{(2)}({\Lambda}), A_2^{(2)}({\Lambda}) \big\} = \max \big\{ 2, 1 \big\} = 2.$$ Here, the largest index $\ell\in\{1,2\}$ for which $A^{(2)}_\ell=\varphi_2({\Lambda})$ is $\ell=1$. Therefore, applying ${{\widetilde}{f}}_2$ on ${\Lambda}$ has the effect of decrementing the entry ${\lambda}^{(2)}_1$: $${\widetilde}{f}_2({\Lambda}) = {\Lambda}-\Delta^{(2)}_1= \left\{\begin{array}{*{5}c} 3 & & 1 & & 0 \\ & 2 & & 1 & \\ & & 2 & & \end{array}\right\}$$ as can be seen in Figure \[fig:crystals\]. The remaining crystal structure can be worked out in a similar fashion.
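For readers following the Python sketches above, the computations of this example can be checked mechanically, reusing the helpers given after Definition \[def:GT-crystal-data\]:

```python
Lam = [[3, 1, 0], [3, 1], [2]]                  # the pattern of this example, top row first

assert A(Lam, 1, 1) == 1 and phi(Lam, 1) == 1
assert f_tilde(Lam, 1) == [[3, 1, 0], [3, 1], [1]]

assert A(Lam, 2, 1) == 2 and A(Lam, 2, 2) == 1 and phi(Lam, 2) == 2
assert f_tilde(Lam, 2) == [[3, 1, 0], [2, 1], [2]]
```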
---
abstract: |
We use a newly assembled large sample of 3,545 star-forming galaxies with secure spectroscopic, grism, and photometric redshifts at $z=1.5-2.5$ to constrain the relationship between UV slope ($\beta$) and dust attenuation ($L_{\rm IR}/L_{\rm UV} \equiv {\rm IRX}$). Our sample benefits from the combination of deep [*Hubble*]{} WFC3/UVIS photometry from the [*Hubble*]{} Deep UV (HDUV) Legacy survey and existing photometric data compiled in the 3D-HST survey. Our sample significantly extends the range of UV luminosity and $\beta$ probed in previous samples of UV-selected galaxies, including those as faint as $M_{1600}=-17.4$ ($\simeq 0.05L^{\ast}_{\rm UV}$) and having $-2.6\la
\beta\la 0.0$. IRX is measured using stacks of deep [*Herschel*]{}/PACS 100 and $160$$\mu$m data, and the results are compared with predictions of the IRX-$\beta$ relation for different assumptions of the stellar population model and dust obscuration curve. Stellar populations with intrinsically blue UV spectral slopes necessitate a steeper attenuation curve in order to reproduce a given IRX-$\beta$ relation. We find that $z=1.5-2.5$ galaxies have an IRX-$\beta$ relation that is consistent with the predictions for an SMC extinction curve if we invoke sub-solar ($0.14Z_\odot$) metallicity models that are currently favored for high-redshift galaxies, while the commonly assumed starburst attenuation curve over-predicts the IRX at a given $\beta$ by a factor of $\ga 3$. The IRX of high-mass $M_{\ast}> 10^{9.75}$$M_\odot$ galaxies is a factor of $>4$ larger than that of low-mass galaxies, lending support for the use of stellar mass as a proxy for dust attenuation. Separate IRX-$L_{\rm UV}$ relations for galaxies with blue and red $\beta$ conflate to give an average IRX that is roughly constant with UV luminosity for $L_{\rm UV}\ga 3\times 10^9$$L_\odot$. Thus, the commonly observed trend of fainter galaxies having bluer $\beta$ may simply reflect bluer intrinsic UV slopes for such galaxies, rather than lower dust obscurations. Taken together with previous studies, we find that the IRX-$\beta$ distribution for young and low-mass galaxies at $z\ga 2$ implies a dust curve that is steeper than that of the SMC, suggesting a lower dust attenuation for these galaxies at a given $\beta$ relative to older and more massive galaxies. The lower dust attenuations and higher ionizing photon output implied by low metallicity stellar population models point to Lyman continuum production efficiencies, $\xi_{\rm ion}$, that may be elevated by a factor of $\approx 2$ relative to the canonical value for $L^{\ast}$ galaxies, aiding in their ability to keep the universe ionized at $z\sim 2$.
author:
- 'Naveen A. Reddy, Pascal A. Oesch, Rychard J. Bouwens, Mireia Montes, Garth D. Illingworth, Charles C. Steidel, Pieter G. van Dokkum, Hakim Atek, Marcella C. Carollo, Anna Cibinel, Brad Holden, Ivo Labbé, Dan Magee, Laura Morselli, Erica J. Nelson, & Steve Wilkins'
title: 'The HDUV Survey: A Revised Assessment of the Relationship between UV Slope and Dust Attenuation for High-Redshift Galaxies'
---
INTRODUCTION {#sec:intro}
============
The ultraviolet (UV) spectral slope, $\beta$, where $f_\lambda\propto
\lambda^\beta$, is by far the most commonly used indicator of dust obscuration—usually parameterized as the ratio of the infrared-to-UV luminosity, $L_{\rm IR}/L_{\rm UV}$, or “IRX” [@calzetti94; @meurer99]—in moderately reddened high-redshift ($z\ga 1.5$) star-forming galaxies. The UV slope can be measured easily from the same photometry used to select galaxies based on the Lyman break, and the slope can be used as a proxy for the dust obscuration in galaxies (e.g., @calzetti94 [@meurer99; @adelberger00; @reddy06a; @daddi07a; @reddy10; @overzier11; @reddy12a; @buat12]) whose dust emission is otherwise too faint to directly detect in the mid- and far-infrared (e.g., @adelberger00 [@reddy06a]). Generally, these studies have indicated that UV-selected star-forming galaxies at redshifts $1.5\la z\la 3.0$ follow on average the relationship between UV slope and dust obscuration (i.e., the IRX-$\beta$ relation) found for local UV starburst galaxies (e.g., @nandra02 [@reddy04; @reddy06a; @daddi07a; @sklias14]; c.f., @heinis13 [@alvarez16]), though with some deviations that depend on galaxy age [@reddy06a; @siana08; @reddy10; @buat12], bolometric luminosity (e.g., @chapman05 [@reddy06a; @casey14b]), stellar mass [@pannella09; @reddy10; @bouwens16b], and redshift [@pannella15]. Unfortunately, typical star-forming ($L^{\ast}$) galaxies at these redshifts are too faint to directly detect in the far-infrared. As such, with the exception of individual lensed galaxy studies [@siana08; @siana09; @sklias14; @watson15; @dessauges16], most investigations that have explored the relation between UV slope and dust obscuration for moderately reddened galaxies have relied on stacking relatively small numbers of objects and/or used shorter wavelength emission—such as that arising from polycyclic aromatic hydrocarbons (PAHs)—to infer infrared luminosities.
New avenues of exploring the dustiness of high-redshift galaxies have been made possible with facilities such as the Atacama Large Millimeter Array (ALMA), allowing for direct measurements of either the dust continuum or far-IR spectral features for more typical star-forming galaxies in the distant universe [@carilli13; @dunlop17]. Additionally, the advent of large-scale rest-optical spectroscopic surveys of intermediate-redshift galaxies at $1.4\la z\la 2.6$—such as the 3D-HST [@vandokkum13], the MOSFIRE Deep Evolution Field (MOSDEF; @kriek15), and the Keck Baryonic Structure surveys (KBSS; @steidel14)—have enabled measurements of obscuration in individual high-redshift star-forming galaxies using Balmer recombination lines (e.g., @price14 [@reddy15; @nelson16]). While these nebular line measurements will be possible in the near future for $z\ga 3$ galaxies with the [*James Webb Space Telescope*]{} ([ *JWST*]{}), the limited lifetime of this facility and the targeted nature of both ALMA far-IR and [*JWST*]{} near- and mid-IR observations means that the UV slope will remain the only easily accessible proxy for dust obscuration for large numbers of [ *individual*]{} typical galaxies at $z\ga 3$ in the foreseeable future.
Despite the widespread use of the UV slope to infer dust attenuation, there are several complications associated with its use. First, the UV slope is sensitive to metallicity and star-formation history (e.g., @kong04 [@seibert05; @johnson07b; @dale09; @munoz09; @reddy10; @wilkins11; @boquien12; @reddy12b; @schaerer13; @wilkins13; @grasha13; @zeimann15]). Second, there is evidence that the relationship between UV slope and dust obscuration depends on stellar mass and/or age (e.g., @reddy06a [@buat12; @zeimann15; @bouwens16b]), perhaps reflecting variations in the shape of the attenuation curve. Third, the measurement of the UV slope may be complicated by the presence of the 2175Å absorption feature [@noll09; @buat11; @kriek13; @buat12; @reddy15]. Fourth, as noted above, independent inferences of the dust attenuation in faint galaxies typically involve stacking mid- and far-IR data, but such stacking masks the scatter in the relationship between UV slope and obscuration. Quantifying this scatter can elucidate the degree to which the attenuation curve may vary from galaxy-to-galaxy, or highlight the sensitivity of the UV slope to factors other than dust obscuration. In general, the effects of age, metallicity, and star-formation history on the UV slope may become important for ultra-faint galaxies at high redshift which have been suggested to undergo bursty star formation (e.g., @weisz12 [@hopkins14; @dominguez15; @guo16; @sparre17; @faucher17]).
Obtaining direct constraints on the dust obscuration of UV-faint galaxies is an important step in evaluating the viability of the UV slope to trace dustiness, quantifying the bolometric luminosities of ultra-faint galaxies and their contribution to the global SFR and stellar mass densities, assessing possible variations in the dust obscuration curve over a larger dynamic range of galaxy characteristics (e.g., star-formation rate, stellar mass, age, metallicity, etc.), and discerning the degree to which the UV slope may be affected by short timescale variations in star-formation rate.
Separately, recent advances in stellar population models that include realistic treatments of stellar mass loss, rotation, and multiplicity [@eldridge09; @brott11; @levesque12; @leitherer14] can result in additional dust heating from ionizing and/or recombination photons. Moreover, the intrinsic UV spectral slopes of high-redshift galaxies with lower stellar metallicities may be substantially bluer [@schaerer13; @sklias14; @alavi14; @cullen17] than what has been typically assumed in studies of the IRX-$\beta$ relation. Thus, it seems timely to re-evaluate the IRX-$\beta$ relation in light of these issues.
With this in mind, we use a newly assembled large sample of galaxies with secure spectroscopic or photometric redshifts at $1.5\le z\le
2.5$ in the GOODS-North and GOODS-South fields to investigate the correlation between UV slope and dust obscuration. Our sample takes advantage of newly acquired [*Hubble*]{} UVIS F275W and F336W imaging from the HDUV survey (Oesch et al. 2017, submitted) which aids in determining photometric redshifts when combined with existing 3D-HST photometric data. This large sample enables precise measurements of dust obscuration through the stacking of far-infrared images from the [*Herschel Space Observatory*]{}, and also enables stacking in multiple bins of other galaxy properties (e.g., stellar mass, UV luminosity) to investigate the scatter in the IRX-$\beta$ relation. We also consider the newest stellar population models—those which may be more appropriate in describing very high-redshift ($z\ga 2$) galaxies—in interpreting the relationship between UV slope and obscuration.
\begin{tabular}{lc}
Fields & GOODS-N, GOODS-S \\
Total area & $\sim 329$ arcmin$^2$ \\
Area with HDUV imaging & $\sim 100$ arcmin$^{2}$ \\
UV/Optical photometry & 3D-HST Catalogs and HDUV F275W and F336W \\
Mid-IR imaging & [*Spitzer*]{} GOODS Imaging Program \\
Far-IR imaging & GOODS-[*Herschel*]{} and PEP Surveys \\
Optical depth of sample & $H \simeq 27$ \\
UV depth of sample & $m_{\rm UV} \simeq 27$ \\
Total number of galaxies & 4,078 \\
Number of galaxies with far-IR coverage & 3,569 \\
Final number (excl. far-IR-detected objects) & 3,545 \\
$\beta$ Range & $-2.55\le \beta \le 1.05$ ($\langle\beta\rangle = -1.71$) \\
\end{tabular}
\[tab:sample\]
The outline of this paper is as follows. In Section \[sec:sample\], we discuss the selection and modeling of stellar populations of galaxies used in this study. The methodology used for stacking the mid- and far-IR [*Spitzer*]{} and [*Herschel*]{} data is discussed in Section \[sec:stacking\]. In Section \[sec:predirx\], we calculate the predicted relationships between IRX and $\beta$ for different attenuation/extinction curves using energy balance arguments. These predictions are compared to our (as well as literature) stacked measurements of IRX in Section \[sec:discussion\]. In this section, we also consider the variation of IRX with stellar masses, UV luminosities, and the ages of galaxies, as well as the implications of our results for modeling the stellar populations and inferring the ionizing efficiencies of high-redshift galaxies. AB magnitudes are assumed throughout [@oke83], and we use a @chabrier03 initial mass function (IMF) unless stated otherwise. We adopt a cosmology with $H_{0}=70$kms$^{-1}$Mpc$^{-1}$, $\Omega_{\Lambda}=0.7$, and $\Omega_{\rm m}=0.3$.
SAMPLE AND IR IMAGING {#sec:sample}
=====================
Parent Sample
-------------
A few basic properties of our sample are summarized in Table \[tab:sample\]. Our sample of galaxies was constructed by combining the publicly-available ground- and space-based photometry compiled by the 3D-HST survey [@skelton14] with newly obtained imaging from the [*Hubble*]{} Deep UV (HDUV) Legacy Survey (GO-13871; Oesch et al. 2017, submitted). The HDUV survey imaged the two GOODS fields in the F275W and F336W bands to depths of $\simeq 27.5$ and $27.9$mag, respectively ($5\sigma$; $0\farcs4$ diameter aperture), with the UVIS channel of the [*Hubble Space Telescope*]{} WFC3 instrument. A significant benefit of the HDUV imaging is that it allows for the Lyman break selection of galaxies to fainter UV luminosities and lower redshifts than possible from ground-based surveys (Oesch et al. 2017, submitted), and builds upon previous efforts to use deep UVIS imaging to select Lyman break galaxies at $z\sim 2$ [@hathi10; @windhorst11]. The reduced UVIS images, covering $\approx 100$arcmin$^{2}$, include previous imaging obtained by the CANDELS [@koekemoer11] and UVUDF surveys [@teplitz13; @rafelski15].
Photometry and Stellar Population Parameters
--------------------------------------------
Source Extractor [@bertin96] was used to measure photometry on the UVIS images using the detection maps for the combined F125W$+$F140W$+$F160W images, as was done for the 3D-HST photometric catalogs [@skelton14]. The publicly-available 3D-HST photometric catalogs were then updated with the HDUV photometry—i.e., such that the updated catalogs contain updated photometry for objects lying in the HDUV pointings as well as the original set of photometry for objects lying outside the HDUV pointings. This combined dataset was then used to calculate photometric redshifts using EAZY [@brammer08] and determine stellar population parameters (e.g., stellar mass) using FAST [@kriek09]. Where available, grism and external spectroscopic redshifts were used in lieu of the photometric redshifts when fitting for the stellar populations. These external spectroscopic redshifts are provided in the 3D-HST catalogs [@momcheva16]. We also included 759 spectroscopic redshifts for galaxies observed during the 2012B-2015A semesters of the MOSDEF survey [@kriek15].
For the stellar population modeling, we adopted the @conroy10 stellar population models for $Z=0.019$Z$_\odot$, a delayed-$\tau$ star-formation history with $8.0\le \log[\tau/{\rm yr}] \le 10.0$, a @chabrier03 initial mass function (IMF), and the @calzetti00 dust attenuation curve with $0.0\le A_{\rm V}\le
4.0$.[^1] We imposed a minimum age of $40$Myr based on the typical dynamical timescale for $z\sim 2$ galaxies [@reddy12b].
The UV slope for each galaxy was calculated both by (a) fitting a power law through the broadband photometry, including only bands lying redward of the Lyman break and blueward of rest-frame $2600$Å; and (b) fitting a power law through the best-fit SED points that lie in wavelength windows spanning rest-frame $1268\le \lambda\le
2580$Å, as defined in @calzetti94. Method (a) includes a more conservative estimate for the errors in $\beta$, but generally the two methods yielded values of the UV slope for a given galaxy that were within $\delta\beta \simeq 0.1$ of each other. We adopted the $\beta$ calculated using method (a) for the remainder of our analysis, and note that in Section \[sec:predirx\], we consider the value of $\beta$ using windows lying strictly blueward of $\approx 1800$Å.
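As an illustration of method (a), the fit reduces to a weighted linear regression in log-log space. The snippet below is a minimal sketch assuming the band fluxes have already been shifted to the rest frame and converted to $f_\lambda$; the wavelength cuts and variable names are illustrative and do not reproduce the exact band bookkeeping used for the catalogs.

```python
import numpy as np

def uv_slope(lam_rest, f_lam, f_lam_err, lam_min=1268.0, lam_max=2600.0):
    """Fit f_lambda ~ lambda**beta through bands at rest-frame
    lam_min < lambda < lam_max (Angstroms); returns (beta, sigma_beta)."""
    sel = (lam_rest > lam_min) & (lam_rest < lam_max) & (f_lam > 0)
    x, y = np.log10(lam_rest[sel]), np.log10(f_lam[sel])
    sig_y = f_lam_err[sel] / (f_lam[sel] * np.log(10.0))  # error on log10(f_lam)
    p, cov = np.polyfit(x, y, 1, w=1.0 / sig_y, cov=True)
    return p[0], np.sqrt(cov[0, 0])

# toy check: a beta = -1.7 power law sampled at four band centers
lam = np.array([1500.0, 1800.0, 2200.0, 2500.0])
f = 1e-19 * (lam / 1600.0) ** -1.7
beta, beta_err = uv_slope(lam, f, 0.05 * f)
print(beta)   # ~ -1.7
```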
Criteria for Final Sample
-------------------------
The photometric catalogs, along with those containing the redshifts and stellar population parameters, were used to select galaxies based on the following criteria. First, the object must have a Source Extractor “class star” parameter $< 0.95$, or observed-frame $U-J$ and $J-K$ colors that reside in the region occupied by star-forming galaxies as defined by [@skelton14]—these criteria ensure the removal of stars.
Second, the galaxy must have a spectroscopic or grism redshift, or $95\%$ confidence intervals in the photometric redshift, that lie in the range $1.5\le z\le 2.5$. Note that the tight photometric redshift confidence intervals required for inclusion in our sample naturally select those objects with $H\la 27$.
Third, the object must not have a match in X-ray AGN catalogs compiled for the GOODS-North and GOODS-South fields (e.g., @shao10 [@xue11]). Additionally, we use the @donley12 [*Spitzer*]{} IRAC selection to isolate any infrared-bright AGN. While the X-ray and IRAC selections may not identify [*all*]{} AGN at the redshifts of interest, they are likely to isolate those AGN that may significantly influence our stacked far-IR measurements.
Fourth, the object must not have rest-frame $U-V$ and $V-J$ colors that classify it as a quiescent galaxy [@williams09; @skelton14]. The object is further required to have a specific star-formation rate sSFR$\ga 0.1$Gyr$^{-1}$. These criteria safeguard against the inclusion of galaxies where $\beta$ may be red due to the contribution of older stars to the near-UV continuum, or where dust heating by older stars may become significant.
Fifth, to ensure that the sample is not biased towards objects with red $U-H$ colors at faint $U$ magnitudes (owing to the limit in $H$-band magnitude mentioned previously), the galaxy must have an apparent magnitude at $[1+z]\times 1600$Å of $\le 27.0$mag. This limit still allows us to include galaxies with absolute magnitudes as faint as $M_{\rm 1600}\simeq -17.4$. These criteria result in a sample of 4,078 galaxies.
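For concreteness, the selection can be expressed as a single boolean mask over the merged catalog. The sketch below is schematic: the column names are hypothetical, and the UVJ star/galaxy and quiescent classifications are assumed to be pre-computed flags rather than re-derived here from the color cuts of @skelton14 and @williams09.

```python
import numpy as np

def select_sample(cat):
    """cat: dict of equal-length numpy arrays; returns the selection mask.
    All column names are illustrative placeholders."""
    not_star  = (cat['class_star'] < 0.95) | cat['uvj_star_galaxy_ok']
    in_z      = np.where(cat['has_spec_or_grism_z'],
                         (cat['z_best'] >= 1.5) & (cat['z_best'] <= 2.5),
                         (cat['z_phot_lo95'] >= 1.5) & (cat['z_phot_hi95'] <= 2.5))
    not_agn   = ~cat['xray_agn'] & ~cat['irac_agn']
    star_form = ~cat['uvj_quiescent'] & (cat['ssfr_gyr'] >= 0.1)
    uv_limit  = cat['m_1600_obs'] <= 27.0
    return not_star & in_z & not_agn & star_form & uv_limit
```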
Spitzer and Herschel Imaging
----------------------------
We used the publicly available [*Spitzer*]{}/MIPS 24$\mu$m and [ *Herschel*]{}/PACS 100 and 160$\mu$m data in the two GOODS fields for our analysis. The 24$\mu$m data come from the [*Spitzer*]{} GOODS imaging program (PI: Dickinson), and trace the dust-sensitive rest-frame $7.7$$\mu$m emission feature for galaxies at $1.5\le z\le
2.5$ (e.g., @reddy06a). The observed $24$$\mu$m fluxes of $z\sim 2$ galaxies have been used extensively in the past to derive infrared luminosities ($L_{\rm IR}$) given the superior sensitivity of these data to dust emission when compared with observations taken at longer wavelengths (roughly a factor of three times more sensitive than [*Herschel*]{}/PACS to galaxies of a given $L_{\rm IR}$ at $z\sim
2$; @elbaz11). However, a number of observations have highlighted the strong variation in $L_{\rm 7.7}/L_{\rm IR}$ with star-formation rate [@rieke09; @shipley16], star-formation-rate surface density [@elbaz11], and gas-phase metallicity and ionization parameter at high-redshift [@shivaei16]. As such, while we stacked the $24$$\mu$m data for galaxies in our sample, we did not consider these measurements when calculating $L_{\rm IR}$. In Appendix \[sec:l8lir\], we consider further the variation in $L_{\rm
7.7}/L_{\rm IR}$ with other galaxy characteristics.
The [*Herschel*]{} data come from the GOODS-[*Herschel*]{} Open Time Key Program [@elbaz11] and the PACS Evolutionary Probe (PEP) Survey (PI: Lutz; @magnelli13), and probe the rest-frame $\simeq 30-65$$\mu$m dust continuum emission for galaxies at $1.5\le
z\le 2.5$. We chose not to use the SPIRE data given the much coarser spatial resolution of these data (FWHM$\ga 18\arcsec$) relative to the $24$$\mu$m (FWHM$\simeq 5\farcs 4$), $100$$\mu$m (FWHM$\simeq
6\farcs 7$), and $160$$\mu$m (FWHM$\simeq 11\arcsec$) data. The pixel scales of the 24, 100, and 160$\mu$m images are $1\farcs 2$, $1\farcs 2$, and $2\farcs 4$, respectively. As noted above, only the $100$ and $160$$\mu$m data are used to calculate $L_{\rm IR}$.
Of the 4,078 galaxies in the sample discussed above, 3,569 lie within the portions of the [*Herschel*]{} imaging that are $80\%$ complete to flux levels of 1.7 and 5.5mJy for the 100 and 160$\mu$m maps in GOODS-N, respectively, and 1.3 and 3.9mJy for the 100 and 160$\mu$m maps in GOODS-S, respectively. Of these galaxies, 24 (or $0.67\%$) are directly detected with signal-to-noise $S/N>3$ in either the $100$ or $160$$\mu$m images. As we are primarily concerned with constraining the IRX-$\beta$ relation for [*moderately*]{} reddened galaxies, we removed all directly-detected [*Herschel*]{} objects from our sample—the latter are very dusty star-forming galaxies at the redshifts of interest with $L_{\rm IR}\ga 10^{12}$$L_\odot$. The very low frequency of infrared-luminous objects among UV-faint galaxies in general could have been anticipated from the implied low number density of $L_{\rm IR}\ga 10^{12}$$L_\odot$ objects from the IR luminosity function [@reddy08; @magnelli13] and the high number density of UV-faint galaxies inferred from the UV luminosity function [@reddy08; @reddy09; @alavi16] at $z\sim2$. The inclusion of such dusty galaxies does not significantly affect our stacking analysis owing to the very small number of such objects. Excluding these dusty galaxies, our final sample consists of 3,545 galaxies with the redshift and absolute magnitude distributions shown in Figure \[fig:zmag\].
Summary of Sample
-----------------
To summarize, we have combined HDUV UVIS and 3D-HST catalogued photometry to constrain photometric redshifts for galaxies in the GOODS fields and isolate those star-forming galaxies with redshifts $z=1.5-2.5$ down to a limiting near-IR magnitude of $\simeq 27$AB (Table \[tab:sample\]). All galaxies are significantly detected (with $S/N>3$) down to an observed optical (rest-frame UV) magnitude of $27$AB. Our sample includes objects with spectroscopic redshifts in the aforementioned range wherever possible. This sample is then used as a basis for stacking deep [*Herschel*]{} data, as discussed in the next section.
One of the most beneficial attributes of our sample is that it contains the largest number of UV-faint galaxies—extending up to $\approx 3$ magnitudes fainter than the characteristic absolute magnitude at $z\sim 2.3$ ($M^{\ast}_{1700}=-20.70$; @reddy09) and $z\sim 1.9$ ($M^{\ast}_{1500}=-20.16$; @oesch10)—with robust redshifts at $1.5\le z\le 2.5$ assembled to date (Figure \[fig:zmag\]). The general faintness of galaxies in our sample is underscored by their very low detection rate ($S/N>3$) at $24$$\mu$m—85 of 3,545 galaxies, or $\approx 2.4\%$—compared to the $\approx 40\%$ detection rate for rest-frame UV-selected galaxies with ${\cal R}\le 25.5$ [@reddy10]. Consequently, unlike most previous efforts using ground-based UV-selected samples of limited depth, the present sample presents a unique opportunity to evaluate the IRX-$\beta$ relation for the analogs of the very faint galaxies that dominate the UV and bolometric luminosity densities at $z\gg3$ (e.g., @reddy08 [@smit12]), but for which direct constraints on their infrared luminosities are difficult to obtain.
[lccccccccc]{} [**All**]{} & 3545 & 1.94 & -1.71 & $1.54\pm0.14$ & $29\pm6$ & $62\pm17$ & $0.26\pm0.03$ & $2.1\pm0.4$ & 0.80\
\
[**$M_{1600}$ bins:**]{}\
$M_{1600} \le -21$ & 81 & 2.12 & -1.74 & $4.83\pm0.96$ & $177\pm30$ & $377\pm93$ & $1.00\pm0.20$ & $17.1\pm2.4$ & 6.73\
$-21<M_{1600}\le -20$ & 575 & 2.07 & -1.68 & $4.37\pm0.28$ & $87\pm13$ & $171\pm43$ & $0.86\pm0.06$ & $7.6\pm1.0$ & 2.92\
$-20<M_{1600}\le -19$ & 1390 & 1.99 & -1.67 & $2.33\pm0.20$ & $38\pm8$ & $84\pm25$ & $0.41\pm0.03$ & $3.1\pm0.6$ & 1.26\
$M_{1600}>-19$ & 1499 & 1.92 & -1.72 & $1.00\pm0.16$ & $31\pm9$ & $54\pm24$ & $0.16\pm0.03$ & $2.0\pm0.5$ & 0.48\
\
[**$\beta$ bins:**]{}\
$\beta \le -1.70$ & 2084 & 1.96 & -2.04 & $0.52\pm0.16$ & $5\pm7$ & $21\pm18$ & $0.09\pm0.03$ & $<1.4$ & 0.77\
$-1.70<\beta\le -1.40$ & 722 & 1.92 & -1.56 & $1.89\pm0.41$ & $43\pm13$ & $86\pm37$ & $0.31\pm0.07$ & $2.9\pm0.7$ & 0.95\
$-1.40<\beta\le -1.10$ & 345 & 1.94 & -1.26 & $3.92\pm0.55$ & $52\pm18$ & $103\pm56$ & $0.65\pm0.09$ & $3.7\pm1.1$ & 0.93\
$-1.10<\beta\le -0.80$ & 205 & 1.93 & -0.97 & $7.07\pm0.53$ & $80\pm25$ & $173\pm73$ & $1.15\pm0.09$ & $5.7\pm1.4$ & 0.81\
$\beta>-0.80$ & 189 & 1.90 & -0.31 & $5.09\pm0.62$ & $167\pm23$ & $340\pm63$ & $0.80\pm0.10$ & $11.0\pm1.2$ & 0.59\
\
[**$M_{1600}$ & $\beta$ bins:**]{}\
$M_{1600}\le -19$ $+$ $\beta \le -1.4$ & 1616 & 2.01 & -1.86 & $1.86\pm0.21$ & $25\pm9$ & $51\pm21$ & $0.33\pm0.04$ & $1.9\pm0.5$ & 1.58\
$M_{1600}\le -19$ $+$ $\beta > -1.4$ & 430 & 1.97 & -1.02 & $7.20\pm0.50$ & $117\pm19$ & $288\pm46$ & $1.25\pm0.09$ & $9.5\pm1.0$ & 1.47\
$M_{1600}> -19$ $+$ $\beta \le -1.4$ & 1190 & 1.92 & -1.97 & $0.36\pm0.21$ & $13\pm7$ & $26\pm23$ & $0.06\pm0.03$ & $<1.0$ & 0.48\
$M_{1600}> -19$ $+$ $\beta > -1.4$ & 309 & 1.90 & -0.79 & $3.11\pm0.38$ & $95\pm19$ & $176\pm75$ & $0.49\pm0.06$ & $6.3\pm1.2$ & 0.48\
\
[**Stellar Mass & $\beta$ bins:**]{}\
$\log[M_{\ast}/{\rm M}_\odot]\le 9.75$ & 2571 & 1.94 & -1.88 & $0.75\pm0.14$ & $10\pm7$ & $17\pm20$ & $0.13\pm0.03$ & $<1.2$ & 0.71\
$\,\,\,\,\,\,+\beta\le -1.4$ & 2385 & 1.94 & -1.95 & $0.63\pm0.19$ & $11\pm6$ & $28\pm15$ & $0.10\pm0.03$ & $<1.0$ & 0.72\
$\,\,\,\,\,\,+\beta>-1.4$ & 186 & 1.89 & -1.12 & $2.95\pm0.76$ & $19\pm25$ & $72\pm73$ & $0.47\pm0.12$ & $<4.0$ & 0.57\
$\log[M_{\ast}/{\rm M}_\odot]>9.75$ & 974 & 1.96 & -0.92 & $4.93\pm0.40$ & $111\pm 12$ & $229\pm36$ & $0.84\pm0.07$ & $8.3\pm0.7$ & 1.22\
$\,\,\,\,\,\,+\beta\le -1.4$ & 421 & 2.04 & -1.61 & $4.22\pm0.42$ & $59\pm14$ & $118\pm46$ & $0.80\pm0.07$ & $5.1\pm1.1$ & 2.26\
$\,\,\,\,\,\,+\beta>-1.4$ & 553 & 1.94 & -0.72 & $5.23\pm0.48$ & $132\pm14$ & $263\pm44$ & $0.87\pm0.09$ & $9.4\pm0.8$ & 0.90\
\
[**Age bins:**]{}\
$\log[{\rm Age}/{\rm yr}]\le 8.00$ & 81 & 1.96 & -1.49 & $0.32\pm0.92$ & $62\pm39$ & $135\pm91$ & $<0.51$ & $<6.3$ & 0.55\
$\log[{\rm Age}/{\rm yr}]>8.00$ & 3464 & 1.94 & -1.71 & $1.43\pm0.22$ & $25\pm6$ & $52\pm19$ & $0.23\pm0.04$ & $1.8\pm0.4$ & 0.81 \[tab:stackedresults\]
STACKING METHODOLOGY {#sec:stacking}
====================
To mitigate any systematics in the stacked fluxes due to bright objects proximate to the galaxies in our sample, we performed the stacking on residual images that were constructed as follows.[^2] We used the $24$$\mu$m catalogs and point spread functions (PSFs) included in the GOODS-[ *Herschel*]{} data release to subtract all objects with $S/N>3$ in the $24$$\mu$m images, with the exception of the 85 objects in our sample that are directly detected at $24$$\mu$m. Objects with $S/N>3$ in the 24$\mu$m images were used as priors to fit and subtract objects with $S/N>3$ in the 100 and 160$\mu$m images. The result is a set of residual images at 24, 100, and 160$\mu$m for both GOODS fields.
For each galaxy contributing to the stack, we extracted from the 24, 100, and 160$\mu$m residual images regions of $41\times 41$, $52\times 52$, and $52\times 52$ pixels, respectively, centered on the galaxy. The sub-images were then divided by the UV luminosity, $L_{\rm UV}=\nu L_\nu$ at $1600$Å, of the galaxy, and these normalized sub-images for each band were then averaged together using 3$\sigma$ clipping for all the objects in the stack. We performed PSF photometry on the stacked images to measure the fluxes. Because the images are normalized by $L_{\rm UV}$, the stacked fluxes are directly proportional to the average IRX. The corresponding weighted average fluxes in each band ($\langle f_{24}\rangle$, $\langle
f_{100}\rangle$, and $\langle f_{160}\rangle$), where the weights are $1/L_{\rm UV}$, were computed by multiplying the stacked fluxes by the weighted average UV luminosity of galaxies in the stack. The measurement uncertainties of these fluxes were calculated as the 1$\sigma$ dispersion in the fluxes obtained by fitting PSFs at random positions in the stacked images, avoiding the stacked signal itself.
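A schematic version of this normalization-and-clipping step is sketched below (NumPy with `astropy.stats.sigma_clip`), assuming the sub-images have already been cut out of the residual maps; the PSF photometry on the stack and the error estimate from random positions are only indicated in the comments, and the names are illustrative.

```python
import numpy as np
from astropy.stats import sigma_clip

def stack_band(cutouts, L_uv):
    """cutouts: (N, ny, nx) sub-images centred on the N galaxies in one band;
    L_uv: their UV luminosities (nu L_nu at 1600 A).  Returns the stacked,
    L_UV-normalised image and the 1/L_UV-weighted mean L_UV used to convert
    the stacked PSF flux back into a weighted-average flux."""
    norm = cutouts / L_uv[:, None, None]            # normalise each cutout by L_UV
    clipped = sigma_clip(norm, sigma=3.0, axis=0)   # 3-sigma clip pixel by pixel
    stacked = np.ma.mean(clipped, axis=0).filled(0.0)
    w = 1.0 / L_uv
    L_uv_wmean = np.sum(w * L_uv) / np.sum(w)       # weights are 1/L_UV
    return stacked, L_uv_wmean

# <f_band> = (PSF-fit flux measured on `stacked`) * L_uv_wmean, with its
# uncertainty taken as the dispersion of PSF fits at random blank positions.
```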
While stacking on residual images aids in minimizing the contribution of bright nearby objects to the stacked fluxes, this method will not account for objects that are blended with the galaxies of interest in the [*Herschel*]{}/PACS imaging. This presents a particular challenge in our case, where the galaxies are selected from [*HST*]{} photometry, as a single galaxy (e.g., as observed from the ground) may be resolved with [*HST*]{} into several subcomponents, each of which is of course unresolved in the [*Herschel*]{} imaging but each of which will contribute to the stacked flux. Galaxies that are resolved into multiple subcomponents will contribute more than once to the stack, resulting in an over-estimate of the stacked far-IR flux. This effect is compounded by that of separate galaxies contributing more than once to the stack if they happen to be blended at the [ *Herschel*]{}/PACS resolution. This bias was quantified as follows.
For a given band, we used the PSF to generate $N$ galaxies, where $N$ is the number of galaxies in the stack, each having a flux equal to the weighted average flux of the stacked signal. These simulated galaxies were added to the residual image at locations that were shifted from those of the real galaxies by offsets $\delta x$ and $\delta y$ in the x- and y-directions, respectively, where the offsets were chosen randomly. This ensures that the spatial distribution of the simulated galaxies is identical to that of the real galaxies. We then stacked at the locations of the simulated galaxies and compared the simulated and recovered stacked fluxes. This was done 100 times, each time with a different pair of (randomly chosen) $\delta x$ and $\delta y$. The average ratio of the simulated and recovered fluxes, or the bias factor, from these 100 simulations varied from $\approx
0.60-0.90$, depending on the number of galaxies contributing to the stack and the particular band. These simulations were performed for every band and for every stack in our analysis, and the stacked fluxes of the galaxies in our sample were multiplied by the bias factors calculated from these simulations.
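The bias simulation itself can be summarized as follows; `inject_psf` and `stack_and_measure` stand in for the PSF-injection and stack-plus-PSF-photometry steps described above and are passed in as callables, so everything specific to the real pipeline remains an assumption of this sketch.

```python
import numpy as np

def blending_bias(residual_map, psf, positions, f_stack,
                  inject_psf, stack_and_measure, n_real=100, max_shift=30.0):
    """Mean (input flux)/(recovered flux) over n_real realisations in which
    fake point sources of flux f_stack are injected at globally offset copies
    of the real source positions (preserving their spatial distribution)."""
    rng = np.random.default_rng(1)
    ratios = []
    for _ in range(n_real):
        dx, dy = rng.uniform(-max_shift, max_shift, size=2)  # one offset per realisation
        shifted = positions + np.array([dx, dy])
        sim = residual_map.copy()
        for x, y in shifted:
            inject_psf(sim, psf, x, y, flux=f_stack)          # add a fake source
        ratios.append(f_stack / stack_and_measure(sim, shifted))
    return float(np.mean(ratios))

# The measured stacked flux in each band is multiplied by this factor
# (found here to be ~0.60-0.90) to correct for blended, multiply counted flux.
```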
To further investigate this bias, we also stacked all galaxies in our sample that had no [*HST*]{}-detected object within $3\farcs 35$, corresponding to the half-width at half-maximum of the [*Herschel*]{}/PACS 100$\mu$m PSF. While this criterion severely restricts the size of the sample to only 465 objects, it allowed us to verify the bias factors derived from our simulations. Stacking these 465 objects yielded weighted average fluxes at $24$, $100$, and $160$$\mu$m that are within 1$\sigma$ of those values obtained for the entire sample once the bias factors are applied.[^3]
Infrared luminosities were calculated by fitting the @elbaz11 “main sequence” dust template to the stacked $\langle
f_{100}\rangle$ and $\langle f_{160}\rangle$ fluxes. We chose this particular template as it provided the best match to the observed infrared colors $f_{100}/f_{160}$ of the stacks, though we note that the adoption of other templates (e.g., @chary01 [@dale02; @rieke09]) results in $L_{\rm IR}$ that vary by no more than $\approx
50\%$ from the ones calculated here (see @reddy12a for a detailed comparison of $L_{\rm IR}$ computed using different dust templates). Upper limits in $L_{\rm IR}$ are quoted in cases where $L_{\rm IR}$ divided by the modeled uncertainty is $<3$. In a few instances, $\sim 2\,\sigma$ detections of both the $100$ and $160$$\mu$m stacked fluxes yield a modeled $L_{\rm IR}$ that is significant at the $3\sigma$ level.
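Schematically, fitting a fixed-shape template to the two stacked PACS fluxes amounts to solving for a single normalization by weighted least squares. The function below is a generic sketch, not the actual fitting code; the template fluxes and its $8-1000$$\mu$m luminosity would come from redshifting the @elbaz11 template to the mean redshift of the stack.

```python
import numpy as np

def fit_lir(f_obs, f_err, f_tmpl, lir_tmpl):
    """Scale a fixed-shape dust template to the stacked fluxes.
    f_obs, f_err, f_tmpl: observed fluxes, 1-sigma errors, and template fluxes
    in the same bands (here 100 and 160 um); lir_tmpl: the template's 8-1000 um
    luminosity at unit normalisation.  Returns (L_IR, sigma_L_IR)."""
    f_obs, f_err, f_tmpl = map(np.asarray, (f_obs, f_err, f_tmpl))
    w = 1.0 / f_err**2
    amp = np.sum(w * f_obs * f_tmpl) / np.sum(w * f_tmpl**2)  # best-fit normalisation
    amp_err = 1.0 / np.sqrt(np.sum(w * f_tmpl**2))
    return amp * lir_tmpl, amp_err * lir_tmpl

# A stack is reported as an upper limit when L_IR / sigma_L_IR < 3.
```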
The mean UV slope of objects contributing to the stack was computed as a weighted average of the UV slopes of individual objects where, again, the weights are $1/L_{\rm UV}$. These same weights were also applied when calculating the weighted average redshift, absolute magnitude, stellar mass, and age of objects contributing to the stack. Table \[tab:stackedresults\] lists the average galaxy properties and fluxes for each stack performed in our study.
PREDICTED IRX-$\beta$ RELATIONS {#sec:predirx}
===============================
We calculated the relationship between IRX and $\beta$ using the recently developed “Binary Population and Spectral Synthesis” (BPASS) models [@eldridge12; @stanway16] with a stellar metallicity of $Z=0.14Z_\odot$ on the current abundance scale [@asplund09] and a two power-law IMF with $\alpha = 2.35$ for $M_{\ast} > 0.5$$M_\odot$ and $\alpha= 1.30$ for $0.1\le M_{\ast}\le
0.5$$M_\odot$. We assumed a constant star formation with an age of $100$Myr and included nebular continuum emission. This particular BPASS model (what we refer to as our “fiducial” model) is found to best reproduce simultaneously the rest-frame far-UV continuum, stellar, and nebular lines, and the rest-frame optical nebular emission line strengths measured for galaxies at $z\sim 2$ [@steidel16]. Two salient features of this model are the very blue intrinsic UV continuum slope $\beta_0 \simeq -2.62$ relative to that assumed in the @meurer99 calibration of the IRX-$\beta$ relation ($\beta_0 = -2.23$), and the larger number of ionizing photons per unit star-formation-rate (i.e., $\approx 20\%$ larger than those of single star models with no stellar rotation; @stanway16) that are potentially available for heating dust. For comparison, the BPASS model for the same metallicity with a constant star-formation history and an age of $300$Myr (the median for the sample considered here, and similar to the mean age of $z\sim
2$ UV-selected galaxies; @reddy12b) has $\beta_0 = -2.52$. Below, we also consider the more traditionally used @bruzual03 (BC03) models.
We calculated the IRX-$\beta$ relation assuming an energy balance between flux that is absorbed and that which is re-emitted in the infrared [@meurer99]. The absorption is determined by the extinction or attenuation curve, and we considered several choices including the SMC extinction curve of @gordon03, and the @calzetti00 and @reddy15 attenuation curves. The original forms of these extinction/attenuation curves were empirically calibrated at $\lambda \ga 1200$Å. The @calzetti00 and @reddy15 curves were extended down to $\lambda = 950$Å using a large sample of Lyman Break galaxy spectra at $z\sim 3$ and a newly-developed iterative method presented in @reddy16a. The SMC curve of @gordon03 was extended in the same way, and we used these extended versions of the curves in this analysis. For reference, our new constraints on the shape of dust obscuration curves imply a lower attenuation of $\lambda\la 1250$Å photons relative to that predicted from polynomial extrapolations below these wavelengths [@reddy16a]. In practice, because most of the dust heating arises from photons with $\lambda>1200$Å, the implementation of the new shapes of extinction/attenuation curves does little to alter the predicted IRX-$\beta$ relation. For reference, the following equations give the relationship between ${E(B-V)}$ and $\beta$ for the fiducial (BPASS) model with nebular continuum emission and the shapes of the attenuation/extinction curves derived above: $$\begin{aligned}
{\bf \rm Calzetti+00:} & \beta = -2.616 + 4.684\times{E(B-V)}; \nonumber \\
{\bf \rm SMC:} & \beta = -2.616 + 11.259\times{E(B-V)}; \nonumber \\
{\bf \rm Reddy+15:} & \beta = -2.616 + 4.594\times{E(B-V)}.
\label{eq:betaebmv}\end{aligned}$$ The intercepts in the above equations are equal to $-2.520$ for the $300$Myr BPASS model.
For each value of ${E(B-V)}$, we applied the aforementioned dust curves to the BPASS model and calculated the flux absorbed at $\lambda >
912$Å. Based on the high covering fraction ($\ga 92\%$) of optically-thick ${\text{\ion{H}{1}}}$ inferred for $z\sim 3$ galaxies [@reddy16b], we assumed a zero escape fraction of ionizing photons and that photoelectric absorption dominates the depletion of such photons, rather than dust attenuation [@reddy16b]. We then calculated the resultant Ly$\alpha$ flux assuming Case B recombination and the amount of Ly$\alpha$ flux absorbed given the value of the extinction/attenuation curve at $\lambda=1216$Å, and added this to the absorbed flux at $\lambda>912$Å. This total absorbed flux is equated to $L_{\rm IR}$, where we have assumed that all of the dust emission occurs between $8$ and $1000$$\mu$m.
Finally, we divided the infrared luminosity by the UV luminosity of the reddened model at $1600$Å to arrive at the value of IRX. The UV slope was computed directly from the reddened model using the full set of @calzetti94 wavelength windows. Below, we also consider the value of $\beta$ computed using the subset of the @calzetti94 windows at $\lambda < 1740$Å, as well as a single window spanning the range $1300-1800$Å. Formally, we find the following relations between IRX and $\beta$ given Equation \[eq:betaebmv\], where $\beta$ is measured using the full set of @calzetti94 wavelength windows: $$\begin{aligned}
{\bf \rm Calzetti+00:} & {\rm IRX} = 1.67 \times [10^{0.4(2.13\beta + 5.57)} - 1]; \nonumber \\
{\bf \rm SMC:} & {\rm IRX} = 1.79 \times [10^{0.4(1.07\beta + 2.79)} - 1]; \nonumber \\
{\bf \rm Reddy+15:} & {\rm IRX} = 1.68 \times [10^{0.4(1.82\beta + 4.77)} - 1].\end{aligned}$$ These relations may be shifted redder by $\delta\beta = 0.096$ to reproduce the IRX-$\beta$ relations for the 300Myr BPASS model. For reference, Appendix \[sec:sumrelations\] summarizes the relations between $\beta$ and ${E(B-V)}$ and between IRX and $\beta$ for different assumptions of the stellar population model, nebular continuum, Ly$\alpha$ heating, and the normalization of the dust curve.
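For convenience, the fitted relations above can be packaged as simple functions of $\beta$ (measured from the full set of @calzetti94 windows) for the fiducial $100$Myr model; the coefficients below are copied directly from the equations above, and the example value is only meant to illustrate the call.

```python
import numpy as np

# IRX = A * (10**(0.4*(a*beta + b)) - 1), coefficients from the relations above
IRX_COEFFS = {
    'calzetti00': (1.67, 2.13, 5.57),
    'smc':        (1.79, 1.07, 2.79),
    'reddy15':    (1.68, 1.82, 4.77),
}

# beta = beta_0 + slope * E(B-V), with beta_0 = -2.616 for the 100 Myr model
EBMV_SLOPES = {'calzetti00': 4.684, 'smc': 11.259, 'reddy15': 4.594}

def irx_from_beta(beta, curve='smc'):
    A, a, b = IRX_COEFFS[curve]
    return A * (10.0 ** (0.4 * (a * np.asarray(beta) + b)) - 1.0)

def beta_from_ebmv(ebmv, curve='smc', beta0=-2.616):
    return beta0 + EBMV_SLOPES[curve] * np.asarray(ebmv)

print(irx_from_beta(-1.71, 'smc'))  # ~2.5: the SMC prediction at the sample mean beta
```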
Figures \[fig:irxpred1\] and \[fig:irxpred2\] convey a sense for how the stellar population and nebular continuum, Ly$\alpha$ heating, UV slope measurements, and the total-to-selective extinction ($R_{\rm V}$) affect the IRX-$\beta$ relation. Models with a bluer intrinsic UV slope require a larger degree of dust obscuration to reproduce a given observed UV slope, thus causing the IRX-$\beta$ relation to shift towards bluer $\beta$. Relative to the @meurer99 relation, the IRX-$\beta$ relations for the fiducial (BPASS) $100$ and $300$Myr models predict a factor of $\approx 2$ more dust obscuration at a given $\beta$ for $\beta \ga -1.7$, and an even larger factor for $\beta$ bluer than this limit (left panel of Figure \[fig:irxpred1\]). The commonly utilized BC03 model results in a factor of $\approx 30\%$ increase in the IRX at a given $\beta$ relative to the @meurer99 curve, while the $0.28Z_\odot$ BC03 model results in an IRX-$\beta$ relation that is indistinguishable from that of the BPASS model for the same age (right panel of Figure \[fig:irxpred2\]). These predictions underscore the importance of the adopted stellar population model when using the IRX-$\beta$ relation to discern between different dust attenuation/extinction curves (e.g., @meurer99 [@boquien12; @schaerer13]). Note that the inclusion of nebular continuum emission shifts the IRX-$\beta$ relation by $\delta\beta \simeq 0.1$ to the right (i.e., making $\beta$ redder), so that the IRX at a given $\beta$ is $\approx 0.1$dex lower (leftmost panel of Figure \[fig:irxpred2\]).
The specific treatment of dust heating from Ly$\alpha$ photons has a much less pronounced effect on the IRX-$\beta$ relation. If none of the Ly$\alpha$ flux is absorbed by dust—also equivalent to assuming that the escape fraction of ionizing photons is $100\%$—then the resulting IRX is $\approx 10\%$ lower at a given $\beta$ than that predicted by our fiducial model. Similarly, assuming that all of the Ly$\alpha$ is absorbed by dust results in an IRX-$\beta$ relation that is indistinguishable from that of the fiducial model.
The wavelengths over which $\beta$ is computed will also affect the IRX-$\beta$ relation to varying degrees, depending on the specific wavelength ranges and the stellar population model. For the BPASS model, computing $\beta$ from the reddened model spectrum within a single window spanning the range $1300-1800$Å results in an IRX-$\beta$ relation that is shifted by as much as $\delta\beta = 0.4$ to redder slopes. This effect is due to the fact that the stellar continuum rises less steeply towards shorter wavelengths for $\lambda
\la 1500$Å. Consequently, the $\log({\rm IRX})$ is $\simeq
0.18$dex lower in this case relative to that computed based on the full set of @calzetti94 windows. Similar offsets are observed when using the subset of the @calzetti94 windows lying at $\lambda < 1800$Å, while the offsets are not as large with the BC03 model. Most previous studies of the IRX-$\beta$ relation adopted a $\beta$ computed over relatively broad wavelength ranges coinciding with the @calzetti94 windows. However, the systematic offsets in the IRX-$\beta$ relation arising from the narrower wavelength range used to compute UV slopes become relevant for very high-redshift (e.g., $z\ga 8$) galaxies where [*Hubble*]{} photometry is typically used to constrain the UV slope and where such observations only go as red as rest-frame $\la 1800$Å.
Finally, the rightmost panel of Figure \[fig:irxpred2\] shows the effect of lowering the total-to-selective extinction ($R_{\rm V}$), or normalization, of the attenuation/extinction curves by various amounts.
Of the [*physical*]{} factors discussed above, the IRX-$\beta$ relation is most sensitive to the effects of changing the intrinsic UV slope and/or $R_{\rm V}$. To underscore the importance of the assumed stellar population when interpreting the IRX-$\beta$ relation, we show in Figure \[fig:keyplot\] the comparison of our fiducial BPASS model assuming the @calzetti00 curve and an intrinsic $\beta_0 =
-2.23$ (accomplished by simply shifting the model to asymptote to this intrinsic value), along with the same model assuming an SMC curve with $\beta_0 = -2.62$. As is evident from this figure, the two IRX-$\beta$ relations that assume different attenuation curves and intrinsic UV slopes have a significant overlap (within a factor two in IRX) over the range $-2.1\la \beta \la -1.3$. Notably this range includes the typical $\beta \simeq -1.5$ found for UV-selected galaxies at $z\sim 2$ (see @reddy12a). In the next section, we examine these effects further in the context of the stacked constraints on IRX-$\beta$ provided by the combined HDUV and 3D-HST samples.
DISCUSSION {#sec:discussion}
==========
IRX-$\beta$ for the Entire Sample {#sec:irxbeta}
---------------------------------
As a first step in constraining the IRX-$\beta$ relation at $z=1.5-2.5$, we stacked galaxies in bins of UV slope. The resulting IRX for each of these bins, as well as for the whole sample, are shown in Figure \[fig:irxbetarv\]. The predicted IRX-$\beta$ relations for different assumptions of the stellar population (BPASS or BC03) intrinsic UV slope, $\beta_0$, and the difference in normalization of the dust curves, $\delta R_{\rm V}$, are also shown. To account for the former, we simply shifted the fiducial relation (computed assuming $\beta_0 = -2.62$) so that it asymptotes to a redder value of $\beta_0
= -2.23$, similar to that assumed in @meurer99.
Our stacked results indicate a highly significant ($\ga 20\sigma$) correlation between IRX and $\beta$. However, none of the predicted relations calculated based on assuming an intrinsic UV slope of $\beta_0 = -2.23$, as in @meurer99, are able to reproduce our stacked estimates for the full range of $\beta$ considered here. For example, the upper left panel of Figure \[fig:irxbetarv\] shows that while both the @calzetti00 and @reddy15 attenuation curves predict IRX that are within $3\sigma$ of our stacked values for $\beta < -1.2$, they over-predict the IRX for galaxies with redder $\beta$.
Lowering the normalization of the @reddy15 attenuation curve by $\delta R_{\rm V} = 1.5$ results in a better match to the stacked determinations, but with some disagreement (at the $>3\sigma$ level) with the stack of the entire sample (lower left panel of Figure \[fig:irxbetarv\]). @reddy15 estimated the systematic uncertainty in their determination of $R_{\rm V}$ to be $\delta R_{\rm
V}\approx 0.4$, which suggests that their curve may not have a normalization as low as $R_{\rm V} = 1.0$ given their favored value of $R_{\rm V} = 2.51$. Regardless, without any modifications to the normalizations and/or shapes of the attenuation curves in the literature [@calzetti00; @gordon03; @reddy15], the corresponding IRX-$\beta$ relations are unable to reproduce our stacked estimates if we assume an intrinsic UV slope of $\beta_0 = -2.23$. At face value, these results suggest that the attenuation curve describing our sample is steeper than the typically utilized @calzetti00 relation, but grayer than the SMC extinction curve. However, this conclusion depends on the intrinsic UV slope of the stellar population, as we discuss next.
Independent evidence favors the low-metallicity BPASS model in describing the underlying stellar populations of $z\sim 2$ galaxies [@steidel16]. The very blue intrinsic UV slope characteristic of this model—as well as those of the BC03 models with comparable stellar metallicities (e.g., the $0.28Z_\odot$ BC03 model with the same high-mass power-law index of the IMF as the BPASS model has $\beta_0 = -2.65$)—is also favored in light of the non-negligible number of galaxies in our sample ($\approx 9\%$) that have $\beta<-2.23$ at the $3\sigma$ level, the canonical value assumed in @meurer99. Figure \[fig:irxpred1\] shows that the low-metallicity models with blue $\beta_0$ result in IRX-$\beta$ relations that are significantly shifted relative to those assuming redder $\beta_0$. With such models, we find that our stacked measurements are best reproduced by an SMC-like extinction curve (upper right-hand panel of Figure \[fig:irxbetarv\]), in the sense that all of the measurements lie within $3\sigma$ of the associated prediction. On the other hand, with such stellar population models, grayer attenuation curves (e.g., @calzetti00) over-predict the IRX at a given $\beta$ by a factor of $\approx 2-7$. More generally, we find that the slope of the IRX-$\beta$ relation implied by our stacked measurements is better fit with that obtained when considering the SMC extinction curve, while grayer attenuation curves lead to a more rapid rise in IRX with increasing $\beta$.
Our stacked measurements and predicted IRX-$\beta$ curves are compared with several results from the literature in Figure \[fig:irxbetalit\]. In the context of the IRX-$\beta$ predictions that adopt sub-solar metallicities, we find that most of the stacked measurements for UV-selected galaxies at $z\sim 1.5-3.0$ suggest a curve that is SMC-like, at least for $\beta \la -0.5$. Several of the samples, including those of @heinis13, @alvarez16, and @sklias14, indicate an IRX that is larger than the SMC prediction for $\beta \ga -0.5$. Such a behavior is not surprising given that the dust obscuration has been shown to decouple from the UV slope for galaxies with large star-formation rates, as is predominantly the case for most star-forming galaxies with very red $\beta$ [@goldader02; @chapman05; @reddy06a; @reddy10; @penner12; @casey14b; @salmon16].
As discussed in a number of studies [@reddy06a; @reddy10; @penner12; @casey14b; @koprowski16], dusty galaxies in general can exhibit a wide range in $\beta$ (from very blue to very red) depending on the particular spatial configuration of the dust and UV-emitting stars. Figure \[fig:irxbetalit\] shows that the degree to which such galaxies diverge from a given attenuation curve depends on $\beta_0$. Many of the dusty galaxies that would appear to have IRX larger than the @meurer99 or @calzetti00 predictions may in fact be adequately described by such curves if the stellar populations of these galaxies are characterized by very blue intrinsic UV spectral slopes. On the other hand, if these dusty galaxies have relatively enriched stellar populations, and redder intrinsic slopes, then their departure from the @calzetti00 prediction would be enhanced.
Undoubtedly, large variations in IRX can also be expected with different geometries of dust and stars. Regardless, if sub-solar metallicity models are widely representative of the stellar populations of typical star-forming galaxies at $z\ga 1.5$, then our stacked measurements, along with those in the literature, tend to disfavor gray attenuation curves for these galaxies. The large sample studied here, as well as those of @bouwens16b and @alvarez16, suggest an SMC-like curve. At first glance, this conclusion may appear to be at odds with the wide number of previous investigations that have found that the @meurer99 and @calzetti00 attenuation curves generally apply to moderately-reddened star-forming galaxies at $z\ga 1.5$ (e.g., @nandra02 [@seibert02; @reddy04; @reddy06a; @daddi07a; @pannella09; @reddy10; @magdis10; @overzier11; @reddy12a; @forrest16; @debarros16]; c.f., @heinis13).
In the framework of our present analysis, the reconciliation between these results is simple. Namely, our analysis does not call into question previous [*measurements*]{} of IRX-$\beta$, but calls for a different [*interpretation*]{} of these measurements. In the previous interpretation, most of the stacked measurements from the literature were found to generally agree with the @meurer99 relation [ *if we assume a relatively red intrinsic slope of $\beta=-2.23$*]{}. In the present interpretation, we argue that sub-solar metallicity models necessitate a steeper attenuation curve in order to reproduce the measurements of IRX-$\beta$ (e.g., see also @cullen17). Our conclusion is aided by the larger dynamic range in $\beta$ probed by the HDUV and 3D-HST samples, which allows us to better discriminate between different curves given that their corresponding IRX-$\beta$ relations separate significantly at redder $\beta$ (Figures \[fig:keyplot\] and \[fig:irxbetalit\]).
IRX Versus Stellar Mass {#sec:irxmass}
-----------------------
The well-studied correlations between star-formation rate and stellar mass (e.g., @noeske07 [@reddy06a; @daddi07a; @pannella09; @wuyts11; @reddy12b; @whitaker14; @schreiber15; @shivaei15]), and between star-formation rate and dust attenuation (e.g., @wang96 [@adelberger00; @reddy06b; @buat07; @buat09; @burgarella09; @reddy10]), have motivated the use of stellar mass as a proxy for attenuation [@pannella09; @reddy10; @whitaker14; @pannella15; @alvarez16; @bouwens16b] as the former can be easily inferred from fitting stellar population models to broadband photometry. The connection between reddening and stellar mass can also be deduced from the mass-metallicity relation [@tremonti04; @kewley08; @andrews13; @erb06a; @maiolino08; @henry13; @maseda14; @steidel14; @sanders15].
Motivated by these results, we stacked galaxies in two bins of stellar mass divided at $M_{\ast} = 10^{9.75}$$M_\odot$ (and further subdivided into bins of $\beta$; Table \[tab:stackedresults\]) to investigate the dependence of the IRX-$\beta$ relation on stellar mass.[^4] The high-mass subsample ($M_\ast >
10^{9.75}$$M_\odot$) exhibits a redder UV slope ($\langle\beta\rangle = -0.92$) and larger IRX ($\langle{\rm
IRX}\rangle=6.8\pm 0.6$) than the low-mass subsample with $\langle\beta\rangle = -1.88$ and $\langle{\rm IRX}\rangle<1.7$ ($3\sigma$ upper limit). Moreover, the high-mass subsample exhibits an IRX-$\beta$ relation consistent with that predicted assuming our fiducial stellar population model and the SMC extinction curve (Figure \[fig:irxmass\]). Separately, the low-mass subsample as a whole, as well as the subset of the low-mass galaxies with $\beta \le
-1.4$, have $3\sigma$ upper limits on IRX that require a dust curve that is at least as steep as the SMC.
The constraints on the IRX-$M_{\ast}$ relation from our sample are shown relative to previous determinations in the right panel of Figure \[fig:irxmass\]. The “$z\sim 2$ Consensus Relation” presented in @bouwens16b was based on the IRX-$M_{\ast}$ trends published in @reddy10, @whitaker14, and @alvarez16. Formally, our stacked detection for the high-mass ($M_{\ast}>10^{9.75}$$M_\odot$) subsample lies $\approx 4\sigma$ below the consensus relation, but is in excellent agreement with the mean IRX found for galaxies of similar masses ($\simeq 2\times
10^{10}$$M_\odot$) in @reddy10. The upper limit in IRX for the low-mass ($M_{\ast}\le 10^{9.75}$$M_\odot$) sample is consistent with the predictions from the consensus relation. Based on these comparisons, we conclude that the IRX-$M_{\ast}$ relation from the present work is in general agreement with previous determinations, and lends support for previous suggestions that stellar mass may be used as a rough proxy for dust attenuation in high-redshift star-forming galaxies (e.g., @pannella09 [@reddy10; @bouwens16b]). Moreover, these comparisons underscore the general agreement between our IRX measurements (e.g., as a function of $\beta$ and $M_{\ast}$) and those in the literature, in spite of our different interpretation of these results in the context of the dust obscuration curve applicable to high-redshift galaxies (Section \[sec:irxbeta\]).
IRX Versus UV Luminosity {#sec:irxuvlum}
------------------------
As alluded to in Section \[sec:intro\], quantifying the dust attenuation of UV-faint (sub-$L^{\ast}$) galaxies has been a longstanding focus of the high-redshift community. While the steep faint-end slopes of UV luminosity functions at $z\ga 2$ imply that such galaxies dominate the UV luminosity density at these redshifts, knowledge of their dust obscuration is required to assess their contribution to the cosmic star-formation-rate density (e.g., @steidel99 [@adelberger00; @bouwens07; @reddy08]). Several studies have argued that UV-faint galaxies are on average less dusty than their brighter counterparts [@reddy08; @bouwens09; @bouwens12; @kurczynski14]. This inference is based on the fact that the observed UV luminosity is expected to monotonically correlate with star-formation rate for galaxies fainter than $L^{\ast}$ (e.g., see Figure 13 of @reddy10 and Figure 10 of @bouwens09) and that the dustiness is a strong function of star-formation rate [@wang96; @adelberger00; @reddy06b; @buat07; @buat09; @burgarella09; @reddy10].
While several investigations have shown evidence for a correlation between IRX and UV luminosity [@bouwens09; @reddy10; @reddy12a], others point to a roughly constant IRX as a function of UV luminosity [@buat09; @heinis13]. As discussed in these studies, the different findings may be a result of selection biases, in the sense that UV-selected samples will tend to miss the dustiest galaxies, which also have faint observed UV luminosities. Hence, for purely UV-selected samples, IRX would be expected to decrease with $L_{\rm
UV}$. Alternatively, the rarity of highly dust-obscured galaxies compared to intrinsically faint galaxies (e.g., as inferred from the shapes of the UV and IR luminosity functions; @caputi07 [@reddy08; @reddy09; @magnelli13]) implies that in a number-weighted sense, the mean bolometric luminosity should decrease towards fainter $L_{\rm UV}$. How this translates to the variation of IRX with $L_{\rm UV}$ will depend on how quickly the dust can build up in dynamically-relaxed faint galaxies. From a physical standpoint, dust enrichment on timescales much shorter than the dynamical timescale would suggest a relatively constant IRX as a function of $L_{\rm UV}$.
The HDUV/3D-HST sample presents a unique opportunity to evaluate the trend between IRX and $L_{\rm UV}$ as the selection is based on rest-optical criteria. Consequently, our sample is less sensitive to the bias against dusty galaxies that is expected to be significant in UV-selected samples (e.g., @chapman00 [@barger00; @buat05; @burgarella05; @reddy05; @reddy08; @casey14a]). Indeed, Figure \[fig:zmag\] shows that our sample includes a large number of UV-faint galaxies that are also quite dusty based on their red $\beta
\ga -1.4$—these galaxies, while dusty, still have bolometric luminosities that are sufficiently faint to be undetected in the PACS imaging.
Figure \[fig:irxuv\] shows the relationship between dust attenuation and UV luminosity for our sample (gray points) and those of @heinis13 at $z\sim 1.5$ and @alvarez16 at $z\sim 3$. The latter two indicate an IRX that is roughly constant with $L_{\rm
UV}$, but one which is offset towards higher IRX by a factor of $2-3$ relative to our sample. This offset is easily explained by the fact that the IRX-$L_{\rm UV}$ relations for the $z\sim 1.5$ and $z\sim 3.0$ samples were determined from galaxies that are on average significantly redder than in our sample. In particular, most of the constraints on IRX-$L_{\rm UV}$ from these studies come from galaxies with $\beta \ga -1.5$. When limiting our stacks to galaxies with $\beta > -1.4$ (red points in Figure \[fig:irxuv\]), we find excellent agreement with the IRX-$L_{\rm UV}$ relations found by @heinis13 and @alvarez16. On the other hand, stacking those galaxies in our sample with $\beta\le -1.4$ results in an IRX-$L_{\rm UV}$ relation that is, not surprisingly, offset towards lower IRX than for the sample as a whole.
Thus, while the IRX-$L_{\rm UV}$ relation appears to be roughly constant for all of the samples considered here, Figure \[fig:irxuv\] implies that the $\beta$ distribution as a function of $L_{\rm UV}$ is at least as important as the presence of dusty star-forming galaxies in shaping the observed IRX-$L_{\rm UV}$ relation. Furthermore, the trend of a bluer average $\beta$ with decreasing $L_{\rm UV}$ (e.g., Figure \[fig:zmag\]; see also @reddy08 [@bouwens09]) suggests that the mean reddening should be correspondingly lower for UV-faint galaxies than for UV-bright ones once the effect of the less numerous dusty galaxies with red $\beta$ is accounted for.
The blue ($\beta \le -1.4$) star-forming galaxies in our sample have IRX$\la 1$, such that the infrared and UV luminosities contribute equally to the bolometric luminosities of these galaxies. The expectation of rapid dust enrichment from core-collapse supernovae [@todini01] implies that it is unlikely that the dust obscuration can be significantly lower than this value when observed for dynamically-relaxed systems. Consequently, the observation that the mean UV slope becomes progressively bluer for fainter galaxies at high-redshift (e.g., @bouwens09 [@wilkins11; @finkelstein12; @alavi14]) may simply be a result of systematic changes in metallicity and/or star-formation history where the intrinsic UV slope also becomes bluer but IRX remains relatively constant (e.g., Figure \[fig:irxpred1\]; @wilkins11 [@wilkins13; @alavi14]). [*Thus, the common observation that UV-faint galaxies are bluer than their brighter counterparts may not directly translate into a lower dust obscuration for the former.*]{}
Moreover, $\beta$ is relatively insensitive to IRX for $\beta-\beta_0\la 0.2$ (e.g., Figure \[fig:irxpred1\]). Our results thus indicate that caution is warranted when using the IRX-$\beta$ relation to infer the dust reddening of blue galaxies at high redshift, as such estimates depend strongly on the assumed intrinsic UV slope of the stellar population and remain quite uncertain when the difference between the observed and intrinsic UV slopes is small.
Young/Low-Mass Galaxies {#sec:young}
-----------------------
ALMA has opened up new avenues for investigating the ISM and dust content of very high-redshift galaxies, and a few recent efforts have focused in particular on the \[C II\]$158$$\mu$m line in galaxies at $z\ga 5$ [@schaerer15; @maiolino15; @watson15; @capak15; @willott15; @knudsen16; @pentericci16] and dust continuum at mm wavelengths. @capak15 report on ALMA constraints on the IRX of a small sample of 10 $z\sim 5.5$ LBGs and find that they generally fall below the SMC extinction curve. The disparity between the SMC curve and their data points is increased if one adopts a bluer intrinsic slope than that assumed in @meurer99, a reasonable expectation for these high-redshift and presumably lower metallicity galaxies.
More generally, earlier results suggesting that “young” LBGs (ages $\la 100$Myr) and/or those with lower stellar masses at $z\ga 2$ are consistent with an SMC curve [@baker01; @reddy06a; @siana08; @siana09; @reddy10; @reddy12a; @bouwens16b] would also require a steeper-than-SMC curve if their intrinsic slopes are substantially bluer than that normally assumed in interpreting the IRX-$\beta$ relation. Unfortunately, there is only a very small number of galaxies in our sample that have SED-determined ages of $<100$Myr (81 galaxies), and stacking them results in an unconstraining upper limit on IRX (Table \[tab:stackedresults\]).
Note that an ambiguity arises because the ages are derived from SED-fitting, which assumes some form of the attenuation curve. Following @reddy10, the number of galaxies considered “young” would be lower under the assumption of SMC extinction rather than @calzetti00 attenuation, as the former results in lower ${E(B-V)}$ for a given UV slope, translating into older ages. Self-consistently modeling the SEDs based on the location of galaxies in the IRX-$\beta$ plane results in fewer $<100$Myr galaxies, but of course their location in the IRX-$\beta$ plane is unaffected [@reddy10], as is the conclusion that such young galaxies would require a dust curve steeper than that of the SMC if they have blue intrinsic UV slopes. In addition, as noted in Section \[sec:irxmass\], galaxies in our low-mass ($M_\ast \le 10^{9.75}$$M_\odot$) subsample appear to also require a dust curve steeper than that of the SMC.
Figure \[fig:irxyoung\] summarizes a few recent measurements of IRX-$\beta$ for young and low-mass galaxies at $z\sim 2$ [@baker01; @siana09; @reddy10; @reddy12a], low-mass galaxies and LBGs in general at $z\sim 4-10$ [@bouwens16b], and our own measurements. The compilation from @reddy10 and @reddy12a includes 24$\mu$m constraints on the IRX of young galaxies. Shifting their IRX by $\approx 0.35$dex to higher values to account for the deficit of PAH emission in galaxies with ages $\la 400$Myr [@shivaei16] results in upper limits or a stacked measurement of IRX that are broadly consistent with either the SMC curve or one that is steeper.
Considering the [*Herschel*]{} measurements here and in @reddy12a, and ALMA measurements at $z\sim 2-10$ [@capak15; @bouwens16b], we find that young/low-mass galaxies at $z\ga 2$ follow a dust curve steeper than that of the SMC, particularly in the context of a blue intrinsic slope, $\beta_0 =
-2.62$. Note that unlike cB58, which has a stellar metallicity of $\simeq 0.25$$Z_\odot$ [@pettini00], the Cosmic Eye has a metallicity of $\sim 0.5$$Z_\odot$, suggesting a relatively red intrinsic UV slope. In this case, the IRX of the Cosmic Eye may be described adequately by the SMC curve. There is additional evidence of suppressed IRX ratios at lower mass from rest-optical emission line studies of $z\sim 2$ galaxies. In particular, @reddy15 found that a significant fraction of $z\sim 2$ galaxies with $M_{\ast}\la
10^{9.75}$$M_\odot$ have very red $\beta$, or ${E(B-V)}$, relative to the reddening deduced from the Balmer lines (e.g., see their Figure 17), implying that such galaxies would have lower IRX for a given $\beta$ than that predicted by common attenuation/extinction curves.
Evidence for curves steeper than the SMC average has been observed along certain sightlines within the SMC [@gordon03], in the Milky Way and some Local Group galaxies [@gordon03; @sofia05; @gordon09; @amanullah14], and in quasars (e.g., @hall02 [@jiang13; @zafar15]). Our results, combined with those in the literature, suggest that such steep curves may be typical of low-mass and young galaxies at high redshift. While the attenuation curve will undoubtedly vary from galaxy to galaxy depending on the star-formation history, age, metallicity, dust composition, and geometrical configuration of stars and dust, the fact that young/low-mass galaxies lie [*systematically*]{} below the IRX-$\beta$ relation predicted with an SMC curve suggests that a steep curve may apply uniformly to such galaxies.
An unresolved issue is the physical reason why young and low-mass galaxies may follow a steeper attenuation curve than their older and more massive counterparts. @reddy12a suggest a possible scenario in which galaxies transition from steep to gray attenuation curves as they age due to star formation occurring over extended regions and/or the cumulative effects of galactic outflows that are able to clear the gas and dust along certain sightlines. On the other hand, if young and low-mass galaxies have higher ionizing escape fractions as a result of lower gas/dust covering fractions (e.g., @reddy16b), then one might expect their attenuation curve to exhibit a shallower dependence on wavelength than the SMC extinction curve.
In any case, curves steeper than the SMC may be possible with a paucity of large dust grains and/or an over-abundance of silicate grains [@zafar15]. In particular, large dust grains may be efficiently destroyed by SN shock waves [@draine79; @mckee89; @jones04], which would have the effect of steepening the dust curve (i.e., such that proportionally more light is absorbed in the UV relative to the optical). If the destruction of large grains is significant in young/low-mass galaxies, then it may explain both their red $\beta$ and their low IRX. Alternatively, the lower gas-phase \[Si/O\] measured from the composite rest-frame UV spectrum of $z\sim 2$ galaxies relative to the solar value indicates significant depletion of Si onto dust grains, while carbon is under-abundant relative to oxygen [@steidel16]. This result may suggest an enhancement of silicate over carbonaceous grains that may result in a steeper attenuation curve.
Implications for Stellar Populations and Ionizing Photon Production Efficiencies
-------------------------------------------------------------------------
Inferring the intrinsic stellar populations of galaxies based on their observed photometry requires one to adopt some form of the dust attenuation curve. It is therefore natural to ask whether these inferences change in light of our findings of a steeper (SMC-like) attenuation curve for $z\sim 2$ galaxies with intrinsically blue UV spectral slopes. To address this issue, we re-modeled (using FAST) the SEDs of galaxies in our sample assuming two cases: (a) a $1.4Z_\odot$ BC03 model (canonically referred to as the “solar” metallicity model using old solar abundance measurements) with the @calzetti00 attenuation curve; and (b) a $0.28Z_\odot$ BC03 model with an SMC extinction curve. The ages derived in case (b) are on average $30\%$ older than those derived in case (a), primarily because less reddening is required to reproduce a given UV slope with the SMC extinction curve, resulting in older ages (e.g., see discussion in @reddy12b). Similarly, the stellar masses derived in case (b) are on average $30\%$ lower than those derived in case (a).
Perhaps most relevant in the context of our analysis are the changes in inferred reddening and SFR. As shown in Figure \[fig:sedcompare\], the reddening deduced from the SMC extinction curve is lower than that obtained with @calzetti00 attenuation curve, owing to the steep rise in the far-UV of the former relative to the latter. The largest differences in ${E(B-V)}$ and SFR occur for the reddest objects and those with larger SFRs. We find the following relations between the reddening and SFRs derived for the two cases discussed above: $$\begin{aligned}
{E(B-V)}_{0.28Z_\odot,{\rm SMC}} = 0.65{E(B-V)}_{1.4Z_\odot,{\rm Calz}}\end{aligned}$$ and $$\begin{aligned}
\log[{\rm SFR}_{0.28Z_\odot,{\rm SMC}}/M_\odot\,{\rm yr}^{-1}] = \nonumber \\
0.79 \log[{\rm SFR}_{1.4Z_\odot,{\rm Calz}}/M_\odot\,{\rm yr}^{-1}] -0.05.\end{aligned}$$ The lower SFRs derived in the SMC case result in a factor of $\approx
2$ lower SFR densities at $z\ga 2$, as discussed in some depth in @bouwens16b. The general applicability of an SMC-like dust curve to high-redshift galaxies is also of particular interest when considering its impact on the ionizing efficiencies of such galaxies, a key input to reionization models [@robertson13; @bouwens15b; @bouwens16a]. Specifically, the ionizing photon production efficiency, $\xi_{\rm ion}$, is simply the ratio of the rate of H-ionizing photons produced to the non-ionizing UV continuum luminosity. This quantity is directly related to another commonly-used ratio, namely the Lyman continuum flux density (e.g., at $800$ or $900$Å) to the non-ionizing UV flux density (e.g., at $1600$Å), $f_{\rm 900}/f_{\rm 1600}$.
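
For convenience, the two fitted conversions above can be wrapped in simple helper functions. The sketch below (the function names are ours and purely illustrative) encodes the relations as written and evaluates the implied SFR ratios, which reproduce the UV dust-correction factors quoted in the next paragraph:

```python
import math

def ebmv_smc_from_calzetti(ebmv_calz):
    # E(B-V) from the 0.28 Z_sun + SMC fits versus the 1.4 Z_sun + Calzetti fits
    return 0.65 * ebmv_calz

def log_sfr_smc_from_calzetti(log_sfr_calz):
    # log10(SFR / [M_sun/yr]) conversion between the same two sets of assumptions
    return 0.79 * log_sfr_calz - 0.05

for sfr_calz in (1.0, 10.0, 100.0):
    sfr_smc = 10 ** log_sfr_smc_from_calzetti(math.log10(sfr_calz))
    # Ratios of ~1.12, 1.82, and 2.95: the factors by which the SMC-based
    # dust corrections (and hence SFRs) are lower, as quoted below.
    print(f"SFR_Calz = {sfr_calz:6.1f} M_sun/yr  ->  SFR_Calz / SFR_SMC = {sfr_calz / sfr_smc:.2f}")
```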
The ionizing photon production efficiency is typically computed by combining Balmer emission lines, such as H$\alpha$, with UV continuum measurements (e.g., @bouwens15b), but both the lines and the continuum must be corrected for dust attenuation. In the context of our present analysis, the dust corrections for the UV continuum are lower by factors of $1.12$, $1.82$, and $2.95$, for galaxies with Calzetti-inferred SFRs of 1, 10, and 100$M_\odot$yr$^{-1}$, respectively. Thus, for given Balmer line luminosities that are corrected for dust using the Galactic extinction curve [@calzetti94; @steidel14; @reddy15], $\xi_{\rm ion}$ would be correspondingly larger by the same factors by which the dust-corrected UV luminosities are lower.
A secondary effect that will boost $\xi_{\rm ion}$ above the predictions from single star solar metallicity stellar population models is the higher ionizing photon output associated with lower metallicity and rotating massive stars [@eldridge09; @leitherer14]. In particular, the fiducial $0.14Z_\odot$ BPASS model that includes binary evolution and an IMF extending to 300$M_\odot$ predicts a factor of $\approx 3$ larger H$\alpha$ luminosity per solar mass of star formation after 100Myr of constant star formation relative to that computed using the @kennicutt12 relation, which assumes a solar metallicity Starburst99 model. On the other hand, the UV luminosity is larger by $\approx 30\%$. Thus, such models predict a $\xi_{\rm ion}$ that is elevated by a factor of $\approx 2$ relative to those assumed in standard calibrations between H$\alpha$/UV luminosity and SFR (e.g., see also @nakajima16 [@bouwens16a; @stark15; @reddy16b]). Consequently, calculations or predictions of $\xi_{\rm ion}$ for high-redshift galaxies should take into account the effects of a steeper attenuation curve and lower metallicity stellar populations that may include stellar rotation/binarity. [*Our results suggest that an elevated value of $\xi_{\rm ion}$ is not only a feature of very high-redshift ($z\ga
6$) galaxies, but may be quite typical for $z\sim 2$ galaxies as well.*]{}
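
The sizes of the two effects discussed in this subsection can be summarized with simple bookkeeping of the quoted factors. The snippet below is purely illustrative (the variable names are ours) and lists the two effects separately rather than attempting to combine them:

```python
# Bookkeeping of the factors quoted in the text (illustrative only).

# (1) Attenuation-curve effect: the SMC-based UV dust corrections are lower
#     by these factors for Calzetti-inferred SFRs of 1, 10, and 100 M_sun/yr,
#     so xi_ion inferred from dust-corrected Balmer + UV luminosities is
#     correspondingly larger by the same factors.
uv_correction_factors = {1: 1.12, 10: 1.82, 100: 2.95}

# (2) Stellar-population effect: the 0.14 Z_sun BPASS model gives ~3x more
#     H-alpha but only ~1.3x more UV luminosity per unit SFR than a solar
#     metallicity Starburst99 calibration.
sps_boost = 3.0 / 1.3   # ~2.3, i.e., "a factor of ~2"

for sfr, f_uv in uv_correction_factors.items():
    print(f"SFR_Calz ~ {sfr:3d} M_sun/yr: xi_ion higher by ~{f_uv:.2f}x (dust curve), "
          f"~{sps_boost:.1f}x (stellar population)")
```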
CONCLUSIONS
===========
In this paper, we have presented an analysis of the relationship between dust obscuration (IRX$=L_{\rm IR}/L_{\rm UV}$) and other commonly-derived galaxy properties, including UV slope ($\beta$), stellar mass ($M_{\ast}$), and UV luminosity ($L_{\rm UV}$), for a large sample of 3,545 rest-optically selected star-forming galaxies at $z=1.5-2.5$ drawn from HDUV UVIS and 3D-HST photometric catalogs of the GOODS fields. Our sample is unique in that it significantly extends the dynamic range in $\beta$ and $L_{\rm UV}$ compared to previous UV-selected samples at these redshifts. In particular, close to $60\%$ of the objects in our sample have UV slopes bluer than $\beta = -1.70$ and $>95\%$ have rest-frame UV absolute magnitudes fainter than the characteristic magnitude at these redshifts, with the faintest galaxies having $L_{\rm UV}\approx 0.05L^{\ast}_{\rm UV}$.
We use stacks of the deep [*Herschel*]{}/PACS imaging in the GOODS fields to measure the average dust obscuration for galaxies in our sample and compare it to predictions of the IRX-$\beta$ relation for different stellar population models and attenuation/extinction curves using energy balance arguments. Specifically, we consider the commonly adopted @bruzual03 stellar population models for different metallicities ($0.28$ and $1.4Z_\odot$), as well as the low-metallicity ($0.14Z_\odot$) BPASS model. Additionally, we compute predictions of the IRX-$\beta$ relations for the @calzetti00 and @reddy15 dust attenuation curves, and the SMC extinction curve. The lower metallicity stellar population models result in significant shifts in the IRX-$\beta$ relation of up to $\delta\beta =
0.4$ towards bluer $\beta$ relative to the canonical relation of @meurer99. In the context of the lower metallicity stellar population models applicable for high-redshift galaxies, we find that the strong trend between IRX and $\beta$ measured from the HDUV and 3D-HST samples follows most closely that predicted by the SMC extinction curve. We find that grayer attenuation curves (e.g., @calzetti00) over-predict the IRX at a given $\beta$ by a factor of $\ga 3$ when assuming intrinsically blue UV spectral slopes. [*Thus, our results suggest that an SMC curve is the one most applicable to lower stellar metallicity populations at high redshift.*]{}
Performing a complementary stacking analysis of the [*Spitzer*]{}/MIPS $24$$\mu$m images implies an average mid-IR-to-IR luminosity ratio, $\langle L_{7.7}/L_{\rm IR}\rangle$, that is a factor of $3-4$ lower for galaxies with the reddest UV slopes ($\beta>-0.5$) and for the UV-brightest ($M_{1600}\la -21$) and UV-faintest ($M_{1600}\ga -19$) galaxies than the average for all galaxies in our sample (Appendix \[sec:l8lir\]). These results indicate large variations in the conversion between rest-frame 7.7$\mu$m and IR luminosity.
At any given UV luminosity, galaxies with redder $\beta$ have larger IRX. IRX-$L_{\rm UV}$ relations for blue and red star-forming galaxies average together to result in a roughly constant IRX of $\simeq 3-4$ over roughly two decades in UV luminosity ($2\times 10^9
\la L_{\rm UV}\la 2\times 10^{11}$$L_\odot$). Consequently, the bluer $\beta$ observed for UV-faint galaxies seen in this work and previous studies may simply reflect intrinsically bluer UV spectral slopes for such galaxies, rather than signifying changes in the dust obscuration.
Galaxies with stellar masses $M_{\ast}>10^{9.75}$$M_\odot$ have an IRX-$\beta$ relation that is consistent with the SMC extinction curve, while the lower mass galaxies in our sample with $M_{\ast}\le
10^{9.75}$$M_\odot$ have an IRX-$\beta$ relation that is [*at least*]{} as steep as the SMC. The shifting of the IRX-$\beta$ relations towards bluer $\beta$ for the lower metallicity stellar populations expected for high-redshift galaxies implies that the low-mass galaxies in our sample, as well as the low-mass and young galaxies from previous studies, require a dust curve steeper than that of the SMC. The low metallicity stellar populations favored for high-redshift galaxies imply steeper attenuation curves and higher ionizing photon production efficiencies which, in turn, facilitate the role that galaxies may have in reionizing the universe at very high redshift or keeping the universe ionized at lower redshifts ($z\sim
2$).
There are several future avenues for building upon this work. First, detailed spectral modeling of the rest-UV and/or rest-optical spectra of galaxies (e.g., @steidel16 [@reddy16b]) may be used to discern their intrinsic spectral slopes and thus disentangle the effects of the intrinsic $\beta$ and the attenuation curve relevant for high-redshift galaxies. Second, the higher spatial resolution and depth of X-ray observations (compared to the far-IR) make them advantageous for investigating the bolometric SFRs and hence dust obscuration for galaxies substantially fainter than those directly detected with either [*Spitzer*]{} or [*Herschel*]{} in reasonable amounts of time, provided that the scaling between X-ray luminosity and SFR can be properly calibrated (e.g., for metallicity effects; @basuzych13). Third, nebular recombination line estimates of dust attenuation (e.g., from the Balmer decrement; @reddy15) may be used to assess the relationship between IRX and $\beta$ for individual star-forming galaxies, rather than through the stacks necessitated by the limited sensitivity of far-IR imaging.
This work was supported by NASA through grant HST-GO-13871 and from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555. K. Penner kindly provided data from his published work in electronic format. NAR is supported by an Alfred P. Sloan Research Fellowship.
, K. L., & [Steidel]{}, C. C. 2000, , 544, 218
, A., [Siana]{}, B., [Richard]{}, J., [et al.]{} 2014, , 780, 143
—. 2016, , 832, 56
, A., [Takagi]{}, T., [Baker]{}, A. J., [et al.]{} 2004, , 612, 222
, J., [Burgarella]{}, D., [Heinis]{}, S., [et al.]{} 2016, , 587, A122
, R., [Goobar]{}, A., [Johansson]{}, J., [et al.]{} 2014, , 788, L21
, B. H., & [Martini]{}, P. 2013, , 765, 140
, M., [Grevesse]{}, N., [Sauval]{}, A. J., & [Scott]{}, P. 2009, , 47, 481
, A. J., [Lutz]{}, D., [Genzel]{}, R., [Tacconi]{}, L. J., & [Lehnert]{}, M. D. 2001, , 372, L37
, A. J., [Cowie]{}, L. L., & [Richards]{}, E. A. 2000, , 119, 2092
, A. R., [Lehmer]{}, B. D., [Hornschemeier]{}, A. E., [et al.]{} 2013, , 774, 152
, E., & [Arnouts]{}, S. 1996, , 117, 393
, M., [Buat]{}, V., [Boselli]{}, A., [et al.]{} 2012, , 539, A145
, R. J., [Illingworth]{}, G. D., [Franx]{}, M., & [Ford]{}, H. 2007, , 670, 928
, R. J., [Illingworth]{}, G. D., [Oesch]{}, P. A., [et al.]{} 2015, , 811, 140
, R. J., [Smit]{}, R., [Labb[é]{}]{}, I., [et al.]{} 2016, , 831, 176
, R. J., [Illingworth]{}, G. D., [Franx]{}, M., [et al.]{} 2009, , 705, 936
, R. J., [Illingworth]{}, G. D., [Oesch]{}, P. A., [et al.]{} 2012, , 754, 83
, R. J., [Aravena]{}, M., [Decarli]{}, R., [et al.]{} 2016, , 833, 72
, G. B., [van Dokkum]{}, P. G., & [Coppi]{}, P. 2008, , 686, 1503
, I., [de Mink]{}, S. E., [Cantiello]{}, M., [et al.]{} 2011, , 530, A115
, G., & [Charlot]{}, S. 2003, , 344, 1000
, V., [Marcillac]{}, D., [Burgarella]{}, D., [et al.]{} 2007, , 469, 19
, V., [Takeuchi]{}, T. T., [Burgarella]{}, D., [Giovannoli]{}, E., & [Murata]{}, K. L. 2009, , 507, 693
, V., [Iglesias-P[á]{}ramo]{}, J., [Seibert]{}, M., [et al.]{} 2005, , 619, L51
, V., [Giovannoli]{}, E., [Heinis]{}, S., [et al.]{} 2011, , 533, A93
, V., [Noll]{}, S., [Burgarella]{}, D., [et al.]{} 2012, , 545, A141
, D., [Buat]{}, V., & [Iglesias-P[á]{}ramo]{}, J. 2005, , 360, 1413
, D., [Buat]{}, V., [Takeuchi]{}, T. T., [Wada]{}, T., & [Pearson]{}, C. 2009, , 61, 177
, D., [Armus]{}, L., [Bohlin]{}, R. C., [et al.]{} 2000, , 533, 682
, D., [Kinney]{}, A. L., & [Storchi-Bergmann]{}, T. 1994, , 429, 582
, P. L., [Carilli]{}, C., [Jones]{}, G., [et al.]{} 2015, , 522, 455
, K. I., [Lagache]{}, G., [Yan]{}, L., [et al.]{} 2007, , 660, 97
, C. L., & [Walter]{}, F. 2013, , 51, 105
, C. M., [Narayanan]{}, D., & [Cooray]{}, A. 2014, , 541, 45
, C. M., [Scoville]{}, N. Z., [Sanders]{}, D. B., [et al.]{} 2014, , 796, 95
, G. 2003, , 115, 763
, S. C., [Blain]{}, A. W., [Smail]{}, I., & [Ivison]{}, R. J. 2005, , 622, 772
, S. C., [Scott]{}, D., [Steidel]{}, C. C., [et al.]{} 2000, , 319, 318
, R., & [Elbaz]{}, D. 2001, , 556, 562
, C., & [Gunn]{}, J. E. 2010, , 712, 833
, F., [McLure]{}, R. J., [Khochfar]{}, S., [Dunlop]{}, J. S., & [Dalla Vecchia]{}, C. 2017, ArXiv e-prints, arXiv:1701.07869
, E., [Dickinson]{}, M., [Morrison]{}, G., [et al.]{} 2007, , 670, 156
, D. A., & [Helou]{}, G. 2002, , 576, 159
, D. A., [Cohen]{}, S. A., [Johnson]{}, L. C., [et al.]{} 2009, , 703, 517
, S., [Reddy]{}, N., & [Shivaei]{}, I. 2016, , 820, 96
, M., [Zamojski]{}, M., [Rujopakarn]{}, W., [et al.]{} 2016, ArXiv e-prints, arXiv:1610.08065
, A., [Siana]{}, B., [Brooks]{}, A. M., [et al.]{} 2015, , 451, 839
, J. L., [Koekemoer]{}, A. M., [Brusa]{}, M., [et al.]{} 2012, , 748, 142
, B. T., & [Salpeter]{}, E. E. 1979, , 231, 438
, B. T., [Dale]{}, D. A., [Bendo]{}, G., [et al.]{} 2007, , 663, 866
, J. S., [McLure]{}, R. J., [Biggs]{}, A. D., [et al.]{} 2017, , 466, 861
, D., [Dickinson]{}, M., [Hwang]{}, H. S., [et al.]{} 2011, , 533, A119
, J. J., & [Stanway]{}, E. R. 2009, , 400, 1019
—. 2012, , 419, 479
, C. W., [Gordon]{}, K. D., [Rieke]{}, G. H., [et al.]{} 2005, , 628, L29
, D. K., [Shapley]{}, A. E., [Pettini]{}, M., [et al.]{} 2006, , 644, 813
, N. M., [Roussel]{}, H., [Sauvage]{}, M., & [Charmandaris]{}, V. 2004, , 419, 501
, N. M., [Sauvage]{}, M., [Charmandaris]{}, V., [et al.]{} 2003, , 399, 833
, C.-A. 2017, ArXiv e-prints, arXiv:1701.04824
, S. L., [Papovich]{}, C., [Salmon]{}, B., [et al.]{} 2012, , 756, 164
, B., [Tran]{}, K.-V. H., [Tomczak]{}, A. R., [et al.]{} 2016, , 818, L26
, F., [Dwek]{}, E., & [Chanial]{}, P. 2008, , 672, 214
, J. D., [Meurer]{}, G., [Heckman]{}, T. M., [et al.]{} 2002, , 568, 651
, K. D., [Cartledge]{}, S., & [Clayton]{}, G. C. 2009, , 705, 1320
, K. D., [Clayton]{}, G. C., [Misselt]{}, K. A., [Landolt]{}, A. U., & [Wolff]{}, M. J. 2003, , 594, 279
, K., [Calzetti]{}, D., [Andrews]{}, J. E., [Lee]{}, J. C., & [Dale]{}, D. A. 2013, , 773, 174
, Y., [Rafelski]{}, M., [Faber]{}, S. M., [et al.]{} 2016, , 833, 37
, P. B., [Anderson]{}, S. F., [Strauss]{}, M. A., [et al.]{} 2002, , 141, 267
, N. P., [Ryan]{}, Jr., R. E., [Cohen]{}, S. H., [et al.]{} 2010, , 720, 1708
, S., [Buat]{}, V., [B[é]{}thermin]{}, M., [et al.]{} 2013, , 429, 1113
, G., [Malhotra]{}, S., [Hollenbach]{}, D. J., [Dale]{}, D. A., & [Contursi]{}, A. 2001, , 548, L73
, A., [Scarlata]{}, C., [Dom[í]{}nguez]{}, A., [et al.]{} 2013, , 776, L27
, D. W., [Tremonti]{}, C. A., [Blanton]{}, M. R., [et al.]{} 2005, , 624, 162
, P. F., [Kere[š]{}]{}, D., [O[ñ]{}orbe]{}, J., [et al.]{} 2014, , 445, 581
, L. K., [Thuan]{}, T. X., [Izotov]{}, Y. I., & [Sauvage]{}, M. 2010, , 712, 164
, P., [Zhou]{}, H., [Ji]{}, T., [et al.]{} 2013, , 145, 157
, B. D., [Schiminovich]{}, D., [Seibert]{}, M., [et al.]{} 2007, , 173, 392
, A. P. 2004, in Astronomical Society of the Pacific Conference Series, Vol. 309, Astrophysics of Dust, ed. A. N. [Witt]{}, G. C. [Clayton]{}, & B. T. [Draine]{}, 347
, R. C., & [Evans]{}, N. J. 2012, , 50, 531
, L. J., & [Ellison]{}, S. L. 2008, , 681, 1183
, K. K., [Richard]{}, J., [Kneib]{}, J.-P., [et al.]{} 2016, , 462, L6
, A. M., [Faber]{}, S. M., [Ferguson]{}, H. C., [et al.]{} 2011, , 197, 36
, X., [Charlot]{}, S., [Brinchmann]{}, J., & [Fall]{}, S. M. 2004, , 349, 769
, M. P., [Coppin]{}, K. E. K., [Geach]{}, J. E., [et al.]{} 2016, , 828, L21
, M., & [Conroy]{}, C. 2013, , 775, L16
, M., [van Dokkum]{}, P. G., [Labb[é]{}]{}, I., [et al.]{} 2009, , 700, 221
, M., [Shapley]{}, A. E., [Reddy]{}, N. A., [et al.]{} 2015, , 218, 15
, P., [Gawiser]{}, E., [Rafelski]{}, M., [et al.]{} 2014, , 793, L5
, J. C., [Ly]{}, C., [Spitler]{}, L., [et al.]{} 2012, , 124, 782
, C., [Ekstr[ö]{}m]{}, S., [Meynet]{}, G., [et al.]{} 2014, , 212, 14
, E. M., [Leitherer]{}, C., [Ekstrom]{}, S., [Meynet]{}, G., & [Schaerer]{}, D. 2012, , 751, 67
, S. C., [Galliano]{}, F., [Jones]{}, A. P., & [Sauvage]{}, M. 2006, , 446, 877
, G. E., [Elbaz]{}, D., [Daddi]{}, E., [et al.]{} 2010, , 714, 1740
, G. E., [Rigopoulou]{}, D., [Helou]{}, G., [et al.]{} 2013, , 558, A136
, B., [Popesso]{}, P., [Berta]{}, S., [et al.]{} 2013, , 553, A132
, R., [Nagao]{}, T., [Grazian]{}, A., [et al.]{} 2008, , 488, 463
, R., [Carniani]{}, S., [Fontana]{}, A., [et al.]{} 2015, , 452, 54
, M. V., [van der Wel]{}, A., [Rix]{}, H.-W., [et al.]{} 2014, , 791, 17
, C. 1989, in IAU Symposium, Vol. 135, Interstellar Dust, ed. L. J. [Allamandola]{} & A. G. G. M. [Tielens]{}, 431
, G. R., [Heckman]{}, T. M., & [Calzetti]{}, D. 1999, , 521, 64
, I. G., [Brammer]{}, G. B., [van Dokkum]{}, P. G., [et al.]{} 2016, , 225, 27
, J. C., [Gil de Paz]{}, A., [Boissier]{}, S., [et al.]{} 2009, , 701, 1965
, K., [Ellis]{}, R. S., [Iwata]{}, I., [et al.]{} 2016, , 831, L9
, K., [Mushotzky]{}, R. F., [Arnaud]{}, K., [et al.]{} 2002, , 576, 625
, E. J., [van Dokkum]{}, P. G., [Momcheva]{}, I. G., [et al.]{} 2016, , 817, L9
, K. G., [Weiner]{}, B. J., [Faber]{}, S. M., [et al.]{} 2007, , 660, L43
, S., [Pierini]{}, D., [Cimatti]{}, A., [et al.]{} 2009, , 499, 69
, P., [Rouan]{}, D., [Lacombe]{}, F., & [Tiphene]{}, D. 1995, , 297, 311
, P. A., [Bouwens]{}, R. J., [Illingworth]{}, G. D., [et al.]{} 2010, , 709, L16
, J. B., & [Gunn]{}, J. E. 1983, , 266, 713
, R. A., [Heckman]{}, T. M., [Wang]{}, J., [et al.]{} 2011, , 726, L7
, M., [Carilli]{}, C. L., [Daddi]{}, E., [et al.]{} 2009, , 698, L116
, M., [Elbaz]{}, D., [Daddi]{}, E., [et al.]{} 2015, , 807, 141
, K., [Dickinson]{}, M., [Pope]{}, A., [et al.]{} 2012, , 759, 28
, L., [Carniani]{}, S., [Castellano]{}, M., [et al.]{} 2016, , 829, L11
, M., [Kellogg]{}, M., [Steidel]{}, C. C., [et al.]{} 1998, , 508, 539
, M., [Steidel]{}, C. C., [Adelberger]{}, K. L., [Dickinson]{}, M., & [Giavalisco]{}, M. 2000, , 528, 96
, S. H., [Kriek]{}, M., [Brammer]{}, G. B., [et al.]{} 2014, , 788, 86
, M., [Teplitz]{}, H. I., [Gardner]{}, J. P., [et al.]{} 2015, , 150, 31
, N., [Dickinson]{}, M., [Elbaz]{}, D., [et al.]{} 2012, , 744, 154
, N. A., [Erb]{}, D. K., [Pettini]{}, M., [Steidel]{}, C. C., & [Shapley]{}, A. E. 2010, , 712, 1070
, N. A., [Erb]{}, D. K., [Steidel]{}, C. C., [et al.]{} 2005, , 633, 748
, N. A., [Pettini]{}, M., [Steidel]{}, C. C., [et al.]{} 2012, , 754, 25
, N. A., & [Steidel]{}, C. C. 2004, , 603, L13
—. 2009, , 692, 778
, N. A., [Steidel]{}, C. C., [Erb]{}, D. K., [Shapley]{}, A. E., & [Pettini]{}, M. 2006, , 653, 1004
, N. A., [Steidel]{}, C. C., [Fadda]{}, D., [et al.]{} 2006, , 644, 792
, N. A., [Steidel]{}, C. C., [Pettini]{}, M., [et al.]{} 2008, , 175, 48
, N. A., [Steidel]{}, C. C., [Pettini]{}, M., & [Bogosavljevi[ć]{}]{}, M. 2016, , 828, 107
, N. A., [Steidel]{}, C. C., [Pettini]{}, M., [Bogosavljevi[ć]{}]{}, M., & [Shapley]{}, A. E. 2016, , 828, 108
, N. A., [Kriek]{}, M., [Shapley]{}, A. E., [et al.]{} 2015, , 806, 259
, G. H., [Alonso-Herrero]{}, A., [Weiner]{}, B. J., [et al.]{} 2009, , 692, 556
, B. E., [Furlanetto]{}, S. R., [Schneider]{}, E., [et al.]{} 2013, , 768, 71
, H., [Sauvage]{}, M., [Vigroux]{}, L., & [Bosma]{}, A. 2001, , 372, 427
, D. A., [Pastoriza]{}, M. G., & [Riffel]{}, R. 2010, , 725, 605
, B., [Papovich]{}, C., [Long]{}, J., [et al.]{} 2016, , 827, 20
, R. L., [Shapley]{}, A. E., [Kriek]{}, M., [et al.]{} 2015, , 799, 138
, D., [Boone]{}, F., [Zamojski]{}, M., [et al.]{} 2015, , 574, A19
, D., [de Barros]{}, S., & [Sklias]{}, P. 2013, , 549, A4
, C., [Pannella]{}, M., [Elbaz]{}, D., [et al.]{} 2015, , 575, A74
, M., [Heckman]{}, T. M., & [Meurer]{}, G. R. 2002, , 124, 46
, M., [Martin]{}, D. C., [Heckman]{}, T. M., [et al.]{} 2005, , 619, L55
, J. Y., [Hirashita]{}, H., & [Asano]{}, R. S. 2014, , 439, 2186
, L., [Lutz]{}, D., [Nordon]{}, R., [et al.]{} 2010, , 518, L26
, H. V., [Papovich]{}, C., [Rieke]{}, G. H., [Brown]{}, M. J. I., & [Moustakas]{}, J. 2016, , 818, 60
, I., [Reddy]{}, N. A., [Shapley]{}, A. E., [et al.]{} 2015, , 815, 98
, I., [Reddy]{}, N., [Shapley]{}, A., [et al.]{} 2016, ArXiv e-prints, arXiv:1609.04814
, B., [Teplitz]{}, H. I., [Chary]{}, R.-R., [Colbert]{}, J., & [Frayer]{}, D. T. 2008, , 689, 59
, B., [Smail]{}, I., [Swinbank]{}, A. M., [et al.]{} 2009, , 698, 1273
, R. E., [Whitaker]{}, K. E., [Momcheva]{}, I. G., [et al.]{} 2014, ArXiv e-prints, arXiv:1403.3689
, P., [Zamojski]{}, M., [Schaerer]{}, D., [et al.]{} 2014, , 561, A149
, R., [Bouwens]{}, R. J., [Franx]{}, M., [et al.]{} 2012, , 756, 14
, J. D. T., [Draine]{}, B. T., [Dale]{}, D. A., [et al.]{} 2007, , 656, 770
, U. J., [Wolff]{}, M. J., [Rachford]{}, B., [et al.]{} 2005, , 625, 167
, M., [Hayward]{}, C. C., [Feldmann]{}, R., [et al.]{} 2017, , 466, 88
, E. R., [Eldridge]{}, J. J., & [Becker]{}, G. D. 2016, , 456, 485
, D. P., [Walth]{}, G., [Charlot]{}, S., [et al.]{} 2015, , 454, 1393
, C. C., [Adelberger]{}, K. L., [Giavalisco]{}, M., [Dickinson]{}, M., & [Pettini]{}, M. 1999, , 519, 1
, C. C., [Strom]{}, A. L., [Pettini]{}, M., [et al.]{} 2016, , 826, 159
, C. C., [Rudie]{}, G. C., [Strom]{}, A. L., [et al.]{} 2014, ArXiv e-prints, arXiv:1405.5473
, H. I., [Rafelski]{}, M., [Kurczynski]{}, P., [et al.]{} 2013, , 146, 159
, P., & [Ferrara]{}, A. 2001, , 325, 726
, C. A., [Heckman]{}, T. M., [Kauffmann]{}, G., [et al.]{} 2004, , 613, 898
, P., [Brammer]{}, G., [Momcheva]{}, I., [et al.]{} 2013, ArXiv e-prints, arXiv:1305.2140
, B., & [Heckman]{}, T. M. 1996, , 457, 645
, D., [Christensen]{}, L., [Knudsen]{}, K. K., [et al.]{} 2015, , 519, 327
, D. R., [Johnson]{}, B. D., [Johnson]{}, L. C., [et al.]{} 2012, , 744, 44
, K. E., [Franx]{}, M., [Leja]{}, J., [et al.]{} 2014, , 795, 104
, S. M., [Bunker]{}, A., [Coulton]{}, W., [et al.]{} 2013, , 430, 2885
, S. M., [Bunker]{}, A. J., [Stanway]{}, E., [Lorenzoni]{}, S., & [Caruana]{}, J. 2011, , 417, 717
, R. J., [Quadri]{}, R. F., [Franx]{}, M., [van Dokkum]{}, P., & [Labb[é]{}]{}, I. 2009, , 691, 1879
, C. J., [Carilli]{}, C. L., [Wagg]{}, J., & [Wang]{}, R. 2015, , 807, 180
, R. A., [Cohen]{}, S. H., [Hathi]{}, N. P., [et al.]{} 2011, , 193, 27
, S., [F[ö]{}rster Schreiber]{}, N. M., [Lutz]{}, D., [et al.]{} 2011, , 738, 106
, Y. Q., [Luo]{}, B., [Brandt]{}, W. N., [et al.]{} 2011, , 195, 10
, T., [M[ø]{}ller]{}, P., [Watson]{}, D., [et al.]{} 2015, , 584, A100
, G. R., [Ciardullo]{}, R., [Gronwall]{}, C., [et al.]{} 2015, , 814, 162
A. ${E(B-V)}$ vs. $\beta$ and IRX vs. $\beta$ Relations {#sec:sumrelations}
=======================================================
In Table \[tab:sumrelations\], we summarize the relations between $\beta$ and ${E(B-V)}$ and between IRX and $\beta$ for different assumptions of the dust curve, heating from Ly$\alpha$, inclusion of nebular continuum emission, and the normalization of the dust curve.
[llcccccc]{}\[!h\] BPASS - $0.14Z_\odot$ & Calzetti+00 & Yes & Yes & 0.0 & $-2.616 + 4.684{E(B-V)}$ & & $1.67\times[10^{0.4(2.13\beta+5.57)}-1]$\
& & Yes & Yes & 1.5 & & & $1.66\times[10^{0.4(1.81\beta+4.73)}-1]$\
& & Yes & No & 0.0 & & & $1.39\times[10^{0.4(2.13\beta+5.57)}-1]$\
& & Yes & No & 1.5 & & & $1.38\times[10^{0.4(1.81\beta+4.73)}-1]$\
& & No & Yes & 0.0 & $-2.709 + 4.684{E(B-V)}$ & & $1.73\times[10^{0.4(2.13\beta+5.76)}-1]$\
& & No & Yes & 1.5 & & & $1.73\times[10^{0.4(1.81\beta+4.90)}-1]$\
& & No & No & 0.0 & & & $1.44\times[10^{0.4(2.13\beta+5.76)}-1]$\
& & No & No & 1.5 & & & $1.42\times[10^{0.4(1.81\beta+4.90)}-1]$\
\
& Reddy+15 & Yes & Yes & 0.0 & $-2.616 + 4.594{E(B-V)}$ & & $1.68\times[10^{0.4(1.82\beta+4.77)}-1]$\
& & Yes & Yes & 1.5 & & & $1.67\times[10^{0.4(1.50\beta+3.92)}-1]$\
& & Yes & No & 0.0 & & & $1.40\times[10^{0.4(1.82\beta+4.77)}-1]$\
& & Yes & No & 1.5 & & & $1.38\times[10^{0.4(1.50\beta+3.92)}-1]$\
& & No & Yes & 0.0 & $-2.709 + 4.594{E(B-V)}$ & & $1.74\times[10^{0.4(1.82\beta+4.94)}-1]$\
& & No & Yes & 1.5 & & & $1.74\times[10^{0.4(1.50\beta+4.05)}-1]$\
& & No & No & 0.0 & & & $1.44\times[10^{0.4(1.82\beta+4.94)}-1]$\
& & No & No & 1.5 & & & $1.43\times[10^{0.4(1.50\beta+4.05)}-1]$\
\
& SMC (Gordon+03) & Yes & Yes & 0.0 & $-2.616 + 11.259{E(B-V)}$ & & $1.79\times[10^{0.4(1.07\beta+2.79)}-1]$\
& & Yes & Yes & 1.5 & & & $1.80\times[10^{0.4(0.93\beta+2.44)}-1]$\
& & Yes & No & 0.0 & & & $1.47\times[10^{0.4(1.07\beta+2.79)}-1]$\
& & Yes & No & 1.5 & & & $1.47\times[10^{0.4(0.93\beta+2.44)}-1]$\
& & No & Yes & 0.0 & $-2.709 + 11.259{E(B-V)}$ & & $1.83\times[10^{0.4(1.07\beta+2.89)}-1]$\
& & No & Yes & 1.5 & & & $1.85\times[10^{0.4(0.93\beta+2.52)}-1]$\
& & No & No & 0.0 & & & $1.50\times[10^{0.4(1.07\beta+2.89)}-1]$\
& & No & No & 1.5 & & & $1.50\times[10^{0.4(0.93\beta+2.52)}-1]$\
\
BC03 - $1.4Z_\odot$ & Calzetti+00 & Yes & Yes & 0.0 & $-2.383 + 4.661{E(B-V)}$ & & $1.44\times[10^{0.4(2.14\beta+5.10)}-1]$\
& & Yes & Yes & 1.5 & & & $1.41\times[10^{0.4(1.82\beta+4.33)}-1]$\
& & Yes & No & 0.0 & & & $1.35\times[10^{0.4(2.14\beta+5.10)}-1]$\
& & Yes & No & 1.5 & & & $1.32\times[10^{0.4(1.82\beta+4.33)}-1]$\
& & No & Yes & 0.0 & $-2.439 + 4.661{E(B-V)}$ & & $1.47\times[10^{0.4(2.14\beta+5.22)}-1]$\
& & No & Yes & 1.5 & & & $1.44\times[10^{0.4(1.82\beta+4.43)}-1]$\
& & No & No & 0.0 & & & $1.38\times[10^{0.4(2.14\beta+5.22)}-1]$\
& & No & No & 1.5 & & & $1.35\times[10^{0.4(1.82\beta+4.43)}-1]$\
\
& Reddy+15 & Yes & Yes & 0.0 & $-2.383 + 4.568{E(B-V)}$ & & $1.43\times[10^{0.4(1.83\beta+4.37)}-1]$\
& & Yes & Yes & 1.5 & & & $1.39\times[10^{0.4(1.51\beta+3.59)}-1]$\
& & Yes & No & 0.0 & & & $1.40\times[10^{0.4(1.82\beta+4.77)}-1]$\
& & Yes & No & 1.5 & & & $1.30\times[10^{0.4(1.51\beta+3.59)}-1]$\
& & No & Yes & 0.0 & $-2.439 + 4.568{E(B-V)}$ & & $1.46\times[10^{0.4(1.83\beta+4.47)}-1]$\
& & No & Yes & 1.5 & & & $1.42\times[10^{0.4(1.51\beta+3.67)}-1]$\
& & No & No & 0.0 & & & $1.37\times[10^{0.4(1.83\beta+4.47)}-1]$\
& & No & No & 1.5 & & & $1.33\times[10^{0.4(1.51\beta+3.67)}-1]$\
\
& SMC (Gordon+03) & Yes & Yes & 0.0 & $-2.383 + 11.192{E(B-V)}$ & & $1.47\times[10^{0.4(1.07\beta+2.55)}-1]$\
& & Yes & Yes & 1.5 & & & $1.45\times[10^{0.4(0.94\beta+2.23)}-1]$\
& & Yes & No & 0.0 & & & $1.38\times[10^{0.4(1.07\beta+2.55)}-1]$\
& & Yes & No & 1.5 & & & $1.35\times[10^{0.4(0.94\beta+2.23)}-1]$\
& & No & Yes & 0.0 & $-2.439 + 11.192{E(B-V)}$ & & $1.50\times[10^{0.4(1.07\beta+2.61)}-1]$\
& & No & Yes & 1.5 & & & $1.49\times[10^{0.4(0.94\beta+2.29)}-1]$\
& & No & No & 0.0 & & & $1.40\times[10^{0.4(1.07\beta+2.61)}-1]$\
& & No & No & 1.5 & & & $1.38\times[10^{0.4(0.94\beta+2.29)}-1]$\
\
Meurer+99 & — & — & — & — & $-2.23 + 5.01{E(B-V)}$ & & $2.07\times[10^{0.4(1.99\beta+4.43)}-1]$
\[tab:sumrelations\]
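
To illustrate how the entries of Table \[tab:sumrelations\] are used, the short sketch below evaluates a few of the tabulated IRX-$\beta$ relations (coefficients copied from the table; the helper function and variable names are ours and purely illustrative). At a fixed blue UV slope, the Calzetti-based prediction exceeds the SMC-based one by the factor of $\ga 3$ discussed in the conclusions.

```python
def irx_of_beta(beta, norm, slope, intercept):
    # Every tabulated relation has the form IRX = norm * (10**(0.4*(slope*beta + intercept)) - 1)
    return norm * (10 ** (0.4 * (slope * beta + intercept)) - 1.0)

# Coefficients copied from Table [tab:sumrelations] for the rows with both
# Ly-alpha heating and nebular continuum included and normalization 0.0,
# plus the Meurer et al. (1999) relation for reference.
curves = {
    "BPASS 0.14 Z_sun + Calzetti+00":     (1.67, 2.13, 5.57),
    "BPASS 0.14 Z_sun + SMC (Gordon+03)": (1.79, 1.07, 2.79),
    "Meurer+99":                          (2.07, 1.99, 4.43),
}

beta = -1.5   # a typical blue UV slope in the sample
for name, coeffs in curves.items():
    print(f"{name:35s} IRX(beta = {beta}) = {irx_of_beta(beta, *coeffs):5.1f}")
# -> roughly 13.2, 3.5, and 5.8; the Calzetti-to-SMC ratio is the factor of
#    ~3-4 over-prediction discussed in the conclusions.
```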
B. Inferences of IRX from [*Spitzer*]{}/MIPS $24$$\mu$m Data {#sec:l8lir}
============================================================
Prior to the launch of [*Herschel*]{}, most non-UV-based inferences of the dust content of $L^{\ast}$ galaxies at $z\ga 1.5$ relied on the detection of the redshifted mid-IR emission bands, commonly associated with PAHs, with the [*Spitzer*]{} MIPS instrument. A number of early studies of local and $z\sim 1$ galaxies suggested that PAH emission correlates with total dust emission (e.g., @roussel01 [@forster03; @forster04]), though with some variations with metallicity, ionizing intensity, star-formation-rate surface density, and stellar population age (e.g., @normand95 [@helou01; @alonso04; @engelbracht05; @hogg05; @madden06; @draine07b; @smith07; @galliano08; @hunt10; @sales10; @elbaz11; @magdis13; @seok14]). Recently, @shivaei16 presented the first statistically significant trends showing lower ratios of the $7.7$$\mu$m-to-total IR luminosity, $L_{\rm 7.7}/L_{\rm IR}$, with higher ionization intensity, lower gas-phase metallicity, and younger ages for galaxies at $z\sim 2$ from the MOSFIRE Deep Evolution Field Survey [@kriek15]. The aforementioned studies have suggested either a delayed enrichment of PAHs in young galaxies or the destruction of PAHs in high ionization and low metallicity environments.
In light of these findings, we considered $L_{\rm 7.7}/L_{\rm IR}$ for the predominantly blue, low luminosity galaxies in our sample.[^5] The galaxies in our sample as a whole exhibit $\langle L_{\rm 7.7}/L_{\rm IR}\rangle =
0.12\pm0.03$, similar within $1\sigma$ to that computed in @reddy12a, $\langle L_{\rm 7.7}/L_{\rm IR}\rangle =
0.18\pm0.03$, where the latter has been corrected for the difference in this ratio when assuming the @elbaz11 dust template rather than those of @chary01. However, when divided into bins of UV slope, we find that $\langle L_{\rm 7.7}/L_{\rm IR}\rangle$ is substantially lower for galaxies with the reddest $\beta$, as well as for the brightest ($M_{1600}\le -21$) and faintest ($M_{1600} > -19$) galaxies in our sample (Figure \[fig:l8lir\]).
The low ratio ($\langle L_{\rm 7.7}/L_{\rm IR}\rangle = 0.07\pm0.01$) observed for galaxies with the reddest UV slopes may be related to significant $9.7$$\mu$m silicate absorption affecting the observed $24$$\mu$m flux. An alternative explanation invokes an IR luminosity that is boosted in the presence of AGN, though we consider this possibility unlikely as we have removed obscured AGN from the sample based on their IRAC colors (Section \[sec:sample\]). Regardless, the lower $\langle L_{\rm 7.7}/L_{\rm IR}\rangle$ found for these red galaxies is partly responsible for a similar low ratio found for the faintest galaxies in our sample, since many of the dustiest galaxies in our sample are also UV-faint (Figure \[fig:zmag\]). Isolating the UV-faint galaxies with slopes bluer than $\beta=-1.4$ yields an unconstraining lower limit on $\langle L_{\rm 7.7}/L_{\rm IR}\rangle$.
Finally, we note that the brightest galaxies in our sample ($M_{1600}\le -21$) also exhibit a very low $\langle L_{\rm
7.7}/L_{\rm IR}\rangle = 0.06\pm0.01$. Such galaxies are on average a factor of $1.6\times$ younger than $M_{1600}>-21$ galaxies ($\approx
500$ vs. $\approx 800$Myr). Thus, their lower mean ratio may be related to a deficit of PAHs due to younger stellar population ages, or may be related to harder ionization fields and/or lower gas-phase metallicities. Unfortunately, we are unable to fully explore how the $L_{7.7}$-to-$L_{\rm IR}$ ratio varies with age given unconstraining lower limits on this value for galaxies with ages $\la 500$Myr. Irrespective of the physical causes for changes in the PAH-to-infrared luminosity ratio, our results suggest that caution must be used when adopting a single-valued conversion to recover $L_{\rm IR}$ from mid-IR measurements. For example, assuming the mean ratio found for our sample would result in $24$$\mu$m-inferred $L_{\rm IR}$ that are a factor of $\approx 2$ lower than the “true” values for the reddest ($\beta >-0.8$) and UV-brightest ($M_{1600}\le -21$) galaxies in our sample.
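
To put a rough number on this bias, the lines below simply take ratios of the stacked $\langle L_{\rm 7.7}/L_{\rm IR}\rangle$ values quoted above (illustrative arithmetic only; the variable names are ours):

```python
mean_ratio_all  = 0.12   # <L_7.7 / L_IR> for the sample as a whole
ratio_reddest   = 0.07   # reddest-beta subsample
ratio_uv_bright = 0.06   # UV-brightest (M_1600 <= -21) subsample

for label, true_ratio in (("reddest beta", ratio_reddest),
                          ("UV-brightest", ratio_uv_bright)):
    # Converting a 24um-based L_7.7 with the sample-mean ratio would
    # underestimate L_IR by roughly this factor for these subsamples.
    print(f"{label:13s}: L_IR underestimated by ~{mean_ratio_all / true_ratio:.1f}x")
```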
[^1]: Below, we consider the effect of stellar population age on the IRX-$\beta$ relations. In that context, the ages derived for the vast majority of galaxies in our sample are within $\delta\log[{\rm Age/yr}] \simeq 0.1$dex to those derived assuming a constant star-formation history.
[^2]: As discussed in @reddy12a, stacking on the science images themselves yields results similar to those obtained by stacking on the residual images.
[^3]: While the 160$\mu$m PSF has a half-width at half-maximum that is larger than the exclusion radius of $3\farcs
35$, the agreement in the average $f_{100}/f_{160}$ ratio, or far-infrared color, between the stack of the full sample and that of the 465 galaxies suggests that the bias factors also recover successfully the average 160$\mu$m stacked flux.
[^4]: The stellar masses obtained with @conroy10 models (Section \[sec:sample\]) are on average within $0.1$dex of those obtained assuming the fiducial (BPASS) model with the same [@chabrier03] IMF.
[^5]: As virtually all of the galaxies in our sample are undetected in the PACS imaging, we were not able to normalize the $24$$\mu$m images by $1/L_{\rm IR}$ before stacking them (e.g., in the same way that we were able to normalize them by $1/L_{\rm UV}$; Section \[sec:stacking\]). However, as the stacked fluxes are weighted by $1/L_{\rm UV}$, and $L_{\rm UV}\propto SFR\propto L_{\rm
IR}$ for all but the dustiest galaxies in our sample (Section \[sec:irxuvlum\]), we assumed that the ratio of the average luminosities is similar to the average ratio of the luminosities, i.e., $\langle L_{\rm 7.7}\rangle/\langle L_{\rm
IR}\rangle \approx \langle L_{\rm 7.7}/L_{\rm IR}\rangle$ (see discussion in @shivaei16). Thus, we simply divided $L_{\rm
7.7}$ by $L_{\rm IR}$ for each stack presented in Table \[tab:stackedresults\] in order to deduce the average ratio $\langle L_{\rm 7.7}/L_{\rm IR}\rangle$.
---
abstract: 'A connection between holomorphic and generating family invariants of Legendrian knots is established; namely, that the existence of a ruling (or decomposition) of a Legendrian knot is equivalent to the existence of an augmentation of its contact homology. This result was obtained independently and using different methods by Fuchs and Ishkhanov [@fuchs-ishk]. Close examination of the proof yields an algorithm for constructing a ruling given an augmentation. Finally, a condition for the existence of an augmentation in terms of the rotation number is obtained.'
address: 'Haverford College, Haverford, PA 19041'
author:
- 'Joshua M. Sabloff'
bibliography:
- 'rulings.bib'
title: Augmentations and Rulings of Legendrian Knots
---
Introduction
============
A fundamental problem in Legendrian knot theory is to determine when two knots are (or are not) Legendrian isotopic.[^1] Bennequin’s 1983 paper [@bennequin] started the enterprise by introducing the two “classical” invariants of Legendrian knots, the Thurston-Bennequin invariant $tb$ and the rotation number $r$. Classification results based on these invariants followed in the early 1990’s: Legendrian unknots [@yasha-fraser], torus knots [@etnyre-honda:knots], and figure eight knots [@etnyre-honda:knots] are completely classified by their topological type and classical invariants.
Starting in the late 1990’s, two methods for constructing “non-classical” invariants of Legendrian knots were developed. The first is a relative version of the contact homology of Eliashberg, Givental, and Hofer [@egh]. This theory uses holomorphic techniques to associate a non-commutative differential graded algebra (DGA) to a knot diagram. The homology of the DGA is invariant under Legendrian isotopy. In [@chv], Chekanov rendered this theory combinatorially computable. He then used a linearized version of it to distinguish examples of Legendrian $5_2$ knots in the standard contact $\rr^3$ that have the same classical invariants.[^2] The second method is based on generating families, i.e. families of functions whose critical values generate fronts of Legendrian knots. Chekanov has produced “ruling” or “decomposition” invariants based on generating families that can distinguish his original $5_2$ examples (see [@chv:survey; @chv-pushkar]). In addition, Traynor has fashioned a non-classical theory based on generating families for Legendrian links in the solid torus [@lisa:links].
The goal of this paper is to strengthen a connection, discovered by Fuchs [@fuchs:augmentations], between the ability to linearize the contact homology DGA and the non-vanishing of Chekanov’s count of rulings for Legendrian knots in the standard contact $\rr^3$. It is possible to linearize the contact homology DGA if and only if there exists an augmentation, i.e. a map $\varepsilon$ from the algebra to the base ring that sends the image of the differential to zero. It is useful to further stipulate that the augmentation has support on generators of grading zero modulo $2r(K)$ or grading divisible by a divisor $\rho$ of $2r(K)$. The former are “graded” augmentations; the latter are “$\rho$-graded” augmentations. Fuchs’ original result was:
If a front diagram of a Legendrian knot $K$ has a (graded or $\rho$-graded) normal ruling, then the contact homology DGA of $K$ has a (graded or $\rho$-graded) augmentation.
The central result of this paper is the converse, which Fuchs and Ishkhanov have proved independently, using different methods, in [@fuchs-ishk]:
If the contact homology DGA of a Legendrian knot $K$ has a (graded or $\rho$-graded) augmentation, then any front diagram of $K$ has a (graded or $\rho$-graded) normal ruling.
A consequence is an easy criterion for checking if the contact homology DGA of a Legendrian knot has an augmentation:
If the Chekanov-Eliashberg DGA of a Legendrian knot $K$ has a $2$-graded augmentation, then its rotation number is zero.
These results contribute to recent work that examines the relationship between the contact homology and generating family approaches to constructing non-classical invariants. Ng and Traynor found that a linearized version of the contact homology DGA and generating family homology contain the same information for a large class of two-component links in the solid torus [@lenny-lisa]. Work of Zhu [@zhu] and of Ekholm, Etnyre, and Sullivan [@ees:graph-trees] (see also [@fukaya-oh]) shows that a different sort of generating function homology that uses “graph trees” for a single set of generating functions can be used to compute the contact homology DGA. The ideas behind this work have already provided the motivation for Ng’s combinatorial construction of invariants of topological braids and knots using the contact homology of Legendrian tori in $ST^*\rr^3$ [@lenny:knot-invts-1].
The rest of the paper is organized as follows: Section \[sec:background\] lays out the necessary background and notation for diagrams of Legendrian knots, the contact homology DGA, and normal rulings. Section \[sec:main\] contains the proof of Theorem \[thm:main\] using a modification of a plat diagram of a Legendrian knot. Finally, the proof of Theorem \[thm:rot-ruling\] appears in Section \[sec:augm-r\].
Acknowledgments {#acknowledgments .unnumbered}
---------------
This paper has greatly benefited from discussions with John Etnyre, Lisa Traynor, Lenny Ng, and especially Paul Melvin, who first conjectured Theorem \[thm:rot-ruling\] based on computations done by himself and Sumana Shrestha. Conversations with Dmitry Fuchs helped to clarify the hypotheses in Theorem \[thm:rot-ruling\], and the referee’s comments greatly improved the exposition of the proof of Theorem \[thm:main\].
Background Notions
==================
Diagrams of Legendrian Knots
----------------------------
This section briefly reviews some basic notions of Legendrian knot theory. For a more comprehensive introduction, see [@etnyre:knot-intro; @lecnotes].
The **standard contact structure** on $\rr^3$ is the completely non-integrable $2$-plane field given by the kernel of $\alpha = dz -
y\,dx$. A **Legendrian knot** is an embedding $K: S^1 \to \rr^3$ that is everywhere tangent to the contact planes. In particular, the embedding must satisfy $$\mylabel{eqn:leg-cond}
\alpha(K') = 0.$$ An ambient isotopy of $K$ through Legendrian knots is a **Legendrian isotopy**.
There are two useful projections of Legendrian knots. The **Lagrangian projection** is given by the map $$\pi_l: (x,y,z) \mapsto (x,y),$$ while the **front projection** is given by $$\pi_f: (x,y,z) \mapsto (x,z).$$ The Lagrangian and front projections of a Legendrian trefoil knot appear in Figure \[fig:numbered-trefoil\].
![(a) Lagrangian and (b) front diagrams of a Legendrian trefoil knot. The meaning of the numbers in (a) will become clear in Section \[ssec:chv-el\].](figs/trefoil.ps)
In the front projection, the $y$ coordinate of a knot may be recovered from the slope of the front projection via (\[eqn:leg-cond\]): $$\mylabel{eqn:y-slope}
y = \frac{dz}{dx}.$$ This fact has several consequences:
- The front projection of a Legendrian knot is never vertical. Instead of vertical tangencies, front projections have *cusps* like those on the extreme left and right of Figure \[fig:numbered-trefoil\](b).
- There is no need to specify crossing information at a double point: the strand with the smaller slope always has a smaller $y$ coordinate. This means that it will pass in *front* of the strand with the larger slope, as the $y$ axis must point into the page in the front projection.
- Any circle in the $xz$ plane that has no vertical tangencies and that is immersed except at finitely many cusps lifts to a Legendrian knot via equation (\[eqn:y-slope\]).
A front diagram is in **plat position** if all of the left cusps have the same $x$ coordinate, all of the right cusps have the same $x$ coordinate, and no two crossings have the same $x$ coordinate. For example, the diagram of the trefoil in Figure \[fig:numbered-trefoil\](b) is in plat position. The $x$ coordinates of the crossings and cusps are the **singular values** of the front. Any front diagram may be put into plat position using Legendrian versions of Reidemeister type II moves and planar isotopy.
Though the front projection is easier to work with, it is more natural to define the contact homology DGA using the Lagrangian projection. Ng’s **resolution** procedure (see [@lenny:computable], and Figures 2 and 3 in particular) gives a canonical translation from a front diagram to a Lagrangian diagram. This procedure, in fact, was used to derive the Lagrangian projection in Figure \[fig:numbered-trefoil\](a) from the front projection in Figure \[fig:numbered-trefoil\](b). Combinatorially, there are three steps:
1. Smooth the left cusps;
2. Replace the right cusps with a loop (see the right side of the Lagrangian projection in Figure \[fig:numbered-trefoil\]); and
3. Resolve the crossings so that the overcrossing strand is the one with smaller slope.
A key feature of the resolution procedure is that the heights of the crossings in the Lagrangian projection strictly increase from left to right, with the jumps in height between crossings as large as desired.
As mentioned in the introduction, there are two “classical” invariants for Legendrian knots up to Legendrian isotopy. The first classical invariant is the **Thurston-Bennequin number** $tb(K)$, which measures the twisting of the contact planes around the knot $K$. The second classical invariant, the **rotation number** $r(K)$, is defined for *oriented* Legendrian knots. It measures the turning of the tangent direction to $K$ inside the contact planes with respect to the trivialization given by the vector fields $\partial_y$ and $\partial_x + y\partial_z$. The rotation number of an oriented Legendrian knot $K$ may be computed using the rotation number of the tangent vector to the Lagrangian projection in the plane. In the front projection, the rotation number is half of the difference between the number of downward-pointing cusps and the number of upward-pointing cusps.
The Contact Homology DGA and Augmentations
------------------------------------------
This section contains a brief review of the definition of the contact homology DGA of a Legendrian knot. The DGA was originally defined by Chekanov in [@chv] for Lagrangian diagrams; see also [@ens].
Let $K$ be an oriented Legendrian knot in the standard contact $\rr^3$ with a generic Lagrangian diagram $\pi_l(K)$. Label the crossings by $q_1, \ldots, q_n$. Let $\alg$ be the graded free unital tensor algebra over $\zz/2$ generated by the set $\{q_1, \ldots,
q_{n}\}$.[^3] To define the grading, a capping path $\gamma_i$ needs to be assigned to each crossing. A **capping path** is one of the two paths in $\pi_l(K)$ that starts at the overcrossing of $q_i$ and ends when $\pi_l(K)$ first returns to $q_i$, necessarily at an undercrossing. Assume, without loss of generality, that the strands of $\pi_l(K)$ at each crossing are orthogonal. The **grading** of $q_i$ is: $$|q_i| \equiv 2 r(\gamma_i) - \frac{1}{2} \pmod{2r(K)}.$$ Extend the grading to all words in $\alg$ by letting the grading of a word be the sum of the gradings of its constituent generators.
It is simple to assign gradings directly from a plat diagram. Assign a grading of $1$ to each generator coming from a right cusp. To assign a grading to a crossing, begin as in [@chv:survey] by letting $C(K)$ be the set of points on $K$ corresponding to cusps of $\pi_f(K)$. The **Maslov index** is a locally constant function $$\mu: K \setminus C(K) \to \zz/2r(K)$$ that satisfies the relations depicted in Figure \[fig:maslov\] near the cusps. This function is well-defined up to an overall constant. Near a crossing $q_i$, let $\alpha_i$ (resp. $\beta_i$) be the strand of $\pi_f(K)$ with more negative (resp. positive) slope. Assign the grading $|q_i| \equiv \mu(\alpha_i) - \mu(\beta_i)
\pmod{2r(K)}$.

The next step is to define a differential on $\alg$ by counting certain immersions of the disk into $\pi_l(K)$. Label the corners of $\pi_l(K)$ as in Figure \[fig:disks\](a). The immersions of interest are the following:
Given a generator $q_i$ and an ordered set of generators $\mathbf{q} = \{ q_{j_1}, \ldots, q_{j_k}\}$, let $\Delta(q_i; \mathbf{q})$ be the set of orientation-preserving immersions $$f: D^2 \to \rr^2$$ that map $\partial D^2$ to $\pi_l(K)$ (up to smooth reparametrization), with the property that the restriction of $f$ to the boundary is an immersion except at the points $q_i, q_{j_1},
\ldots, q_{j_k}$ and these points are encountered in counter-clockwise order along the boundary. In a neighborhood of $q_i$ and the points in $\mathbf{q}$, the image of the disk under $f$ has the form indicated in Figure \[fig:disks\](b) near $q_i$ and in Figure \[fig:disks\](c) near $q_{j_l}$.

Finally, define the differential as follows:
The differential is defined on a generator $q_i$ by the formula: $$\df q_i = \sum_{\mathbf{q}} \#\left( \Delta(q_i; \mathbf{q}) \right) q_{j_1} \cdots q_{j_k},$$ where the sum runs over all ordered sets of generators $\mathbf{q}$, and $\# \Delta(q_i; \mathbf{q})$ is the number of elements in the set $\Delta(q_i; \mathbf{q})$, counted modulo 2. Extend $\df$ to all of $\alg$ via linearity and the Leibniz rule.
Note that the sum in the definition of $\df$ is finite, and that if $\Delta(q_i; \mathbf{q})$ is nonempty, then the height of the crossing at $q_i$ is greater than the sum of the heights of the crossings $q_{j_l}$; see [@chv].
In a diagram coming from the resolution of a plat diagram, the disks in the differential take on a simple form:
1. The disks are embedded, and
2. The intersection of any vertical line with a disk is connected.
Number the crossings of the trefoil knot as in Figure \[fig:numbered-trefoil\]. The first three crossings have grading $0$, whereas the crossings that come from cusps in the plat diagram have grading $1$. The only nontrivial differentials are: $$\begin{split}
\df q_4 &= 1 + q_1 + q_3 + q_1 q_2 q_3, \\
\df q_5 &= 1 + q_1 + q_3 + q_3 q_2 q_1.
\end{split}$$
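As a quick illustration (not part of the arguments in this paper), the computation above can be checked mechanically: the sketch below represents a $\zz/2$ combination of words as a set of tuples of generator names, extends $\df$ by the Leibniz rule, and verifies that $\df^2 = 0$ on the generators of this diagram.

```python
# Illustration only: Z/2 combinations of words are sets of tuples of generator
# names; the empty tuple () plays the role of the unit 1.

def d_word(word, d_gen):
    """Differential of a single word via the Leibniz rule (all signs are trivial mod 2)."""
    result = set()
    for i in range(len(word)):
        for term in d_gen[word[i]]:
            result ^= {word[:i] + term + word[i + 1:]}   # symmetric difference = sum mod 2
    return result

def d_elt(elt, d_gen):
    """Differential of a Z/2 combination of words."""
    result = set()
    for word in elt:
        result ^= d_word(word, d_gen)
    return result

# The differential of the trefoil diagram in Figure [fig:numbered-trefoil]:
d_gen = {
    'q1': set(), 'q2': set(), 'q3': set(),
    'q4': {(), ('q1',), ('q3',), ('q1', 'q2', 'q3')},
    'q5': {(), ('q1',), ('q3',), ('q3', 'q2', 'q1')},
}

assert all(d_elt(dq, d_gen) == set() for dq in d_gen.values())
print("d^2 = 0 on every generator of the trefoil DGA")
```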
The central results in this theory are:
1. The differential has degree $-1$.
2. The differential satisfies $\df^2=0$.
3. The “stable tame isomorphism class” of the DGA is invariant under Legendrian isotopy.
The “stable” in part (3) of the theorem comes from the following operation on a DGA $(\alg, \df)$: the **degree $i$ stabilization** $S_i(\alg, \df)$ adds two new generators $\beta$ and $\alpha$ to the algebra, where $$|\beta| = i \text{ and } |\alpha| = i-1,$$ and the differential is extended to the new generators by: $$\df \beta = \alpha \text{ and } \df \alpha = 0.$$ For the purposes of this paper, a stable tame isomorphism between two DGAs $(\alg, \df)$ and $(\alg', \df')$ is a DGA isomorphism $$\psi: S_{i_1} \left( \cdots S_{i_m}(\alg) \cdots \right) \to
S_{j_1} \left( \cdots S_{j_n}(\alg') \cdots \right).$$
It is not easy to use the DGA to distinguish between Legendrian knots, as it — and its homology — are fairly complicated objects. Chekanov found computable invariants by linearizing the DGA. Asking whether the DGA has a graded augmentation is a first step in generating linearized invariants:
An **augmentation** is an algebra map $\varepsilon: \alg \to \zz/2$ that satisfies $\varepsilon \circ
\df = 0$ and $\varepsilon(1) = 1$. If, in addition, the augmentation is supported on generators of degree zero, then it is **graded**; if it is supported on generators whose degree is divisible by a divisor $\rho$ of $2r(K)$, then it is $\rho$-**graded**.
It is easy to extend a (graded or $\rho$-graded) augmentation over a stabilization: simply send both $\beta$ and $\alpha$ to $0$. In the case of a degree $0$ stabilization — or degree divisible by $\rho$ in the $\rho$-graded case — there is another possible extension: $$\varepsilon (\beta) = 1 \text{ and } \varepsilon(\alpha) = 0.$$ That is, if $|\beta|=0$, $\varepsilon(\beta)$ can be either $0$ or $1$. Either way, Theorem \[thm:dga\](3) implies:
The existence of a (graded or $\rho$-graded) augmentation is invariant under Legendrian isotopy.
The DGA for the trefoil knot in the previous example has five graded augmentations. For grading reasons, all of the augmentations are zero on $q_4$ and $q_5$, and it is easy to check that the following assignments work:
                  $q_1$   $q_2$   $q_3$
----------------- ------- ------- -------
$\varepsilon_1$   1       0       0
$\varepsilon_2$   1       1       0
$\varepsilon_3$   1       1       1
$\varepsilon_4$   0       1       1
$\varepsilon_5$   0       0       1
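These five assignments can also be recovered by brute force. The sketch below (again purely illustrative) imposes $\varepsilon(q_4) = \varepsilon(q_5) = 0$, as forced by the grading, and tests all eight possibilities on the degree-zero generators.

```python
# Illustration only: enumerate the graded augmentations of the trefoil DGA.
from itertools import product

d_gen = {
    'q1': set(), 'q2': set(), 'q3': set(),
    'q4': {(), ('q1',), ('q3',), ('q1', 'q2', 'q3')},
    'q5': {(), ('q1',), ('q3',), ('q3', 'q2', 'q1')},
}

def eps_word(word, eps):
    value = 1                           # eps(1) = 1 for the empty word
    for g in word:
        value = (value * eps[g]) % 2
    return value

def eps_elt(elt, eps):
    return sum(eps_word(w, eps) for w in elt) % 2

graded = []
for e1, e2, e3 in product((0, 1), repeat=3):
    eps = {'q1': e1, 'q2': e2, 'q3': e3, 'q4': 0, 'q5': 0}
    if all(eps_elt(dq, eps) == 0 for dq in d_gen.values()):
        graded.append((e1, e2, e3))

print(graded)   # [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
```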
Rulings
-------
The other object involved in Theorem \[thm:main\] is a (graded or $\rho$-graded) normal ruling. Suppose that a Legendrian knot $K$ has a front diagram whose singular values all have distinct $x$ coordinates. A **ruling** of such a front diagram of $K$ consists of a one-to-one correspondence between the set of left cusps and the set of right cusps and, for each pair of corresponding cusps, two paths in the front diagram that join them. The ruling paths must satisfy the following conditions:
1. Any two paths in the ruling meet only at crossings or at cusps; and
2. The interiors of the two paths joining corresponding cusps are disjoint, and hence they meet only at the cusps and bound a topological disk. Note that these disks are similar to those used to define the differential, but they may have “obtuse” corners; see Figure \[fig:switch-config\](b), for example.
As Fuchs notes, these conditions imply that the paths cover the front diagram and the $x$ coordinate of each path in the ruling is monotonic.
At a crossing, either the two ruling paths incident to the crossing pass through each other or one path lies entirely above the other. In the latter case, say that the ruling is **switched** at the crossing. Near a crossing, call the two ruling paths that intersect the crossing **crossing paths** and the ruling paths that are paired with the crossing paths **companion paths**. If all of the switched crossings of a ruling are of types (a–c) in Figure \[fig:switch-config\], then the ruling is **normal**. If all of the switched crossings have grading $0$ (resp. grading divisible by $\rho$), then the ruling is **graded** (resp. $\rho$-graded). It is not hard to see that in a graded ruling, both crossing paths have the same Maslov index in configurations (a–c), as do the companion paths in configurations (b) and (c).

The trefoil pictured in Figure \[fig:numbered-trefoil\] has exactly three graded normal rulings. They are pictured in Figure \[fig:trefoil-rulings\].
![The three normal rulings of the trefoil knot in Figure \[fig:numbered-trefoil\].](figs/trefoil-rulings.ps)
The following theorem of Chekanov shows that normal rulings are interesting objects in Legendrian knot theory:
The number of (graded or $\rho$-graded) normal rulings[^4] is invariant under Legendrian isotopy.
From Augmentation to Ruling
===========================
In light of Corollary \[cor:augm-invt\] and Theorem \[thm:ruling-invt\], the proof of Theorem \[thm:main\] — that the existence of an augmentation implies the existence of a ruling — only needs to consider Lagrangian diagrams that come from resolving plats. The proof consists of extending the ruling crossing by crossing from left to right. The extension procedure will produce only (graded or $\rho$-graded) normal switches, so the challenge will be to prove that the paths paired in the ruling match up at the right cusps. To do this, the proof adopts Fuchs’ philosophy of using Legendrian isotopy to simplify the differential at the expense of expanding the number of generators. In practice, this means converting plat diagrams into “dipped diagrams” in which certain crossings are closely related to rulings (see Section \[ssec:dipped\]). By tracing the original augmentation through the stable tame isomorphisms that relate the DGAs of the original diagram and of the dipped diagram (see Section \[ssec:type2\]), it will be possible to use properties of the augmentation of the dipped diagram to conclude that the ruling paths match at the right cusps (see Section \[ssec:tracing\]).
Dipped Diagrams
---------------
A **dip** in a plat diagram looks innocent in the front projection: it appears as the small wiggles pictured in Figure \[fig:dips\](a). The new front is clearly isotopic to the original one. The Lagrangian diagram, however, has changed dramatically; see Figure \[fig:dips\](b).

To see the transition to the dipped diagram in the Lagrangian projection in terms of Reidemeister moves, start by numbering the strands from bottom to top. Using a Type II move, push strand $k$ over strand $l$ ($k>l$) in ascending lexicographic order, e.g. $3$ crosses $2$ after $3$ crosses $1$, and $4$ crosses $1$ after $3$ crosses $2$. If $k$ crosses $l$ after $i$ crosses $j$, write $(i,j)
\prec (k,l)$. The new generators for the modified diagram are simple to describe: assuming $k > l$, denote by $b_{kl}$ the leftmost crossing of the strands $k$ and $l$ and by $a_{kl}$ the rightmost crossing. Say that the $b_{kl}$ generators belong to the $b$-**lattice** and the $a_{kl}$ generators belong to the $a$-**lattice**; see Figure \[fig:dips\](b). It is not hard to check that $|b_{kl}| = \mu(l) - \mu(k)$; note that this is the negative of the grading of a crossing $q_i$. Since the differential lowers degree by $1$, it follows that $|a_{kl}| = |b_{kl}| - 1$.
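For concreteness, the order of the Type II moves on an $n$-strand slice can be listed as follows (an illustrative sketch; strands are numbered $1$ through $n$ from bottom to top).

```python
# Illustration only: the Type II moves that build a dip, in the ascending
# lexicographic order described above.
def dip_moves(n):
    return [(k, l) for k in range(2, n + 1) for l in range(1, k)]

print(dip_moves(4))   # [(2, 1), (3, 1), (3, 2), (4, 1), (4, 2), (4, 3)]
```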
The differential interacts straightforwardly with the new generators:
Suppose that $a$ and $b$ are the new crossings created by a Type II move during the creation of a dip, and let $y$ be any other crossing. The generator $a$ appears at most once in any term of $\df y$, and if $a$ appears in $\df y$, then $b$ does not.
Consider a disk with a negative corner at $a$. As shown in Figure \[fig:partial-dip\], this corner must lie in the bottom left or top right quadrant adjacent to $a$. In the case where the corner is at the bottom left, there is only one possible disk that comes from $\df b$. Otherwise, the corner is at the top right and there are two cases. First, suppose that the next corner on the upper strand lies in the $a$-lattice. The disk must then lie entirely inside the $a$-lattice, as pictured in Figure \[fig:partial-dip\](a). In particular, the disk satisfies the conditions of the lemma.
Second, suppose that the next corner on the upper strand lies outside — and hence to the right of — the $a$-lattice. As shown in Figure \[fig:partial-dip\](b), the lower strand must also exit the $a$-lattice without any further corners. Note that the dipped diagram comes from modifying the resolution of a “simple” front (see [@lenny:computable], Section 2.3) whose right cusps are all pushed out to the right, so any portion of a disk lying to the right of the dip must have connected vertical slices. It follows that the rest of the disk must lie to the right of the figure, and hence that the disk satisfies the conditions of the lemma.

Type II Moves and DGA Maps
--------------------------
In order to understand how the augmentations before and after the formation of a dip are related, a closer examination of the stable DGA isomorphism induced by a type II move is necessary. Suppose that $(\alg', \df')$ is the DGA for a knot diagram before a type II move and that $(\alg, \df)$ is the DGA afterward. As shown in [@chv], the type II move gives rise to a DGA isomorphism $$\psi: (\alg, \df) \to S(\alg', \df').$$ In particular, note that this map preserves grading. If $a$ and $b$ are the two new generators that appear during a type II move, then the first step in defining $\psi$ is to order the generators of $\alg$ by height: let $\{x_1, \ldots, x_N\}$ denote generators of height less than that of $a$ in increasing height order and let $\{y_1, \ldots, y_M\}$ denote generators of height greater than that of $b$ in increasing height order. Note that, since $\df$ lowers height, $\df y_j$ does not contain any generators $y_k$ with $k \geq j$.
It is possible to construct a dip in the plat diagram so that this ordering takes on the following form. Suppose the strand $k$ is pushed over strand $l$. Each $x_j$ either lies to the left of the dip, or $x_j = a_{nm}$ or $b_{nm}$ with $n-m \leq k-l$. Similarly, $y_j$ either lies to the right of the dip, or $y_j = a_{nm}$ or $b_{nm}$ with $n- m > k-l$.
The definition of the map $\psi$ needs a *vector space* map $H$ defined on $S(\alg')$ by: $$H(w) = \begin{cases}
0 & w \in \alg' \\
0 & w=Q\beta R \quad \text{with\ } Q \in \alg' \\
Q\beta R & w=Q\alpha R \quad \text{with\ } Q \in \alg'.
\end{cases}$$
Also write $\df b = a + v$, where $v$ is a sum of words consisting entirely of the letters $x_1, \ldots, x_N$. Inductively define maps $\psi_i$ on the generators of $\alg$ by: $$\psi_0 (w) = \begin{cases}
\beta & w = b \\
\alpha + v & w = a \\
w & \text{otherwise}
\end{cases}$$ and $$\psi_i (w) = \begin{cases}
y_i + H\psi_{i-1}(\df y_i) & w=y_i \\
\psi_{i-1}(w) & \text{otherwise.}
\end{cases}$$ That the resulting map $\psi = \psi_M$ is a DGA isomorphism between $\alg$ and $S(\alg')$ was proven in [@chv].[^5]
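The map $H$ is easy to make concrete. In the following sketch (illustrative only), a word is a tuple of generator names, the strings `'alpha'` and `'beta'` stand for $\alpha$ and $\beta$, and every other name is treated as a generator of $\alg'$.

```python
# Illustration only: the vector space map H on Z/2 combinations of words.
def H_word(word):
    for i, g in enumerate(word):
        if g == 'beta':                                   # w = Q beta R with Q in alg'
            return set()
        if g == 'alpha':                                  # w = Q alpha R with Q in alg'
            return {word[:i] + ('beta',) + word[i + 1:]}
    return set()                                          # w lies in alg'

def H(elt):
    out = set()
    for word in elt:
        out ^= H_word(word)
    return out

print(H({('x1', 'alpha', 'x2')}))    # {('x1', 'beta', 'x2')}
print(H({('beta', 'x1'), ('x1',)}))  # set()
```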
If there is an augmentation $\varepsilon'$ on $S(\alg')$, then $\varepsilon = \varepsilon' \psi$ is an augmentation on $\alg$. It is straightforward to see that $\varepsilon(x_j) = \varepsilon'(x_j)$ and that: $$\mylabel{eqn:trans-aug}
\varepsilon(b) = \varepsilon'(\beta) \text{ and } \varepsilon(a) =
\varepsilon'(v).$$ Recall that if $|\beta| = 0$, then $\varepsilon'(\beta)$ may be chosen arbitrarily. In a plat diagram, there is a straightforward inductive condition to determine if $\varepsilon$ will differ from $\varepsilon'$ on a generator $y_j$:
After a type II move involved in making a dip in a plat diagram, suppose that $\varepsilon(y_i)$ has been determined for all $i < j$. Then $\varepsilon'(y_j) \neq \varepsilon(y_j)$ if and only if $\varepsilon'(\beta) =1$ and there exists an odd number of terms in $\df y_j$ that are of the form $QaR$, where $Q, R \in \alg'$, $\varepsilon(Q) = 1$ and $\varepsilon(R) = 1$.
Since $$\mylabel{eqn:psi-yj}
\psi(y_j) = y_j + H\psi(\df y_j),$$ the augmentations $\varepsilon$ and $\varepsilon'$ disagree on $y_j$ if and only if $\varepsilon'(H \psi (\df y_j)) \neq 0$. The proof that the latter is equivalent to the second condition in the lemma proceeds by induction on $j$.
For $j=1$, let $P$ be the sum of terms in $\df y_1$ that do not contain $a$. Lemma \[lem:dip-disks\] implies that $\df y_1$ has the form: $$\mylabel{eqn:df-yj}
\df y_1 = P + \sum_k Q_k a R_k,$$ where $Q_k, R_k \in \alg'$. Since the differential lowers height, $P$ lies in the algebra generated by $\{x_1, \ldots, x_N, b\}$. It follows that: $$\begin{split}
H \psi(\df y_1) &= H\bigl(\psi(P) + \sum_k Q_k (\alpha + v) R_k \bigr) \\
&= \sum_k Q_k \beta R_k,
\end{split}$$ since the $Q_k \alpha R_k$ are the only terms containing $\alpha$. The lemma follows in this case. This argument also shows that $\alpha$ does not appear in $\psi(y_1)$.
In general, write out $\df y_j$ as in equation (\[eqn:df-yj\]). As before, the generator $a$ only appears where indicated, and $Q_k$ and $R_k$ lie in the algebra generated by $\{x_1, \ldots, x_N, y_1,
\ldots, y_{j-1} \}$. Inductively, $\psi(y_i)$ does not contain $\alpha$ for $i < j$, so the images of $Q_k$, $R_k$, and $P$ under $\psi$ do not contain $\alpha$. This implies that $H \psi (Q_k
\alpha) = \psi(Q_k) \beta$. Computing as before, then, $$\mylabel{eqn:H-psi}
H \psi(\df y_j) = \sum_k \psi(Q_k) \beta \psi(R_k).$$ Once again, this implies that $\alpha$ does not appear in $\psi(y_j)$, so this fact may be used inductively. The lemma now follows from (\[eqn:psi-yj\]), (\[eqn:H-psi\]), and the fact that $\varepsilon' \psi = \varepsilon$.
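The parity test of Lemma \[lem:y-condition\] is equally mechanical. The sketch below (illustrative only) takes $\df y_j$ as a set of words together with an augmentation on the letters other than $a$ and reports whether the value on $y_j$ changes; by Lemma \[lem:dip-disks\], the letter $a$ occurs at most once in each word.

```python
# Illustration only: does the augmentation change on y_j after the Type II move?
def augmentation_flips(dy, eps, a='a', eps_beta=1):
    if eps_beta == 0:
        return False
    flips = 0
    for word in dy:
        if a in word:
            i = word.index(a)                      # a occurs at most once in each word
            Q, R = word[:i], word[i + 1:]
            if all(eps[g] == 1 for g in Q) and all(eps[g] == 1 for g in R):
                flips += 1
    return flips % 2 == 1

# Example: d(y) = a + x1·a·x2 + x1·x2 with eps(x1) = 1, eps(x2) = 0.
dy = {('a',), ('x1', 'a', 'x2'), ('x1', 'x2')}
print(augmentation_flips(dy, {'x1': 1, 'x2': 0}))   # True: only the word 'a' contributes
```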
Extension of the Ruling
-----------------------
The heart of the proof of Theorem \[thm:main\] extends ruling paths that start at a common left cusp over successive crossings to the right. In the Lagrangian projection that comes from resolving a plat diagram, label the crossings that correspond to crossings of the plat by $q_1, \ldots, q_n$. The extension procedure has three parts: First, extend the ruling over $q_j$; then place a dip between $q_j$ and $q_{j+1}$; and finally construct an augmentation $\varepsilon_{j+1}$ on the DGA of the newly dipped diagram. The augmentations $\varepsilon_{j+1}$ will have the following property:
At any dip, $a_{kl}$ is augmented if and only if the strands $k$ and $l$ are paired in the portion of the ruling between $q_j$ and $q_{j+1}$.
The construction begins at the left cusps, where any ruling must pair paths incident to the same cusp. The first step is to construct $\varepsilon_1$ on the diagram that results from placing a dip between the left cusps and $q_1$. Consider the type II move that pushes strand $k$ over strand $l$, and use the notation for augmentations and generators that was set up around equation (\[eqn:trans-aug\]). There are three considerations that go into computing $\varepsilon$ from $\varepsilon'$:
1. A choice for $\varepsilon'(\beta)$ must be made. In this case, choose $\varepsilon'(\beta) = 0$; it immediately follows from (\[eqn:trans-aug\]) that $\varepsilon(b_{kl}) = 0$.
2. The value of $\varepsilon(a_{kl})$ is determined from $\varepsilon'(v_{kl})$ via (\[eqn:trans-aug\]). In this case, Figure \[fig:left-cusp-dip\] shows that $v_{kl}$ is a sum of words in $b_{ij}$ (for $(i,j) \prec (k,l)$) and contains a $1$ if $(k,l) = (2m, 2m-1)$ for some $m$. Since $\varepsilon'(b_{ij}) = 0$ for all $(i,j) \prec (k,l)$ by step (1), it is simple to compute $\varepsilon'(v_{kl})$, and hence $\varepsilon(a_{kl})$: $$\label{eqn:a-kl-augm}
\varepsilon(a_{kl}) = \varepsilon'(v_{kl}) = \begin{cases}
1 & (k,l) = (2m, 2m-1) \\
0 & \text{otherwise.}
\end{cases}$$

3. Finally, Lemma \[lem:y-condition\] is used to check if there are any “corrections” to other $a_{ij}$ generators with $(i,j)
\prec (k,l)$ but $i-j \geq k-l$. In this case, since $\varepsilon'(\beta) = 0$, no such changes can occur.
At all stages, then, this process gives an augmentation that satisfies (\[eqn:a-kl-augm\]), and hence $\varepsilon_1$ satisfies Property (R).
Now begin the extension procedure proper. At the crossing $q_j$, extend the ruling paths as follows: if $\varepsilon_j(q_j) = 1$ and the ruling to the left of $q_j$ matches the situation in configurations (a), (b), or (c) in Figure \[fig:switch-config\], then there is a switch at $q_j$. Otherwise, there is no switch. By construction, the ruling paths have only (graded or $\rho$-graded) normal switches.
The next part of the extension procedure is to understand the augmentation $\varepsilon_{j+1}$ that results from the construction of a dip between $q_j$ and $q_{j+1}$ using the three steps above. The choice of augmentations on the $\beta$ generators in step (1) should lead to $\varepsilon_{j+1}$ satisfying Property (R) if $\varepsilon_j$ does. The exact choice of augmentations depends on $\varepsilon_j(q_j)$ and the configuration of the ruling near the crossing $q_j$.
First, suppose that $\varepsilon_j(q_j) = 0$ and consider the Type II move that pushes strand $k$ over strand $l$. For step (1), choose $\varepsilon'(\beta) = 0$.
For step (2), consider $\varepsilon'(v_{kl})$. Since neither $q_j$, nor any crossing in the $b$-lattice, is augmented, the only totally augmented disks in $v_{kl}$ have a positive corner at $b_{kl}$ and a single augmented negative corner in the $a$-lattice to the left of $q_j$; see Figure \[fig:eps0-disks\]. If such a disk exists, the negative corner must occur where two ruling strands cross each other, since $\varepsilon'$ satisfies property (R) on the $a$-lattice to the left. The facts that $q_j$ is not switched in the ruling and that there are no other corners on the disk imply that $b_{kl}$ — and hence $a_{kl}$ — must also be crossings of ruling strands. Thus, $\varepsilon'(v_{kl}) = \varepsilon(a_{kl}) = 1$ if and only if $k$ and $l$ are paired in the ruling.
Finally, since $\varepsilon'(\beta) = 0$, Lemma \[lem:y-condition\] shows that there are no corrections to the augmentations of $a_{ij}$ for $(i,j) \prec (k,l)$. Thus, the previous paragraph shows that $\varepsilon_{j+1}$ satisfies property (R).

From now on, assume that $\varepsilon_j(q_j) = 1$; the proof will examine each configuration in Figure \[fig:switch-config\] in turn. For configuration (a), suppose that the strands $i$ and $i+1$ cross and that these strands are paired with $L$ and $K$, respectively. That is, $K > i+1 >i>L$. Divide the dipping process into three parts:
$(k,l) \prec (i+1,i)$
: Choose $\varepsilon'(\beta)=0$. To determine $\varepsilon(a_{kl})$, consider totally augmented disks in $v_{kl}$. As before, the leftmost negative corner of a totally augmented disk must involve strands paired in the ruling. If neither $k$ nor $l$ is a crossing strand, then, as above, $\varepsilon'(a_{kl}) = \varepsilon(a_{kl}) = 1$ if and only if $k$ and $l$ are paired in the ruling. Otherwise, Figure \[fig:config-a-1\] shows that there is one totally augmented disk in each of $v_{i+1,L}$ and $v_{i,L}$. Thus, $$\mylabel{eqn:config-a-1}
\varepsilon(a_{i+1,L}) = \varepsilon(a_{iL}) = 1.$$

Since $\varepsilon'(\beta)=0$, there are no corrections to the augmentations of previously constructed crossings in the $a$-lattice.
$(k,l) = (i+1,i)$
: First, note that $|b_{i+1,i}|=0$ if the augmentation is graded: the Maslov indices of the crossing strands must agree, and $b_{i+1,i}$ involves the crossing strands. A similar fact holds for a $\rho$-graded augmentation. Hence, it is possible to choose $\varepsilon'(\beta)=1$; it follows that $\varepsilon(b_{i+1,i}) = 1$ as well.
It is easy to see that $v_{i+1,i} = 0$, so $\varepsilon(a_{i+1,i}) =
0$. There is one correction to consider. The disk in Figure \[fig:config-a-2\] contributes the term $a_{i+1,i} a_{iL}$ to $\df a_{i+1,L}$. This is the only disk with a negative corner at $a_{i+1,i}$ whose other negative corners are augmented since $a_{iL}$ is the only crossing involving strand $L$ that is augmented. Equation (\[eqn:config-a-1\]) shows that $\varepsilon(a_{i+1,L})=\varepsilon(a_{iL})=1$, so Lemma \[lem:y-condition\] implies that $$\mylabel{eqn:config-a-2}
\varepsilon(a_{i+1,L}) = 0.$$ Thus, the augmentation on all crossings created up to this point satisfies property (R).

$(k,l) \succ (i+1,i)$
: Choose $\varepsilon'(\beta) = 0$. As in the case of $(k,l) \prec (i+1,i)$, if neither strand is a crossing strand, then the augmentation for $a_{kl}$ matches the augmentation in the $a$-lattice to the left. On the other hand, Figure \[fig:config-a-3\] shows that there is a single totally augmented disk in $v_{K,i+1}$ and two totally augmented disks in $v_{Ki}$. Thus, $$\mylabel{eqn:config-a-3}
\varepsilon(a_{K,i+1}) = 1 \quad \text{and} \quad \varepsilon(a_{Ki}) = 0.$$

Since $\varepsilon'(\beta)=0$, there are no corrections to the augmentations of previously constructed crossings in the $a$-lattice.
The end result is an augmentation that satisfies property (R) on the new $a$-lattice: for crossing strands, equations (\[eqn:config-a-1\], \[eqn:config-a-2\], \[eqn:config-a-3\]) show that only $a_{iL}$ and $a_{K,i+1}$ are augmented; otherwise, the augmentation is simply transferred from the $a$-lattice to the left.
The next case to consider is configuration (b). Again, suppose that the crossing strands are $i$ and $i+1$, paired with $K$ and $L$, respectively, so that $i+1>i>K>L$. This time, the dipping process should be divided into five steps:
$(k,l) \prec (K,L)$
: As in the first case in configuration (a), set $\varepsilon'(\beta)=0$ and transfer the augmentations from the $a$-lattice on the left.
$(k,l) = (K,L)$
: Note that $|b_{KL}| = 0$ for a graded augmentation: the crossing strands have the same Maslov index, and hence so do the companion strands since they both lie below their corresponding crossing strands. Thus, it is possible to set $\varepsilon'(\beta) = 1$, and hence obtain $\varepsilon(b_{KL})=1$.
Since $K$ and $L$ are not paired in the ruling and are not crossing strands, $\varepsilon'(v_{KL}) = 0$, so $\varepsilon(a_{KL}) = 0$. Further, there are no corrections, as any disk in the $a$-lattice with a negative corner at $a_{KL}$ must have an augmented negative corner of the form $a_{L*}$ (see Figure \[fig:config-a-2\]). Since $L$ is paired with $i+1$, the only augmented crossing of this form has yet to appear in the dip.
$(K,L) \prec (k,l) \prec (i+1,i)$
: Set $\varepsilon'(\beta) =
0$. There are several augmented disks contributing to $v_{kl}$; see Figure \[fig:config-b-1\]:
- Two for $v_{iL}$, and hence $\varepsilon(a_{iL}) = 0$. Note that one of these disks uses the fact that $\varepsilon(b_{KL}) =
1$.
- One for $v_{iK}$, and hence $\varepsilon(a_{iK}) = 1$.
- One for $v_{i+1,L}$, and hence $\varepsilon(a_{i+1,L}) = 1$. Note that the existence of this disk relies on the fact that $\varepsilon(b_{KL}) = 1$.
- One for $v_{i+1,K}$, and hence $\varepsilon(a_{i+1,K}) = 1$.
Since $\varepsilon'(\beta) = 0$, there are no corrections at this stage.

$(k,l) = (i+1,i)$
: Set $\varepsilon'(\beta) = 1$. As usual, $\varepsilon'(v_{i+1,i}) = 0$, so $\varepsilon(a_{i+1,i}) = 0$. There is one correction in this case: since one term in $\df
a_{i+1,K}$ is $a_{i+1,i} a_{iK}$, Lemma \[lem:y-condition\] implies that $\varepsilon(a_{i+1,K})$ changes to $0$.
$(k,l) \succ (i+1,i)$
: Set $\varepsilon'(\beta)=0$. As in the final case in configuration (a), the augmentation is simply transferred from the dip on the left.
In sum, the augmentation on the new dip satisfies property (R): for crossing strands, only $a_{i+1,L}$ and $a_{iK}$ are augmented; otherwise, the augmentation is simply transferred from the $a$-lattice to the left.
The arguments for the other configurations are similar; see Table \[tbl:config-augm\] for a list of which $\beta$ generators to augment in each case. This completes the extension of the ruling and of $\varepsilon_j$ over a dip.
Configuration   Augmented Generators
--------------- ------------------------
a               Crossing
b,c             Crossing and companion
d               None
e,f             Companion
: Which $\beta$ generators are augmented in each configuration (with $\varepsilon_j(q_j) = 1$)?
As mentioned above, the proof of Theorem \[thm:main\] will be complete if the paired ruling paths match at the right cusps. This holds true if and only if, in the dip just to the left of the right cusps, $a_{2k, 2k-1}$ is augmented for $k=1, \ldots, c$. As shown in Figure \[fig:right-cusp-dip\], the differential of the $k^{th}$ right cusp in the dipped diagram is: $$\df q_{n+k} = 1 + a_{2k, 2k-1}.$$ Since $\varepsilon_1$ satisfies property (R), the inductive extension argument above shows that $\varepsilon_n$ does as well. The fact that $\varepsilon_n$ is a genuine augmentation implies that $a_{2k, 2k-1}$ is augmented. Theorem \[thm:main\] follows since $\varepsilon_n$ obeys property (R).

The proof can be refined to give an algorithm for constructing a ruling from the augmentation, and can even be carried out without passing to the dipped diagram. As in the proof, the idea is to extend the ruling over a crossing $q_j$ given the value $\varepsilon_j(q_j)$. Before, it was not necessary to explicitly find these values, but it *is* possible to determine them.
The key to finding $\varepsilon_j(q_k)$ for $k > j$ is Lemma \[lem:y-condition\]. Disks of the form $QaR$, where $Q, R
\in \alg'$, appear in the original plat as disks with a positive corner at $q_k$, negative corners at $Q$ and $R$, and a line segment to the right of $q_j$ that joins the crossing strands (if the $\beta$ generator at the crossing is augmented) or the companion strands (if the corresponding $\beta$ generator is augmented); see Figure \[fig:crossing-ends\]. The value of $\varepsilon_j(q_k)$ differs from that of $\varepsilon_{j+1}(q_k)$ if there is an odd number of these disks with $\varepsilon_{j+1}(Q)=1$ and $\varepsilon_{j+1}(R) = 1$. If two $\beta$ generators are augmented, then the procedure should be performed once for each $\beta$ with the $\beta$ between the lower-numbered strands going first.
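In outline, the refined procedure can be written as the following loop. This is only a sketch: the helpers `is_normal_switch`, which would encode configurations (a–c) of Figure \[fig:switch-config\], and `update_augmentation`, which would apply the parity rule just described, are assumed rather than implemented.

```python
# Illustration only: extend the ruling crossing by crossing, keeping the pairing of
# strand positions as an involution `mate` (so mate[mate[s]] == s).
def build_ruling(crossings, eps, mate, is_normal_switch, update_augmentation):
    """crossings: list of (name, i), where i is the lower of the two strand positions."""
    switches = []
    for name, i in crossings:
        if eps[name] == 1 and is_normal_switch(mate, i):
            switches.append(name)        # switch: the paths bounce, so the pairing
                                         # of positions is unchanged
        else:
            a, b = mate[i], mate[i + 1]  # the paths pass through each other
            if a != i + 1:               # (if they are paired together, nothing changes)
                mate[i], mate[i + 1] = b, a
                mate[a], mate[b] = i + 1, i
        eps = update_augmentation(eps, name, mate, i)
    return switches, mate
```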
Figure \[fig:ruling-proc\] demonstrates the procedure on the trefoil with the augmentation that marks all three crossings of degree 0. Note that all of the disks involved in adjusting the augmentation have no negative corners, so $Q$ and $R$ are always $1$ in Lemma \[lem:y-condition\], and the condition is easy to apply.


Rotation Number and Rulings
===========================
This brief section contains the proof of Theorem \[thm:rot-ruling\]. By Theorem \[thm:main\], it suffices to prove that if an oriented front diagram of $K$ has a $2$-graded normal ruling then $r(K)=0$.
It is easy to check that the strands at a crossing with even grading are both oriented to the left or both to the right. This implies that the boundary of a disk in a graded normal ruling inherits a coherent orientation from the knot, and hence that each disk pairs an upward (resp. downward) right cusp with a downward (resp. upward) left cusp. Thus, $$\begin{aligned}
2 r(K) &=& \#\text{down cusps} - \#\text{up cusps} \\
&=& \#\text{down right cusps} - \#\text{up left cusps} \\
& & \quad +
\#\text{down left cusps} - \#\text{up right cusps} \\
&=& 0.\end{aligned}$$
[^1]: See Section \[sec:background\] for the basic definitions in Legendrian knot theory.
[^2]: This invariant is also referred to as the Chekanov-Eliashberg DGA in the literature.
[^3]: It is possible to define the algebra over $\zz[T,
T^{-1}]$; see [@ens].
[^4]: Chekanov calls them **admissible decompositions** in [@chv:survey]; Chekanov and Pushkar call them **positive involutions** in [@chv-pushkar].
[^5]: This appears to be slightly different from the map given in [@chv; @ens]; it is not hard to check, however, that the definition is equivalent.
DESY 93-090 ISSN 0418-9833\
August 93\
[**The decay $b \to s \gamma$ in the MSSM revisited**]{}\
[F.M. Borzumati ]{}\
[*II. Institut für Theoretische Physik [^1]* ]{}\
[*Universität Hamburg, 22761 Hamburg, Germany*]{}\
[**Abstract**]{}
> We present a re-analysis of the decay ${b\to s \gamma}$ in the Minimal Supersymmetric Standard Model with radiatively induced breaking of $SU(2) \times U(1)$. We extend this analysis to regions of the supersymmetric parameter space wider than those previously studied. Results are explicitly presented for $m_t=150\GeV$. We emphasize the consequences for future searches of charged Higgs and charginos from a measurement of the branching ratio for this decay compatible with the Standard Model prediction. In spite of the strong sensitivity of this decay to the effects of supersymmetry, we find that, at the moment, no lower limit on the mass of these particles can be obtained. The large chargino contributions to the branching ratio for ${b\to s \gamma}$, for large values of ${\tan \beta}$, are effectively reduced by pushing the lightest eigenvalue of the up-squark mass matrix closer to the electroweak scale, i.e. by increasing the degree of degeneracy among the up-squarks.
Introduction and Motivations
============================
The recent results from the CLEO Collaboration [@CLEO] on radiative decays of the $B$ meson have rekindled attention on the transition ${b\to s \gamma}$. Of particular interest is the improvement of the upper limit on the branching ratio of this inclusive decay, $\BR({b\to s \gamma})< 5.4\times 10^{-4} \,@\,95\%$CL, now closer to the range of values allowed in the Standard Model (SM).
The possible implications of this measurement on the determination of unknown parameters of the SM, however, seem nowadays much weaker than they appeared in the past. The hope of pinning down the mass of the top quark, $m_t$, from the inclusive process lost ground as soon as it became clear how important a role was played by the QCD corrections in the $\BR({b\to s \gamma})$ [@QCD; @GRSPWI; @OTHERS; @GRICHO; @MISIAK]. The size of these corrections at the leading order (LO) in QCD brings in a rather severe uncertainty on the theoretical determination of this branching ratio, which we denote in the SM as $\BR({b\to s \gamma})\vert_{\rm SM}$. This uncertainty, primarily due to the unknown value of the scale $\mu$ at which the strong coupling constant $\alpha_{\rm s}(\mu)$ should be evaluated, amounts roughly to a factor of two [@ALIGR]. It has been recently observed that the only new outcome of the measurement of $\BR({b\to s \gamma})\vert_{\rm SM}$ may be the determination of the value of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element $K_{\rm ts}$ [@ALIGR]. The predictiveness of such a measurement will clearly improve once the next-to-leading order (NLO) QCD corrections to the SM effective Hamiltonian are made available.
The poor knowledge of $\BR({b\to s \gamma})\vert_{\rm SM}$ cannot be overlooked when one attempts to use this decay to determine the value of parameters present in extensions of the SM. It is undoubted, by now, that this decay can be quite sensitive to the presence of particles and interactions predicted in models which enlarge the SM Hamiltonian. Nevertheless, given the imminent detection of ${b\to s \gamma}$, the question one should try to answer, as precisely as possible, is to what extent relevant portions of the parameter spaces of these models can actually be excluded.
Several papers have recently appeared rediscussing the implications of this decay for supersymmetric models [@HEWETT; @BARGER; @DIAZ; @OSHIMO; @TANIMOTO; @BARGIUD; @NANOPOU]. In particular, it can be inferred from references [@HEWETT] and [@BARGER] that the present CLEO upper limit already puts strong constraints on the mass of the supersymmetric charged Higgs, $m_{H^\pm}$, independently of the value of ${\tan \beta}$, the ratio of the two vacuum expectation values, $v_1$, $v_2$, present in this model (${\tan \beta}=v_2/v_1$). These constraints would almost pre-empt the charged Higgs searches at LEP II, close the possibility of the top quark decay $t\to bH^+$, and leave the region of parameter space $({\tan \beta},m_{H^\pm})$ inaccessible at SSC/LHC. The conclusions reached in these papers rely on the assumption that the charged Higgs contribution to the one-loop diagrams mediating the ${b\to s \gamma}$ decay is the dominant one among all the supersymmetric contributions.
Five different sets of contributions to the decay ${b\to s \gamma}$ are present in supersymmetry. They can be classified according to the virtual particles exchanged in the loop: [*a)*]{} the SM contribution with exchange of $W^-$ and up-quarks; [*b)*]{} the charged Higgs contribution with $H^-$ and up-quarks; [*c)*]{} the chargino contribution with $\widetilde\chi^-$ and up-squarks (${\widetilde}u$); [*d)*]{} the gluino contribution with $\tilde g$ and down-squarks (${\widetilde}d$); and finally [*e)*]{} the neutralino contribution with $\widetilde\chi^0$ and down-squarks. A complete evaluation of these contributions was performed in ref. [@US] within the framework of the Minimal Supersymmetric Standard Model (MSSM) with radiatively induced breaking of $SU(2)\times U(1)$.
The MSSM is the low-energy remnant of spontaneously broken $N=1$ supergravity theories with: i) canonical kinetic terms (flat Kähler metric) for all the scalar fields; ii) no extra superfields besides those needed for a minimal supersymmetric version of the SM; iii) no baryon- and/or lepton-number violating terms in the superpotential. These restrictions lead to a simplified structure of the soft supersymmetry-breaking terms such that the overall number of parameters of the MSSM, in addition to the gauge and yukawa couplings, can be reduced to [*five*]{}. They are: i) $\mu$, the dimensional parameter in the superpotential which couples the superfields containing the two Higgs doublets; ii) $m$, the common soft breaking mass for all the scalars of the theory; iii) $M$, the common soft breaking gaugino mass; iv) $A$, and v) $B$, the dimensionless parameters appearing respectively in the trilinear and bilinear soft breaking scalar terms obtained by taking the scalar components of the corresponding terms in the superpotential. In the presence of a flat Kähler metric, $B$ and $A$ are connected through the relation $B=A-1$. The requirement that the breaking of the gauge group $SU(2) \times U(1)$ be induced by renormalization effects enforces functional relations among these initial parameters and the scale of the electroweak breaking. Thus, the number of independent parameters, in addition to those present in the SM, is only [*three*]{}. We choose them to be, as in [@US], $m,M,{\tan \beta}$.
The entire low-energy spectrum and all the couplings present in this model can be expressed in terms of these parameters. The calculation of the individual contributions to the total amplitude ${\cal A}({b\to s \gamma})\vert_\MSSM$ and the final branching ratio $\BR({b\to s \gamma})\vert_\MSSM$ is then relatively straightforward. Results of this calculation were presented in [@US] only for the case of $m_t = 130\GeV$ and ${\tan \beta}= 2,8$.
It was observed there that the two main supersymmetric contributions to $\BR({b\to s \gamma})\vert_\MSSM$, competitive with the SM exchange of $W^-$ and top quark, were [*b)*]{} and [*c)*]{}. The contribution [*d)*]{}, which had been cherished in the past as the most likely mechanism to produce supersymmetric enhancements of flavour changing neutral current (FCNC) processes, was found to be smaller. The contribution [*e)*]{} appeared to be totally negligible. It was also noticed that, in the two cases studied, the largest value of $\BR({b\to s \gamma})\vert_\MSSM$ was obtained in a region of parameter space where the Higgs contribution [*b)*]{} was, indeed, the dominant one.
Destructive interferences among the abovementioned classes of contributions to the amplitude ${\cal A}({b\to s \gamma})\vert_\MSSM$ were observed in non-trivial portions of the supersymmetric parameter space. Nevertheless, the total results obtained for $\BR({b\to s \gamma})\vert_\MSSM$ almost always showed an effective enhancement of the SM prediction. Small suppressions were observed, for ${\tan \beta}=2$, only in a tiny region at the higher end of the range of values considered for the parameter $m$. Finally, the band of possible values obtained for $\BR({b\to s \gamma})\vert_\MSSM$, in the case ${\tan \beta}=8$, had a width, due to the variation of the remaining free parameter $M$, which was fairly constant for increasing $m$. Branching ratios exceeding the SM prediction by more than a factor of two could still be observed at the higher end of $m$, where the Higgs contribution is rather small.
These last two occurrences can be easily explained if one neglects for a moment the contributions [*d)*]{} and [*e)*]{}, which played a minor role in the two cases studied in [@US]. Since the Higgs contribution to ${\cal A}({b\to s \gamma})\vert_\MSSM$ always has the same sign as the SM contribution, it is clear that the small suppressions of $\BR({b\to s \gamma})\vert_\MSSM$ with respect to the SM prediction observed for ${\tan \beta}= 2$, and the still sizable enhancement present at large values of $m$ for ${\tan \beta}= 8$, are due to negative and positive chargino contributions bigger in absolute size than the Higgs contribution.
These observations already undermine the main assumption of refs. [@HEWETT; @BARGER]. The dominance of one of the supersymmetric contributions over the remaining ones depends strongly on the region of parameter space considered. Therefore, in general, the burden of restrictions which experimental findings may impose on the allowed values of $\BR({b\to s \gamma})\vert_\MSSM$ has to be shared by all the supersymmetric contributions. The question of where and if these restrictions can be translated into clear-cut bounds on specific masses still remains open.
Already in ref. [@DIAZ], where only the Higgs contribution to ${b\to s \gamma}$ was considered, it was shown that a consistent inclusion of radiative corrections to the tree level Higgs potential brings back into the calculation of the branching ratio the dependence on supersymmetric masses and couplings and weakens the strong predictions of [@HEWETT; @BARGER], at least for large enough values of ${\tan \beta}$.
The possibility of (positive and negative) chargino contributions sizably exceeding, in absolute value, not only the Higgs contribution, but also the SM one, for relatively large values of ${\tan \beta}$, ${\tan \beta}\gtap 10$, was recently emphasized in ref. [@OSHIMO]. It was also shown that, always for ${\tan \beta}\gtap 10$, the gluino contribution may not be so small, leading therefore to further enhancements of $\BR({b\to s \gamma})\vert_{\rm MSSM}$. The aim of this paper is to extend the analysis of ref. [@US] to regions of parameter space wide enough to allow: i) significant answers to the questions raised in [@HEWETT; @BARGER]; ii) confirmations of the results obtained in [@OSHIMO]; iii) an investigation of the possibility that these results may threaten the chargino searches at LEP II. Two distinct aspects enter the calculation of $\BR({b\to s \gamma})\vert_\MSSM$: the calculation of the supersymmetric mass spectrum to be used as input to $\BR({b\to s \gamma})\vert_\MSSM$ and the actual calculation of the branching ratio. For details on these two aspects, the reader is referred to refs. [@US; @IO] and [@US], respectively. Nevertheless, a few points for each of them are reported and emphasized in Sects. 2 and 3, in order to set the notation and to clarify the approximations and assumptions made in this analysis. Furthermore, some features of the supersymmetric parameter space, relevant for the ${b\to s \gamma}$ decay, are described in Sect. 3. A thorough discussion of the results obtained is given in Sect. 4, followed by the conclusions. Finally, a list of misprints/errors for refs. [@US; @IO] is given in the Appendix.
The Supersymmetric Mass Spectrum
================================
Notation, Procedure and Inputs
------------------------------
We briefly outline in this section the procedure used in the calculation of the supersymmetric mass spectrum to be inputted in $\BR({b\to s \gamma})\vert_\MSSM$. This calculation is performed within the framework of the MSSM with spontaneous breaking of the gauge group $SU(2)\times U(1)$ induced by renormalization effects.
At the electroweak scale, the supersymmetric mass spectrum is described by a lagrangian which we denote as ${\cal{L}}_\MSSM(M_Z)$. As for the calculation of $\BR({b\to s \gamma})\vert_\MSSM$, this is nothing more than a SM lagrangian extended by the introduction of new fermion and scalar particles and of new renormalizable interaction terms.
In particular, two scalars are present for each up- and down-quark, the up- and down-squarks ${{\widetilde}u}_i$, ${{\widetilde}d}_i$. Two and one scalars are also associated to each charged and neutral lepton: the charged sleptons and sneutrinos ${{\widetilde}l}_i$, ${{\widetilde}\nu}_j$, respectively. An additional charged scalar Higgs, $H^\pm$, and three neutral Higgs, $H_1^0$, $H_2^0$ (CP-even), and $H_3^0$ (CP-odd), are present. Finally, one has two additional charged fermions, the charginos ${{\widetilde}\chi_k}^-$, related to the supersymmetric partners of the $W$-boson and of the charged Higgs bosons (the $W$-ino and Higgs-ino, respectively), and four neutral fermions, the neutralinos ${{\widetilde}\chi_l}^0$, related to the supersymmetric partners of the three neutral higgs and of the neutral gauge boson. The lightest of all these sets of particles is usually labelled by $i\!=\!j\!=\!l\!=\!1$, except for the lighter chargino and the lightest neutral higgs, conventionally denoted as ${{\widetilde}\chi_2}^-$ and $H_2^0$.
The complexity of ${\cal{L}}_\MSSM(M_Z)$ reduces considerably if one relates it to the simpler form which ${\cal L}_\MSSM$ has at a scale of the order of the Planck mass, ${\cal {L}}_\MSSM(M_P)$. For the specific form of this lagrangian the reader is referred to [@US; @IO]. We shall only recall here that ${\cal {L}}_\MSSM(M_P)$, remnant of a theory with spontaneously broken $N=1$ local supersymmetry after decoupling of gravity, consists of two terms: the globally supersymmetric version of the SM with two Higgs doublets (2HDM) [^2], which we denote as ${\cal {L}}^{^{\ {\rm gl-susy}}}_{\rm 2HDM}(M_P)$, plus a collection of soft terms, explicitly breaking this global supersymmetry, ${\cal {L}}_{\rm soft}(M_P)$.
${\cal {L}}^{^{\ {\rm gl-susy}}}_{\rm 2HDM}(M_P)$ has the minimal particle content (listed above) and the minimal set of interaction terms needed to supersymmetrize the SM. The presence of an extra Higgs doublet, with the corresponding supersymmetric partner, has already been mentioned. An extra bilinear term, with dimensional coupling $\mu$, mixes the two Higgs superfields in the superpotential. No baryon- and lepton-violating terms are included in the superpotential. Finally, canonical kinetic terms are assumed for all scalar superfields in the initial supergravity lagrangian.
The form of the soft breaking terms ${\cal {L}}_{\rm soft}(M_P)$ is a compromise between [*a)*]{} the need of a structure general enough to account for different possible mechanisms (unknown) of spontaneous breaking of the underlying local supersymmetry, [*b)*]{} a criterion of simplicity requiring the introduction of a minimum number of additional parameters. All the scalar and gaugino mass terms in ${\cal {L}}_{\rm soft}(M_P)$ are given the same couplings, $m^2$ and $M$, respectively, and the trilinear and bilinear soft-breaking terms, obtained by taking the scalar components of the corresponding terms in the superpotential, appear with the two dimensionless parameters $A$ and $B$. The abovementioned requirement of canonical kinetic terms for all the scalar fields in the underlying theory (flat Kähler metric) relates $A$ and $B$ according to $B=A-1$.
These properties define what is known as MSSM. [*Four*]{} is the number of new parameters of this model, in addition to those present in the SM: $\mu$, $m$, $M$, $A$.
As mentioned, the lagrangian ${\cal {L}}_\MSSM(M_P)$ describes an effective model obtained after decoupling of all the higher dimension operators weighted by inverse powers of $M_P$, in principle not negligible at scales ${\cal O}(M_P)$. This model is embedded in a grand-unified theory, with unification scale $M_X$, about three orders of magnitude below $M_P$. The presence of additional heavy particles associated with the grand-unified gauge group should be taken into account when evolving the parameters of this model to lower scales. The procedure usually adopted is to neglect both types of terms. All other renormalization effects between $M_P$ and $M_X$ are also ignored and ${\cal L}_\MSSM(M_P)$ is simply equated to ${\cal L}_\MSSM(M_X)$. A quantitative estimate of how these effects, if taken into account, could alter the rigid universality of couplings in ${\cal {L}}_\MSSM(M_X)$ and therefore affect the low-energy spectrum does not exist [^3].
We follow here, as in [@US], this conventional procedure and obtain the low-energy spectrum of the supersymmetric particles listed before, as the solution of a set of renormalization group (RG) equations. The initial conditions for these equations can be easily expressed in terms of $\mu,m,M,A$ and the values of gauge and yukawa couplings at $M_X$. The further request that the mass parameters for the scalar particles evolve down to the electroweak scale in such a way as to induce the breaking of $SU(2) \times U(1)$ enforces functional relations among the four parameters of the model and allows one to eliminate some of them.
Given the complexity of the full low-energy scalar potential, one starts imposing that the correct vacuum is recovered through the minimization of a subset of it, i.e. the Higgs potential, which at low energy reads: $$\begin{aligned}
V_{\rm Higgs} &=& \mu_1^2\,\vert H_1\vert^2 + \mu_2^2\,\vert H_2\vert^2 - \mu_3^2 \left(\epsilon_{ij} H^i_1 H^j_2 + {\rm h.c.}\right) \\
& & +\, \frac{1}{8}\left({{g^2 + g^{\prime 2}}}\right) \left(\vert H_1\vert^4 + \vert H_2\vert^4 \right) + \frac{1}{4}\left({{g^2 - g^{\prime 2}}}\right) \vert H_1\vert^2 \vert H_2\vert^2 - \frac{1}{2}\, g^2 \left\vert \epsilon_{ij} H^i_1 H^j_2 \right\vert^2 ,\end{aligned}$$ \[higgspot\] where $H_1$, $H_2$ are the two Higgs doublets, $g$ and $g'$ denote the $SU(2)$ and $U(1)$ gauge coupling constants, respectively, and $\epsilon_{12}=1$. The terms in (\[higgspot\]) relative to the neutral Higgs fields in $H_1$ and $H_2$ are explicitly reported in [@US]; see note in the Appendix.
The three mass parameters $\mu_1^2,\mu_2^2,\mu_3^2$, at $M_X$ given by $$\mu_1^2(M_X) = \mu_2^2(M_X) = m^2+\mu^2, \qquad \mu_3^2(M_X) = -\left( A-1 \right) m\,\mu,$$ are crucial for the breaking of $SU(2)\times U(1)$. The requirement that the minimization of the potential (\[higgspot\]) yields the correct vacuum provides us with two relations linking the two vacuum expectation values $v_1,v_2$ (with $v_2$ and $v_1$ giving rise to the mass of up and down quarks, respectively) to $\mu_1^2,\mu_2^2,\mu_3^2$ and ultimately to the initial parameters $\mu,m,M,A$. The two conditions obtained through the minimization of (\[higgspot\]) can therefore be used to trade two of these initial parameters, as for example $\mu$ and $A$, for $M_Z$ ($M_Z^2=\frac{1}{2} ({{g^2 \!+ g^{\prime 2}}})(v_1^2\!+v_2^2)$) and ${\tan \beta}$.
Thus, one is left with ${\tan \beta}$ and $(m,M)$ as independent parameters of the MSSM, in addition to those already present in the SM. The choice of $m$ and $M$ as independent parameters and $\mu$ and $A$ as derived ones is obviously completely arbitrary. For fixed values of $m_t$ and ${\tan \beta}$, not all points in the 2-dimensional space $(m,M)$ provide a physical realization of the MSSM.
To begin with, the minimization of the potential (\[higgspot\]) may not provide an adequate approximation to the minimization of the full scalar potential. One has to guarantee that no other dangerous minima breaking charge and/or colour appear below the correct one. To this aim one can examine each point of the $(m,M)$-space by means of a set of necessary conditions, among which only those relative to not too large yukawa couplings can become sufficient [@GUNHABSHER]. So far, this is still the best known procedure to avoid charge- and colour-breaking minima and the one also used for the present analysis. For all the other tests which each point of the space $(m,M)$ has to pass before being accepted as a viable one, the reader is referred to [@US]. When a point qualifies as such, $\mu$ and $A$ can be evaluated through the minimization conditions of (\[higgspot\]). Once the gauge and yukawa couplings at $M_X$ are also calculated, the initial conditions for the RG equations relative to the full low-energy supersymmetric spectrum are completely specified.
No corrections to the RG-improved tree-level Higgs potential (\[higgspot\]) are included in this analysis. This approximation may not be completely satisfactory in some regions of the supersymmetric parameter space, in particular in those where some fine-tuning of parameters may be required. We try, in general, to avoid these regions; see the choice of values for ${\tan \beta}$, for $m_t=150\,$GeV. For the remaining ones, we believe that this procedure can provide good indications of the complex interplay of the different contributions to ${b\to s \gamma}$. Within this approximation, the running masses of the physical Higgs bosons have the simple structure: $$m_{H_1^0,H_2^0}^2 = \frac{1}{2} \left[\, m_{H_3^0}^2 + M_Z^2 \pm \sqrt{\left(m_{H_3^0}^2 + M_Z^2\right)^2 - 4\, m_{H_3^0}^2 M_Z^2 \cos^2 2\beta}\; \right] ,$$ \[h12mass\] $$m_{H^\pm}^2 = M_W^2 + m_{H_3^0}^2,$$ \[hpmass\] where, in turn, $m_{H_3^0}^2$ is given by a combination of the mass parameters in (\[higgspot\]): $$m_{H_3^0}^2 = \mu_1^2 + \mu_2^2 = \frac{2\,\mu_3^2}{\sin 2\beta} ,$$ \[h3mass\] with the second equality induced by the requirement of radiative breaking of $SU(2)\times U(1)$.
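For orientation, the tree-level relations (\[h12mass\])–(\[h3mass\]) are easy to evaluate numerically; the short sketch below is only an illustration and uses rounded values of $M_Z$ and $M_W$.

```python
# Illustration only: tree-level Higgs masses from (h12mass)-(hpmass).
from math import sqrt, cos, atan

MZ, MW = 91.2, 80.2   # GeV, rounded

def tree_level_higgs_masses(m_H3, tan_beta):
    c2b = cos(2.0 * atan(tan_beta))
    s = m_H3**2 + MZ**2
    disc = sqrt(s**2 - 4.0 * m_H3**2 * MZ**2 * c2b**2)
    m_H1 = sqrt(0.5 * (s + disc))     # heavier CP-even state H_1^0
    m_H2 = sqrt(0.5 * (s - disc))     # lighter CP-even state H_2^0
    m_Hpm = sqrt(MW**2 + m_H3**2)     # charged Higgs
    return m_H1, m_H2, m_Hpm

print(tree_level_higgs_masses(m_H3=200.0, tan_beta=8.0))
```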
The values of the gauge and yukawa couplings at $M_X$ are the remaining ingredients needed in order to integrate the RG equations for the supersymmetric parameters. If no threshold effects for supersymmetric particles are considered, as done in the present analysis, gauge and yukawa couplings obey a set of RG equations not coupled to those for the supersymmetric parameters. Supersymmetric particles contribute to the value of their RG equation coefficients, at energies above $M_{\rm SUSY}$. Below this scale the RG equations for the 2HDM are used. At this same scale, chosen here to be $2 M_Z$, the downward evolution of all the remaining supersymmetric parameters is also stopped.
Thus, following this procedure, the evolution of the low-energy couplings $$\as(M_Z) = 0.114, \qquad \sin^2 \theta_W = 0.233, \qquad \alpha_{\rm em}(M_Z)= 1/128$$ \[input\] allows one to obtain the grand-unification scale $M_X$ and the common value of the gauge couplings at $M_X$, $\alpha_{\rm GUT}$. The chosen value of $\as(M_Z)$ in (\[input\]), small when compared with the LEP results [@ALPHAS], is well within the currently allowed values of $\as(M_Z)$; see the world average $\as(M_Z) = 0.118 \pm 0.007$ reported in [@ALPHAS]. Changes in the value of $\as(M_Z)$ may produce (slightly) different values of $M_X$ and $\alpha_{\rm GUT}$. These changes are, however, almost inconsequential for the evaluation of the supersymmetric mass spectrum. Since we are interested in the global features of this spectrum, obtained over a wide region of the $(m,M)$-space, the effect of a variation of $\as(M_Z)$ can be compensated by a small shift in the value of $M$.
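A rough idea of the numbers involved can be obtained already at one loop. The sketch below is only an illustration: it runs the GUT-normalized inverse couplings with the one-loop MSSM coefficients $b_i = (33/5,\,1,\,-3)$ directly from $M_Z$, ignoring the two-loop terms and the 2HDM running below $M_{\rm SUSY}$ used in the actual calculation.

```python
# Illustration only: one-loop estimate of M_X and alpha_GUT from the inputs (input).
from math import exp, pi

alpha_em, sin2w, alpha_s, MZ = 1.0 / 128.0, 0.233, 0.114, 91.2
inv1 = (3.0 / 5.0) * (1.0 - sin2w) / alpha_em   # GUT-normalized U(1): (3/5) cos^2(th_W) / alpha_em
inv2 = sin2w / alpha_em                         # SU(2)
inv3 = 1.0 / alpha_s                            # SU(3)
b1, b2, b3 = 33.0 / 5.0, 1.0, -3.0              # one-loop MSSM coefficients

t = 2.0 * pi * (inv1 - inv2) / (b1 - b2)        # t = ln(M_X / M_Z), where alpha_1 = alpha_2
MX = MZ * exp(t)
inv_gut = inv2 - b2 * t / (2.0 * pi)
inv3_at_MX = inv3 - b3 * t / (2.0 * pi)

print("M_X ~ %.1e GeV, alpha_GUT ~ 1/%.1f, alpha_3^-1(M_X) ~ %.1f" % (MX, inv_gut, inv3_at_MX))
```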
The procedure for the evaluation of the yukawa couplings at $M_X$ is, in principle, similar. After fixing the parameter $m_t(M_Z)$, a large uncertainty exists on the value of the running mass $m_b(M_Z)$, which corresponds to the wide range of validity for the physical bottom quark mass, $4.6 \!- \!5.3\GeV$ [@PDB]. This uncertainty can be exploited to obtain unification of the yukawa couplings at $M_X$. The usually imposed $SU(5)$-type of unification requires equality of bottom and tau couplings, $h_b(M_X) = h_\tau(M_X)$. It is not clear how stringently this condition has to be imposed. The consequences of a strict unification have been recently analyzed [@YUKUNIF]. Two-loop evolution equations for gauge and yukawa couplings are used in these calculations as well as an improved approximation of $M_{\rm SUSY}$, depending on one of the two free supersymmetric parameters remaining after fixing $m_t$ and ${\tan \beta}$. Strong constraints are obtained for the allowed values of ${\tan \beta}$, for $m_t$ fixed and the physical bottom mass varying within the range $4.6-5.3\,$GeV.
We adopt here the more conservative approach of ref. [@OLECH]. It was shown in this paper that a relaxation of the yukawa couplings unification at the level of 15-20% still allows all possible values of ${\tan \beta}$. This procedure yields the same SUSY mass spectrum which one obtains by imposing a more precise yukawa couplings unification (at the 1% level), but allowing the physical bottom mass to go beyond the range of $4.6 \!-\!5.3\GeV$. As in [@OLECH], we consider running bottom masses, at $M_Z$, as high as $3.6\!-\!3.7\GeV$, for $m_t=150\GeV$.
Appropriate lower bounds, corresponding as closely as possible to the current experimental limits, have to be imposed on the supersymmetric masses thus calculated. Since the negative searches of supersymmetric particles rely on assumptions which may not be valid throughout the full range of the MSSM parameter space, one can only try to obtain a rough match of the experimental situation.
While it seems realistic to implement the limits at $45\GeV$ imposed by LEP I on almost all particles, the CDF limits on squark and gluino masses pose some problems [@CDFBOUNDS]. The limits, at $126$ and $141\GeV$ respectively, are obtained under the assumptions of a massless photino, of complete degeneracy among squarks, and of squarks and gluinos decaying directly into the lightest supersymmetric particle, without intermediate decays into charginos and neutralinos. When the possibility of these cascade decays is considered, for one particular choice of the supersymmetric parameters, it is found that the lower limit on squark masses, now a function of $m_{{\widetilde}g}$, disappears for big enough values of the gluino mass [@CDFBOUNDS]. Charginos and neutralinos can be light enough in the MSSM to give substantial branching ratios for cascade decays of not too light squarks and gluinos. Thus, we adopt the strategy of imposing an “average bound” of $120\GeV$ and $100\GeV$ for gluino and squark masses, respectively.
We make an exception for the lightest eigenvalue of the up-squark mass matrix, or stop (${{\widetilde}u}_1$). Given the presence of the parameter $m_t$ in the left-right entries of this matrix, stops can indeed be rather light. The CDF limit does not apply to this case since the decay mode ${{{\widetilde}u}_1} \to t + {{{\widetilde}\chi}_1}^0$ is forbidden or highly disfavoured (depending on the value of $m_t$) and one can only rely on the limits from LEP. This limit has only recently been brought up to $39\GeV$ [@STOPBOUND]. In view of possible improvements we already impose a lower limit of $45\GeV$.
The limit on the mass of the lightest eigenvalue of the down-squark mass matrix, or sbottom (${{\widetilde}d}_1$), also deserves some comments. It is sometimes argued that large values of ${\tan \beta}$ in the left-right entries of this matrix can give rise to sbottom masses as small as, or smaller than, the stop mass. This point, however, has never been explicitly checked in any previous supersymmetric search. We shall report later on, at the end of this section, on the negative evidence we obtain for light ${{\widetilde}d}_1$, at least up to the value of ${\tan \beta}$ we consider. Moreover, since the CDF searches, based on the decay mode ${{{\widetilde}d}_1} \to b + {{{\widetilde}\chi}_1}^0$, should be effective in this case, we also impose the limit of $100\GeV$ on this mass.
Finally, we assume a lower bound of $45\GeV$ on the mass of the lightest neutral Higgs $m_{H_2^0}$ (for a summary of the lower limits obtained for this mass by the four LEP Collaborations, see for example [@HIGGSBOUND]) and a bound of $20\GeV$ on the lightest neutralino mass (induced in this model by the chargino limit). No lower bound has to be imposed on the charged Higgs mass since the theoretical low-energy value for $m_{H^\pm}$ (\[hpmass\]) already exceeds the lower limit coming from LEP.
In short, the lower bounds imposed in the remainder of this paper are: $$\begin{aligned}
& & m_{{\widetilde}g} > 120\GeV, \quad m_{{{\widetilde}d}_1,\,{{\widetilde}u}_2} > 100\GeV, \quad m_{{{\widetilde}u}_1} > 45\GeV, \nonumber\\
& & m_{{{\widetilde}\chi}_2^-} > 45\GeV, \quad m_{{{\widetilde}\nu}_1} > 45\GeV, \quad m_{{{\widetilde}l}_1} > 45\GeV, \nonumber\\
& & m_{H_2^0} > 45\GeV, \quad m_{{{\widetilde}\chi}_1^0} > 20\GeV. \ \[expbounds\]\end{aligned}$$
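As a concrete illustration of how these cuts act in practice, the short Python sketch below applies the bounds of (\[expbounds\]) to a candidate spectrum; it is not the authors' code, and the dictionary keys are hypothetical names chosen here only for readability.

```python
# Minimal sketch (not from the paper): apply the lower bounds of (expbounds)
# to a candidate low-energy spectrum.  Masses are in GeV; the key names are
# illustrative placeholders.

LOWER_BOUNDS_GEV = {
    "gluino": 120.0,
    "sbottom1": 100.0, "sup2": 100.0,    # heavier squarks
    "stop1": 45.0,                       # lightest up-squark
    "chargino1": 45.0, "sneutrino1": 45.0, "slepton1": 45.0,
    "h0_light": 45.0,                    # lightest neutral Higgs
    "neutralino1": 20.0,
}

def passes_lower_bounds(spectrum):
    """Return True if every listed mass exceeds its lower bound."""
    return all(spectrum.get(name, float("inf")) > bound
               for name, bound in LOWER_BOUNDS_GEV.items())

example = {"gluino": 250.0, "sbottom1": 180.0, "sup2": 160.0, "stop1": 90.0,
           "chargino1": 60.0, "sneutrino1": 110.0, "slepton1": 95.0,
           "h0_light": 70.0, "neutralino1": 35.0}
print(passes_lower_bounds(example))  # True for this illustrative point
```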
Parameter Space
---------------
In the remainder of this section we show the supersymmetric parameter space explored for $m_t=150\GeV$ and different values of ${\tan \beta}$, in general from $3$ to $30$, and up to the value $35$ when searching for $m_{H^\pm} < m_t$. The value $m_t=150\,$GeV seems representative enough to discuss typical features of the MSSM and the results to be expected for $\BR({b\to s \gamma})\vert_\MSSM$. The range of ${\tan \beta}$ is also sufficiently wide to allow a survey of possible realizations of the MSSM, from the regime $h_t >> h_b$ up to the one where $h_t \gtap h_b$. Smaller values of ${\tan \beta}$, as well as larger ones, for which $h_t$ approaches $h_b$ too closely, are avoided. The error due to the fact that no loop corrections to the tree-level potential (\[higgspot\]) are included in this analysis may not be negligible in these cases.
Thus, for this chosen value of $m_t$ and fixed values of ${\tan \beta}$, the region of the 2-dimensional space $(m,M)$ within the limits $0\!<\!m\!<\!500\GeV$, $-250\!<\!M\!<\!250\GeV$ is scanned picking up points at regular intervals of $6\GeV$ in both directions. Each point is then tested to verify whether it allows physical realization of the MSSM and the full low-energy spectrum is calculated if this turns out to be the case.
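The scan just described can be summarized by the following Python sketch. It only illustrates the loop structure (a $6\GeV$ grid over the stated ranges); the functions `realizes_mssm` and `low_energy_spectrum` are hypothetical stand-ins for the renormalization-group machinery of the previous section.

```python
# Illustrative sketch of the (m, M) grid scan described in the text (not the authors' code).

def scan_parameter_space(tan_beta, m_top=150.0, step=6.0,
                         realizes_mssm=lambda m, M, tb, mt: True,
                         low_energy_spectrum=lambda m, M, tb, mt: [{}]):
    solutions = []
    m = step
    while m < 500.0:
        M = -250.0 + step
        while M < 250.0:
            if realizes_mssm(m, M, tan_beta, m_top):
                # each (m, M) point may admit several (mu, A) solutions
                for spectrum in low_energy_spectrum(m, M, tan_beta, m_top):
                    solutions.append(((m, M), spectrum))
            M += step
        m += step
    return solutions

print(len(scan_parameter_space(tan_beta=15.0)))  # number of sampled grid points
```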
We show in fig. \[parspace\] the results of this scanning. No realizations of the MSSM with radiative breaking of $SU(2)\times U(1)$ are obtained in the white areas of this figure, while the regions where these realizations are possible are covered by dots. It should be mentioned here that the points shown in fig. \[parspace\] can be “multiple” points. Given the non-linearity of the relations among the original parameters induced by the requirement of radiative breaking of $SU(2)\!\times\!U(1)$ different solutions for the pair $(\mu,A)$ are possible when $m$, $M$, and ${\tan \beta}$ are fixed. The dark and light dots distinguish regions allowed and forbidden by the lower limits (\[expbounds\]). The observation that wider regions of parameter space are excluded by the same bounds (\[expbounds\]), for increasing values of ${\tan \beta}$, suggests the appearance of lighter supersymmetric masses when going from ${\tan \beta}=3$ to 30.
The points of fig. \[parspace\] are plotted in fig. \[mumtwo\] in the plane $(\mu_R,M_2)$, where $\mu_R$ is the renormalized value of the parameter $\mu$ and $M_2$ the renormalized value of the $SU(2)$-gaugino mass $M$. Of these two parameters, which enter the chargino mass matrix, $M_2$ is related to $M$ by a constant factor (depending on the input (\[input\])) $\sim0.8$ and $\mu_R$ is not too dissimilar from $\mu$. In order to focus on the smaller values of $\mu_R$, not all the points of fig. \[parspace\] are shown in this figure: the full range $-250<M<250\,$GeV is covered in the plots of fig. \[mumtwo\], whereas restrictions on the $m$-range are imposed for the different values of ${\tan \beta}$. When the full range $0<m<500\,$GeV is considered, values of $\vert \mu_R\vert$ as big as $700\GeV$ are obtained for ${\tan \beta}=3$ and $500\GeV$ for the larger values of ${\tan \beta}$. Superimposed in fig. \[mumtwo\] are the contour lines for $m_{{{\widetilde}\chi_2}^-} = 45\GeV$ (dashed lines) and $m_{{{\widetilde}\chi_2}^-} = 90\GeV$ (solid lines), respectively the lower limit on the chargino mass obtained at LEP I and the limit which LEP II may impose.
The strong correlations introduced by the radiative breaking of $SU(2) \times U(1)$ are visible in this figure. For ${\tan \beta}=9$-$30$ and the smallest values of $m$ considered, the solution $\mu \gtap M$ is obtained. For increasing values of $m$, $\vert\mu\vert$ increases for $M<0$ and decreases for $M>0$, creating the diagonal bands of fig. \[mumtwo\].
We observe that for ${\tan \beta}\gtap 9$ these bands are wider for $M<0$ than for $M>0$. The solutions obtained for ${\tan \beta}=3$ are altogether different: the bands one obtains for $\mu_R>0$ are very narrow, whereas large distributions of points are present for negative $\mu_R$. For ${\tan \beta}\gtap 9$ a different type of solution is also present: small values of $\mu$ are obtained even for relatively large values of $M$ and not too large values of $m$. These solutions tend to align along smaller and smaller values of $\mu$ for increasing ${\tan \beta}$.
The differences observed for the solutions obtained below and above ${\tan \beta}= 9$ affect the distributions of masses obtained; see discussion on charged Higgs and chargino masses. In the following subsections we shall show the typical ranges of masses obtained for the supersymmetric particles relevant to the decay ${b\to s \gamma}$ and the location of some interesting intervals of these masses in the $(m,M)$-plane.
Charged Higgs Mass
------------------
We start here with the charged Higgs. We show in fig. \[cnhiggs\] the position occupied in the $(m,M)$-plane by intervals of progressively heavier ${H^\pm}$. Masses below $100\GeV$ are indicated by the vertical lines closer to the forbidden regions, whereas successive intervals of masses, $100\!-\!150\GeV$, $150\!-\!200\GeV$, $200\!-\!250\GeV$ and $250\!-\!300\GeV$, form layers, distinguished in fig. \[cnhiggs\] by different types of lines, wrapped one on top of the other around the innermost one. By adding additional layers with larger values of masses, all the points in fig. \[parspace\] would be recovered.
The regular structure obtained for the $m_{H^\pm}$ contour levels can be understood upon inspection of the decomposition of $m_{H^\pm}^2$ in terms of the four initial parameters: $$m_{H^\pm}^2 = c_1\, \mu^2 + c_2\, M^2 + c_3\, m^2 + c_4\, (Am)^2 + c_5\, AMm\ . \ \[higgsdecomp\]$$
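A toy evaluation of this decomposition is sketched below in Python, assuming the form of (\[higgsdecomp\]) as written above; the coefficients `c1`–`c5` are placeholders (the actual values depend on ${\tan \beta}$ and are not reproduced here), so the numbers are purely illustrative.

```python
import math

# Toy evaluation of (higgsdecomp): m_{H^+-}^2 as a quadratic form in (mu, M, m, A*m).
# The coefficients c1..c5 below are placeholders, not the tabulated values of the paper.

def m_charged_higgs(mu, M, m, Am, c=(1.0, 0.4, 0.8, 0.1, 0.05)):
    c1, c2, c3, c4, c5 = c
    m_sq = c1 * mu**2 + c2 * M**2 + c3 * m**2 + c4 * Am**2 + c5 * Am * M
    return math.sqrt(m_sq) if m_sq > 0.0 else float("nan")

print(m_charged_higgs(mu=300.0, M=150.0, m=200.0, Am=100.0))  # ~364 with these toy inputs
```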
The values of the coefficients $c_i$, reported explicitly for $\mu_1^2-m^2$ and $\mu_2^2-m^2$ ($m_{H^\pm}^2= \mu_1^2+\mu_2^2 +M_W^2$) in the following table,
and the requirement $\vert \mu\vert \gtap \vert M \vert$, imposed by the radiative breaking of $SU(2)\times U(1)$ (see fig. \[mumtwo\]), explain the almost semi-circular shapes obtained for the $m_{H^\pm}$ contour levels.
The steady decrease of the coefficients for $m^2$ and the overall decrease of the weight of the remaining contributions, for increasing ${\tan \beta}$, explains the other interesting feature shown in fig. \[cnhiggs\]: for the larger values of ${\tan \beta}$, the low ranges of masses invade almost completely the $(m,M)$-region considered here, while for ${\tan \beta}=3$, the same region can be covered only when going to masses $m_{H^\pm}$ as big as $850\!-\!900\GeV$.
The first two layers in fig. \[cnhiggs\] represent roughly the allowed phase space for the decay $t\to bH^+$, for $m_t=150\GeV$. The interplay between the expansion of the mass intervals in fig. \[cnhiggs\] and the differences in shape and size of the regions removed by the bounds (\[expbounds\]) for increasing ${\tan \beta}$, makes the decay $t\to bH^+$ less “probable” not in the case of ${\tan \beta}=3$, where less points are found in the first two layers, but for some intermediate value of ${\tan \beta}$ between 3 and 15.
We plot in fig. \[cnthb\] this phase space in the plane $(m_{H^+},\tan\beta)$, for specific choices of ${\tan \beta}$, from $3$ to $35$. The values of $m_b(M_Z)$ considered here range from $3.5$ to $3.7\GeV$. It will be shown in the next section, in fig. \[rsthb\], how this phase space is reduced by the requirement that $\BR({b\to s \gamma})\vert_\MSSM$ be limited to the range of values compatible with the SM prediction. Since the region of parameter space probed here is relatively small, we use in this as well as in fig. \[rsthb\] a finer grid with a spacing of only $2\GeV$ while scanning the same portion of the $(m,M)$-plane. The increased number of points obtained and the hidden degree of freedom remaining when ${\tan \beta}$ and $m_{H^+}$ are fixed make each line obtained look almost like a continuous one. The isolated points observed at the beginning of some of these lines are due to the appearance of isolated corners at the edge of each layer when the lower limits (\[expbounds\]) are imposed.
For this set of lower bounds, the value of ${\tan \beta}$ where the decay $t\to bH^+$ is less probable seems indeed to be ${\tan \beta}\sim 9$: starting from ${\tan \beta}=3$, the allowed range of $m_{H^\pm}$ decreases, reaching a minimum for ${\tan \beta}= 9$, and increases then again for larger values of ${\tan \beta}$. The results described here differ from those presented in [@OLECH], where rather heavy charged Higgses are obtained except when $h_b \sim h_t$. This disagreement may be due to the different assumptions in the two calculations, although the precise reason has still to be pinned down.
Chargino and Lightest Up-Squark Masses
--------------------------------------
We consider here how different intervals of masses for the lightest chargino ${{\widetilde}\chi_2}^-$ are distributed in the $(m,M)$-plane. The results are shown in fig. \[cncharg\]. For clarity, only two ranges, $m_{{{\widetilde}\chi_2}^-}\!<\!50\GeV$ and $50\!\leq \!m_{{{\widetilde}\chi_2}^-}\!\leq\!100\GeV$ are shown in this figure. The range with the smaller values of masses occupies the central part of each plane, while the second interval is split almost symmetrically around the previous one. The full $(m,M)$-region considered here is covered with equally symmetric strips of increasingly larger values of $m_{{{\widetilde}\chi_2}^-}$.
The modest dependence on $m$ of the mass intervals shown in fig. \[cncharg\] can be explained by the correlation $\vert \mu \vert \sim \vert M \vert$ observed for small values of $m$ and by the increased probability of having ${{\widetilde}\chi_2}^-$ as a state of pure $W$-ino for increasing $m$ (when $\vert \mu \vert$ can substantially exceed $\vert M\vert$). Enlargements of the almost horizontal strips shown in fig. \[cncharg\] for large values of $m$ are obtained for ${\tan \beta}=9-30$, but are not present for ${\tan \beta}=3$.
The patterns in the correlations between $\mu_R$ and $M_2$ observed in fig. \[mumtwo\] should also explain the striking similarity in the results obtained for ${\tan \beta}=9-30$ (except for a change in size of the region removed by (\[expbounds\]) in the case ${\tan \beta}=30$) and the differences between these results and those obtained for ${\tan \beta}<9$.
We observe also that the constraint $m_{{{\widetilde}\chi_2}^-}> 45\GeV$ is, for not too small $m$, the most severe among the lower bounds (\[expbounds\]), for ${\tan \beta}= 9-30$. In these cases, the shape of the region removed by (\[expbounds\]) corresponds practically to the contour line $m_{{{\widetilde}\chi_2}^-} = 45\GeV$. In contrast, in the case ${\tan \beta}=3$, for not too small $m$, this shape is given by the gluino bound $m_{{\widetilde}g}>120\GeV$. For all values of ${\tan \beta}$, sizable regions of parameter space with $45 <m_{{{\widetilde}\chi_2}^-}<50\GeV$ remain after imposing the lower limits (\[expbounds\]). They are also present at large $m$, where $m_{H^-}$ can range from $850\!-\!900\GeV$, for ${\tan \beta}=3$, to $200\!-\!250\GeV$, for ${\tan \beta}=30$.
It should therefore become clear what was observed in the introduction: the chargino contributions to ${b\to s \gamma}$ can definitely exceed the Higgs contributions for large values of $m$ and small values of ${\tan \beta}$. Chargino contributions exceeding the SM and Higgs contributions can in principle be expected also for larger values of ${\tan \beta}$, provided not too heavy stop squarks are obtained in the same regions.
The distribution of different ranges of mass for the first and second generation of squarks is expected to follow very regular patterns. The expected shapes should be ellipsoidal with a smaller vertical axis (compare with those obtained for $m_{H^\pm}$) due to the large gluino contribution to the renormalization of squark masses. The coefficients for the decomposition of the squared masses of the first and second generation of squarks in terms of the original four supersymmetric parameters (for the choice (\[input\])) are explicitly given in ref. [@IO]. Deviations from these simple patterns are expected for the lightest up- and down-squark mass eigenvalues, due to the presence of potentially large left-right mixings in the squark mass matrices.
The result of the investigation of the lightest up-squark mass matrix is shown in fig. \[cnstop\]. Only the intervals of masses $0\!-\!50\GeV$, $50\!-\!100\GeV$ and $100\!-\!150\GeV$ are plotted here since the figures tend to become rather complex if intervals with larger values of masses are also included. The values of $m_{{\widetilde}u_1}$ can indeed be rather small and for the two largest values of ${\tan \beta}$ considered in this analysis masses $m_{{{\widetilde}u}_1}$ as small as $45\!-\!50\GeV$ appear in coincidence with equally small values of $m_{{{\widetilde}\chi_2}^-}$.
Gluino, Neutralino and Lightest Down-Squark Masses
--------------------------------------------------
We come now to a brief analysis of the masses relevant to the gluino and neutralino contribution to ${b\to s \gamma}$.
The values of $m_{{\widetilde}g}$ we are dealing with in this study are easily obtained through a scaling of $M$ by a factor $\sim 2.6$ (again dependent on the choice (\[input\])).
Patterns of masses absolutely similar to those shown in fig. \[cncharg\] for the mass of the lightest chargino ${{\widetilde}\chi_2}^-$ are obtained for the lightest neutralino ${{\widetilde}\chi_1}^0$, provided the value of each interval of mass is divided by a factor $\sim 2 $.
Gluinos and neutralinos are coupled to down-squarks ${{\widetilde}d}_i$ when exchanged as virtual particles inside the loop mediating the decay ${b\to s \gamma}$. Strong deviations from the simple distributions of masses obtained for the first and second generation of squarks are expected for the sbottom, when large values of ${\tan \beta}$ are considered.
We show in fig. \[cnsbott\] some of the allowed values of masses obtained for the sbottom squark. We search for the intervals $ 0 \leq m_{{{\widetilde}d_1}}\leq 50 \GeV$, $50 \leq m_{{{\widetilde}d_1}}\leq 100\GeV$, $100\leq m_{{{\widetilde}d_1}}\leq 150\GeV$ and $150\leq m_{{{\widetilde}d_1}}\leq 200\GeV$. No small values of this mass, i.e. $0\ltap m_{{{\widetilde}d_1}}\ltap 50\GeV$, are obtained, even for the largest value of ${\tan \beta}$ considered. Moreover, also in the first interval of masses, $50\!-\!100\GeV$, we obtain values of $m_{{\widetilde}d_1}$ clustering quite closely around $\sim 100\GeV$. These results justify a posteriori the lower bound on $m_{{{\widetilde}d}_1}$, imposed in our analysis, independently of ${\tan \beta}$.
Although the shapes obtained are more regular than those observed for $m_{{\widetilde}u_1}$, the presence of the large left-right mixing terms in the down-squark mass matrix, becomes visible for large values of ${\tan \beta}$. Again, a regular expansion of the mass-intervals considered, is observed for increasing values of ${\tan \beta}$.
More variables than the sheer values of the lightest masses, shown in this section, are responsible for the prediction of $\BR({b\to s \gamma})\vert_\MSSM$. Very important is, for example, the size of the off-diagonal terms responsible for the couplings ${ {\widetilde}g}$-${{\widetilde}d}_i$-${{\widetilde}d}_j$ and ${{ {\widetilde}\chi_k}^-}$-${{\widetilde}d}_i$-${{\widetilde}d}_j$ and the degeneracy among the different mass eigenvalues entering in the calculation. The possibility of enhancements/suppressions, which one may guess by simply analyzing the mass spectrum, has therefore to be verified for each point of the supersymmetric parameter space considered.
The Branching Ratio
===================
The decay ${b\to s \gamma}$, like all $b \to s $ transitions, is characterized by the presence of two different scales $m_{\,l}^2 $ and $m_{\,h}^2$, with $m_{\,l}^2 << m_{\,h}^2$. Within the SM, $m_{\,h}^2$ indicates generically the mass of $W$ and $Z$ bosons and of the top quark exchanged as virtual particles, while $m_{\,l}^2 $ is the scale of the remaining light quarks. Large logarithmic terms $\log\left(m_{\,h}^2/m_{\,l}^2\right)$ appear when QCD corrections to this decay are included.
The customary procedure used to properly resum these large logarithms is to rely on the use of the effective Hamiltonian $$H_{\rm eff}(b\to s)= -\,\frac{4\,G_F}{\sqrt{2}}\,\sum_{j} C_{j}(\mu)\, O_{j}(\mu)\ , \ \[effham\]$$ obtained from the Hamiltonian of the underlying theory through an expansion in inverse powers of $m_{\,h}^2$. Except for the link coming from the common scale $\mu$, the dependence on the heavy degrees of freedom is formally separated from the dynamics of the light fields to which the theory is now reduced. This dependence is recovered through insertions on the light fields of the operators in (\[effham\]) with generalized charges, or Wilson coefficients, $C_j(\mu^2/m_{\,h}^2)$. In the following, we indicate operators as $O_{j}(\mu)$, and coefficients as $C_j(\mu)\vert_{\rm SM}$, with a subscript as a reminder of the underlying theory. The normalization of the operators $ O_{j}(\mu)$ is such that the coefficients $C_j(\mu)\vert_{\rm SM}$ match at $\mu =M_W$ the matrix elements obtained in the SM at the zeroth order in $\as$. The value of these coefficients at any scale $\mu$ is then obtained through RG techniques: the evolution down from $M_W$ takes appropriate care of switching on the SU(3) interaction and correctly resums the large logarithms $\log{\mu^2/m_{\,h}^2}$.
At a fixed order in the perturbative expansion, the amplitudes for these $b \to s$ transitions depend on the renormalization scale $\mu$. Although one may want $\mu \sim {\cal O}(m_b)$, the precise value of this scale is unknown. The uncertainty due to this arbitrariness is the biggest one in the theoretical determination of $\BR({b\to s \gamma})\vert_{\rm SM}$ at the LO in QCD and could be reduced if higher order QCD corrections to all elements in (\[effham\]) were included.
Up to eight operators have been found to contribute to the decay ${b\to s \gamma}$. Nevertheless, the basis has often been truncated to subsets of three operators: $$\begin{aligned}
O_{1} &=& (\bar{c}_{L\beta}\, \gamma^{\mu}\, b_{L\alpha})\, (\bar{s}_{L\alpha}\, \gamma_{\mu}\, c_{L\beta}) \nonumber\\
O_{2} &=& (\bar{c}_{L\alpha}\, \gamma^{\mu}\, b_{L\alpha})\, (\bar{s}_{L\beta}\, \gamma_{\mu}\, c_{L\beta}) \nonumber\\
O_{7} &=& m_b\, ( \bar{s}_{\alpha}\, \sigma^{\mu\nu}\, P_R\, b_{\alpha})\, F_{\mu\nu}\ , \ \[basis\]\end{aligned}$$ or alternatively $O_2$, $O_7$ and $O_8$, with $$O_{8} = m_b\, ( \bar{s}_{\alpha}\, \sigma^{\mu\nu}\, P_R\, T^A_{\alpha\beta}\, b_{\beta})\, G^{A}_{\mu\nu}\ . \ \[gluonoper\]$$ Relatively simple and closed forms for the coefficient $C_7(\mu)\vert_{\rm SM}$ are then obtained as functions of the initial values $C_j(M_W)\vert_{\rm SM}$: $$C_7 (\mu)\vert_{\rm SM} = \eta^{-\frac{16}{23}} \left\{ C_7(M_W)\vert_{\rm SM} - \cdots\; C_2(M_W)\vert_{\rm SM} \right\}\ , \ \[corr1\]$$ in the first case, or $$C_7 (\mu)\vert_{\rm SM} = \eta^{-\frac{16}{23}}
\left\{ C_7(M_W)\vert_{\rm SM}
+ \frac{8}{3} \left(
\eta^{\frac{2}{23}} \! - 1 \right) C_8(M_W)\vert_{\rm SM}
- \, \frac{232}{513} \left(
\eta^{\frac{19}{23}} \! -1 \right) C_2(M_W)\vert_{\rm SM}
\right\} {\nonumber}$$ \[corr2\] in the second one. The symbol $\eta$ indicates $\as(\mu)/\as(M_{W})$ and $C_1(M_W)\vert_{\rm SM}$, $C_2(M_W)\vert_{\rm SM}$, $C_7(M_{W})\vert_{\rm SM}$ and $C_8(M_W)\vert_{\rm SM}$ are respectively: $$\begin{aligned}
C_1(M_W)\vert_{\rm SM} & = & 0\ , \nonumber\\
C_2(M_W)\vert_{\rm SM} & = & K_{cs}^{*}K_{cb}\ , \nonumber\\
C_7(M_W)\vert_{\rm SM} & = & -\,{\cal A}^{\gamma}_{\rm SM}\, /\, \cdots\ , \nonumber\\
C_8(M_W)\vert_{\rm SM} & = & -\,{\cal A}^{g}_{\rm SM}\, /\, \cdots\ ,\end{aligned}$$ with $K_{ij}$ elements of the Cabibbo-Kobayashi-Maskawa matrix and ${\cal A}^\gamma_{\rm SM}$ and ${\cal A}^g_{\rm SM}$ given in eqs. 40-44 of ref. [@US]. The branching ratio, depending on $\mu$, is then obtained as $$\BR({b\to s \gamma};\mu)\vert_{\rm SM} = \frac{\Gamma({b\to s \gamma};\mu)\vert_{\rm SM}}{\Gamma(b \to c e \bar{\nu})}\;\BR(b \to c e \bar{\nu})\vert_{\rm exp} \ \[branch\]$$ where the width $\Gamma({b\to s \gamma};\mu)\vert_{\rm SM}$ reads: $$\Gamma({b\to s \gamma};\mu)\vert_{\rm SM} = \cdots\;\bigl\vert C_7(\mu)\vert_{\rm SM}\bigr\vert^{2}\ . \ \[width\]$$
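For orientation, the following Python sketch evaluates the leading-log evolution written in (\[corr2\]), using a simple one-loop running of $\as$ (the analysis in the text uses a more refined input); the initial values of the coefficients are illustrative placeholders, not the ones entering the actual calculation.

```python
import math

# Minimal numerical sketch of the leading-log evolution of eq. (corr2).
# alpha_s is run at one loop only; all inputs below are illustrative placeholders.

def alpha_s(mu, alpha_s_mz=0.118, m_z=91.19, n_f=5):
    """One-loop strong coupling at scale mu (GeV)."""
    b0 = 11.0 - 2.0 * n_f / 3.0
    return alpha_s_mz / (1.0 + b0 * alpha_s_mz / (2.0 * math.pi) * math.log(mu / m_z))

def c7_evolved(c2_mw, c7_mw, c8_mw, mu, m_w=80.2):
    """C_7(mu) as written in eq. (corr2), with eta = alpha_s(mu)/alpha_s(M_W)."""
    eta = alpha_s(mu) / alpha_s(m_w)
    return eta ** (-16.0 / 23.0) * (
        c7_mw
        + 8.0 / 3.0 * (eta ** (2.0 / 23.0) - 1.0) * c8_mw
        - 232.0 / 513.0 * (eta ** (19.0 / 23.0) - 1.0) * c2_mw
    )

# Placeholder initial conditions at M_W:
print(c7_evolved(c2_mw=1.0, c7_mw=-0.19, c8_mw=-0.10, mu=5.0))
```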
Estimates of the size of the uncertainty introduced in the theoretical evaluation of $\BR({b\to s \gamma})\vert_{\rm SM}$ due to different truncations of the basis $\{O_j(\mu)\}$ and of the error with respect to the calculation where the complete basis has been retained, give values never exceeding 10-15% [@GRSPWI; @MISIAK]. The error due to the assumption that only two different scales enter this calculation, whereas the situation $m_{\,l}^2 \!<< \!M_W^2 \!< \!m_t^2$ should be considered, is of roughly the same size [@GRICHO]. The errors introduced by these two types of approximations are well within the uncertainty due to the dependence of $\BR({b\to s \gamma})\vert_{\rm SM}$ on the renormalization scale $\mu$. Fixing, for example, $m_t=140\GeV$ and assuming the SM relation $\vert K_{\rm cb}\vert = \vert K_{\rm ts}\vert$, ratios $\BR({b\to s \gamma};\mu\!=\!2.5\GeV) / \BR({b\to s \gamma};\mu\!=\!10\GeV) $ as big as $1.7$ [@ALIGR] are found [^4].
If to this uncertainty one also adds the one induced by the unknown value of $m_t$ and the uncertainty in the experimentally allowed value of $\vert K_{\rm ts}\vert$, it is easy to conclude that the value of $\BR({b\to s \gamma})\vert_{\rm SM}$ is rather poorly known. The uncertainty of calculations of this branching ratio in extensions of the SM is, in general, expected to increase, matching at best the one for the SM prediction.
No new operators contributing to ${b\to s \gamma}$ are found in the MSSM, nor in the 2HDM, and no modifications due to the presence of extra particles are introduced in the evolution relations (\[corr1\]) and (\[corr2\]). Thus, the obvious change with respect to the SM calculation comes, in both models, in the evaluation of the Wilson coefficients. In particular, in the 2HDM the coefficient $C_7(M_W)$ reads: $$C_7(M_W)\vert_{\rm 2HDM} = -\,\bigl({\cal A}^{\gamma}_{\rm SM}+{\cal A}^{\gamma}_{H^-}\bigr)\, /\, \cdots\ , \ \[coeffhigg\]$$ whereas, in the MSSM, it is: $$C_7(M_W)\vert_\MSSM = -\,\bigl({\cal A}^{\gamma}_{\rm SM}+{\cal A}^{\gamma}_{H^-}+{\cal A}^{\gamma}_{{\widetilde}{\chi}^-}+{\cal A}^{\gamma}_{{\widetilde}g}+{\cal A}^{\gamma}_{{\widetilde}{\chi}^o}\bigr)\, /\, \cdots\ , \ \[coeffsusy\]$$ where ${\cal A}^\gamma_{\rm SM}$, ${\cal A}^\gamma_{H^-}$, ${\cal A}^\gamma_{{\widetilde}{\chi}^-}$, ${\cal A}^\gamma_{{\widetilde}g}$ and ${\cal A}^\gamma_{{\widetilde}{\chi}^o}$, listed in eqs. (40)-(45) of ref. [@US], indicate the five different contributions mediating ${b\to s \gamma}$ in this model. Thus, the uncertainty related to the truncation of the basis $\{O_j(\mu)\}$ and to the renormalization scale dependence is, for both models, the uncertainty of the SM calculation.
Worse, in contrast, is the approximation to two scales, $m_{\,l}^2$ and $m_{\,h}^2$, with $m_{\,l}^2 << m_{\,h}^2$. As shown in the previous section, a rather rich spectrum of masses is obtained in the MSSM and a wide spread of scales can appear in the calculation of $C_7(M_W)\vert_\MSSM$. If one indicates by $m_i^2$ and $m_j^2$ the squared masses of two generic particles exchanged in the 1-loop diagrams, different possibilities such as a) $ m_{\,l}^2 \!<< M_W^2 \! < m_i^2 \! < m_j^2 $, b) $ m_{\,l}^2 \!<< m_i^2 \! < M_W^2 \! < m_j^2 $, c) $ m_{\,l}^2 \!<< m_i^2 \! < m_j^2 \! < M_W^2 $ may occur. The possibility a) is shared by the 2HDM, when $m_{H^\pm} \gtap M_W$ and $m_{H^\pm} \neq m_t$, while the remaining ones are typical of the MSSM (lightest chargino, stop and lightest neutralino can be well below $M_W$).
Thus, the approximation to two scales acceptable in the SM, and (possibly) in the 2HDM, may not be equally good in some portions of the 3-dimensional parameter space of the MSSM. We shall make nevertheless this approximation. As in ref. [@US], we shall neglect renormalization effects for the neutralino contribution, see next section, since neutralinos can indeed be rather light. Even this strategy, however, may not be fully adequate, since down squarks, to which neutralinos are coupled, are quite heavy. Nevertheless, the procedure chosen to treat this contribution, given its smallness, is not likely to represent the main source of uncertainty related to the identification of all scales to $M_W$, which comes, presumably from the chargino contribution.
Finally, we shall calculate the coefficient $C_7(M_W)$ “relatively” to the SM, i.e. choosing a particular value for $m_t$ and imposing the SM relations $\vert K_{cb}\vert = \vert K_{ts}\vert $ and $\vert K_{tb}\vert = \vert K_{cs}\vert = 1$ while scanning the 3-dimensional parameter space $({\tan \beta},m,M)$, and we shall rely on the form of the coefficient $C_7(\mu)\vert_\MSSM$ as given in (\[corr1\]), evaluated at an average value of $\mu$, $\mu\sim 5\GeV$. A quantitative check of the differences one may obtain by using (\[corr2\]) will also be made. The supersymmetric spectrum of masses and couplings, obtained as described in the previous section, is used for the calculation of $C_7(M_W)$.
Moreover, assuming that the precision of the MSSM calculation is the same as the SM one, we shall exclude the regions of this 3-dimensional parameter space which give $\BR({b\to s \gamma})\vert_\MSSM$ outside the range of uncertainty of the SM result, for the same value of $m_t$ and of the CKM elements. Since the dependence on the unknown parameter $\mu$ is the most severe of all uncertainties of the SM calculation, we shall choose the range obtained when varying the scale $\mu$ from $2.5$ to $10\GeV$ as the range of the allowed values of $\BR({b\to s \gamma})\vert_\MSSM$. Thus, for $m_t=150\GeV$, we shall retain only the regions of parameter space where [^5]: $$2.9\times 10^{-4} < \BR({b\to s \gamma};\mu = 5\GeV)\vert_\MSSM < 4.8\times 10^{-4}\ . \ \[allowed\]$$ Shifts of about $10\%$ in the position of this interval can be obtained when values of $\as(M_W)$ different from the one which can be obtained from choice (\[input\]) are used ($\as(m_b)$ is fixed). The choice made here on the one hand maximizes the effects of QCD corrections to the ${b\to s \gamma}$ decay, and on the other allows one to reach down to lower values of the supersymmetric masses. For the final value of $\BR({b\to s \gamma})$, the two effects tend to compensate. We do not observe significant changes in the resulting exclusion patterns. In view of the inevitable comparison between the exclusion patterns obtained in the 2HDM and the MSSM, which will arise in the next section, we observe here that, for the same values of $m_t$, $m_{H^\pm}$ and ${\tan \beta}$, (\[coeffsusy\]) and (\[coeffhigg\]) are related by: $$C_7(M_W)\vert_\MSSM \equiv C_7(M_W)\vert_{\rm 2HDM} + \Delta C_7^\prime(M_W)$$ with obvious definition of the symbol $\Delta C_7^\prime(M_W)$. Similar expressions can be written for the coefficients $C_8(M_W)\vert_{\rm 2HDM}$ and $C_8(M_W)\vert_\MSSM$. Widths and branching ratios in the two models are related according to: $$\begin{aligned}
\Gamma({b\to s \gamma})\vert_\MSSM & = & \Gamma({b\to s \gamma})\vert_{\rm 2HDM} + \Delta \Gamma^{\,\prime} \nonumber\\
\BR({b\to s \gamma})\vert_\MSSM & = & \BR({b\to s \gamma})\vert_{\rm 2HDM} + \Delta \BR^{\,\prime} \ \[brtwohiggs\]\end{aligned}$$ with $\Delta \Gamma^{\,\prime}$ given by: $$\Delta \Gamma^{\,\prime} \;\propto\; \left\{ \bigl(\Delta C_7^{\prime}(M_W)\bigr)^{2} + 2\, \Delta C_7^{\prime}(M_W)\; C_7(M_W)\vert_{\rm 2HDM} \right\}$$ and $\Delta B\!R^{\,\prime}$ obtained from $\Delta \Gamma^{\,\prime}$ as in (\[branch\]). The claims made in refs. [@HEWETT; @BARGER] correspond to the limit $\Delta C_7^\prime(M_W) \sim 0 $, i.e. they apply, more pertinently, to the 2HDM.
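A minimal Python sketch of this selection criterion, using the window quoted in (\[allowed\]) for $m_t=150\GeV$ and the additive decomposition of (\[brtwohiggs\]), is given below; the numerical values are illustrative only.

```python
# Simple filter corresponding to condition (allowed): keep only points whose
# MSSM branching ratio falls inside the window spanned by the SM scale
# uncertainty (mu = 2.5 ... 10 GeV) quoted in the text for m_t = 150 GeV.

BR_MIN, BR_MAX = 2.9e-4, 4.8e-4

def satisfies_allowed_window(br_mssm):
    return BR_MIN < br_mssm < BR_MAX

# Decomposition of (brtwohiggs): MSSM value = 2HDM value + supersymmetric shift
# (illustrative numbers only).
br_2hdm, delta_br = 4.1e-4, -0.9e-4
br_mssm = br_2hdm + delta_br
print(br_mssm, satisfies_allowed_window(br_mssm))
```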
Results
=======
Amplitude and Branching Ratio
-----------------------------
We present in this section a discussion of the numerical results for $\BR({b\to s \gamma})\vert_\MSSM$ obtained in this analysis and of the consequences that a possible measurement of the branching ratio for this decay compatible with the SM prediction can have for searches of supersymmetric particles. As already done in the previous section, we shall limit ourselves to one specific value of the top quark mass, $m_t=150\GeV$ since completely similar features are obtained for different values of $m_t$.
We plot in fig. \[branrat\] the values obtained for the branching ratio $\BR({b\to s \gamma})\vert_\MSSM$, as a function of $m$, for the points of the parameter space $(m,M)$ remaining after the lower limits (\[expbounds\]) are implemented. It is understood in this section that the bounds (\[expbounds\]) are always imposed and that the allowed points of the $(m,M)$ parameter space, for fixed ${\tan \beta}$ are only the dark points of fig. \[parspace\]. The degeneracy observed for fixed $m$, in fig. \[branrat\] corresponds to the dependence on $M$. The horizontal straight line indicates $\BR({b\to s \gamma})\vert_{\rm SM}$, i.e. the value taken by the branching ratio in the SM for the same values of parameters used to calculate $\BR({b\to s \gamma})\vert_\MSSM$. We observe that, in spite of the different experimental cuts on the supersymmetric masses here applied, the result obtained for ${\tan \beta}=3$ reproduces quite closely the shape of points already obtained in ref. [@US] for $m_t=130\GeV$ and ${\tan \beta}=2$, while large enhancements are obtained for larger values of ${\tan \beta}$.
In order to understand the origin of these enhancements, we consider the ratios of the different supersymmetric contributions to the total amplitude over the SM amplitude, in the same spirit as already done in [@US]. We call these ratios $R_i$. They are defined as:\
$$R_i \;=\; \frac{\eta^{-\frac{16}{23}}\,{\cal A}^{\gamma}_{i}}{{\cal A}^{\gamma}_{\rm SM}} \ \[ratios\]$$ where the subindex $i$ indicates $H^-$, ${\widetilde}\chi^-$, ${\widetilde}g$, ${\widetilde}\chi^o$. The multiplicative factor $\eta ^{-16/23}$ in the numerator has been omitted in the neutralino contribution, as in ref. [@US].
We explicitly show in figs. \[amp03\], \[amp15\], \[amp30\] the results obtained for ${\tan \beta}= 3$, $15$ and $30$. For the remaining values of ${\tan \beta}$, ${\tan \beta}= 9$, $20$, $25$, we obtain ratios $R_i$ which are smooth transitions between the ratios plotted for two adjacent values of ${\tan \beta}$. We plot each $R_i$ versus one of the two types of masses relevant for the particular contribution considered. As before, the vertical width correspond to the degree of freedom still remaining when ${\tan \beta}$ and one other supersymmetric parameter are fixed.
Fig. \[amp03\] shows features analogous to those obtained in [@US] for $m_t=130\GeV$, ${\tan \beta}=2$. The largest possible supersymmetric contribution, in this case, is indeed given by the Higgs contribution and the sum of the different contributions leads to an overall, modest enhancement of $\BR({b\to s \gamma})\vert_\MSSM$ over $\BR({b\to s \gamma})\vert_{\rm SM}$. The ratio $R_{H^-}$ is always positive and, as shown in figs. \[amp03\], \[amp15\] and \[amp30\], is almost insensitive to the increase of ${\tan \beta}$, for ${\tan \beta}> 3$. The only effect of ${\tan \beta}$ on $R_{H^-}$ is an indirect one: since larger values of ${\tan \beta}$ provide in general lighter $H^-$ in the same 2-dimensional parameter space $(m,M)$, one observes that the smooth curve for $R_{H^-}$ stops earlier and earlier when ${\tan \beta}$ increases.
Positive and negative values of $R_i$ are obtained for all the remaining supersymmetric contributions. The new feature shown by figs. \[amp15\] and \[amp30\] is the overwhelming predominance of the chargino contribution, already ranging between -0.8 and 1 for ${\tan \beta}= 9$ and exceeding the Higgs contribution by one order of magnitude for ${\tan \beta}$ between 20 and 25. Nevertheless, as shown by the well-spread distribution of $R_{{{\widetilde}\chi}^-}$ above and below zero, it should be clear that there exist regions of the parameter space $(m,M)$ where $\vert R_{{{\widetilde}\chi}^-}\vert < R_{H^-}$ even for large values of ${\tan \beta}$, as well as regions where $\vert R_{{{\widetilde}\chi}^-}\vert$ exceeds $R_{H^-}$ for ${\tan \beta}=3$. The rapid increase of $ \vert R_{{{\widetilde}\chi}^-} \vert $ for increasing ${\tan \beta}$ is due to the fact that light enough ${{\widetilde}u}_1$ can be found, in these cases, in coincidence with small values of $m_{{{\widetilde}\chi_2}^-}$, as can be seen by comparing figs. \[cncharg\] and \[cnstop\]. In particular, quite visible in these two figures are regions where $m_{{{\widetilde}\chi_2}^-} $ and $m_{{{\widetilde}u}_1} $ have the minimum allowed value of $45\GeV$.
A more modest, but still sizable increase is observed for the ratio $\vert R_{{\widetilde}g} \vert$. This is due, in average, to lighter squarks ${{{\widetilde}d_1}}$ (see fig. \[cnstop\]) and to larger mixing parameters ${{\widetilde}g}$-${{\widetilde}d_i}$-${{\widetilde}d_j}~(i\neq j)$ obtained for larger values of ${\tan \beta}$. The same features, lighter ${{{\widetilde}d_1}}$ and larger mixings ${{\widetilde}\chi_1}^0$-${{\widetilde}d_i}$-${{\widetilde}d_j}~(i\neq j)$, together with the possibility of very light neutralinos ($\sim 20\GeV$) are responsible for a quite rapid increase of the neutralino contribution. This increase is more reminiscent of the growth of the chargino contribution (about a factor $20$ when going from ${\tan \beta}= 3$ to ${\tan \beta}=30$) than the growth of the gluino contribution (only a factor 5) and is clearly related to the possibility of having small $\vert \mu_R\vert$ for large values of ${\tan \beta}$. It is not big enough, however, to beat the initial disadvantage of this contribution and to make it reach the value of Higgs and SM contributions.
One last comment has to be made on the relative sign of the different ratios $R_i$. While $R_H$ is always positive, the signs of $R_{{{\widetilde}\chi}^-}$, $R_{{\widetilde}g}$ and $R_{{{\widetilde}\chi}^0}$ for each point of the supersymmetric parameter space depend on several elements, such as the sign of $M$ and $\mu$, the relative size and sign of gaugino and higgsino components for chargino and neutralino contributions, and of the two components mediating for each contribution the helicity flip of the process. Due to the complex interplay among these elements, it is clear that all contributions, even if subleading, can become important when restrictions on the allowed values of $\BR({b\to s \gamma})\vert_\MSSM$ are imposed. Only the neutralino contribution plays a minor rôle in the determination of the consequences of such a restriction.
Before starting this discussion, a few more points have to be clarified. The possibility of large enhancement/suppression factors for $\BR({b\to s \gamma})$ in the case of relatively large values of $\tan\beta$ should alert one to check whether similar results occur for other rare processes. In this circumstance, one should make sure that the regions of supersymmetric parameter space responsible for the strongest deviations from the prediction for $\BR({b\to s \gamma})\vert_{\rm SM}$ are not already excluded.
The decay modes $b\to s l^+ l^-$, $b\to s \nu \bar{\nu}$, $B_s \to \tau^+ \tau^-$, studied in [@BBOOK] and [@US] are still far enough from being experimentally detected to pose a real threat. Surprises may in principle come from $B^0-\bar B^0$ oscillations. We have explicitly checked that the supersymmetric contributions to the $B^0_d-\bar B^0_d$ mixing, for $m_t=150\GeV$, yield deviations from the SM prediction of about $10 \%$ and at most $15 \%$ in very limited regions of the supersymmetric parameter space. These results confirm even for large values of ${\tan \beta}$ what was already observed in [@US] for $m_t=130\GeV$ and $\tan\beta=2,8$. They also confirm other claims made in the past, i.e. that supersymmetric contributions to FCNC box diagrams yield results smaller than those obtained in the SM, even for values of supersymmetric masses and couplings which allow sizable enhancements of the decay ${b\to s \gamma}$ [@USOLD].
Given the results obtained for ${b\to s \gamma}$, one would expect even more spectacular supersymmetric enhancements for the decay $b\to s g $, with an on-shell gluon. This decay mode contributes together with $b \to s q \bar{q}$ and $b \to s g g $ to the so-called $b \to s ``g''$, which leads to the observation of non-leptonic charmless $B$-decays. One would need enhancements of about two orders of magnitude to have $\Gamma(b \to s g)$ comparable to the SM value of $\Gamma(b \to s q \bar{q})$. Thanks to the presence of the additional coupling ${{\widetilde}g}$-${{\widetilde}g}$-$g$, this requirement may not be so hard to meet, especially for the large values of ${\tan \beta}$ considered here. In addition, since (smaller) enhancements to the decay mode $b \to s q \bar{q}$ are also possible, one may think that deviations from the SM predictions for non-leptonic, charmless $B$-decays might not be completely negligible. A calculation which consistently puts together all these contributions, however, is not available, as yet. Moreover, any comparison with experimental results would have to rely on the use of some specific hadronization model.
We leave this check aside, but we do consider a related issue, i.e. the possibility that large enhancements of the decay $b \to s g$ may bring substantial differences in $\BR({b\to s \gamma})\vert_\MSSM$ if the form (\[corr2\]) of the coefficient $C_7(\mu)$ is used. We obtain changes in the patterns of points shown in the previous figures never exceeding the 10% level, even in the regions of parameter space where the most optimistic enhancements of $b \to s g$ are found.
Exclusion Patterns
------------------
We come now to study the consequences that the restriction (\[allowed\]) to the allowed values of $\BR({b\to s \gamma})\vert_\MSSM$ enforces on the supersymmetric parameter space. Given the wide spread of points above and below the central value for $\BR({b\to s \gamma})\vert_{\rm SM}$ shown in fig. \[branrat\], one expects these restrictions to be rather drastic and more severe for the largest values of ${\tan \beta}$, where the biggest enhancements are present. Except for ${\tan \beta}=3$, where still about 50% of the original points remain after condition (\[allowed\]) is imposed, only fractions ranging from 24% to 18% survive for ${\tan \beta}\geq 15$. Thus, the interesting question to be answered is which ranges of supersymmetric masses are left in the points of the original parameter space which pass the test (\[allowed\]).
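The bookkeeping behind these fractions amounts to the following short Python sketch (with made-up branching ratios), which simply counts the scanned points whose $\BR({b\to s \gamma})\vert_\MSSM$ falls inside the window (\[allowed\]).

```python
# Sketch: fraction of scanned (m, M) points surviving requirement (allowed),
# given a list of branching ratios computed point by point (illustrative data).

def surviving_fraction(branching_ratios, lo=2.9e-4, hi=4.8e-4):
    kept = sum(1 for br in branching_ratios if lo < br < hi)
    return kept / len(branching_ratios) if branching_ratios else 0.0

print(surviving_fraction([3.2e-4, 5.6e-4, 1.1e-4, 4.0e-4, 3.0e-4]))  # 0.6
```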
Before tackling this issue, we plot the values which $\BR({b\to s \gamma})\vert_{\rm 2HDM}$ (implicitly defined in (\[brtwohiggs\])) has in these remaining points, i.e. the values which $\BR({b\to s \gamma})\vert_\MSSM$ would assume if all the supersymmetric contributions were neglected. The results, plotted versus $m_{H^+}$, are shown in fig. \[brred2h2h\]. Only in the case ${\tan \beta}=3$ has the range of points shown in this figure been reduced, in order to avoid the extremely large values of $m_{H^-}$ which would be reached (up to $900\GeV$). The curves obtained (including the portion not shown for ${\tan \beta}=3$) are all above the SM result (the solid line) and cross the region (\[allowed\]), delimited by dashed lines, for $m_{H^-}$ between $450$ and $500\GeV$ depending on the particular value of ${\tan \beta}$. We observe that values between $100$ and $150\GeV$ for $m_{H^-}$ are still allowed, while they would give too large branching ratios in a 2HDM not embedded in the proper supersymmetric scenario.
Thus, the possibility for the decay $t \to b H^+$ is still open, although all the points of the $(m,M)$-parameter space where $\Delta C_7^\prime (M_W),\,\Delta \BR^\prime\sim 0$ and where $H^\pm$ is not prohibitively heavy are to be excluded. This possibility is not restricted to the value $m_t=150\GeV$, to which the previous figures refer, but it is, in general, shared by all the reasonably large values of $m_t$ allowed by the model. For comparison, we show in fig. \[rsthb\] what remains of the phase space shown in fig. \[cnthb\] after the condition (\[allowed\]) is imposed. We recall that this figure, together with fig. \[cnthb\], is obtained by scanning the $(m,M)$-plane with a much finer grid than the one used for all the remaining plots. This explains why the points shown here for small values of ${\tan \beta}$, typically ${\tan \beta}=3$, although rather sparse, seem nevertheless more numerous than those shown in fig. \[brred2h2h\]. Wider ranges of $m_{H^-}$ remain for the large values of ${\tan \beta}$, where the restrictions on the $(m,M)$ parameter space are stronger, since the light ${H^-}$ are, in this case, found in wider portions of the plane $(m,M)$.
Since the chargino contribution is by far the dominant one, in large regions of the supersymmetric parameter space here considered, one should focus on this contribution and check whether sizable regions of the parameter space explorable at LEP II are not already excluded by (\[allowed\]). To this aim, we plot the points of fig. \[branrat\] in the plane $(\mu_R,M_2)$ as shown in fig. \[lepexcl\]. We distinguish the regions allowed and forbidden by (\[allowed\]) by making use of faint and thick points, respectively. As can be seen by comparing this figure with fig. \[mumtwo\], the white areas denote the regions where no realizations of the MSSM are possible and the regions excluded by the limits (\[expbounds\]).
Superimposed are the contour lines $m_{{{\widetilde}\chi_2}^-} = 45\GeV$ (dashed lines) and $m_{{{\widetilde}\chi_2}^-} = 90\GeV$ (solid lines), corresponding to regions probed at LEP I and explorable at LEP II, respectively (see for comparison [@GRIVAZ]). We observe that the reduction of parameter space is rather significant. We remind that not all the physical points compatible with the lower limits (\[expbounds\]) are plotted in figure \[lepexcl\]. Points surviving condition (\[allowed\]) in the branch with negative $M_2$ and negative $\mu_R$ are present for values of $\mu_R$ not shown in this figure. For all values of ${\tan \beta}$ considered here, we always find points where the lightest chargino can have masses compatible with the experimental cut applied in this analysis, $m_{{{\widetilde}\chi_2}^-} \sim 45 \GeV$, among the points remaining after imposing condition (\[allowed\]). This can be more explicitly seen in fig. \[chglexcl\] where the same points are plotted in the plane $(m_{{{\widetilde}\chi_2}^-},m_{{\widetilde}g})$. Although the regions excluded by condition (\[allowed\]) are quite wide, the chargino searches at LEP II do not appear to be threatened by a measurement of ${b\to s \gamma}$ at the SM level.
Since the situation $\Delta \BR^\prime >> \BR({b\to s \gamma})\vert_{\rm SM,2HDM}$, due to large chargino contributions, is observed in non-trivial portions of the supersymmetric parameter space, there remains the possibility that the lightest eigenvalue of the up-squark mass matrix ${{\widetilde}u}_1$, the stop mass, is the mass most seriously affected by the restriction (\[allowed\]). To explore this possibility, we plot the points of fig. \[branrat\] in the plane $(m_{{{\widetilde}u}_1},m_{{\widetilde}g})$, as shown in fig. \[stopglexcl\]. Again, the thick and faint points denote the regions allowed and forbidden by (\[allowed\]). We observe that the values of $m_{{\widetilde}u_1}$ in the “allowed” points are never below $70-80\GeV$. The regions of parameter space where $m_{{\widetilde}u_1}$ and $m_{{{\widetilde}\chi }_2^-} $ are simultaneously very small ($\sim 45\GeV$), responsible for the conspicuous enhancements observed, are removed by condition (\[allowed\]). Given the distribution of masses observed in figs. \[cncharg\] and \[cnstop\], however, light enough ${{{\widetilde}\chi_2}^-}$ can still be present in coincidence with quite heavy ${{\widetilde}u}_1$.
Only discrete values of ${\tan \beta}$ have been considered here. Therefore, the possibility of light ${{\widetilde}u}_1$ escaping our search still exists, in principle. Nevertheless, since our choice of values should give us an almost complete overview on this parameter, we consider this possibility rather remote.
Summarizing, all points of the $(m,M)$-plane where $\Delta \BR \sim 0$, corresponding to the situation $\Delta C_7(M_W) \sim 0$, the pure 2HDM case, or $\Delta C_7(M_W) \sim-2 C_{\rm 2HDM}$, as well as the points with $\Delta \BR >> \BR({b\to s \gamma})\vert_{\rm SM,2HDM}$ have to be excluded. The possibility that substantial regions of the supersymmetric parameter space remain, relies on the strong destructive interferences among [*all*]{} supersymmetric contributions, and on the rather large uncertainty of the SM prediction.
The typical situation observed in the regions of parameter space surviving condition (\[allowed\]) is that negative contributions $R_{{{\widetilde}\chi}^-} + R_{{\widetilde}g}$ are obtained, matching in size $R_H$. The importance of all supersymmetric contributions, in particular even of the more modest gluino contribution, is shown explicitly in fig. \[amp30smrs\] for the case ${\tan \beta}= 30$. We plot in this figure the ratios $R_i$ for the points remaining after condition (\[allowed\]) is imposed. We observe how significant a role the small value $R_{{\widetilde}g}\sim .2$ plays for the survival of a region where the chargino contribution is still rather large, $R_{{{\widetilde}\chi }^-}\sim -2.5$. Similar interesting interplays of different contributions are observed for all values of ${\tan \beta}$ considered. Only the neutralino contribution has little influence on the exclusion patterns induced by (\[allowed\]).
Values of $\as(M_W)$ different from the one used for the determination of the interval (\[allowed\]) are not likely to change the qualitative features observed here. In contrast, conditions more restrictive than (\[allowed\]) may exclude wider regions of the supersymmetric parameter space. These conditions, though, should be imposed on calculations which could claim a precision higher than the one so far achieved.
Conclusions
===========
We have analyzed the supersymmetric contribution to ${b\to s \gamma}$ within the framework of the MSSM with radiative breaking of $SU(2)\times U(1)$. Wide regions of supersymmetric parameter space have been surveyed. Results have been explicitly reported only for $m_t=150\,$GeV and different values of ${\tan \beta}$, in a range wide enough to allow a fairly complete overview of the situations one encounters when considering ratios of top and bottom yukawa couplings from $h_t/h_b>>1$ to $h_t/h_b \sim 1$.
The modest enhancements over the SM prediction obtained for relatively large ratios $h_t/h_b$ ($m_t = 130\,$GeV, ${\tan \beta}= 2,6$) in ref. [@US] are confirmed, whereas substantial deviations from the SM prediction are found for large values of ${\tan \beta}$, due, primarily, to conspicuous chargino contributions.
As pointed out by several authors, a measurement of the branching ratio for this decay at the SM level can exclude relevant portions of the supersymmetric parameter space. The still sizable uncertainty plaguing the theoretical determination of $\BR({b\to s \gamma})\vert_{\rm SM}$, however, and the destructive interferences among the different supersymmetric contributions produce rather complicated exclusion patterns. The overall effect is that no clear-cut bounds on chargino and charged Higgs masses are obtained, whereas an increase in the degeneracy of the up-squark mass matrix is enough to reduce the large chargino contributions and bring $\BR({b\to s \gamma})\vert_{\rm MSSM}$ down to the accepted values (\[allowed\]).
Stronger constraints could presumably be obtained if the error on the theoretical SM and MSSM predictions were reduced. To begin with, some effort should be put into refining the level of precision of the SM prediction, by calculating the NLO QCD corrections to the decay rate. Were more accurate SM results available, one would have to worry about matching the precision of the MSSM calculation to the SM level.
Besides the complex evaluation of NLO QCD corrections within the MSSM framework, one should also attempt a refinement of the determinations of the physical realizations of the MSSM. The first, obvious improvement with respect to the calculation here presented is the inclusion of radiative corrections to the tree-level Higgs potential. Moreover, the underlying grand-unification structure of the MSSM model should be better specified. Issues related to the yukawa couplings unification (as for example the accuracy at which these unification conditions have to be imposed) are to be settled. Needed are also evaluations of the uncertainties induced on the low-energy supersymmetric mass spectrum by approximations made while running down from the Planck scale $M_P$ to the electroweak scale $M_Z$, such as the neglect of the heavy particles required by the grand-unification gauge group, as well as the neglect of non-renormalizable terms still sizable at the Planck scale. Finally, once more control on these aspects is reached, and more indications about the underlying grand-unified structure are obtained, the constraints coming from the lack of observation of nucleon decays should also be implemented.
3ex [*Note Added:*]{} After the completion of this work we became aware of the existence of refs. [@LAST; @OKADA]. The first one confirms the results of ref. [@OSHIMO] for the chargino contribution, points to the light stop mass as the culprit of the large enhancements obtained for large values of ${\tan \beta}$ and as the parameter likely to be affected by measurements of the decay ${b\to s \gamma}$ at the SM level. Different conclusions are reached in [@OKADA], where it is argued that very light stops are still possible, in contrast with the results of ref. [@LAST] and the ones obtained in the present analysis.
3ex [**Acknowledgements**]{} The author acknowledges discussions with A. Ali, J.F. Grivaz, H. Haber, R. Hempfling, A. Masiero, M. Peskin, S. Pokorski and P. Zerwas. She is also grateful to C. Vohwinkel for tips on handling and storing the numerous and huge data files needed for this analysis.
[**Appendix: Errata**]{} In this appendix we list errata for refs. [@US] and [@IO].
[**ref. [@US]:**]{}
$\bullet$ The symbols $H_1^0$, $H_2^0$, reserved for the two CP-even neutral Higgs fields, should be replaced in eq. (8) by $H_1^1$, $H_2^2$, respectively the first component of doublet $H_1$ and the second component of doublet $H_2$. Following the notation of ref. [@GUNHAB], adopted in ([@US]) as well as in the present paper, these two components are related to the physical fields $H_1^0$, $H_2^0$ and $H_3^0$ according to: H\_1\^1 & = & v\_1 + ( H\_1\^0 -H\_2\^0 + i H\_3\^0 )\
H\_2\^2 & = & v\_2 + ( H\_1\^0 +H\_2\^0 + i H\_3\^0 ).
$\bullet$ The equation (A.1) in Appendix A should read &=& (\_3 M\_3\^2 + 3\_2 M\_2\^2 + \_1 M\_1\^2)\
& & - \
& & - . The spurious factor 2 multiplying the term $(1/9){{\widetilde}{\alpha}_1}M_1^2$ in the first bracket on the left-hand side of (A.1) is clearly a misprint and was not included in the RG equations used in the actual calculations.
[**ref. [@IO]:**]{}
$\bullet$ The $2\times 2$ mass matrices in eqs. (54) and (55) should read: $$M_{{\widetilde}t}^2 = \left( \begin{array}{cc}
m_{{\widetilde}{Q}_{33}}^2 +m_t^2 -\vert DT_U^L\vert \ \ &
\left(A_t m +\mu_{\rm R} \cot \beta \right) m_t {\nonumber}\\[1.5ex]
\!\! \left(A_t m + \mu_{\rm R} \cot \beta \right) m_t &
\ m_{{\widetilde}{U}_{33}}^2 +m_t^2 -\vert DT_U^R\vert \
\end{array} \right)\phantom{\ .} {\nonumber}\label{stopmatrix}$$ $$M_{{\widetilde}b}^2 = \left( \begin{array}{cc}
m_{{\widetilde}{Q}_{33}}^2 +m_b^2 +\vert DT_D^L\vert \ \ &
\left(A_t m +\mu_{\rm R} \ {\tan \beta}\right) m_b {\nonumber}\\[1.5ex]
\!\! \left(A_t m +\mu_{\rm R} \ {\tan \beta}\right) m_b &
\ m_{{\widetilde}{D}_{33}}^2 +m_b^2 +\vert DT_D^R\vert \
\end{array} \right) \ . {\nonumber}\label{sbotmatrix}$$
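For illustration, the corrected stop mass matrix of eq. (54) can be built and diagonalized with the Python sketch below; the soft masses, $A_t m$, $\mu_{\rm R}$ and the D-terms are arbitrary example inputs, not values taken from the analysis.

```python
import math

# Sketch: build and diagonalize the corrected 2x2 stop mass matrix of eq. (54).
# All inputs below are illustrative example numbers in GeV.

def stop_masses(m_q33, m_u33, m_t, a_t_m, mu_r, tan_beta, d_l=0.0, d_r=0.0):
    m11 = m_q33**2 + m_t**2 - abs(d_l)
    m22 = m_u33**2 + m_t**2 - abs(d_r)
    m12 = (a_t_m + mu_r / tan_beta) * m_t         # cot(beta) = 1/tan(beta)
    avg = 0.5 * (m11 + m22)
    rad = math.hypot(0.5 * (m11 - m22), m12)
    return math.sqrt(avg - rad), math.sqrt(avg + rad)   # (lighter, heavier) stop

print(stop_masses(m_q33=300.0, m_u33=280.0, m_t=150.0,
                  a_t_m=200.0, mu_r=-150.0, tan_beta=15.0))
```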
[99]{} E. Thorndike et al., CLEO Collaboration, CLEO preprint CLN 93/1212, CLEO 93-06.
S. Bertolini, F. Borzumati and A. Masiero, ;\
N.G. Deshpande, P. Lo, J. Trampetic, G. Eilam and P. Singer, .
B. Grinstein, R. Springer and M.B. Wise, ; .
R. Grinjanis, P.J. O’Donnell, M. Sutherland and H. Navelet, , (E) ;\
G. Cella, G. Curci, G. Ricciardi and A. Viceré, ;\
A. Ali and C. Greub, ; .
P. Cho and B. Grinstein, .
M. Misiak, and .
A. Ali and C. Greub, DESY preprint DESY 93-065, ZU-TH 11/93, (1993).
J.L. Hewett, .
V. Barger, M.S. Berger and R.J. Phillips, .
M.A. Diaz, .
N. Oshimo, .
T. Hayashi, M. Matsuda and M. Tanimoto, Kogakkan University preprint, KU-01-93, AUE-01-93, EHU-01-93 (1993).
R. Barbieri and G.F. Giudice, .
J.L. Lopez, D. Nanopoulos, G.T. Park, .
S. Bertolini, F. Borzumati, A. Masiero and G. Ridolfi, .
S. Bertolini, F. Borzumati and A. Masiero in [*“B Decays”*]{}, S. Stone (ed.), World Scientific (Singapore), 1992.
F. Borzumati, in [*“Phenomenological Aspects of Supersymmetry”*]{}, W. Hollik, R. Rückl and J. Wess (eds.), Springer Lectures Notes in Physics, Springer Verlag (Heidelberg), 1992.
P. Langacker and N. Polonsky, .
J.F. Gunion, H.E. Haber and M. Sher, .
S. Bethke, Heidelberg University preprint, HD-PY-92-13, (1992), Talk given at 26th International Conference on High Energy Physics (ICHEP 92), Dallas, TX, 6-12 Aug 1992.
Particle Data Group, [*Phys. Rev.*]{} 45 (1993) Part II.
V. Barger, M.S. Berger and P. Ohmann, ;\
M. Carena, S. Pokorski and C.E.M. Wagner, Max-Planck Inst. preprint, MPI-Ph/93-10 (1993);\
P. Langacker and N. Polonsky, University of Pennsylvania preprint, UPR-0556-T (1993).
M. Olechowski and S. Pokorski, .
F. Abe, et al., CDF Collaboration, .
R. Keraenen, DELPHI internal report, DELPHI 92-172 PHYS 255 (1993).
A. Sopczak, CERN-PPE/92-137 (1992).
S. Bertolini, F. Borzumati and A. Masiero, ;\
S. Bertolini, F. Borzumati and A. Masiero, ;\
S. Bertolini, F. Borzumati and A. Masiero, , (E) .
J.-F. Grivaz, in [*“Physics and Experiments with Linear Colliders, Saariselkä, Finland, 9-14 September 1991”*]{}, vol. I, R. Orava, P. Eerola and M. Nordberg (eds.), World Scientific (Singapore), 1992.
R. Garisto and J.N. NG, TRIUMF preprint, TRI-PP-93-66 (1993).
Y. Okada, KEK Preprint 93-68, KEK-TH-365 (1993).
J.F. Gunion and H.E. Haber, .
[^1]: Supported by the Bundesministerium für Forschung und Technologie, 05 5HH 91P(8), Bonn, FRG.
[^2]: Two types of two-Higgs-doublet-models are known, the so-called type I and type II, with different couplings of the Higgs bosons to fermions. By the acronym 2HDM we indicate throughout all this paper the type II model, with Higgs bosons coupling to fermions as in the MSSM.
[^3]: There exist estimates of the uncertainties introduced on the gauge couplings unification [@LANGPOL] by non-renormalizable terms and threshold effects for the heavy particles. These are found to be small.
[^4]: The calculation in [@ALIGR] differs from ours since it also includes in the evaluation of $\BR({b\to s \gamma})\vert_{\rm SM}$ the QCD bremsstrahlung process ${b\to s \gamma}g$. Nevertheless, our results can be easily compared with those in [@ALIGR] since this additional process accounts for an extra factor $K(\mu)$ on the left-hand side of (\[width\]). The $\mu$ dependence of $K(\mu)$ is completely negligible and therefore one should be able to compare the uncertainty related to the scale dependence which we shall claim later on with the one obtained in [@ALIGR].
[^5]: This interval of allowed values of $\BR({b\to s \gamma})\vert_{\rm SM}$, roughly corresponds to the one obtained in ref. [@ALIGR], when the factor $K(\mu)$ is removed.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'In this paper, we obtain some inequalities by using a kernel and an inequality which is a result of the Young inequality. Besides, we give some applications to special means.'
address:
- '$^{\blacklozenge }$Atatürk University, K.K. Education Faculty, Department of Mathematics, Erzurum 25240, Turkey'
- '$^{\spadesuit }$Ağri İbrahim Çeçen University, Faculty of Education, Department of Mathematics, Ağri 04100, Turkey'
- '$^{\clubsuit }$Kilis 7 Aralik University, Faculty of Science and Arts, Department of Mathematics, Kilis 79100, Turkey'
author:
- 'M.Emin Özdemir$^{\blacklozenge }$'
- 'Mustafa Gürbüz$^{\spadesuit }$'
- 'Mevlüt Tunç$^{\clubsuit }$'
date: 'November 30, 2012'
title: New Estimates on Integral Inequalities and Their Applications
---
[^1]
Introduction
============
Let $f:I\subseteq \mathbb{R} \rightarrow \mathbb{R}$ be a convex function on the interval $I$ of real numbers and $a,b\in I$ with $a<b$. The inequality$$f\left( \frac{a+b}{2}\right) \leq \frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\leq \frac{f\left( a\right) +f\left( b\right) }{2} \label{hh}$$is well known in the literature as Hermite-Hadamard's inequality for convex functions [@SC].
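A quick numerical sanity check of (\[hh\]) for a simple convex function, using Python and a midpoint-rule integral, is given below; it is only an illustration, not part of the argument.

```python
# Numerical check of the Hermite-Hadamard inequality (hh) for f(x) = x**2 on [0, 2].

def average_integral(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h / (b - a)

f = lambda x: x ** 2
a, b = 0.0, 2.0
left = f((a + b) / 2)               # 1.0
middle = average_integral(f, a, b)  # ~4/3
right = (f(a) + f(b)) / 2           # 2.0
print(left <= middle <= right, left, middle, right)
```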
In [@SR], Dragomir and Agarwal proved one lemma and some Hadamard's type inequalities for convex functions as follows:
\[l\]Let $f:I^{\circ }\subseteq \mathbb{R} \rightarrow \mathbb{R} $ be a differentiable mapping on $I^{\circ },$ $a,b\in I^{\circ }$ with $a<b. $ If $f^{\prime }\in L\left[ a,b\right] ,$ then the following equality holds:$$\frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx=\frac{b-a}{2}\int_{0}^{1}\left( 1-2t\right) f^{\prime }\left( ta+\left( 1-t\right) b\right) dt.$$
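For completeness, we note that the identity in Lemma \[l\] follows from a single integration by parts together with the substitution $x=ta+\left( 1-t\right) b$:$$\int_{0}^{1}\left( 1-2t\right) f^{\prime }\left( ta+\left( 1-t\right) b\right) dt=\left[ \frac{\left( 1-2t\right) f\left( ta+\left( 1-t\right) b\right) }{a-b}\right] _{0}^{1}+\frac{2}{a-b}\int_{0}^{1}f\left( ta+\left( 1-t\right) b\right) dt=\frac{f\left( a\right) +f\left( b\right) }{b-a}-\frac{2}{\left( b-a\right) ^{2}}\int_{a}^{b}f\left( x\right) dx,$$and multiplying both sides by $\frac{b-a}{2}$ gives the stated equality.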
Let $f:I^{\circ }\subseteq \mathbb{R} \rightarrow \mathbb{R} $ be a differentiable mapping on $I^{\circ },$ $a,b\in I^{\circ }$ with $a<b. $ If $\left\vert f^{\prime }\right\vert $ is convex on $\left[ a,b\right] ,$ then the following inequality holds:$$\left\vert \frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq \frac{\left( b-a\right) \left( \left\vert f^{\prime }\left( a\right) \right\vert +\left\vert f^{\prime }\left( b\right) \right\vert \right) }{8}.$$
Let $f:I^{\circ }\subseteq \mathbb{R} \rightarrow \mathbb{R} $ be a differentiable mapping on $I^{\circ },$ $a,b\in I^{\circ }$ with $a<b, $ and let $p>1.$ If the mapping $\left\vert f^{\prime }\right\vert ^{p/\left( p-1\right) }$ is convex on $\left[ a,b\right] ,$ then the following inequality holds:$$\left\vert \frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq \frac{\left( b-a\right) }{2\left( p+1\right) ^{1/p}}\left[ \frac{\left( \left\vert f^{\prime }\left( a\right) \right\vert ^{p/\left( p-1\right) }+\left\vert f^{\prime }\left( b\right) \right\vert ^{p/\left( p-1\right) }\right) }{2}\right] ^{\left( p-1\right) /p}.$$
We recall the well-known Young’s inequality which can be stated as follows.
(**Young’s inequality**, see [@2], p. 117) If $a,b>0$ and $p,q>1$ satisfy $\frac{1}{p}+\frac{1}{q}=1,$ then$$ab\leq \frac{a^{p}}{p}+\frac{b^{q}}{q}. \label{yo}$$Equality holds if and only if $a^{p}=b^{q}.$
[@MEV] If we take $a=t^{\frac{1-p}{p^{2}}}$ and $b=t^{\frac{1}{%
pq}}$ in (\[yo\]), we have$$1\leq \frac{1}{p}t^{\frac{1}{p}-1}+\left( 1-\frac{1}{p}\right) t^{\frac{1}{p}%
} \label{hay}$$for all $t\in \left( 0,1\right) .$
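Spelling this substitution out (a quick verification of (\[hay\])): since $\frac{1}{q}=1-\frac{1}{p}$, one has$$ab=t^{\frac{1-p}{p^{2}}}\,t^{\frac{1}{pq}}=t^{\frac{1-p}{p^{2}}+\frac{p-1}{p^{2}}}=1,\qquad \frac{a^{p}}{p}=\frac{1}{p}\,t^{\frac{1}{p}-1},\qquad \frac{b^{q}}{q}=\left( 1-\frac{1}{p}\right) t^{\frac{1}{p}},$$so (\[yo\]) indeed reduces to (\[hay\]).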
Chebyshev inequality is given in the following theorem.
Let $f,g:\left[ a,b\right] \rightarrow \mathbb{R} $ be integrable functions, both increasing or both decreasing. Furthermore, let $p:\left[ a,b\right] \rightarrow \mathbb{R} _{0}^{+}$ be an integrable function. Then
$$\int_{a}^{b}p\left( x\right) dx\int_{a}^{b}p\left( x\right) f\left( x\right)
g\left( x\right) dx\geq \int_{a}^{b}p\left( x\right) f\left( x\right)
dx\int_{a}^{b}p\left( x\right) g\left( x\right) dx. \label{1.7}$$
If one of the functions $f$ and $g$ is nonincreasing and the other is nondecreasing, then the inequality in (\[1.7\]) is reversed. Inequality (\[1.7\]) is known in the literature as Chebyshev inequality and so are the following special cases of (\[1.7\]):$$\int_{a}^{b}f\left( x\right) g\left( x\right) dx\geq \frac{1}{b-a}%
\int_{a}^{b}f\left( x\right) dx\int_{a}^{b}g\left( x\right) dx \label{1.8}$$and $$\int_{0}^{1}f\left( x\right) g\left( x\right) dx\geq \int_{0}^{1}f\left(
x\right) dx\int_{0}^{1}g\left( x\right) dx. \label{1.9}$$
In the following sections our main results are given.
New Results
===========
\[t1\]Let $f:I^{\circ }\subseteq \mathbb{R} \rightarrow \mathbb{R} $ be a differentiable convex mapping on $I^{\circ },$ $a,b\in I^{\circ }$ with $a<b.$ If $f,f^{\prime }\in L\left[ a,b\right] ,$ then for $p,q>1,$ $\frac{1}{p}+\frac{1}{q}=1,$ the following inequality holds:$$\left\vert \frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq \frac{\left( b-a\right) ^{\frac{1}{p}}p}{\left( p+1\right) ^{1+\frac{1}{p}}}\left( \int_{a}^{b}\left\vert f^{\prime }\left( x\right) \right\vert ^{q}dx\right) ^{\frac{1}{q}}$$
By Lemma \[l\] in [@SR] we have$$\frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}%
\int_{a}^{b}f\left( x\right) dx=\frac{b-a}{2}\int_{0}^{1}\left( 1-2t\right)
f^{\prime }\left( ta+\left( 1-t\right) b\right) dt. \label{1}$$Since $f$ is chosen to be convex on $I^{\circ }$, Hadamard's inequality (\[hh\]) shows that both sides of the equality in Lemma \[l\] are nonnegative.
On the other hand by using Young’s inequality we have $\left( t\in \left[ 0,1%
\right] ,\text{ }p>1\right) $$$1\leq \frac{1}{p}t^{\frac{1}{p}-1}+\left( 1-\frac{1}{p}\right) t^{\frac{1}{p}%
}$$which is proved in [@MEV]. Integrating both sides of the above inequality with respect to $t$ over $\left[ 0,1\right] $ we get$$1\leq \int_{0}^{1}\left( \frac{1}{p}t^{\frac{1}{p}-1}+\left( 1-\frac{1}{p}%
\right) t^{\frac{1}{p}}\right) dt \label{2}$$
By multiplying (\[1\]) and (\[2\]) side by side we have$$\begin{aligned}
\frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}%
\int_{a}^{b}f\left( x\right) dx &\leq &\frac{b-a}{2}\int_{0}^{1}\left( \frac{%
1}{p}t^{\frac{1}{p}-1}+\left( 1-\frac{1}{p}\right) t^{\frac{1}{p}}\right) dt
\notag \\
&&\times \int_{0}^{1}\left( 1-2t\right) f^{\prime }\left( ta+\left(
1-t\right) b\right) dt \label{3}\end{aligned}$$To use Hölder's inequality we first apply the properties of absolute value:$$\begin{aligned}
\left\vert \frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}%
\int_{a}^{b}f\left( x\right) dx\right\vert &\leq &\frac{b-a}{2}%
\int_{0}^{1}\left( \frac{1}{p}t^{\frac{1}{p}-1}+\left( 1-\frac{1}{p}\right)
t^{\frac{1}{p}}\right) dt \\
&&\times \int_{0}^{1}\left\vert \left( 1-2t\right) f^{\prime }\left(
ta+\left( 1-t\right) b\right) \right\vert dt\end{aligned}$$By using Hölder’s inequality we have$$\begin{aligned}
&&\left\vert \frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}%
\int_{a}^{b}f\left( x\right) dx\right\vert \\
&\leq &\frac{b-a}{2}\int_{0}^{1}\left( \frac{1}{p}t^{\frac{1}{p}-1}+\left( 1-%
\frac{1}{p}\right) t^{\frac{1}{p}}\right) dt \\
&&\times \left( \int_{0}^{1}\left\vert 1-2t\right\vert ^{p}dt\right) ^{\frac{%
1}{p}}\left( \int_{0}^{1}\left\vert f^{\prime }\left( ta+\left( 1-t\right)
b\right) \right\vert ^{q}dt\right) ^{\frac{1}{q}} \\
&=&\frac{b-a}{2}\left( \frac{2p}{p+1}\right) \left( \int_{0}^{\frac{1}{2}%
}\left( 1-2t\right) ^{p}dt+\int_{\frac{1}{2}}^{1}\left( 2t-1\right)
^{p}dt\right) ^{\frac{1}{p}} \\
&&\times \left( \int_{0}^{1}\left\vert f^{\prime }\left( ta+\left(
1-t\right) b\right) \right\vert ^{q}dt\right) ^{\frac{1}{q}} \\
&=&\frac{b-a}{2}\left( \frac{2p}{p+1}\right) \left( \frac{1}{p+1}\right) ^{%
\frac{1}{p}}\left( \int_{0}^{1}\left\vert f^{\prime }\left( ta+\left(
1-t\right) b\right) \right\vert ^{q}dt\right) ^{\frac{1}{q}}.\end{aligned}$$By simple calculation we get the desired result.
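As a quick numerical sanity check of Theorem \[t1\] (not part of the proof; the function, interval and exponent below are illustrative choices), one may compare the two sides directly:

```python
import numpy as np

# Illustrative check of Theorem 1 with f(x) = exp(x) on [a, b] = [0, 1] and p = 3.
f = np.exp                      # convex and differentiable, with f' = exp as well
a, b, p = 0.0, 1.0, 3.0
q = p / (p - 1.0)

x = np.linspace(a, b, 200001)
lhs = abs((f(a) + f(b)) / 2.0 - np.trapz(f(x), x) / (b - a))
rhs = ((b - a) ** (1.0 / p) * p / (p + 1.0) ** (1.0 + 1.0 / p)
       * np.trapz(np.abs(np.exp(x)) ** q, x) ** (1.0 / q))

print(lhs, rhs, lhs <= rhs)     # e.g. 0.1409  0.8288  True
```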
If we choose $p=q=2$ in *Theorem \[t1\], we have*$$\left\vert \frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq \frac{2\left( b-a\right) ^{\frac{1}{2}}}{3^{\frac{3}{2}}}\left( \int_{a}^{b}\left\vert f^{\prime }\left( x\right) \right\vert ^{2}dx\right) ^{\frac{1}{2}}.$$
\[t2\]Let $f:I^{\circ }\subseteq \mathbb{R} \rightarrow \mathbb{R} $ be a differentiable convex mapping on $I^{\circ },$ $a,b\in I^{\circ }$ with $a<b.$ If $f,f^{\prime }\in L\left[ a,b\right] ,$ then for $p,q>1,$ $\frac{1}{p}+\frac{1}{q}=1,$ the following inequality holds:$$\frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\leq \frac{p\left( p-1\right) }{\left( p+1\right) \left( 2p+1\right) }\left[ f\left( b\right) -f\left( a\right) \right]$$
The same steps are followed as in Theorem \[t1\] until (\[3\]). The kernel $t\mapsto 1-2t$ is nonincreasing on $\left[ 0,1\right] $, and since $f$ is convex and differentiable, $f^{\prime }$ is nondecreasing. Using these facts we can apply the Chebyshev inequality:$$\begin{aligned}
&&\frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}%
\int_{a}^{b}f\left( x\right) dx \\
&\leq &\frac{b-a}{2}\int_{0}^{1}\left( \frac{1}{p}t^{\frac{1}{p}-1}+\left( 1-%
\frac{1}{p}\right) t^{\frac{1}{p}}\right) dt \\
&&\times \int_{0}^{1}\left( 1-2t\right) f^{\prime }\left( ta+\left(
1-t\right) b\right) dt \\
&\leq &\frac{b-a}{2}\int_{0}^{1}\left( \frac{1}{p}t^{\frac{1}{p}-1}+\left( 1-%
\frac{1}{p}\right) t^{\frac{1}{p}}\right) dt \\
&&\times \int_{0}^{1}\left( 1-2t\right) dt\int_{0}^{1}f^{\prime }\left(
ta+\left( 1-t\right) b\right) dt \\
&=&\frac{f\left( b\right) -f\left( a\right) }{2}\int_{0}^{1}\left( \frac{1}{p%
}t^{\frac{1}{p}-1}+\left( 1-\frac{1}{p}\right) t^{\frac{1}{p}}\right) dt \\
&&\times \int_{0}^{1}\left( 1-2t\right) dt\end{aligned}$$Since both of the functions $\left( \frac{1}{p}t^{\frac{1}{p}-1}+\left( 1-%
\frac{1}{p}\right) t^{\frac{1}{p}}\right) $ and $\left( 1-2t\right) $ are nonincreasing, we can use Chebyshev inequality again as:$$\frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}%
\int_{a}^{b}f\left( x\right) dx\leq \frac{f\left( b\right) -f\left( a\right)
}{2}\int_{0}^{1}\left( \frac{1}{p}t^{\frac{1}{p}-1}+\left( 1-\frac{1}{p}%
\right) t^{\frac{1}{p}}\right) \left( 1-2t\right) dt$$By simple calculation, the proof is completed.
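For the reader's convenience, the "simple calculation" referred to above amounts to the elementary evaluation$$\int_{0}^{1}\left( \frac{1}{p}t^{\frac{1}{p}-1}+\left( 1-\frac{1}{p}\right) t^{\frac{1}{p}}\right) \left( 1-2t\right) dt=\frac{p-1}{p+1}-\frac{p-1}{\left( p+1\right) \left( 2p+1\right) }=\frac{2p\left( p-1\right) }{\left( p+1\right) \left( 2p+1\right) },$$which, multiplied by $\frac{f\left( b\right) -f\left( a\right) }{2}$, gives the bound stated in Theorem \[t2\].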
If we choose $p=1.1$ in *Theorem \[t2\], we have*$$\frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\leq \frac{11}{672}\left[ f\left( b\right) -f\left( a\right) \right] .$$
\[t3\]Let $f:I^{\circ }\subseteq \mathbb{R} \rightarrow \mathbb{R} $ be a differentiable convex mapping on $I^{\circ },$ $a,b\in I^{\circ }$ with $a<b.$ If $f,f^{\prime }\in L\left[ a,b\right] ,$ then for $p,q>1,$ $\frac{1}{p}+\frac{1}{q}=1,$ the following inequality holds:$$\left\vert \frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq \frac{2^{\frac{1}{q}}p}{\left( p+1\right) \left( b-a\right) }\left( \int_{a}^{b}\left\vert x-\frac{a+b}{2}\right\vert \left\vert f^{\prime }\left( x\right) \right\vert ^{q}dx\right) ^{\frac{1}{q}}.$$
The same steps are followed as in Theorem \[t1\] until (\[3\]). Then by using convexity of $f$ and properties of absolute value we have$$\begin{aligned}
\left\vert \frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}%
\int_{a}^{b}f\left( x\right) dx\right\vert &\leq &\frac{b-a}{2}%
\int_{0}^{1}\left( \frac{1}{p}t^{\frac{1}{p}-1}+\left( 1-\frac{1}{p}\right)
t^{\frac{1}{p}}\right) dt \\
&&\times \int_{0}^{1}\left\vert \left( 1-2t\right) f^{\prime }\left(
ta+\left( 1-t\right) b\right) \right\vert dt.\end{aligned}$$By using Power-mean inequality we have$$\begin{aligned}
&&\left\vert \frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}%
\int_{a}^{b}f\left( x\right) dx\right\vert \label{m} \\
&\leq &\frac{b-a}{2}\int_{0}^{1}\left( \frac{1}{p}t^{\frac{1}{p}-1}+\left( 1-%
\frac{1}{p}\right) t^{\frac{1}{p}}\right) dt \notag \\
&&\times \left( \int_{0}^{1}\left\vert 1-2t\right\vert dt\right) ^{\frac{1}{p%
}}\left( \int_{0}^{1}\left\vert \left( 1-2t\right) \right\vert \left\vert
f^{\prime }\left( ta+\left( 1-t\right) b\right) \right\vert ^{q}dt\right) ^{%
\frac{1}{q}} \notag \\
&=&\frac{2^{\frac{1}{q}}p}{p+1}\left( \int_{0}^{1}\left\vert \left(
1-2t\right) \right\vert \left\vert f^{\prime }\left( ta+\left( 1-t\right)
b\right) \right\vert ^{q}dt\right) ^{\frac{1}{q}} \notag\end{aligned}$$
Using the change of variable $x=ta+\left( 1-t\right) b,$ $t\in \left[ 0,1\right] $, inequality (\[m\]) can be written as$$\left\vert \frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq \frac{2^{\frac{1}{q}}p}{\left( p+1\right) \left( b-a\right) }\left( \int_{a}^{b}\left\vert x-\frac{a+b}{2}\right\vert \left\vert f^{\prime }\left( x\right) \right\vert ^{q}dx\right) ^{\frac{1}{q}}.$$
Then the proof is completed.
If we choose $p=q=2$ in *Theorem \[t3\], we have*$$\left\vert \frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq \frac{2^{\frac{3}{2}}}{3\left( b-a\right) }\left( \int_{a}^{b}\left\vert x-\frac{a+b}{2}\right\vert \left\vert f^{\prime }\left( x\right) \right\vert ^{2}dx\right) ^{\frac{1}{2}}$$
Applications to special means
=============================
We now consider the applications of our Theorems to the following special means
The arithmetic mean: $A=A\left( a,b\right) :=\frac{a+b}{2},$ $a,b\geq 0,$
The geometric mean: $G=G\left( a,b\right) :=\sqrt{ab},$ $a,b\geq 0,$
The harmonic mean: $H=H\left( a,b\right) :=\frac{2ab}{a+b},$ $a,b\geq 0,$
The logarithmic mean: $L=L\left( a,b\right) :=\left\{
\begin{array}{l}
a\text{ \ \ \ \ \ \ \ \ \ \ \ \ if \ \ }a=b \\
\frac{b-a}{\ln b-\ln a}\text{ \ \ \ \ \ if \ \ }a\neq b%
\end{array}%
\right. ,$ $a,b\geq 0,$
The Identric mean: $I=I\left( a,b\right) :=\left\{
\begin{array}{l}
a\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ if \ \ }a=b \\
\frac{1}{e}\left( \frac{b^{b}}{a^{a}}\right) ^{\frac{1}{b-a}}\text{ \ \ \ \
\ if \ \ }a\neq b%
\end{array}%
\right. ,$ $a,b\geq 0,$
The $p$-logarithmic mean: $L_{p}=L_{p}\left( a,b\right) :=\left\{
\begin{array}{l}
\left[ \frac{b^{p+1}-a^{p+1}}{\left( p+1\right) \left( b-a\right) }\right]
^{1/p}\text{ \ \ \ \ \ if \ \ }a\neq b \\
a\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ if \ \ }a=b%
\end{array}%
\right. ,$ $\ \ \ p\in \mathbb{R} \backslash \left\{ -1,0\right\} ;$ $a,b>0.$
The following inequality is well known in the literature:$$H\leq G\leq L\leq I\leq A$$
It is also known that $L_{p}$ is monotonically increasing over $p\in \mathbb{R} ,$ denoting $L_{1}=A,$ $L_{0}=I$ and $L_{-1}=L.$
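A direct numerical illustration of these means and of the chain of inequalities above (the sample values of $a$ and $b$ are arbitrary):

```python
import math

# Special means for 0 < a < b (sample values are illustrative only).
def A(a, b): return (a + b) / 2.0                        # arithmetic
def G(a, b): return math.sqrt(a * b)                     # geometric
def H(a, b): return 2.0 * a * b / (a + b)                # harmonic
def L(a, b): return (b - a) / (math.log(b) - math.log(a)) if a != b else a
def I(a, b): return (b**b / a**a) ** (1.0 / (b - a)) / math.e if a != b else a
def Lp(a, b, p):                                         # p-logarithmic, p != -1, 0
    return ((b**(p + 1) - a**(p + 1)) / ((p + 1) * (b - a))) ** (1.0 / p) if a != b else a

a, b = 2.0, 8.0
vals = [H(a, b), G(a, b), L(a, b), I(a, b), A(a, b)]
print(vals)                                              # ~[3.20, 4.00, 4.33, 4.67, 5.00]
print(all(x <= y for x, y in zip(vals, vals[1:])))       # True: H <= G <= L <= I <= A
print(abs(Lp(a, b, 1) - A(a, b)) < 1e-12)                # True: L_1 = A
```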
The following propositions hold:
Let $a$,$b\in \mathbb{R} ^{+},$ $a<b$ and $n\in \mathbb{N} ,$ $n\geq 2.$ Then, for $p,q>1$ with $\frac{1}{p}+\frac{1}{q}=1$, we have$$\left\vert A\left( a^{n},b^{n}\right) -L_{n}^{n}\left( a,b\right) \right\vert \leq \frac{n\,p\left( b-a\right) }{\left( p+1\right) ^{1+\frac{1}{p}}}L_{\left( n-1\right) q}^{n-1}\left( a,b\right) . \label{31}$$
The proof is immediate from Theorem \[t1\] applied for $f(x)=x^{n}$, $x\in \mathbb{R} $.
Let $a$,$b\in \mathbb{R} ^{+},$ $a<b$ and $n\in \mathbb{N} ,$ $n\geq 2.$ Then, for all $p>1$, the following inequality holds:$$A\left( a^{n},b^{n}\right) -L_{n}^{n}\left( a,b\right) \leq \frac{p\left( p-1\right) }{\left( p+1\right) \left( 2p+1\right) }\left[ b^{n}-a^{n}\right] . \label{32}$$
The proof is immediate from Theorem \[t2\] applied for $f(x)=x^{n}$, $x\in \mathbb{R} $.
Let $a$,$b\in \mathbb{R} ^{+},$ $a<b$ and $n\in \mathbb{N} ,$ $n\geq 2.$ Then, for all $p>1$ with $\frac{1}{p}+\frac{1}{q}=1$, the following inequality holds:$$\left\vert A\left( a^{n},b^{n}\right) -L_{n}^{n}\left( a,b\right) \right\vert \leq \frac{2^{\frac{1}{q}}\,n\,p}{\left( p+1\right) \left( b-a\right) }\left( \int_{a}^{b}\left\vert x-\frac{a+b}{2}\right\vert x^{\left( n-1\right) q}dx\right) ^{\frac{1}{q}}. \label{33}$$
The proof is immediate from Theorem \[t3\] applied for $f(x)=x^{n}$, $x\in \mathbb{R} $.
Let $a$,$b\in \mathbb{R} ^{+},$ $a<b.$ Then, we have$$\left\vert A\left( a^{-1},b^{-1}\right) -L^{-1}\left( a,b\right) \right\vert \leq \frac{p\left( b-a\right) ^{\frac{1}{p}}}{\left( p+1\right) ^{1+\frac{1}{p}}}\left( \int_{a}^{b}\left\vert x\right\vert ^{-2q}dx\right) ^{\frac{1}{q}}.$$
The proof is obvious from Theorem \[t1\] applied for $f(x)=1/x$, $x\in
\lbrack a,b]$.
Let $a$,$b\in \mathbb{R} ^{+},$ $a<b.$ Then, we have$$A\left( a^{-1},b^{-1}\right) -L^{-1}\left( a,b\right) \leq \frac{2p\left( p-1\right) }{\left( p+1\right) \left( 2p+1\right) }H^{-1}\left( a,b\right) .$$
The proof is obvious from Theorem \[t2\] applied for $f(x)=1/x$, $x\in
\lbrack a,b]$.
Let $a$,$b\in \mathbb{R} ^{+},$ $a<b.$ Then, we have$$\left\vert A\left( a^{-1},b^{-1}\right) -L^{-1}\left( a,b\right) \right\vert \leq \frac{2^{\frac{1}{q}}p}{\left( p+1\right) \left( b-a\right) }\left( \int_{a}^{b}\left\vert x-\frac{a+b}{2}\right\vert \left\vert x\right\vert ^{-2q}dx\right) ^{\frac{1}{q}}.$$
The proof is obvious from Theorem \[t3\] applied for $f(x)=1/x$, $x\in
\lbrack a,b]$.
[9]{} S.S. Dragomir, C.E.M. Pearce, Selected Topics on Hermite-Hadamard Inequalities and Applications, URL: http://www.maths.adelaide.edu.au/Applied/staff/cpearce.html
S.S. Dragomir, R.P. Agarwal, Two Inequalities for Differentiable Mappings and Applications to Special Means of Real Numbers and to Trapezoidal Formula, Appl. Math. Lett., Vol. 11, No. 5, pp. 91-95, 1998.
S.S. Dragomir, R.P. Agarwal and N.S. Barnett, Inequalities for beta and gamma functions via some classical and new integral inequalities, J. Inequal. & Appl., 5 (2000), 103-165.
M. Tunç, Two New Definitions on Convexity and Related Inequalities, arxiv:1205.5189v1 \[math.CA\], 23 May 2012.
[^1]: $^{\spadesuit }$Corresponding Author
---
author:
- Talal Ahmed Chowdhury
- '& Salah Nasri'
title: Lepton Flavor Violation in the Inert Scalar Model with Higher Representations
---
Introduction {#intro}
============
Neutrino oscillation provides direct evidence for lepton flavor violation (LFV) in the neutrino sector. Therefore, one also expects LFV in the charged lepton sector, which is yet to be observed. This is a generic prediction of most neutrino mass models and, depending on the realization details of the model, the rates of different LFV processes can be very different. In this paper, we focus on the radiative neutrino mass model at one loop proposed in [@Ma:2006km], known as the scotogenic model, in which the scalar content is the inert doublet. Apart from its role in neutrino mass generation, the inert doublet has been extensively studied in the context of dark matter [@Deshpande:1977rw; @LopezHonorez:2006gr; @Dolle:2009fn; @LopezHonorez:2010tb; @Agrawal:2008xz; @Andreas:2009hj; @Nezri:2009jd; @Cao:2007rm], mirror models and extra generations [@Martinez:2011ua; @Melfo:2011ie], the electroweak phase transition [@Chowdhury:2011ga; @Borah:2012pu; @Gil:2012ya; @Cline:2013bln; @Ahriche:2015mea] and collider studies [@Lundstrom:2008ai; @Dolle:2009ft; @Belanger:2015kga]. As a higher scalar representation is not forbidden by any symmetry of the model, the immediate generalization of the doublet, the quartet with isospin $J=3/2$, was studied in [@AbdusSalam:2013eya] to check whether it is viable in providing both light scalar dark matter and a strong electroweak phase transition in the universe. Here we incorporate a higher scalar representation instead of the doublet in the scotogenic model and determine the viable $SU(2)_{L}$ fermion multiplet for generating the neutrino mass. LFV processes in the scotogenic model with the inert doublet have been studied in [@Kubo:2006yx; @Sierra:2008wj; @Suematsu:2009ww; @Adulpravitchai:2009gi; @Toma:2013zsa; @Vicente:2014wga] (and references therein). Extensions of the scotogenic model have been addressed in [@Ma:2008cu; @Law:2013saa]. Larger multiplets have also been incorporated in the type III seesaw model [@Ren:2011mh] and in models of radiative neutrino mass generation at higher order with dark matter [@Ahriche:2015wha].
The generalization of the scotogenic model with a higher $SU(2)_{L}$ half-integer representation does not change the parameter set of the inert doublet Lagrangian at the renormalizable level. Therefore it gives us the opportunity to investigate the predictions for LFV processes of different scalar representations in the same region of parameter space. In particular, we compare the LFV processes for the doublet and the quartet in the light of current experimental bounds and future sensitivities.
There have been many great experimental efforts to detect a positive LFV signal in $l_{\alpha}\rightarrow l_{\beta}\gamma$, $l_{\alpha}\rightarrow 3l_{\beta}$ and the $\mu-e$ conversion rate in nuclei. In the case of the muon radiative decay, the MEG collaboration [@Adam:2011ch] has put a limit of $\text{Br}(\mu\rightarrow e\gamma)<5.7\times 10^{-13}$ [@Adam:2013mnn] and will reach a sensitivity of $6\times 10^{-14}$ after acquiring data for three more years [@Baldini:2013ke]. In addition, the current bound on the branching ratio of the lepton flavor violating three-body decay $\mu\rightarrow ee\overline{e}$ is $1\times 10^{-12}$, set by the SINDRUM experiment [@Bellgardt:1987du], and the Mu3e experiment will reach a sensitivity of $10^{-16}$ [@Blondel:2013ia]. Furthermore, the SINDRUM II experiment has set current limits on the muon to electron ($\mu-e$) conversion rate in Gold (Au) and Titanium (Ti) nuclei of $7\times 10^{-13}$ [@Bertl:2006up] and $4.3\times 10^{-12}$ [@Dohmen:1993mp] respectively. The future projects Mu2e [@Glenzinski:2010zz; @Bartoszek:2014mya], DeeMe [@Natori:2014yba], COMET [@Kuno:2013mha] and PRISM/PRIME [@Kuno:2005mm; @Barlow:2011zza] will improve this bound to the $10^{-14}$ to $10^{-18}$ range. For other LFV processes and their experimental bounds, please see Table I of [@Toma:2013zsa]. We have compared the predictions for the LFV processes $\mu\rightarrow e \gamma$, $\mu\rightarrow ee\overline{e}$ and the $\mu-e$ conversion rate in Au and Ti for both doublet and quartet scalars, and our comparison reveals that the contributions of the quartet to all LFV processes are larger than those of the doublet for the same region of parameter space. Consequently, the contributions of higher scalar representations to LFV processes have better experimental prospects.
The paper is organized as follows. We describe the model in section \[zpt\]. In section \[LFVprocesses\] we present the relevant formulas of the $\mu\rightarrow e \gamma$, $\mu\rightarrow ee\overline{e}$ and $\mu-e$ conversion processes for the inert doublet and quartet. We present the results in section \[resultdiscussion\] and conclude in section \[conclusion\]. Appendix \[massspecscalar\] contains the mass spectrum of the inert doublet and quartet in our parametrization. The expressions of the loop functions are given in appendix \[loopappendix\]. In appendix \[muegZver\] we collect the Feynman diagrams for the $\mu e \gamma$ vertices, $\mu e Z$ vertices and box diagrams.
The Model {#zpt}
=========
Any multiplet charged under $SU(2)_L \times U(1)_Y$ gauge group is characterized by the quantum numbers $J$ and $Y$, with the electric charge of a component in the multiplet is given by $Q=T_3+Y$. For half-integer representation $J=n/2$, $T_3$ ranges from $-\frac{n}{2}$ to $\frac{n}{2}$. So the hypercharge of the multiplet needs to be $Y=\pm T_3$ for one of the components to have neutral charge. For integer representation $n$, similar condition holds for hypercharge.
The generalized scotogenic model involves one half-integer $SU(2)_{L}$ scalar multiplet $\Delta$ with hypercharge $Y=1/2$ and three generations of real ($Y=0$) odd dimensional fermionic multiplets, $F_{i}$ ($i=1-3$) charged under $Z_{2}$ symmetry, $\Delta\rightarrow -\Delta$ and $F_{i}\rightarrow -F_{i}$. When the scalar multiplet is fixed to be $J=n/2$ , $n$ odd, there are two choices for fermionic multiplet which can give $Z_{2}$ even $SU(2)_{L}\times U(1)_{Y}$ invariant Yukawa term with the lepton doublet; $J=\frac{n-1}{2}$ or $\frac{n+1}{2}$. The charged lepton sector is augmented by the following terms $${\cal L}\supset -\frac{M_{F_{i}}}{2}\overline{F_{i}^{c}}P_{R}F_{i}+y_{i\alpha}\overline{F}_{i}.
l_{\alpha}.\Delta+\text{h.c}
\label{yuk1}$$ where the dot represents the proper contractions among $SU(2)$ indices. In the subsequent analysis we have chosen fermion multiplet to be $J=\frac{n-1}{2}$.
The general Higgs-scalar multiplet potential , symmetric under $Z_2$, can be written in the following form, $$\begin{aligned}
\label{potq}
V_0(\Phi,\Delta)&=&- \mu^2 \Phi^\dagger \Phi + M_0^2 \Delta^\dagger \Delta +
\lambda_1 (\Phi^\dagger \Phi)^2 + \lambda_2 (\Delta^\dagger \Delta)^2
+\lambda_3 |\Delta^\dagger T^a \Delta|^2
+\alpha \Phi^\dagger \Phi \Delta^\dagger \Delta\nonumber\\
&+&\beta \Phi^\dagger
\tau^a\Phi \Delta^\dagger T^a \Delta
+\gamma[ (\Phi^T\epsilon \tau^a\Phi) (\Delta^T
C T^a \Delta)^\dagger+h.c] \end{aligned}$$ Here, $\tau^a$ and $T^{a}$ are the $SU(2)$ generators in the fundamental and $\Delta$'s representation, respectively. $C$ is an antisymmetric matrix analogous to the charge conjugation matrix, defined as $$C T^{a} C^{-1}=-T^{aT}$$ Since $C$ is an antisymmetric matrix, it can only be defined for an even dimensional space, i.e. only for half-integer representations. If the isospin of the representation is $J$ then $C$ is a $(2J+1)\times (2J+1)$ dimensional matrix. The generators are normalized in such a way that they satisfy, for the fundamental representation, $Tr[\tau^a \tau^b]=\frac{1}{2} \delta^{ab}$, and for other representations, $Tr(T^{a} T^{b})=D_{2}(\Delta)\delta^{ab}$. Also $T^a T^a=C_{2}(\Delta)$. Here, $D_{2}(\Delta)$ and $C_{2}(\Delta)$ are the Dynkin index and second Casimir invariant of $\Delta$'s representation. Notice that the $\gamma$ term is only allowed for representations with $(J,Y)=(\frac{n}{2},\frac{1}{2})$ and it is essential for the generation of the neutrino mass at one loop.
The scalar representation with $(J,Y)=(\frac{n}{2},\frac{1}{2})$ and the fermionic representation with $(J,Y)=(\frac{n-1}{2},0)$ have the component fields denoted as $\Delta^{(Q)}$ and $F^{(Q)}$ respectively where $Q$ is the electric charge. They are written explicitly as $$\label{hr}
\bf{\Delta_{\frac{n}{2}}}=\begin{pmatrix}
\Delta^{(\frac{n+1}{2})}\\
...\\
\Delta^{(0)}\equiv\frac{1}{\sqrt{2}}(S+i\, A)\\
...\\
\Delta^{(-\frac{n-1}{2})}\\
\end{pmatrix}
\,\textrm{ and }\,
\bf{F_{\frac{n-1}{2}}}= \begin{pmatrix}
F^{(\frac{n-1}{2})}\\
...\\
F^{(0)}\\
...\\
F^{(\frac{-n+1}{2})}\\
\end{pmatrix}$$ For the former representation every component represents a unique field while for the latter there is a redundancy $F^{(-Q)} =
(F^{(Q)})^{*}$.
The choices for real fermion multiplet with the doublet are either $(J,Y)=(0,0)$ or $(1,0)$ and with the quartet, choices are either $(J,Y)=(1,0)$ or $(2,0)$. Our analysis has focused on the following pairs of scalar and fermionic multiplets: $(\Delta_{J=\frac{1}{2}},
{F_{i}}_{J=0})$ and $(\Delta_{J=\frac{3}{2}},
{F_{i}}_{J=1})$. In component fields, the doublet scalar $D$, right handed (RH) neutrino, $N_{R_{i}}$ and the quartet scalar $\Delta$ and the triplet fermion ${\bf F}_{i}$ are expressed as $$D=\begin{pmatrix}
C^{+}\\
D^{0}\equiv \frac{1}{\sqrt{2}}(S+i A)\\
\end{pmatrix}\,,\,
N_{R_{i}}\,,\,
\Delta=\begin{pmatrix}
\Delta^{++}\\
\Delta^{+}\\
\Delta^{0}\equiv \frac{1}{\sqrt{2}}(S+iA)\\
\Delta^{'-}
\end{pmatrix}\,\,
\text{and}\,\,
{\bf F}_{i}=\begin{pmatrix}
F^{+}\\
F^{0}\\
F^{-}
\end{pmatrix}_{i}
\label{fieldreps}$$
Mass spectra
------------
We now sketch the general form of mass spectrum for the scalar and fermionic multiplet which was also presented in [@AbdusSalam:2013eya]. The neutral component of the scalar multiplet ($Y=1/2$) will have $T_{3}$ eigenvalue as $T_3=-\frac{1}{2}$. Now for the Higgs vacuum expectation value, $\langle
\Phi\rangle=(0,\frac{v}{\sqrt{2}})^{T}$, the term $\langle\Phi^\dagger
\rangle\tau^3\langle\Phi\rangle$ gives $-\frac{v^2}{4}$. So the masses of the neutral components $S$ and $A$ are split by the $\gamma$ term as $$\begin{aligned}
\label{nc}
m_{S}^2 &=& M_{0}^2+\frac{1}{2}\left(\alpha+\frac{1}{4}\beta+p(-1)^{p+1}\gamma\right)
v^2\label{masseqscalar}\\
m_{A}^2&=&M_{0}^2+\frac{1}{2}\left(\alpha+\frac{1}{4}\beta-p(-1)^{p+1}\gamma\right)
v^2\label{masseqnpseudo}\end{aligned}$$ Here, $p=\frac{1}{2}\text{Dim}(\frac{n}{2})=1,2,...$ comes from $2p\times2p$ $C$ matrix. For the charged component, with $T_3=m$, where, $m=n/2,n/2-1,...,-n/2$, we have, $$m_{(m)}^2=M_{0}^2+\frac{1}{2}\left(\alpha-\frac{1}{2}\beta\,m\right) v^2.$$
Moreover, because of the $\gamma$ term, there will be mixing between components carrying same amount of charge. A component of the multiplet is denoted as $|J,T_3\rangle$. Components with $|\frac{n}{2},m\rangle$ and $|\frac{n}{2},-(m+1)\rangle$ (such that $-m-1\geq -\frac{n}{2}$) will have positive and negative charge $Q=m+\frac{1}{2}$ respectively. Now $\langle\Phi\rangle^T\epsilon \tau^a\langle\Phi\rangle$ gives $\frac{v^2}{2\sqrt{2}}$. Therefore, the mixing matrix between components with charge $|Q|$ is, $$\label{sc1}
M^2_{Q}=\begin{pmatrix}
m^2_{(m)}&\frac{\gamma v^2}{4}\sqrt{\left(\frac{n}{2}-m\right)\left(\frac{n}{2}+m+1\right)}\\
\\
\frac{\gamma v^2}{4}\sqrt{\left(\frac{n}{2}-m\right)\left(\frac{n}{2}+m+1\right)}& m^2_{(-m-1)}
\end{pmatrix}$$ And the mass eigenstates are, $$\begin{aligned}
\Delta_{1}^{'Q}&=&\cos\theta_{Q}\,\Delta^{Q}_{(m)}+\sin\theta_{Q}\,\Delta_{(-m-1)}^{*Q}\nonumber\\
\Delta_{2}^{'Q}&=&-\sin\theta_{Q}\,\Delta^{Q}_{(m)}+\cos\theta_{Q}\,\Delta_{(-m-1)}^{*Q}
\label{egstate}\end{aligned}$$ where we have $$\tan 2\theta_{Q}=\frac{2(M^{2}_{Q})_{12}}{(M^{2}_{Q})_{11}-(M^{2}_{Q})_{22}}
\label{egstate1}$$
Note that the real fermionic multiplet is degenerate at the tree level. However, there is a small splitting between the charged and neutral component due to radiative correction which is $O(100\,\text{MeV})$ [@Cirelli:2005uq]. This splitting is needed in order to treat the neutral fermion as the dark matter candidate.
Neutrino mass generation
------------------------
The light neutrino masses are generated at one-loop level as shown in figure \[neutrinomassgen\]. The neutrino mass matrix is expressed as $$\begin{aligned}
(m_{\nu})_{\alpha\beta}&=\sum_{i=1}^{3}\frac{y_{\alpha i}y_{i\beta}
M_{F_{i}}}{16\pi^2}\left\{C^{2}_{\frac{1}{2},0,-\frac{1}{2}}
\left[\frac{m^2_{S}}{m^{2}_{S}-m^{2}_{F_{i}}}\text{ln}\frac{m^{2}_{S}}{m^{2}_{F_{i}}}
-\frac{m^2_{A}}{m^{2}_{A}-m^{2}_{F_{i}}}\text{ln}\frac{m^{2}_{A}}{m^{2}_{F_{i}}}\right]\right.\nonumber\\
&+\left.\sum_{Q\neq0}
C_{\frac{1}{2},m+\frac{1}{2},m}C_{\frac{1}{2},-m-\frac{1}{2},-m-1}R_{1,m}R_{2,-m-1}
\left[\frac{m^{2}_{Q,1}}{m^{2}_{Q,1}-m^{2}_{F_{i}}}\text{ln}\frac{m^{2}_{Q,1}}{m^{2}_{F_{i}}}-
\frac{m^{2}_{Q,2}}{m^{2}_{Q,2}-m^{2}_{F_{i}}}\text{ln}\frac{m^{2}_{Q,2}}{m^{2}_{F_{i}}}\right]\right\}\nonumber\\
&=\left. (y^{T}\Lambda y)_{\alpha\beta}\right.
\label{nuemass}\end{aligned}$$ Here $C_{m_1,m_2,m_3}$ is the Clebsch-Gordan (CG) coefficient and $m_{1}$, $m_2$ and $m_3$ are the $T_{3}$ eigenvalues of the lepton doublet, fermion and scalar multiplet respectively. A non-zero CG coefficient requires $m_{1}+m_{3}=m_{2}$. Also $R_{i,m}$ is the element of the rotation matrix that mixes the two scalar components with the same charge $|Q|$, and $m^{2}_{Q,i}$ are the corresponding mass eigenvalues. Moreover, $\Lambda_{i}$ is the loop function, $$\begin{aligned}
\Lambda_{i}&=
\frac{M_{F_{i}}}{16\pi^2}\left\{C^{2}_{\frac{1}{2},0,-\frac{1}{2}}
\left[\frac{m^2_{S}}{m^{2}_{S}-m^{2}_{F_{i}}}\text{ln}\frac{m^{2}_{S}}{m^{2}_{F_{i}}}
-\frac{m^2_{A}}{m^{2}_{A}-m^{2}_{F_{i}}}\text{ln}\frac{m^{2}_{A}}{m^{2}_{F_{i}}}\right]
+\sum_{Q\neq0}
C_{\frac{1}{2},m+\frac{1}{2},m}C_{\frac{1}{2},-m-\frac{1}{2},-m-1}\right.\notag\\
&\left.R_{1,m}R_{2,-m-1}
\left[\frac{m^{2}_{Q,1}}{m^{2}_{Q,1}-m^{2}_{F_{i}}}\text{ln}\frac{m^{2}_{Q,1}}{m^{2}_{F_{i}}}-
\frac{m^{2}_{Q,2}}{m^{2}_{Q,2}-m^{2}_{F_{i}}}\text{ln}\frac{m^{2}_{Q,2}}{m^{2}_{F_{i}}}\right]\right\}
\label{mass2}
\end{aligned}$$
![Neutrino mass generation in the inert doublet (first figure from the left) and the quartet (second and third figures).[]{data-label="neutrinomassgen"}](neutrino_mass.pdf "fig:"){width="5cm"} ![Neutrino mass generation in the inert doublet (first figure from the left) and the quartet (second and third figures).[]{data-label="neutrinomassgen"}](neutrino_mass_quartet_1.pdf "fig:"){width="5cm"} ![Neutrino mass generation in the inert doublet (first figure from the left) and the quartet (second and third figures).[]{data-label="neutrinomassgen"}](neutrino_mass_quartet_2.pdf "fig:"){width="5cm"}
Therefore the neutrino mass at one loop in the doublet case is given by $$(m_{\nu})^{\text{doublet}}_{\alpha\beta}=\sum_{i=1}^{3}\frac{y_{\alpha i}y_{i\beta}
M_{N_{i}}}{16\pi^2}\left[\frac{m^2_{S}}{m^{2}_{S}-m^{2}_{N_{i}}}\text{ln}\frac{m^{2}_{S}}{m^{2}_{N_{i}}}
-\frac{m^2_{A}}{m^{2}_{A}-m^{2}_{N_{i}}}\text{ln}\frac{m^{2}_{A}}{m^{2}_{N_{i}}}\right]
\label{neutrinodoublet}$$ where $M_{N_{i}}$ is the mass of the i-th right handed neutrino. When $m^{2}_{S}\sim m^{2}_{A}\equiv m^{2}_{0}$ then eq. (\[neutrinodoublet\]) gets simplified $$(m_{\nu})^{\text{doublet}}_{\alpha\beta}=\sum_{i=1}^{3}\frac{y_{\alpha i}y_{i\beta}
\gamma v^2}{16\pi^2 M_{N_{i}}}\left[\frac{m^2_{N_{i}}}{m^{2}_{0}-m^{2}_{N_{i}}}
+(\frac{m^2_{N_{i}}}{m^{2}_{0}-m^{2}_{N_{i}}})^2\text{ln}\frac{m^{2}_{N_{i}}}{m^{2}_{0}}\right]
\label{neutrinodoublet1}$$
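To give a feeling for the scale of eq. (\[neutrinodoublet\]), a short numerical evaluation for a single generation is shown below; the Yukawa coupling, the splitting parameter $\gamma$ and the masses are sample values chosen only for illustration.

```python
import math

v = 246.0                      # GeV, electroweak vev

def mnu_doublet(y, MN, mS, mA):
    """One-generation version of eq. (neutrinodoublet), result in GeV."""
    loop = (mS**2 / (mS**2 - MN**2) * math.log(mS**2 / MN**2)
            - mA**2 / (mA**2 - MN**2) * math.log(mA**2 / MN**2))
    return y**2 * MN / (16.0 * math.pi**2) * loop

# Sample point: m_S ~ m_A ~ 3 TeV with a small gamma-induced splitting,
# using m_S^2 - m_A^2 = gamma v^2 (doublet, p = 1 in eqs. (masseqscalar)-(masseqnpseudo)).
gamma, m0, MN, y = 1.0e-4, 3000.0, 1000.0, 0.005
mS = math.sqrt(m0**2 + 0.5 * gamma * v**2)
mA = math.sqrt(m0**2 - 0.5 * gamma * v**2)
print(mnu_doublet(y, MN, mS, mA) * 1e9, "eV")   # ~ 0.09 eV for this sample point
```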
On the other hand, the neutrino mass matrix in the quartet case is given by $$(M_{\nu})^{\text{quartet}}_{\alpha\beta}=\sum_{i=1}^{2}y_{\alpha i}\Lambda_{i}y_{i\beta}
\label{mass3}$$ with the loop factor, $$\begin{aligned}
\Lambda^{\text{quartet}}_{i}&=\left.\frac{1}{3(4\pi)^2}M_{Fi}\left[\frac{m_{S}^2}{m_{S}^2-M_{Fi}^2}
\text{ln}\frac{m_{S}^2}{M_{Fi}^2}-\frac{m_{A}^2}{m_{A}^2-M_{Fi}^2}\text{ln}\frac{m_{A}^2}{M_{Fi}^2}\right]\right.\nonumber\\
&+\left.\frac{1}{6(4\pi)^2}\sin 2\theta M_{Fi}\left[\frac{m_{\Delta^{+}_{1}}^2}{m_{\Delta^{+}_{1}}^2-
M_{Fi}^2}\text{ln}\frac{m_{\Delta^{+}_{1}}^2}{M_{Fi}^2}
-\frac{m_{\Delta^{+}_{2}}^2}{m_{\Delta^{+}_{2}}^2-M_{Fi}^2}\text{ln}\frac{m_{\Delta^{+}_{2}}^2}{M_{Fi}^2}\right]\right.
\label{neutrinoquartet}
\end{aligned}$$ Explicit expressions of masses in the inert doublet and quartet models are included in appendix \[massspecscalar\].
The neutrino mass matrix can be diagonalized as $$U^{T}_{PMNS}\,m_{\nu}\,U_{PMNS}\equiv \hat{m}_{\nu}
\label{mass1}$$ where $$U_{PMNS}=\begin{pmatrix}
c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{i\delta}\\
-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{-i\delta}& c_{12}c_{23}-s_{12}s_{23}s_{13}e^{-i\delta}&s_{23}c_{13}\\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{-i\delta}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{-i\delta}& c_{23}c_{13}
\end{pmatrix}
\times \begin{pmatrix}
1&0&0\\
0& e^{i\alpha/2}&0\\
0 & 0 & e^{i\beta/2}
\end{pmatrix}
\label{upmns}$$ Here, $c_{ij}=\cos\theta_{ij}$, $s_{ij}=\sin\theta_{ij}$, $\delta$ is the Dirac phase and $\alpha$, $\beta$ are the Majorana phases.
The Yukawa matrix $y_{i\alpha}$ ($\alpha=e,\mu,\tau$) is expressed using the Casas-Ibarra parametrization [@Casas:2001sr] so that the chosen parameter space automatically satisfies the low energy neutrino parameters, $$y=\sqrt{\Lambda}^{-1}\,R\,\sqrt{\hat{m}_{\nu}}\,U^{\dagger}_{PMNS}
\label{casas}$$ where $R$ is a complex orthogonal matrix.
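A minimal numerical sketch of eq. (\[casas\]) is given below (normal ordering, real $R$, Majorana phases set to zero). All input values are illustrative, and the loop factors $\Lambda_{i}$ are taken as given numbers rather than computed from eq. (\[mass2\]).

```python
import numpy as np

# PMNS matrix as in eq. (upmns), with Majorana phases set to zero.
def pmns(t12, t23, t13, delta):
    s12, c12 = np.sin(t12), np.cos(t12)
    s23, c23 = np.sin(t23), np.cos(t23)
    s13, c13 = np.sin(t13), np.cos(t13)
    e = np.exp(1j * delta)
    return np.array([
        [c12*c13, s12*c13, s13*e],
        [-s12*c23 - c12*s23*s13*np.conj(e), c12*c23 - s12*s23*s13*np.conj(e), s23*c13],
        [s12*s23 - c12*c23*s13*np.conj(e), -c12*s23 - s12*c23*s13*np.conj(e), c23*c13]])

U = pmns(0.59, 0.785, 0.15, 0.0)            # sample mixing angles (rad)
mnu = np.diag([1e-12, 8.7e-12, 5.0e-11])    # light masses in GeV (normal ordering)
Lam = np.diag([1e-9, 2e-9, 3e-9])           # sample loop factors Lambda_i in GeV

theta = 0.3                                  # real orthogonal R: rotation in the 1-2 plane
R = np.array([[np.cos(theta),  np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])

# y = Lambda^{-1/2} R m_nu^{1/2} U^dagger, cf. eq. (casas)
y = np.sqrt(np.linalg.inv(Lam)) @ R @ np.sqrt(mnu) @ U.conj().T
print(np.round(np.abs(y), 4))                                   # Yukawa entries, O(0.01-0.1) here
print(np.allclose(y.T @ Lam @ y, U.conj() @ mnu @ U.conj().T))  # True: reproduces m_nu
```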
Perturbativity
--------------
If there are N generations of right handed fermion multiplet, perturbativity of the Yukawa gives the following constraint [@Casas:2010wm; @Heeck:2012fw] $$\text{Tr}(y^\dagger y)=\sum_{i=1}^{3}\sum_{j=1}^{N}|R_{ij}|^{2}\frac{\hat{m}_{\nu_{i}}}{\Lambda_{j}}\simlt O(1)
\label{yukawa1}$$
If $R$ is taken to be real, the constraint translates into the largest ratio, $\frac{\hat{m}_{\nu_{i}}}{\Lambda_{j}}\simlt O(1)$, whereas for the general case when $R$ is complex, each entry will be bounded as $|R_{ij}|\simlt \sqrt{\frac{\Lambda_{j}}{3N\hat{m}_{\nu_{i}}}}$.
Lepton flavor violating processes {#LFVprocesses}
=================================
In this section we present the relevant analytical formulas of the LFV processes for the doublet and quartet cases. In the standard model, due to the GIM suppression, the rate of $\mu\rightarrow e\gamma$ is $\sim 10^{-54}$ and thus negligible. On the other hand, the presence of heavy right handed neutrinos that mix with the left handed (LH) neutrinos spoils the GIM suppression and one can obtain rates which can be probed by experiment [@Cheng:1976uq; @Cheng:1977nv; @Cheng:1980tp; @Ma:1980gm; @Lim:1981kv; @Ilakovac:1994kj; @Blum:2007he]. In inert scalar models, the $Z_{2}$ symmetry forbids the mixing between LH and RH neutrinos, but the enhancements in the LFV processes are provided by the $C^{\pm}-N_{R_{i}}$ loops in the doublet and the $\Delta-F_{i}$ loops in the quartet model. We focus on three LFV processes in this paper, $\mu\rightarrow e\gamma$, $\mu\rightarrow ee\overline{e}$ and $\mu-e$ conversion in nuclei, as they have the most stringent experimental limits.
$\mu\rightarrow e\gamma$
------------------------
The branching ratio for $\mu\rightarrow e\gamma$, normalized by $\text{Br}(\mu\rightarrow e\overline{\nu_{e}}\nu_{\mu})$, is [@Hisano:1995cp; @Toma:2013zsa] $$\text{Br}(\mu\rightarrow e\gamma)=\frac{3(4\pi)^3\alpha_{em}}{4G_{F}^2}|A_{D}|^2\,\text{Br}(\mu\rightarrow e\nu_{\mu}\overline{\nu_{e}})
\label{mutoegma}$$ where $A_{D}$ is the dipole form factor. The Feynman diagrams of one-loop contributions by the doublet and quartet to the $\mu e \gamma$ vertex that enters into the dipole form factor calculation, are given in figure \[muegammavertices\].
The contribution from the doublet is the following, $$A^{\text{doublet}}_{D}=\sum_{i=1}^{3}\frac{y^{*}_{e i}y_{i\mu}}{32\pi^2}\frac{1}{m^2_{C}}F^{(n)}(x_{i\sigma})
\label{addoublet}$$ Here $F^{(n)}(x)$ is the loop function given in the appendix \[loopappendix\] and $x_{i\sigma}=m^{2}_{N_{i}}/m^{2}_{\sigma}$, where $\sigma=C^{+}$. On the other hand, the quartet contribution will have two parts $$A^{\text{quartet}}_{D}=A^{\text{quartet}}_{D(n)}+A^{\text{quartet}}_{D(c)}
\label{quartetdipole0}$$ where $A^{\text{quartet}}_{D(n)}$ is the contribution of the neutral component and $A^{\text{quartet}}_{D(c)}$ is that of the charged component of the fermion triplet. Also, for notational convenience, we introduce the generalized Yukawa coupling $y_{i\alpha\sigma}=y_{i\alpha}C_{\sigma}$, where $C_{\sigma}$ is the corresponding Clebsch-Gordan coefficient associated with the $\sigma$-th component of the quartet. The two contributions are $$A^{\text{quartet}}_{D(n)}=\sum_{i=1}^{3}\sum_{\sigma}\frac{y^{*}_{e i\sigma}y_{i\mu\sigma}}{32\pi^2}\frac{1}{m^2_{\sigma}}F^{(n)}(x_{i\sigma})
\label{adquartet1}$$ where $x_{i\sigma}=m^{2}_{F^{0}_{i}}/m^{2}_{\sigma}$, $\sigma=\Delta^{+}_{1},\,\Delta^{+}_{2}$. And $$A^{\text{quartet}}_{D(c)}=-\sum_{i=1}^{3}\sum_{\sigma}\frac{y^{*}_{e i\sigma}y_{i\mu\sigma}}{32\pi^2}\frac{1}{m^2_{\sigma}}F^{(c)}(x_{i\sigma})
\label{adquartet1}$$ where $x_{i\sigma}=m^{2}_{F^{\pm}_{i}}/m^{2}_{\sigma}$, and $\sigma=\Delta^{++},\,S,\,A$.
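For orientation, a rough numerical estimate of eq. (\[mutoegma\]) in the doublet case is sketched below. The explicit form of $F^{(n)}$ is relegated to appendix \[loopappendix\]; in the snippet we assume it coincides with the commonly used loop function $F_{2}(x)=\frac{1-6x+3x^{2}+2x^{3}-6x^{2}\ln x}{6(1-x)^{4}}$ (cf. [@Toma:2013zsa]), the Yukawa couplings and masses are sample values only, and $\text{Br}(\mu\rightarrow e\nu_{\mu}\overline{\nu_{e}})\simeq 1$.

```python
import math

def F2(x):
    """Standard scalar-fermion loop function (assumed form of F^(n), see appendix)."""
    return (1.0 - 6.0*x + 3.0*x**2 + 2.0*x**3 - 6.0*x**2*math.log(x)) / (6.0*(1.0 - x)**4)

alpha_em, GF = 1.0/137.036, 1.166379e-5          # G_F in GeV^-2

def br_mu_e_gamma_doublet(y_e, y_mu, MN, mC):
    """Eq. (mutoegma) with A_D from eq. (addoublet), one RH neutrino, Br(mu -> e nu nu) ~ 1."""
    AD = y_e.conjugate()*y_mu / (32.0*math.pi**2) * F2(MN**2/mC**2) / mC**2
    return 3.0*(4.0*math.pi)**3*alpha_em / (4.0*GF**2) * abs(AD)**2

# Sample point (illustrative only): y ~ 0.05, M_N = 1 TeV, m_C = 3 TeV.
print(br_mu_e_gamma_doublet(0.05, 0.05, 1000.0, 3000.0))   # ~1e-15, below the MEG bound 5.7e-13
```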
$\mu\rightarrow ee\overline{e}$
-------------------------------
Now we turn to $\mu\rightarrow ee\overline{e}$ decay. The branching ratio is given as [@Hisano:1995cp; @Arganda:2005ji; @Toma:2013zsa] $$\begin{aligned}
\text{Br}(\mu\rightarrow ee\overline{e})&=\frac{3(4\pi)^2\alpha_{em}^2}{8G_{F}^2}
\left[|A_{ND}|^2+|A_{D}|^2\left(\frac{16}{3}\text{ln}\frac{m_{\mu}}{m_{e}}-\frac{22}{3}\right)+\frac{1}{6}|B|^2\right.\notag\\
&+\left.\frac{1}{3}(2|F^{L}_{Z}|^2+|F^{R}_{Z}|^2)+\left(-2A_{ND}A^*_{D}+\frac{1}{3}A_{ND}B^*-\frac{2}{3}A_{D}B^*+\text{h.c}\right)\right]\notag\\
&\times \text{Br}(\mu\rightarrow e\overline{\nu_{e}}\nu_{\mu})
\label{mutoeee}\end{aligned}$$ where $A_{D}$ and $A_{ND}$ are the dipole and non-dipole contribution from the photonic penguin diagrams respectively. Also $B$ represents the contribution from the box diagrams. Moreover, $F^{L}_{Z}$ and $F^{R}_{Z}$ are given as $$F^{L}_{Z}=\frac{F_{Z}g^{l}_{L}}{g^2m_{Z}^2\sin^2\theta_{W}}\,\,\,,\,\,\,F^{R}_{Z}=\frac{F_{Z}g^{l}_{R}}{g^2m_{Z}^2\sin^2\theta_{W}}
\label{zcontrib}$$ Here, $F_{Z}$ is the Z-penguin contribution and $g^{l}_{L}$ and $g^{l}_{R}$ are the Z-boson coupling to the LH and RH charged leptons respectively. In this model, Higgs penguin contribution will be suppressed by the small electron Yukawa coupling, and therefore we have only considered the photon penguin, Z-boson penguin and box diagrams.
### $\gamma$-penguin contribution
First let us consider contributions from the photon penguin diagrams. In this case the $\gamma$ line of $\mu e \gamma$ vertex given in figure \[muegammavertices\] will have $\overline{e}e$ attached to it. The photonic non-dipole contribution, $A_{ND}$ for the doublet is in the following $$A^{\text{doublet}}_{ND}=\sum_{i=1}^{3}\frac{y^{*}_{e i}y_{i\mu}}{96\pi^2}\frac{1}{m^2_{C}}G^{(n)}(x_{i\sigma})$$
The photonic non-dipole contribution, for the case of the quartet, will again have two parts, $$A^{\text{quartet}}_{ND}=A^{\text{quartet}}_{ND(n)}+A^{\text{quartet}}_{ND(c)}
\label{quartetnondipole0}$$ Here $A^{\text{quartet}}_{ND(n)}$ is the contribution of the neutral component and $A^{\text{quartet}}_{ND(c)}$ is the contribution of the charged component of the fermion triplet. $$A^{\text{quartet}}_{ND(n)}=\sum_{i=1}^{3}\sum_{\sigma = \Delta^{+}_{1},\,\Delta^{+}_{2}}\frac{y^{*}_{e i\sigma}y_{i\mu\sigma}}{96\pi^2}\frac{1}{m^2_{\sigma}}G^{(n)}(x_{i\sigma})
\label{adquartet1}$$ where again $x_{i\sigma}=m^{2}_{F^{0}_{i}}/m^{2}_{\sigma}$. And the charged component of fermion triplet contributes as follows, $$A^{\text{quartet}}_{ND(c)}=-\sum_{i=1}^{3}\sum_{\sigma}\frac{y^{*}_{e i\sigma}y_{i\mu\sigma}}{96\pi^2}\frac{1}{m^2_{\sigma}}G^{(c)}(x_{i\sigma})
\label{adquartet1}$$ with $x_{i\sigma}=m^{2}_{F^{\pm}_{i}}/m^{2}_{\sigma}$, and $\sigma = \Delta^{++},\,S,\,A$. The loop functions $F^{(n)}(x)$, $F^{(c)}(x)$, $G^{(n)}(x)$ and $G^{(c)}(x)$ are given in the appendix \[loopappendix\].
### Z-penguin contribution
Now we focus on the Z-penguin diagram. The Feynman diagrams of one-loop contributions from the doublet and the quartet to the $\mu e Z$ vertex are presented in figure \[muezvertices\]. In Z-penguin diagram, the $Z$ line of $\mu e Z$ vertex will have $\overline{e}e$ line attached to it. For the doublet, the contribution is given by the neutral fermion. Following the formulas given in [@Arganda:2005ji; @Abada:2014kba; @Arganda:2014lya][^1] $$F^{\text{doublet}}_{Z(n)}=-\frac{1}{16\pi^2}\sum_{i=1}^{3}y^{*}_{e i}y_{i\mu}\left[2\,g_{ZC^{+}C^{-}}\,
C_{24}(m_{N_{i}},m_{C},m_{C})+g^{l}_{L}B_{1}(m_{N_{i}},m_{C})\right]
\label{doubletZ}$$ Here, $g_{ZC^{+}C^{-}}$ is the Z boson coupling to $C^{\pm}$ of the doublet and $g^{l}_{L}$ is the Z boson coupling to LH charged leptons given by $$g^{l}_{L}=\frac{g}{\cos\theta_{W}}\left(-\frac{1}{2}+\sin^2\theta_{W}\right)
\label{Zlepton}$$
On the other hand, the quartet contribution is $$F^{\text{quartet}}_{Z}=F^{\text{quartet}}_{Z(n)}+F^{\text{quartet}}_{Z(c)}
\label{quartetZ0}$$ where the neutral fermion of the triplet contributes as $$\begin{aligned}
F^{\text{quartet}}_{Z(n)}& =-\frac{1}{16\pi^2}\sum_{i=1}^{3}\sum_{\sigma_{1},\sigma_{2}}\left[2\,y^{*}_{e i\sigma_{1}}y_{i\mu\sigma_{2}}\,g_{Z\sigma_{1}\sigma_{2}}\,
C_{24}(m_{F^{0}_{i}},m_{\sigma_{1}},m_{\sigma_{2}})\right.\notag\\
& \left. +y^{*}_{e i\sigma_{1}}y_{i\mu\sigma_{1}}g^{l}_{L}B_{1}(m_{F^{0}_{i}},m_{\sigma_{1}})\right]
\label{quartetZ1}
\end{aligned}$$ where $\sigma_{1,2}\in \{\Delta^{+}_{1},\Delta^{+}_{2}\}$ and $g_{Z\sigma_{1}\sigma_{2}}$ is the Z boson coupling to $\sigma_{1}$ and $\sigma_{2}$ scalars of the quartet. The charged fermion of the triplet has the following contribution $$\begin{aligned}
F^{\text{quartet}}_{Z(c)} & =-\frac{1}{16\pi^2}\sum_{i=1}^{3}\sum_{\sigma_{1},\sigma_{2}}\left\{y^{*}_{e i\sigma_{1}}y_{i\mu\sigma_{1}}\,g_{ZF^{\pm}_{i}\overline{F^{\pm}_{i}}}\, \left[\left(2C_{24}(m_{\sigma_{1}},m_{F^{\pm}_{i}},m_{F^{\pm}_{i}})+\frac{1}{2}\right)\right.\right. \notag \\
& +\left. m^{2}_{F^{\pm}_{i}} C_{0}(m_{\sigma_{1}},m_{F^{\pm}_{i}},m_{F^{\pm}_{i}})\right] +2\,y^{*}_{e i\sigma_{1}}y_{i\mu\sigma_{2}}\,g_{Z\sigma_{1}\sigma_{2}}\, C_{24}(m_{F^{\pm}_{i}},m_{\sigma_{1}},m_{\sigma_{2}}) \notag \\
&\left. + y^{*}_{e i\sigma_{1}}y_{i\mu\sigma_{1}}g^{l}_{L}B_{1}(m_{F^{\pm}_{i}},m_{\sigma_{1}})\right\}
\label{quartetZ2}\end{aligned}$$ Here $\sigma_{1}$ and $\sigma_{2}$ range over the $S,\,A,\ \Delta^{++}$, and $g_{ZF^{\pm}_{i}\overline{F^{\pm}_{i}}}$ is the coupling of Z boson to charged fermions. Moreover, $B_{1}$, $C_{0}$ and $C_{24}$ are the loop functions, adopted from [@Arganda:2005ji; @Arganda:2014lya; @Abada:2014kba], presented in the appendix \[loopappendix\]. As $B_{1}$ and $C_{24}$ arise from divergent loop integrals, for large $M$, $$C_{24}(M,m,m)\rightarrow \frac{1}{4}\text{ln}\frac{M^2}{\mu^2}\,\,,\,\,B_{1}\rightarrow \frac{1}{2}\text{ln}\frac{M^2}{\mu^2}
\label{decoup}$$ Therefore the combination $2 xC_{24}+yB_{1}$ in the Z-penguin contribution, eq. (\[quartetZ1\]) or eq. (\[quartetZ2\]), vanishes at very large mass $M$ when the vertex factors $x$ and $y$ obey specific relations set by group theoretical requirements.
### Box contribution
Lastly the box contribution for the doublet case, presented in figure \[boxfigs\], is [@Arganda:2005ji] $$e^2\,B^{\text{doublet}}_{(n)}=\frac{1}{16\pi^2}\sum_{i,j=1}^{3}\left[\frac{\tilde{D}_{0}}{2}y^{*}_{ei}y_{i\mu}y^{*}_{ej}y_{je}
+D_{0}m_{N_{i}}m_{N_{j}}y^{*}_{ei}y^{*}_{ei}y_{j\mu}y_{je}\right]
\label{boxdoublet}$$ where, $\tilde{D}_{0}=\tilde{D}_{0}(m_{N_{i}},m_{N_{j}},m_{C},m_{C})$ and $D_{0}=D_{0}(m_{N_{i}},m_{N_{j}},m_{C},m_{C})$ are loop functions given in the appendix B.
For the quartet case, the contribution of the box diagram can be written as $$B^{\text{quartet}}=B^{\text{quartet}}_{(n)}+B^{\text{quartet}}_{(c)}
\label{boxquartet0}$$ with $B^{\text{quartet}}_{(n)}$ is the contribution due to the neutral fermions and it is given by $$e^2\,B^{\text{quartet}}_{(n)}=\frac{1}{16\pi^2}\sum_{i,j=1}^{3}\sum_{\sigma_{1},\sigma_{2}}\left[\frac{\tilde{D}_{0}}{2}y^{*}_{ei\sigma_{1}}y_{i\mu\sigma_{2}}y^{*}_{ej\sigma_{2}}y_{je\sigma_{1}}
+D_{0}m_{F^{0}_{i}}m_{F^{0}_{j}}y^{*}_{ei\sigma_{1}}y^{*}_{ei\sigma_{2}}y_{j\mu\sigma_{2}}y_{je\sigma_{1}}\right]
\label{boxquartet1}$$ where, $\tilde{D}_{0}=\tilde{D}_{0}(m_{F^{0}_{i}},m_{F^{0}_{j}},m_{\sigma_{1}},m_{\sigma_{2}})$ and $D_{0}=D_{0}(m_{F^{0}_{i}},m_{F^{0}_{j}},m_{\sigma_{1}},m_{\sigma_{2}})$. Here, $\sigma_{1,2}$ ranges over $\Delta^{+}_{1}$ and $\Delta^{+}_{2}$.
The term $B^{\text{quartet}}_{(c)} $ corresponds to the contribution of the charged fermions and it reads $$e^2\,B^{\text{quartet}}_{(c)}=\frac{1}{16\pi^2}\sum_{i,j=1}^{3}\sum_{\sigma_{1},\sigma_{2}}\frac{\tilde{D}_{0}}{2}y^{*}_{ei\sigma_{1}}
y_{i\mu\sigma_{2}}y^{*}_{ej\sigma_{2}}y_{je\sigma_{1}}$$ Here, $\tilde{D}_{0}=\tilde{D}_{0}(m_{F^{\pm}_{i}},m_{F^{\pm}_{j}},m_{\sigma_{1}},m_{\sigma_{2}})$ and $\sigma_{1,2}$ ranges over $\Delta^{++},\,S,\,A$.
$\mu-e$ conversion in nuclei
----------------------------
The conversion rate, normalized by the muon capture rate is [@Kitano:2002mt; @Arganda:2007jw; @Toma:2013zsa; @Crivellin:2014cta] $$\begin{aligned}
\text{CR}(\mu-e,\text{Nucleus})&=\frac{p_{e}E_{e}m^3_{\mu}G_{F}^2\alpha_{em}^3Z_{eff}^{4}F_{p}^2}{8\pi^2 Z\,\Gamma_{\text{capt}}}
\left\{|(Z+N)(g^{(0)}_{LV}+g^{(0)}_{LS})+(Z-N)(g^{(1)}_{LV}+g^{(1)}_{LS})|^{2}\right.\notag\\
&+\left.|(Z+N)(g^{(0)}_{RV}+g^{(0)}_{RS})+(Z-N)(g^{(1)}_{RV}+g^{(1)}_{RS})|^{2}\right\}
\label{mueconv}\end{aligned}$$ Here, $Z$ and $N$ are the number of protons and neutrons in the nucleus, $Z_{eff}$ is the effective atomic charge, $F_{p}$ is the nuclear matrix element and $\Gamma_{\text{capt}}$ represents the total muon capture rate. $p_{e}$ and $E_{e}$ are the momentum and energy of the electron (taken as $\sim m_{\mu}$ in the numerical evaluation). $g^{(0)}_{XK}$ and $g^{(1)}_{XK}$ ($X=L,R$ and $K=V,S$) in the above expression are given as $$\begin{aligned}
g^{(0)}_{XK}=\frac{1}{2}\sum_{q=u,d,s}(g_{XK(q)}G^{(q,p)}_{K}+g_{XK(q)}G^{(q,n)}_{K})\nonumber\\
g^{(1)}_{XK}=\frac{1}{2}\sum_{q=u,d,s}(g_{XK(q)}G^{(q,p)}_{K}-g_{XK(q)}G^{(q,n)}_{K})
\label{nuclear1}\end{aligned}$$ $g_{XK(q)}$ are the couplings in the effective Lagrangian describing $\mu-e$ conversion, $${\cal L}_{eff}=-\frac{G_{F}}{\sqrt{2}}\sum_{q}\left\{[g_{LS(q)}\overline{e}_{L}\mu_{R}+g_{RS(q)}\overline{e}_{R}\mu_{L}]\overline{q}q+
[g_{LV(q)}\overline{e}_{L}\gamma^{\mu}\mu_{L}+g_{RV(q)}\overline{e}_{R}\gamma^{\mu}\mu_{R}]\overline{q}\gamma_{\mu}q\right\}$$ $G^{(q,p)},\, G^{(q,n)}$ are the numerical factors that arise when quark matrix elements are replaced by the nucleon matrix elements, $$\langle p|\overline{q}\Gamma_{K}q|p\rangle=G^{(q,p)}_{K}\overline{p}\Gamma_{K}p\,\,,\,\,
\langle n|\overline{q}\Gamma_{K}q|n\rangle=G^{(q,n)}_{K}\overline{n}\Gamma_{K}n
\label{nuclear2}$$ For the inert scalar model, the $\mu-e$ conversion rate receives $\gamma$, Z and Higgs penguin contributions. In the $\gamma$ and Z penguin diagrams, a $\overline{q}q$ (q=u,d,s) line is attached to the $\gamma$ line of the $\mu e \gamma$ vertex and to the Z boson line of the $\mu e Z$ vertex, respectively. There is no box contribution because the $Z_{2}$ symmetry forbids any coupling between the inert scalars and quarks. Moreover, the Higgs penguin contribution is small compared to the $\gamma$ and Z penguin diagrams because of the small Yukawa couplings and is thus neglected in our numerical analysis. The relevant effective coupling for the conversion in the inert scalar model is $$\begin{aligned}
g_{LV(q)}&=&g^{\gamma}_{LV(q)}+g^{Z}_{LV(q)}\nonumber\\
g_{RV(q)}&=&g_{LV(q)}|_{L\leftrightarrow R}\nonumber\\
g_{LS(q)}&\approx& 0\,\,\,,\,\,\,g_{RS(q)}\approx 0\nonumber\end{aligned}$$ The relevant couplings are $$\begin{aligned}
g^{\gamma}_{LV(q)}&=&\frac{\sqrt{2}}{G_{F}}e^2Q_{q}(A_{ND}-A_{D})\label{nuclear31}\\
g^{Z}_{LV(q)}&=&-\frac{\sqrt{2}}{G_{F}}\frac{g^{q}_{L}+g^{q}_{R}}{2}\frac{F_{Z}}{m_{Z}^2}
\label{nuclear32}\end{aligned}$$ Here $Q_{q}$ is the electric charge of the quarks and Z boson couplings to the quarks are $$g^{q}_{L}=\frac{g}{\cos\theta_{W}}(T^{q}_{3}-Q_{q}\sin^2\theta_{W})\,\,,\,\,
g^{q}_{R}=-\frac{g}{\cos\theta_{W}}Q_{q}\sin^2\theta_{W}
\label{nuclear4}$$ Also the relevant numerical factors for nucleon matrix elements are $$G^{(u,p)}_{V}=G^{(d,n)}_{V}=2\,\,,\,\,G^{(d,p)}_{V}=G^{(u,n)}_{V}=1
\label{nuclear5}$$
Results and Discussion {#resultdiscussion}
======================
In this section we present our numerical results and discuss their phenomenological implications for larger scalar multiplets. Before presenting the results, we list all the constraints from dark matter and collider searches so that our analysis can focus on the parameter space where both the inert doublet and quartet models are viable.
There are two possible dark matter (DM) candidates in the inert scalar models. In the doublet model they are the lightest right handed neutrino, $N_{1}$ and the lightest neutral scalar, $S$ of the doublet. On the other hand, in the quartet model the neutral component of the lightest fermion triplet, $F_{1}^{0}$ and the lightest neutral scalar, $S$ of the quartet can play the dark matter role. In both cases fermionic and scalar DM give rise to different phenomenology. In this preliminary study of comparing different LFV rates in inert scalar models, we have chosen the scalar as the DM particle and used the constraints associated with it in our analysis.
Constraints and parameter space {#constranitssec}
-------------------------------
### Collider constraints {#colliderconst}
For the doublet scalar, the collider searches have put the following mass constraints: $m_{C^{+}}\simgt 100$ GeV, $m_{S}\simgt 65-80$ GeV and $m_{A}\simgt 140$ GeV [@Lundstrom:2008ai; @Dolle:2009ft; @Gustafsson:2012aj; @Aoki:2013lhm; @Belanger:2015kga]. Although there have not been any collider studies of the quartet, one can recast the constraints of the doublet case onto the quartet. As the quartet scalar has cascade decay channels, we expect multilepton final states along with missing transverse energy, similar to the doublet. Therefore, the mass constraints for the quartet, compatible with bounds on the electroweak precision observables [@Beringer:1900zz], are $m_{\Delta_{1,2}}, m_{\Delta^{++}}\simgt 100$ GeV, $m_{S} \simgt 65-80$ GeV and $m_{A} \simgt 140$ GeV. Considering $S$ as the DM also sets the mass hierarchy among the quartet components: $m_{S}<m_{\Delta_{1}^{+}}<m_{\Delta^{++}}<m_{\Delta_{2}^{+}}<m_{A}$. In contrast, scalar masses at the TeV scale for both the doublet and the quartet are fairly unconstrained.
In the case of the fermions, the masses of the RH neutrinos in the doublet case are not constrained by current collider data. In contrast, the fermion triplet of the quartet case, having gauge interactions, has accessible collider signatures. In [@Aad:2013yna] the mass of the charged component of the triplet is excluded up to 270 GeV with 8 TeV, 20.3 $\text{fb}^{-1}$ LHC data. Moreover, in [@Cirelli:2014dsa] it was shown that the projected reach for the 14 TeV collider with 3 $\text{ab}^{-1}$ luminosity (high luminosity LHC phase) would be $M_{F}\simlt 500$ GeV, for a (future) 100 TeV pp collider with 3 $\text{ab}^{-1}$ luminosity in mono-jet searches $M_{F}\simlt 1.3$ TeV, and with 30 $\text{ab}^{-1}$ luminosity $M_{F}\simlt 1.7$ TeV.
### DM Constraints {#darkmatter}
The dark matter density of the universe measured by the Planck collaboration is $\Omega_{DM}h^2=0.1196\pm 0.0031\,(68\%\,\text{CL})$ [@Ade:2013zuv]. In the inert scalar model, there are two viable mass regions of scalar DM: the low mass region ($m_{S}<m_{W}$) and the high mass region ($m_{S}\gg m_{W}$). The low mass DM region of the doublet model has been extensively studied. The same region for DM in the quartet was addressed in [@AbdusSalam:2013eya], where it was shown that it is harder to achieve low mass dark matter with the correct relic density compared to the doublet because, for most of the parameter space, bounds on the electroweak $T$ parameter set the mass of the singly charged component $\Delta_{1}^{+}$ close to the DM mass; this is not only in tension with collider bounds but also opens up a coannihilation channel and leads to a sub-dominant DM in the universe.
In the high mass region of the doublet, as shown in [@Hambye:2009pw], the DM mass ranges from a lower bound of $m_{0}=534\pm 25$ GeV (where the thermal freeze-out happens only through the gauge interaction) up to $20$ TeV if the Higgs-scalar coupling satisfies $\lambda_{S}\simlt 2\pi$. The maximal mass splittings compatible with the correct relic density are $$|m_{A}-m_{S}|\simlt 16.9\,\text{GeV},\,\,\, |m_{C^{+}}-m_{S}|\simlt 14.6\,\text{GeV}
\label{doubletmasssplit}$$ when $m_{S}\sim O(5\,\text{TeV})$.
In the case of the high mass region for the quartet, we have used FeynRules [@Alloul:2013bka] to generate the model files for MicrOMEGAS [@Belanger:2013oya] and have found that the DM mass ranges from a lower bound of $2.46$ TeV (freeze out only through the gauge interaction)[^2] to an upper bound of $14$ TeV, set again by the $\lambda_{S}\simlt 2\pi$ bound. In this case, the mass splittings between the DM and the other components are $$\begin{aligned}
|m_{A}-m_{S}|&\simlt& 16\,\text{GeV},\,\,|m_{\Delta_{2}^{+}}-m_{S}|\simlt 14\,\text{GeV}\nonumber\\
|m_{\Delta^{++}}-m_{S}|&\simlt& 12\,\text{GeV},\,\,|m_{\Delta_{1}^{+}}-m_{S}|\simlt 1\,\text{GeV}
\label{quartetmasssplit}\end{aligned}$$ when $m_{S}\sim O(5\,\text{TeV})$. figure \[allowerdrelic\] presents the $m_{S}-\lambda_{S}$ plane with allowed region for both doublet and quartet scalar DM by the relic density and direct detection bound [@Akerib:2013tjd]. Here, $\lambda_{S}$ is effective coupling of $S$ to Higgs field as can be seen in eq. (\[masseqscalar\]). From figure \[allowerdrelic\], we can see that there is an overlapping region on the plane where doublet and quartet DM satisfy the constraints simultaneously.
The $\gamma$ coupling, which controls the mass splitting between the scalar (DM) and pseudoscalar components, has the range $\gamma\in [10^{-9},2.7]$ for the doublet and $\gamma\in[10^{-9},1.36]$ for the quartet in order to be consistent with the relic density. But it gets another constraint from bounds on inelastic DM scattering off nuclei. If the typical velocity of a DM particle $\chi$ is $\beta_{\chi} c\sim 220\,\text{km}/\text{sec}$, the inelastic scattering is kinematically forbidden if the splitting $\Delta_{\chi}$ between the DM and the next to lightest component is sufficiently large, $$\Delta_{\chi}>\frac{\beta_{\chi}^2 m_{\chi}M_{\text{nucleus}}}{2(m_{\chi}+M_{\text{nucleus}})}\nonumber$$ Therefore one requires $\gamma\simgt 10^{-5}$ to kinematically forbid the inelastic scattering of a scalar DM with O(TeV) mass. As the inelastic scattering is mediated by the exchange of the Z boson and the corresponding cross section is of the order of $10^{-40}-10^{-39}\,\text{cm}^2$, which is much larger than the direct detection bounds, the allowed ranges of $\gamma$ for doublet and quartet DM are $\gamma\in [10^{-5},2.7]$ and $\gamma\in[10^{-5},1.36]$, respectively.
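A quick numerical check of this kinematic argument (the xenon target mass and the DM mass below are illustrative; the splitting follows from eqs. (\[masseqscalar\])-(\[masseqnpseudo\]) with $p=1$ for the doublet):

```python
import math

beta = 220.0/2.998e5            # typical DM velocity in units of c
v = 246.0                       # GeV, electroweak vev
m_chi, M_nuc = 5000.0, 122.0    # GeV; ~5 TeV DM scattering off a Xe-131 nucleus (assumed target)

# Minimal splitting needed to forbid Z-mediated inelastic scattering
delta_min = beta**2 * m_chi * M_nuc / (2.0*(m_chi + M_nuc))
# Splitting induced by the gamma term in the doublet: m_S^2 - m_A^2 = gamma v^2
gamma = 1e-5
delta_gamma = gamma * v**2 / (2.0*m_chi)
print(delta_min*1e6, "keV needed;", delta_gamma*1e6, "keV from gamma = 1e-5")
# ~32 keV needed vs ~61 keV obtained, so gamma >~ 1e-5 indeed suffices at this mass.
```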
![Correlation between the mass of the DM, $m_{S}$ and the effective coupling between the Higgs and the DM, $\lambda_{S}$ for the doublet and quartet case. Here, the white region is excluded by the direct detection bound from the LUX collaboration [@Akerib:2013tjd]. The left figure represents the correlation without taking into account the Sommerfeld enhancement in the thermal freeze-out. In the right figure, for the green shaded region, Sommerfeld enhancement is not negligible.[]{data-label="allowerdrelic"}](allowed_relic.pdf "fig:"){width="7.5cm"} ![Correlation between the mass of the DM, $m_{S}$ and the effective coupling between the Higgs and the DM, $\lambda_{S}$ for the doublet and quartet case. Here, the white region is excluded by the direct detection bound from the LUX collaboration [@Akerib:2013tjd]. The left figure represents the correlation without taking into account the Sommerfeld enhancement in the thermal freeze-out. In the right figure, for the green shaded region, Sommerfeld enhancement is not negligible.[]{data-label="allowerdrelic"}](allowedrelic_SE_fin.pdf "fig:"){width="7.5cm"}
### Gamma ray constraints and Sommerfeld enhancement {#sommerfeld}
Compared to collider searches and DM direct detection experiments, indirect detection can set limits on inert scalar DM in the TeV mass range because of an enhancement in the annihilation cross sections.
At small relative velocity, two particles interacting via a long-range force receive a non-perturbative enhancement in the interaction cross section, known as the Sommerfeld enhancement [@sommerfeldref]. When the mass of the DM is much larger than the masses of the W and Z bosons, the electroweak interaction effectively behaves like a long-range force, so pair annihilation cross sections of the DM also receive Sommerfeld enhancements, as pointed out in [@Hisano:2003ec; @Hisano:2004ds; @Hisano:2005ec]. At present, as the relative velocity of the DM is about $10^{-3}$, the Sommerfeld enhancement significantly boosts the indirect detection signals, especially the gamma rays produced in DM annihilation, and puts stringent constraints on the DM in the light of the experimental observations. In fact it was shown for the case of wino dark matter [@Fan:2013faa; @Cohen:2013ama] and minimal DM models (5-plet fermion and 7-plet scalar with zero hypercharge) [@Cirelli:2015bda; @Garcia-Cely:2015dda; @Aoki:2015nza] (and references therein) that the experimental limits on the gamma ray spectrum strongly constrain them as the dominant DM of the universe, due to the Sommerfeld enhancement in the pair annihilation cross section.
Having electroweak charge, the heavy DM component of the inert scalar multiplet is also expected to receive enhancements in both the weak and the scalar interactions. Although the full treatment of the Sommerfeld enhancement for the inert scalar model is beyond the scope of this work, following [@ArkaniHamed:2008qn; @Slatyer:2009vg] we introduce dimensionless parameters to carve out the regions of parameter space where the enhancement takes place and where it is negligible. The parameters are $\epsilon_{v_{\text{DM}}}=(v_{\text{DM}}/c)/\alpha$, $\epsilon_{\phi}=(m_{\phi}/m_{\text{DM}})/\alpha$ and $\epsilon_{\delta}=\sqrt{2\delta/m_{\text{DM}}}/\alpha$. Here $v_{\text{DM}}$ is the relative velocity of the DM particle, $m_{\phi}$ is the mass of the gauge boson carrying the force, $\delta$ is the mass splitting between the DM and the next-to-lightest charged component of the multiplet, and $\alpha$ is the coupling constant of the relevant interaction. It was shown in [@Slatyer:2009vg] that the Sommerfeld enhancement is relevant if $\epsilon_{v_{\text{DM}}},\,\epsilon_{\phi},\,\epsilon_{\delta}\simlt 1$. On the other hand, it is negligible in the region of parameter space where any of $\epsilon_{v_{\text{DM}}},\,\epsilon_{\phi},\,\epsilon_{\delta}> 1$.
In the case of the minimal DM models, the processes contributing to the gamma spectrum from DM annihilation are DM DM $\rightarrow W^{+}W^{-},ZZ$, where the decay and fragmentation of the W and Z pairs produce secondary photons, and DM DM $\rightarrow \gamma\gamma,\gamma Z$, producing a line spectrum of mono-energetic photons. The Sommerfeld enhancement takes place when the DM-DM two-particle state changes into the $\text{DM}^{+}\text{DM}^{-}$ two-particle state, where $\text{DM}^{\pm}$ is the next-to-lightest charged state, by exchanging a W boson, and subsequently the charged states annihilate. For the minimal DM case, the DM and the next-to-lightest charged state are almost degenerate (with only a loop-induced mass splitting of $O(100)$ MeV), so $\epsilon_{\delta}<1$ for $\alpha_{w}=1/30$ and TeV-scale DM, and one can have a Sommerfeld-enhanced annihilation cross section. On the other hand, for the inert scalar models, the following terms in the scalar potential $$V\supset \beta \Phi^\dagger
\tau^a\Phi \Delta^\dagger T^a \Delta
+\gamma[ (\Phi^T\epsilon \tau^a\Phi) (\Delta^T
C T^a \Delta)^\dagger+h.c]
\label{scalarmasssplit}$$ can split the DM component from the other components of the multiplet after electroweak symmetry breaking. For example, for the quartet, when $m_{S}=3$ TeV and $\delta=m_{\Delta^{+}_{1}}-m_{S}=1.5$ GeV, $\epsilon_{\delta}$ is $1.001$. In addition, from figure \[allowedepsilon\], we can see that the bounds on the electroweak precision observables allow a maximum mass splitting of $8.78$ GeV, for which $\epsilon_{\delta}$ is $2.46$. Therefore, for such mass splittings, according to [@Slatyer:2009vg], the Sommerfeld enhancement can be negligible in the inert scalar models.
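For definiteness, the short sketch below (a cross-check of ours, not taken from the analysis) evaluates the three parameters for the quartet benchmark, assuming the relevant coupling is the SU(2) one, $\alpha\simeq\alpha_{\text{em}}/\sin^{2}\theta_{W}\approx 0.032$; with this choice the quoted value $\epsilon_{\delta}\simeq 1.0$ for $\delta=1.5$ GeV is reproduced.

```python
import math

# Numerical check of the Sommerfeld parameters eps_v, eps_phi, eps_delta
# for the quartet benchmark point.  Assumption: the relevant coupling is
# alpha = alpha_em / sin^2(theta_W) ~ 0.032 (SU(2) coupling).

alpha_em, sin2w = 1.0 / 137.0, 0.231
alpha = alpha_em / sin2w          # ~ 0.032

m_dm   = 3000.0                   # DM mass in GeV
m_w    = 80.4                     # W boson mass in GeV
delta  = 1.5                      # DM / charged-state splitting in GeV
v_dm   = 1e-3                     # present-day DM velocity in units of c

eps_v     = v_dm / alpha
eps_phi   = (m_w / m_dm) / alpha
eps_delta = math.sqrt(2.0 * delta / m_dm) / alpha

print(f"eps_v = {eps_v:.2f}, eps_phi = {eps_phi:.2f}, eps_delta = {eps_delta:.2f}")
# eps_delta ~ 1.0 for delta = 1.5 GeV; splittings of a few GeV push it above 1
```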
![$\epsilon_{\delta}$ vs $\delta=m_{\Delta^{+}_{1}}-m_{S}$ for DM mass, $m_{S}=3000$ GeV in the quartet. Here, blue points are allowed by stability conditions on the scalar potential and perturbative limits on scalar couplings. Red points are allowed by the bounds on electroweak precision observables.[]{data-label="allowedepsilon"}](epsilondelta.pdf){width="7.5cm"}
Moreover, the Sommerfeld enhancement also affects the thermal freeze-out of the minimal DM, as pointed out in [@Cirelli:2007xd; @Cirelli:2009uv]. Such an enhancement is also expected in the case of inert scalar DM. But if the thermal freeze-out happens after the electroweak phase transition, one can introduce enough mass splitting so that $\epsilon_{\delta}> 1$. In fact, $\delta=1.5$ GeV is compatible with the observed DM relic density of the universe for $m_{S}=3$ TeV for both doublet and quartet scalar DM. On the other hand, if the freeze-out temperature $T_{F}$ is larger than the critical temperature of the electroweak phase transition, $T_{\text{PT}}$, the thermal freeze-out takes place before the electroweak phase transition and there will not be any mass splitting to suppress the enhancement. Therefore the thermal DM scenario for inert scalar DM will be different from that of the broken phase. But the value of the critical temperature of the electroweak phase transition depends on the model, the order of the transition and its dynamics (see for example [@Quiros:1999jp; @Morrissey:2012db]). For this reason, we consider the range $T_{\text{PT}}=100-200$ GeV for the transition temperature. Now if $x_{f}=M_{\text{DM}}/T_{F}\sim 20$, we see that for $m_{\text{DM}}>4$ TeV the freeze-out takes place in the unbroken phase and will involve Sommerfeld-enhanced annihilation cross sections.
On the other hand, for $M_{\text{DM}}<4$ TeV, the DM freezes out in the broken phase, so one can introduce enough mass splitting, $\delta\sim 1.5$ GeV, between the DM and the next-to-lightest charged state to suppress the enhancement in the annihilation cross sections.
For the inert scalar multiplets, apart from the gauge interactions, the DM interacting via Higgs exchange is also expected to receive an enhancement. In this case, the Yukawa potential $V_{\text{sc}}$ experienced by the DM is $$V_{\text{sc}}(r)
=\alpha_{\text{sc}}\frac{e^{-m_{h}r}}{r}\,\,\,\text{with}\,\,\,
\alpha_{\text{sc}}=\frac{\lambda_{S}^2}{4\pi} \frac{v^2}{m_{S}^2}
\label{scalaryukawa}$$ For example, if $m_{S}=1$ TeV and $\lambda_{S}=\pi$, then $\alpha_{\text{sc}}=0.047$ and $\epsilon_{\phi}=2.6$ for the Higgs exchange; therefore the enhancement is generally not important for the scalar interaction when the DM mass is in the TeV range.
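A quick numerical check of these numbers (our own sketch) is given below; $v=246$ GeV and $m_{h}=125$ GeV are assumed.

```python
import math

# Check of the scalar Yukawa coupling from Higgs exchange, eq. (scalaryukawa),
# and the corresponding eps_phi, for the benchmark quoted in the text.

v, m_h = 246.0, 125.0             # vev and Higgs mass in GeV (assumed)
m_s, lam_s = 1000.0, math.pi      # benchmark of the text

alpha_sc = lam_s**2 / (4.0 * math.pi) * v**2 / m_s**2
eps_phi  = (m_h / m_s) / alpha_sc

print(f"alpha_sc = {alpha_sc:.3f}, eps_phi = {eps_phi:.1f}")
# alpha_sc ~ 0.05 and eps_phi ~ 2.6, in line with the numbers quoted above,
# so the Higgs-mediated enhancement is negligible for TeV-scale scalar DM
```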
In summary, although DM with mass in the TeV range in the inert scalar model is expected to have a Sommerfeld enhancement in the gauge interactions and can have a significantly enhanced indirect detection signal, there is a small common region of parameter space for the doublet and the quartet, as seen from figure \[allowerdrelic\] (right), where one can have enough mass splitting to suppress the Sommerfeld enhancement, and such a mass splitting is compatible with the observed DM relic density. Therefore, in the subsequent analysis we only focus on that small region of parameter space, with the benchmark point $m_{S}=3$ TeV and $\delta=1.5$ GeV, and leave the complete analysis of the Sommerfeld enhancement in the inert quartet case for future work [@futureDMcase].
### Scalar coupling and LFV rates with scalar DM {#gammascalarDM}
There is a correlation between the $\gamma$ coupling of the scalar sector and the rate of LFV processes when $R$ in eq. (\[yuk1\]) is a real orthogonal matrix. We can see from eq. (\[neutrinodoublet1\]) and eq. (\[neutrinoquartet\]) that a smaller value of $\gamma$ leads to a smaller value of the loop factor $\Lambda_{i}$ and thus of the neutrino mass. This in turn increases the Yukawa coupling, as in eq. (\[casas\]), and becomes inconsistent with the perturbativity bound eq. (\[yukawa1\]) when $\gamma$ is very small. On the other hand, a large value of $\gamma$ implies a larger separation between $m_{S}$ and $m_{A}$, and also between $m_{\Delta^{+}_{1}}$ and $m_{\Delta^{+}_{2}}$, thus a larger value of $\Lambda_{i}$, and in this case the Yukawa coupling is reduced. In figure \[gammacomparison\] (left), we have illustrated this by comparing $\text{Br}(\mu\rightarrow e\gamma)$ for $\gamma=10^{-9}$ and $10^{-5}$, respectively. We can see that for $\gamma=10^{-5}$ the rate is out of reach for current and future experiments. Therefore, in the case of a real $R$ matrix, $\gamma\sim O(10^{-9})$ leads to appreciable LFV rates. However, we have seen in Sec. \[darkmatter\] that one requires $\gamma\simgt 10^{-5}$ to kinematically forbid the inelastic scattering of scalar DM with $O$(TeV) mass, so considering only a real $R$ will lead to negligible rates of LFV processes.
On the other hand, in the case of complex $R$, such a correlation between $\gamma$ and the rates of LFV processes is not straightforward, because the size of the Yukawa coupling also depends on the imaginary part of the complex angles in $R$. For simplicity, we have added a common imaginary part, Im(z), to the three angles of $R$, and in figure \[gammacomparison\] (right) we can see that, despite having $\gamma=10^{-5}$, $\text{Br}(\mu\rightarrow e\gamma)$ becomes comparable to the current bound with increasing values of Im(z). Again, perturbativity of the Yukawa coupling typically puts an upper bound on Im(z) of $O(3-5)$. Therefore, one can have viable scalar DM in both the doublet and quartet models with $\xi>1$ and appreciable LFV rates by tuning the imaginary part of the complex angles in $R$.
![The left figure presents the dependence of the rate of LFV processes on $\gamma$ when $R$ is a real matrix. Here we have considered only $\text{Br}(\mu\rightarrow e\gamma)$ for illustration. The brown and blue points represent the rate in the doublet and quartet cases respectively for $\gamma=10^{-9}$. On the other hand, the orange and red points represent the rate in the doublet and quartet cases respectively for $\gamma=10^{-5}$. The right figure presents the correlation of the rate in the doublet (brown points) and quartet (blue points) with the imaginary part of the complex angle, Im(z), when we consider a complex $R$ matrix. Here the scalar mass is fixed at $m_{\text{scalar}}=3000$ GeV and $\gamma=10^{-5}$. The black horizontal line is the current bound $5.7\times 10^{-13}$ and the red line is the projected bound $6\times 10^{-14}$.[]{data-label="gammacomparison"}](fin_comparison_gamma.pdf "fig:"){width="7.5cm"} ![The left figure presents the dependence of the rate of LFV processes on $\gamma$ when $R$ is a real matrix. Here we have considered only $\text{Br}(\mu\rightarrow e\gamma)$ for illustration. The brown and blue points represent the rate in the doublet and quartet cases respectively for $\gamma=10^{-9}$. On the other hand, the orange and red points represent the rate in the doublet and quartet cases respectively for $\gamma=10^{-5}$. The right figure presents the correlation of the rate in the doublet (brown points) and quartet (blue points) with the imaginary part of the complex angle, Im(z), when we consider a complex $R$ matrix. Here the scalar mass is fixed at $m_{\text{scalar}}=3000$ GeV and $\gamma=10^{-5}$. The black horizontal line is the current bound $5.7\times 10^{-13}$ and the red line is the projected bound $6\times 10^{-14}$.[]{data-label="gammacomparison"}](complex_angle.pdf "fig:"){width="7.5cm"}
### Viable parameter space {#viableparametersp}
The parameter space for the model consists of $\{M_{0},\alpha,\beta,\gamma\}$ of the scalar sector and $\{M_{N(F)},y_{i\alpha}\}$ of the fermionic sector. Here $M_{N}$ and $M_{F}$ are the masses of RH neutrino and real fermion triplet (as the components of the triplet are degenerate at tree level) respectively.
The focus of this preliminary study is the comparison among different LFV rates in the doublet and quartet models with scalar DM. First, from figure \[allowerdrelic\], as an exemplary point, we have chosen the mass of the scalar DM to be $m_{S}=3$ TeV with $\lambda_{S}=1.3$ in the $m_{S}-\lambda_{S}$ plane, so that scalar DM is viable in both the doublet and the quartet model. Moreover, $\gamma$ is set to $10^{-5}$ to be consistent with the bounds from DM direct detection. The components of the scalar multiplet are almost degenerate, apart from the very small splitting induced by non-zero $\gamma$; therefore we set the average mass of the scalar components at $m_{\text{scalar}}=3$ TeV.
We have considered two sets of fermion mass ranges in our analysis. For the comparison of LFV rates under variation of the fermion masses in both the doublet and quartet models, we have evaluated them in two regimes, namely i) $\xi=M^2_{N(F)}/m^2_{\text{scalar}}<1$, where the scalar component ceases to be the DM, and ii) $\xi>1$, where the scalar component is the DM. We have varied the masses of the RH neutrinos and the fermion triplet within the range $M_{N(F)}\in (270\,\text{GeV},\,30\,\text{TeV})$, which encompasses both regimes mentioned above. $270$ GeV is taken as the lower limit of the fermion mass, as the triplet fermion is excluded up to that mass in collider searches. Such a range is also considered in order to see how the LFV rates vary with the mass of the fermion, in addition to the DM aspects of the inert scalar model.
We have used the experimental values of the low-energy neutrino parameters, $U_{\text{PMNS}}$, $\Delta m^2_{\text{solar}}$ and $\Delta m^2_{\text{atm}}$, as input in eq. (\[yuk1\]) for the Yukawa couplings. For both normal and inverted hierarchies, the remaining free parameters are the lowest neutrino mass $m_{\nu_{1}}$, the Dirac phase $\delta$, the Majorana phases $\alpha_{\nu},\, \beta_{\nu}$, and the three complex angles $z_{1},\,z_{2},\,z_{3}$ of $R$. In our numerical analysis, as a simplification, the lowest neutrino mass is set to $m_{\nu_{1}}=1$ meV, $\delta\in [0, 2\pi]$, $\alpha_{\nu}=\beta_{\nu}=0$, and a common imaginary part Im(z) is used in $z_{i}=\theta_{i}+i\,\text{Im}(z_{i})$, with the range $(0,5)$.
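As an illustration of how the complex angles enter, the following sketch builds $R$ as a product of three complex rotations and verifies $RR^{T}=1$; this factorization is an assumed parametrization used only for illustration, and the specific angle values are arbitrary.

```python
import numpy as np

# Sketch (assumed parametrization, not necessarily the one used in our scan):
# build a complex orthogonal matrix R from three complex angles
# z_i = theta_i + i*Im(z) as a product of three complex rotations,
# and check R R^T = 1 (the Casas-Ibarra orthogonality condition).

def rotation(z, axis):
    c, s = np.cos(z), np.sin(z)
    r = np.eye(3, dtype=complex)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    r[i, i] = r[j, j] = c
    r[i, j], r[j, i] = s, -s
    return r

theta = np.array([0.3, 1.1, 2.0])        # arbitrary real parts
im_z = 2.0                               # common imaginary part
z = theta + 1j * im_z

R = rotation(z[0], 0) @ rotation(z[1], 1) @ rotation(z[2], 2)

print("max |R R^T - 1| =", np.max(np.abs(R @ R.T - np.eye(3))))
print("max |R_ij| =", np.max(np.abs(R)))   # grows exponentially with Im(z),
                                           # hence the larger Yukawa couplings
```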
Summarizing, the input parameters in our numerical scans are $\{M_{0},\alpha,\beta,\gamma,M_{N}=M_{F}=\tilde{M},m_{\nu_{1}},\delta,\alpha_{\nu},\beta_{\nu},\theta_{1},\theta_{2},\theta_{3},\text{Im}(z)\}$, satisfying all the constraints mentioned above. Therefore, we can compare the LFV rates in both models for common viable points in the parameter space.
LFV processes {#LFVprocessessection}
-------------
In the inert scalar models with scalar DM in the high mass regime, there is no direct correlation between the Yukawa couplings and the DM properties. Also, we have seen that a real matrix $R$ and $\gamma\simgt 10^{-5}$ (the scalar DM direct detection constraint) give rise to small Yukawa couplings, which in turn lead to LFV rates beyond the reach of current and future experiments, as seen in figure \[gammacomparison\] (left). But the size of the Yukawa coupling can be enhanced by varying the imaginary part of the complex angles of $R$ without substantially affecting the phenomenology of the scalar DM, and despite having $\gamma\simgt 10^{-5}$ we can easily obtain LFV rates within the experimental range.
So first we have compared the rates of $\mu\rightarrow e\gamma$, $\mu\rightarrow ee\overline{e}$ and the $\mu-e$ conversion with $\gamma=10^{-5}$ and a real $R$ matrix, varying the fermion masses for the doublet and quartet models. Then we vary Im(z) within its constrained limits and determine the region allowed by the current and future bounds on the rates of these three LFV processes for both the doublet and the quartet cases. Also, when $\xi>1$ we have scalar dark matter in both the doublet and quartet models.
### $\text{Br}(\mu\rightarrow e \gamma)$ {#muegammasec}
Due to the excellent bound set by the MEG collaboration [@Adam:2011ch; @Adam:2013mnn], $\mu\rightarrow e\gamma$ is one of the most well studied LFV processes. Figure \[Brmuegamma\] shows the comparison of this process between the doublet (brown points) and the quartet (blue points) scalar. We can see that the quartet contribution to $\mu\rightarrow e\gamma$ is larger than that of the doublet. For the same parameter point, in the quartet case additional charged and neutral scalars ($\Delta_{1}^{\pm}$, $\Delta_{2}^{\pm}$, $\Delta^{\pm\pm}$, $S$ and $A$) and fermion states ($F^{0}_{i}$ and $F^{\pm}_{i}$) enter the loop, compared to a single charged scalar ($C^{\pm}$) and neutral fermion state ($N_{i}$) in the doublet case, and as the contributions of the extra states are additive, the rate is larger in the quartet case than in the doublet. From figure \[Brmuegamma\] we can see that $\text{Br}(\mu\rightarrow e \gamma)$ is larger for the quartet than for the doublet for both $\xi<1$ and $\xi>1$ (where the doublet and quartet scalars are the DM).
![Correlation between $\xi=M^2_{N(F)}/m^2_{\text{scalar}}$ and $\text{Br}(\mu\rightarrow e\gamma)$ for doublet (brown points) and quartet (blue points) with normal (left fig.) and inverted (right fig.) hierarchy for light neutrino mass. Here we have taken $M_{N(F)}$ to be degenerate, random Dirac phase $\delta$ and random real matrix $R$. Also we have set Majorana phases $\alpha_{\nu}$ and $\beta_{\nu}$ to be zero in this case. The scalar mass is fixed at $m_{\text{scalar}}=3000$ GeV. Also $\gamma=10^{-5}$ and light neutrino mass, $m_{\nu_{1}}=1$ meV. []{data-label="Brmuegamma"}](muegammafinal_NH.pdf "fig:"){width="7.5cm"} ![Correlation between $\xi=M^2_{N(F)}/m^2_{\text{scalar}}$ and $\text{Br}(\mu\rightarrow e\gamma)$ for doublet (brown points) and quartet (blue points) with normal (left fig.) and inverted (right fig.) hierarchy for light neutrino mass. Here we have taken $M_{N(F)}$ to be degenerate, random Dirac phase $\delta$ and random real matrix $R$. Also we have set Majorana phases $\alpha_{\nu}$ and $\beta_{\nu}$ to be zero in this case. The scalar mass is fixed at $m_{\text{scalar}}=3000$ GeV. Also $\gamma=10^{-5}$ and light neutrino mass, $m_{\nu_{1}}=1$ meV. []{data-label="Brmuegamma"}](muegammafinal_IH.pdf "fig:"){width="7.5cm"}
### $\text{Br}(\mu\rightarrow e e\overline{e})$ {#mueeesection}
In $\mu\rightarrow e e \overline{e}$, the dominant contributions come from the $\gamma$-penguin and box diagrams. The Higgs penguin diagram is suppressed by the small electron Yukawa coupling. The Z-penguin contribution is small because of the cancellation that takes place between the $C_{24}$ and $B_{1}$ terms in eq. (\[doubletZ\]), and also between the same terms in eq. (\[quartetZ1\]) when $m_{\sigma_{1}}=m_{\sigma_{2}}$. Moreover, a similar cancellation takes place between the first two lines and the third line of eq. (\[quartetZ2\]), due to the specific relations among the couplings in front of the vertices. Therefore the Z-penguin contribution is also small in $\mu\rightarrow e e \overline{e}$ for both the inert doublet and the quartet case. Also note that the Z contribution in the quartet case is relatively bigger than in the doublet, because in the quartet $m_{\sigma_{1}}$ and $m_{\sigma_{2}}$ are not exactly equal when $\sigma_{1}\neq \sigma_{2}$. Hence one receives a larger Z-penguin contribution in the quartet compared to the doublet. Still, this contribution is numerically not significant if we compare it with the $\gamma$-penguin or box diagram contributions. From figure \[Brmueee\], we can see that $\text{Br}(\mu\rightarrow e e \overline{e})$ is larger for the quartet (blue points) compared to the doublet (brown points) for both the $\xi<1$ and $\xi>1$ cases.
![Correlation between $\xi=M^2_{N(F)}/m^2_{\text{scalar}}$ and $\text{Br}(\mu\rightarrow ee\overline{e})$ for doublet (brown points) and quartet (blue points) with normal (left fig.) and inverted (right fig.) hierarchy for light neutrino mass. Here we have taken same input parameters as in $\text{Br}(\mu\rightarrow
e\gamma)$. []{data-label="Brmueee"}](mueeefinal_NH.pdf "fig:"){width="7.5cm"} ![Correlation between $\xi=M^2_{N(F)}/m^2_{\text{scalar}}$ and $\text{Br}(\mu\rightarrow ee\overline{e})$ for doublet (brown points) and quartet (blue points) with normal (left fig.) and inverted (right fig.) hierarchy for light neutrino mass. Here we have taken same input parameters as in $\text{Br}(\mu\rightarrow
e\gamma)$. []{data-label="Brmueee"}](mueeefinal_IH.pdf "fig:"){width="7.5cm"}
### $\mu-e$ conversion rate {#mueconvsection}
Another prominent LFV process currently under investigation is $\mu-e$ conversion in nuclei. Here we have calculated the $\mu-e$ conversion rate for Ti and Au nuclei in the inert model with the doublet and the quartet. From figure \[mueconveTifig\], we can see that the $\mu-e$ conversion rate is larger for the quartet (blue points) compared to the doublet (brown points). A dip occurs in the doublet contribution at $\xi=1$ because at that value the dipole contribution $A^{\text{doublet}}_{D}$ and the non-dipole contribution $A^{\text{doublet}}_{ND}$ are equal, as they both come from a single $\gamma$-penguin diagram involving the charged scalar $C^{\pm}$ and the neutral fermion $N_{i}$, and eq. (\[nuclear31\]) indicates that the effective coupling vanishes for the doublet at that point. On the other hand, for the quartet case $A^{\text{quartet}}_{D}$ and $A^{\text{quartet}}_{ND}$ at $\xi=1$ are different, because more than one charged scalar contributes to the $\gamma$-penguin diagrams. Again we can see from figure \[mueconveTifig\] that the conversion rate is larger for the quartet than for the doublet for both the $\xi<1$ and $\xi>1$ cases. We have not included the figure for the $\mu-e$ conversion rate in Au nuclei as it is similar to figure \[mueconveTifig\].
![Correlation between $\xi=M^2_{N(F)}/m^2_{\text{scalar}}$ and $\mu-e$ conversion rate for Ti nucleus. for doublet (brown points) and quartet (blue points) with normal (left fig.) and inverted (right fig.) hierarchy for light neutrino mass. Here we have taken same input parameters as in $\text{Br}(\mu\rightarrow
e\gamma)$. []{data-label="mueconveTifig"}](mueconvfinal_NH.pdf "fig:"){width="7.5cm"} ![Correlation between $\xi=M^2_{N(F)}/m^2_{\text{scalar}}$ and $\mu-e$ conversion rate for Ti nucleus. for doublet (brown points) and quartet (blue points) with normal (left fig.) and inverted (right fig.) hierarchy for light neutrino mass. Here we have taken same input parameters as in $\text{Br}(\mu\rightarrow
e\gamma)$. []{data-label="mueconveTifig"}](mueconvfinal_IH.pdf "fig:"){width="7.5cm"}
### LFV rates in the doublet and quartet {#doubletvsquartet}
As expected, the LFV rates seen in figures \[Brmuegamma\], \[Brmueee\] and \[mueconveTifig\] are very small for real $R$ and $\gamma=10^{-5}$. The rates decrease even further if we increase $\gamma$. Still, the rates are larger for the quartet than for the doublet in both the $\xi<1$ and the $\xi>1$ case, where the scalar is treated as the DM candidate. Now we increase the value of Im(z) and calculate the LFV rates with increasing values of $\tilde{M}$.
![The $\xi-\text{Im(z)}$ plane for degenerate $M_{N(F)}$, random Dirac phase $\delta$, zero Majorana phases $\alpha_{\nu}=\beta_{\nu}=0$ and light neutrino mass, $m_{\nu_{1}}=1$ meV. The scalar mass is $m_{\text{scalar}}=3000$ GeV with $\gamma=10^{-5}$. In (left), the current bounds are imposed: $\text{Br}(\mu\rightarrow e\gamma)\simlt 5.9\times 10^{-13}$, $\text{Br}(\mu\rightarrow ee\overline{e}) \simlt 1\times 10^{-12}$ and $\mu-e$ conversion rate for Ti $\simlt 4.3\times 10^{-12}$. In (right), the future sensitivities are considered: $\text{Br}(\mu\rightarrow e\gamma)\simlt 6.4\times 10^{-14}$, $\text{Br}(\mu\rightarrow ee\overline{e}) \simlt 1\times 10^{-16}$ and $\mu-e$ conversion rate for Ti $\simlt 10^{-18}$.[]{data-label="LFVconstraints"}](LFV_NH.pdf "fig:"){width="7.5cm"} ![The $\xi-\text{Im(z)}$ plane for degenerate $M_{N(F)}$, random Dirac phase $\delta$, zero Majorana phases $\alpha_{\nu}=\beta_{\nu}=0$ and light neutrino mass, $m_{\nu_{1}}=1$ meV. The scalar mass is $m_{\text{scalar}}=3000$ GeV with $\gamma=10^{-5}$. In (left), the current bounds are imposed: $\text{Br}(\mu\rightarrow e\gamma)\simlt 5.9\times 10^{-13}$, $\text{Br}(\mu\rightarrow ee\overline{e}) \simlt 1\times 10^{-12}$ and $\mu-e$ conversion rate for Ti $\simlt 4.3\times 10^{-12}$. In (right), the future sensitivities are considered: $\text{Br}(\mu\rightarrow e\gamma)\simlt 6.4\times 10^{-14}$, $\text{Br}(\mu\rightarrow ee\overline{e}) \simlt 1\times 10^{-16}$ and $\mu-e$ conversion rate for Ti $\simlt 10^{-18}$.[]{data-label="LFVconstraints"}](LFV_NH_1.pdf "fig:"){width="7.5cm"}
From figure \[LFVconstraints\], we can see that the LFV rates in the quartet are more constrained than those in the doublet for the common parameter space satisfying all the restrictions of Sec. \[constranitssec\]. The allowed regions in the $\xi$-Im(z) plane are reduced further for both the doublet and the quartet models if one imposes the sensitivity of future lepton flavor violating experiments. The case of inverted hierarchy shows a similar pattern, so we have only presented results for normal hierarchy.
Conclusions {#conclusion}
===========
The scotogenic model is a well-studied neutrino mass model and lepton flavor violation is one of its important phenomenological aspects. In this study we present a comparison among different LFV processes in the inert doublet and quartet models, taking into account the current experimental limits and future sensitivities. There are two possible dark matter candidates in the inert scalar models: scalar and fermionic DM. In this study we have considered scalar DM and evaluated the LFV rates for a common parameter space subject to collider bounds, DM constraints for the doublet and quartet models, and the low-energy neutrino parameters. Our results are summarized as follows.
- $\text{Br}(\mu\rightarrow e \gamma)$, $\text{Br}(\mu\rightarrow e e \overline{e})$ and the $\mu-e$ conversion rates in nuclei in the quartet model are larger than those in the doublet model for the same parameter space, as seen from figures \[Brmuegamma\], \[Brmueee\] and \[mueconveTifig\]. In the case of a higher scalar representation more particles enter the loops and their contributions are additive in the LFV processes. Therefore one obtains larger rates for the different LFV processes compared to a lower scalar representation. From figure \[LFVconstraints\], we can see that LFV processes in the higher scalar representation are more constrained by the current and near-future experiments. In addition, this phenomenological result is complementary to the appearance of a low-scale Landau pole for higher representations [@AbdusSalam:2013eya; @DiLuzio:2015oha; @Hamada:2015bra].
- There is no significant deviation from figures \[Brmuegamma\]-\[LFVconstraints\] for non-degenerate right-handed neutrinos and real fermion triplets. In the case of a large hierarchy, $m_{N_{3}}\gg m_{N_{1,2}}$, the dominant contribution comes only from the lightest generation.
We would like to emphasize here that the conclusions of our preliminary study apply to the inert scalar models where scalar DM is considered. But there is much room for an improved analysis. For example, in the case of fermionic DM, the DM constraints will be different and will lead to a different viable parameter set for the LFV rate comparison. Also, one needs to study the DM properties and the viability of a common parameter space where $\xi\sim 1$. Therefore, a further quantitative analysis of the fermionic DM aspects in the quartet model will be presented in a future publication [@futureDMcase]. Furthermore, a similar analysis can be carried out for $\tau\rightarrow \mu\gamma$, $\tau\rightarrow ee\overline{e}$, $\tau\rightarrow \mu\mu\overline{\mu}$ in the inert scalar models to probe the flavor structure of the Yukawa sector and to obtain better constraints on the higher scalar representations in the light of the experimental limits.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank Avelino Vicente for stimulating discussions. T.A.C. is grateful to Fernando Quevedo, Bobby Acharya and the HECAP section of ICTP for the support and the hospitality while the initial part of this work was carried out. We are also indebted to the Referee for the constructive report, thanks to which the results and presentation of our study have been substantially improved.
Scalar masses {#massspecscalar}
=============
Inert Doublet
-------------
The mass spectrum for the inert doublet in our parametrization eq. (\[potq\]) is, $$\begin{aligned}
m^2_{S}&=&M^2_{0}+\frac{1}{2}\left(\alpha+\frac{1}{4}\beta+\gamma\right)v^2\nonumber\\
m^2_{A}&=&M^2_{0}+\frac{1}{2}\left(\alpha+\frac{1}{4}\beta-\gamma\right)v^2\nonumber\\
m^2_{C}&=&M^2_{0}+\frac{1}{2}\left(\alpha+\frac{1}{4}\beta\right)v^2
\label{doubetmass}\end{aligned}$$
Inert Quartet
-------------
In the inert quartet case, the $\gamma$ term, apart from splitting $S$ and $A$, also mixes the two singly charged components of the quartet. According to eq. (\[sc1\]), the mass matrix for the singly charged fields in the $(\Delta^{+},\Delta^{'+})$ basis is $$M^2_{+}=\begin{pmatrix}
M_0^2+\frac{1}{2}(\alpha-\frac{1}{4}\beta)v^2& \frac{\sqrt{3}}{2}\gamma v^2\\
\frac{\sqrt{3}}{2}\gamma v^2& M_0^2+\frac{1}{2}(\alpha+\frac{3}{4}\beta)v^2
\end{pmatrix}$$ Diagonalizing the mass matrix, we obtain the mass eigenstates of the singly charged fields, $\Delta_1^+=\Delta^{+}\cos\theta +\Delta^{'+}\sin\theta$, $\Delta_2^+=-\Delta^{+}\sin\theta +\Delta^{'+}\cos\theta$, with $\tan2\theta=-\frac{2\sqrt{3}\gamma}{\beta}$.
Therefore the mass spectrum of the quartet is $$\begin{aligned}
\label{qrmass1}
m^2_{S(A)}&=&M^2_{0}+\frac{1}{2}\left(\alpha+\frac{1}{4}\beta\mp 2\gamma\right)v^2\nonumber\\
m^2_{\Delta^{++}}&=&M^2_{0}+\frac{1}{2}\left(\alpha-\frac{3}{4}\beta\right)v^2\nonumber\\
m^2_{\Delta^{+}_{1}(\Delta^{+}_{2})}&=&M^2_{0}+\frac{1}{2}\left(\alpha+\frac{1}{4}\beta \mp \frac{1}{2}\sqrt{\beta^2+12\gamma^2}\right)v^2
\label{massquartet}\end{aligned}$$
Because of the mixing between the two singly charged states, the masses obey the relation $$m_{S}^2+m_{A}^2=m_{\Delta_{1}^{+}}^2+m_{\Delta_{2}^{+}}^2$$
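The following short numerical sketch (an illustration of ours, with arbitrary parameter values) diagonalizes the charged mass matrix above and verifies this sum rule, which follows from the trace of $M^{2}_{+}$.

```python
import numpy as np

# Cross-check: build the singly charged mass matrix of the quartet,
# diagonalize it, and verify m_S^2 + m_A^2 = m_{Delta1+}^2 + m_{Delta2+}^2.

v = 246.0
M0, alpha, beta, gamma = 3000.0, 1.0, 0.3, 0.05   # illustrative values only

m2_S = M0**2 + 0.5 * (alpha + beta / 4 - 2 * gamma) * v**2
m2_A = M0**2 + 0.5 * (alpha + beta / 4 + 2 * gamma) * v**2

M2_plus = np.array([
    [M0**2 + 0.5 * (alpha - beta / 4) * v**2, np.sqrt(3) / 2 * gamma * v**2],
    [np.sqrt(3) / 2 * gamma * v**2, M0**2 + 0.5 * (alpha + 3 * beta / 4) * v**2],
])
m2_d1, m2_d2 = np.linalg.eigvalsh(M2_plus)

print("masses [GeV]:", np.sqrt([m2_S, m2_A, m2_d1, m2_d2]).round(3))
print("sum rule residual:", m2_S + m2_A - m2_d1 - m2_d2)   # ~ 0 (trace identity)
```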
Loop functions {#loopappendix}
==============
The loop functions relevant for the dipole and non-dipole form factors from $\mu e \gamma$ vertex are $$\begin{aligned}
F^{(n)}(x)&=&\frac{1-6x+3 x^2+2 x^3-6 x^2\text{ln}x}{6(1-x)^{4}}\label{dipoleneu}\\
F^{(c)}(x)&=&\frac{2+3x-6x^2+x^3+6x\text{ln}x}{6(1-x)^{4}}\label{dipolechar}\\
G^{(n)}(x)&=&\frac{2-9x+18x^2-11x^3+6x^3\text{ln}x}{6(1-x)^4}\label{nondipoleneu}\\
G^{(c)}(x)&=&\frac{16-45x+36x^2-7x^3+6(2-3x)\text{ln}x}{6(1-x)^4}\label{nondipolechar}\end{aligned}$$
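For convenience, a direct transcription of these four functions is given below (our own sketch; near $x=1$ the expressions become $0/0$ and should instead be evaluated by a series expansion, which is not implemented here).

```python
import math

# Dipole / non-dipole loop functions above; x = M_fermion^2 / m_scalar^2.
# Valid away from x = 1, where the formulas develop a removable singularity.

def F_n(x):
    return (1 - 6*x + 3*x**2 + 2*x**3 - 6*x**2 * math.log(x)) / (6 * (1 - x)**4)

def F_c(x):
    return (2 + 3*x - 6*x**2 + x**3 + 6*x * math.log(x)) / (6 * (1 - x)**4)

def G_n(x):
    return (2 - 9*x + 18*x**2 - 11*x**3 + 6*x**3 * math.log(x)) / (6 * (1 - x)**4)

def G_c(x):
    return (16 - 45*x + 36*x**2 - 7*x**3 + 6*(2 - 3*x) * math.log(x)) / (6 * (1 - x)**4)

if __name__ == "__main__":
    for x in (0.1, 0.5, 2.0, 10.0):
        print(x, F_n(x), F_c(x), G_n(x), G_c(x))
```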
In the following we collect the Passarino-Veltman loop functions. $$B_{1}(m_{1},m_2)=-\frac{1}{2}-\frac{m_1^{4}-m_{2}^{4}+2m_{1}^4\text{ln}\frac{m_{2}^2}{m_{1}^{2}}}{4(m_{1}^2-m_{2}^2)^2}
+\frac{1}{2}\text{ln}\frac{m_{2}^{2}}{\mu^{2}}
\label{bfunction}$$
$$C_{0}(m_{1},m_{2},m_{3})=\frac{m_{2}^{2}(m_{1}^2-m_{3}^2)\text{ln}\frac{m_{2}^2}{m_{1}^2}-(m_{1}^{2}-m_{2}^{2})m_{3}^2
\text{ln}\frac{m_{3}^{2}}{m_{1}^{2}}}{(m_{1}^2-m_{2}^{2})(m_{1}^{2}-m_{3}^{2})(m_{2}^{2}-m_{3}^{2})}$$
$$\begin{aligned}
C_{24}(m_{1},m_{2},m_{3})&=\frac{1}{8(m_{1}^2-m_{2}^{2})(m_{1}^{2}-m_{3}^{2})(m_{2}^{2}-m_{3}^{2})}\left[-2(m_{1}^{2}+m_{2}^{2})
m_{3}^{4}\,\text{ln}\frac{m_{3}^{2}}{m_{1}^{2}}
-(m_{3}^2-m_{1}^{2})\right.\notag\\
&\left.\left(2m_{2}^{4}\,\text{ln}\frac{m_{2}^{2}}{m_{1}^{2}}
+(m_{1}^{2}-m_{2}^{2})(m_{2}^{2}-m_{3}^{2})
\left(2\,\text{ln}\frac{m_{1}^2}{\mu^{2}}-3\right)\right)\right]\end{aligned}$$
$$\begin{aligned}
\tilde{D}_{0}(m_1,m_2,m_3,m_4)&=&\frac{m_{2}^{4}\,\text{ln}\frac{m_{2}^{2}}{m_{1}^{2}}}
{(m_{2}^{2}-m_{1}^{2})(m_{2}^{2}-m_{3}^{2})(m_{2}^{2}-m_{4}^{2})}-\frac{m_{3}^{4}\,\text{ln}\frac{m_{3}^{2}}{m_{1}^{2}}}
{(m_{3}^{2}-m_{1}^{2})(m_{3}^{2}-m_{2}^{2})(m_{3}^{2}-m_{4}^{2})}\nonumber\\
&-&\frac{m_{4}^{4}\,\text{ln}\frac{m_{4}^{2}}{m_{1}^{2}}}
{(m_{4}^{2}-m_{1}^{2})(m_{4}^{2}-m_{2}^{2})(m_{4}^{2}-m_{3}^{2})}
\label{dtilde}\end{aligned}$$
$$\begin{aligned}
D_{0}(m_1,m_2,m_3,m_4)&=&\frac{m_{2}^{2}\,\text{ln}\frac{m_{2}^{2}}{m_{1}^{2}}}
{(m_{2}^{2}-m_{1}^{2})(m_{2}^{2}-m_{3}^{2})(m_{2}^{2}-m_{4}^{2})}-\frac{m_{3}^{2}\,\text{ln}\frac{m_{3}^{2}}{m_{1}^{2}}}
{(m_{3}^{2}-m_{1}^{2})(m_{3}^{2}-m_{2}^{2})(m_{3}^{2}-m_{4}^{2})}\nonumber\\
&-&\frac{m_{4}^{2}\,\text{ln}\frac{m_{4}^{2}}{m_{1}^{2}}}
{(m_{4}^{2}-m_{1}^{2})(m_{4}^{2}-m_{2}^{2})(m_{4}^{2}-m_{3}^{2})}
\label{dzero}\end{aligned}$$
$\mu e\gamma$ vertex, $\mu e Z$ vertex and box diagrams {#muegZver}
=======================================================
$\mu e \gamma$ vertex {#muegammafeyn}
---------------------
Here we present in figure \[muegammavertices\] the Feynman diagrams of one-loop contributions of the doublet and quartet to the $\mu e \gamma$ vertex.
![$\mu e\gamma$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muegammavertices"}](initial_me_e_gamma.pdf "fig:"){width="4cm"} ![$\mu e\gamma$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muegammavertices"}](initial_me_e_gamma_1.pdf "fig:"){width="4cm"} ![$\mu e\gamma$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muegammavertices"}](initial_me_e_gamma_2.pdf "fig:"){width="4cm"}
![$\mu e\gamma$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muegammavertices"}](quartet_dipole_0.pdf "fig:"){width="4cm"} ![$\mu e\gamma$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muegammavertices"}](quartet_dipole_1.pdf "fig:"){width="4cm"} ![$\mu e\gamma$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muegammavertices"}](quartet_dipole_2.pdf "fig:"){width="4cm"}
![$\mu e\gamma$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muegammavertices"}](quartet_dipole_0_20.pdf "fig:"){width="4cm"} ![$\mu e\gamma$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muegammavertices"}](quartet_dipole_1_11.pdf "fig:"){width="4cm"} ![$\mu e\gamma$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muegammavertices"}](quartet_dipole_2_11.pdf "fig:"){width="4cm"}
![$\mu e\gamma$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muegammavertices"}](quartet_dipole_0_21.pdf "fig:"){width="4cm"} ![$\mu e\gamma$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muegammavertices"}](quartet_dipole_1_12.pdf "fig:"){width="4cm"} ![$\mu e\gamma$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muegammavertices"}](quartet_dipole_2_12.pdf "fig:"){width="4cm"}
$\mu e Z$ vertex {#muezfeyn}
----------------
We present in figure \[muezvertices\] the Feynman diagrams of one-loop contributions of the doublet and the quartet to the $\mu e Z$ vertex.
{width="4cm"} {width="4cm"} {width="4cm"}
{width="4cm"} {width="4cm"} {width="4cm"}
{width="4cm"} {width="4cm"} {width="4cm"} {width="4cm"}
![$\mu e Z$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muezvertices"}](quartet_z_0_22.pdf "fig:"){width="4cm"} ![$\mu e Z$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muezvertices"}](quartet_z_0_21.pdf "fig:"){width="4cm"} ![$\mu e Z$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muezvertices"}](quartet_z_1_12.pdf "fig:"){width="4cm"} ![$\mu e Z$ vertex and the self energy diagrams of the external fermions for the doublet (first row) and the quartet cases (second to fourth rows).[]{data-label="muezvertices"}](quartet_z_2_12.pdf "fig:"){width="4cm"}
Box diagrams {#boxfeyn}
------------
The box diagrams for the doublet and the quartet cases are given in figure \[boxfigs\],
![Box diagrams for the doublet (first row) and the quartet (second and third rows).[]{data-label="boxfigs"}](doublet_box.pdf "fig:"){width="5cm"} ![Box diagrams for the doublet (first row) and the quartet (second and third rows).[]{data-label="boxfigs"}](doublet_box_1.pdf "fig:"){width="5cm"}
![Box diagrams for the doublet (first row) and the quartet (second and third rows).[]{data-label="boxfigs"}](quartet_0_box.pdf "fig:"){width="5cm"} ![Box diagrams for the doublet (first row) and the quartet (second and third rows).[]{data-label="boxfigs"}](quartet_box_1.pdf "fig:"){width="5cm"}
![Box diagrams for the doublet (first row) and the quartet (second and third rows).[]{data-label="boxfigs"}](quartet_1_box.pdf "fig:"){width="5cm"} ![Box diagrams for the doublet (first row) and the quartet (second and third rows).[]{data-label="boxfigs"}](quartet_2_box.pdf "fig:"){width="5cm"}
[99]{}
E. Ma, Phys. Rev. D [**73**]{}, 077301 (2006) \[hep-ph/0601225\]. N. G. Deshpande and E. Ma, Phys. Rev. D [**18**]{}, 2574 (1978). L. Lopez Honorez, E. Nezri, J. F. Oliver and M. H. G. Tytgat, JCAP [**0702**]{}, 028 (2007) \[hep-ph/0612275\]. E. M. Dolle and S. Su, Phys. Rev. D [**80**]{}, 055012 (2009) \[arXiv:0906.1609 \[hep-ph\]\]. L. Lopez Honorez and C. E. Yaguna, JCAP [**1101**]{}, 002 (2011) \[arXiv:1011.1411 \[hep-ph\]\]. P. Agrawal, E. M. Dolle and C. A. Krenke, Phys. Rev. D [**79**]{}, 015015 (2009) \[arXiv:0811.1798 \[hep-ph\]\]. S. Andreas, M. H. G. Tytgat and Q. Swillens, JCAP [**0904**]{}, 004 (2009) \[arXiv:0901.1750 \[hep-ph\]\]. E. Nezri, M. H. G. Tytgat and G. Vertongen, JCAP [**0904**]{}, 014 (2009) \[arXiv:0901.2556 \[hep-ph\]\]. Q. -H. Cao, E. Ma and G. Rajasekaran, Phys. Rev. D [**76**]{}, 095011 (2007) \[arXiv:0708.2939 \[hep-ph\]\]. H. Martinez, A. Melfo, F. Nesti, G. Senjanović, Phys. Rev. Lett. [**106**]{}, 191802 (2011). \[arXiv:1101.3796 \[hep-ph\]\].
A. Melfo, M. Nemevšek, F. Nesti, G. Senjanović, Y. Zhang, Phys. Rev. [**D84**]{} (2011) 034009. \[arXiv:1105.4611 \[hep-ph\]\].
T. A. Chowdhury, M. Nemevšek, G. Senjanović and Y. Zhang, JCAP [**1202**]{}, 029 (2012) \[arXiv:1110.5334 \[hep-ph\]\]. D. Borah and J. M. Cline, Phys. Rev. D [**86**]{}, 055001 (2012) \[arXiv:1204.4722 \[hep-ph\]\]. G. Gil, P. Chankowski and M. Krawczyk, Phys. Lett. B [**717**]{}, 396 (2012) \[arXiv:1207.0084 \[hep-ph\]\]. J. M. Cline and K. Kainulainen, Phys. Rev. D [**87**]{}, 071701 (2013) \[arXiv:1302.2614 \[hep-ph\]\]. A. Ahriche, G. Faisel, S. Y. Ho, S. Nasri and J. Tandean, Phys. Rev. D [**92**]{} (2015) 3, 035020 \[arXiv:1501.06605 \[hep-ph\]\]. E. Lundstrom, M. Gustafsson and J. Edsjo, Phys. Rev. D [**79**]{}, 035013 (2009) \[arXiv:0810.3924 \[hep-ph\]\]. E. Dolle, X. Miao, S. Su and B. Thomas, Phys. Rev. D [**81**]{}, 035003 (2010) \[arXiv:0909.3094 \[hep-ph\]\]. M. Gustafsson, S. Rydbeck, L. Lopez-Honorez and E. Lundstrom, Phys. Rev. D [**86**]{}, 075019 (2012) \[arXiv:1206.6316 \[hep-ph\]\]. M. Aoki, S. Kanemura and H. Yokoya, Phys. Lett. B [**725**]{}, 302 (2013) \[arXiv:1303.6191 \[hep-ph\]\]. G. Belanger, B. Dumont, A. Goudelis, B. Herrmann, S. Kraml and D. Sengupta, arXiv:1503.07367 \[hep-ph\].
S. S. AbdusSalam and T. A. Chowdhury, JCAP [**1405**]{}, 026 (2014) \[arXiv:1310.8152 \[hep-ph\]\].
J. Kubo, E. Ma and D. Suematsu, Phys. Lett. B [**642**]{} (2006) 18 \[hep-ph/0604114\]. D. Aristizabal Sierra, J. Kubo, D. Restrepo, D. Suematsu and O. Zapata, Phys. Rev. D [**79**]{} (2009) 013011 \[arXiv:0808.3340 \[hep-ph\]\]. D. Suematsu, T. Toma and T. Yoshida, Phys. Rev. D [**79**]{} (2009) 093004 \[arXiv:0903.0287 \[hep-ph\]\]. A. Adulpravitchai, M. Lindner and A. Merle, Phys. Rev. D [**80**]{} (2009) 055031 \[arXiv:0907.2147 \[hep-ph\]\]. T. Toma and A. Vicente, JHEP [**1401**]{} (2014) 160 \[arXiv:1312.2840, arXiv:1312.2840 \[hep-ph\]\]. A. Vicente and C. E. Yaguna, JHEP [**1502**]{} (2015) 144 \[arXiv:1412.2545 \[hep-ph\]\]. E. Ma and D. Suematsu, Mod. Phys. Lett. A [**24**]{} (2009) 583 \[arXiv:0809.0942 \[hep-ph\]\]. S. S. C. Law and K. L. McDonald, JHEP [**1309**]{} (2013) 092 \[arXiv:1305.6467 \[hep-ph\]\]. B. Ren, K. Tsumura and X. G. He, Phys. Rev. D [**84**]{} (2011) 073004 \[arXiv:1107.5879 \[hep-ph\]\].
A. Ahriche, K. L. McDonald, S. Nasri and T. Toma, Phys. Lett. B [**746**]{} (2015) 430 \[arXiv:1504.05755 \[hep-ph\]\].
J. Adam [*et al.*]{} \[MEG Collaboration\], Phys. Rev. Lett. [**107**]{} (2011) 171801 \[arXiv:1107.5547 \[hep-ex\]\]. J. Adam [*et al.*]{} \[MEG Collaboration\], Phys. Rev. Lett. [**110**]{} (2013) 201801 \[arXiv:1303.0754 \[hep-ex\]\]. A. M. Baldini, F. Cei, C. Cerri, S. Dussoni, L. Galli, M. Grassi, D. Nicolo and F. Raffaelli [*et al.*]{}, arXiv:1301.7225 \[physics.ins-det\]. U. Bellgardt [*et al.*]{} \[SINDRUM Collaboration\], Nucl. Phys. B [**299**]{} (1988) 1. A. Blondel, A. Bravar, M. Pohl, S. Bachmann, N. Berger, M. Kiehn, A. Schoning and D. Wiedner [*et al.*]{}, arXiv:1301.6113 \[physics.ins-det\]. W. H. Bertl [*et al.*]{} \[SINDRUM II Collaboration\], Eur. Phys. J. C [**47**]{} (2006) 337. C. Dohmen [*et al.*]{} \[SINDRUM II Collaboration\], Phys. Lett. B [**317**]{} (1993) 631. D. Glenzinski \[Mu2e Collaboration\], AIP Conf. Proc. [**1222**]{} (2010) 383. L. Bartoszek [*et al.*]{} \[Mu2e Collaboration\], arXiv:1501.05241 \[physics.ins-det\]. H. Natori \[DeeMe Collaboration\], Nucl. Phys. Proc. Suppl. [**248-250**]{} (2014) 52. Y. Kuno \[COMET Collaboration\], PTEP [**2013**]{} (2013) 022C01. Y. Kuno, Nucl. Phys. Proc. Suppl. [**149**]{} (2005) 376. R. J. Barlow, Nucl. Phys. Proc. Suppl. [**218**]{} (2011) 44. M. Cirelli, N. Fornengo and A. Strumia, Nucl. Phys. B [**753**]{} (2006) 178 \[hep-ph/0512090\]. M. Cirelli, A. Strumia and M. Tamburini, Nucl. Phys. B [**787**]{}, 152 (2007) \[arXiv:0706.4071 \[hep-ph\]\]. M. Cirelli and A. Strumia, New J. Phys. [**11**]{}, 105005 (2009) \[arXiv:0903.3381 \[hep-ph\]\]. J. A. Casas and A. Ibarra, Nucl. Phys. B [**618**]{} (2001) 171 \[hep-ph/0103065\]. J. A. Casas, J. M. Moreno, N. Rius, R. Ruiz de Austri and B. Zaldivar, JHEP [**1103**]{} (2011) 034 \[arXiv:1010.5751 \[hep-ph\]\]. J. Heeck, Phys. Rev. D [**86**]{}, 093023 (2012) \[arXiv:1207.5521 \[hep-ph\]\].
T. P. Cheng and L. F. Li, Phys. Rev. Lett. [**38**]{} (1977) 381. T. P. Cheng and L. F. Li, Phys. Rev. D [**16**]{} (1977) 1425. T. P. Cheng and L. F. Li, Phys. Rev. Lett. [**45**]{} (1980) 1908. E. Ma and A. Pramudita, Phys. Rev. D [**24**]{} (1981) 1410. C. S. Lim and T. Inami, Prog. Theor. Phys. [**67**]{} (1982) 1569. A. Ilakovac and A. Pilaftsis, Nucl. Phys. B [**437**]{} (1995) 491 \[hep-ph/9403398\]. A. Blum and A. Merle, Phys. Rev. D [**77**]{} (2008) 076005 \[arXiv:0709.3294 \[hep-ph\]\]. J. Hisano, T. Moroi, K. Tobe and M. Yamaguchi, Phys. Rev. D [**53**]{} (1996) 2442 \[hep-ph/9510309\]. E. Arganda and M. J. Herrero, Phys. Rev. D [**73**]{} (2006) 055003 \[hep-ph/0510405\]. M. E. Krauss, W. Porod, F. Staub, A. Abada, A. Vicente and C. Weiland, Phys. Rev. D [**90**]{}, no. 1, 013008 (2014) \[arXiv:1312.5318 \[hep-ph\]\]. A. Abada, M. E. Krauss, W. Porod, F. Staub, A. Vicente and C. Weiland, JHEP [**1411**]{} (2014) 048 \[arXiv:1408.0138 \[hep-ph\]\]. E. Arganda and M. J. Herrero, arXiv:1403.6161 \[hep-ph\]. R. Kitano, M. Koike and Y. Okada, Phys. Rev. D [**66**]{} (2002) 096002 \[Phys. Rev. D [**76**]{} (2007) 059902\] \[hep-ph/0203110\]. E. Arganda, M. J. Herrero and A. M. Teixeira, JHEP [**0710**]{} (2007) 104 \[arXiv:0707.2955 \[hep-ph\]\]. A. Crivellin, M. Hoferichter and M. Procura, Phys. Rev. D [**89**]{} (2014) 093024 \[arXiv:1404.7134 \[hep-ph\]\]. J. Beringer [*et al.*]{} \[Particle Data Group Collaboration\], Phys. Rev. D [**86**]{}, 010001 (2012). G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Rev. D [**88**]{} (2013) 11, 112006 \[arXiv:1310.3675 \[hep-ex\]\]. M. Cirelli, F. Sala and M. Taoso, JHEP [**1410**]{} (2014) 033 \[JHEP [**1501**]{} (2015) 041\] \[arXiv:1407.7058 \[hep-ph\]\]. P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], Astron. Astrophys. [**571**]{}, A16 (2014) \[arXiv:1303.5076 \[astro-ph.CO\]\]. T. Hambye, F.-S. Ling, L. Lopez Honorez and J. Rocher, JHEP [**0907**]{} (2009) 090 \[JHEP [**1005**]{} (2010) 066\] \[arXiv:0903.4010 \[hep-ph\]\]. A. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks, Comput. Phys. Commun. [**185**]{} (2014) 2250 \[arXiv:1310.1921 \[hep-ph\]\]. G. Belanger, F. Boudjema, A. Pukhov and A. Semenov, Comput. Phys. Commun. [**185**]{} (2014) 960 \[arXiv:1305.0237 \[hep-ph\]\]. D. S. Akerib [*et al.*]{} \[LUX Collaboration\], Phys. Rev. Lett. [**112**]{} (2014) 091303 \[arXiv:1310.8214 \[astro-ph.CO\]\]. A. Sommerfeld, Ann. Phys. [**11**]{}, 257 (1931)
J. Hisano, S. Matsumoto and M. M. Nojiri, Phys. Rev. Lett. [**92**]{}, 031303 (2004) \[hep-ph/0307216\]. J. Hisano, S. Matsumoto, M. M. Nojiri and O. Saito, Phys. Rev. D [**71**]{}, 063528 (2005) \[hep-ph/0412403\]. J. Hisano, S. Matsumoto, O. Saito and M. Senami, Phys. Rev. D [**73**]{}, 055004 (2006) \[hep-ph/0511118\]. N. Arkani-Hamed, D. P. Finkbeiner, T. R. Slatyer and N. Weiner, Phys. Rev. D [**79**]{} (2009) 015014 \[arXiv:0810.0713 \[hep-ph\]\]. T. R. Slatyer, JCAP [**1002**]{} (2010) 028 \[arXiv:0910.5713 \[hep-ph\]\]. J. Fan and M. Reece, JHEP [**1310**]{}, 124 (2013) \[arXiv:1307.4400 \[hep-ph\]\]. T. Cohen, M. Lisanti, A. Pierce and T. R. Slatyer, JCAP [**1310**]{}, 061 (2013) \[arXiv:1307.4082\]. M. Cirelli, T. Hambye, P. Panci, F. Sala and M. Taoso, JCAP [**1510**]{}, no. 10, 026 (2015) \[arXiv:1507.05519 \[hep-ph\]\]. C. Garcia-Cely, A. Ibarra, A. S. Lamperstorfer and M. H. G. Tytgat, JCAP [**1510**]{}, no. 10, 058 (2015) \[arXiv:1507.05536 \[hep-ph\]\]. M. Aoki, T. Toma and A. Vicente, JCAP [**1509**]{}, 063 (2015) \[arXiv:1507.01591 \[hep-ph\]\]. M. Quiros, hep-ph/9901312. D. E. Morrissey and M. J. Ramsey-Musolf, New J. Phys. [**14**]{}, 125003 (2012) \[arXiv:1206.2942 \[hep-ph\]\].
L. Di Luzio, R. Grober, J. F. Kamenik and M. Nardecchia, arXiv:1504.00359 \[hep-ph\].
Y. Hamada, K. Kawana and K. Tsumura, arXiv:1505.01721 \[hep-ph\]. T. A. Chowdhury and S. Nasri, *[in preparation.]{}*
[^1]: [@Arganda:2005ji] contained a mistake in the calculation of Z-penguin diagram which was pointed out in [@Krauss:2013gya]. Subsequently, correct results were presented in [@Abada:2014kba] and [@Arganda:2014lya]. Moreover, $C_{00}$ of [@Abada:2014kba] and $C_{24}$ of [@Arganda:2014lya] only differ by an overall minus sign.
[^2]: without considering the Sommerfeld enhancement
**]{}]{} \#1 \#1 \#1 \#1[fig. \[\#1\]]{} \#1 \#2 \#3 [[*Phys. Lett.*]{} [**\#1**]{} (\#2) \#3]{} \#1 \#2 \#3 [[*Nucl. Phys.*]{} [**\#1**]{} (\#2) \#3]{} \#1 \#2 \#3 [[*Z. Phys.*]{} [**\#1**]{} (\#2) \#3]{} \#1 \#2 \#3 [[*Phys. Rev.*]{} [**\#1**]{} (\#2) \#3]{} \#1 \#2 \#3 [[*Phys. Rep.*]{} [**\#1**]{} (\#2) \#3]{} \#1 \#2 \#3 [[*Phys. Rev. Lett.*]{} [**\#1**]{} (\#2) \#3]{} \#1 \#2 \#3 [[*Mod. Phys. Lett.*]{} [**\#1**]{} (\#2) \#3]{} \#1 \#2 \#3 [[*Rev. Mod. Phys.*]{} [**\#1**]{} (\#2) \#3]{} \#1 \#2 \#3[[*Comp. Phys. Commun. *]{}[**\#1**]{} (\#2) \#3]{} \#1 \#2 \#3[[*J. Phys. *]{}[**\#1**]{} (\#2) \#3]{} \#1 \#2 \#3 [[*J. High Energy Phys.*]{} [**\#1**]{} (\#2) \#3]{} \#1 [[hep-ph/\#1]{}]{}
RAL-TR-98-003\
LU–TP 98–7\
hep-ph/9804296\
April 1998
[**New and Old Jet Clustering Algorithms\
for Electron-Positron Events**]{}\
[Stefano Moretti]{}\
[*Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX, UK*]{}\
[E-mail: [[email protected]]{}]{}\
[Leif Lönnblad, Torbjörn Sjöstrand]{}\
[*Department of Theoretical Physics, Lund University, Lund, Sweden*]{}\
[E-mail: [[email protected]]{}, [[email protected]]{}]{}\
[**Abstract**]{}\
Over the years, many jet clustering algorithms have been proposed for the analysis of hadronic final states in $e^+e^-$ annihilations. These have somewhat different emphasis and are therefore more or less suited for various applications. We here review some of the most used and compare them from a theoretical and experimental point of view.
Introduction {#sec_intro}
============
Clustering algorithms have come to be an indispensable tool in the study of multi-hadronic events. They take the large number of particles produced in high-energy scatterings and cluster them into a small number of ‘jets’. Such a simplified characterization of the event should help focus on the main properties of the underlying dynamics. In particular, the reconstructed jets should reflect the partonic picture, and thus allow a separation of perturbative and non-perturbative QCD physics aspects. Of course, such a separation can never be perfect, since there will always be smearing effects that cannot be compensated, and since there is not even a well-defined transition from perturbative to non-perturbative QCD.
Jet finders can be applied to a variety of tasks. The number of well-separated jets found in an event sample allows a determination of an $\as$ value. The distribution in angles between jets can be used as test of the fundamental properties of QCD, such as the gluon spin and the QCD color factors. The flow of particles around jet directions probes soft physics, both perturbative and non-perturbative. The clustering of jets may help to identify massive particles, such as $W^\pm$ and $t$, or to search for new ones.
The essential ingredients of jet clustering algorithms are basically the same independently of the phenomenological applications. Nonetheless, the kinematics and dynamics of, e.g., $e^+e^-$, $ep$ and $p\overline{p}$ collisions are sufficiently different that computational methods have to be modified accordingly (see, e.g., [@ellis; @ktclus]). We will in this paper concentrate on algorithms for electron-positron annihilations, where there are no spectator jets and thus schemes can be made especially simple.
Over the years, several algorithms have been proposed for the study of $e^+e^-$ events. Recently, advances in the understanding of soft perturbative physics led to the introduction of further ones [@camjet]. This has made it even more difficult for a user to understand differences and to know which algorithm to use where. The purpose of the current paper is to review several of the existing jet finders and compare them in various ways. Neither the choice of algorithms nor the selection of comparisons is exhaustive, but it should still help give some useful hints. We also introduce a few new hybrid algorithms to better understand the results. By using several event generators, we cross-check our findings. In a sense, our study is an update of the corresponding one carried out in Ref. [@BKSS], in view of the new algorithms that have been proposed since then [@camjet; @diclus] and of the advent of LEP2.
The conclusions might seem disappointing at first glance: while some algorithms fare markedly less well than the better ones, there is not [*one*]{} single best choice that sticks out in [*all*]{} the phenomenological contexts we have studied. However, this should come as no surprise. In fact, given the varied use, there need not exist one algorithm that is optimal everywhere. Instead, we will show that, depending on the tasks assigned to the algorithm and on the physics domain where it is applied, it is often possible to clearly identify the most suitable one to use.
In the following Section we review the historical evolution of clustering algorithms and describe some of the more familiar ones. Sections \[sec\_pertcomparison\] and \[sec\_nonpertcomparison\] contain comparisons between algorithms, for next-to-leading-order (NLO) and resummed perturbative QCD results, jet rates, jet energy and angle reconstruction, $W^\pm$ mass reconstruction, and so on. Finally Section \[sec\_summary\] contains a summary and outlook.
Clustering algorithms {#sec_algorithms}
=====================
The first studies of jet structure in $e^+e^-$ annihilations were undertaken to establish the spin $1/2$ nature of quarks [@MarkI]. It was then only necessary to define a common event axis for two back-to-back jets, and for this purpose [*event measures*]{} such as thrust [@thrust] or sphericity [@sphericity] are quite sufficient.
With the search for and study of gluon jets at PETRA it became necessary to define and analyze three-jet structures. It is possible to generalize thrust to triplicity [@triplicity], in which the longitudinal momentum sum is maximized with respect to three jet axes. The maximization procedure can be rather time-consuming, however, in view of the large number of possibilities to subdivide particles into three groups. Since three jets span a plane (neglecting initial-state QED radiation), special tricks are possible: if all particles are projected onto an event plane, they can be ordered in angle such that only contiguous ranges of particles need be considered as candidate jets. The tri-jettiness measure [@trijettiness] uses the sphericity tensor to define the event plane, and thereafter finds the subdivision into three jets by minimizing the sum of squared transverse momenta, where each $p_{\perp}$ (or, equivalently, $k_{\perp}$) is defined relative to the jet axis the particle is assigned to.
Such special-purpose algorithms have the disadvantage that, first, one procedure is needed to determine the number of jets in an event and, thereafter, another to find the jet axes. The algorithms may also be less easily generalizable to an arbitrary number of jets, or very time-consuming. The task is not hopeless: with some tricks and approximations, thrust/triplicity can be extended to an arbitrary number of jet axes [@Bab; @God; @Bac] and tri-jettiness to four jets [@Wu]. However, alternatives were sought, and more generic jet algorithms started to be formulated. Several ideas were proposed and explored around 1980 [@Bab; @Bac; @Dor; @Dau; @Lan]. Most were based on a binary clustering, wherein the number of [*clusters*]{}[^1] is reduced one at a time by combining the two most (in some sense) nearby ones. The joining procedure is stopped by testing against some criterion, and the final clusters are called jets. An alternative technique, top-down rather than bottom-up, is that of the minimum spanning tree, where a complete set of links are found and then gradually removed to subdivide the event suitably [@Dor].
The starting configuration for the binary joining normally had each final-state particle as a separate cluster, but some algorithms contained a ‘preclustering’ step [@Bac; @Dau]. Here, very nearby particles are initially merged according to some simplified scheme, in order to speed up the procedures or to make them less sensitive to soft-particle production. The possibility of ‘reassignment’ between clusters was also used to improve on the simple binary joining recipe [@Bab; @God; @Bac]. Normally all particles were assigned to some jet but, in the spirit of the Sterman–Weinberg jet definition [@StermanWeinberg], a few algorithms allowed some fraction of the total energy to be found outside the jet cones [@Dau].
The distance measure between clusters always contained an angular dependence, explicit or implicit, while the energy/absolute momentum entered in different ways or not at all. As one example, of some interest to compare with later measures, we note the use of thrust/triplicity generalized to $n$-jet axes [@Bab; @God; @Bac]: $$T_n = \frac{1}{E_{\mathrm{tot}}} \max \sum_{i=1}^n |\bfp_i| =
\frac{1}{E_{\mathrm{tot}}} \max \sum \sqrt{E_i^2 - m_i^2} \approx
1 - \frac{1}{2E_{\mathrm{tot}}} \min \sum \frac{m_i^2}{E_i} ~,$$ where each $\bfp_i$ is obtained as the vector sum of the momenta of the particles assigned to jet $i$ (of energy $E_i$ and mass $m_i$). Thus a maximization of $T_n$ is almost the same as a minimization of $\sum m_i^2$, except that more energetic jets also can have a larger mass. Note that the relation $m_{\mathrm{jet}}^2 \propto
E_{\mathrm{jet}}$ is approximately respected by non-perturbative iterative jet fragmentation models [@FF; @AGIS].
The algorithms thus were rather sophisticated. Seen from a modern perspective, the main shortcoming is that in those days they could only be tested against generators producing a fixed number of partons — two, three or, at most, four — based on a leading-order (LO) matrix-element (ME) description. Therefore a ‘correct’ number of jets existed, and criteria were devised to find this number. Those criteria tended to be rather complex, at times even contrived, and thus often over-shadowed the simplicity of the basic algorithm. However, there is probably no fundamental reason why several of these algorithms could not have been used successfully even today, at least for some tasks.
The oldest algorithm still in use is the LUCLUS one [@luclus], which again is based on a binary joining scheme, with additional preclustering and reassignment steps. There were two advances. One was the choice of transverse momentum as distance measure, which is better adapted to the conventional picture of non-perturbative jet fragmentation and thus allows a cleaner separation of perturbative and non-perturbative aspects of the QCD dynamics. The other was that no attempt was made to define a correct number of jets, but rather a parameter was left free, with the explicit purpose to correspond to different ‘jet resolution powers’.
The JADE algorithm [@jade] offered a further simplification, in that only the binary joining was retained, without preclustering or reassignment. The choice of distance measure was based on invariant mass, corresponding to what was available in most ${\cal O}(\as^2)$ calculations of the time [@secondorderme]. The algorithm was therefore optimal for $\as$ determination studies, and came to set the standard. By contrast, it performs less well in the handling of event-by-event hadronization corrections, i.e., in the matching of jet directions and energies between the parton and hadron level [@BKSS].
Advances in the understanding of the perturbative expansion showed that soft-gluon emission does not exponentiate when ordered in invariant mass, while it does if transverse momentum is used instead [@exponentiation; @durham]. This gave birth to the Durham algorithm [@durham]. Alternatives such as the Geneva one were also proposed [@BKSS].
Recently, further advances in the understanding of soft-gluon emission have led to the introduction of new algorithms based on the Durham scheme, the angular-ordered Durham and the Cambridge ones [@camjet], which modify the clustering procedure of the former in order to remedy some of its shortcomings. Given the huge increase in the computing power of modern computers, one can now reverse the historical trend towards simplification without compromising the efficiency of the algorithm, e.g., indulging in procedures more sophisticated than the simple binary joining.
The DICLUS (also called ARCLUS) algorithm [@diclus] does not really fit into the above scheme, in that it is not based on the binary joining of two clusters to one but on the joining of three clusters to two. This is well matched to the dipole picture of cascade evolution. Like in many other algorithms, the distance measure is based on transverse momentum.
The connection between perturbative QCD cascades and jet clustering algorithms is not limited to the DICLUS case. In general one may describe clustering algorithms as an attempt to reconstruct a QCD cascade backwards in time. In fact, when we in the following argue that one clustering should be performed *before* another it is based on experience from how to formulate QCD cascades where color coherence is correctly taken into account. Such QCD cascades have been the basis of the enormous success modern event generators have had in describing the detailed structure of annihilation events.
(Fig. \[fig:timeordering\]: sketch of the phase space available for gluon emission in an $\ee$ annihilation event, in the plane of rapidity $y$ versus logarithmic transverse momentum $\kappa$, with the direction of the ordering (‘time’) indicated for the three cascade programs.)
It should be noted, however, that the notion of time ordering in QCD cascades is not unambiguous. Looking at the three most successful coherent cascade implementations today, they all have different ordering of emissions. HERWIG orders emissions in angle while PYTHIA orders in invariant mass with an additional angular constraint to ensure coherence. Finally ARIADNE orders the emissions in transverse momentum. In fig. \[fig:timeordering\] we show the approximate phase space available for gluon emission in an $\ee$ annihilation event, in the plane of logarithm of transverse momentum ($\kappa$) and rapidity ($y$). The notion of time is indicated for the three programs. In the HERWIG and PYTHIA cases, where emissions from the $q$ and $\bar{q}$ are treated separately, there is one direction for each, while in the ARIADNE case there is only one direction for the ordering of emissions from the $q\bar{q}$ *dipole*.
These three descriptions, although very different, are consistent with perturbative QCD and it has not been possible to say that one is better than another, although some experimental observables have been suggested [@PhotonOrdering]. Common for all programs is that they treat gluon emissions in a coherent way, and it may be easiest to look at this in terms of angular ordering. In the following we present three example diagrams of $\ee\rightarrow q_1+\bar{q}_2+g_3+g_4+...$. In all cases we have drawn them as one Feynman diagram, but in general all multi-gluon states are of course coherent sums of many diagrams. It is clear that a good clustering algorithm in some sense should cluster an event according to the dominating diagram for each given partonic state.
In the following Subsections we give a more detailed description of several of the currently used algorithms. The order is not purely historical, but is rather intended to allow a gradual introduction of new concepts.
The JADE algorithm {#subsec_JADE}
---------------------------------
The JADE algorithm [@jade] may be viewed as the archetype of a binary joining scheme.
In this class of methods, a distance measure $d_{ij}$ between two clusters $i$ and $j$ is defined as a function of their respective four-momenta, $p_{i,j} = (E_{i,j}, \bfp_{i,j})$. Since the measure is normally not Lorentz invariant, it is assumed that the analysis is performed in the hadronic rest frame of the event. To the extent this frame is not known, the lab frame is used instead, and the effects of initial-state QED radiation should then be included as a correction to the final physics results.
The algorithm starts from a list of particles, that is considered as the initial set of clusters. The two clusters with the smallest relative distance are found and then merged into one, provided their distance is below the desired minimum separation $d_{\mathrm{cut}}$. The four-momentum of the new cluster $k$ is found from its constituents $i$ and $j$ by simple addition, e.g., $p_k = p_i + p_j$. The joining procedure is repeated, until all pairs of clusters have a separation above $d_{\mathrm{cut}}$. This final set of clusters is called jets.
In the JADE algorithm the distance measure is given by $$d_{ij}^2 = 2 E_i E_j (1 - \cos\theta_{ij}),$$ where $\theta_{ij}$ is the opening angle between the momentum vectors of the two clusters. As written here, $d_{ij}$ has dimensions of mass. The scaled expression $$\label{yJ}
y_{ij} = \frac{d_{ij}^2}{E_{\mathrm{vis}}^2} =
\frac{2 E_i E_j (1 - \cos\theta_{ij})}{E_{\mathrm{vis}}^2}$$ is more often quoted. The visible energy $E_{\mathrm{vis}}$ would agree with the centre-of-mass (CM) energy for a perfect detector but, to the extent that some particles are lost or mismeasured, normalization to $E_{\mathrm{vis}}$ gives some cancellation of errors between numerator and denominator. In the following we will usually give the $y$-expression, but note that a translation between the two alternative forms is always possible. This also applies to the cut-off scale $y_{\mathrm{cut}} = d_{\mathrm{cut}}^2/E_{\mathrm{vis}}^2$.
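To make the joining procedure of the previous paragraphs concrete, a minimal Python sketch with the scaled JADE measure is given below. It is purely illustrative and not taken from any of the programs discussed here; the function names and the toy event are our own, and obvious optimizations and edge cases (e.g., zero-momentum clusters) are ignored.

```python
import math

def three_norm(p):
    """Length of the three-momentum part of a four-momentum (E, px, py, pz)."""
    return math.sqrt(p[1] ** 2 + p[2] ** 2 + p[3] ** 2)

def y_jade(p_i, p_j, e_vis):
    """Scaled JADE distance 2 E_i E_j (1 - cos theta_ij) / E_vis^2."""
    cos_th = (p_i[1] * p_j[1] + p_i[2] * p_j[2] + p_i[3] * p_j[3]) / (
        three_norm(p_i) * three_norm(p_j))
    return 2.0 * p_i[0] * p_j[0] * (1.0 - cos_th) / e_vis ** 2

def cluster_binary(particles, y_cut, measure=y_jade):
    """Generic binary joining: repeatedly merge (by four-momentum addition) the
    pair of clusters with the smallest distance, until all pairwise distances
    exceed y_cut; the surviving clusters are the jets."""
    clusters = [list(p) for p in particles]
    e_vis = sum(p[0] for p in clusters)
    while len(clusters) > 1:
        y_min, i, j = min((measure(clusters[a], clusters[b], e_vis), a, b)
                          for a in range(len(clusters))
                          for b in range(a + 1, len(clusters)))
        if y_min >= y_cut:
            break
        merged = [x + z for x, z in zip(clusters[i], clusters[j])]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters

# toy event: two hard, roughly back-to-back particles plus one soft one
event = [(45.0, 0.0, 0.0, 45.0), (40.0, 3.0, 0.0, -39.9), (5.0, -2.9, 0.0, -4.0)]
print(len(cluster_binary(event, y_cut=0.02)), "jets")  # 2 jets at this y_cut
```

The same skeleton is reused for the other binary schemes below, with only the distance function swapped.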
Whether the dimensional or scaled dimensionless form is preferable is normally a matter of application and physics point of view. The $\as$ evolution with energy, and all other comparisons of jet rates at different energies, are best done in terms of scaled variables $y$. The transition between perturbative and non-perturbative physics, on the other hand, is expected to occur at some fixed dimensional scale of the order of 1 GeV. Such a hypothesis is supported, e.g., by the observable scaling violations of fragmentation functions in jets defined by a fixed $\ycut$. Therefore, we expect the ‘true’ partonic multiplicity of an event to increase with energy, tracing the increase of the hadronic multiplicity, while the jet rate above a given $y$ drops, tracing the running of $\as$.
The $d_{ij}$ measure above is closely related to the invariant mass $$m_{ij}^2 = (p_i + p_j)^2 = m_i^2 + m_j^2 + 2 (E_i E_j -
|\bfp_i| |\bfp_j| \cos\theta_{ij}),
\label{Jade_E_dist}$$ and the use of the correct mass as distance measure defines the so-called E variant of the JADE scheme. Given its Lorentz invariant character, mass would have been a logical choice, had it not suffered from instability problems. The reason is well understood: in general, particles tend to cluster closer in invariant mass in the region of small momenta. The clustering process therefore tends to start in the center of the event, and only subsequently spreads outwards to encompass also the fast particles. Rather than clustering slow particles around the fast ones (where the latter naïvely should best represent the jet directions), the invariant mass measure tends to cluster fast particles around the slow ones.
The $d_{ij}$ and $m_{ij}$ measures coincide when $m_i = m_j = 0$. For non-vanishing cluster masses $d_{ij}$ normally drops below $m_{ij}$, and the difference between the two measures increases with increasing net momentum of the pair. This tends to favor clustering of fast particles somewhat, and thus makes the standard JADE algorithm more stable than the one based on true invariant mass.
There would seem to be a mismatch in comparisons between fixed-order perturbation theory based on the correct invariant mass expression [@secondorderme] and experimental analyses based on the $d$ measure. However, the perturbative results are normally presented in terms of massless outgoing partons, so the $m$ and $d$ measures agree on the parton level. A definition of hadronic cluster separation as if clusters were massless therefore better matches the partonic picture, and should give smaller hadronization corrections. When performing NLO perturbative calculations, it is of course then of decisive importance to impose the same kind of clustering scheme as will be used on the hadron level.
Further variants of the JADE scheme have been introduced [@BKSS]. In the p alternative, the energy of a cluster $k$ is defined to be $E_k = |\bfp_k|$, so that the cluster is explicitly made massless, at the expense of violating energy conservation when pairing two clusters. In the E0 scheme, massless clusters are instead obtained by momentum violation, defining $\bfp_k = E_k (\bfp_i + \bfp_j) / |\bfp_i + \bfp_j|$. In this paper we stay with the standard JADE scheme, however.
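For illustration, the default four-momentum addition and the two variants just described might be coded as follows (a sketch under the definitions above; the function names are ours):

```python
import math

def combine_e(p_i, p_j):
    """E scheme: plain four-momentum addition (the cluster may acquire a mass)."""
    return [a + b for a, b in zip(p_i, p_j)]

def combine_p(p_i, p_j):
    """p scheme: add the momenta, then set E = |p| so that the cluster is
    massless (energy conservation is violated)."""
    p = [a + b for a, b in zip(p_i[1:], p_j[1:])]
    return [math.sqrt(sum(x * x for x in p))] + p

def combine_e0(p_i, p_j):
    """E0 scheme: keep E = E_i + E_j and rescale the summed momentum so that
    the cluster is massless (momentum conservation is violated)."""
    e = p_i[0] + p_j[0]
    p = [a + b for a, b in zip(p_i[1:], p_j[1:])]
    scale = e / math.sqrt(sum(x * x for x in p))
    return [e] + [scale * x for x in p]
```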
The Durham algorithm {#subsec_Durham}
-------------------------------------
The Durham algorithm [@durham] can be obtained from the JADE one by a simple replacement of the distance measure from mass to transverse momentum. In scaled variables $$\label{yD}
y_{ij} =
\frac{2 \min(E_i^2, E_j^2) (1 - \cos\theta_{ij})}{E_{\mathrm{vis}}^2},$$ i.e., with $E_i E_j \to \min(E_i^2, E_j^2)$. Some special features should be noted. Firstly, strictly speaking, the measure is transverse energy rather than transverse momentum, just like the JADE measure is based on energies. Secondly, the transverse momentum is defined asymmetrically, as the $p_{\perp}$ of the lower-energy one with respect to a reference direction given by the higher-energy one. And, thirdly, the angular dependence only agrees with that of the transverse momentum $p_{\perp} = E \sin\theta$ for small angles, where $\sin^2 \theta \approx 2(1 - \cos\theta)$. The reason for retaining the same angular dependence as in JADE is obvious enough: the correct $p_{\perp}$ would vanish for two back-to-back particles and thus allow unreasonable jet assignments. The Durham algorithm has eventually taken over the rôle of standard jet finder. There are two main reasons for preferring Durham.
Firstly, fixed-order perturbative corrections are quite sizeable for the JADE algorithm. This is particularly true for the case of the NLO ones to the three-jet rate $f_{3}(\ycut)$ (see later on, in Sect. \[subsec\_fixedorder\] for its definition) [@Yellow; @jadef3]. The importance of this aspect is evident if one considers that $f_3(\ycut)$ provides a direct measurement of $\as$. Such behaviours can be seen by noticing the large renormalization scale dependence of $f_3(\ycut)$ at NLO, indicating that higher order corrections are not yet negligible in the perturbative expansion. Since it was (and still is) unthinkable with present computational technology to attempt the evaluation of next-to-next-to-leading order (NNLO) terms, the path to be necessarily followed in order to reduce the scale dependence of $f_3(\ycut)$ was to define new clustering algorithms having smaller perturbative corrections. Secondly, the jet fractions obtained in the JADE scheme do not show the usual Sudakov exponentiation of multiple soft-gluon emission [@exponentiation], despite having an expansion of the form $\as\ln^2\ycut$ at small values of the resolution parameter.
(Fig. \[fig\_seagull\]: the ‘seagull’ configuration $\ee\rightarrow q_1\bar q_2 g_3 g_4$, with the gluon $g_3$ emitted in the $\bar q_2$ hemisphere and the gluon $g_4$ in the opposite, $q_1$, hemisphere.)
The source of such misbehaviors at both large (i.e., in fixed-order calculations) and small (i.e., in the resummation of leading logarithms) $\ycut$ values is indeed the same, namely, the large rate of soft gluons radiated in the hard scattering process and the way they are dealt with in the clustering procedure. The problem can be exemplified by referring to one of the possible configurations in which two soft gluons $g_3$ and $g_4$ can be emitted by two leading (i.e., highly energetic) back-to-back quarks $q_1$ and $\bar q_2$. Let us imagine the first gluon to be radiated in one of the two hemispheres defined by the plane transverse to the axis of the two quarks and the second one on the opposite side (i.e., the ‘seagull diagram’ of Ref. [@camjet]: see Fig. \[fig\_seagull\]). By adopting as measure $y_{ij}$ the expression given in eq. (\[yJ\]), the iterative algorithm would combine the two gluons with each other first, so that the net result is a ‘ghost jet’ in a direction along which no original parton can be found. Such behaviours end up representing a serious challenge in perturbative calculations. On the one hand, the problems encountered by fixed-order QCD in cancelling divergences are amplified by the clustering of two soft particles, so that in general one naturally expects larger higher order terms. On the other hand, such unnatural clustering induces a redistribution of the partons in the final state that spoils the exponentiation properties of large logarithms $\ln\ycut$ for $\ycut\ar0$ [@partition].
The simple modification [@yuri] given in eq. (\[yD\]) is enough to cure the two above mentioned problems. This is clear if one considers that, by adopting the Durham measure, in the seagull diagram configuration one of the soft gluons will always be combined first with the nearby high-energy quark, unless the angle that it forms with the other gluon is smaller than that with respect to the leading parton. As a consequence, the stability of the fixed-order results is greatly improved and the factorization of large leading and next-to-leading logarithms guaranteed.
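A sketch of the Durham measure, together with a numerical check of the seagull discussion above, is given below (an illustration in the same conventions as the earlier sketch; the toy momenta are ours):

```python
import math

def cos_angle(p_i, p_j):
    """Cosine of the opening angle between the three-momenta of two clusters."""
    dot = p_i[1] * p_j[1] + p_i[2] * p_j[2] + p_i[3] * p_j[3]
    return dot / (math.sqrt(p_i[1]**2 + p_i[2]**2 + p_i[3]**2)
                  * math.sqrt(p_j[1]**2 + p_j[2]**2 + p_j[3]**2))

def y_durham(p_i, p_j, e_vis):
    """Durham distance 2 min(E_i, E_j)^2 (1 - cos theta_ij) / E_vis^2."""
    return 2.0 * min(p_i[0], p_j[0]) ** 2 * (1.0 - cos_angle(p_i, p_j)) / e_vis ** 2

def y_jade(p_i, p_j, e_vis):
    """JADE distance of eq. (yJ), for comparison."""
    return 2.0 * p_i[0] * p_j[0] * (1.0 - cos_angle(p_i, p_j)) / e_vis ** 2

# seagull-like toy momenta: soft gluons g3, g4 in opposite hemispheres
q1, g3, g4 = (45.0, 0.0, 0.0, 45.0), (3.0, 2.0, 0.0, 2.2), (3.0, -2.0, 0.0, -2.2)
e_vis = 96.0
print(y_jade(g3, g4, e_vis) < y_jade(g3, q1, e_vis))      # True: gluon pair closest
print(y_durham(g3, g4, e_vis) < y_durham(g3, q1, e_vis))  # False: g3 pairs with q1
```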
The LUCLUS algorithm {#subsec_LUCLUS}
-------------------------------------
Historically, LUCLUS [@luclus] has not been used for $\as$ determinations and related QCD studies, but is instead widely used for other jet topics, such as searches for new particles in invariant mass distributions. In its main properties it is similar to the Durham algorithm, but with several differences.
Firstly, the transverse-momentum-based distance measure is $$\label{yL}
y_{ij} = \frac{2 |\bfp_i|^2 |\bfp_j|^2 (1 - \cos\theta_{ij})}{(|\bfp_i| + |\bfp_j|)^2 E_{\mathrm{vis}}^2}.$$ Geometrically, in the small-angle approximation, this can be viewed as the transverse momentum of either particle with respect to a reference direction given by the vector sum of the two momenta.
Apart from the difference between $|\bfp|$ and $E$, the step from the Durham to the LUCLUS distance measure is given by the replacement $\min(E_i, E_j) \to E_i E_j/(E_i + E_j)$. Clearly the two expressions agree when either of $i$ or $j$ is much softer than the other, so all the soft-gluon exponentiation properties of the Durham measure carry over to the LUCLUS one. In the other extreme, when $E_i = E_j$, the two $y$-expressions differ by a factor of 4 (that is, by a factor of 2 at $p_{\perp}$ level).
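The corresponding LUCLUS distance, again as an illustrative sketch in the same conventions as above:

```python
import math

def y_luclus(p_i, p_j, e_vis):
    """LUCLUS distance 2 |p_i|^2 |p_j|^2 (1 - cos theta) / ((|p_i|+|p_j|)^2 E_vis^2)."""
    dot = p_i[1] * p_j[1] + p_i[2] * p_j[2] + p_i[3] * p_j[3]
    ap_i = math.sqrt(p_i[1] ** 2 + p_i[2] ** 2 + p_i[3] ** 2)
    ap_j = math.sqrt(p_j[1] ** 2 + p_j[2] ** 2 + p_j[3] ** 2)
    return (2.0 * ap_i ** 2 * ap_j ** 2 * (1.0 - dot / (ap_i * ap_j))
            / ((ap_i + ap_j) ** 2 * e_vis ** 2))

# two massless clusters of equal momentum: the value is 1/4 of the Durham one
a, b = (10.0, 10.0, 0.0, 0.0), (10.0, 0.0, 10.0, 0.0)
print(y_luclus(a, b, 91.2))
```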
The usage of $|\bfp|$ rather than $E$ in LUCLUS is based on non-perturbative physics considerations, specifically on the properties of string fragmentation [@AGIS]. Here, primary particles are given a Gaussian transverse momentum spectrum with respect to the string direction, typically around 400 MeV, common for all particle species. Secondary decays give a final mean $p_{\perp}$ that is around 100 MeV lower for pions than for kaons or protons, but this is still smaller than the mass difference between the particles. Therefore, a jet is a set of particles with limited $p_{\perp}$ with respect to the common jet direction, and using $E_{\perp}$ only introduces unnecessary smearing. From a perturbative point of view, arguments could be raised for the use of energy (see the discussion on the Durham algorithm above). Also note that, in the string model, the $p_{\perp}$ width of a non-perturbative jet is independent of longitudinal momentum, to first approximation. This concept is preserved by the symmetric way in which LUCLUS defines $p_{\perp}$. The asymmetric $p_{\perp}$ definition of Durham is more appropriate if high-energy particles are better lined up in $p_{\perp}$ with the true jet axis than low-energy ones. This may occur when multi-partonic states are considered, see discussion below, so the matter is not quite clearcut.
The original LUCLUS routine differs from the others presented here in that it contains preclustering and reassignment steps. These options can both be switched off, individually, but the reassignment step was a part of the basic philosophy at the time the algorithm was written. The preclustering one, on the other hand, was purely intended to speed up the algorithm without affecting the final results significantly. The amount of preclustering can be varied, with much preclustering giving a faster algorithm at the expense of some residual effects of the preclustering step. Speed was an important consideration at the time the algorithm was originally formulated, but is normally no issue with modern workstations. Today users should therefore feel free to switch off this step entirely.
First consider the reassignment aspect. When two clusters are merged, some particles belonging to the new and bigger cluster may actually be closer to another cluster. A simple example is once again the seagull diagram of Fig. \[fig\_seagull\], with the quark-gluon opening angles not necessarily small. With the $p_{\perp}$ measure, it can happen (in fact, more easily than with the Durham one, see Fig. \[fig\_measure\] later on) that the two soft particles are first combined to one cluster and thereafter this cluster is merged with one of the hard particles. One of the soft particles is that way combined with the hard particle it is furthest away from. The ‘natural’ subdivision would have been with one hard and one nearby soft particle in each final cluster. That is, a procedure that is good for going from four to three clusters and from three to two clusters may be less good for the combined operation of going from four to two clusters. The problem is that simple binary joining algorithms do not allow previous assignments to be corrected in the light of new information.
Hence the reassignment: after each joining of two clusters, each particle in the event is reassigned to its nearest cluster. For particle $i$, this means that the distance $d_{ij}$ to all clusters $j$ in the event has to be evaluated and compared. After all particles have been considered, and only then, are cluster momenta recalculated to take into account any reassignments. To save time, the assignment procedure is not iterated until a stable configuration is reached. (Again, the time cost of these iterations could be acceptable today but it was not at the time the algorithm was written.) All particles are reassigned after each binary joining step, however, and not only those of the new cluster. Therefore an iteration is effectively taking place in parallel with the cluster joining. Only at the very end, when all $d_{ij} > d_{\mathrm{cut}}$, is the reassignment procedure iterated to convergence — still with the possibility to continue the cluster joining if some $d_{ij}$ should drop below $d_{\mathrm{cut}}$ due to the reassignment.
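A simplified sketch of one such reassignment pass is shown below. It is our own illustration: the special treatment of initially unassigned particles, possible empty clusters and the final iteration to convergence are all ignored.

```python
import math

def y_luclus(p_i, p_j, e_vis):
    """LUCLUS distance of eq. (yL)."""
    dot = p_i[1] * p_j[1] + p_i[2] * p_j[2] + p_i[3] * p_j[3]
    ap_i = math.sqrt(p_i[1] ** 2 + p_i[2] ** 2 + p_i[3] ** 2)
    ap_j = math.sqrt(p_j[1] ** 2 + p_j[2] ** 2 + p_j[3] ** 2)
    return (2.0 * ap_i ** 2 * ap_j ** 2 * (1.0 - dot / (ap_i * ap_j))
            / ((ap_i + ap_j) ** 2 * e_vis ** 2))

def reassign(particles, clusters, e_vis):
    """Move every particle to its nearest cluster, then rebuild the cluster
    momenta from scratch; momenta are only updated after all particles are done."""
    assignment = [min(range(len(clusters)),
                      key=lambda k: y_luclus(p, clusters[k], e_vis))
                  for p in particles]
    new_clusters = [[0.0, 0.0, 0.0, 0.0] for _ in clusters]
    for p, k in zip(particles, assignment):
        for mu in range(4):
            new_clusters[k][mu] += p[mu]
    return assignment, new_clusters

# toy usage: a soft forward particle initially attached to the backward cluster
parts = [(45.0, 0.0, 0.0, 45.0), (45.0, 0.0, 0.0, -45.0), (4.0, 1.0, 0.0, 3.8)]
wrong = [[45.0, 0.0, 0.0, 45.0], [49.0, 1.0, 0.0, -41.2]]
print(reassign(parts, wrong, 94.0)[0])  # [0, 1, 0]: the soft particle moves
```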
The LUCLUS algorithm was conceived mainly based on non-perturbative considerations. The reassignment procedure is completely deterministic, however, and can therefore also be applied to any perturbative calculation, just like the simple binary joining. The price is that analytic calculations become more difficult to survey. A reassignment cannot occur after the first binary joining of an event, though, but only after the second. It therefore does not affect leading or NLO results, but only NNLO and higher orders.
In the preclustering step the original large number of particles are put together in a smaller number of clusters. This is done as follows. The particle with the highest momentum is found, and thereafter all particles within a distance $d_{ij} < d_{\mathrm{init}}$ from it. Here it is intended that $d_{\mathrm{init}} \ll d_{\mathrm{cut}}$ for preclustering to give negligible effects. Together these very nearby particles are allowed to form a single cluster. For the remaining particles, not assigned to this cluster, the procedure is iterated, until all particles have been used up. Particles in the central momentum region, $|\bfp| < 2d_{\mathrm{init}}$ are treated separately: if their vectorial momentum sum is above $2d_{\mathrm{init}}$ they are allowed to form one cluster, otherwise they are left unassigned in the initial configuration and only appear in the first reassignment step.
The value of $d_{\mathrm{init}}$, as long as reasonably small, should have no physical importance, in that the same final cluster configuration will be found as if each particle initially is assumed to be a cluster by itself. That is, the particles clustered at this step are so nearby that they almost inevitably must enter the same jet. ‘Mistakes’ in the preclustering can however be corrected by the reassignment procedure in later steps of the iteration. Therefore reassignment may be seen as a prerequisite and guarantee for successful preclustering.
In this respect, we would like to give a word of caution, about the actual meaning of ‘reasonably small’. The value chosen for $d_{\mathrm{init}}$ should depend on the $d_{\mathrm{cut}}$-range considered in the analysis. For example, the default value of 0.25 GeV is clearly inappropriate for $\ycut=d_{\mathrm{cut}}^2/s\approx 0.0001$, as some residual effects of preclustering are then visible (see Sects. \[subsec\_tubemodel\] and \[subsec\_jetrates\] later on). A scaling, e.g., like $d_{\mathrm{init}} = d_{\mathrm{cut}}/10$ would have removed them (we have explicitly verified this in our numerical simulations). Though we recommend the mentioned scaling, should the preclustering step be retained, we have decided to keep the default value of 0.25 GeV in order to illustrate the consequences of a fixed $d_{\mathrm{init}}$ for small $\ycut$.
From a perturbative physics point of view, the $d_{\mathrm{init}}$ parameter plays a rôle very similar to that, e.g., of the $y_0$ parameter in the phase space slicing method of handling higher-order corrections to MEs (see, e.g., Sect. 4.8 of [@LEP2QCDgen]). Below $y_0$ the cancellation of real and virtual corrections is carried out analytically in an approximate treatment of phase space, while between $y_0$ and $y_{\mathrm{cut}}$ the addition of contributions is performed numerically with full kinematics. Hence $y_0$ should be picked as small as computer resources allow, and always much smaller than the physical $y_{\mathrm{cut}}$ parameter.
In this paper we will focus our attention on four possible options of the LUCLUS algorithm, namely the default one, and the same stripped of either preclustering or reassignment or both. We will call the latter the Durham scheme with the LUCLUS measure (with the acronym DL), as this effectively differs from the Durham algorithm introduced in Sect. \[subsec\_Durham\] only in the choice of the distance measure.
LUCLUS has always been distributed as part of JETSET. With the merger of JETSET and PYTHIA the routine has been renamed PYCLUS, but we will here refer to it by its original name.
The Geneva algorithm {#subsec_Geneva}
-------------------------------------
The Geneva algorithm [@BKSS] is based on pure binary joining, with a dimensionless distance measure $$\label{yG}
y_{ij} = \frac{8}{9} \frac{E_i E_j (1 - \cos\theta_{ij})}{(E_i + E_j)^2}.$$ Unlike the other algorithms studied, the measure depends only on the energies of the particles to be combined, and not on the energy of the event. The energy factor $E_i E_j/(E_i + E_j)^2 \approx \min(E_i, E_j)
/ \max(E_i, E_j)$ favors the clustering of soft particles to the hardest ones and disfavors the combination of soft particles with each other. The soft-gluon problems of the JADE algorithm are thus avoided, indeed in a more effective way than in the Durham scheme. In fact, a soft gluon will only be combined with another soft gluon if the angle between them is [*much*]{} smaller than the angle between the former and the nearby high-energy particle. As a consequence, it turns out that the Geneva scheme exhibits a more reduced scale dependence as compared to the Durham algorithm in the three-jet rate at NLO [@BKSS]. Indeed, in Ref. [@genevabest] it was shown that such a property remains true also in the case of the NLO four-jet rates $f_4(\ycut)$. Furthermore, it has been pointed out [@genevabest] that the Geneva algorithm is particularly sensitive to the number of light flavors, rendering it most suitable for the study of New Physics effects: for example, in the context of four-jet events in $e^+e^-$ annihilations [@genevabest], where the existence of the so-called ‘light gluino’ has been advocated in the past years [@window]. As for the exponentiation properties of large logarithms at small values of the resolution parameter, these have not thoroughly been studied yet for this scheme. However, a simple example[^2] should help in understanding that the Geneva algorithm can manifest severe misassignment problems in the soft regime. It suffices to consider a $q_1\bar q_2 g_3g_4$ configuration (e.g., with the antiquark and the two gluons in the same hemisphere), with the two gluons produced via a triple-gluon vertex and ordered in energy, such that $E_2\gg E_3\gg E_4$. In the region where $\theta_{34}<\theta_{23}$ the gluon $g_4$ can be assigned by the Geneva algorithm to the antiquark $\bar q_2$ rather than to the other gluon $g_3$, since $(E_2+E_4)^2\gg(E_3+E_4)^2$ in the denominator of eq. (\[yG\]). Hence, one expects the $C_F$ factor instead of the correct $C_A$ one to describe the radiation intensity of such an event. This induces a breakdown of the correct exponentiation picture [@exponentiation; @durham]: see eqs. (\[Fx\])–(\[Fy\]) in Subsect. \[subsec\_resummed\]. By contrast, a transverse momentum measure, such as the (\[yD\]) or (\[yL\]) ones, would have more naturally assigned $g_4$ to $g_3$, as in the limit $\theta_{34}\ll\theta_{23}$ the full infrared (i.e., both soft and collinear, driven by the gluon propagator) singularity sets in, which renders the triple-gluon diagram dominant in the kinematics above.
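The Geneva distance in the same illustrative conventions, with a numerical check that a soft pair is kept apart while a soft gluon is pulled towards a nearby hard parton (the toy momenta are ours):

```python
import math

def y_geneva(p_i, p_j):
    """Geneva distance (8/9) E_i E_j (1 - cos theta_ij) / (E_i + E_j)^2;
    note the absence of any normalization to the total visible energy."""
    dot = p_i[1] * p_j[1] + p_i[2] * p_j[2] + p_i[3] * p_j[3]
    norm_i = math.sqrt(p_i[1] ** 2 + p_i[2] ** 2 + p_i[3] ** 2)
    norm_j = math.sqrt(p_j[1] ** 2 + p_j[2] ** 2 + p_j[3] ** 2)
    return (8.0 / 9.0) * p_i[0] * p_j[0] * (1.0 - dot / (norm_i * norm_j)) / (
        (p_i[0] + p_j[0]) ** 2)

q1 = (45.0, 0.0, 0.0, 45.0)   # hard quark
g3 = (3.0, 1.0, 0.0, 2.83)    # soft gluon near the quark
g4 = (2.0, -1.0, 0.0, 1.73)   # second soft gluon at a modest angle
print(y_geneva(g3, g4) > y_geneva(g3, q1))  # True: the soft-soft pair is 'far'
```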
The Geneva algorithm has had little phenomenological impact so far. One reason is that it is rather sensitive to hadronization effects, as already pointed out in Ref. [@BKSS]. For instance, compare a single original large-energy hadron with a system of hadrons of the same total energy and collinear within the hadronization $p_{\perp}$ spread of a few hundred MeV. Then the former can collect particles further away in angle at the early stages of clustering. Therefore clustering of gluon jets, which start out with a lower energy and tend to fragment softer than quark jets, is disfavored. This will introduce systematic biases, e.g., in jet energy distributions, that can only be unfolded given a detailed understanding of the hadronization process. In addition, the Geneva algorithm is more sensitive to measurement errors since its measure contains the energy of individual particles also in the denominator, where other algorithms have instead the total visible energy, which is more precisely measured.
The angular-ordered Durham algorithm {#subsec_Angular}
-------------------------------------------------------
This algorithm maintains the same measure (\[yD\]) of the Durham scheme, while modifying the clustering procedure. It was introduced in Ref. [@camjet] to obviate one of the flaws of that algorithm: namely, its tendency to induce ‘junk-jet’ formation at small values of the resolution parameter.
(Fig. \[fig\_soft\_unresolved\]: configuration with a soft, large-angle gluon $g_4$ and a gluon $g_3$ almost collinear to the nearby leading parton $\bar q_2$, both in the $\bar q_2$ hemisphere.)
The problem is as follows. Let us imagine the configuration in Fig. \[fig\_soft\_unresolved\], with two back-to-back high-energy partons (the quarks $q_1$ and $\bar
q_2$) plus a double gluon emission ($g_3$ and $g_4$) [@camjet] in the same hemisphere defined by the plane transverse to the direction dictated by the two quarks, one of the gluons being at large angle and soft ($g_4$) and the other ($g_3$) collinear to the nearby leading particle on the same side ($\bar q_2$). Then, according to the clustering procedure adopted in the Durham scheme, one usually starts from the softest particle (i.e., one of the two gluons: here $g_4$) and merges this with the nearest in [*angle*]{}, to minimize the $p_{\perp}$-measure. Thus, such a particle gets clustered not with one of the leading partons (i.e., $\bar q_2$ here) but, typically, with the softest among the particles which happen to lie on the same side (i.e., to the other gluon, $g_3$, in our example). This is contrary to our picture of the large-angle $g_4$ being emitted coherently by the $\bar q_2$ and $g_3$, so that most of the recoil to the $g_4$ should have been taken by the more energetic $\bar q_2$. Such a procedure gets iterated in the case in which more particles are involved (e.g., radiated in between the least, $g_4$, and the most energetic, $\bar q_2$, ones in one hemisphere). Since at each stage the new pseudo-particle acquires more and more four-momentum, in the end the latter will have a $p_{\perp}$ relative to the leading particle in the same hemisphere larger than the resolution scale adopted. This way, a third jet is eventually resolved. A good algorithm should then be designed so that the starting configuration remains classified as a two-jet final state down to the smallest possible values of the resolution $\ycut$, at which the third (junk-)jet separates. However, one should notice that if one hemisphere of an event is significantly broadened by multiple soft-gluon emission, where the gluons together carry away non-negligible energy and $p_{\perp}$, it would be reasonable to argue that the event could be legitimately recognised as a three-jet one. Clearly, in this as in many other cases, it is the status of our present computation technology and of its list of priorities that induces the choice of strategy to be adopted. In fact, the latter needs to be neither unique nor definitive. On the other hand, as well demonstrated in Ref. [@stan] (see, e.g., Fig. 1 there) and as we shall further see below, the remedy adopted by the angular-ordered (and Cambridge, too) scheme in order to alleviate the above mechanism appears more than adequate for present investigations.
In Ref. [@camjet] it was shown that a simple modification of the Durham algorithm suffices to delay the onset of junk-jet formation, which results mainly from a non-optimal sequence of clustering, rather than from a poor definition of the test variable (as was the case for the JADE algorithm in the seagull diagram). The key to reduce the severity of the problem resides in distinguishing between the variable $v_{ij}\equiv 2(1-\cos\theta_{ij})$, used to decide which pair of objects to test first, and the variable $y_{ij}$ to be compared with the resolution parameter $\ycut$. The algorithm then operates as follows. One considers first the pair of objects $(ij)$ with the smallest value of $v_{ij}$ (in Ref. [@camjet], this procedure was referred to as ‘angular-ordering’). If $y_{ij}<\ycut$, they are combined. Otherwise the pair with the next smallest value of $v_{ij}$ is considered, and so on until either a $y_{ij}<\ycut$ is found or, if not, clustering has finished and all remaining objects are defined as jets.
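The modified clustering sequence can be sketched as follows (illustrative Python in the same conventions as above; the names are ours and no claim is made that this reproduces the reference implementation in every detail):

```python
import math

def v_and_y(p_i, p_j, e_vis):
    """Return (v_ij, y_ij): ordering variable 2(1 - cos theta_ij) and the Durham
    distance of eq. (yD), which equals min(E_i, E_j)^2 v_ij / E_vis^2."""
    dot = p_i[1] * p_j[1] + p_i[2] * p_j[2] + p_i[3] * p_j[3]
    norm_i = math.sqrt(p_i[1] ** 2 + p_i[2] ** 2 + p_i[3] ** 2)
    norm_j = math.sqrt(p_j[1] ** 2 + p_j[2] ** 2 + p_j[3] ** 2)
    v = 2.0 * (1.0 - dot / (norm_i * norm_j))
    return v, min(p_i[0], p_j[0]) ** 2 * v / e_vis ** 2

def cluster_angular_ordered(particles, y_cut):
    """Test pairs in order of increasing opening angle and merge the first one
    with y_ij < y_cut; stop when no pair can be merged."""
    clusters = [list(p) for p in particles]
    e_vis = sum(p[0] for p in clusters)
    merged = True
    while merged and len(clusters) > 1:
        merged = False
        pairs = sorted(((v_and_y(clusters[a], clusters[b], e_vis), a, b)
                        for a in range(len(clusters))
                        for b in range(a + 1, len(clusters))),
                       key=lambda t: t[0][0])   # smallest opening angle first
        for (v, y), i, j in pairs:
            if y < y_cut:                       # the resolution test uses y_ij only
                new = [x + z for x, z in zip(clusters[i], clusters[j])]
                clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
                clusters.append(new)
                merged = True
                break
    return clusters
```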
Coming back to the example configuration described before, but with the new clustering procedure, one should expect the collinear quark $\bar q_2$ and gluon $g_3$ to be paired first, with the soft, large-angle gluon $g_4$ eventually joining the new cluster. In case more radiation is present around the leading quark, the procedure iterates so that the pairing always starts amongst the particles collinear to the leading quark $\bar q_2$, with the soft, large-angle gluon $g_4$ entering the clustering procedure only at the very end. Indeed, this way, the original configuration will more likely be recognised as a two-jet one in the angular-ordered scheme than in the original Durham one. We will exemplify this in Sect. \[subsec\_jetrates\].
By generalizing the procedure to the full hard scattering process, one indeed realizes that, at a given $\ycut$, the two-jet fraction at NLO as given by the angular-ordered scheme is larger than in the original Durham scheme, as illustrated in Ref. [@camjet]. Conversely, the three-jet rate at the same order is smaller. Thus, since it is not unreasonable to argue that jet algorithms having smaller NLO terms may also have smaller higher-order corrections, one would imagine the scale dependence of the three-jet rate for the angular-ordered scheme to be reduced as compared to that of the Durham algorithm. This was shown explicitly again in Ref. [@camjet] (see also Sect. \[subsec\_fixedorder\] later on). Given the phenomenological relevance of $f_3(\ycut)$, this should represent an improvement from the point of view of the accuracy achievable, e.g., in $\as$ determinations, given that the theoretical error should diminish accordingly. As for the exponentiation properties at small $\ycut$, these remain unspoilt in the new algorithm, as discussed in Ref. [@camjet].
Before closing this Section, we should remind the reader that the angular-ordered scheme should be understood as an intermediate step between the Durham and Cambridge schemes, rather than a new one. Indeed, we will recall in the next Section another shortcoming of the Durham algorithm which carries over into the angular-ordered one and which can have a strong impact in jet-rate studies. Nonetheless, for purposes of comparison, we will present results for the angular-ordered scheme on the same footing as the other algorithms.
The Cambridge algorithm {#subsec_Cambridge}
-------------------------------------------
The Cambridge algorithm was defined and its properties discussed in Ref. [@camjet]. It implements the same distance measure as the Durham scheme, while further modifying the clustering procedure of the angular-ordered one. As a matter of fact, the sole introduction of $v_{ij}$ is not enough to remedy the problem of ‘misclustering’ of the Durham algorithm, that is, the tendency of soft ‘resolved’ particles to attract wide-angle radiation [@camjet].
Let us imagine that the soft, large-angle gluon $i=4$ in the previous example has been eventually resolved at a certain (possibly low) scale $\ycut$ (i.e., $y_{4j}>\ycut$). Clearly, the very same ability that it had of attracting radiation (because of its softness) as unresolved parton (i.e., when $y_{4j}<\ycut$) survives above the new $\ycut$. In particular, if further wide-angle (with respect to the leading quark in the same hemisphere of the resolved gluon: i.e., $\bar q_2$) radiation occurs, say, two additional gluons $g_5$ and $g_6$ one of which ($g_5$) happens to lie in angle a little closer to $g_4$ than to the other, then such a gluon will be erroneously assigned (to $g_4$ rather than to $g_6$), when $\theta_{45}>\theta_{4\hat{23}}$ (thus assuming that $\bar q_2$ and $g_3$ have already been clustered).
In order to cure this problem, the Cambridge scheme implements the sterilization (i.e., the removal from the table of particles participating in the clustering) of the softest particle in a resolved pair, a procedure called ‘soft-freezing’. In our example, once $g_4$ has been removed from the sequence of clustering (when $y_{4\hat{23}}>\ycut$), then the unwanted pairing of $g_4$ and $g_5$ (yielding $y_{45}<\ycut$ if the gluons are soft enough) is successfully prevented.
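A schematic rendering of the clustering sequence with soft freezing is given below; it is our own sketch, and the published Cambridge definition [@camjet] should be taken as authoritative for the precise ordering of steps.

```python
import math

def v_and_y(p_i, p_j, e_vis):
    """Ordering variable v_ij = 2(1 - cos theta_ij) and Durham y_ij."""
    dot = p_i[1] * p_j[1] + p_i[2] * p_j[2] + p_i[3] * p_j[3]
    norm_i = math.sqrt(p_i[1] ** 2 + p_i[2] ** 2 + p_i[3] ** 2)
    norm_j = math.sqrt(p_j[1] ** 2 + p_j[2] ** 2 + p_j[3] ** 2)
    v = 2.0 * (1.0 - dot / (norm_i * norm_j))
    return v, min(p_i[0], p_j[0]) ** 2 * v / e_vis ** 2

def cluster_with_soft_freezing(particles, y_cut):
    """Take the smallest-angle pair; if unresolved (y_ij < y_cut) merge it,
    otherwise freeze the softer object out of the table as a jet."""
    table = [list(p) for p in particles]
    e_vis = sum(p[0] for p in table)
    jets = []
    while len(table) > 1:
        (v, y), i, j = min(((v_and_y(table[a], table[b], e_vis), a, b)
                            for a in range(len(table))
                            for b in range(a + 1, len(table))),
                           key=lambda t: t[0][0])
        if y < y_cut:                              # unresolved pair: merge it
            merged = [x + z for x, z in zip(table[i], table[j])]
            table = [c for k, c in enumerate(table) if k not in (i, j)]
            table.append(merged)
        else:                                      # resolved pair: freeze the softer
            soft = i if table[i][0] < table[j][0] else j
            jets.append(table.pop(soft))
    return jets + table
```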
(Fig. \[fig\_soft\_resolved\]: as in Fig. \[fig\_soft\_unresolved\], but with the soft, large-angle gluon $g_4$ resolved and two further wide-angle gluons, $g_5$ and $g_6$, radiated.)
Note, however, that the diagram in Fig. \[fig\_soft\_resolved\] is not the only important one for the described parton configuration. For example, we should also consider the diagram where $g_5$ is attached to $\bar{q}_2$ (to the left of $g_4$), in which case the freeze-out of $g_4$ could prevent $g_5$ from being correctly clustered. The relative importance of the two, as well as of the others appearing at the same order, is clearly a function of the dynamics of the final state. Numerical results will finally establish the effectiveness and/or the limitations of the above approach.
As a matter of fact, the mentioned misclustering phenomenon could well manifest itself in studies of high multi-jet rates (as in Fig. \[fig\_soft\_resolved\]) as well as in those of the internal jet sub-structure when examining the history of the mergings (e.g., in the ‘would-be’ cluster $\hat{45}$, artificially over-populated with gluons). In particular, it was shown in Ref. [@camjet] that such an additional step plugged onto the angular-ordered algorithm allows one to increase the final event multiplicity (e.g., that of the NLO $f_3(\ycut)$ rate) while not deteriorating the scale dependence of the results. In addition, as for the angular-ordered scheme, the properties of factorization/exponentiation of large $\ln\ycut$ terms at small $\ycut$’s remain completely unaffected.
Altogether, as shown in Ref. [@camjet], the Cambridge scheme came at the time of those studies to represent the most suitable choice when dealing with phenomenological studies involving infrared (i.e., soft and collinear) configurations of hadrons/partons, also in view of its performances with respect to the size of the hadronization corrections (see Ref. [@camjet] and Sect. \[sec\_nonpertcomparison\]). In our forthcoming studies we will allow for a variation of the basic Cambridge scheme. Namely, we will also adopt the LUCLUS measure along with the Durham one, and we will label the corresponding algorithm as CL (with the same clustering as the original Cambridge, though).
The DICLUS algorithm {#subsec_DICLUS}
-------------------------------------
The DICLUS algorithm is very different from the other ones considered in this paper, in that each step clusters three jets into two, rather than two into one. Although unconventional in this respect, it is not unnatural. If one considers, e.g., the Lund string fragmentation model, a hadron is produced in the color field between two partons rather than stemming from one individual parton. Especially soft hadrons between jets can never unambiguously be assigned to one jet. Also on the perturbative level, gluons are emitted coherently by neighboring partons. The partonic cascade can therefore be formulated in terms of color-dipole radiation of gluons from pairs of color-connected partons, as in the ARIADNE program [@ariadne]. In a conventional parton cascade, this coherence is instead formulated in terms of angular ordering.
Just as a conventional binary clustering algorithm can be viewed as an attempt to reconstruct backwards a parton shower step by step, the DICLUS algorithm tries to reconstruct a dipole cascade[^3]. The ordering variable in ARIADNE is a Lorentz-invariant transverse momentum measure defined for an emitted parton $i$ with respect to the two emitting partons $j$ and $k$ as $$\label{yAR}
p^2_{\perp i(jk)}=\frac{(s_{ji}-(m_i+m_j)^2)(s_{ik}-(m_i+m_k)^2)}
{s_{ijk}}\label{LL:invpt},$$ where $s_{ij}$($s_{ijk}$) is the squared invariant mass of two (three) partons. When reconstructing the dipole cascade backwards in time with DICLUS, the same measure is used and the clustering procedure is as follows.
- For each cluster $i$, find the two other clusters $j$ and $k$ for which $p^2_{\perp i(jk)}$ is minimized.
- Take the combination $i,j,k$ which gives the minimum $p^2_{\perp
i(jk)}$, and if this is below a cutoff, remove cluster $i$ and distribute its energy and momentum among $j$ and $k$.
These steps are repeated until no $p^2_{\perp i(jk)}$ is below cutoff.
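The measure and the selection of the next emission to be undone can be sketched as follows; the recoil kinematics of the actual joining (performed in the three-cluster rest frame) is omitted, and the code is an illustration only, with names of our choosing.

```python
import math

def mass2(p):
    """Squared invariant mass of a four-momentum (E, px, py, pz)."""
    return p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2

def pt2_dipole(p_i, p_j, p_k):
    """Invariant p_T^2 of cluster i with respect to the (j, k) pair, eq. (yAR)."""
    s_ji = mass2([a + b for a, b in zip(p_j, p_i)])
    s_ik = mass2([a + b for a, b in zip(p_i, p_k)])
    s_ijk = mass2([a + b + c for a, b, c in zip(p_i, p_j, p_k)])
    m_i, m_j, m_k = (math.sqrt(max(mass2(p), 0.0)) for p in (p_i, p_j, p_k))
    return (s_ji - (m_i + m_j) ** 2) * (s_ik - (m_i + m_k) ** 2) / s_ijk

def next_emission(clusters):
    """Return (pt2, i, j, k) for the triplet minimizing pt2_dipole: cluster i is
    the one to be removed, while j and k absorb its energy and momentum."""
    best = None
    n = len(clusters)
    for i in range(n):
        for j in range(n):
            for k in range(j + 1, n):
                if i in (j, k):
                    continue
                val = pt2_dipole(clusters[i], clusters[j], clusters[k])
                if best is None or val < best[0]:
                    best = (val, i, j, k)
    return best
```

Each call to `next_emission` corresponds to undoing one dipole emission; the loop stops once the returned $p_\perp^2$ exceeds the cutoff.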
The joining is performed in the rest-frame of the three clusters, which are replaced by two massless, back-to-back ones aligned with the one of the original clusters with the largest energy ([mode=1]{}). Alternatively, the new clusters are placed in the plane of the original ones with an angle $\psi=\frac{E_k^2}{E_j^2+E_k^2}(\pi-\theta_{jk})$ from the highest energy cluster $j$ ($k$ is the second highest energy cluster and $\theta_{jk}$ is the angle between $j$ and $k$) ([mode=0]{}). These two options correspond exactly to the two ways of distributing transverse recoil in an emission in ARIADNE. Mode 1 is more similar to the binary algorithms and is the one mostly used in this paper.
In fact, for most cases, the measure in eq. (\[LL:invpt\]) is closer to the transverse mass of the emitted cluster $i$ rather than to its transverse momentum, except when two clusters are almost at rest w.r.t. each other. Removing the subtraction of masses in eq. (\[LL:invpt\]) gives a measure which is closer to transverse mass everywhere: $$\label{mtAR}
m^2_{\perp i(jk)}=\frac{s_{ji}s_{ik}}{s_{ijk}}\label{LL:invmt}.$$ This measure is used in [mode=2]{}, which is otherwise the same as [mode=1]{}. Since the jets in DICLUS are always massless, the only difference will be in the initial clustering of massive hadrons, but it turns out that this actually makes some difference in the reconstruction of jet energies and angles below.
There is no straightforward translation between the distance measure in DICLUS and the ones used in binary clustering algorithms, since it depends on three clusters rather than two. However, in the limit of small measures, the $p_{\perp i(jk)}$ is equal to the Durham measure taken between the two softest clusters in the rest frame of the clusters $i,j,k$. Also, if the cluster $i$ is much softer than $j$ and $k$, and much closer to, e.g., $j$ than $k$, $p_{\perp i(jk)}$ is again equal to the Durham $d_{ij}$.
To better understand how DICLUS works we look at the example diagrams above. In Fig. \[fig\_seagull\], the first step would be to cluster $g_3$ into $\bar{q}_2$ and $g_4$ (or $g_4$ into $q_1$ and $g_3$). In most cases the anti-quark would be given the major part of the gluon's transverse momentum; thus DICLUS resembles the other transverse-momentum-based algorithms for this case. However, in some cases the neighboring gluon will get a large fraction of the transverse momentum, especially if the invariant mass of $g_3$ and $g_4$ is smaller than that of $g_3$ and $\bar{q}_2$. This may happen even if the angle between $g_3$ and $g_4$ is larger than that between $g_3$ and $\bar{q}_2$, so the problem present in algorithms based on invariant mass measures is not solved completely.
In Fig. \[fig\_soft\_unresolved\], assuming $g_4$ has smaller $p_{\perp i(jk)}$ than $g_3$, the first step would be to cluster $g_4$ into $q_1$ and $g_3$, giving extra transverse momentum to $g_3$, possibly pushing it above the cutoff. This is a good description of how this parton configuration would have been generated in a dipole cascade. However, parton cascades in general only agree completely with perturbative QCD in the limit of strong ordering of emissions where recoils do not matter, but, as we have discussed above, it makes some difference here and it would certainly be more reasonable to say that $g_4$ was radiated by $g_3$ and $\bar{q}_2$ coherently. Finally in Fig. \[fig\_soft\_resolved\], assuming now that $g_3$ and $\bar{q}_2$ have already been joined, DICLUS could very well cluster $g_5$ into $g_6$ and $g_4$, which is how it would have been produced in a dipole cascade, although if $g_5$ is soft, it would be more reasonable to have it produced from a dipole between $g_6$ and the ($g_4g_3\bar{q}_2$) system, where the latter acts coherently as one color charge.
Since DICLUS clusters three particles into two, it is not directly possible to say which final state hadron belongs to which jet. It is, however, possible to assign each particle to a jet after the jet directions have been found, simply by finding the two jets $j$ and $k$ for each particle $i$ for which $p^2_{\perp i(jk)}$ is smallest and then assigning particle $i$ to the jet which is closest in angle in the rest frame of $ijk$. In this way it is also possible to redefine the jet directions and energies by summing the momentum of the particles assigned to each jet. This reclustering is used below and is then labeled ‘[reclustered]{}’.
In the remainder of the paper, in order to avoid any confusion with the angular-ordered (A) algorithm, we will sometimes label the three modes of the DICLUS/ARCLUS scheme by AR0, AR1 and AR2, for the [mode=0,1,2]{} cases respectively.
Perturbative comparisons {#sec_pertcomparison}
========================
In this Section we will compare the performances of the jet clustering algorithms introduced in Sect. \[sec\_intro\] with respect to several quantities calculable in perturbative QCD which are relevant to hadronic studies in electron-positron annihilations. It is subdivided into two Subsections. In the first we deal with fixed-order results whereas in the second we present resummed perturbative quantities. The treatment in Subsect. \[subsec\_fixedorder\] is mainly numerical, whereas that in Subsect. \[subsec\_resummed\] is analytical.
To produce the results in the first case, we have made use of the ‘QCD parton generator’ [@EERAD][^4]. Such programs calculate NLO corrections to arbitrary infrared-safe two- and three-jet quantities, through the order ${\cal O}(\as^2)$ in QCD perturbation theory. Although they resort to Monte Carlo (MC) multidimensional integration techniques, they differ substantially from the QCD-based ‘Monte Carlo event generators’ that we will introduce later on (in Sect. \[sec\_nonpertcomparison\]). For a start, the former compute the exact ${\cal O}(\as^2)$ ME result, rather than implementing only the infrared QCD dynamics in the usual ${\cal O}(\alpha_s)$ ME + Parton Shower (PS) modeling [@book]. In addition, the phase-space configurations generated are not necessarily positive definite, so that a probabilistic interpretation is not possible. Finally, these programs only consider partonic states and no treatment of the hadronization process is given. This kind of generators thus represents a complementary tool for QCD analyses to the phenomenological MCs which will be described and used in Sect. \[sec\_nonpertcomparison\].
One final remark, before we start our investigations in pQCD. That is, although we look here at some individual properties, we remind the reader that when choosing an algorithm for a particular measurement, one may have to consider many different aspects altogether. When, e.g., measuring $\as$ from the three-jet rate, it is not enough to find the algorithm with smallest scale dependence, especially if this behavior is found at a larger resolution scale where the three-jet rate is lower, thus giving a larger statistical error in the measurement. The goal must be to minimize the total error which may include both the statistical error as well as systematical errors due to detector unfolding, hadronization corrections, scale dependencies, etc. This is however well beyond the scope of our study.
Fixed-order perturbative results {#subsec_fixedorder}
--------------------------------
In this Subsection we study the $y$-dependent three-jet fraction[^5] $f_3(y)$, defined through the relation $$\label{f3}
f_3(y) = \left(\frac{\as}{2\pi}\right) A(y) + \left(\frac{\as}{2\pi}\right)^2 \left[B(y)-2A(y)\right] + \ldots~,$$ having implicitly assumed the choice $\mu=Q$ of the renormalization scale (in the $\overline{\mbox{MS}}$ scheme). In eq. (\[f3\]), $\as$ represents the strong coupling constant whereas $A(y)$ and $B(y)$ are the so-called leading and next-to-leading ‘coefficient functions’ of the three-jet rate, respectively. The terms of order $\cO{\as^2}$ involving $A(y)$ take account of the normalization to $\st$ rather than to $\sigma_{0}$, which we assume throughout the paper. In fact, we define the $n$-jet fraction $f_n(y)$ as $$\label{fn}
f_n(y)= \frac{\sigma_n(y)}{\st}~,$$ where $\sigma_n(y)$ is the $n$-jet production cross section at a given $y$. If $\st$ identifies the [*total*]{} hadronic cross section $\st=\sigma_{0}(1+\as/\pi+ ... )$, $\sigma_{0}$ being the lowest-order Born one, then the constraint $\sum_n f_n(y)=1$ applies. For $n=3$, eq. (\[f3\]) represents the three-jet fraction in NLO approximation in perturbative-QCD (pQCD).
Out of the thirteen jet clustering algorithms that we originally chose for our study, we focus here our attention on the D, A, C, DL, CL, AR0 and AR1 schemes. We neglect considering the others for the following reasons. On the one hand, the J and G schemes have already been documented extensively in the specialized literature [@BKSS; @Yellow] and, on the other hand, we would expect them to have little phenomenological applications in the future, at least in QCD studies in the infrared dominion. In fact, as already recalled, the former does not allow for factorization properties of large logarithms $\ln y$ at small $y$-values whereas for the latter these have not been proven to hold yet. Indeed, they both share the feature of being based on an obsolete invariant mass measure, whose flaws go beyond the realm of perturbative QCD, as it is reflected by the more fundamental rôle played by the transverse momentum in setting the scale of jet evolution, as the argument of the running coupling, and in defining the boundary between perturbative and non-perturbative physics [@book]. Furthermore, of the four possible options of the LUCLUS scheme introduced previously, we only consider here the simplest one (which we labeled DL), which implements neither the preclustering nor the reassignment steps. Anyhow, because of the kinematic simplicity of the partonic final states entering in the NLO three-jet rates, the differences among the four implementations turn out to be very marginal. Firstly, the reassignment option is inactive in three-jet rates until the NNLO, see Sect. \[subsec\_LUCLUS\]. Secondly, the preclustering procedure can be incorporated easily with imperceptible effects, so that the cancellations between the loop- and the bremsstrahlung-diagrams still take place effectively without deteriorating the accuracy of the results. (If this is not the case, the $d_{\mathrm{init}}$ parameter has not been set appropriately, see Sect. \[subsec\_LUCLUS\].) Thus, the claim made in the literature, that the LUCLUS algorithm is not suitable for perturbative calculations (see, e.g., Ref. [@BKSS]), does not apply in the present context: i.e., in [*numerical*]{} computations of NLO three-jet observables. Also, although analytical calculations with the original LUCLUS scheme may be prohibitively difficult, one can certainly say that, without preclustering and reassignment (here, the DL scheme), LUCLUS remains a reasonable option to adopt, especially in view of some of the results that we will present in the following. Besides, its properties with respect to the Sudakov exponentiation of soft-gluon emission in the resummation procedure of large $\ln y$ logarithms are on the same footing as the D, A and C schemes [@camjet] (see next Section). For DICLUS we note that the measures in eqs. (\[yAR\]) and (\[mtAR\]) are equivalent for massless partons, so that in the following AR2 coincides with AR1 as they have the same recoil assignment (see Subsect. \[subsec\_DICLUS\]).
Fig. \[fig\_afunction\_bis\] shows the $A(y)$ function, over the range $ 0.001 \leq y \Ord 0.1$, for the selected algorithms. Notice that in several cases the curves coincide. In particular, it occurs for the D, A and C schemes, the DL and CL algorithms and the AR0 and AR1 ones, respectively. This is evident if one considers that (apart from the C and CL options) the various schemes, within each of the three subsets, differ only in the clustering procedure of unresolved particles, which clearly does not affect the LO three-jet rates. As for the C and CL algorithms, one should consider that, for $n=3$ partons, kinematical constraints impose that, on the one hand, the two closest particles are also those for which $y$ is minimal and, on the other hand, the identification of the softest of the three partons as a jet implies that the remaining two particles are naturally the most energetic and far apart. This ultimately means that the $A(y)$ function is the same also for the schemes implementing the soft-freezing step.
The pattern of the curves in Fig. \[fig\_afunction\_bis\] is easily understood in terms of the measures of the algorithms. For given values of angle and energies (or three-momenta) of the parton pair $(ij)$, $\theta_{ij}$ and $E_{i,j}$ (or $|\bfp_{i,j}|$), respectively, the value of $y_{ij}$ is generally larger in the D, A and C schemes, as compared to the DL and CL ones: see eqs. (\[yD\]) and (\[yL\]). Therefore, over an identical portion of phase space, more three-parton events will be accepted as three-jet ones in the D, A and C algorithms than in the DL and CL ones (see also Fig. \[fig\_measure\] below), this ultimately increasing the value of $A(y)$. The comparison of the two measures (\[yD\]) and (\[yL\]) to that of the schemes (\[yAR\]) is clearly less straightforward, as already discussed. For a fixed $y$, the latter is on average larger than that of the DL measure but smaller than that of the D one, as can be deduced from Fig. \[fig\_afunction\_bis\].
In Fig. \[fig\_bfunction\_bis\] we present the NLO $B(y)$ function. Because of the different recombination procedures of the schemes, the various curves now all separate. The interplay between the D, A and C rates has been analysed in great detail in Ref. [@camjet], so we do not repeat those discussions in this paper. We only point out here the correspondence between, on the one hand, the D and C algorithms and, on the other hand, the DL and CL ones, in the sense that the former two differ from each other in the same way as the latter two do: i.e., in the angular-ordering and soft-freezing procedure recommended in Ref. [@camjet]. Indeed, the relative behaviors (of D vs. C and of DL vs. CL) are qualitatively similar. Further notice the tendency of the DL, CL, AR0 and AR1 schemes to yield at small $y$’s a $B(y)$ consistently higher than that due to the other three algorithms, and vice versa at large values of the resolution parameter. In other terms, they emphasize the real four-parton component with one unresolved pair more than the virtual three-parton one at small $y$-values, and vice versa as $y$ increases. However, we note that the peaking of $B(y)$ at different $y$ values does not in itself say much, since the definition of $y$ is not the same. The simplest measure of the difference in cancellations between real and virtual contributions is instead the maximum value of $B(y)$.
We are now going to carefully investigate the interplay of the $A(y)$ and $B(y)$ functions in the expression of the three-jet fraction since, as recalled in Ref. [@camjet], from the point of view of perturbative studies, a ‘good’ jet clustering algorithm should allow for a reduced $\mu$-scale dependence of the fixed-order results, where $\mu$ is the subtraction (energy) scale regulating the infrared cancellations. As a matter of fact, the $\cO{\as^3}$ corrections are guaranteed to cancel the $\mu$-dependence of the $\cO{\as^2}$ three-jet fraction up to the order $\cO{\as^4}$, so that the smaller the variations with $\mu$ the lower the higher order corrections are expected to be. The $\mu$-scale dependence is introduced in eq. (\[f3\]) by means of the two substitutions $$\label{scaledep}
\as\to\as(\mu)~, \qquad\qquad B(y)\to B(y)-A(y)\,\beta_0\ln\left(\frac{Q}{\mu}\right)~,$$ where $\beta_0=11-2N_f/3$ is the first coefficient of the QCD $\beta$-function and $N_f$ is the number of fermionic (colored) flavors active at the energy scale $\mu$.
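As a cross-check of the second substitution (our own one-line derivation, using the one-loop running of $\as$ with the $\beta_0$ defined here), inserting $\as(Q) = \as(\mu)\left[1-\frac{\beta_0}{2\pi}\,\as(\mu)\ln\frac{Q}{\mu}+\ldots\right]$ into eq. (\[f3\]) gives $$\left(\frac{\as(\mu)}{2\pi}\right) A(y) + \left(\frac{\as(\mu)}{2\pi}\right)^2\left[B(y)-2A(y)-A(y)\,\beta_0\ln\frac{Q}{\mu}\right]+\ldots~,$$ which is eq. (\[f3\]) with the replacements of eq. (\[scaledep\]).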
A problem arises when studying the scale dependence of $f_3(y)$ for algorithms based on different measures, as for the same $y$ the three-jet fraction at NLO can be significantly different. A more consistent procedure was outlined in Ref. [@BKSS]: that is, to compare the NLO scale dependence of the various schemes not at the same $y$-value, but rather at the same $A(y)$, the three-jet fraction at LO. As can be seen from Fig. \[fig\_bfunction\_bis\], two possible combinations of $y$’s are the following: $y_{\mathrm\tiny{D,A,C(DL,CL)[AR0,AR1]}}=0.010(0.005)[0.006]$ and $y_{\mathrm\tiny{D,A,C(DL,CL)[AR0,AR1]}}=0.050(0.021)[0.025]$. Such values are typically in the three-jet region and, in addition, they are rather large, as compared to the minimum $y=0.001$ considered so far, so that they can guarantee the full applicability of perturbation theory.
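The matching at equal $A(y)$ can be automated by inverting the LO coefficient function numerically. The following sketch assumes only that one has a parameterization of $A(y)$ (e.g. the fits of Tab. \[tab\_kparam\_largey\] below) which decreases monotonically with $y$; the function name `A_of_y` is a placeholder.

```python
def y_at_equal_A(A_of_y, A_target, y_lo=1e-3, y_hi=0.1, tol=1e-8):
    """Bisection for the y value at which A(y) = A_target (A decreasing in y)."""
    while y_hi - y_lo > tol:
        y_mid = 0.5*(y_lo + y_hi)
        if A_of_y(y_mid) > A_target:
            y_lo = y_mid   # A still above the target: move to larger y
        else:
            y_hi = y_mid
    return 0.5*(y_lo + y_hi)
```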
Fig. \[fig\_scale\_bis\] shows (again for the seven selected algorithms) the value of $f_3(y)$ plotted against the dimensionless scale parameter $\mu/Q\equiv \mu/M_Z$, over the range between $1/10$ and $2$, for the two mentioned combinations of the jet resolution parameter. Note that for the strong coupling constant we have used the two-loop expression, with $N_f=5$ and $\Lambda^{(5)}_{\overline{\mbox{\tiny{MS}}}}=250$ MeV, yielding $\as(M_Z)=0.120$, with $Q=M_Z$ as CM energy at LEP1.
Although the structure of the QCD perturbative expansion does not prescribe which value should be adopted for the scale $\mu$, an obvious requirement is that it should be of the order of the energy scale involved in the problem: i.e., the CM energy $Q$ (see Ref. [@subtraction] for detailed discussions). Indeed, this choice prevents the appearance of large terms of the form $(\as\ln(\mu/Q))^n$ in the QCD perturbative series. Furthermore, the physical scale of gluon emissions that actually give rise to three-jet configurations is to be found down to the energy scale ${\sqrt{y}}M_Z$, not above $M_Z$. In other terms, one should avoid building up large logarithmic terms related to the mismatch between $\mu\ge M_Z$ and the physical process scale ${\sqrt{y}}M_Z$. Therefore, as a sensible range over which to estimate the effects of the uncalculated ${\cal O}(\as^3)~+~...$ corrections one should adopt a reduced interval just below the value $\mu/M_Z = 1$. If one does so, then a remarkable feature of Fig. \[fig\_scale\_bis\] is that the DL and CL algorithms show a reduced scale dependence as compared to the D and C ones, respectively, at low and especially at high $y$-values. Furthermore, among these two algorithms, it is the CL one that in general performs better, on the same footing as the C algorithm does as compared to the D one. For example, at the low(high) $y$-values considered, the differences between the maximum and minimum values of $f_3(y)$ between $\mu/M_Z=1/2$ and $\mu/M_Z=1$ are $2.4(3.6)\%$ for the DL scheme and $1.2(2.9)\%$ for the CL one, respectively. In the case of the D and C algorithms, one has $2.4(4.3)\%$ and $1.3(3.6)\%$, correspondingly. The numbers for the AR0 and AR1 algorithms are larger: $4.0(5.5)\%$ and $4.0(5.5)\%$ at small(large) $y$-values, respectively. To help the reader in disentangling the features of Fig. \[fig\_scale\_bis\], we have reproduced some of the data points of the figure in Tab. \[tab\_scale\_bis\].
[|c|c|c|c|c|c|c|c|]{}\
\
------------------------------------------------------------------------
$\mu/M_Z$ & D & A & C & DL & CL & AR0 & AR1\
------------------------------------------------------------------------
$0.25$ & $0.336$ & $0.312$ & $0.311$ & $0.353$ & $0.325$ & $0.383$ & $0.381$\
$0.50$ & $0.334$ & $0.316$ & $0.315$ & $0.352$ & $0.329$ & $0.371$ & $0.370$\
$0.75$ & $0.330$ & $0.314$ & $0.313$ & $0.347$ & $0.327$ & $0.363$ & $0.362$\
$1.00$ & $0.326$ & $0.311$ & $0.310$ & $0.343$ & $0.325$ & $0.357$ & $0.356$\
$1.25$ & $0.323$ & $0.309$ & $0.308$ & $0.340$ & $0.323$ & $0.352$ & $0.351$\
$1.50$ & $0.320$ & $0.307$ & $0.306$ & $0.337$ & $0.320$ & $0.348$ & $0.347$\
\
------------------------------------------------------------------------
$\mu/M_Z$ & D & A & C & DL & CL & AR0 & AR1\
------------------------------------------------------------------------
$0.25$ & $0.120$ & $0.114$ & $0.114$ & $0.114$ & $0.108$ & $0.134$ & $0.134$\
$0.50$ & $0.116$ & $0.111$ & $0.111$ & $0.111$ & $0.107$ & $0.128$ & $0.127$\
$0.75$ & $0.113$ & $0.109$ & $0.109$ & $0.109$ & $0.105$ & $0.124$ & $0.123$\
$1.00$ & $0.111$ & $0.108$ & $0.108$ & $0.108$ & $0.104$ & $0.121$ & $0.121$\
$1.25$ & $0.110$ & $0.106$ & $0.106$ & $0.106$ & $0.103$ & $0.119$ & $0.119$\
$1.50$ & $0.108$ & $0.105$ & $0.105$ & $0.105$ & $0.102$ & $0.117$ & $0.117$\
\
The improvement in switching from DL to CL can be traced back to the implementation of the angular-ordering and soft-freezing procedures, as one of their side effects is to reduce the three-jet fraction: compare with eq. (\[f3\]), where the $B(y)$ term enters with a positive sign (the leading piece proportional to $A(y)$ is the dominant one also at NLO). As pointed out in Ref. [@camjet] and also discussed earlier on, the reduced scale dependence and the smaller NLO corrections to the three-jet rate are intimately related.
(Fig. \[fig\_measure\] displays, as functions of $x_i$ and $x_j$, the two measures $2~\mbox{min}\left(\frac{x_i^2}{4},\frac{x_j^2}{4}\right)$ and $2\left(\frac{x_i}{2}\right)^2 \left(\frac{x_j}{2}\right)^2/ \left(\frac{x_i}{2}+\frac{x_j}{2}\right)^2$, respectively.)
The differences between the CL(DL) algorithm and the C(D) one can only be ascribed to the choice of the measure, as the clustering procedure is the same for both schemes. From the numbers in Tab. \[tab\_scale\_bis\], it is clear that the reason for the improved performance goes beyond the relative importance of the LO and NLO pieces in the three-jet rate, as in some cases the DL and CL rates are above the D and C ones, respectively: e.g., at low $y$, where nonetheless the behaviour of the two measures is almost identical. Indeed, one could associate the better performance of the measure (\[yL\]) as compared to the one (\[yD\]) with the fact that the first is a [*smooth*]{} function of its arguments over all the available phase space whereas the second is not. This can be appreciated in Fig. \[fig\_measure\], where the shape of the expression (here $Q$ plays the rôle of ${E_{\mathrm{vis}}} $ in Sect. \[sec\_algorithms\]) $$\label{measureD}
y_{ij} = 2~\mbox{min}\left(\frac{x_i^2}{4},\frac{x_j^2}{4}\right)$$ for the measure (\[yD\]) is compared to that of the one (\[yL\]), $$\label{measureL}
y_{ij} = 2\left(\frac{x_i}{2}\right)^2 \left(\frac{x_j}{2}\right)^2/
\left(\frac{x_i}{2}+\frac{x_j}{2}\right)^2,$$ as a two-dimensional function of the reduced energies $x_i=2E_i/Q$ and $x_j=2E_j/Q$. For simplicity, we assume the two clusters $i$ and $j$ to be massless (i.e., $E_{i,j}=|{\bfp}_{i,j}|$) and drop the angular dependence $(1-\cos\theta_{ij})$.
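A quick way to see the different behavior of the two factors along $x_i=x_j$ is to evaluate them numerically; the snippet below simply codes eqs. (\[measureD\]) and (\[measureL\]) as written (angular factor dropped) and prints values across the diagonal.

```python
def measure_D(xi, xj):
    """Eq. (measureD): 2*min(xi^2/4, xj^2/4)."""
    return 2.0*min(xi*xi/4.0, xj*xj/4.0)

def measure_L(xi, xj):
    """Eq. (measureL): 2*(xi/2)^2*(xj/2)^2/((xi/2 + xj/2))^2."""
    return 2.0*(0.5*xi)**2*(0.5*xj)**2/((0.5*xi + 0.5*xj)**2)

# Across xi = xj the D factor has a ridge (discontinuous first derivative),
# while the L factor varies smoothly.
for eps in (-0.02, -0.01, 0.0, 0.01, 0.02):
    xi, xj = 0.5 + eps, 0.5 - eps
    print(eps, measure_D(xi, xj), measure_L(xi, xj))
```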
It is well known that the presence of ‘edges’ at the border of the phase space defined by a jet algorithm is a source of misbehaviors in higher-order perturbation theory, as they ultimately generate infrared divergences (integrable, though) inside the physical region [@CatWeb]. For example, one can refer to the so-called ‘infrared instability’ of the jet energy profile $(d E_T/dr)$ in (iterative) cone algorithms typically used in hadron-hadron collisions, with $r$ being the Lorentz-invariant opening angle of the cone defined in terms of pseudorapidity and azimuth (see, e.g., Ref. [@Mike] for definitions and details). In fact, such a shape shows an edge in $\cO{\as^3}$ perturbation theory at the cone radius $r=R$. Although a resummation of logarithms $\pm\ln(|r-R|)$ to all orders cures the problem (as the edge eventually becomes a ‘shoulder’ !), a lesson to be learned is that it is clearly desirable to avoid observables with discontinuities when comparing with fixed-order predictions. (Similar conclusions can be drawn for the $C$-parameter in $e^+e^-$ scatterings [@CatWeb; @lastCatWeb].)
Although the context is not quite the same, it is not unreasonable to expect that the measure (\[yD\]) might sooner or later reveal problems similar to those discussed, given its behavior along the trajectory $x_i=x_j$. In this respect, the measure (\[yL\]) should represent a ‘safer’ observable. Indeed, the more pronounced scale dependence of the D and C algorithms, as compared to the DL and CL ones, respectively, could well be a first hint of possible problems in higher order pQCD.
Also the measure in eq. (\[yAR\]) is a smooth function of its arguments. However, as discussed in Sect. \[subsec\_DICLUS\], the AR schemes still have some problems with the seagull diagram, giving larger three-jet rates and also a larger scale dependence.
For completeness, we also present the rates for the four-jet fraction at LO. The analytical expression reads as follows (neglecting the $\mu$-scale dependence in $\as$) $$\label{f4}
f_4(y) = \left(\frac{\as}{2\pi}\right)^2 C(y) + ...\,,$$ where $C(y)$ is the corresponding coefficient function introduced on the same footing as $A(y)$ in the three-jet rates at LO. Its behavior is shown in Fig. \[fig\_cfunction\_bis\], for the D, A, C, DL, CL, AR0 and AR1 schemes. Note that D and A, on the one hand, and AR0 and AR1, on the other hand, coincide, as no clustering between unresolved partons can take place at LO. Once again, we leave aside any comments about the D, A and C rates, for which we refer the reader to Ref. [@camjet]. As for the DL and CL algorithms, notice the increase of the LO rates due to the soft freezing mechanism, like between the D and C schemes. In this case the increase is larger, $7.4\%$ against $2.5\%$ at the minimum $y=0.001$. The absolute rates (of DL vs. D and of CL vs. C) are smaller, though: by approximately $35-38\%$ (at $y=0.001$). Such a difference can be explained in terms of the D and L measures, as was done while commenting on Fig. \[fig\_afunction\_bis\]. The AR curve falls in between these two. Thus, like in the case of the $A(y)$ function, for a fixed $y$, the $y_{ij}$ value of the former is on average larger than the L but smaller than the D one.
[|c|c|c|c|c|c|c|c|]{}
------------------------------------------------------------------------
$F$ & Algorithm & $y$-range & $k_0$ & $k_1$ & $k_2$ & $k_3$ & $k_4$\
------------------------------------------------------------------------
$A$ & DL, CL & $0.001-0.06$ & $19.049$ & $-18.991$ & $5.891$ & $-0.619$ & $0.0314$\
$A$ & AR0, AR1 & $0.001-0.10$ & $11.436$ & $-12.879$ & $4.369$ & $-0.461$ & $0.0257$\
------------------------------------------------------------------------
$B$ & DL & $0.001-0.06$ & $691.886$ & $-556.092$ & $117.215$ & $1.290$ & $-1.349$\
$B$ & CL & $0.001-0.06$ & $622.657$ & $-504.153$ & $107.150$ & $1.388$ & $-1.341$\
$B$ & AR0 & $0.001-0.10$ & $55.037$ & $10.161$ & $-63.296$ & $27.529$ & $-2.757$\
$B$ & AR1 & $0.001-0.10$ & $154.687$ & $-99.096$ & $-20.233$ & $20.332$ & $-2.335$\
------------------------------------------------------------------------
$C$ & DL & $0.001-0.03$ & $-172.821$ & $108.274$ & $-5.551$ & $-7.255$ & $1.152$\
$C$ & CL & $0.001-0.03$ & $-239.022$ & $176.112$ & $-30.242$ & $-3.593$ & $0.981$\
$C$ & AR0, AR1 & $0.001-0.10$ & $-99.895$ & $50.846$ & $9.853$ & $-8.880$ & $1.214$\
We conclude this Section by presenting a polynomial fit of the form $$\label{fit}
F(y)=\sum_{n=0}^{4} k_n\left(\ln\frac{1}{y}\right)^n$$ to the $F=A$, $B$ and $C$ functions, as already done in various instances in previous literature (see, e.g., Refs. [@camjet; @BKSS]), now in the case of the DL, CL, AR0 and AR1 schemes. The lower limit of our parameterization is $y=0.001$ for all three auxiliary functions. We extend the fits up to values where the four schemes yield sizable rates exploitable in phenomenological studies (see, e.g., Ref. [@OPALnj]). Typically, for the DL and CL algorithms these are around $y=0.06(0.03)$ for $A$ and $B$($C$), whereas for AR0 and AR1 a common value to the three functions is $y=0.1$. The values of the coefficients $k_n$, with $n=0, ... 4$, are given in Tab. \[tab\_kparam\_largey\]. Those for the D, A and C algorithms were given in Ref. [@camjet].
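For convenience, the parameterization can be used directly as follows; the dictionary keys are ours, the numbers are those of Tab. \[tab\_kparam\_largey\], and the fits should only be trusted inside the quoted $y$-ranges.

```python
import math

K_LARGE_Y = {                      # (function, scheme) -> [k0, ..., k4]
    ("A", "DL/CL"):   [19.049, -18.991, 5.891, -0.619, 0.0314],
    ("A", "AR0/AR1"): [11.436, -12.879, 4.369, -0.461, 0.0257],
    ("B", "DL"):      [691.886, -556.092, 117.215, 1.290, -1.349],
    ("B", "CL"):      [622.657, -504.153, 107.150, 1.388, -1.341],
    ("B", "AR0"):     [55.037, 10.161, -63.296, 27.529, -2.757],
    ("B", "AR1"):     [154.687, -99.096, -20.233, 20.332, -2.335],
    ("C", "DL"):      [-172.821, 108.274, -5.551, -7.255, 1.152],
    ("C", "CL"):      [-239.022, 176.112, -30.242, -3.593, 0.981],
    ("C", "AR0/AR1"): [-99.895, 50.846, 9.853, -8.880, 1.214],
}

def fit_F(y, coeffs):
    """Eq. (fit): F(y) = sum_{n=0}^{4} k_n (ln 1/y)^n."""
    L = math.log(1.0/y)
    return sum(k*L**n for n, k in enumerate(coeffs))

print(fit_F(0.01, K_LARGE_Y[("A", "DL/CL")]))   # LO coefficient A(0.01)
```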
As a summary of our fixed-order studies, though limited to the NLO $\cO{\as^2}$ rates, several conclusions can be drawn.
1. The angular-ordering and soft-freezing procedures advocated in Ref. [@camjet] represent a [*genuine improvement*]{} in fixed-order perturbative studies in the infrared domain, provided these are plugged onto a $p_{\perp}$-based algorithm. (In fact, one should recall their ‘inefficiency’ when implemented within the J scheme [@camjet], based on a mass measure.) This can be appreciated by noticing a reduced scale dependence of the NLO three-jet rates in both the C and CL schemes, as compared to the D and DL ones, respectively.
2. Of the two $p_{\perp}$-based binary measures considered here, the L one yields better performances than the D one in terms of the stability of the NLO results against variation of the subtraction scale $\mu$. This is presumably related to its definition in terms of the energies of the partons involved, which does not contain discontinuities or edges at the border of the phase space selected by the resolution parameter, contrary to the case of the D measure. Thus, an algorithm based on both angular-ordering and soft-freezing and exploiting the L measure represents an alternative option to the C scheme to be adopted in the kind of studies performed here.
3. However, as for multi-parton studies in higher order pQCD, the exploitation of the original L scheme should be limited to the adoption of its measure, not to the implementation of the preclustering and rearrangement steps recommended in the original version. On the one hand, these would introduce a considerable complication in both the numerical and (especially) analytical calculations. On the other hand, they would spoil the well known factorization properties of $p_{\perp}$-based algorithms, in resumming to all orders in perturbation theory terms of the form $\ln y$ at small values of the parameter $y$. Indeed, one should notice that such properties are applicable to the case of the described DL and CL schemes, as the L measure reduces to the D one in the soft limit. We will address this point specifically in the next Section. In addition, we will also show that the sole adoption of the L measure (i.e., the DL scheme) is not enough to reduce the size of the hadronization corrections of the D scheme (see Sect. \[subsec\_jetrates\]), so that CL is an option to be generally preferred to the DL one, as was the case for the C algorithm as compared to the D one [@camjet].
4. Finally, the two AR schemes, based on the clustering of three particles into two, do not show any substantial improvement, as compared to the conventional ones, which join two particles into one, at least in fixed-order perturbative studies of three-jet rates at NLO.
Before closing, we should mention that further analyses on the same footing as those described in this Section are under way, for the case of the four-jet rate at NLO: from QCD, using the program [@DEBRECEN][^6], and from $W^+W^-$ decays, exploiting the code used in Ref. [@Ezio]. An account of the results in these contexts will be given in a future publication [@preparation].
Resummed perturbative results {#subsec_resummed}
-----------------------------
In this Section we introduce a quantity which makes use of the results present in the previous Section and which is very useful in investigating the interplay between perturbative and non-perturbative effects [@camjet]. This is the mean number of jets, defined as $$\label{n_jets}
n_{\mbox{\tiny jets}}\equiv\cN(y)=\sum_{n=1}^N n F_n(y),$$ where $F_n(y)$ is nothing else than the $n$-jet fraction introduced in eq. (\[fn\]) in theoretical terms (thus, $F_n(y)={f}_n(y)$ and $N=4$ in Sect. \[subsec\_fixedorder\]), i.e., as a ratio of cross sections. From the experimental point of view ($F_n(y)=\tilde{f}_n(y)$ and $N\ar\infty$), the corresponding quantity is defined as a ratio of numbers of events, i.e., $$\label{tfn}
\tilde{f}_n(y)=\frac{N_n(y)}{N_{\mbox{\tiny tot}}(y)}\,,$$ where $N_n(y)$ is the number of $n$-jet events and $N_{\mbox{\tiny tot}}(y)$ the total hadronic sample. (This is the definition that we will adopt in computing the $n_{\mbox{\tiny jets}}$ rates using the MC programs, in Sect. \[subsec\_jetrates\].)
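In practice, eqs. (\[n\_jets\]) and (\[tfn\]) amount to nothing more than the following bookkeeping, here sketched for a generic list of per-event jet multiplicities returned by any clustering routine at a given $\ycut$:

```python
from collections import Counter

def mean_njets(njets_per_event):
    """n_jets = sum_n n*N_n(y)/N_tot(y), i.e. eqs. (n_jets) and (tfn)."""
    counts = Counter(njets_per_event)            # N_n(y)
    n_tot = float(len(njets_per_event))          # N_tot(y)
    return sum(n*c for n, c in counts.items())/n_tot

print(mean_njets([2, 2, 3, 2, 4, 3, 2]))         # toy sample -> 2.571...
```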
The mean number of jets can be studied as a function of the jet resolution parameter $y$, down to arbitrarily low values, at fixed energy. Furthermore, its perturbative behavior at very low values of $y$ can be computed including resummation of leading and next-to-leading logarithmic terms to all orders in perturbation theory [@CDFW1]. The corresponding predictions (particularly accurate at small $y$’s) can then be matched with the fixed-order results (especially reliable at large $y$’s) of the previous Section over an appropriate interval in $y$, to give reliable pQCD estimates throughout the whole range of $y$. Furthermore, these results are quite stable against variation of the scale $\mu$ while being particularly sensitive to $\Lambda^{(5)}_{\overline{\mbox{\tiny{MS}}}}$, making the jet multiplicity $n_{\mbox{\tiny jets}}$ a particularly good quantity for the determination of $\as$. Non-perturbative contributions to $n_{\mbox{\tiny jets}}$ can then be estimated by comparing the perturbative results with those of MC event generators. This will be done in Sect. \[subsec\_jetrates\].
We first compute the resummed predictions for the DL and CL algorithms. In doing so, we make use of the results and formulae presented in Ref. [@CDFW1] for the case of the D scheme. Those were obtained in the case of multiple soft-gluon emission at small values of the resolution parameter. As we have already stressed in the Introduction, since in the soft limit in which either of $i$ or $j$ is much softer than the other the two measures (\[yD\]) and (\[yL\]) coincide, all the soft-gluon exponentiation properties of the D algorithm carry over to the L-measure one, provided no unnatural partition of the phase space is introduced by preclustering and/or reassignment. We do not perform the same fit for the AR algorithms, for two reasons. First, neither the AR0 nor the AR1 scheme has proven itself particularly suitable for pQCD studies, because of the larger scale dependence of their NLO rates. Second, the calculation of the resummed predictions would presumably be more complicated than in the case of the D and L schemes and would deserve a whole new paper of its own.
Recalling that through second order in $\as$ the two-jet fraction reads as $$\label{f2}
f_2(y) = 1 - \left(\frac{\as}{2\pi}\right) A(y) + \left(\frac{\as}{2\pi}\right)^2 \left(2A(y)-B(y)-C(y)\right) + ...\,,$$ and using the expressions (\[f3\]) and (\[f4\]) of Sect. \[subsec\_fixedorder\], one easily finds that in ${\cal O}(\as^2)$ pQCD the mean number of jets is $$\label{Nypert}
\cN(y) = 2 + \left(\frac{\as}{2\pi}\right) A(y) + \left(\frac{\as}{2\pi}\right)^2 \left( B(y)+2C(y)-2A(y) \right) + ...$$
The behavior of the first-order coefficient $A(y)$ at small $y$ is given by $$\label{Fx}
A(y)= C_F\left(\ln^2 y + 3\ln y + r(y)\right),$$ with the non-logarithmic contribution $r(y)$ as given in Refs. [@durham; @partition], whereas for the second-order coefficient one has $$\label{Fy}
F(y)\equiv B(y)+2C(y)-2A(y)= C_F \left\{\frac{C_A}{12}\ln^4 y - \frac{C_A-N_f}{9}\ln^3 y + \cO{\ln^2 y}\right\}.$$ In eqs. (\[Fx\]) and (\[Fy\]) the two quantities $C_F=4/3$ and $C_A=3$ are the Casimir operators of the fundamental and adjoint representations of the QCD gauge group $SU(N_C)$, these quantifying the strength of the $q\ar qg$ and $g\ar gg$ splittings, respectively, with $N_C=3$ the number of colors, whereas the number of flavors is $N_f=5$.
Note that, as long as terms of order $\ln^2 y$ are neglected, the above expressions are identical for the two L-measure based algorithms DL and CL (and so they are for the D, A or C schemes). In order to introduce the algorithm-dependent ${\cal O}(\ln^2y)$ coefficients we adopt the same procedure as in Ref. [@camjet]. That is, we perform a fit of the form (\[fit\]), restricted to the interval, say, $0.001<y<0.01$, with the coefficients $k_3$ and $k_4$ fixed at the values prescribed by eq. (\[Fy\]). The quantities $k_0$, $k_1$, $k_2$ are instead treated as free parameters. The numerical results are given in Tab. \[tab\_kparam\_smally\]. Therefore, one can simply use the fits in Tab. \[tab\_kparam\_smally\] for, say, $y<0.005$ and those in Tab. \[tab\_kparam\_largey\] for $y>0.005$. (Indeed, over the region $0.001<y<0.01$, the transition between the two parameterizations is very smooth.) This way, the second-order coefficient $F(y)$ can be obtained over the whole range $y<0.03$ (i.e., the DL and CL limits given in the second column of Tab. \[tab\_kparam\_largey\] for $C(y)$).
[|c|c|c|c|c|c|]{}
------------------------------------------------------------------------
Algorithm & $k_0$ & $k_1$ & $k_2$ & $k_3$ & $k_4$\
------------------------------------------------------------------------
DL & $-76.264$ & $4.257$ & $4.108$ & $-0.296$ & $0.333$\
CL & $14.275$ & $-27.217$ & $5.522$ & $-0.296$ & $0.333$\
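The two parameterizations can then be combined as described above; the sketch below switches between them at $y=0.005$ (any point in the overlap region $0.001<y<0.01$ would do, given the smooth transition), with the large-$y$ coefficient functions passed in as callables built from Tab. \[tab\_kparam\_largey\].

```python
import math

K_SMALL_Y = {   # Tab. [tab_kparam_smally]: k3, k4 fixed by eq. (Fy)
    "DL": [-76.264, 4.257, 4.108, -0.296, 0.333],
    "CL": [14.275, -27.217, 5.522, -0.296, 0.333],
}

def poly_lny(y, coeffs):
    L = math.log(1.0/y)
    return sum(k*L**n for n, k in enumerate(coeffs))

def F_second_order(y, scheme, A, B, C, y_switch=0.005):
    """F(y) = B(y) + 2C(y) - 2A(y); small-y fit below y_switch, large-y fits above."""
    if y < y_switch:
        return poly_lny(y, K_SMALL_Y[scheme])
    return B(y) + 2.0*C(y) - 2.0*A(y)
```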
To obtain the final perturbative predictions for the mean number of jets, we now proceed as in Ref. [@CDFW1]. To next-to-leading logarithmic (NLL) accuracy, the resummed results are independent of the version of the algorithm. Therefore the only differences between DL and CL come from the matching to the fixed-order results. We simply subtract the first- and second-order terms of the NLL resummed result and substitute the corresponding exact terms. Denoting by $\cN_q$ the NLL multiplicity in a quark jet, given in [@CDFW1], we obtain $$\label{nfin}
\cN(y) = 2\cN_q(y) + C_F \left(\frac{\as}{2\pi}\right) r(y) + \left(\frac{\as}{2\pi}\right)^2 \left( F(y)-2F_q(y) \right),$$ where $F_q$ is the second-order coefficient in $\cN_q$, given in [@CDW2]: $$\begin{aligned}
\label{nadd}
F_q(y) & = & C_F \left\{\frac{1}{24}C_A\ln^4 y
- \frac{1}{18}(C_A-N_f)\ln^3 y \right. \nonumber \\
& & \left. + \frac{N_f}{9}\left(1-\frac{C_F}{C_A}\right)
\left[\left(4\frac{C_F}{C_A}-1\right)\frac{N_f}{C_A}
-1\right]\ln^2y \right\}.\end{aligned}$$
We will make practical use of the formulae (\[nfin\])–(\[nadd\]) later on, in Sect. \[subsec\_jetrates\].
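For reference, eq. (\[nadd\]) translates into the short function below, with $C_F=4/3$, $C_A=3$ and $N_f=5$ as above; the matched prediction of eq. (\[nfin\]) is sketched alongside, with the resummed $\cN_q$ of Ref. [@CDFW1], the non-logarithmic function $r(y)$ and the coefficient $F(y)$ assumed to be supplied externally (their detailed forms are not reproduced here).

```python
import math

CF, CA, NF = 4.0/3.0, 3.0, 5.0

def F_q(y):
    """Second-order coefficient of the NLL quark-jet multiplicity, eq. (nadd)."""
    L = math.log(y)
    return CF*(CA/24.0*L**4
               - (CA - NF)/18.0*L**3
               + NF/9.0*(1.0 - CF/CA)*((4.0*CF/CA - 1.0)*NF/CA - 1.0)*L**2)

def n_jets_matched(y, alphas, N_q, r, F):
    """Eq. (nfin): N(y) = 2 N_q(y) + C_F (as/2pi) r(y) + (as/2pi)^2 [F(y) - 2 F_q(y)]."""
    a = alphas/(2.0*math.pi)
    return 2.0*N_q(y) + CF*a*r(y) + a*a*(F(y) - 2.0*F_q(y))
```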
Non-perturbative comparisons {#sec_nonpertcomparison}
============================
In this Section we will attempt to quantify the effects due to hadronization for the various algorithms we have been discussing so far. In doing so we will resort to three among the most widely used QCD-based MC event generators: HERWIG [@herwig] (version 5.9 [@herwig59]), JETSET [@jetset] (PYTHIA version 6.1 [@pythia], which incorporates JETSET 7.4) and ARIADNE (version 4.10 [@ariadne]). In order to avoid ‘philosophical’ arguments about what [*hadronization*]{} actually is, we give here an [*operational*]{} definition useful for our purposes: [*hadronization corrections*]{} are the ‘empirical adjustments’ applied to the theoretical perturbative predictions before comparing them with the experiment. In an event generator, the former is represented by the partonic state before the hadronization routines are called and the latter by the state of final particles after hadronization and decays.
However, one must note a difference between the two. In the end there is an unambiguous identification between the hadron level of a generator and experiment, since the former is eventually tuned to reproduce data. The partonic state, on the other hand, is not a physically well-defined quantity. We therefore have to cope with the arbitrariness inherent in generators, which generally implement only enhanced terms of the infrared (i.e., soft and collinear) dynamics of quarks and gluons, thus introducing unnatural cut-offs and kinematic boundaries into the original QCD evolution. As a consequence, the mutilations done to the exact QCD dynamics in its PS approximation could well give rise to non-perturbative contributions already at the parton level [@camjet].
In many studies, one wants to take one step further, and extract an $\as$ value from the deduced partonic level, based on the same parton-shower generator. This is more dangerous. For example, it has been shown in Ref. [@camjet] that at the parton level HERWIG overestimates the number of jets, as compared to the pQCD result, if the same $\as$ is used. Therefore a larger $\as$ has to be used for the resummed pQCD results than in the shower to obtain the same parton level. Clearly, this knowledge is important in order to extract an $\as$ value, but it is irrelevant in a study of hadronization corrections. (More details on this point will be given later on, in Sect. \[subsec\_jetrates\].) The only thing we would like to recall here is that progress in this direction is being made: e.g., that exact ${\cal O}(\alpha_s^2)$ LO matrix elements have been dressed with string fragmentation in the JETSET modeling for a long time, that further studies in the same environment involving a matching of the mentioned MEs to the parton cascade have also been carried out [@andre] and, finally, that an ‘${\cal O}(\alpha_s^2)$ + PS + [cluster hadronization]{}’ version of HERWIG will soon be publicly available [@Bryan]. The general problem of how completely to match the ‘two parton levels’ remains a key issue to be addressed, but that is evidently far beyond the scope of this paper.
Therefore, for most part of this Section we leave aside the analytical formalism of the previous one, and only compare the partonic and the hadronic levels of generators. (We will however come back to it in one instance, at the very end of Subsect. \[subsec\_jetrates\].) One is indeed comforted in doing so by what we have already mentioned and will illustrate below: that we do find agreement among different MC programs. It is rather straightforward to study hadronization corrections in this spirit, since the generators provide lists of all partons and hadrons, event by event. In each case, jets are reconstructed both from the quarks and gluons at the end of the parton cascade and from the particles arising after hadronization and decays[^7]. We will perform our analysis in the following Subsections. Each of these corresponds to a different phenomenological context. In the first, we study the three-jet resolution in a simple tube model. In \[subsec\_jetrates\] we study hadronic events at LEP1, focusing our attention to the case of typical multi-jet quantities, in particular the mean number of jets defined in eq. (\[n\_jets\]). In the following Subsection the emphasis will be on some kinematic properties of two- and three-jet events at LEP1 energies. Finally, in Subsect. \[subsec\_WW\], we will study hadronization effects in the context of the mass reconstruction of $W^\pm$-bosons in four-jet events at LEP2.
Before proceeding further, for completeness, we briefly recall the properties of the three mentioned MC event generators. implements the parton shower by coherent branching of the partons involved down to a fixed transverse momentum scale $Q_0\sim 1$ GeV, and then converts these partons into hadrons using a cluster hadronization model [@clu]. In particular, the branching algorithm includes angular-ordering and azimuthal correlations of the emission (due to QCD coherence) along with the retention of gluon polarization and $p_\perp$ is the $\as$ scale. The shower algorithm orders emissions in decreasing mass, with angular ordering imposed as an additional constraint, down to a cut-off mass $Q_0\sim 1$ GeV. Azimuthal anisotropies from coherence and gluon polarization are also included, and the $\as$ scale is $p_{\perp}^2$. Hadronization is done according to the Lund string model [@AGIS]. In [@ariadne], the ordering of emissions is in the invariant transverse momentum defined above in eq. (\[LL:invpt\]) for the algorithm. The same is used as the scale in $\as$ and the cascade is continued down to $p_{\perp 0}\approx0.6$ GeV. The coherence, treated by angular ordering in the and parton showers, is inherent in the way gluons are emitted as dipole radiation from color connected pairs of partons. The azimuthal anisotropy due to gluon polarization is not explicitly taken into account but is reproduced to some extent by the dipole structure of the emissions. Hadronization is handled by the string fragmentation[^8].
Above we have seen that the algorithms in part are based on different considerations. One example is the picture of the perturbative shower evolution, which can be organized either in terms of decreasing opening angles of emissions or in terms of decreasing transverse momenta of emissions. Either of these two pictures gives a perfectly legitimate description of nature, but they arrive at different answers for what is the ‘right’ way to cluster a set of $m$ partons into $n$ jets. Even within a given calculational scheme, further uncertainties exist, such as how to distribute the recoil of an emission, i.e., the details of how energy and momentum is conserved. Add to this differences in the way non-perturbative physics is viewed, e.g., in string vs. cluster fragmentation models, and it is clear that there is not one unique view of the world. Therefore there is also not a unique criterion for what is the best possible clustering algorithm. One may then expect to find that different algorithms have complementary strengths and realize that the choice of algorithm should be based on the intended application.
Tube model results {#subsec_tubemodel}
------------------
As a preliminary exercise, we consider the simplest possible hadronization mechanism [@tube], the so-called ‘longitudinal phase space’ or ‘tube’ model, in fact a simplified version of the Lund string model. Here a color-connected pair of partons produces a jet of light hadrons over a cylindrical $(y,p_{\perp})$ phase space, where $y$ is the rapidity and $p_{\perp}$ the transverse momentum (note that $y=\frac{1}{2}\ln[(E+p_z)/(E-p_z)]$, with $z$ the cylinder axis and $(E,p_x,p_y,p_z)$ the four-momentum). In practice, a number $N$ of massless four-momenta are generated at random with an exponential transverse momentum distribution and a uniform rapidity distribution in the interval $-Y<y<Y$. The maximum rapidity $Y$ is given by $E_{\mathrm{cm}}=Q=2\lambda\sinh Y$ and the multiplicity $N$ by $\lambda = N\VEV{p_{\perp}}/2Y$ (see Ref. [@camjet] for details). As illustrative values, we have taken $\lambda=0.5$ GeV and $\VEV{p_{\perp}}=0.3$ GeV. In Ref. [@camjet], use was made of this model in order to illustrate some shortcomings of the D algorithm: that is, junk-jet formation and misclustering. We resume here those studies for the case of the algorithms that were not treated there.
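A toy implementation of this model takes only a few lines; the sketch below follows the prescription just given (it does not enforce exact four-momentum conservation, which is immaterial for the qualitative studies performed here).

```python
import math, random

LAM, MEAN_PT = 0.5, 0.3   # GeV, the illustrative values quoted in the text

def tube_event(Q, rng=random):
    """Massless momenta with exponential p_T and flat rapidity in (-Y, Y)."""
    Y = math.asinh(Q/(2.0*LAM))                     # from Q = 2*lambda*sinh(Y)
    N = max(2, int(round(2.0*Y*LAM/MEAN_PT)))       # from lambda = N*<pT>/(2Y)
    event = []
    for _ in range(N):
        pt = rng.expovariate(1.0/MEAN_PT)           # exponential p_T, mean <pT>
        y = rng.uniform(-Y, Y)                      # flat rapidity
        phi = rng.uniform(0.0, 2.0*math.pi)
        event.append((pt*math.cosh(y),              # E
                      pt*math.cos(phi),             # p_x
                      pt*math.sin(phi),             # p_y
                      pt*math.sinh(y)))             # p_z
    return event

particles = tube_event(Q=91.2)
```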
As explained in Sect. \[subsec\_Angular\], by studying the mean value of the scale $\ycut=y_3$ (denoted by $\VEV{y_3}$) at which a third junk-jet is resolved in the tube model one can get an insight into the effectiveness of the modifications to the D scheme proposed in Ref. [@camjet], in the sense that the smaller $\VEV{y_3}$ is, the more contained the junk-jet phenomenon is. As a matter of fact, both in the A and C algorithms [@camjet] the average value of $y_3$ is much smaller, as compared to the D scheme, over an enormous range of energies $Q\equiv\Ecm$. Fig. \[fig\_y3\_bis\] illustrates this.
We supplement the results for D, A and C of Ref. [@camjet] by adding in Fig. \[fig\_y3\_bis\] the corresponding behaviors for the L, DL and CL binary schemes, and for the AR1 scheme. Note that here L refers to the original scheme implementing both preclustering and reassignment.
It is curious to realize once more (see Sect. \[subsec\_fixedorder\]) the beneficial effects in the DL scheme of adopting the L measure instead of the D one, as the corresponding data points lie well below those of the original D scheme. In contrast, preclustering and rearrangement bring no noticeable improvement in this context, neither separately nor together: notice the overlap of the DL and L curves. (For readability of the figures, we have avoided plotting the cases of the scheme with only one of preclustering and reassignment.) This probably indicates that the junk-jet formation is dominated by the existence of a single high-$p_{\perp}$ track or a few very nearby tracks. However, it is clear that the further step of angular-ordering is needed even in the case of the L measure in order to bring the results further down. Once this is implemented, there is no sizable difference between the performances of the two different measures (C vs. CL). In a sense, the sole adoption of the $y_{ij}$ of eq. (\[yL\]) helps to alleviate the original problem, but is not enough to cure it in the same way the angular-ordering does. In addition, it is evident that the latter procedure removes any distinction between the two measures. For reference, we should also mention that the scheme which we do not study further here (see Sect. \[subsec\_Angular\]) can boast identical performances to those of the A, C and CL schemes. The AR1 algorithm nicely interpolates between the D and L ones, following the relative behaviour of the measures, similarly to the case of Fig. \[fig\_afunction\_bis\].
Thanks to simple kinematic relations valid within the tube model and depending on the clustering procedures of the various algorithms, it was shown in Ref. [@camjet] that one can approximate the behaviors of the binary algorithms in Fig. \[fig\_y3\_bis\] by means of some analytical formulae, as a function of the collider CM energy $Q$. We recall here the scalings for the J, D, A and C schemes: $$\label{yJDtube}
\VEV{y_3}^{\mathrm J}\sim\frac{1}{Q}\,,\qquad\qquad
\VEV{y_3}^{\mathrm D}\sim\left(\frac{\ln Q}{Q}\right)^2,$$ $$\label{yACtube}
\VEV{y_3}^{\mathrm A}\approx\VEV{y_3}^{\mathrm C}\approx\VEV{y_3}^{\mathrm{CL}}\sim\left(\frac{\ln(Q/\VEV{p_\perp})}{Q}\right)^2.$$ As evident from the plot, CL coincides with A and C. It is also easy to show that the corresponding scaling for both the DL and L schemes reads as follows: $$\label{yLDLtube}
\VEV{y_3}^{\mathrm L}\approx\VEV{y_3}^{\mathrm{DL}}\sim\left(\frac{\ln Q}{Q}\right)^2.$$ The curves in Fig. \[fig\_y3\_bis\] correspond to the above formulae: the dashed line to the J scheme, the continuous one to the D algorithm, the dotted one refers to L and DL whereas the dot-dashed one to the A, C and CL cases.
In order to test the efficiency of the soft-freezing procedure proper of the C scheme a good quantity to study is the mean number of particles contained in the third (softest) jet when $\ycut=y_3$, which was denoted by $\VEV{n_3}$ in Ref. [@camjet]. Clearly, the smaller this quantity is on average, the fewer particles have been attracted inside the original resolved (soft) cluster (see the discussion in Sect. \[subsec\_Cambridge\]), and the misclustering effect is thus reduced. Fig. \[fig\_n3\_bis\] shows $\VEV{n_3}$ as a function of the CM energy, over the same $\Ecm$ range as in the previous plot. The relative behaviors of the J, D, A and C schemes were already illustrated in detail in Ref. [@camjet], so we do not dwell here on them. Rather we emphasize that the adoption of the L measure is less helpful in this case, as the DL and D curves almost coincide. Furthermore, it is worth noting that the reassignment procedure increases the mean value of $n_3$, as can be appreciated by comparing the data points labeled ‘L’ and ‘L (no reassignment)’. It is therefore clear that such a procedure, which does remedy the problem of the misassignment of soft particles (see Sect. \[subsec\_LUCLUS\]), is instead inefficient in suppressing misclustering. This is evident if one considers that such a step tends to ‘balance’ the event, by reassigning tracks among clusters so that the jets show in the end similar multiplicity (thus acting in contrast to what soft-freezing does).
Curiously, the preclustering seems to work on the same footing as the angular-ordering: compare ‘A’ and ‘L (no reassignment)’. However, here the almost exact agreement is somewhat of a coincidence. As intimated in the Introduction, this is a consequence of having adopted the default $d_{\mathrm{init}}$ value in producing the ‘L (no reassignment)’ curve. We have verified that in the limit $d_{\mathrm{init}} \to 0$ the DL results are in fact recovered. This makes clear the possible danger of ascribing artifacts of the algorithm to real physics effects, if a wrong setting of $d_{\mathrm{init}}$ is adopted. Finally, like in the case of $\VEV{y_3}$, the adoption of the soft-freezing procedure wipes out differences between the D and L measures (compare C and CL). As for the AR1 scheme, we note that it yields the largest $\VEV{n_3}$, especially at high energies, witnessing the tendency of this algorithm to cluster soft resolved particles which are sources of dipoles (see the discussion in Subsect. \[subsec\_DICLUS\] while commenting on Fig. \[fig\_soft\_resolved\]).
Evidently, the tube model adopted in the previous paragraphs should be regarded as a useful tool to test the performances of the various algorithms with respect to the misbehaviors responsible for junk-jet formation and misclustering, which naturally arise in the physics domain governed by soft radiation, for all algorithms based on $p_{\perp}$- and $m$-measures [@camjet]. In the end, however, the benchmark ground on which to verify the goodness of an algorithm in terms of hadronization corrections should be a full MC program, such as those previously mentioned.
Jet rates {#subsec_jetrates}
---------
In Ref. [@camjet] the quantity $n_{\mbox{\tiny jets}}$ was studied, both at hadron and parton level, as a function of the resolution parameter, down to the minimum figure of $\ycut=0.0001$. Such a choice of range was not a random one. Indeed, it is well known that the energy scale at which QCD enters its non-perturbative phase is around 1 GeV. This is also the typical value at which the QCD-based MC programs stop the parton cascade and turn on the hadronization process. Clearly, if one intends to probe the interface between perturbative and non-perturbative QCD by studying jet properties as a function of $\ycut$ at LEP1 energies, where $Q\approx M_Z$, one should aim for an algorithm with small hadronization corrections for $\ycut$ values down to $(1~\mbox{GeV})^2/Q^2\approx 10^{-4}$. Otherwise it becomes more difficult to understand the QCD dynamics in such a critical regime, and systematic uncertainties (due to different MC modelings) must be accounted for.
The fact that the J algorithm (see Fig. \[fig\_hadronisation\_herwig2\]) fails to keep the size of the hadronization corrections small at low $\ycut$ (as compared to, e.g., the D scheme, see Fig. \[fig\_hadronisation\_herwig1\]) is a consequence of the adoption of a measure which is an invariant-mass one, rather than a transverse momentum. We have in fact already recalled in the Introduction that a $p_{\perp}$-distance is better adapted to the conventional picture of non-perturbative jet fragmentation and therefore naturally allows a cleaner separation of perturbative and non-perturbative physics [@durham; @partition; @yuri]. We also quantified this point in the tube model when mentioning that the power-suppression in $Q$ on $\VEV{y_3}$ goes like $(\ln Q/Q)^2$ in the D scheme, whereas the corresponding behavior in the J one is $1/Q$: see eq. (\[yJDtube\]) and, for a more theoretically sound basis, also Ref. [@BPY].
That a $p_{\perp}$-based measure is indeed a better choice is confirmed not only by the fact that also the various L-based algorithms can boast a power-suppression similar to that of the D scheme, see eq. (\[yLDLtube\]), but also by observing that a common feature of Figs. \[fig\_hadronisation\_herwig1\]–\[fig\_hadronisation\_herwig2\] is that [*all*]{} the transverse momentum based schemes (also the AR ones) are better behaved than the J one at small $\ycut$’s.
Comparisons must be done with some care, however, since the horizontal $y$ scale means different things for many of the algorithms shown in Figs. \[fig\_hadronisation\_herwig1\]–\[fig\_hadronisation\_herwig2\]. For a pair of partons/hadrons $i,j$, the definitions give that $(y_{ij})_{\Jade} > (y_{ij})_{\Durham} > (y_{ij})_{\Luclus}$, since the three measures share the same angular dependence and differ only in the energy factors, being proportional to $E_i E_j$, $\min(E_i^2,E_j^2)$ and $E_i^2 E_j^2/(E_i + E_j)^2$, respectively (disregarding the difference between $|\bfp_i|$ and $E_i$ for the Luclus measure). It then follows that the Jade measure differs significantly from the other two when $E_i \ll E_j$, that the Durham and Luclus ones differ by up to a factor of four for $E_i \approx E_j$, and that the Jade and Luclus ones always differ by more than a factor of four. Even more different is the Geneva scheme, where the energy factor on the same scale would be $(4/9) E_i E_j E_{\mathrm{vis}}^2/(E_i + E_j)^2$. Since normally $E_i + E_j \ll E_{\mathrm{vis}}$, it follows that $(y_{ij})_{\Geneva} \gg (y_{ij})_{\Jade}$ most of the time. The transition region between pQCD and non-pQCD is thus no longer around $10^{-4}$ in the Geneva scheme, but maybe more like at $10^{-2}$, judging by the jet rate in Fig. \[fig\_hadronisation\_herwig2\].
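These orderings are easy to verify numerically; the functions below just encode the energy factors quoted above (common angular factor and overall normalization omitted), and the sample values are arbitrary.

```python
def jade(Ei, Ej):         return Ei*Ej
def durham(Ei, Ej):       return min(Ei, Ej)**2
def luclus(Ei, Ej):       return (Ei*Ej)**2/(Ei + Ej)**2
def geneva(Ei, Ej, Evis): return (4.0/9.0)*Ei*Ej*Evis**2/(Ei + Ej)**2

Ei, Ej, Evis = 5.0, 20.0, 91.2
print(jade(Ei, Ej), durham(Ei, Ej), luclus(Ei, Ej), geneva(Ei, Ej, Evis))
# 100.0  25.0  16.0  ~591: Jade > Durham > Luclus, and Geneva largest of all.
```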
An alternative procedure to compare jet algorithms is offered by Fig. \[fig\_parton-hadron\_herwig\], where the average hadron jet multiplicity is plotted against the average parton one. The $n_{\mbox{\tiny jets}}$ values in the plot are exactly the same as defined by the curves in Figs. \[fig\_hadronisation\_herwig1\]–\[fig\_hadronisation\_herwig2\], but with the explicit $\ycut$ dependence eliminated in each pair of average jet numbers. This is done for the same algorithms analyzed in the previous two figures. Both the parton and the hadron jet multiplicity increase with a diminishing $\ycut$. The criterion of a good algorithm as one with small hadronization corrections thus translates into one with a curve close to the diagonal in the left-hand side of Fig. \[fig\_parton-hadron\_herwig\] or, alternatively, to the horizontal at unity in the right-hand side of Fig. \[fig\_parton-hadron\_herwig\], where the hadron-to-parton jet multiplicity ratio is presented.
If one intends to probe the QCD phase transition at small $\ycut$, then the relevant region is that at large jet multiplicities. There, we notice that four ‘bunches’ of curves distinctively separate. Four algorithms remain particularly close to the diagonal/horizontal line. Not surprisingly, among these are the schemes that had been especially designed (in Ref. [@camjet]) to improve the performances of the scheme: such as the (asterisks symbols) and the (open-circle symbols), which are the closest to the diagonal/horizontal line. To this group also belong the scheme with only preclustering implemented (with $d_{\mathrm{init}}$ kept at its default value 0.25 GeV even at low $\ycut$, full-down-triangle symbols) and the algorithm using the measure (full-circle symbols). This is not surprising if one recalls Fig. \[fig\_n3\_bis\] in the tube model. That plot, on the one hand, had already shown the residual effects of preclustering with the default $d_{\mathrm{init}}$ and, on the other hand, had also made the point that the scheme using the measure performs as well as the original one with the distance. In addition, always in line with what was assessed in the tube model, it is clear that the measure alone is not enough to improve the hadronization performances of the scheme (full- and open-square symbols, respectively). In fact, the DL algorithm belongs to another set of curves (along with the , and the two other configurations of the ) whose hadronization corrections are much larger with increasing multiplicity (i.e. decreasing $\ycut$). A third group is constituted by the two algorithms, which perform very well in the two-jet-dominated region but rather worse as soon as a third jet is resolved. The algorithm performs worst, since it starts out with the largest negative hadronization corrections and thereafter steeply shoots up towards the largest positive ones.
Fig. \[fig\_parton-hadron\_cambridge\] reproduces the rates of the C algorithm, already given in Fig. \[fig\_parton-hadron\_herwig\] for the case of HERWIG, now extended to include the data points of the other two generators, too. Indeed, the pattern of the hadron/parton multiplicity is very similar among the three programs, though with a more marked tendency of the rates to depart from the diagonal in the latter two cases. We have verified (though not shown) that similar consistent behaviors among the three generators also occur in the case of the other jet schemes.
A very interesting aspect of Figs. \[fig\_hadronisation\_herwig1\]–\[fig\_parton-hadron\_herwig\] is the ‘negative hadronization corrections’ (see Ref. [@BKSS] and also Ref. [@negative] for ), i.e., that fewer hadron than parton jets are reconstructed in the three-jet-dominated region. This can be easily observed in the case of the , the , the ‘two’ and the various schemes. Though less visible there, it also occurs for the and algorithms. Not even the two modes of are immune from it, though here the effects are small.
This phenomenon has a very straightforward interpretation in terms of the well-known string [@stringeff] or drag [@drageff] effect. The concept can be illustrated by considering a three-parton event $q\overline{q}g$. To leading order in $1/N_C^2$, the system separates into two color dipoles, one $qg$ one, where the color of the quark is compensated by the anticolor of the gluon, and another similar $g\overline{q}$ one. Each of these dipoles can act as a source for further softer perturbative emission or define the topology of a non-perturbatively hadronizing string piece. In the (transverse) rest frame of such a dipole the emission of partons and hadrons is isotropic in azimuth but, when viewed in the CM frame of the event, particle production is not symmetric around the three jet axes. Instead enhanced soft particle production is found in the angular ranges spanned by the dipoles, i.e., in the $qg$ and $g\overline{q}$ ranges. There is no corresponding $q\overline{q}$ dipole — in fact, the color-suppressed dipole of this kind enters with a negative sign, i.e., provides destructive interference to the other two. Therefore the $q\overline{q}$ angular range has a much smaller soft particle production than the other two ranges. This effect is well established experimentally [@stringdata].
A corollary of the string/drag effect is that reconstructed jet directions also can display a systematic bias. For instance, with respect to the original quark parton direction, softer particles will predominantly be produced on the side of the gluon jet, and less on the antiquark side. Therefore a naive clustering of hadrons will find a quark jet axis somewhat shifted towards the gluon. The original parton direction is not known experimentally, of course, but the effect is visible by harder particles in the quark jet appearing slightly more on the antiquark side (lined up with the original parton direction) and softer particles more on the gluon side [@stringasym]. The antiquark is similarly affected, while the gluon receives opposite contributions from the two string pieces attached to it. Simple geometry shows that a dipole is more boosted and the string/drag effect therefore more developed when two partons are nearby. If the gluon is closer to the quark than to the antiquark, say, the gluon and quark reconstructed directions will be shifted closer to each other, while the antiquark direction is less affected. (The reconstruction of jet directions is further tested in the next Section.) Thus a three-jet event becomes more two-jet like, in the sense that the $y_{3}$ value where the event flips from a two-jet to a three-jet is lower on the hadron than on the parton level. Hence the negative hadronization corrections. These arguments are valid both for string and cluster hadronization — the clusters are aligned by the same colour topology as the strings and are similarly boosted — and both models correctly reproduce the measured string/drag effect. In the following, to shorten our discussion, we will discuss the dynamics of hadronization in terms of the string model, but recall that an equivalent formulation in terms of the cluster one is always possible.
The magnitude of the above effect depends on the details of the algorithm used. For instance, the shift inwards of the two jet directions can be viewed as a kinematical consequence of replacing two massless partons with two massive jets, while still retaining the same total invariant mass of the pair. Thus the negative hadronization corrections should be absent in the E scheme, where the correct invariant mass is used as distance measure, eq. (\[Jade\_E\_dist\]). (But, of course, E has its own problems of misclustering, so is not the solution.) The corrections are still rather small in the normal J scheme. The AR algorithm is the only one deliberately designed to reflect particle production along hyperbolae, and to reconstruct the directions of the asymptotes of those hyperbolae, while others implicitly are based on a picture of a jet as a set of particles extending away from the origin along a fixed momentum direction. As we see, it does a pretty good job of its intended task in the two-jet region. Other things being the same, in this region it would thus be the preferred choice. The other algorithms have differently large negative hadronization corrections from this effect, reflecting the details of the distance measure and the clustering scheme. For instance, using $|\bfp_i|$ rather than $E_i$ emphasizes the importance of jet masses acquired in the hadronization; thus the DL curve lies below the D one.
What is the most ‘desirable’ behaviour here is not so easy to tell. On the one hand, small hadronization corrections would be better for perturbative studies, on the other hand, the string phenomenon is a genuinely interesting piece of non-perturbative physics that deserves to be studied in its own right. And even for an $\as$ determination, ultimately what matters is not whether a hadronization correction has to be applied or not, but how large an error bar has to be assigned to this correction. Fig. \[fig\_parton-hadron\_jetset\] (left picture) here illustrates that the event generators we have tried agree very well once again. That is, the phenomena described above are of general validity, and seem to be accountable to a similar extent for most measures.
From the figures, it would seem that the above string/drag effects are present for three-jets but absent for higher jet multiplicities. This is not fully correct, however. In any hadronic $n$-jet event, the two closest jets are likely to correspond to dipole-connected partons. (In the leading-log picture of shower evolution, the only exception is given by the rather infrequent $g \to q\overline{q}$ branchings.) Therefore the hadron-level jet directions will sit closer than the parton-level ones, and one is more likely to get an $(n-1)$-jet event on the hadron level than on the parton level. The turnaround of the curves, and ultimately the larger hadron than parton average jet multiplicity at small $\ycut$, is thus rather a reflection of other effects entering and becoming more important. These can collectively be classified as fluctuations in the hadronization process, but can have different origins. One example is the junk-jet formation discussed repeatedly above, e.g., in Sect. \[subsec\_tubemodel\]. Another is the kinematics of particle decays, especially of bottom and charm hadrons. This latter effect is illustrated in Fig. \[fig\_parton-hadron\_jetset\] (right picture), where results for the production of all initial quark flavors are compared with those for $u$ quarks only.
If jet rates are to be used to extract an $\as$ value, one also needs to understand the relation between the parton level curves given by a generator and those expected in an exact theory. Indeed, as previously recalled and as already shown in Ref. [@camjet] for the D, A and C algorithms, if the same $\as$ produced by the generator is used for the pQCD leading-log resummed $+$ ${\cal O}(\as^2)$ fixed-order predictions, then the corresponding parton level curves would fall below the hadron level over a much larger $\ycut$ spectrum. In fact, a good matching (for all $\ycut$’s) between the pQCD predictions and the parton level is obtained if the former use $\as \equiv
\as(M_Z^2) = 0.126$, instead of 0.114, the value obtained from the generator by interpreting the input parameter [QCDLAM]{}, with the default value of 0.18 GeV, as the NLO scale parameter $\lms$. The necessity of such ‘rescaling’ should be not surprising, as such an interpretation of [QCDLAM]{} is only justified in a small region of phase space (see [@CMW]) which should not be dominant.
Fig. \[fig\_as\_herwig\] plots the $n_{\mbox{\tiny jets}}$ rates as obtained from the formulae (\[nfin\])–(\[nadd\]), that is, the resummed predictions for the DL and CL algorithms, for three values of $\as$, against the parton and hadron levels. (They are the counterpart of those for the D, A and C schemes presented in Figs. 13–15 of Ref. [@camjet].) From Fig. \[fig\_as\_herwig\] it is then clear that, if the L measure is used instead of the D one, things go the other way round: the dotted curves, for which $\as=0.114$, are the ones closer to the MC parton level. This seems to indicate a further advantage in using this $p_{\perp}$ measure: the MC parton level reproduces more accurately the best perturbative results for the same $\as$. That is, it appears that this measure is more blind to the differences between the MC and the perturbative results than the D one. Furthermore, this is particularly true at very small $\ycut$, the critical regime where not only a reduced size of the hadronization corrections is required to study the transition between pQCD and non-pQCD, but possibly also a good matching between the theoretical and phenomenological ‘parton levels’. Even more reassuring is the fact that, of the two schemes, CL is the one doing best in that region (the dashed curve in the right-hand side plot practically coincides with the MC parton level), in line with various other results previously obtained for this new hybrid scheme.
As a summary of our hadronization studies in the context of multi-jet event rates at LEP1, we can recapitulate the following.
1. There is substantial correspondence between the simple tube model and more sophisticated MC programs. The main kinematic features recognised in the former reflect onto the latter. Thus, the simple hadronization mechanism based on a longitudinal phase space represents a good guidance in order to test in first instance the performances of new (and old) algorithms.
2. After full hadronization is implemented, four among the clustering schemes studied since the beginning appear to have rather contained hadronization corrections at small values of the resolution parameter, say, around $10^{-4}$ at LEP1 energies, corresponding to partonic multiplicities of five or so. These are the , the original scheme (i.e., that using the distance), the one using the measure and, curiously, the original scheme deprived of the reassignment step and implementing the default $d_{\mathrm{init}}$. All other schemes perform significantly worse, particularly the one, which appears to be very unstable. offers unique advantages in having very small hadronization corrections in the two-jet-dominated region, but then has larger corrections than other algorithms (except ) at smaller resolution scales.
3. The hadronization corrections come about for several reasons. The string/drag effects usually give a negative contribution at large $\ycut$, i.e., results in fewer jets on the hadron than on the parton level, the size of which depends on the details of the algorithm. Fluctuations in the hadronization process, such as charm and bottom decays and junk-jet/misclustering effects, give a positive contribution, that always wins out at small $\ycut$. That some algorithms have a smaller net hadronization correction at medium $\ycut$ thus in part is the result of a cancellation between opposite effects. From this point of view, our results for the case of the and schemes are in general agreement with those presented in Ref. [@stan].
4. Our results are substantially independent of the MC program used, that is, of the model adopted for the hadronization mechanism, and for the QCD cascade. This means that hadronization corrections can be estimated rather reliably for many algorithms. As a by-product of this conclusion, we observe that the angular-ordering procedure recommended in Ref. [@camjet] as a refinement of the D algorithm is then not restricted to the ‘angular ordered’ emission as implemented in the parton shower of the MC program exploited in that reference.
5. A warning should be borne in mind, concerning the comparison of parton and hadron level. It should be recalled that the partonic dynamics implemented in event generators is an approximation of the actual prediction, as the parton cascade only exploits some (logarithmically) enhanced terms of the infrared (soft and collinear) emission. Therefore, there is an intrinsic danger in interpreting the hadron–parton difference as generated by the MCs as a method-independent estimate of non-perturbative effects and simply adding it to the resummed predictions.
6. Specifically, we did not address here the key issue of how to combine the best perturbative QCD predictions, based on resummed contributions matched to fixed-order results, with hadronization corrections. However, in this respect, we have shown that the L $p_{\perp}$ distance measure seems to offer an interesting alternative to the traditional D one, in the sense that the theory and MC parton levels appear to match better in $\as$, especially when implemented along with the clustering sequence of the C algorithm.
Jet reconstruction {#subsec_jetrecon}
------------------
In this section we study various aspects of how well algorithms reconstruct jet directions and energies, as well as a few other related quantities. The results presented here have been obtained with the program, but all essential features come out very similarly with and .
Since some of the studies are based on reconstructing a fixed number of jets, like three or four, it should be noted that the and algorithms do not always allow this, and do not necessarily provide a unique answer. To understand these points, first consider simple binary joining algorithms, such as or . In these, it is always the cluster pair with smallest distance $y_{ij}$ that are joined next. Starting from $n$ clusters, there is thus a unique sequence of joinings giving $n-1, n-2, n-3, \ldots, 3,
2, 1$ clusters. If one wants to obtain three jets, say, one simply has to perform binary joinings till exactly three clusters remain. For an exercise like that, the $\ycut$ value need never even be specified.
For use in the continued discussion, let us make an alternative description. By $\hat{y}_{m(m-1)}$ we may denote the smallest $y_{ij}$ value in the $m$-cluster configuration, which thus sets the scale for the joining to $m-1$ clusters. Standard distance measures are constructed such that, in a joining, the joined cluster is always further away from any third particle than was the nearest of the two original clusters, i.e., $y_{\hat{ij}k} > \min (y_{ik},y_{jk})$. Thus, when joining the pair with smallest $y_{ij}$, the new configuration has a larger smallest $y_{ij}$. This way one obtains a unique ordered sequence of joining scales $\hat{y}_{n(n-1)} < \hat{y}_{(n-1)(n-2)} < \ldots <
\hat{y}_{43} < \hat{y}_{32} < \hat{y}_{21}$. Looking for three clusters, there is always a non-vanishing range of $\ycut$ values $\hat{y}_{43} < \ycut < \hat{y}_{32}$, and all $\ycut$ in this range correspond to exactly the same three-cluster configuration.
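To make the binary-joining bookkeeping concrete, the following schematic sketch (written in Python purely for illustration, and not taken from any of the programs discussed in this paper; the distance shown is the standard Durham/$k_\perp$ definition and E-scheme four-momentum addition is assumed) joins clusters down to a requested number of jets while recording the ordered sequence of joining scales $\hat{y}$ discussed above:

```python
import numpy as np

def y_durham(pi, pj, evis2):
    """Durham (kT) distance between two clusters given as (E, px, py, pz)."""
    ni, nj = np.linalg.norm(pi[1:]), np.linalg.norm(pj[1:])
    cos_th = np.dot(pi[1:], pj[1:]) / (ni * nj)
    return 2.0 * min(pi[0], pj[0]) ** 2 * (1.0 - cos_th) / evis2

def binary_cluster(particles, njets):
    """Join the closest pair (E-scheme recombination) until njets clusters remain.
    Returns the final clusters and the ordered joining scales y_hat."""
    clusters = [np.asarray(p, dtype=float) for p in particles]
    evis2 = sum(c[0] for c in clusters) ** 2
    scales = []
    while len(clusters) > njets:
        ymin, i, j = min((y_durham(clusters[a], clusters[b], evis2), a, b)
                         for a in range(len(clusters))
                         for b in range(a + 1, len(clusters)))
        scales.append(ymin)                      # y_hat_{m(m-1)} for the current m
        clusters[i] = clusters[i] + clusters[j]  # E-scheme: add four-momenta
        del clusters[j]
    return clusters, scales
```

Since the joined cluster is never closer to a third cluster than the nearer of its two parents, the recorded scales come out ordered and any requested jet multiplicity can be reached; this is precisely the property that the schemes discussed next do not share.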
In the scheme, on the other hand, $\ycut$ is used not only to interrupt a sequence of joinings, but also to influence the sequence itself. We recall that the procedure joins the pair with smallest $v_{ij} \equiv 2(1 - \cos\theta_{ij})$ among all those with $y_{ij} < \ycut$. Change $\ycut$, and you can change which pair is joined in the step from $m$ to $m-1$ clusters, and in turn all the subsequent joinings. Therefore, even if there should be a range $\hat{y}_{43} < \ycut < \hat{y}_{32}$, that range may split into subranges corresponding to different three-jet configurations. Furthermore, it may be impossible to obtain three clusters, since an infinitesimal $\ycut$ change may give a flip from one joining sequence, ending with four clusters, to a completely different one, ending with two. A further consequence of such flips is that the number of clusters need not be a monotonic function of $\ycut$.
The algorithm introduces one further task for $\ycut$, on top of the , namely to provide the scale for sterilization/soft-freezing. This increases the fraction of events that fail to reconstruct a requested number of jets, but reduces the number of cases with several different three-jet topologies.
One should not exaggerate the problem, however. Typically, only for 0.15% of LEP1 events is it impossible to find a three-jet configuration with , which increases to 1.3% in . Furthermore, 4.7% give two (or more) different three-cluster configurations for and 0.9% for . The numbers are somewhat higher for four-jets. None of the other algorithms failed to reconstruct the requested number of jets, nor do they have any ambiguity in which jets are reconstructed. In the studies below, events which failed to reconstruct are not considered at all, while the choice among alternative three-cluster configurations is simply based on which is found first. (In a search procedure that involves a measure of randomness, so there should be no special bias[^9].)
The parton shower starts out from a back-to-back $q \overline{q}$ pair. The observable event axis is smeared by the parton shower and hadronization, but one interesting measure is how well the original axis can be reconstructed. Thus clustering algorithms are requested to find two jets; alternatively measures such as thrust and sphericity can be used here. In the first result column of Tab. \[tab\_jet\_ang\] it is shown that most algorithms do comparably well, including thrust, while without reclustering does worse and sphericity has the largest error. In three-jet events there is no ‘correct’ answer, and so here the comparison is based on matching the jets clustered on the parton level with those obtained on the hadron level. Each event thus gives three angles. Only events with $0.85 < T < 0.95$ on the parton level have been used to produce the numbers. Again without reclustering does worse, third column of Tab. \[tab\_jet\_ang\], though less dramatically so than above, while does somewhat better than the others. The same pattern holds for four-jets (not shown).
[|l|c|c|c|c|c|]{}
------------------------------------------------------------------------
Algorithm & &\
------------------------------------------------------------------------
& $\VEV{\Delta\theta}$ & $\VEV{(p_z)_{\mathrm{back}}}$ & $\VEV{\Delta\theta}$ & $\sigma(\Delta E)$ & $\VEV{\Delta\theta_{\mathrm{min}}}$\
& ($^{\circ}$) & (GeV) & ($^{\circ}$) & (GeV) & ($^{\circ}$)\
------------------------------------------------------------------------
& $3.22$ & $0.33$ & $4.01$ & $2.74$ & $-2.4$\
& $3.09$ & $0.11$ & $3.91$ & $2.41$ & $-2.4$\
& $3.14$ & $0.19$ & $3.86$ & $2.48$ & $-3.0$\
& $3.05$ & $0.04$ & $4.01$ & $2.45$ & $-2.7$\
& $3.09$ & $0.10$ & $3.81$ & $2.25$ & $-1.9$\
& $3.09$ & $0.10$ & $3.88$ & $2.28$ & $-2.4$\
& $3.06$ & $0.00$ & $3.52$ & $2.02$ & $-2.3$\
1 & $3.66$ & $0.00$ & $4.43$ & $1.99$ & $-0.3$\
2 & $3.56$ & $0.00$ & $4.23$ & $1.93$ & $-0.3$\
2 reclustered & $3.07$ & $0.00$ & $3.65$ & $2.16$ & $-2.3$\
thrust & $3.23$ & — & — & — & —\
sphericity & $4.36$ & — & — & — & —\
The error on the jet axis reconstruction need not be entirely of a statistical character, however. In Sect. \[subsec\_jetrates\] above, we have mentioned the string/drag effect as the reason for the ‘negative hadronization corrections’. In a three-jet event, normally the smallest angle between two jets, $\theta_{\mathrm{min}}$, would be formed by the gluon jet and a quark/antiquark jet. These are connected by a dipole and thus should be ‘pulled closer’ by the hadronization. The last column in Tab. \[tab\_jet\_ang\] shows the average $\Delta\theta_{\mathrm{min}}$, the difference between the hadron- and parton-level $\theta_{\mathrm{min}}$ values. We see that indeed there is the expected systematic bias in all algorithms, although markedly smaller in than in the others. is the only algorithm intended to correctly account for dipole effects in the hadronization, and is thus seen to achieve this purpose. To set the scale of the effect, the width of the $\Delta\theta_{\mathrm{min}}$ distribution is about $10^{\circ}$ in all algorithms, so the systematic bias is still significantly smaller than the event-to-event fluctuations. Also the smallest angle in four-jet analyses shows a similar pattern, with the only one to be almost bias-free. We note that if the sum of the momenta of the particles assigned to each jet in is allowed to redefine the jet directions, the bias returns and this ‘reclustered’ behaves more or less like the standard binary algorithms.
In a study of fixed three-parton configurations at lower energies, where the parton level was known to have a fixed smallest angle of about $70^{\circ}$, most algorithms reconstruct an angle around $65^{\circ}$, while the average is around $72^{\circ}$, second results column of Tab. \[tab\_jet\_fixede\]. Thus, while still doing best, there are indications that at times overcompensates for the string/drag effect.
[|l|c|c|c|c|c|c|]{}
------------------------------------------------------------------------
Algorithm & $\VEV{\Delta\theta}$ & $\VEV{\theta_{\mathrm{min}}}$ & $\sigma(\Delta E)$ & $\VEV{\Delta E_q}$ & $\VEV{\Delta E_{\overline{q}}}$ & $\VEV{\Delta E_g}$\
& ($^{\circ}$) & ($^{\circ}$) & (GeV) & (GeV) & (GeV) & (GeV)\
------------------------------------------------------------------------
& $5.70$ & $64.9$ & $1.34$ & $-0.16$ & $0.47$ & $-0.39$\
& $5.68$ & $64.8$ & $1.33$ & $-0.03$ & $0.60$ & $-0.58$\
& $5.67$ & $64.7$ & $1.30$ & $-0.08$ & $0.46$ & $-0.39$\
& $5.74$ & $64.4$ & $1.44$ & $~0.25$ & $0.73$ & $-0.97$\
& $5.60$ & $65.0$ & $1.29$ & $-0.01$ & $0.60$ & $-0.61$\
& $5.82$ & $63.6$ & $1.43$ & $~0.16$ & $0.66$ & $-0.85$\
& $5.34$ & $65.0$ & $1.11$ & $-0.05$ & $0.57$ & $-0.53$\
1 & $5.67$ & $72.0$ & $1.06$ & $-0.06$ & $0.71$ & $-0.65$\
2 & $5.38$ & $71.6$ & $1.03$ & $-0.04$ & $0.69$ & $-0.65$\
2 reclustered & $5.13$ & $66.0$ & $0.96$ & $-0.32$ & $0.59$ & $-0.27$\
Also the jet energy reconstruction can be compared between the hadron and parton level, fourth column of Tab. \[tab\_jet\_ang\] and third column of Tab. \[tab\_jet\_fixede\] give the width of the jet energy difference distribution. Here and perform better than any of the others. The tendency for systematic bias can be studied in fixed three-parton configurations, last three columns of Tab. \[tab\_jet\_fixede\]. The energy of the most energetic jet is usually reconstructed without much bias, whereas there is a tendency in all algorithms for the other quark to gain energy from the gluon, reflecting the fact that gluon jets are softer and broader and thus easily lose particles to the other jets, especially the most nearby one. This systematic bias is largest in , as could be expected from the way favours clustering around energetic particles. shows the second largest bias, and the reclustered the smallest.
From a practical point of view, a jet is a collection of ‘nearby’ particles, where ‘nearby’ obviously is a very subjective criterion. One measure is how far out in angle a jet extends from its core. For instance, if two back-to-back jet axes are reconstructed for a LEP1 event, one may expect an optimal subdivision of particles to be by hemisphere, so that no particle is found more than $90^{\circ}$ from its jet axis. Fig. \[fig\_largest\_angle\] shows the angle for the particle furthest away from its assigned jet, on a per-event basis. It is seen that only and respect the $90^{\circ}$ criterion. The reason is that the standard distance measures allow two soft particles to be joined, also when they are somewhat apart in angle. In a normal binary joining scheme, they will thereafter together enter into one of the final jets, even if one of them is much closer to another jet. The reassignment step of is specially devised to overcome this limitation, i.e., to reevaluate prior joining decisions in the light of the joinings that have been performed since.
Among the other algorithms, is most likely to have a particle in the ‘wrong’ hemisphere. In fact, the E scheme, using the true mass as distance measure, is the very worst of the algorithms studied. This is the well-known instability problem, already mentioned. and the other $p_{\perp}$-based algorithms are better, but note that it is important that the angular dependence is $2(1 - \cos\theta_{ij})$ rather than the correct $\sin^2\theta_{ij}$, or else two back-to-back particles would have $p_{\perp} = 0$ and be joined. with the measure is slightly worse than normal , since the clustering of two soft particles is somewhat more favoured than in standard . The and schemes offer no visible improvement. is the pure binary joining algorithm with the best performance, reflecting that clustering of two soft particles is disfavoured.
The same phenomenon obviously carries over when more than three jets are reconstructed. always gives much narrower jets than any of the other algorithms, and gives the broadest ones. The one notable change is that for three-jets, no longer gives narrower jets than and its relatives, probably indicating how the distance measure allows an energetic jet to pick up particles also fairly close to another softer jet. While the wide-angle tracks are very important for the visual impression, they normally carry little momentum. The second column in Tab. \[tab\_jet\_ang\] shows $(p_z)_{\mathrm{back}}$, the average amount of longitudinal momentum carried by particles moving ‘backwards’ with respect to their jet axis. Typically this number is only 0.1 GeV per event, rising to 0.3 GeV for and 0.6 GeV for E.
Another alternative measure for the narrowness of jets is offered by the sum of the invariant jet masses. This is studied in Fig. \[fig\_mass\_pT\_best\]. Since the $\ycut$ definition is scheme dependent, results are plotted as a function of the average number of jets at the $\ycut$ values studied, as in Sect. \[subsec\_jetrates\]. Here the algorithm indeed does best, in line with its distance measure being intended to minimize jet masses. and with the measure come next, i.e., here reassignment is not important. does markedly worse than other schemes.
A third measure is the summed transverse momentum of all particles in an event, relative to their respective jet axis. This is shown, again as a function of the average number of jets, in Fig. \[fig\_mass\_pT\_best\]. The difference between the $p_{\perp}$-based algorithms is here small, while and reclustered are somewhat worse and together with the other modes are the worst. The large difference between the standard and the reclustered one is due to the fact that jets are typically asymmetric, with most of the particles lying on one side of the jet direction. The reclustering pulls the jet direction more to the center of the particles assigned to it, thus reducing the summed .
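For definiteness, the two narrowness measures used here and in the previous paragraph are understood as follows (our notation, up to the exact conventions of the figures): $$\sum_{J} M_J \,, \quad M_J^2 = \Big( \sum_{i \in J} p_i \Big)^2 \,, \qquad \mathrm{and} \qquad \sum_{i} p_{\perp i} \,, \quad p_{\perp i} = |\vec{p}_i| \, \sin\theta_{i J(i)} \,,$$ where the first sum runs over the reconstructed jets $J$, the second over all particles $i$, and $\theta_{i J(i)}$ is the angle between particle $i$ and the axis of the jet $J(i)$ to which it has been assigned.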
In summary, we draw the following conclusions from the analysis of two-, three-, and four-jet events (also supported by some studies not shown):
1. The algorithm generally does best (among the algorithms studied) in jet energy reconstruction, and also successfully addresses the issue of a systematic hadronization bias in the opening angle between two nearby jets, caused by the string/drag effect. The price to be paid is that the average error on the individual jet direction is larger than in other algorithms. Reclustering the jets from makes it behave more like the standard binary algorithms. We also note that using the transverse mass in Eq. (\[LL:invmt\]) as measure (mode 2) is somewhat favoured as compared to using the one in Eq. (\[LL:invpt\]) (mode 1).
2. does almost as well as in jet energy reconstruction, and best in jet angles. (A similar conclusion was reached in Ref. [@BKSS].) The reassignment step means it is the only algorithm that does not have stray particles in a jet that are visibly much closer to another jet. Since the stray particles normally carry rather small momenta, the impact of reassignment on momentum-weighted quantities should not be overstressed, however.
3. The and algorithms here offer no significant advantages over the basic scheme, nor does a use of the distance measure. All these algorithms therefore share a common ‘average’ level of performance.
4. fulfills the intended task of reconstructing small cluster masses, but at the price of a larger rate of large-angle stray particles.
5. does better than the average in some quantities, and significantly worse in others. Its distance measure means that the jet energy determinations show larger systematic biases than with any of the other measures used.
W mass reconstruction {#subsec_WW}
---------------------
Above we have studied jet finding in quite general terms. For an intended application, special further studies may be necessary. The criteria for a good algorithm are going to be different in the determination of an $\as$ value and in the study of angular distributions as a test of the three-gluon vertex, to give but two examples. Currently, the $W$ mass determination at LEP2 is another such topic of large interest [@Wmass], representing different optimization criteria than the ones illustrated above. We here focus on the hadronic production channel, where $e^+ e^- \to W^+
W^- \to q \overline{q} Q \overline{Q}$. Thus the signal is the presence of four jets, where the two jet pairs ought to have a mass around $m_W \approx 80$ GeV. There are several complications. Backgrounds exist, both from the four ‘wrong’ jet pairs in the same event as the two ‘right’, and from other processes such as the QCD four-jets $e^+ e^- \to \gamma^*/Z^{*} \to q \overline{q} g g,
q\overline{q} Q\overline{Q}$. The mass distribution is smeared by the intrinsic $W$ width $\Gamma_W
\approx 2$ GeV in combination with the production matrix element itself, by initial-state QED radiation, by neutrinos that escape without detection, by cracks in the detector acceptance, by measurement errors on particle four-momenta and, of course, by misassignments in the clustering procedure. A full study can therefore only be carried out within the context of a complete detector simulation, which is rather beyond the scope of the current report. To illustrate some of the clustering issues we have carried out a rather more modest exercise.
Hadronic $W^+ W^-$ events are generated at 180 GeV CM energy, but none of the background processes are studied. Detectors are assumed perfect, i.e., the correct four-momenta of outgoing particles are used to reconstruct exactly four jets per event, by the respective jet algorithm. (Some small number of times and fail to find four jets, as explained in Sect. \[subsec\_jetrecon\]; such events are left out from the statistics of the respective algorithm.) In experimental analyses usually some further cuts are imposed, e.g., on the opening angles between jets and on jet energies. This makes sense, since events where two jets are very close are not reconstructed so well. However, then the retained event sample would differ between clustering algorithms, so we have avoided cuts here. Instead all six jet–jet masses in all events are found and studied, and the success of an algorithm is reflected in how often it can reconstruct sensible $W$ masses.
Some impression of how good the jet reconstruction is can be gleaned by matching the four jets to the four original partons by minimizing the sum of jet–parton opening angles. The average value of this sum, as well as the sum of deviations in the energies between jets and partons, is given in the first two columns of Tab. \[tab\_Wmass\_simple\]. It generally agrees with the picture in the previous section: does well overall, while does worse with angles unless reclustering is performed. The poor numbers for are more marked than in previous studies, however.
[|l|c|c|r|c|]{}
------------------------------------------------------------------------
Algorithm & $\VEV{\sum\Delta\theta}$ & $\VEV{\sum|\Delta E|}$ & & $\sigma(\delta)$\
& ($^{\circ}$) & (GeV) & (GeV) & (GeV)\
------------------------------------------------------------------------
Jade & $41.0$ & $25.3$ & $ 0.06$ & $2.8$\
& $36.5$ & $21.7$ & $-0.03$ & $2.6$\
& $37.0$ & $22.5$ & $ 0.05$ & $2.6$\
& $46.1$ & $27.3$ & $-0.70$ & $3.4$\
& $37.6$ & $22.0$ & $-0.08$ & $2.8$\
& $38.2$ & $23.0$ & $-0.13$ & $2.8$\
& $35.6$ & $19.9$ & $ 0.01$ & $2.6$\
1 & $39.3$ & $19.7$ & $-0.58$ & $3.1$\
2 & $38.8$ & $19.5$ & $-0.57$ & $3.0$\
2 reclustered & $35.6$ & $18.8$ & $0.16$ & $2.5$\
The true test is in the jet–jet mass spectrum, illustrated in Fig. \[fig\_jetjetmass\], where one may discern the peaked signal from correct combinations of well reconstructed jets, over a smoother background of mismeasured jets or incorrect jet combinations. The 70–90 GeV mass window has been used to produce [Minuit]{} [@minuit] fits for a signal plus background shape. The choice of the best fit function is not trivial: the signal Breit-Wigner shape is combined with misassignment errors in a complicated and clustering-algorithm-dependent way. For simplicity we have assumed a Breit-Wigner shape, characterized by a peak height $h$ (given as the number of events per 0.2 GeV mass bin; this is related to the input normalization for ), position $m_W$ and width $\Gamma_W$. The $h$ and $\Gamma_W$ may be combined into an area $A$ underneath the Breit-Wigner. Normalization is such that 2 should be the maximum, corresponding to two correctly reconstructed jet pairs per event. Since the Breit-Wigner has rather large tails, this ansatz may have a tendency to paint too rosy a picture of how well algorithms do. An alternative would have been a Gaussian fit, where the tails are rather strongly dampened, and the bias would go in the other direction. The qualitative differences between algorithms that we report below are the same, however. Two background shapes have been used, one a three-term polynomial in mass and another corresponding to a smeared step function (motivated by the kinematical-limit shoulder at large masses). Results with these two backgrounds come rather close; thus, in Tab. \[tab\_Wmass\_fit\] numbers are for the former and numbers for the latter background. The fits described here correspond to the ‘Individual $W$ mass’ columns. The ‘parton (right)’ row makes use of the two correct $W$ masses, and thus represents the best possible answer for algorithms, while ‘parton (all)’ contains all six possible combinations of the four original partons. The fact that fitted areas above 2 are obtained illustrates imperfections in the fitting ansatz.
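For reference, if the signal is taken as a non-relativistic Breit-Wigner of peak height $h$ on top of a background $B(m)$, as described above, then the fitted area follows directly from $h$ and $\Gamma_W$ (up to the 0.2 GeV bin-width and mass-window conventions): $$\frac{dN}{dm} \;=\; h\, \frac{\Gamma_W^2/4}{(m - m_W)^2 + \Gamma_W^2/4} \;+\; B(m)\,, \qquad A \;\propto\; \int \! dm \; h\, \frac{\Gamma_W^2/4}{(m - m_W)^2 + \Gamma_W^2/4} \;=\; \frac{\pi}{2}\, h\, \Gamma_W \,.$$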
[|l|r|r|r|r|r|r|r|r|]{}
------------------------------------------------------------------------
Algorithm & &\
------------------------------------------------------------------------
& & $m_W$ & $\Gamma_W$ & $A$ & & $m_W$ & $\Gamma_W$ & $A$\
& & (GeV) & (GeV) & & & (GeV) & (GeV) &\
------------------------------------------------------------------------
&\
------------------------------------------------------------------------
& 200 & 80.321 & 8.000 & 1.26 & 163 & 80.719 & 3.697 & 0.47\
& 260 & 80.354 & 5.663 & 1.16 & 195 & 80.482 & 3.296 & 0.51\
& 247 & 80.333 & 5.800 & 1.12 & 200 & 80.537 & 3.288 & 0.52\
& 226 & 80.376 & 6.206 & 1.10 & 190 & 80.180 & 3.237 & 0.48\
& 260 & 80.396 & 5.721 & 1.17 & 227 & 80.454 & 3.216 & 0.57\
& 238 & 80.376 & 5.871 & 1.10 & 240 & 80.396 & 3.249 & 0.61\
& 268 & 80.387 & 5.447 & 1.14 & 190 & 80.492 & 3.395 & 0.51\
1 & 182 & 80.008 & 6.883 & 0.98 & 136 & 79.671 & 3.923 & 0.42\
2 & 188 & 80.044 & 6.732 & 0.99 & 138 & 79.657 & 3.758 & 0.41\
2 reclustered & 270 & 80.486 & 5.709 & 1.21 & 184 & 80.615 & 3.375 & 0.49\
parton (all) &1267 & 80.320 & 2.088 & 2.08 & 656 & 80.329 & 2.118 & 1.09\
parton (right) &1270 & 80.324 & 2.076 & 2.07 & 661 & 80.325 & 2.053 & 1.07\
------------------------------------------------------------------------
&\
------------------------------------------------------------------------
& 235 & 80.218 & 6.553 & 1.212 & 220 & 80.491 & 3.337 & 0.576\
& 315 & 80.326 & 4.893 & 1.211 & 295 & 80.268 & 2.999 & 0.694\
& 299 & 80.310 & 5.203 & 1.222 & 238 & 80.293 & 3.216 & 0.601\
& 260 & 80.359 & 5.287 & 1.081 & 180 & 80.010 & 3.248 & 0.460\
& 311 & 80.345 & 4.944 & 1.209 & 320 & 80.284 & 2.863 & 0.719\
& 281 & 80.376 & 5.288 & 1.168 & 319 & 80.252 & 2.895 & 0.725\
& 280 & 80.387 & 5.261 & 1.155 & 299 & 80.250 & 2.879 & 0.677\
& 324 & 80.368 & 5.247 & 1.335 & 241 & 80.291 & 3.212 & 0.608\
(no pre) & 324 & 80.371 & 5.249 & 1.334 & 239 & 80.286 & 3.200 & 0.601\
(no reas) & 177 & 79.984 &10.000 & 1.392 & 236 & 80.602 & 3.383 & 0.626\
0 & 193 & 79.920 & 9.904 & 1.498 & 179 & 79.494 & 4.600 & 0.646\
1 & 244 & 79.896 & 7.521 & 1.440 & 271 & 79.561 & 3.528 & 0.750\
parton (all) &1312 & 80.422 & 1.995 & 2.056 & 680 & 80.408 & 1.986 & 1.060\
parton (right) &1319 & 80.427 & 1.988 & 2.059 & 694 & 80.418 & 1.911 & 1.041\
Comparing algorithms, several aspects should be kept in mind. A larger area $A$ implies a larger efficiency for sensible jet finding, i.e., fewer misassignments that completely kill the signal. A smaller width $\Gamma_W$ is a sign of good performance for those jet pairs that are still correctly combined, i.e., fewer misassignments of a less disastrous character. For good $m_W$ determination in an experiment one should thus have both a large $A$ and a small $\Gamma_W$. As a third criterion one could imagine the systematic offset between the reconstructed $W$ mass and the parton-level one. However, so long as such an offset is not too large and can be well modelled, it is not so important. One anyway has to make other corrections, e.g., the input $m_W$ parameter does not coincide with the average generated $m_W$ because of the convolution with matrix-element and phase-space factors. Unfortunately, while and results largely agree, there are some discrepancies that we do not fully understand, and that thus should act as a warning not to take these studies as the definite word.
One possible conclusion from the numbers in Tab. \[tab\_Wmass\_fit\] is that many of the algorithms perform comparably well. In particular, the correlation between sophistication and performance is weak or non-existent, moving, e.g., from to to . It appears that consistently reconstructs the largest area, i.e., does fewest severe misassignments, but has a rather standard peak width. The difference between and $p_{\perp}$ measures is small; if anything the latter gives a wider peak and thus is worse. Whether preclustering is performed or not in is irrelevant so long as reassignment is allowed, but without reassignment the preclustering is disastrous — the peak is so broadened that $\Gamma_W$ hits the upper bound allowed in the fit. Thus, to the extent that does somewhat better than , the reason is the reassignment step. The fits without reclustering give problems with the $\Gamma_W$ or $A$ values, but also displays a large systematic bias in the estimated $m_W$. With reclustering, again does fairly well. One reason for the problems could be that is designed with QCD events in mind, where two nearby partons are connected by a color dipole. Here two nearby jets would come from different $W$’s and not share a dipole (we did not include the possibility of color rearrangement [@LEP2QCDgen; @col_rearr]).
In experiments, it is advantageous to study the average $W$ mass of an event rather than the two individual ones. There are several reasons for this, but of interest here is that misassignments of particles in part cancel, in that a reassignment of one particle from one $W$ to the other reduces the first mass and increases the second, leaving the average less affected than each separately. Per event there are thus three possible jet pairings, each giving one potential average $W$ mass. Of these three, we exclude the one where the two most energetic jets are paired with each other, since kinematically this is seldom the right combination. The remaining two combinations give mass distributions as illustrated in Fig. \[fig\_avgWmass\]. Note that indeed the signal peak is much more narrow, and that there now is an absolute kinematic limit at 90 GeV. fits have been performed, as before, with results as shown in the ‘Average $W$ mass’ columns of Tab. \[tab\_Wmass\_fit\]. Normalization is such that an ideal fit would give $A =1$.
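The partial cancellation can be made explicit with a simple sketch: if a soft particle of four-momentum $p$ is misassigned from the jet system $P_1$ of one $W$ to the system $P_2$ of the other, then to first order in $p$ $$\delta(m_1^2) \simeq -2\, P_1 \cdot p \,, \qquad \delta(m_2^2) \simeq +2\, P_2 \cdot p \,, \qquad \delta \bar{m} \;=\; \tfrac{1}{2}\left( \delta m_1 + \delta m_2 \right) \;\simeq\; \frac{(P_2 - P_1) \cdot p}{2\, m_W} \,,$$ so that the two individual shifts, each of order $P \cdot p / m_W$, largely compensate in the average, whose shift is governed only by their difference.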
It is notable that the relative performance of algorithms changes rather drastically compared with above. The two best ones now are (the original one, not the one employing the measure, which is not shown in the plots) and , whereas falls below the average. This could indicate that the particles that get misassigned are somewhat different in the former two and in the latter algorithm. That is, in the former two, the errors on the two individual $W$ masses tend to cancel better in the average. still gives a larger $\Gamma_W$ than other algorithms. Geneva has a reasonable width but a small area $A$.
The right two columns of Tab. \[tab\_Wmass\_simple\] show that the pattern between models is not so easy to understand. Here the average mass is evaluated for all three possible jet pairings and compared with the correct average $W$ mass of the same event. The pairing which agrees best is retained, and $\delta$ denotes the average mass difference between the reconstructed and the true average $W$ mass. As we see, and without reclustering here show a significant bias in the negative direction, and also give a larger width $\sigma$ of the $\delta$ distribution. does quite well in these ‘behind-the-scenes’ numbers, so the poor numbers above do not seem to have a simple explanation.
As possible conclusions for the $W^+W^-$ analysis, we attempt the following.
1. The choice of the algorithm is in general not so trivial in such a context. However, there are some that cannot be recommended, notably without reclustering and , and also .
2. If the ‘Individual $W$ mass’ distribution is preferred in the selection procedure, that performs slightly better than the other algorithms.
3. If, instead, one resorts to the ‘Average $W$ mass’ spectrum, then and the original (i.e., that with the measure) come out best.
4. However, differences between the three algorithms that excel do not show a simple pattern, so that, in the end, a definite decision between these latter could probably only be made in the context of some specific detector simulation and mass extraction procedure.
Speed {#subsec_speed}
-----
All binary clustering algorithms are comparably fast. Starting from an initial configuration of a large number $n$ of partons and hadrons, a small number of jets is to be found. Therefore $O(n)$ binary joinings have to be performed. Each in principle requires $O(n^2)$ distances to be evaluated to find the smallest one. In practice, distances can be kept in a table that is only updated for those entries affected by a binary joining. Therefore execution time scales more like $O(n^2)$ than the expected $O(n^3)$. At LEP1 energies, the clustering time is about two thirds of the time it takes to generate an event (with and ).
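The counting behind these estimates is simply (ignoring constant factors) $$N_{\mathrm{dist}}^{\mathrm{naive}} \;\sim\; \sum_{m=3}^{n} \binom{m}{2} \;=\; O(n^3) \,, \qquad N_{\mathrm{dist}}^{\mathrm{table}} \;\sim\; \binom{n}{2} \;+\; \sum_{m=3}^{n} O(m) \;=\; O(n^2) \,,$$ since, with a stored table, each joining only requires recomputing the $O(m)$ distances that involve the newly formed cluster.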
The preclustering time roughly scales like $O(n^2)$: each particle can be the seed of a precluster and all particles have to be tested to see whether they belong to the precluster. If $m$ preclusters are formed, normally with $m$ much smaller than $n$, then subsequent joinings take $O(m^3)$, since the reassignment step means one cannot reuse older numbers. The reassignment step after each joining requires assigning $n$ particles to $m$ clusters, i.e., a total $O(m^2n)$ for $O(m)$ joinings. In practice, scaling of the total time is about like $O(n^2)$. At LEP1 the algorithm is somewhat faster than the binary joining ones, but at most by a factor of two. If the trick of pretabulation is not used in the binary routines, the difference is more like a factor five.
The basic step of the algorithm is the joining of three clusters into two. Therefore $O(n^3)$ distances have to be evaluated to find the smallest one. Again, keeping a table of distances allows the total time to scale more like $O(n^3)$ than like the $O(n^4)$ that might have been expected. Still, there is a significant price to be paid, and at LEP1 energies is about a factor fifty slower than the other algorithms.
Summary and conclusions {#sec_summary}
=======================
Jet clustering algorithms are an expression of [*time*]{} and [*place*]{}. The time evolves with the calculational methods developed and these can in turn be limited by the computing power available. The place is circumscribed by the experimental contexts where algorithms are needed and the tasks that they are asked to accomplish. In ten, fifteen years from now, the two will both have changed. Specifically, if the theoretical methods adopted (e.g., in determining the higher-order and exponentiation properties of pQCD, the parton-shower evolution and/or the non-perturbative dynamics of hadronization) were to be different in some years' time, so should clustering algorithms be.
Inevitably then, our study can claim no prerogative to being definitive. We have undertaken it for the present era and for the imminent phenomenology. The aim was to survey the many and different jet finding algorithms for electron-positron events available on the market nowadays and study which algorithm to use where, if at all possible. As anticipated in the Introduction, we have not found [*one*]{} single best choice that prevails in [*all*]{} cases we have addressed. However, as the reader should by now agree, there need not exist such a one. Nonetheless, in several instances it has been possible to recognize, if not the [*most suitable*]{} algorithm to use, at least the [*attractiveness*]{} of some of its basic components. In this Section, we attempt to summarize our findings.
As a first example we have considered the realm of pQCD, by studying jet fractions at parton level and resorting to the most advanced techniques of perturbation theory: that is, exact next-to-leading fixed order results combined with resummed predictions to next-to-leading-logarithmic accuracy. Such a choice was not made by chance, as it was dictated by the crucial rôle that jet rates play, e.g., in the determination of the strong coupling constant $\as$ and of its running with energy. In this respect, we should point out that the main features illustrated in this paper for the case of LEP1 energies, survive unaltered for the case of LEP2.
By studying the three-jet fraction in pQCD we have taken for granted the well-established result that a jet measure based on some relative transverse momentum of the clusters involved is the most appropriate to use, thus neglecting consideration of jet finders based on other quantities (such as the invariant mass). Under these circumstances, one historically recognizes three different such measures. Namely, the so-called , and ones. The first two cluster two particles into one whereas the last one merges three into two. Neglecting imperceptible differences (we used ‘massless’ partons) between energy and momentum, they can geometrically be viewed as follows. The first represents the transverse momentum of either particle with respect to the sum of the momenta of the two. The second is the transverse momentum of the lower-energy cluster with respect to the higher-energy one. The third is the transverse momentum of one cluster with respect to the other two.
Among the three our preference would go to the measure. In fact, algorithms based on the latter display a reduced (renormalization) scale dependence of the three-jet fraction at NLO, as compared to the cases of the and expressions. The stability of the perturbative results in higher order against variations of such a scale is a measure of the smallness of even higher terms in the perturbative expansion, this ultimately reflecting a better degree of convergence of the corresponding power series. As $\as$ measurements are unavoidably biased by a theoretical error, and since this is assessed in no other way than by the range in $\as$ spanned by the QCD predictions for different choices of the above scale, in our opinion, the measure turns out to be a recommendable choice in this context. We have hypothesized its improved behaviour, with respect to the one, as due to their respective definitions: whereas the former is a continuous function of the energies of the two clusters, the latter is not. As a matter of fact, the presence of discontinuities at the edge of the phase space of an observable has recently been advocated to act as a source of misbehaviour in higher-order perturbation theory.
An additional neat attribute of the transverse momentum appears while combining the fixed-order with the resummed perturbative predictions, for example in computing the average number of jets produced in electron-positron annihilation events. Such a quantity can be predicted reliably from QCD over a wide range in $\ycut$ and, furthermore, it is also particularly sensitive to the actual value of $\Lambda^{(5)}_{\overline{\mbox{\tiny{MS}}}}$. These two aspects render it then a particularly good variable for the determination of $\as$. The advantage of using the measure in this case is that the parton level of the theory matches more naturally the parton level produced by the Monte Carlo generator, as no rescaling of $\as$ is needed to find an adequate agreement between the two (contrary to the case of the measure).
The difference between the parton level and the hadron level as generated by a phenomenological Monte Carlo program is customarily used as an estimate of the hadronization corrections. However, one should notice that even in the presence of a good agreement between the exact parton level from the theory and the approximate one from the Monte Carlo, there is a danger in interpreting the hadron-parton difference in the phenomenological generator as an estimate of non-perturbative effects and simply adding it to the matched prediction. In fact, the presence of unnatural cut-offs and kinematic boundaries in the parton shower could well induce non-perturbative contributions already at the parton level. Thus, we have refrained here from doing so. Instead, we have compared the partonic and the hadronic outputs as they come from the generator, without any attempt to correct the former.
Non-perturbative hadronization is clearly a genuine physics process, but at present we do not have a well-established theory for it. Rather, our knowledge is based on phenomenological experience and is implemented in the above-mentioned programs. Although the agreement between the latter is remarkable, and these in turn reproduce real data well, there are systematic dissimilarities in their implementation of the non-pQCD dynamics that must be accounted for. In other words, the differences in the predictions of the Monte Carlo programs contribute to building up our systematic uncertainties on the actual measurements. These so-called hadronization corrections turn out to be algorithm dependent; thus, to design an algorithm for which they are reduced would represent a clear improvement: the smaller they are, the better under control the differences between generators will be. This is of particular relevance at very small values of the resolution parameter $\ycut$, where the interface between perturbative and non-perturbative QCD occurs.
In order to reduce the size of the non-perturbative corrections in multi-jet rates, the implementation of the angular-ordering and soft-freezing procedures has proven to be decisive, particularly at low $\ycut$. The first one consists in distinguishing between the variable used to decide which pair of objects to test first and that to be compared with the resolution parameter. The second one corresponds to eliminating from the clustering sequence the less energetic member of a resolved pair of particles. These two steps help to heal two of the unwanted phenomena occurring in the domain of soft physics, that is, ‘junk-jet’ formation and ‘misclustering’, respectively. The first takes place because of the tendency of soft ‘unresolved’ particles to acquire momenta from particles at low transverse momentum and to form spurious jets from these, whereas the second happens because of the bias of soft ‘resolved’ particles to attract wide-angle radiation.
These two remedies are however effective [*only*]{} if inserted into $p_{\perp}$-based jet finders. In fact, although these two steps were originally implemented as part of the algorithm, we have assessed their efficiency also in the presence of the measure, while reminding the reader of their inadequacy if the one is used instead. If one then combines this result with what we have already mentioned for the fixed-order and resummed predictions, it is evident that the hybrid scheme that we had originally introduced for the purpose of comparison, based on the transverse momentum and the clustering sequence, performs better than any other tested, so as to deserve the status of a new algorithm. In our opinion, it has come to set the standard as far as the domain of soft physics in multi-jet events is concerned.
Before proceeding further, we should mention that the overall features obtained with respect to the size of the hadronization corrections are in part the result of fortuitous cancellations between opposite tendencies. On the one hand, junk-jet formation and misclustering (and heavy quark decays as well) induce positive corrections. On the other hand, the well-known string or drag effect (i.e., the pulling closer of the two nearest jet directions by the hadronization mechanism) produces negative contributions. The increased size of the ‘negative hadronization corrections’ for some algorithms at medium values of $\ycut$ is then the consequence of having reduced the former while leaving the latter effect untouched. Therefore, as $\ycut$ grows larger, to diminish the extent of the corrections becomes more and more a matter of finding a delicate balance between the two. At the upper extreme of the $\ycut$ range, that is, in the two-jet limit, the algorithm admirably contains the size of the hadronization effects.
If one abandons the subject of QCD studies in multi-jets, that is, the domain of soft physics and global quantities (such as jet fractions, shape variables, etc.), and enters, for example, the territory of the search for mass resonances, the criteria that define a good algorithm are going to be rather different. In the new context, as is now the case for the mass determination of the $W$ boson at LEP2, kinematical quantities such as energies and angles (which build up the definition of invariant mass) are of main concern. Also in this case, although we have not carried out a sophisticated analysis of four-jet events at LEP2, including detector effects and background simulations, we believe we have achieved interesting results.
In hadronic decays of $W^+W^-$ pairs, the four partons emerging from the unstable resonances are naturally energetic and far apart. QCD radiation from the two $W$ decays does not interfere until next-to-next-to-leading order in the strong coupling constant. In other words, the soft dynamics that determines to a large extent the phenomenology of jet rates is of little concern here. Instead, in this case, it is how well an algorithm is able to reconstruct at hadron level the original partonic energy and direction, and ultimately the shape of the mass resonance, that sets the target of a good jet clustering performance. Therefore, the next step of our analysis has been to quantify the ability of the various clustering algorithms in minimizing the average angular and energy error in the jet reconstruction. As a preliminary exercise, to allow for an understanding of the typical biases, we have addressed the simplified case of the kinematics of two-, three- and four-parton events, for some fixed phase space configuration. The procedure has eventually been generalized to include all final state jet multiplicities, by studying the sum of the invariant jet masses as well as of the transverse momentum of all particles of an event.
After these tests, two out of our list of clustering algorithms excel above all others, which share an ordinary degree of performance. They are the and schemes. The former is undoubtedly the best in reconstructing angles and, in the case of energy, it is second only to the latter, which is however very modest with angular quantities. The ability of in reconstructing angles and energies can be attributed to the reassignment procedure, which it is the only one to implement. In other schemes, it is not uncommon to find stray particles at the edge of a jet that, by any distance criterion, are closer to another jet. The poor performances of in angles are the price paid for an implementation especially designed to remedy systematic biases in the hadronization, notably the aforementioned string or drag effect.
Studies in energies and angles similar to those above have been carried out also for the case of $W^+W^-$ decays into four-jet events at LEP2. The general picture for these two quantities separately is similar to that outlined above, with best overall. One would then expect this algorithm to come first also when energies and angles are combined to reconstruct the $W$ invariant mass spectrum. This is however true only if one plots in the corresponding histogram [*all*]{} individual jet-jet masses (six in total). The majority of events are in fact concentrated around the $W$ mass, whereas misassignments take place more often for other algorithms, whose spectra can be significantly more spread out.
Surprisingly enough, if one plots instead the mass distribution formed from the two average masses which can be obtained from the two possible pairings that most likely reconstruct the $W$ mass better (those in which the two most energetic jets are not paired together), then the original algorithm (the one employing the measure) comes out best (ahead of the ). The reasons for this are not entirely understood. On the one hand, the use of the ‘average masses’ rather than the ‘individual masses’ is generally dictated by the fact that misassignments of particles partially cancel; on the other hand, our studies of jet angle and energy reconstruction did not furnish us with an obvious explanation of why angular-ordering and/or soft-freezing should be beneficial to the four-jet decays of $W^+W^-$ pairs. (In addition, notice that in the context of energy, angle and mass reconstruction, there is no intrinsic advantage in using the transverse momentum rather than the one. Indeed, in the average $W$ mass distribution the adoption of the former worsens the good performance obtained with the latter.)
Since in high-statistics Monte Carlo simulations the actual speed of the program is not a secondary issue (hundreds of hadrons are typically involved), we have studied the performances of the various algorithms in this respect. In general, all binary clustering algorithms are equally fast, whereas is slower by more than one order of magnitude.
Finally, three different Monte Carlo event generators have been used to carry out all aspects of our analysis. We have never found any significant difference among them.
Acknowledgements {#acknowledgements .unnumbered}
================
SM is grateful to the UK PPARC for financial support and to the Theoretical Physics Group in Lund for their kind hospitality during his visit to Sweden, which has been partially supported by the Italian Institute of Culture ‘C.M. Lerici’ (Stockholm) under the grant Prot. I/B1 690, 1997. SM also acknowledges useful discussions with James Stirling and Bryan Webber as well as various numerical comparisons with Garth Leder. Finally, we all thank Yuri Dokshitzer, Mike Seymour and Bryan Webber for carefully reading the manuscript version of this paper.
[99]{}
S.D. Ellis and D.E. Soper, D48 1993 3160.
M.H. Seymour, 1994 [127]{}.
Yu.L. Dokshitzer, G.D. Leder, S. Moretti and B.R. Webber, .
S. Bethke, Z. Kunszt, D.E. Soper and W.J. Stirling, ; Erratum, preprint .
L. Lönnblad, .
Mark-I Collaboration, G. Hanson et al., 1975 [1609]{}.
S. Brandt, C. Peyrou, R. Sosnowski and A. Wroblewski, 1964 [57]{};\
E. Farhi, 1977 [1587]{}.
J.D. Bjorken and S.J. Brodsky, 1970 [1416]{}.
S. Brandt and H.D. Dahmen, 1979 [61]{}.
S.L. Wu and G. Zobernig, 1979 [107]{}.
J.B. Babcock and R.E. Cutkosky,
M.C. Goddard, Rutherford preprint RL-81-069 (1981).
A. Bäcker, .
S.L. Wu, 1981 [329]{}.
J. Dorfan, .
H.J. Daum, H. Meyer and J. Bürger, .
K. Lanius, H.E. Roloff and H. Schiller, .
G. Sterman and S. Weinberg, .
R.D. Field and R.P. Feynman, .
B. Andersson, G. Gustafson, G. Ingelman and T. Sjöstrand, .
T. Sjöstrand, .
JADE Collaboration, W. Bartel et al., ;\
S. Bethke, Habilitation thesis, LBL 50-208 (1987).
R.K. Ellis, D.A. Ross and A.E. Terrano, ;\
J.A.M. Vermaseren, K.J.F. Gaemers and S.J. Oldham, ;\
K. Fabricius, G. Kramer, G. Schierholz and I. Schmitt, .
N. Brown and W.J. Stirling, .
S. Catani, Yu.L. Dokshitzer, M. Olsson, G. Turnock and B.R. Webber, .
M.H. Seymour,
Z. Kunszt and P. Nason, in Proceeding of the Workshop ‘$Z$ Physics at LEP1’, eds. G. Altarelli, R. Kleiss and C. Verzegnassi, CERN 89–09, Vol. 1, p. 373
G. Kramer and B. Lampe, [*Z. Phys.*]{} [**C34**]{} ([1987]{}) [497]{}; Erratum, [*ibidem*]{} [**C42**]{} (1989) 504; [*Fortschr. Phys.*]{} [**37**]{} (1989) 161.
N. Brown and W.J. Stirling, .
Yu.L. Dokshitzer, contribution cited in Report of the Hard QCD Working Group, Proc. Workshop on Jet Studies at LEP and HERA, Durham, December 1990, 1991 [1537]{}.
I.G. Knowles et al., in Proceedings of the Workshop ‘Physics at LEP2’, eds. G. Altarelli, T. Sjöstrand and F. Zwirner, CERN 96–01, Vol. 2, p. 103.
L. Dixon and A. Signer, .
G.R. Farrar, .
S. Bentvelsen and I. Meyer, preprint CERN-EP/98-043, March 1998, [hep-ph/]{}[9803322]{}.
version 4.10 program and manual, L. Lönnblad, .
W.T. Giele and E.W.N. Glover, .
I.G. Knowles et al., in Proceedings of the Workshop ‘Physics at LEP2’, eds. G. Altarelli, T. Sjöstrand and F. Zwirner, CERN 96–01, Vol. 2, p. 163.
R.K. Ellis, W.J. Stirling and B.R. Webber, “QCD and Collider Physics” (Cambridge University Press, Cambridge 1996).
G. Grunberg, ;\
S.J. Brodsky, G.P. Lepage and P.B. Mackenzie, ;\
P.M. Stevenson, ;\
H.D. Politzer, .
S. Catani and B.R. Webber, 10 1997 5.
M.H. Seymour, contributed to XIth Les Rencontres de Physique de la Vallee d’Aoste ‘Results and Perspectives in Particle Physics’, La Thuile, Italy, 2-8 March 1997, preprint C97-03-02, March 1997, ; preprint RAL-97-026, July 1997, .
S. Catani and B.R. Webber, preprint Cavendish-HEP-97/16, CERN-TH-98-14, January 1998, .
OPAL Collaboration, P.D. Acton et al., 1993 [1]{}.
Z. Nagy and Z. Trócsányi, ; preprint .
Z. Bern, L. Dixon, D.A. Kosower and S. Weinzierl, ;\
Z. Bern, L. Dixon, D.A. Kosower, preprint SLAC-PUB-7529, June 1996, ;\
L. Dixon and A. Signer, ;\
A. Signer, presented at the XXXIInd Rencontres de Moriond ‘QCD and High-Energy Hadronic Interactions’, Les Arcs, France, 22-29 March 1997, SLAC preprint SLAC-PUB-7490, May 1997, ;\
E.W.N. Glover and D.J. Miller, ;\
J.M. Campbell, E.W.N. Glover and D.J. Miller, [*Phys. Lett.*]{} [**B409**]{} (1997) 503.
E. Maina and M. Pizzio, [*Phys. Lett.*]{} [**B369**]{} (1996) 341.
E. Maina, S. Moretti and M. Pizzio, in preparation.
S. Catani, Yu.L. Dokshitzer, F. Fiorani and B.R. Webber, .
S. Catani, Yu.L. Dokshitzer and B.R. Webber, 1994 [263]{}.
G. Marchesini and B.R. Webber, B310 1988 461.
G. Marchesini, B.R. Webber, G. Abbiendi, I.G. Knowles, M.H. Seymour and L. Stanco, .
T. Sjöstrand, ;\
M. Bengtsson and T. Sjöstrand, .
T. Sjöstrand,
J. André and T. Sjöstrand, preprint LU-TP 97-18, August 1997, 9708390;\
J. André, preprint LU-TP 97-12, June 1997, 9706325.
B.R. Webber, private communication.
B.R. Webber, B238 1984 492.
See, e.g.:\
R.P. Feynman, “Photon Hadron Interactions” (W.A. Benjamin Press, New York 1972).
Yu.L. Dokshitzer, G. Marchesini and B.R. Webber, .
S. Bethke, .
B. Andersson, G. Gustafson and T. Sjöstrand, .
Ya.I. Azimov, Yu.L. Dokshitzer, V.A. Khoze and S.I. Troyan, .
JADE Collaboration, W. Bartel et al., ;\
TPC/2$\gamma$ Collaboration, H. Aihara et al., ;\
TASSO Collaboration, M. Althoff et al., ;\
MARK II Collaboration, P.D. Sheldon et al., 1398;\
DELPHI Collaboration, P. Aarnio et al., ;\
OPAL Collaboration, M.Z. Akrawy et al., ;\
ALEPH Collaboration, R. Barate et al., .
JADE Collaboration, W. Bartel et al., ;\
ALEPH Collaboration, EPS0518, contribution to the International Europhysics Conference on High Energy Physics, Brussels, Belgium, 27 July – 2 August 1995.
S. Catani, G. Marchesini and B.R. Webber, .
Z. Kunszt et al., in Proceedings of the Workshop ‘Physics at LEP2’, eds. G. Altarelli, T. Sjöstrand and F. Zwirner, CERN 96–01, Vol. 1, p. 141
F. James and M. Roos, .
G. Gustafson, U. Pettersson and P.M. Zerwas, ;\
T. Sjöstrand and V.A. Khoze, ; ;\
G. Gustafson and J. Häkkinen, ;\
L. Lönnblad, ;\
J. Ellis and K. Geiger, .
[^1]: Here and in the following, the word ‘cluster’ refers to hadrons or calorimeter cells in the real experimental case, to partons in the theoretical perturbative calculations, and also to intermediate jets during the clustering procedure.
[^2]: For which we are indebted to Yuri Dokshitzer.
[^3]: This similarity with the cascade is the reason the algorithm was originally called .
[^4]: An up-to-date list and a description of similar codes publicly available can be found in Ref. [@QCDgenerators].
[^5]: Here and in the following Subsection, in order to simplify the notation, we shall use $y$ to represent $\ycut$ and refer to the various jet clustering algorithms/schemes by using their initials only. In addition, we acknowledge our abuse in referring to the latter both as algorithms and as schemes, since the last term was originally intended to identify the composition law of four-momenta when pairing two clusters (see Sect. \[subsec\_JADE\]). This is in fact a widely accepted habit which we believe will not generate confusion in our discussion.
[^6]: In fact, the long-awaited $\cO{\as^3}$ corrections to the four-jet rate have recently become available [@a3] and have been implemented in the mentioned code.
[^7]: Note that no simulation of detector acceptance and resolution are implemented in the latter case.
[^8]: An up-to-date review of our current understanding of hadronization is found in Ref. [@book].
[^9]: In Ref. [@stan] a special-purpose algorithm was devised in order to determine the $\ycut$ transition values at which an event flips from an $n$-jet to an $m$-jet configuration, with $m$ and $n$ not necessarily consecutive. It was used to study the characteristics of those events that have two different $n$-jet configurations.
---
abstract: 'After reviewing the evidence that QCD matter at ultrarelativistic energies behaves as a very good fluid, we describe the connection of QCD fluidity to heavy quark observables. We review the way in which heavy quark spectra can place tighter limits on the viscosity of QCD matter. Finally, we show that correlations between flow observables and the event-by-event charm quark abundance (“flavoring”) can shed light on the system’s equation of state [@ourpaper].'
address:
- 'FIAS, Johann Wolfgang Goethe Universität,Frankfurt A.M., Germany'
- 'Department of Physics, Columbia University, 538 West 120$^{th}$ Street, New York, NY 10027, USA'
author:
- 'G.Torrieri [^1], J.Noronha [^2]'
title: Heavy quarks and the collective properties of hot QCD
---
The purpose of high energy heavy ion collisions is to study the thermodynamic properties of strongly interacting matter at high temperatures and densities. The most ambitious part of this program is to create a bubble of deconfined “quark-gluon plasma”, a gas of “free” quarks and gluons mimicking the properties of the universe shortly after the big bang [@jansbook].
This program, however, has a potential obstacle: thermodynamics generally applies to “large”, “static” systems, where “large” means large compared to the typical sizes of the microscopic degrees of freedom. While the system created in a heavy ion collision is large compared to the quark or hadron size, the system is certainly not static. In fact, it [*could*]{} be a far-from-equilibrium “mess of quarks and gluons”, where each parton scatters a few times, but whose dynamics has no connection to thermodynamics. In this case, concepts like phase transitions have little value.
The best that we can hope for is to replace “static” by “slowly evolving”, where “slowly” is defined with respect to the timescales of the microscopic processes within the system. If the system is slowly evolving (alternatively, if the microscopic dynamics equilibrates very fast, as it naively should in a strongly coupled hot system), we can explore the thermodynamics of the system via hydrodynamics: the system’s collective motion will be governed by the equation of state and transport coefficients, calculated from equilibrium thermodynamics.
RHIC experimental data has given us reason for optimism in this respect: One of the most widely cited findings in ultrarelativistic heavy ion collisions concerns the discovery of a “perfect fluid” in collisions at the Relativistic Heavy Ion Collider (RHIC) [@v2popular; @sqgpmiklos]. The evidence for these claims comes from the successful modeling of the anisotropic expansion of the matter in the early stage of the reaction by means of ideal hydrodynamics [@heinz; @shuryak; @huovinen]
It appears, therefore, that the system is, to a very good approximation, locally thermalized, and we can use flow observables as a probe of the equation of state, and eventually of phase transitions. However, efforts in this direction have met with remarkably little success: the probes for which hydrodynamics works very well are insensitive to the equation of state [@shuryak; @huovinen]. Probes which are more sensitive (such as HBT radii and the average transverse momentum) show no structure indicative of a phase transition [@heinz; @shuryak; @huovinen], presumably because the system’s initial conditions are not “tidy” enough to highlight discontinuities in the equation of state [@pratt; @larry].
Other probes more sensitive to the details of the equation of state are therefore needed. One obvious candidate is heavy quark observables. A heavy probe in a thermalized medium is, as is well-known, described by Brownian motion. This makes it a very sensitive probe of thermalization, since the thermalization timescale of a heavy particle, $\tau_{heavy}$, is parametrically larger than that of the light degrees of freedom, $\tau_{light} \sim \eta/(Ts)$ (where $\eta/s$ is the dimensionless ratio of viscosity to entropy density and $T$ is the temperature): If the mass of the heavy particle is $M\gg T$, then $\tau_{heavy}/\tau_{light} \sim M/T$ [@teaneyc].
RHIC data to date is consistent with “heavy particles flowing as much as light particles” [@phecharm; @whitestar], although large systematic errors persist (for example, we can not distinguish a charm from a bottom quark). If this is really true, then $\eta/s$ is truly $\ll 1$ and heavy quarks are, for all intents and purposes, thermalized with respect to the rest of the system. They (in particular, the more abundant charm quarks) can then be used as chemical probes for the equation of state.
In general, charm in heavy ion collisions is not expected to be [*chemically*]{} equilibrated. The bulk of the charm content should be produced by “hard” processes in the initial state, at a concentration far above its equilibrium expectation [@ramona]. The abundance of $c \overline{c}$ pairs produced in heavy ion collisions at the LHC is expected to be reasonably high [@ramona] ($\sim 10^{1-2}$, of course parametrically smaller than the total $\sim 10^{4}$ multiplicity). In the dilute limit, the total charm abundance is for all intents and purposes a conserved number. Hence, a good observable is the dimensionless quantity ${\tilde{\rho}}= \rho/s \sim \rho T^{-3} \ll 1$. The event’s entropy, also nearly conserved for a good fluid, can be related to the multiplicity rapidity density $dN/dy$ [@jansbook].
In the dilute (corrections of ${ \mathcal{O} \left( {\tilde{\rho}}^2 \right) }$) infinitely heavy quark (corrections of ${ \mathcal{O} \left( T/M \right) }$) limit, the contribution of an abundance of charm quarks to the free energy can be computed by adding a Polyakov loop density [@polyakov] to the free energy density $$\mathcal{F}(T) = \mathcal{F}_0(T) +{\tilde{\rho}}\,s_0(T) \, F_Q (T)
\label{noronhaeq1}$$ where $F_Q(T) = -T \ln \ell(T)$ and $\ell(T)$ is the renormalized Polyakov loop. The Polyakov loop is obtainable from lattice calculations, and the quantities carrying a subscript 0 denote the values before the charm flavor was included. The speed of sound can then be computed by textbook thermodynamic methods, $c_s^2 = \frac{d\ln T}{d\ln s}\,$, and $s=-dF/dT$. Fig. \[saltcs\] top panel shows our estimate for the speed of sound derived via Eq. (\[noronhaeq1\]) using the expectation value of the Polyakov loop extracted from the lattice (2+1 QGP with almost physical quark masses [@bazavov]). One can see that the main effect comes from the region near the phase transition (where there is a minimum in the speed of sound) but well before the Polyakov loop expectation value reaches its asymptotic high-T limit, leading to a negative shift of the speed of sound from its value in a 2+1 QGP. This can readily be understood physically: Correlations between the medium and slowly moving heavy quarks lower the system’s response to pressure.
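The thermodynamic steps from Eq. (\[noronhaeq1\]) to the speed of sound can be sketched numerically as below. The unflavored free energy and the Polyakov loop used here are crude, purely illustrative parametrizations, not the lattice data of Ref. [@bazavov]; the snippet only shows how the flavoring term propagates through $s=-dF/dT$ and $c_s^2=d\ln T/d\ln s$.

```python
import numpy as np

# Minimal numerical sketch of Eq. (noronhaeq1): F(T) = F_0(T) + rho~ s_0(T) F_Q(T),
# with F_Q(T) = -T ln l(T).  The unflavored free energy and the Polyakov loop below
# are crude invented parametrizations (NOT the lattice data of Ref. [bazavov]).
T   = np.linspace(0.20, 0.50, 400)            # temperature in GeV, above T_c
g   = 47.5                                    # effective QGP degrees of freedom (assumption)
F0  = -g * np.pi**2 / 90.0 * T**4             # ideal-gas free energy density
ell = 1.0 - np.exp(-(T - 0.16) / 0.08)        # toy renormalized Polyakov loop, -> 1 at high T
FQ  = -T * np.log(ell)                        # heavy-quark free energy F_Q = -T ln l
rho = 0.05                                    # charm-to-entropy ratio, cf. Eq. (tilderho)

def entropy(F):                               # s = -dF/dT
    return -np.gradient(F, T)

def cs2(F):                                   # c_s^2 = d ln T / d ln s
    return np.gradient(np.log(T), np.log(entropy(F)))

s0 = entropy(F0)
F_flav = F0 + rho * s0 * FQ                   # Eq. (noronhaeq1)
i = np.searchsorted(T, 0.25)
print("c_s^2 at T = 0.25 GeV: unflavored %.3f  flavored %.3f" % (cs2(F0)[i], cs2(F_flav)[i]))
```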
Our estimate stops close to $T_c$, as $-T \ln \ell(T) \rightarrow \infty$ in the confining phase (where the Polyakov loop expectation value vanishes). Mathematically, one can trust our approach as long as the heavy quark is much heavier than any other scale in the system, i.e., $-T \ln \ell(T) \ll M_q$. At some point in the approach to confinement, however, this approximation breaks down. To estimate the contribution of flavoring in the confined phase we assume flavorful confined QCD is described by the hadron resonance gas model. In this case, flavoring can be approximated by an admixture of heavy mesons in a gas of pions. The latter has a speed of sound of $c_s^{light} \simeq 1/\sqrt{3}$ (ultra-relativistic ideal gas), while the former will have a speed of sound of $c_s^{heavy} \simeq \sqrt{5T/(3M_{meson})}$. The speed of sound of the mixture will therefore go as $$\label{csweak}
c_s^2 \sim \frac{1}{3} + { \mathcal{O} \left( {\tilde{\rho}}\frac{T}{M_{meson}} \right) }$$ parametrically smaller than the contribution in the deconfined phase, which is just ${ \mathcal{O} \left( {\tilde{\rho}}\right) }$. Thus, the flavoring effect on the speed of sound is [*specific to the deconfined phase*]{}. The effect of flavoring in a [*weakly coupled*]{} QGP is similarly described by Eq. \[csweak\] (with $M_{meson} \rightarrow M_{charm}$), so a correlation of $c_s$ with flavoring would indicate a deconfined but strongly coupled system.
These effects produce observable consequences. Using pQCD estimates [@ramona] and logarithmic scaling for multiplicity rapidity density, we estimate ${\tilde{\rho}}$ at the LHC to be (Fig. \[saltcs\] bottom panel) $$\label{tilderho}
{\tilde{\rho}}= \frac{1}{6}\frac{dN_{charm}/dy}{dN_{charged}/dy} \simeq
\frac{1}{3} \frac{N_{collisions}}{N_{participants}} \frac{\sigma_{pp \rightarrow c \overline{c}}(\sqrt{s}) \Lambda_{QCD}^2}{ \Delta y \ln \left( \frac{\sqrt{s}}{E_0} \right)} \sim 0.05$$ This is the average expectation. $dN_{charm}/dy$ and $dN/dy$ will however vary event by event, in an approximately Poissonian manner. Provided that charm can be reasonably reconstructed and there is a large enough event sample, ${\tilde{\rho}}$ is an [*experimental observable*]{} capable of serving as a binning class for events (see Fig. \[saltthermo\]).
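A rough numerical evaluation of Eq. (\[tilderho\]) is sketched below; every input (the $c\bar c$ cross section, $N_{collisions}/N_{participants}$, $\Delta y$, $E_0$) is an illustrative guess rather than a value taken from Ref. [@ramona], and the point is only that reasonable choices land at the few $\times 10^{-2}$ level quoted above.

```python
import math

# Illustrative evaluation of Eq. (tilderho); every input below is an assumption.
mb_to_GeVm2      = 2.568                  # 1 mb = 2.568 GeV^-2 (natural units)
sigma_ccbar      = 4.0 * mb_to_GeVm2      # sigma_{pp -> c cbar}, ~4 mb at LHC (guess)
lambda_qcd_sq    = 0.2**2                 # Lambda_QCD^2 in GeV^2
ncoll_over_npart = 4.0                    # central Pb-Pb, rough guess
delta_y, sqrt_s, E0 = 1.0, 5500.0, 1.0    # rapidity window, sqrt(s) and soft scale E0 in GeV (guesses)

rho_tilde = (ncoll_over_npart / 3.0) * sigma_ccbar * lambda_qcd_sq \
            / (delta_y * math.log(sqrt_s / E0))
print("rho_tilde ~ %.3f" % rho_tilde)     # few x 10^-2, same ballpark as the ~0.05 above
```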
As is well known, there is a connection between the speed of sound and the limiting average velocity of a hydrodynamic expansion with shock-like initial conditions, ${\left\langle \gamma_T v_T \right\rangle}_{freezeout} \sim f(N_{part}) {\left\langle c_s \right\rangle}_{\tau}^2$ where “freezeout” implies averaging over the freeze-out hypersurface [@jansbook] while the subscript $\tau$ means the average is done over the hydrodynamic evolution. For a shallow shock this result is exact [@dirkhydro1]. While knowledge of the initial geometry is needed to establish the form of $f(N_{part})$, model calculations [@shuryak] indicate that the dependence is not washed away even in steeper shocks and more complicated initial geometries.
The final transverse flow is in turn connected to the average transverse momentum ${\left\langle p_T \right\rangle} \simeq T + m {\left\langle \gamma_T v_T \right\rangle}$. Hence, the decrease of the speed of sound close to $T_c$ (Fig. \[saltcs\]) could lower ${\left\langle p_T \right\rangle}$ for more flavored events with respect to flavorless ones (Fig. \[saltthermo\] right panel). Note that this effect is [*opposite*]{} to the naturally expected positive correlation due to the correlation between $N_{collisions},N_{part}$ and $dN/dy$. The coefficient associated with this heavy flavoring effect would be straightforwardly related to non-perturbative QCD via Fig. \[saltcs\]. This effect might be easier to measure in smaller systems due to the greater event-by-event variation in ${\tilde{\rho}}$ and less background. The main requirement of such an analysis is the ability to experimentally gauge both the charm quark abundance and ${\left\langle p_T \right\rangle}$ [*event-by-event*]{}.
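The proposed binning analysis can be mocked up with a toy Monte Carlo: events with Poissonian charm and charged multiplicities, an invented (and deliberately exaggerated) softening of the effective $\langle c_s^2\rangle$ with flavoring, and a read-out of ${\left\langle p_T \right\rangle}$ per ${\tilde{\rho}}$ bin. All numbers are invented; the snippet only illustrates the event-by-event procedure, not a prediction.

```python
import numpy as np
rng = np.random.default_rng(1)

# Toy event-by-event analysis: all numbers are invented, and the softening of
# <c_s^2> with flavoring is deliberately exaggerated so the trend is visible.
n_ev   = 200000
dNdy   = rng.poisson(1600, n_ev)               # charged multiplicity per event
Ncharm = rng.poisson(15,   n_ev)               # reconstructed c-cbar pairs per event
rho    = Ncharm / (6.0 * dNdy)                 # event-wise flavoring, cf. Eq. (tilderho)

T, m     = 0.12, 0.5                           # GeV: kinetic temperature and hadron mass (toy)
cs2_eff  = 1.0 / 3.0 - 2.0 * rho               # toy flavoring-induced softening of <c_s^2>
gamma_v  = 3.0 * cs2_eff                       # <gamma_T v_T> ~ f(N_part) <c_s^2>, f set to 3
mean_pt  = T + m * gamma_v + rng.normal(0.0, 0.02, n_ev)   # event-wise <p_T> with noise

# bin events in rho and look for the anticorrelation
edges = np.quantile(rho, np.linspace(0.0, 1.0, 6))
which = np.digitize(rho, edges[1:-1])
for b in range(5):
    sel = which == b
    print("rho bin %d: <rho> = %.2e   <<p_T>> = %.4f GeV" % (b, rho[sel].mean(), mean_pt[sel].mean()))
```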
A possible “trivial” effect which would give correlations in the same sense as the effect proposed here is energy conservation (roughly, charm quarks need a lot of energy to be created, and that lowers ${\left\langle p_T \right\rangle}$). The correlation due to energy conservation should, however, be suppressed by factorization and boost-invariance: Charm quarks are created from partons with larger Bjorken $x$ [@feybj] than the mid-rapidity soft particles accounting for most of $dN/dy$, so the energy they take up comes from regions where rapidity deviates from zero. We do not expect, therefore, that energy conservation will lower ${\left\langle p_T \right\rangle}$ at mid-rapidity. Hence, the observation of a significant charm-flow correlation should be due to the collective properties of the system, [*not*]{} energy conservation.
In conclusion, we have argued that heavy quark observables seem to confirm that the matter produced in RHIC collisions is a very good fluid. This means that heavy quarks are, to a good approximation, in thermal (but not chemical) equilibrium with the rest of the medium. This makes it possible to study the medium response to the presence of heavy quark impurities. This response is calculable, to a good approximation, from lattice QCD data, and can produce non-trivial correlations between event charm abundance and flow properties.
[9]{}
G. Torrieri and J. Noronha, Phys. Lett. B [**690**]{}, 477 (2010) \[arXiv:1004.0237 \[nucl-th\]\]. Letessier J, Rafelski J (2002), Hadrons and Quark-Gluon Plasma, Cambridge Monogr. Part. Phys. Nucl. Phys. Cosmol. [**18**]{}, 1, and references therein
Scientific American, May 2006, New York Times, http://www.nytimes.com/2005/04/19/science/19liqu.html
M. Gyulassy and L. McLerran, Nucl. Phys. A [**750**]{}, 30 (2005) \[arXiv:nucl-th/0405013\]. P. F. Kolb and U. W. Heinz, arXiv:nucl-th/0305084. D. Teaney, J. Lauret and E. V. Shuryak, Phys. Rev. Lett. [**86**]{}, 4783 (2001) P. F. Kolb, U. W. Heinz, P. Huovinen, K. J. Eskola and K. Tuominen, Nucl. Phys. A [**696**]{}, 197 (2001)
S. Pratt, Nucl. Phys. A [**830**]{}, 51C (2009)
L. D. McLerran, M. Kataja, P. V. Ruuskanen and H. von Gersdorff, Phys. Rev. D [**34**]{}, 2755 (1986). G. D. Moore and D. Teaney, Phys. Rev. C [**71**]{}, 064904 (2005) A. Adare [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. Lett. [**98**]{}, 172301 (2007) J. Adams [*et al.*]{} \[STAR Collaboration\], Nucl. Phys. A [**757**]{}, 102 (2005) N. Armesto [*et al.*]{}, J. Phys. G [**35**]{}, 054001 (2008) \[arXiv:0711.0974 \[hep-ph\]\].
A. M. Polyakov, Phys. Lett. B [**72**]{} (1978) 477.
A. Bazavov [*et al.*]{}, Phys. Rev. D [**80**]{}, 014504 (2009) \[arXiv:0903.4379 \[hep-lat\]\]. D. H. Rischke, B. L. Friman, B. M. Waldhauser, H. Stoecker and W. Greiner, Phys. Rev. D [**41**]{}, 111 (1990). J. D. Bjorken and E. A. Paschos, Phys. Rev. [**185**]{}, 1975 (1969). R. P. Feynman, Phys. Rev. Lett. [**23**]{} (1969) 1415.
[^1]: Work was financially supported by the Helmholtz International Center for FAIR within the framework of the LOEWE program (Landesoffensive zur Entwicklung Wissenschaftlich-Ökonomischer Exzellenz) launched by the State of Hesse.
[^2]: Acknowledges support from DOE under Grant No. DE-FG02-93ER40764.
|
{
"pile_set_name": "ArXiv"
}
|
---
address: |
Lawrence Berkeley National Laboratory, One Cyclotron Road, MS70-319, Berkeley CA 94720, USA\
E-mail: [email protected]
author:
- Sven Soff
title: 'Meson Interferometry and the Quest for Quark-Gluon Matter'
---
HBT-radius parameters and their relevance for the quest for quark-gluon matter
==============================================================================
Correlations of identical particle pairs, also called HBT interferometry, provide important information on the space-time extension of the particle emitting source as for example in ultrarelativistic heavy ion collisions. In this case, QCD lattice calculations have predicted a transition from quark-gluon matter to hadronic matter at high temperatures. For a first-order phase transition, large hadronization times have been expected due to the associated large latent heat as compared to a purely hadronic scenario. Entropy has to be conserved while the number of degrees of freedom is reduced throughout the phase transition. Thus, one has expected a considerable jump in the magnitude of the HBT-radius parameters and the emission duration once the energy density is large enough to produce quark-gluon matter[@soffbassdumi]. The two alternative space-time evolution pictures, with and without a phase transition, are illustrated in Fig. 1 in the $z$-$t$-diagram. After the collision of the two nuclei, each with nucleon number $A$, the system is formed at some eigen-time $\tau$ (indicated by the hyperbola) and the initial expansion proceeds either in a hadronic state (left-hand side) or in a state dominated by partonic degrees of freedom, for example a quark-gluon plasma (QGP) (right-hand side). In the latter case, the formation of a mixed phase, leads to large hadronization times and thus to rather long emission durations. The freeze-out is defined as the decoupling of the particles, i.e., the space-time coordinates of their last (strong) interactions. As a consequence, HBT interferometry and in particular the excitation function of the HBT-parameters have been considered as an ideal tool to detect the existence and the properties of a transition from a thermalized quark-gluon plasma to hadrons.
The importance of late soft hadronic rescatterings for two-particle correlations at small relative momenta
==========================================================================================================
Here, we discuss calculations based on a two-phase dynamical transport model that describes the early quark-gluon plasma phase by hydrodynamics and the later stages after hadronization from the phase boundary of the mixed phase by microscopic transport of the hadrons [@soffbassdumi]. In the hadronic phase, resonance (de)excitations and binary collisions are modeled based on cross sections and resonance properties as measured in vacuum. Fig. 2a shows the pion HBT-parameters $R_i$ as a function of the transverse momentum $K_T$ as calculated from the rms-widths of the freeze-out distributions [@soffbassdumi]. $R_{\rm out}$ probes the spatial and temporal extension of the source while $R_{\rm side}$ is only sensitive to the spatial extension. Thus, the ratio $R_{\rm out}/R_{\rm side}$ gives a measure of the emission duration. Here, we focus on the fact that for all initial conditions considered (SPS or RHIC energies and critical temperatures $T_c \simeq 160\,$MeV or $T_c\simeq 200\,$MeV) the HBT-parameters appear to be rather similar. This demonstrates that a long-lived hadronic phase dominates the bulk dependencies of the pion HBT-parameters rather than the exact properties of the QCD phase transition.
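For orientation, the rms-width (space-time variance) estimators commonly used to extract the radii from a freeze-out distribution are sketched below in the out-side-long system and in the longitudinally comoving frame; whether Ref. [@soffbassdumi] uses exactly these estimators or Gaussian fits to the correlation function is not specified here, so this should be read as a generic illustration.

```python
import numpy as np

def hbt_radii(t, x, y, z, beta_T):
    """rms-width (space-time variance) estimators of R_out, R_side, R_long in the
    out-side-long system, evaluated in the longitudinally comoving frame for one
    K_T bin; x is the 'out' axis (along K_T), y 'side', z 'long', beta_T = K_T/K^0."""
    var = lambda a: np.mean((a - a.mean())**2)       # central second moment
    R_out2  = var(x - beta_T * t)                    # mixes spatial and temporal extent
    R_side2 = var(y)                                 # purely spatial
    R_long2 = var(z)
    return np.sqrt([R_out2, R_side2, R_long2])

# toy freeze-out points: Gaussian source with a finite emission duration
rng = np.random.default_rng(0)
n = 50000
t = rng.normal(10.0, 4.0, n)                         # fm/c, emission duration ~4 fm/c
x, y = rng.normal(0.0, 5.0, (2, n))
z    = rng.normal(0.0, 6.0, n)
R_out, R_side, R_long = hbt_radii(t, x, y, z, beta_T=0.7)
print("R_out/R_side = %.2f" % (R_out / R_side))      # > 1 for a long emission duration
```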
In addition, the ratio $R_{\rm out}/R_{\rm side}$ increases as a function of $K_T$ up to values of about $1.5-2$ indicating the large emission durations. However, experimental data at RHIC [@STARpreprint; @Johnson:2001zi] show a completely new behaviour (not seen at SPS). The $R_{\rm out}/R_{\rm side}$ ratio decreases and even is smaller than unity. This would hint at a rather explosive scenario with very short emission times, not compatible with a picture of a thermalized quark-gluon plasma hadronizing via a first-order phase transition to an interacting hadron gas. Rather, a shell-like emission as illustrated in Fig. 2b would be preferred. Thus, the further study of HBT interferometry will provide extremely important information e.g. on the hadronization process or the question of thermalization in ultrarelativistic heavy ion collisions.
Advantages of Kaons
===================
Besides many experimental advantages, kaons are less contaminated by long-lived resonances and escape the opaque hadronic phase more easily. Thus, $\sim 30\%$ of the kaons at $K_T\sim 1\,$GeV/c are directly emitted from the phase boundary. Complementarily, large $K_T$ kaons and their $R_{\rm out}/R_{\rm side}$ ratio exhibit a strong sensitivity to the QCD equation of state as shown in Fig. 3.
Acknowledgments {#acknowledgments .unnumbered}
===============
S.S. has been supported by the Alexander von Humboldt Foundation through a Feodor Lynen Fellowship and DOE Grant No. DE-AC03-76SF00098. S.S. is grateful to S. Bass, A. Dumitru, D. Hardtke, and S. Panitkin for fruitful collaborations leading to the results presented here.
[99]{} S. Soff, S. A. Bass, A. Dumitru, [*Phys. Rev. Lett. *]{}[**86**]{}, 3981 (2001) and references therein; S. Soff, S. Bass, D. Hardtke, S. Panitkin, [*e-print archive*]{} nucl-th/0109055. Figure drawn and kindly provided by S. Panitkin. STAR Collaboration, C. Adler [*et al.*]{}, [*Phys. Rev. Lett. *]{} [**87**]{}, 082301 (2001). PHENIX Collaboration, S. C. Johnson [*et al.*]{}, nucl-ex/0104020. D. Zschiesche, S. Schramm, H. Stöcker, W. Greiner, nucl-th/0107037.
|
{
"pile_set_name": "ArXiv"
}
|
---
author:
- |
Maria Lugaro,$^{1\ast}$ Alexander Heger,$^{1,2,3}$ Dean Osrin,$^{1}$\
Stephane Goriely,$^{4}$ Kai Zuber,$^{5}$ Amanda I. Karakas,$^{6}$\
Brad K. Gibson,$^{7,8,9}$ Carolyn L. Doherty,$^{1}$ John C. Lattanzio,$^{1}$\
Ulrich Ott$^{10}$\
title: 'Stellar origin of the $^{182}$Hf cosmochronometer and the presolar history of solar system matter'
---
Among the short-lived radioactive nuclei inferred to be present in the early solar system via meteoritic analyses there are several heavier than iron whose stellar origin has been poorly understood. In particular, the abundances inferred for $^{\mathbf 182}$Hf (half-life = 8.9 Myr) and $^{\mathbf 129}$I (half-life = 15.7 Myr) are in disagreement with each other if both nuclei are produced by the rapid neutron-capture process. Here we demonstrate that, contrary to previous assumption, the slow neutron-capture process in asymptotic giant branch stars produces $^{\mathbf 182}$Hf. This has allowed us to date the last rapid and slow neutron-capture events that contaminated the solar system material at $\sim$100 Myr and $\sim$30 Myr, respectively, before the formation of the Sun.
Radioactivity is a powerful clock for measuring cosmic times. It has provided us the age of the Earth[@wilde01], the ages of old stars in the halo of our Galaxy[@frebel07], the age of the solar system[@amelin10; @connelly12], and a detailed chronometry of planetary growth in the early solar system[@dauphas11]. The exploitation of radioactivity to measure timescales related to the presolar history of the solar system material, however, has been so far hindered by our poor knowledge of how radioactive nuclei are produced by stars. Of particular interest are three radioactive isotopes heavier than iron: $^{107}$Pd, $^{129}$I, and $^{182}$Hf, with half-lives of 6.5 Myr, 15.7 Myr, and 8.9 Myr, respectively, and initial abundances (relative to a stable isotope of the same element) in the early solar system of $^{107}$Pd/$^{108}$Pd $= 5.9 \pm 2.2 \times 10^{-5}$ [@schonbachler08], $^{129}$I/$^{127}$I = 1.19 $\pm 0.20 \times
10^{-4}$ [@brazzle99], and $^{182}$Hf/$^{180}$Hf $= 9.72 \pm 0.44 \times 10^{-5}$ [@burkhardt08]. The current paradigm is that $^{129}$I and $^{182}$Hf are mostly produced by rapid neutron captures (the $r$ process), where the neutron density is relatively high ($>10^{20}$ cm$^{-3}$) resulting in much shorter time-scales for neutron capture than $\beta$-decay[@burbidge57]. The $r$ process is believed to occur in neutron star mergers or peculiar supernova environments[@arnould07; @thielemann11]. Additionally to the $r$ process, $^{107}$Pd is also produced by slow neutron captures (the $s$ process), where the neutron density is relatively low ($<10^{13}$ cm$^{-3}$) resulting in shorter time-scales for $\beta$-decay than neutron capture, the details depending on the $\beta$-decay rate of each unstable isotope and the local neutron density[@burbidge57]. The main site of production of the $s$-process elements from Sr to Pb in the Galaxy is in asymptotic giant branch (AGB) stars[@busso99], the final evolutionary phase of stars with initial mass lower than $\sim$10 solar masses (M$_{\odot}$). Models of the $s$ process in AGB stars have predicted marginal production of $^{182}$Hf [@wasserburg94] because the $\beta$-decay rate of the unstable isotope $^{181}$Hf at stellar temperatures was estimated to be much faster [@takahashi87] than the rate of neutron capture leading to the production of $^{182}$Hf (Fig. 1).
Uniform production of $^{182}$Hf and $^{129}$I by the $r$ process in the Galaxy, however, cannot self-consistently explain their meteoritic abundances[@wasserburg96; @wasserburg06; @ott08]. The simplest equation for uniform production (hereafter UP) of the abundance of a radioactive isotope in the Galaxy, relative to a stable isotope of the same element produced by the same process, is given by $$\label{eq:eq1}
\frac{\mathrm N_{radio}}{\mathrm N_{stable}} = \frac{\mathrm P_{radio}}{\mathrm P_{stable}}
\times \frac{\tau}{\mathrm T},$$ where ${\mathrm N_{radio}}$ and ${\mathrm N_{stable}}$ are the abundances of the radioactive and stable isotopes, respectively, ${\mathrm P_{radio}/\mathrm P_{stable}}$ is the ratio of their stellar production rates, $\tau$ is the mean lifetime of the radioactive isotope, and ${\mathrm T} \sim10^{10}$ yr is the timescale of the evolution of the Galaxy. Some time during its presolar history, the solar system matter became isolated from the interstellar medium characterised by UP abundance ratios. Assuming that both $^{129}$I and $^{182}$Hf are primarily produced by the $r$ process, one obtains inconsistent isolation times using $^{129}$I/$^{127}$I or $^{182}$Hf/$^{180}$Hf: 72 Myr or 15 Myr, respectively, prior to the solar system formation[@ott08]. This conundrum led Wasserburg [*et al.*]{}[@wasserburg96] to hypothesise the existence of two types of $r$-process events. Another proposed solution is that the $^{107}$Pd, $^{129}$I, and $^{182}$Hf present in the early solar system were produced by the neutron burst that occurs during core-collapse supernovae[@meyer00a; @meyer05; @supplementary]. This does not result in elemental production, but the relative isotopic abundances of each element are strongly modified due to relatively high neutron densities with values between those of the $s$ and $r$ processes.
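As a cross-check of the uniform-production column of Table 1 below, Eq. (\[eq:eq1\]) together with the early solar system ratios (Table S2) can be evaluated directly; the short sketch below reproduces the UP ratios and isolation times for the $s$+$r$ production-rate ratios of Table 1 (the 72 Myr and 15 Myr values quoted above instead correspond to the pure $r$-process production rates of Ref. [@ott08]).

```python
import math

T_gal = 1.0e4                     # Myr, timescale of Galactic evolution
# isotope: (P_radio/P_stable from Table 1, mean life tau in Myr, early solar system ratio from Table S2)
rows = {
    "129I/127I"  : (1.25, 23.0, 1.19e-4),
    "182Hf/180Hf": (0.29, 13.0, 9.72e-5),
    "107Pd/108Pd": (0.65,  9.4, 5.9e-5),
}
for name, (P, tau, ess) in rows.items():
    up_ratio = P * tau / T_gal                  # Eq. (1)
    up_time  = tau * math.log(up_ratio / ess)   # free decay from the UP ratio down to the ESS ratio
    print("%-12s  UP ratio = %.1e   UP time = %2.0f Myr" % (name, up_ratio, up_time))
```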
We have updated model predictions of the production of $^{182}$Hf and other short-lived radioactive nuclei in stars of initial masses between 1.25M$_{\odot}$ and 25M$_{\odot}$ (Table S1). Stars of initial mass up to 8.5M$_{\odot}$ evolve onto the AGB phase and have been computed using the Monash code[@karakas12; @lugaro14; @lugaro12; @doherty14]. Stars of higher mass evolve into core-collapse supernovae and have been computed using the KEPLER code[@rauscher02; @heger10]. The estimates of $\beta$-decay rates by Takahashi & Yokoi[@takahashi87] were based on nuclear level information from the Table of Isotopes (ToI) database, which included states for $^{181}$Hf at 68 keV, 170 keV, and 298 keV. The 68 keV level was found to be responsible for a strong enhancement of the $\beta$-decay rate of $^{181}$Hf at $s$-process temperatures, preventing the production of $^{182}$Hf during the $s$ process (Fig. 1). More recent experimental evaluations[@bondarenko02], however, did not find any evidence for the existence of these states. Removing them from the computation of the half-life of $^{181}$Hf in stellar conditions results in values compatible with no temperature dependence for this isotope (Fig. S2), within the uncertainties.
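A simple consistency check on the branching quoted in the caption of Fig. 1 is that the critical neutron density at which capture and decay are equally likely must scale with the $^{181}$Hf decay rate, so the two quoted density thresholds should imply (roughly) the same capture rate per unit neutron density. The sketch below uses only the numbers quoted in that caption; the inferred $\langle\sigma v\rangle$ is a rough effective number, not a recommended $^{181}$Hf(n,$\gamma$) rate.

```python
import math

ln2 = math.log(2.0)
# (half-life in s, quoted 50% branching neutron density in cm^-3) from the Fig. 1 caption
cases = {
    "terrestrial (42.5 d)"         : (42.5 * 86400.0, 4.0e9),
    "Takahashi-Yokoi 300 MK (30 h)": (30.0 * 3600.0, 1.0e11),
}
for label, (thalf, n_crit) in cases.items():
    lam_beta = ln2 / thalf               # 181Hf beta-decay rate
    sigma_v  = lam_beta / n_crit         # implied capture rate per neutron density at 50% branching
    print("%-30s  lambda_beta = %.1e s^-1   implied <sigma v> = %.1e cm^3 s^-1"
          % (label, lam_beta, sigma_v))
# both cases give a few x 1e-17 cm^3/s, i.e. the two quoted thresholds are mutually consistent
```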
The removal of the temperature dependence of the $\beta$-decay rate of $^{181}$Hf resulted in an increase by a factor of 4 – 6 of the $^{182}$Hf abundance predicted by the AGB $s$-process models. The effect was milder on the predictions from the supernova neutron burst, with increases between 7% for the 15M$_{\odot}$ model and up to a factor of 2.6 for the 25M$_{\odot}$ model. Some production of $^{182}$Hf, as well as of $^{129}$I and $^{107}$Pd, is achieved in all the models, with $^{182}$Hf/$^{180}$Hf ranging from $\sim$0.001 to $\sim$0.3 (Fig. 2). In terms of the absolute $^{182}$Hf abundance, however, only AGB models of mass $\sim$2 – 4M$_{\odot}$ are major producers of $s$-process $^{182}$Hf in the Galaxy, due to the combined effect of the $^{13}$C($\alpha$,n)$^{16}$O and the $^{22}$Ne($\alpha$,n)$^{25}$Mg neutron sources[@lugaro14; @supplementary]. In fact, only in these stars is the production factor of the stable $^{180}$Hf with respect to its solar value well above unity.
When using Eq. 1 with the updated $s$+$r$ production rate ratio for $^{182}$Hf/$^{180}$Hf, we still have the problem that the time of isolation of the solar system material from the average interstellar medium is much shorter than the value obtained using $^{129}$I/$^{127}$I (Table 1). For the nuclei under consideration, however, it is likely that their mean lifetimes are smaller or similar to the recurrence time, $\delta$, between the events that produce them. In this case, the granularity of the production events controls the abundances and the correct scaling factor for the production ratio is the number of events, ${\mathrm T}/\delta$. Because the cosmic abundances of these nuclei result from two different types of sources, the $r$ process and the $s$ process, it necessarily follows that the precursor material of the solar system must have seen a last event (LE) of each type, i.e., a [*r-process LE*]{} and a [*s-process LE*]{}. Following each of these LE, the abundance of a radioactive isotope in the Galaxy, relatively to a stable isotope of the same element produced by the same process, is given by: $$\label{eq:eq2}
\frac{\mathrm N_{radio}}{\mathrm N_{stable}}
= \frac{\mathrm p_{radio}}{\mathrm p_{stable}} \times \frac{\delta}{\mathrm T}
\times \left(1 + \frac{e^{-\delta/\tau}}{1 - e^{- \delta/\tau}}\right),$$ where ${\mathrm p_{radio}/\mathrm p_{stable}}$ are the production ratio of each single stellar event and the second term of the sum accounts for the memory of all the previous events[@wasserburg06]. Employing simple considerations on the expansion of stellar ejecta into the interstellar medium and the resulting contamination of the Galactic disk [@meyer00a] one can derive $\delta \sim 10$ Myr for supernovae and $\sim 50$ Myr for AGB stars in the mass range 2 – 4M$_{\odot}$. Because these values are first approximations, and because the $r$ process probably does not occur in every supernova, in Table 1 we present the results obtained using $\delta$ = 10 – 100 Myr. The time of the $r$-process LE as derived from $^{129}$I/$^{127}$I is 80 – 109 Myr (Table 1), in agreement (within the uncertainties) with the 95 – 123 Myr values derived from the early solar system $^{247}$Cm/$^{235}$U ratio, which can only be produced by the $r$ process and whose initial abundance needs confirmation. This $r$-process LE time is in strong disagreement with the $r$-process LE times derived from $^{107}$Pd/$^{108}$Pd and $^{182}$Hf/$^{180}$Hf, which should be considered upper limits, given that the abundances of $^{108}$Pd and $^{180}$Hf have an important (70% to 80%) $s$-process contribution that is not accounted for when considering $r$-process events only. A natural explanation is to invoke a separate $s$-process LE for $^{107}$Pd and $^{182}$Hf. When calculating the time of this event under the approximation that the stable reference isotopes $^{108}$Pd and $^{180}$Hf are of $s$-process origin, which is correct within 30%, we derive concordant times from $^{107}$Pd and $^{182}$Hf of $\sim$10 – 30 Myr (Table 1). Our derived timeline for the solar system formation is schematically drawn in Fig. 3.
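The LE columns of Table 1 below follow from Eq. (\[eq:eq2\]) in the same way; a minimal sketch for a representative subset of rows (using the single-event production ratios of Table 1 and the early solar system ratios of Table S2) is:

```python
import math

T_gal = 1.0e4                     # Myr
# (single-event production ratio p, mean life tau in Myr, early solar system ratio)
events = {
    "129I/127I   (r)": (1.35, 23.0, 1.19e-4),
    "182Hf/180Hf (r)": (0.91, 13.0, 9.72e-5),
    "182Hf/180Hf (s)": (0.15, 13.0, 9.72e-5),
    "107Pd/108Pd (s)": (0.14,  9.4, 5.9e-5),
}
for delta in (100.0, 10.0):       # recurrence time between events, Myr
    print("delta = %.0f Myr" % delta)
    for name, (p, tau, ess) in events.items():
        x = math.exp(-delta / tau)
        le_ratio = p * (delta / T_gal) * (1.0 + x / (1.0 - x))   # Eq. (2)
        le_time  = tau * math.log(le_ratio / ess)
        print("  %-16s  LE ratio = %.1e   LE time = %3.0f Myr" % (name, le_ratio, le_time))
```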
Our timing of the $s$-process LE that contributed the final addition of elements heavier than Fe to the precursor material of the solar system has implications for our understanding of the events that led to the formation of the Sun. This is because it provides us with an upper limit of the time prior to the solar system formation when the precursor material of the solar system became isolated from the ongoing chemical enrichment of the Galaxy. This isolation timescale can represent the time it took to form the giant molecular cloud where the proto-solar molecular cloud core formed, plus the time it took to form and collapse the proto-solar cloud core itself. Interestingly, it compares well to the total lifetime (from formation to dispersal) of typical giant molecular clouds of 27$\pm$12 Myr[@murray11]. In this context, other radioactive nuclei in the early solar system of possible stellar origin (Table S2), e.g., $^{26}$Al, probably result from self-pollution of the star-forming region itself[@gounelle12; @vasileiadis13; @young14; @supplementary]. This is not possible for the radioactive nuclei of $s$-process origin considered here, because their $\sim$3M$_{\odot}$ parent stars live too long ($\sim$400 Myr) to evolve within star-forming regions. Our present scenario implies that the origin of $^{26}$Al and $^{182}$Hf in the early solar system was decoupled, in agreement with recent meteoritic analyses, which have demonstrated the presence of $^{182}$Hf in an early solar system solid that did not contain $^{26}$Al[@holst13].
[10]{}
S. A. [Wilde]{}, J. W. [Valley]{}, W. H. [Peck]{}, C. M. [Graham]{}, [*Nature*]{} [**409**]{}, 175 (2001).
A. [Frebel]{}, [*et al.*]{}, [*[Astrophys. J.]{}*]{} [**660**]{}, L117 (2007).
Y. [Amelin]{}, [*et al.*]{}, [*Earth and Planetary Science Letters*]{} [ **300**]{}, 343 (2010).
J. N. [Connelly]{}, [*et al.*]{}, [*Science*]{} [**338**]{}, 651 (2012).
N. [Dauphas]{}, M. [Chaussidon]{}, [*Annu. Rev. Earth Planet. Sci.*]{} [**39**]{}, 351 (2011).
M. [Sch[ö]{}nb[ä]{}chler]{}, R. W. [Carlson]{}, M. F. [Horan]{}, T. D. [Mock]{}, E. H. [Hauri]{}, [*[Geochim. Cosmochim. Acta]{}*]{} [**72**]{}, 5330 (2008).
R. H. [Brazzle]{}, O. V. [Pravdivtseva]{}, A. P. [Meshik]{}, C. M. [Hohenberg]{}, [ *[Geochim. Cosmochim. Acta]{}*]{} [**63**]{}, 739 (1999).
C. [Burkhardt]{}, [*et al.*]{}, [*[Geochim. Cosmochim. Acta]{}*]{} [**72**]{}, 6177 (2008).
E. M. [Burbidge]{}, G. R. [Burbidge]{}, W. A. [Fowler]{}, F. [Hoyle]{}, [*[Rev. Mod. Phys.]{}*]{} [**29**]{}, 547 (1957).
M. [Arnould]{}, S. [Goriely]{}, K. [Takahashi]{}, [*[Phys. Rep.]{}*]{} [**450**]{}, 97 (2007).
F.-K. [Thielemann]{}, [*et al.*]{}, [*Prog. Part. Nucl. Phys.*]{} [**66**]{}, 346 (2011).
M. [Busso]{}, R. [Gallino]{}, G. J. [Wasserburg]{}, [*[Ann. Rev. Astron. Astrophys.]{}*]{} [**37**]{}, 239 (1999).
G. J. [Wasserburg]{}, M. [Busso]{}, R. [Gallino]{}, C. M. [Raiteri]{}, [*[Astrophys. J.]{}*]{} [**424**]{}, 412 (1994).
K. [Takahashi]{}, K. [Yokoi]{}, [*At. Data Nucl. Data Tables*]{} [**36**]{}, 375 (1987).
G. J. [Wasserburg]{}, M. [Busso]{}, R. [Gallino]{}, [*[Astrophys. J.]{}*]{} [**466**]{}, L109 (1996).
G. J. [Wasserburg]{}, M. [Busso]{}, R. [Gallino]{}, K. M. [Nollett]{}, [*Nucl. Phys. A*]{} [**777**]{}, 5 (2006).
U. [Ott]{}, K.-L. [Kratz]{}, [*[New Astron. Rev.]{}*]{} [**52**]{}, 396 (2008).
B. S. [Meyer]{}, D. D. [Clayton]{}, [*[Space Sci. Rev.]{}*]{} [**92**]{}, 133 (2000).
B. S. [Meyer]{}, [*Chondrites and the Protoplanetary Disk*]{}, A. N. [Krot]{}, E. R. D. [Scott]{}, B. [Reipurth]{}, eds. (2005), vol. 341 of [*Astronomical Society of the Pacific Conference Series*]{}, p. 515.
See supplementary online text for further discussion.
A. I. [Karakas]{}, D. A. [Garc[í]{}a-Hern[á]{}ndez]{}, M. [Lugaro]{}, [*[Astrophys. J.]{}*]{} [**751**]{}, 8 (2012).
M. [Lugaro]{}, [*et al.*]{}, [*[Astrophys. J.]{}*]{} [**780**]{}, 95 (2014).
M. [Lugaro]{}, [*et al.*]{}, [*Meteorit. Planet. Sci*]{} [**47**]{}, 1998 (2012).
C. L. [Doherty]{}, P. [Gil-Pons]{}, H. H. B. [Lau]{}, J. C. [Lattanzio]{}, L. [Siess]{}, [*[Mon. Not. R. Astron. Soc.]{}*]{} [**437**]{}, 195 (2014).
T. [Rauscher]{}, A. [Heger]{}, R. D. [Hoffman]{}, S. E. [Woosley]{}, [*[Astrophys. J.]{}*]{} [ **576**]{}, 323 (2002).
A. [Heger]{}, S. E. [Woosley]{}, [*[Astrophys. J.]{}*]{} [**724**]{}, 341 (2010).
V. [Bondarenko]{}, [*et al.*]{}, [*Nucl. Phys. A*]{} [**709**]{}, 3 (2002).
N. [Murray]{}, [*[Astrophys. J.]{}*]{} [**729**]{}, 133 (2011).
M. [Gounelle]{}, G. [Meynet]{}, [*[Astron. Astrophys.]{}*]{} [**545**]{}, A4 (2012).
A. [Vasileiadis]{}, [Å]{}. [Nordlund]{}, M. [Bizzarro]{}, [*[Astrophys. J.]{}*]{} [**769**]{}, L8 (2013).
E. D. [Young]{}, [*Earth Planet. Sci. Lett.*]{} [**392**]{}, 16 (2014).
J. C. [Holst]{}, [*et al.*]{}, [*Proc. Natl. Acad. Sci. USA*]{} [**110**]{}, 8819–8823 (2013).
G. A. [Brennecka]{}, [*et al.*]{}, [*Science*]{} [**327**]{}, 449 (2010).
M. [Asplund]{}, N. [Grevesse]{}, A. J. [Sauval]{}, P. [Scott]{}, [*[Ann. Rev. Astron. Astrophys.]{}*]{} [ **47**]{}, 481 (2009).
[J. M. Trigo-Rodríguez, [*et al.*]{}, [*Meteorit. Planet. Sci*]{} [**44**]{}, 627 (2009).]{}
[A. Takigawa, [*et al.*]{}, [*Astrophys. J.*]{} [**688**]{}, 1382 (2008).]{}
[A. N. Krot, [*et al.*]{}, [*Astrophys. J.*]{} [**672**]{}, 713 (2008).]{}
[K. Makide, [*et al.*]{}, [*Astrophys. J*]{} [**733**]{}, L31 (2011).]{}
[C. Vockenhuber, [*et al.*]{}, [*Phys. Rev. C*]{} [**75**]{}, 015804 (2007).]{}
[K. Wisshak, [*et al.*]{}, [*Phys. Rev. C*]{} [**73**]{}, 045807 (2006).]{}
[C. Arlandini, [*et al.*]{}, [*Astrophys. J.*]{} [**525**]{}, 886 (1999).]{}
[S. Goriely, [*Astron. Astrophys.*]{} [**342**]{}, 881 (1999).]{}
[J. N. Ávila, [*et al.*]{}, [*Astrophys. J.*]{} [**744**]{}, 49 (2012).]{}
[T. Rauscher, [*Astrophys. J.*]{} [**755**]{}, L10 (2012).]{}
[M.-C. Liu, M. Chaussidon, G. Srinivasan, K. D. McKeegan, [*Astrophys. J.*]{} [**761**]{}, 137 (2012).]{}
[R. Mishra, M. Chaussidon, K. Marhas, [*Proceedings of Science (NIC XII)085*]{} (2012).]{}
[**Acknowledgements**]{} We thank Martin Asplund for providing us updated early solar system abundances, Daniel Price and Christoph Federrath for comments, and Marco Pignatari for discussion. The data described in the paper are presented in Fig. S2 and Table S1. M.L., A.H., and A.I.K. are ARC Future Fellows on projects FT100100305, FT120100363, and FT10100475, respectively. This research was partly supported under Australian Research Council’s Discovery Projects funding scheme (project numbers DP0877317, DP1095368 and DP120101815). U.O. thanks the Max Planck Institute for Chemistry for use of its IT facilities.\
\
[**Supplementary Materials**]{}
www.sciencemag.org
Supplementary text
Figs. S1 and S2
Tables S1 and S2
References ([*35-46*]{})
----------------------- ------------------------------------------ ---------------------- --------- ------------------------------------------ ---------------------- --------------------
Ratio ${\mathrm P_{radio}/\mathrm P_{stable}}$ UP ratio UP time ${\mathrm p_{radio}/\mathrm p_{stable}}$ LE ratio LE time
(Myr) \[$\delta$\] (Myr)
$^{247}$Cm/$^{235}$U $0.40$ $8.8 \times 10^{-3}$ $90$ $0.40(r)$ $3.8 \times 10^{-2}$ $123[100]$
$1.1 \times 10^{-2}$ $95[10]$
$^{129}$I/$^{127}$I $1.25$ $2.9 \times 10^{-3}$ $73$ $1.35(r)$ $1.4 \times 10^{-2}$ $109[100]$
$3.8 \times 10^{-3}$ $80[10]$
$^{182}$Hf/$^{180}$Hf $0.29$ $3.8 \times 10^{-4}$ $18$ $0.91(r)$ $9.1 \times 10^{-3}$ $59[100]$
$1.7 \times 10^{-3}$ $37[10]$
$0.15(s)$ $1.5 \times 10^{-3}$ $36[100]$
$2.8 \times 10^{-4}$ $14[10]$
$^{107}$Pd/$^{108}$Pd $0.65$ $6.1 \times 10^{-4}$ $22$ $2.09(r)$ $2.1 \times 10^{-2}$ $55[100]$
$3.2 \times 10^{-3}$ $38[10]$
$0.14(s)$ $1.4 \times 10^{-3}$ $30[100]$
$2.1 \times 10^{-4}$ $12[10]$
----------------------- ------------------------------------------ ---------------------- --------- ------------------------------------------ ---------------------- --------------------
\
Table 1: Production ratios and inferred timescales. ${\mathrm P_{radio}}/{\mathrm P_{stable}}$ are the ratios of the stellar production rates ($s$+$r$ processes), ${\mathrm p_{radio}}/{\mathrm p_{stable}}$ are the production ratios of each single stellar event ($s$ or $r$ process, as indicated). The UP and LE ratios are calculated using Eq. 1 and Eq. 2, respectively. For $^{247}$Cm/$^{235}$U in Eq. 1, T is substituted with the mean lifetime of $^{235}$U ($\tau$=1020 Myr), and in Eq. 2, $\delta/$T is removed and ${\mathrm p_{radio}/\mathrm p_{stable}}$ is multiplied by the ratio of the summation terms derived for $^{247}$Cm and for $^{235}$U. The UP and LE times are the time intervals required to obtain the initial solar system ratio starting from the UP and LE ratios, respectively. For the initial $^{247}$Cm/$^{235}$U we assume the average of the range given by Brennecka [*et al.*]{}[@brennecka10] $=(1.1 - 2.4) \times 10^{-4}$. Meteoritic and nuclear uncertainties result in error bars on the reported times of the order of 10 Myr [@supplementary].
![\[fig:path\] Section of the nuclide chart including Hf, Ta, and W, showing stable isotopes as grey boxes and unstable isotopes as white boxes (with their terrestrial half-lives). Neutron-capture reactions are represented as black arrows, $\beta$-decay as red arrows, and the radiogenic $\beta$-decay of $^{182}$Hf as a green arrow. The production of $^{182}$Hf is controlled by the half-life of the unstable $^{181}$Hf, which preceeds $^{182}$Hf in the $s$-process neutron-capture isotopic chain. The probability of $^{181}$Hf to capture a neutron to produce $^{182}$Hf is $>50$% for neutron densities $>4 \times 10^{9}$ cm$^{-3}$ or $>10^{11}$ cm$^{-3}$, using a $\beta$-decay rate of 42.5 days (terrestrial) or of 30 hours at 300 million K, as according to Takahashi & Yokoi[@takahashi87], respectively.](Figure1.pdf)
![\[fig:production\] Stellar model predictions as function of the initial stellar mass. The production ratios of the radioactive isotopes of interest with respect to the stable reference isotope of the same element are shown in panel A, the production factors with respect to the initial solar composition of each stable reference isotope are shown in panel B. Stars below 10M$_{\odot}$ evolve through the AGB phase and associated $s$ process, while stars above 10M$_{\odot}$ evolve through a core-collapse supernova and associated neutron burst. All the models were calculated using no temperature dependence for the half-life of $^{181}$Hf and with initial solar abundances updated from Asplund [*et al.*]{}[@asplund09], corresponding to a metallicity 0.014.](Figure2.pdf)
![\[fig:timeline\] Schematic timeline of the solar system formation. The $r$-process LE contributed $^{129}$I to the early solar system, the $s$-process LE $^{107}$Pd and $^{182}$Hf, and self-pollution of the star-forming region the lighter, shorter lived radionuclides, e.g., $^{26}$Al.](Figure3.pdf)
[**Supplementary text**]{}\
A single local stellar polluter has been traditionally invoked to have possibly injected most of the radioactive isotopes into the early solar system ([*16, 19*]{}) and we briefly discuss here this scenario in the light of our models. In a simple pollution mixing model we have two free parameters: the dilution factor $f$ of the stellar ejecta into the original pre-solar cloud, which is related to the distance of the polluter, and the time delay $\Delta$t between ejection of the radioactive isotopes from the star and the formation of the first solids in the solar system ([*16, 19, 23*]{}). We set these two parameters to match the abundances of $^{26}$Al and $^{41}$Ca and plot the results in Fig. S1. Models of mass lower than 6M$_{\odot}$ eject too little $^{26}$Al to result in realistic (1/$f > 100$) dilution factors. On the other hand, models of AGB stars of masses 6M$_{\odot}$ - 8.5M$_{\odot}$ produce enough $^{26}$Al via proton captures at the base of their hot convective envelope to result in realistic solutions ([*23, 35*]{}), which include $^{107}$Pd and $^{182}$Hf. In this case, we would have to make the assumptions that $^{53}$Mn and $^{129}$I came from uniform production (UP) and $^{36}$Cl from [*in situ*]{} nucleosynthesis. While providing also $^{36}$Cl and $^{129}$I, all the supernova models overproduce $^{107}$Pd and $^{182}$Hf by factors from 3 to 10 of the solar abundances, as well as resulting in three orders of magnitude more $^{53}$Mn than observed. This is a well known problem, which has been addressed in the past by invoking a mass cut below which the supernova material is assumed to not have been incorporated in the early solar system ([*18*]{}), or including mixing and fall back ([*36*]{}). For example, when assuming an injection mass cut at $\sim$2.1M$_{\odot}$ and $\sim$2.7M$_{\odot}$ in our 18M$_{\odot}$ and 25M$_{\odot}$ models, respectively, together with no mixing of the ejecta, we reproduce the observed $^{53}$Mn/$^{55}$Mn ratio leaving all the other isotopes unchanged. In conclusion, we found potential solutions for the early solar system radioactivities when considering a single stellar polluter of mass $>$5M$_{\odot}$. This scenario comes with a series of problems, however: Stars of mass $<$10M$_{\odot}$ have evolutionary time scales that are overly long ($>$30 Myr), and stars of mass $>$10M$_{\odot}$ produce too much $^{53}$Mn, unless a mass cut is assumed, below which the supernova material should not have been incorporated in the early solar system ([*18*]{}). Overall, the predicted $^{60}$Fe/$^{56}$Fe are above the observed upper limit and there are O isotopic effects larger than 10% correlated to the presence of $^{26}$Al, which are not observed ([*37, 38*]{}).\
The $s$ process in AGB stars occurs in the He-rich layer located between the He- and the H-burning shells. The $^{13}$C($\alpha$,n)$^{16}$O neutron source reaction is activated in the radiative layer located below the ashes of H burning where the temperature reaches $\sim$100 MK. The $^{22}$Ne($\alpha$,n)$^{25}$Mg neutron source reaction, instead, is activated in the convective region associated with recurrent He-burning episodes where the temperature reaches $\sim$300 MK. Only AGB models of mass $\sim$3M$_{\odot}$ are significant producers of $s$-process $^{182}$Hf in the Galaxy because in these stars the $s$-process is driven by the $^{13}$C($\alpha$,n)$^{16}$O neutron source, which generates the largest total number of neutrons of all the models and efficiently produces the heaviest $s$-process elements. This leads to high enhancements of $^{180}$Hf, which in turn leads to high $^{182}$Hf production during the secondary neutron burst generated by the $^{22}$Ne($\alpha$,n)$^{25}$Mg neutron source, with lower total number of neutrons but higher neutron densities than those produced by the $^{13}$C($\alpha$,n)$^{16}$O neutron source. In AGB stars of mass lower than $\sim$3M$_{\odot}$ the $^{22}$Ne neutron source is not efficiently activated, whereas in higher mass AGB stars the $^{13}$C neutron source is not efficiently activated. The four to sixfold increase in the $s$-process production of $^{182}$Hf obtained using our new decay rate of $^{181}$Hf resolves the problem highlighted by Vockenhuber [*et al.*]{} ([*39*]{}) and Wisshak [*et al.*]{} ([*40*]{}) of a non-smooth even-isotope $r$-process residual curve in correspondence with $^{182}$W, e.g., Figure 6 of Vockenhuber [*et al.*]{} ([*39*]{}). The $r$-process residuals are calculated by subtracting from the solar abundances the $s$-process contributions predicted by the models (normalised to an $s$-only isotope, typically $^{150}$Sm ([*41, 42*]{})). Following this procedure, we obtain from our 3M$_{\odot}$ model an $s$-process contribution to $^{182}$W of 74% when using our new decay rate of $^{181}$Hf, as compared to a value of 60% computed with the old decay rate. This is due to the enhanced radiogenic component from the $s$-process $^{182}$Hf, which shifts the $r$-process abundance of $^{182}$W from 0.015 down to 0.0095 (using solar abundances normalised to Si=10$^{6}$), in better agreement with the neighbouring even nuclei. Compared with the other possible solution to this problem, namely decreasing the neutron-capture cross section of $^{182}$W by $\sim$30%, our solution has the advantage of not compromising the match between the $s$-process AGB models and the $s$-process $^{182}$W/$^{184}$W ratio observed in the meteoritic stardust silicon carbide (SiC) grain LU-41 that originated from an AGB star ([*43*]{}). This is because the $^{180}$Hf/$^{184}$W ratio measured in this grain is 0.274, i.e., roughly 5 times lower than predicted by the AGB models, which means that Hf did not condense as much as W in the grain, resulting in a minimal radiogenic contribution of $^{182}$Hf to $^{182}$W.\
The times derived in Table 1 are affected by the uncertainties related to the ratios measured in early solar system. The impact of these uncertainties is, however, relatively small: roughly $\pm 4$ Myr for $^{129}$I/$^{127}$I = 1.19 $\pm 0.20 \times
10^{-4}$ and $^{107}$Pd/$^{108}$Pd $= 5.9 \pm 2.2 \times 10^{-5}$, and $\pm 0.6$ Myr for $^{182}$Hf/$^{180}$Hf $= 9.72 \pm 0.44 \times 10^{-5}$. For $^{247}$Cm/$^{235}$U, using the observed lower and upper limits of $=(1.1 - 2.4) \times 10^{-4}$ results in changes of $+11$ and $-7$ Myr, respectively. These times, as well as the UP and LE ratios, are also affected by the uncertainties related to ${\mathrm P_{radio}/\mathrm P_{stable}}$ (in Eq. 1) and ${\mathrm p_{radio}/\mathrm p_{stable}}$ (in Eq. 2), which depend mostly on the nuclear physics behind the $s$-process predictions. A conservative analysis of these uncertainties does not change the main conclusion of our study. The ${\mathrm P_{129}/\mathrm P_{127}}$ ratio[^1] suffers from the uncertainties related to the $r$-process residual of $^{129}$Xe, the decay daughter of $^{129}$I. These can be taken from Goriely ([*42*]{}) and result in an uncertainty of $\pm 5$ Myr in the UP time. The small ($\sim10^{-4}$) ${\mathrm p_{129}/\mathrm p_{127}}$($s$) ratio does not suffer large uncertainties because the production of $^{129}$I in $s$-process conditions is prevented by the $^{128}$I nucleus having a very short half-life of $\sim 25$ minutes. The ${\mathrm p_{129}/\mathrm p_{127}}$($r$) ratio is derived from the $r$-process residuals of $^{129}$Xe and $^{127}$I, which mostly depend on their neutron-capture cross sections. These are given with uncertainties up to $\sim$30% and $\sim$50%, respectively ([*44*]{}), which results in uncertainties in the derived $r$-process LE times of up to $\pm 14$ Myr. As discussed in the paper, the ${\mathrm p_{182}/\mathrm p_{180}}$($s$) ratio depends mostly on the temperature dependence of the half-life of $^{181}$Hf. When using our current lower limit (from Fig. S2, excluding the 68, 170, 298 keV states) we derive ${\mathrm p_{182}/\mathrm p_{180}}$($s$)=0.11, which results in an $s$-process LE time of 10 Myr and 32 Myr for $\delta=$ 10 and 100 Myr, respectively. The uncertainties in the ${\mathrm p_{182}/\mathrm p_{180}}$($r$) ratio mostly derive from the neutron-capture cross sections of $^{180}$Hf and $^{182}$W, which are up to $\sim$40% each, and the magnitude of the radiogenic effect of the $s$-process $^{182}$Hf on $^{182}$W, for which we have derived above an error bar of $\sim$40%. These result in an uncertainty of up to $\pm 9$ Myr in the $r$-process LE time. Uncertainties on the UP time are of similar size. Finally, the neutron-capture cross sections of $^{107}$Pd, $^{108}$Pd, and $^{107}$Ag are given with maximum uncertainties of $\sim$10%, $\sim$25%, and $\sim$45%, respectively, which change the ${\mathrm p_{107}/\mathrm p_{108}}$($s$) ratio by $\sim$35% at most, resulting in an uncertainty of up to $\pm 3$ Myr in the $s$-process LE time, and of up to $\pm 5$ Myr in the $r$-process LE time. Uncertainties on the UP time are of similar magnitude.\
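These quoted shifts follow from the free-decay relation $t = \tau\ln(R_{\rm prod}/R_{\rm ESS})$, which gives $\Delta t \simeq \tau\,\Delta R/R$ for a shift $\Delta R$ of the measured early solar system ratio; the short sketch below reproduces, within rounding, the error bars quoted above.

```python
import math

# t = tau ln(R_prod / R_ess)  =>  a shift dR of the measured ratio shifts t by ~ tau dR/R
checks = {
    "129I/127I"  : (23.0, 1.19e-4, 0.20e-4),
    "107Pd/108Pd": ( 9.4, 5.9e-5,  2.2e-5),
    "182Hf/180Hf": (13.0, 9.72e-5, 0.44e-5),
}
for name, (tau, R, dR) in checks.items():
    print("%-12s  time uncertainty ~ +/- %.1f Myr" % (name, tau * dR / R))

# asymmetric case of 247Cm/235U: limits (1.1 - 2.4)e-4 around the adopted mean 1.75e-4
tau_cm, lo, mean, hi = 22.5, 1.1e-4, 1.75e-4, 2.4e-4
print("247Cm/235U    +%.0f / -%.0f Myr" % (tau_cm * math.log(mean / lo), tau_cm * math.log(hi / mean)))
```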
These radioactive nuclei are lighter than those discussed in the paper and their cosmic abundances are not made by the $s$ and $r$ processes. Aluminum-26 is made via proton captures on $^{25}$Mg, $^{36}$Cl and $^{41}$Ca via the capture of a neutron by $^{35}$Cl and $^{40}$Ca, respectively, $^{53}$Mn via explosive nucleosynthesis, and $^{60}$Fe via the neutron-capture chain $^{58}$Fe(n,$\gamma$)$^{59}$Fe(n,$\gamma$)$^{60}$Fe, where $^{59}$Fe is unstable with a half-life of 44.51 days. When we considered a possible [*supernova LE*]{} for the origin of $^{26}$Al, $^{36}$Cl, and $^{41}$Ca we obtained LE times negative or lower than $\sim$1 Myr. The abundances of these radioactive nuclei in the early solar system more likely resulted from self-pollution of the star forming region itself ([*29, 30, 31*]{}). A supernova LE for the origin of $^{53}$Mn is more plausible because it results in LE times very similar to those derived for the $s$-process LE and would also produce $^{60}$Fe/$^{56}$Fe $\sim6 \times 10^{-9}$, which is within the range observed. The isolation timescale derived from a supernova LE, however, is not robust because these nuclei can also be produced by supernovae occurring within the star-forming region.

Figure S1: Results from the model that assumes injection from a single stellar source. The required dilution factor (1/$f$) and time delay ($\Delta$t) are indicated in each panel, together with the predicted $^{60}$Fe/$^{56}$Fe ratio. In the 6M$_{\odot}$ model, the ratio relative to $^{129}$I/$^{127}$I is offscale many orders of magnitude below unity and the ratio relative to $^{53}$Mn/$^{55}$Mn is zero.

Figure S2: Three different calculations of the half-life of $^{181}$Hf: same as Takahashi & Yokoi ([*14*]{}) (black line), same but removing the 68 keV level (blue line), and same but removing the 68 keV, 170 keV, and 298 keV levels (red line). The lower panel is the same as the upper panel, but including the minimum and maximum half-lives for each computation (dotted lines) allowed when assuming a 0.5 uncertainty on the unknown transition probabilities ([*42*]{}). Changing the value of the electron density N$_e$ does not affect the results.
Table S1: Selected yields, all in units of M$_{\odot}$ (“1.21E-07” stands for $1.21 \times 10^{-7}$ and so on.)
---------------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
Initial stellar mass 1.25 1.8 3 4 5 6 6.5 8 8.5
Total mass ejected 0.69 1.21 2.32 3.21 4.13 5.09 5.54 6.97 7.35
$^{26}$Al 1.21E-07 3.31E-07 2.19E-07 2.06E-07 4.34E-07 1.37E-06 4.09E-06 1.82E-05 3.86E-05
$^{27}$Al 3.90E-05 7.04E-05 1.42E-04 1.95E-04 2.62E-04 3.18E-04 3.40E-04 4.43E-04 4.66E-04
$^{35}$Cl 2.38E-06 4.20E-06 8.01E-06 1.12E-05 1.45E-05 1.78E-05 1.94E-05 2.49E-05 2.58E-05
$^{36}$Cl 8.62E-10 2.26E-09 1.11E-08 9.04E-09 1.07E-08 1.08E-08 9.12E-09 1.02E-08 1.11E-08
$^{40}$Ca 4.00E-05 7.05E-05 1.35E-04 1.89E-04 2.43E-04 2.99E-04 3.26E-04 4.18E-04 4.33E-04
$^{41}$Ca 1.54E-09 5.51E-09 2.07E-08 2.05E-08 2.60E-08 2.65E-08 1.96E-08 1.90E-08 2.56E-08
$^{53}$Mn 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00
$^{55}$Mn 8.78E-06 1.56E-05 3.07E-05 4.17E-05 5.35E-05 6.58E-05 7.20E-05 9.24E-05 9.62E-05
$^{56}$Fe 7.66E-04 1.35E-03 2.58E-03 3.60E-03 4.64E-03 5.72E-03 6.23E-03 7.80E-03 8.30E-03
$^{60}$Fe 3.15E-07 2.82E-07 2.83E-08 5.30E-07 2.23E-06 3.83E-06 4.31E-06 6.43E-06 5.18E-06
$^{107}$Pd 4.82E-11 2.04E-09 8.06E-09 3.46E-09 8.40E-12 1.19E-11 5.03E-11 5.74E-11 1.09E-10
$^{108}$Pd 9.70E-10 1.44E-08 5.75E-08 2.68E-08 4.11E-09 5.07E-09 5.61E-09 7.12E-09 7.33E-09
$^{127}$I 2.42E-09 8.19E-09 2.25E-08 1.88E-08 1.43E-08 1.77E-08 1.91E-08 2.45E-08 2.32E-08
$^{129}$I 1.53E-16 4.61E-14 5.98E-12 5.36E-12 2.03E-14 2.76E-14 5.15E-14 4.44E-14 1.46E-13
$^{180}$Hf 1.77E-10 5.27E-09 1.63E-08 8.53E-09 1.07E-09 1.32E-09 1.43E-09 1.82E-09 1.89E-09
$^{182}$Hf 1.82E-12 1.06E-10 2.46E-09 2.42E-09 1.90E-11 1.77E-11 1.18E-11 9.68E-12 7.62E-12
---------------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
\
Table S1: continues.
---------------------- ---------- ---------- ---------- ---------- ---------------
Initial stellar mass 12 15 18 25 Solar system
Total mass ejected 10.6 13.3 16.3 23.1 mass fraction
$^{26}$Al 9.32E-06 2.21E-05 3.20E-05 6.54E-05
$^{27}$Al 1.58E-03 3.83E-03 7.05E-03 1.49E-02 5.65E-05
$^{35}$Cl 9.39E-05 1.78E-04 3.75E-04 2.27E-03 3.50E-06
$^{36}$Cl 7.70E-07 1.71E-06 3.10E-06 2.66E-05
$^{40}$Ca 3.52E-03 5.73E-03 7.83E-03 1.33E-02 5.88E-05
$^{41}$Ca 2.28E-06 4.58E-06 9.78E-06 5.89E-05
$^{53}$Mn 9.26E-05 1.34E-04 1.76E-04 2.22E-04
$^{55}$Mn 5.39E-04 8.30E-04 1.05E-03 1.24E-03 1.29E-05
$^{56}$Fe 8.26E-02 1.41E-01 1.54E-01 1.59E-01 1.12E-03
$^{60}$Fe 3.25E-05 9.08E-05 1.36E-04 1.14E-04
$^{107}$Pd 2.54E-10 4.90E-10 5.31E-10 1.26E-09
$^{108}$Pd 1.03E-08 1.27E-08 1.54E-08 2.35E-08 9.92E-10
$^{127}$I 3.39E-08 4.07E-08 4.70E-08 6.16E-08 3.50E-09
$^{129}$I 3.36E-10 5.97E-10 1.22E-09 1.44E-09
$^{180}$Hf 2.69E-09 3.44E-09 4.54E-09 6.04E-09 2.52E-10
$^{182}$Hf 1.31E-10 1.61E-10 6.64E-10 1.18E-09
---------------------- ---------- ---------- ---------- ---------- ---------------
\
Table S2: Radioisotopes of potential stellar origin in the early solar system. $\tau$ is the mean life time of each isotope in Myr. In the case of $^{247}$Cm also the reference isotope $^{235}$U is radioactive, with $\tau$ = 1020 Myr. The early solar system ratios are taken from Dauphas & Chaussidon ([*5*]{}), except for $^{41}$Ca/$^{40}$Ca, which is updated according to Liu [*et al.*]{} ([*45*]{}), $^{247}$Cm/$^{235}$U reported directly from Brennecka [*et al.*]{} ([*33*]{}), and $^{60}$Fe/$^{56}$Fe, which is currently debated and for which we give the range discussed in detail by Mishra, Chaussidon & Marhas ([*46*]{}).
------------ ------------- ------------ ----------------------------------
Isotope $\tau$(Myr) Reference Early solar
isotope system ratio
$^{247}$Cm $22.5$ $^{235}$U $(1.1 - 2.4) \times 10^{-4}$
$^{129}$I $23$ $^{127}$I $(1.19 \pm 0.20) \times 10^{-4}$
$^{182}$Hf $13$ $^{180}$Hf $(9.72 \pm 0.44) \times 10^{-5}$
$^{107}$Pd $9.4$ $^{108}$Pd $(5.9 \pm 2.2) \times 10^{-5}$
$^{53}$Mn $5.3$ $^{55}$Mn $(6.28 \pm 0.66) \times 10^{-6}$
$^{60}$Fe $3.8$ $^{56}$Fe $10^{-9}$ - $10^{-6}$
$^{26}$Al $1.03$ $^{27}$Al $(5.23 \pm 0.13) \times 10^{-5}$
$^{36}$Cl 0.43 $^{35}$Cl $(17.2 \pm 2.5) \times 10^{-6}$
$^{41}$Ca 0.15 $^{40}$Ca $\sim4.2 \times 10^{-9}$
------------ ------------- ------------ ----------------------------------
\
[^1]: Hereafter ${\mathrm P_{129}/\mathrm P_{127}}$=${\mathrm {P_{radio}(^{129}I)/P_{stable}(^{127}I)}}$; ${\mathrm p_{129}/\mathrm p_{127}}$=${\mathrm {p_{radio}(^{129}I)/p_{stable}(^{127}I)}}$, and so on.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We discuss dynamical pairing correlations in the context of configuration mixing of projected self-consistent mean-field states, and the origin of a divergence that might appear when such calculations are done using an energy functional in the spirit of a naive generalized density functional theory.'
address:
- 'CEA-Saclay DSM/DAPNIA/SPhN, F-91191 Gif sur Yvette Cedex, France'
- 'National Superconducting Cyclotron Laboratory and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA'
author:
- 'M. Bender[^1]'
- 'T. Duguet'
title: PAIRING CORRELATIONS BEYOND THE MEAN FIELD
---
Introduction {#sec:bmf}
============
Self-consistent mean-field models are one of the standard approaches in nuclear structure theory, see Ref.[@RMP] for a recent review. For heavy nuclei, they are the only microscopic method that can be systematically applied on a large scale.
Over the last few years, we were involved in the development of a method that adds long-range correlations to self-consistent mean-field states by projection after variation and variational configuration mixing within the generator coordinate method (GCM). A pedagogical introduction to our method is given in Ref.[@Ben05b]; here we sketch some of our recent work on those aspects of this method that concern the treatment of pairing correlations.
Theoretical framework
=====================
Self-consistent mean-field with pairing
---------------------------------------
For an introduction into the treatment of pairing correlations in a HFB framework, see for example[@Man75a; @Rin80a; @Dob84a] and references given therein. We will give only those details that are relevant for the further discussion.
All results given below were obtained with the effective Skyrme interaction SLy4[@SLyx] and a density-dependent pairing interaction with a soft cutoff at 5 MeV above and below the Fermi energy as described in Ref.[@Rig99a]. The HFB equations are solved using the two-basis method introduced in Ref.[@Gal94a]. All we need to know about an HFB state in what follows is that in its canonical basis it is given by $$| {{\rm {HFB}}} \rangle
= \prod_{\mu > 0} ( u_\mu + v_\mu {\hat{a}^\dagger}_{\bar\mu} {\hat{a}^\dagger}_{\mu} )
| 0 \rangle
in terms of the occupation amplitudes $u_\mu$, $v_\mu$ (with $v_\mu^2$ the occupation probabilities) and the canonical-basis creation operators acting on the particle vacuum.
HFB states are not eigenstates of the particle-number operator. In condensed matter, for which HFB theory was originally designed, this is not much of a problem, as the number of particles is usually huge. In nuclear physics, where the particle number is quite small, this causes two problems: On the one hand, the standard HFB treatment artificially breaks down when the density of single-particle levels around the Fermi energy is below a critical value, leading to a HF state without pairing correlations. On the other hand, a HFB state mixes the wave functions of different nuclei as it is spread in particle-number space, with a width that is proportional to the dispersion of the particle number[@Flo97a].
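For the state above, the particle-number mean and dispersion take the simple canonical-basis forms $\langle\hat{N}\rangle = 2\sum_{\mu>0}v_\mu^2$ and $\langle\hat{N}^2\rangle-\langle\hat{N}\rangle^2 = 4\sum_{\mu>0}u_\mu^2 v_\mu^2$. The following minimal sketch, with an invented BCS-like occupation profile (not taken from an actual SLy4+pairing calculation), illustrates the size of this spread for a smeared Fermi surface.

```python
import numpy as np

# invented BCS-like occupation profile around a Fermi level (not an actual SLy4 result)
eps   = np.linspace(-20.0, 20.0, 60)                    # single-particle energies (MeV) w.r.t. Fermi energy
Delta = 1.5                                             # assumed pairing gap (MeV)
v2    = 0.5 * (1.0 - eps / np.sqrt(eps**2 + Delta**2))  # occupation numbers v_mu^2
u2    = 1.0 - v2

N_mean = 2.0 * v2.sum()                                 # <N>           = 2 sum v^2
N_disp = 4.0 * (u2 * v2).sum()                          # <N^2> - <N>^2 = 4 sum u^2 v^2
print("<N> = %.1f,  particle-number spread sigma_N = %.2f" % (N_mean, np.sqrt(N_disp)))
```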
Particle-number projection {#subsec:proj}
==========================
Variation after projection (VAP)[@She00a] provides a rigorous solution to both problems. For various, mainly technical, reasons, however, this approach is not yet widely used (see[@Sto06a] for an exception). We use a different approach instead, first outlined in Ref.[@Hee93a]. In a first step, we complement the HFB equations with the Lipkin-Nogami (LN) method, which provides a numerically simple approximation to projection before variation; see Ref.[@Gal94a] and references therein for details. The LN procedure enforces the presence of pairing correlations also in the weak pairing regime at small level density. The LN method gives a correction to the total energy whose quality has been repeatedly questioned. We do not make use of this correction term, but project in a second step on exact eigenstates of the particle-number operator $\hat{N}$, with eigenvalue $N_0$, applying the particle-number projection operator $$\label{eq:proj:number}
\hat{P}_{N_0}
= \frac{1}{2 \pi}
\int_{0}^{2 \pi} \! d \varphi \;
e^{i \varphi (\hat{N}-N_0)}
.$$ The operator $e^{i \varphi \hat{N}}$ rotates the HFB state in a $U(1)$ gauge space $$\label{eq:HFB:rot}
| {{\rm {HFB}}} (\varphi) \rangle
= e^{i \varphi \hat{N}} | {{\rm {HFB}}} \rangle
= \prod_{\mu > 0} ( u_\mu + v_\mu e^{2i\varphi} {\hat{a}^\dagger}_{\bar\mu} {\hat{a}^\dagger}_{\mu} )
| 0 \rangle
,$$ while $e^{-i \varphi N_0}$ is a weight function. For states with even particle number $N_0$ the integration interval can be reduced to $[0,\pi]$. We discretize the integrals over the gauge angle with a simple $L$-point trapezoidal formula $$\label{eq:fomenko}
\frac{1}{\pi} \int_{0}^{\pi} d{\varphi} \, e^{i\varphi(\hat{N}-N_0)}
\Rightarrow \frac{1}{L} \sum_{l=1}^{L} e^{i \frac{\pi l}{L}(\hat{N}-N_0)}
.$$ As was shown by Fomenko[@Fom70a], this simple scheme exactly eliminates from an HFB state all components which differ from the desired particle number $N_0$ by up to $\pm 2(L-1)$ particles. Although the near-Gaussian distribution of particle numbers around the constrained value has a sizeable spread, it is small enough that already small values of $L$, ranging from 5 in light nuclei to 13 in heavy ones, are sufficient for the numerical convergence of the integrals over $\varphi$.
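To make the discretized projection concrete, the following short sketch (our own illustration, written with made-up occupation numbers rather than an actual HFB solution) evaluates the Fomenko sum for the norm kernel in the canonical basis and compares it with the exact projected norm obtained by expanding the pair-generating polynomial; the error drops to machine precision as soon as $2L$ exceeds the spread of possible particle numbers.

```python
import numpy as np

# Illustrative sketch (not from the paper): L-point Fomenko discretization of
# the particle-number projector for a BCS-like product state, written in the
# canonical basis.  The occupations v2 are arbitrary toy values.
rng = np.random.default_rng(0)
v2 = rng.uniform(0.0, 1.0, size=12)          # v_mu^2 for 12 canonical pairs
u2 = 1.0 - v2
N0 = 12                                      # project on N0 = 12 particles

def norm_kernel(phi):
    """<HFB | HFB(phi)> = prod_{mu>0} (u_mu^2 + v_mu^2 exp(2 i phi))."""
    return np.prod(u2 + v2 * np.exp(2j * phi))

def fomenko_projected_norm(L):
    """<HFB| P_{N0} |HFB> from the L-point trapezoidal sum over [0, pi]."""
    phis = np.pi * np.arange(1, L + 1) / L
    return np.mean([np.exp(-1j * phi * N0) * norm_kernel(phi) for phi in phis])

# Exact reference: expand prod_mu (u_mu^2 + v_mu^2 x), where x marks one pair
# (= two particles); the coefficient of x^(N0/2) is the exact projected norm.
poly = np.array([1.0])
for a, b in zip(u2, v2):
    poly = np.convolve(poly, [a, b])
exact = poly[N0 // 2]

for L in (3, 5, 7, 9):                       # exact once 2L exceeds the spread
    print(L, abs(fomenko_projected_norm(L).real - exact))
```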
In practice, we constrain the HFB states in the mean-field calculations to the same (integer) particle number that we project on afterwards. This is, however, not a necessary condition. In a projection-before-variation approach, the particle number of the intrinsic state has no physical meaning, and will usually take a non-integer value close to, but not identical with, the particle number projected on[@Flo97a; @She00a].
We address here only HFB states with pairing correlations among particles of the same isospin. In this case, the nuclear HFB state is the direct product of separate HFB states for protons and neutrons, respectively, which are separately projected afterwards on the number of the respective particle species.
Variational configuration mixing {#subsec:gcm}
--------------------------------
In a mean-field calculation, one often encounters a situation where the total binding energy (mean field or projected) varies only slowly with a collective degree of freedom. In such a case, it can be expected that the nuclear wave function is widely spread around the mean-field minimum, which is beyond the scope of (projected) mean-field theory. These fluctuations around a single state can be incorporated within the generator coordinate method (GCM). The mixed projected many-body state is set up as a coherent superposition of projected mean-field states $| q \rangle$ which differ in one or several collective coordinates $q$ $$| k \rangle
= \sum_{q} f_{k} (q) \, | q \rangle
.$$ The weight function $f_{k} (q)$ is determined from the stationarity of the GCM ground state, which leads to the Hill-Wheeler-Griffin equation[@Hil53a] $$\label{eq:HWG}
\sum_{q'}
\big[ \langle q | \hat{H} | q' \rangle
- E_k \, \langle q | q' \rangle \big] \; f_{k} (q')
= 0
,$$ that gives a correlated ground state, and, in addition, a spectrum of excited states from orthogonalization to the ground state. The weight functions $f_{k} (q)$ are not orthonormal. A set of orthonormal collective wave functions in the basis of the states $| q \rangle$ is obtained from a transformation involving the square root of the norm kernel[@Bon90a]. It has to be noted that projection is in fact a special case of the GCM, where degenerate states are mixed. The generators of the group involved define the collective path, and the weight functions are determined by the restored symmetry.
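The following toy sketch (our own illustration; the kernels are invented numbers, not the output of a projected mean-field calculation) shows how the Hill-Wheeler-Griffin equation is solved in practice through the square root of the norm kernel, as described above.

```python
import numpy as np

# Minimal sketch of solving  sum_q' [H(q,q') - E_k N(q,q')] f_k(q') = 0
# via the square root of the norm kernel.  Toy kernels only.
nq = 5
q = np.linspace(-1.0, 1.0, nq)
# toy norm kernel: Gaussian overlaps between neighbouring configurations
N = np.exp(-2.0 * (q[:, None] - q[None, :]) ** 2)
# toy Hamiltonian kernel: overlap times a smooth "collective potential"
H = N * (-50.0 + 5.0 * (q[:, None] ** 2 + q[None, :] ** 2))

# 1) diagonalize the norm kernel and discard (near-)zero eigenvalues
n, U = np.linalg.eigh(N)
keep = n > 1e-8 * n.max()
n, U = n[keep], U[:, keep]

# 2) collective Hamiltonian in the orthonormalized ("natural") basis
T = U / np.sqrt(n)                 # columns scaled by n^{-1/2}
Hc = T.T @ H @ T

# 3) diagonalize; E[0] is the correlated ground state, the rest excited states
E, G = np.linalg.eigh(Hc)
F = T @ G                          # GCM weight functions f_k(q)
print("GCM energies:", E)
print("ground-state weights:", F[:, 0])
```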
For a state which would result from the mixing of different unprojected mean-field states, the mean particle number will in general no longer be equal to the particle number of the original mean-field states. Projection on particle number, as done here, eliminates this problem; otherwise a constraint on the particle number has to be added to the Hill-Wheeler-Griffin equation (\[eq:HWG\]), see, for example, Ref.[@Bon90a].
The technical challenges of a configuration-mixing calculation come from the non-diagonal kernels of different mean-field states, which are evaluated with a generalized Wick theorem[@Oni66a; @Bal69a]. We represent the single-particle states on a 3-dimensional mesh in coordinate space using a Lagrange mesh technique[@Bay86a]. As a consequence, the two sets of single-particle states representing the intrinsic HFB states entering the kernels are usually not equivalent, which has to be carefully taken into account[@Bon90a; @Val00a].
We usually combine the techniques presented above with a projection on angular momentum[@Val00a] that will not be discussed here. Applications are published in Refs.[@Ben03a; @Ben03b; @Dug03a; @Ben04b; @Ben04c; @Ben05a; @Ben06a]. Similar methods, but without particle-number projection, have been set up using the Gogny force[@Madrid] and relativistic Lagrangians[@Nik06a].
For the sake of simple notation, we have introduced the GCM using a many-body Hamiltonian $\hat{H}$. All the methods just mentioned, however, have in common that they are not based on a Hamiltonian, but an energy functional to calculate the binding energy. The necessary generalization will be sketched in section \[sect:divergence\] below.
Dynamical pairing correlations {#sect:dyn}
==============================
There is a fundamental problem with projection after variation as performed here: it cannot be expected that the projection of the mean-field ground state (after variation) gives the minimum of the energy hyper-surface that is obtained by projecting all possible mean-field states (which would be found by projection before variation). When projecting deformed mean-field states on angular momentum, the intrinsic deformation of the mean-field ground state will indeed usually differ from the intrinsic deformation of the state giving the minimum of the projected energy curve, with the exception of well-deformed heavy nuclei in the rare-earth, actinide, and transactinide regions.
In the context of angular-momentum projection, we overcome this problem to some extent by a minimization after projection (MAP), where we generate an energy curve of projected mean-field states with different intrinsic quadrupole deformation, whose minimum provides a first-order approximation to projection before variation. When performing a GCM calculation of projected states, the spacing of points along this deformation energy curve does not need to be very dense; it does not even have to contain the actual minimum, as the variational projected GCM calculation of two (non-orthogonal) projected states around a minimum is able to (implicitly) construct this minimum, the projected state representing the minimum having a non-zero overlap with the states actually used. As the projected GCM ground state also describes the energy gain from fluctuations around this minimum, its energy will even be below that of the minimum of the projected energy curve. It has to be stressed that the correlations from fluctuations around the projected state are outside the scope of projection before variation. As a consequence, projection before variation does not *per se* give a better description of correlations beyond the mean field than a GCM of states projected after variation.
While this is a standard procedure in the context of angular-momentum projection, where a MAP is performed with respect to quadrupole deformation, it has been rarely addressed in connection with particle-number projection. The most obvious degree of freedom to search for a minimum of a particle-number projected energy curve is the amount of pairing correlations contained in the (unprojected) intrinsic state. Using this degree of freedom in a projected GCM calculation is equivalent to including the ground-state correlations from pairing vibrations (see Refs.[@Rin80a] for an overview and Refs.[@Rip69a; @Fae73a] for early GCM calculations using schematic models). Exploratory studies along these lines within the context of realistic mean-field models are presented in Refs.[@Mey91a; @Hee01a]. They do, however, not involve projection on angular momentum.
We will present here a similar study of the role of dynamical pairing correlations in [[$^{120}$]{}[Sn]{}]{}. First of all, we have to note that finding a suitable constraint on pairing correlations is not an easy task. The obvious choice in schematic models is the pairing gap obtained from a pairing force with constant matrix elements. This coordinate was, in fact, also used in Refs.[@Mey91a; @Hee01a]. It has, however, some serious drawbacks as this pairing force leads to unrealistic asymptotics of the HFB state at large distances from the nucleus[@Dob84a]. Constraints on other observables that measure pairing correlations pose similar problems even when used with a realistic pairing interaction, or they introduce ambiguities on how to put them into the variational equations[@pairconstraint]. The situation is similar to a constraint on a multipole moment of the density distribution of order $\ell$ used to generate an energy surface as a function of deformation. Such a constraint has always to be damped at large distances from the nucleus, as it introduces along some direction a contribution to the constrained single-particle potential that diverges as $-r^\ell$.
![\[fig:sn120\] HFB and HFB-LN mean-field energy curves, the latter with and without the LN correction term in the energy, and particle-number projected energy curves as a function of the “generating pairing strength” $V_c$ for a spherical mean-field state in [[$^{120}$]{}[Sn]{}]{}. The dots with horizontal bars represent the energy of the projected GCM ground state plotted at its average “generating pairing strength”. The energy scale on the left is normalized with respect to the HF ground state, while the scale to the right gives the total binding energy. ](bender_kazimierz_fig1.eps)
A not completely satisfactory, but well-working constraint on the amount of pairing correlations is provided by the strength of the pairing force. A calculation is then done in three steps: First, the HFB or HFB-LN equations are solved using a “generating pairing strength” $V_c$. Second, the energy of each HFB state is re-calculated, without iterating the HFB equations, using the realistic pairing strength $V_0 = -1000$ MeV fm$^{3}$, either with or without projection. In a third step, the projected HFB states can be mixed within the GCM. Only the pairing strength of the neutrons is changed. For protons, we always use $V_0 = -1000$ MeV fm$^{3}$ and the same method as for the description of the neutrons; in the case of pure HFB this means that the proton pairing breaks down. An example of such a calculation is shown in Fig. \[fig:sn120\]. The most remarkable findings are:
1\) The HFB equations without LN corrections break down to HF at values of the pairing strength around $-500$ MeV fm$^{3}$. For those states, our formalism cannot introduce pairing correlations beyond the mean field. Just above the transition from an unpaired to a paired system, the energy gain from projection rises very rapidly up to about the $V_c$ corresponding to the minimum of the energy curve (which differs by about 10 $\%$ from $V_0$), and decreases more slowly afterwards. This is analogous to what is found in the angular-momentum projection of quadrupole-deformed states around a spherical configuration.
2\) There are obvious differences between HFB and HFB-LN energy curves, both on the mean-field level and projected, for weak generating pairing strengths, $|V_c| < 1500$ MeV fm$^{3}$, which is indeed the regime of the physical pairing strength. In the strong pairing regime at larger values of $|V_c|$, the energy curves obtained from projection of the HFB and HFB-LN states are nearly identical. The remaining difference might be attributed to the presence of proton pairing correlations in the HFB-LN case, while they are absent for HFB.
3\) The LN correction overestimates the energy gain from projection at small generating pairing strength, and underestimates it in the strong pairing regime.
4\) The additional energy gain from the GCM of projected states is quite small, around 100 keV when starting with HFB-LN states, and about 200 keV for pure HFB states. The unphysical breakdown of HFB in the weak pairing regime leads to a smaller binding energy at the minimum of the projected energy curve, and gives a potential energy surface that is stiffer than the HFB-LN one for small $V_c$. This leads to a GCM ground state from projected HFB that is less bound than the GCM ground state from projected HFB-LN. This also pushes the GCM ground state wave function from projected HFB to larger values of $V_c$ than the one from projected HFB-LN, as seen from the larger average generating pairing strength $\bar{V}_c = \sum_{V_c} g_0^2 (V_c) V_c$ in the projected HFB case.
This calculation, of course, only scratches the surface of the importance of dynamical pairing correlations. The question of better constraints on pairing correlations, systematics, excited states, and the coupling with deformation modes will be addressed elsewhere.
The divergence in particle-number projection {#sect:divergence}
============================================
General features
----------------
![\[fig:o18:sly4\] Lower panel, left: particle-number projected binding energy of [[$^{18}$]{}[O]{}]{} calculated with the Skyrme interaction Sly4 and a density-dependent pairing interaction as a function of the mass quadrupole deformation for 9 and 99 discretization points for the integral over the gauge angle $\varphi$ in Eq. (\[eq:fomenko\]). Upper panel, left: canonical single-particle energies (full lines: parity $+1$, dotted lines: parity $-1$) and Fermi energy (dashed line) for neutrons. Right panel: dispersion of the neutron number $\langle N_0 | \hat{N}^2 | N_0 \rangle - \langle N_0 | \hat{N} | N_0 \rangle^2$ for 1 (no projection), 3 and 5 discretization points. ](bender_kazimierz_fig2.eps)
It has been noticed for a long time that the particle-number projected energy might exhibit divergences[@Don98a; @Ang01b] when a single-particle level crosses the Fermi energy and has the occupation $v^2_\mu = 1/2$. More recently, it was pointed out by Stoitsov [*et al.*]{}[@Dob05a] that, in addition to the divergence, there appears a finite step in the projected energy when passing this situation as a function of a constraint. An example is given in Fig. \[fig:o18:sly4\], where two clear divergences appear when a fine discretization of the integral over gauge angles, Eq. (\[eq:fomenko\]), is used. The steps are also hinted at, with the binding energy changing on both sides of the divergence from a lower curve around sphericity to a higher-lying curve.
First of all, it has to be stressed that the binding energy is the only observable that shows an anomaly when a single-particle level crosses the Fermi energy. Neither the overlap, nor any observable calculated from an $n$-body operator, shows an unusual behaviour. This is exemplified in Fig. \[fig:o18:sly4\] by the dispersion of the particle number, a two-body operator that also provides a measure for the numerical quality of the projection. For a small system like [[$^{18}$]{}[O]{}]{}, the integrals over gauge angles are already numerically converged for a small number of discretization points. The only exception is the binding energy: an extremely large number of discretization points is needed to see the divergence develop in [[$^{18}$]{}[O]{}]{} (and even more for heavier nuclei), which is one of the reasons why the divergence often remains undetected. The other reason is that it is very unlikely that one of the discrete points used to calculate a potential energy surface hits the narrow region where the divergence appears. This is different when performing projection before variation, where the variation will detect the divergences more easily[@Dob05a].
The appearance of a divergence for the binding energy, but for no other observable, is related to the particular definition of the binding energy in our method: the energy is calculated from an energy density functional, while everything else is, for the moment at least, calculated as the expectation value of an operator.
To understand the origin of the divergence, we have to look into the definition and evaluation of the energy functional. For the sake of simple notation and a transparent argument, we will restrict ourselves here to a toy model with one kind of particle only, and a two-body interaction. The additional complications introduced by density dependencies as needed in realistic energy functionals will be discussed elsewhere[@Ben07a]. The further generalization to a system composed of protons and neutrons is then straightforward. All expressions given below are evaluated in the canonical basis, as this basis will turn out to be the only basis in which the origin of the divergence can be clearly identified and analyzed.
The Hamiltonian case
--------------------
As a reference, for which everything is properly defined and no problem occurs, we will use the energy obtained from a two-body force. At the HFB level, one has $$\begin{aligned}
\label{E:mf:hatH}
\mathcal{E}^{\hat{H}} [\rho, \kappa^{01}, \kappa^{10} ]
& = & \mathcal{E}_{{{\rm {kin}}}} [\rho ]
+ {{\scriptstyle \frac{{1}}{{4}}}} \sum_{ijmn \gtrless 0} \bar{v}_{ijmn} \;
\langle {{\rm {HFB}}} | {\hat{a}^\dagger}_{i} {\hat{a}^\dagger}_{j} {\hat{a}}_{n} {\hat{a}}_{m}
| {{\rm {HFB}}} \rangle
\\
& = & \mathcal{E}_{{{\rm {kin}}}} [\rho ]
+ {{\scriptstyle \frac{{1}}{{2}}}} \sum_{\mu,\nu \gtrless 0}
\bar{v}_{\mu\nu\mu\nu} \,
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}}_{\mu} \rangle
\langle {\hat{a}^\dagger}_{\nu} {\hat{a}}_{\nu} \rangle
+ {{\scriptstyle \frac{{1}}{{4}}}} \sum_{\mu,\nu \gtrless 0} \,
\bar{v}_{\mu\bar{\mu}\nu\bar{\nu}} \,
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}^\dagger}_{\bar\mu} \rangle
\langle {\hat{a}}_{\bar\nu} {\hat{a}}_{\nu} \rangle
{\nonumber}.\end{aligned}$$ The contractions are the usual density matrix and pair tensor in the canonical basis $$\begin{aligned}
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}}_{\nu} \rangle
& = & \rho_{\mu \nu}
= \frac{\langle {{\rm {HFB}}} | a^{\dagger}_{\nu} a_{\mu}
| {{\rm {HFB}}} \rangle}
{\langle {{\rm {HFB}}} | {{\rm {HFB}}} \rangle}
= \rho_{\mu \mu} \, \delta_{\mu \nu}
= v^{2}_{\mu} \, \delta_{\mu \nu}
\\
\langle {\hat{a}}_{\nu} {\hat{a}}_{\mu} \rangle
& = & \kappa^{01}_{\mu \nu}
= \frac{\langle {{\rm {HFB}}} | a_{\nu} a_{\mu}
| {{\rm {HFB}}} \rangle}
{\langle {{\rm {HFB}}} | {{\rm {HFB}}} \rangle}
= \kappa^{01}_{\mu \bar{\mu}} \, \delta_{\nu\bar{\mu}}
= u_{\mu} v_{\bar{\mu}} \, \delta_{\nu\bar{\mu}}
\\
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}^\dagger}_{\nu} \rangle
& = & \kappa^{10}_{\mu \nu}
= \frac{\langle {{\rm {HFB}}} | a^{\dagger}_{\mu} a^{\dagger}_{\nu}
| {{\rm {HFB}}} \rangle}
{\langle {{\rm {HFB}}} | {{\rm {HFB}}} \rangle}
= \kappa^{10}_{\mu \bar{\mu}} \, \delta_{\nu\bar{\mu}}
\equiv u_{\mu} v_{\bar{\mu}} \, \delta_{\nu\bar{\mu}}
,\end{aligned}$$ Unlike in papers on standard HFB theory we distinguish already here between two different pair tensors, as they generalize differently for particle-number projected HFB states. Unless necessary, we will not specify the kinetic energy in what follows. It is given by a one-body operator, always evaluated as such from whatever the left and right many-body states might be, and free of any divergence problems. The generalization of $\mathcal{E}^{\hat{H}}$ to particle-number projected states is straightforward $$\begin{aligned}
\label{E:proj:hatH}
\mathcal{E}^{\hat{H}}
& = & \frac{\langle {{\rm {HFB}}} | \hat{H} \hat{P}^N | {{\rm {HFB}}} \rangle }
{\langle {{\rm {HFB}}} | \hat{P}^N | {{\rm {HFB}}} \rangle }
= \int_0^{2\pi} \! \! \frac{d \varphi}{2\pi \mathcal{D}_{N_0}} \,
e^{-i\varphi N_0} \,
\langle {{\rm {HFB}}} (0) | \hat{H} | {{\rm {HFB}}} (\varphi) \rangle \end{aligned}$$ with $\mathcal{D}_{N_0} = \langle {{\rm {HFB}}} | \hat{P}^N | {{\rm {HFB}}} \rangle$. The relevant piece is the calculation of the Hamiltonian kernel $\langle {{\rm {HFB}}} | \hat{H} | {{\rm {HFB}}} (\varphi) \rangle$ Although for particle-number projection the Hamiltonian kernel can – in principle – be evaluated using the standard Wick theorem, it is much more convenient to apply a generalized Wick theorem[@Oni66a; @Bal69a] $$\begin{aligned}
\label{eq:Eproj:H}
\langle {{\rm {HFB}}} | \hat{H} | {{\rm {HFB}}} (\varphi) \rangle
& = & \bigg[
\sum_{\mu \gtrless 0} t_{\mu \mu}
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}}_{\mu} \rangle_\varphi
+ {{\scriptstyle \frac{{1}}{{2}}}} \sum_{\mu,\nu \gtrless 0}
\bar{v}_{\mu\nu\mu\nu} \,
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}}_{\mu} \rangle_\varphi
\langle {\hat{a}^\dagger}_{\nu} {\hat{a}}_{\nu} \rangle_\varphi
{\nonumber}\\
& & \qquad
+ {{\scriptstyle \frac{{1}}{{4}}}} \sum_{\mu,\nu \gtrless 0} \,
\bar{v}_{\mu\bar{\mu}\nu\bar{\nu}} \,
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}^\dagger}_{\bar\mu} \rangle_\varphi
\langle {\hat{a}^\dagger}_{\bar\nu} {\hat{a}^\dagger}_{\nu} \rangle_\varphi
\bigg] \;
\mathcal{I} (\varphi)
. \end{aligned}$$ The basic contractions are given by $$\begin{aligned}
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}}_{\nu} \rangle_\varphi
& = & \rho_{\mu \nu} (\varphi)
= \rho_{\mu \mu} (\varphi) \, \delta_{\nu \mu}
= \frac{v_{\mu}^2 \, e^{2 i \varphi}}
{u_\mu^2 + v_{\bar{\mu}}^2 \, e^{2 i\varphi} } \, \delta_{\nu \mu}
\\
\langle {\hat{a}}_{\nu} {\hat{a}}_{\mu} \rangle_\varphi
& = & \kappa^{01}_{\mu \nu} (\varphi)
= \kappa^{01}_{\mu \nu} (\varphi) \, \delta_{\nu \bar{\mu}}
= \frac{u_\mu v_{\bar{\mu}}}
{u_\mu^2 + v_{\bar{\mu}}^2 \, e^{2 i \varphi} } \,
\delta_{\nu \bar{\mu}}
\\
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}^\dagger}_{\nu} \rangle_\varphi
& = & \kappa^{10}_{\mu \nu} (\varphi)
= \kappa^{10}_{\mu \nu} (\varphi) \, \delta_{\nu \bar{\mu}}
= \frac{u_\mu v_{\bar{\mu}} e^{2 i \varphi}}
{u_\mu^2 + v_{\bar{\mu}}^2 \, e^{2 i \varphi} } \,
\delta_{\nu \bar{\mu}} \end{aligned}$$ and the norm kernel $\mathcal{I} (\varphi)$ by $$\begin{aligned}
\label{eq:I:can}
\mathcal{I} (\varphi)
& = & \langle {{\rm {HFB}}} | {{\rm {HFB}}} (\varphi) \rangle
= \prod_{\mu>0} (u_\mu^2 + v_\mu^2 \, e^{2 i \varphi} )
.\end{aligned}$$
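As a numerical sanity check of the gauge-rotated contractions and norm kernel just given (with toy occupation numbers, not a real HFB solution), one can verify that projecting the one-body operator $\hat{N}$ with the discretized gauge-angle integral returns exactly the particle number projected on:

```python
import numpy as np

# Toy check of the transition contractions rho(phi) and the norm kernel I(phi):
# the particle number of the projected state must come out exactly equal to N0.
rng = np.random.default_rng(3)
v2 = rng.uniform(0.05, 0.95, size=10)     # canonical occupations of 10 pairs
u2 = 1.0 - v2
N0, L = 10, 11                            # project on 10 particles; 11 gauge points
phis = np.pi * np.arange(1, L + 1) / L

def kernels(phi):
    I = np.prod(u2 + v2 * np.exp(2j * phi))                     # norm kernel
    rho = v2 * np.exp(2j * phi) / (u2 + v2 * np.exp(2j * phi))  # rho_mumu(phi)
    return I, rho

D = np.mean([np.exp(-1j * p * N0) * kernels(p)[0] for p in phis])
num = np.mean([np.exp(-1j * p * N0) * 2.0 * np.sum(kernels(p)[1]) * kernels(p)[0]
               for p in phis])
print((num / D).real)                     # -> 10.0 up to round-off
```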
The EDF case
------------
In the case of a general energy density functional (EDF) in the spirit of a Kohn-Sham approach with pairing, the energy is given by $$\begin{aligned}
\label{E:mf:dft:1}
\mathcal{E}^{{{\rm {EDF}}}} [\rho, \kappa, \kappa^{\ast}]
& = & \mathcal{E}_{{{\rm {kin}}}} [\rho ]
+ \mathcal{E}_{\rho\rho} [\rho ]
+ \mathcal{E}_{\kappa\kappa} [\kappa^{10}, \kappa^{01} ]
,\end{aligned}$$ where $\mathcal{E}_{{{\rm {kin}}}}$ is the kinetic energy, $\mathcal{E}_{\rho\rho}$ the energy of the particle-hole interaction, and $\mathcal{E}_{\kappa\kappa}$ the energy from the particle-particle (pairing) interaction. The only restriction that we impose on the energy functional for the moment is that it is bilinear in either the density matrix or the pair tensor, which is done to keep the analogy with the Hamiltonian case. Then, the energy functional can always be described in terms of the kinetic energy and a double sum over *not antisymmetrized* two-body matrix elements $w^{\rho\rho}_{\mu\nu\mu\nu}$ and $w^{\kappa\kappa}_{\mu\bar{\mu}\nu\bar{\nu}}$, which differ between the particle-hole and particle-particle channels $$\begin{aligned}
\label{E:mf:dft:2}
\mathcal{E}^{{{\rm {EDF}}}}
& = & \mathcal{E}_{{{\rm {kin}}}} [\rho ]
+ \sum_{\mu\nu \gtrless 0} \, w^{\rho\rho}_{\mu\nu\mu\nu} \,
\rho_{\mu \mu} \, \rho_{\nu \nu}
+ \sum_{\mu\nu \gtrless 0} \,
w^{\kappa\kappa}_{\mu\bar{\mu}\nu\bar{\nu}} \,
\kappa^{10}_{\mu \bar\mu} \kappa^{01}_{\nu \bar\nu}
{\nonumber}\\
& = & \mathcal{E}_{{{\rm {kin}}}} [\rho ]
+ \sum_{\mu\nu \gtrless 0} \, w^{\rho\rho}_{\mu\nu\mu\nu} \,
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}}_{\mu} \rangle
\langle {\hat{a}^\dagger}_{\nu} {\hat{a}}_{\nu} \rangle
+ \sum_{\mu\nu \gtrless 0} \,
w^{\kappa\kappa}_{\mu\bar{\mu}\nu\bar{\nu}} \,
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}^\dagger}_{\bar\mu} \rangle
\langle {\hat{a}}_{\bar\nu} {\hat{a}}_{\nu} \rangle
.\end{aligned}$$ This is the Kohn-Sham approach to an EDF[@Koh64a], formally generalized to systems with pairing by Oliveira [*et al.*]{}[@Oli88a], where the actual density (matrix) is calculated from an auxiliary independent quasi-particle state $| {{\rm {HFB}}} \rangle$. Equation (\[E:mf:dft:2\]) might appear to be an unusual representation of an energy functional, but, for example, a term bilinear in the local density, where $f(|{\boldmath{r}}-{\boldmath{r}}'|)$ is an arbitrary function and $C$ the coupling constant, translates as $$\begin{aligned}
\lefteqn{
C {\int \! \! \! \! \int}\! d^3r \; d^3 r' \; \rho ({\boldmath{r}}) \, f(|{\boldmath{r}}-{\boldmath{r}}'|) \,
\rho ({\boldmath{r}}')
} {\nonumber}\\
& = & C \sum_{\mu, \nu \gtrless 0}
{\int \! \! \! \! \int}\! d^3r \; d^3 r' \;
\rho_{\mu \mu} \, \psi^\dagger_\mu ({\boldmath{r}}) \, \psi_\mu ({\boldmath{r}}) \,
f(|{\boldmath{r}}-{\boldmath{r}}'|) \,
\rho_{\nu \nu} \, \psi^\dagger_\nu ({\boldmath{r}}') \, \psi_\nu ({\boldmath{r}}')
{\nonumber}\\
& = & C \sum_{\mu, \nu \gtrless 0}
\rho_{\mu \mu} \, w^{\rho\rho}_{\mu\nu\mu\nu} \, \rho_{\nu \nu}
.\end{aligned}$$ Similar expressions are obtained for any other bilinear contribution to an energy functional that does not have a density-dependent coupling constant.
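As an illustration of this translation (a toy 1D construction of our own, with Hermite-Gaussian grid orbitals standing in for the canonical basis and an arbitrary finite-range form factor), the matrix elements $w^{\rho\rho}_{\mu\nu\mu\nu}$ of a bilinear term can be tabulated directly; note that the diagonal entries, the self-interaction matrix elements discussed in the next section, do not vanish here.

```python
import numpy as np

# Toy 1D illustration: w^{rho rho}_{mu nu mu nu}
#   = C * integral |psi_mu(r)|^2 f(|r-r'|) |psi_nu(r')|^2 dr dr'
C = -100.0                                  # coupling constant (arbitrary)
r = np.linspace(-10.0, 10.0, 400)
dr = r[1] - r[0]

def orbital(n):
    """Crude normalized grid orbitals: Hermite-Gaussian shapes (stand-ins)."""
    psi = np.polynomial.hermite.hermval(r, [0] * n + [1]) * np.exp(-r**2 / 2)
    return psi / np.sqrt(np.sum(psi**2) * dr)

f = np.exp(-np.abs(r[:, None] - r[None, :]))          # finite-range form factor
dens = [np.abs(orbital(n))**2 for n in range(4)]

w = np.array([[C * dens[m] @ f @ dens[n] * dr * dr for n in range(4)]
              for m in range(4)])
print(np.round(w, 3))
# The diagonal entries w[m, m] are the "self-interaction" matrix elements
# discussed below: they are not zero for a generic functional.
```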
For the generalization of the energy functional to the case of particle-number projected states, the same generalized Wick theorem as above is used to define $$\label{eq:EDF:proj}
\mathcal{E}^{{{\rm {EDF}}}}
= \int_0^{2\pi} \! \! \frac{d \varphi}{2\pi \mathcal{D}_{N_0}} \,
e^{-i\varphi N_0} \,
\mathcal{H}^{{{\rm {EDF}}}} (\varphi)$$ where the Hamiltonian kernels are now given by $$\begin{aligned}
\label{eq:E:proj:dft:2}
\mathcal{H}^{{{\rm {EDF}}}} (\varphi) \!
& = & \mathcal{H}_{{{\rm {kin}}}} (\varphi)
+ \! \! \sum_{\mu,\nu \gtrless 0} \! \!
\big[
w^{\rho\rho}_{\mu\nu\mu\nu}
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}}_{\mu} \rangle_\varphi
\langle {\hat{a}^\dagger}_{\nu} {\hat{a}}_{\nu} \rangle_\varphi
+ w^{\kappa\kappa}_{\mu\bar{\mu}\nu\bar{\nu}}
\langle {\hat{a}^\dagger}_{\mu} {\hat{a}^\dagger}_{\bar\mu} \rangle_\varphi
\langle {\hat{a}^\dagger}_{\bar\nu} {\hat{a}^\dagger}_{\nu} \rangle_\varphi
\big]
\mathcal{I} (\varphi)
\end{aligned}$$ The identification of the origin of the divergence proceeds as follows:
1\) As analyzed in detail by Anguiano [*et al.*]{}[@Ang01b], the divergence appears for those terms in the energy that originate from the interaction of a particle with its conjugated partner $$\begin{aligned}
\label{eq:dft:div}
\mathcal{H}^{{{\rm {EDF}}}} (\varphi)
& = & \bigg[
\ldots
+ \big(
w^{\rho\rho}_{\mu\mu\mu\mu}
+ w^{\rho\rho}_{\bar\mu \bar\mu \bar\mu \bar\mu}
+ w^{\rho\rho}_{\bar\mu \mu \bar\mu \mu}
+ w^{\rho\rho}_{\mu \bar\mu \mu \bar\mu}
\big)
\frac{v^2_\mu e^{2i\varphi}}
{u^2_\mu + v^2_\mu e^{2i\varphi}}
\frac{v^2_\mu e^{2i\varphi}}
{u^2_\mu + v^2_\mu e^{2i\varphi}}
{\nonumber}\\
& & \qquad
+ 4 w^{\kappa\kappa}_{\mu\bar{\mu}\mu\bar{\mu}}
\frac{u_\mu v_\mu}
{u^2_\mu + v^2_\mu e^{2i\varphi}}
\frac{v_\mu v_\mu e^{2i\varphi}}
{u^2_\mu + v^2_\mu e^{2i\varphi}}
+ \ldots \;
\bigg] \;
\mathcal{I} (\varphi)
.\end{aligned}$$ For $v^2_\mu = u^2_\mu = 1/2$ and $\varphi = \pi/2$, the denominator in the transition densities becomes zero. One of the two denominators is canceled by the same factor contained in $\mathcal{I} (\varphi)$, Eq. (\[eq:I:can\]), while the other one causes the divergence[@Ang01b].
2\) It can be shown from very general arguments[@Ben07a] that the Hamiltonian kernel should have a certain dependence on $\varphi$. The divergent contributions to (\[eq:dft:div\]) do not follow this rule. This shows that such terms are spurious even when no divergence appears.
3\) In the case of a two-body Hamiltonian the divergence disappears when one identifies $2 w^{\rho\rho}_{\mu \bar\mu \mu \bar\mu} = 4 w^{\kappa\kappa}_{\mu \bar\mu \mu \bar\mu} = \bar{v}_{\mu \bar\mu \mu \bar\mu}$ and notes that the antisymmetrized self-interaction matrix elements $w^{\rho\rho}_{\mu\mu\mu\mu}$ and $w^{\rho\rho}_{\bar\mu\bar\mu\bar\mu\bar\mu}$ vanish. This can be used to combine the $u_\mu$ and $v_\mu e^{2i\varphi}$ factors such that they cancel the dangerous denominator[@Ang01b] (a numerical illustration is given after this list).
4\) The matrix elements of the kind $w^{\rho\rho}_{\mu\mu\mu\mu}$ might indeed have non-zero values in an EDF, which represent a spurious interaction of a particle with itself. They violate the exchange symmetry in Fermionic systems (the Pauli principle), and lead to a spurious contribution to the total binding energy already on the mean-field level. The appearance of this so-called “self-interaction” is a well-known annoyance of EDFs for electronic systems[@Per81a], but has been ignored in nuclear physics so far.
5\) In the case of an EDF with pairing, there is an additional spurious “self-pairing” interaction that originates from the scattering of a pair onto itself. The interaction energy from two isolated Fermions occupying pair-conjugated orbitals divided by the occupation of the pair should have the same value as if pairing correlations were not present[@Ben07a]. Again, this is violated by a general EDF. The actual expression for the spurious self-pairing energy combines matrix elements and the occupation factors that weight them and will be given elsewhere[@Ben07a].
6\) The divergent contributions to Eq. (\[eq:dft:div\]) come from terms that represent self-interaction and self-pairing. In this way, the projection adds a second level of spuriosity to these terms. Not all of the contribution from the self-interaction and self-pairing terms is divergent, though.
7\) The self-interaction and self-pairing contribute to the total energy not only because the relevant matrix elements have spurious non-zero values, but also because they are multiplied with unphysical weights. When evaluating the Hamiltonian kernel (\[eq:Eproj:H\]) by commuting the creation and annihilation operators of the Hamiltonian with those setting up the HFB states until they hit the vacuum, it becomes clear that each pair of conjugated particles $(\mu, \bar\mu)$ can pick up at most one $e^{2i\varphi}$ factor stemming from the rotated state to the right, and that the possible combinations of $u_\mu$ and $v_\mu$ for a given $\mu$ will always be bilinear. Only when the Wick theorem is applied does one obtain terms which are of 4th order in $u_\mu$ and $v_\mu$ and quadratic in $e^{2i\varphi}$. The reason is that multiple contributions from the same particle or pair of particles are not excluded from the sum, as the Wick theorem implicitly assumes that the matrix elements these terms multiply are zero or sum up to zero, so that they will not contribute anyway. In an EDF these matrix elements are not zero anymore, but the Wick theorem is still used.
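The following numerical illustration (toy numbers, our own construction) makes points 1) and 3) explicit: close to $v^2_\mu = 1/2$ and $\varphi = \pi/2$ the EDF-like products built from $\rho(\varphi)$ and $\kappa(\varphi)$ diverge individually, while the combination produced by a genuine antisymmetrized Hamiltonian stays finite because the dangerous denominator cancels.

```python
import numpy as np

# Divergence of the spurious EDF terms vs. finiteness of the Hamiltonian
# combination.  One power of `den` is always cancelled by the norm kernel
# I(phi); the printed quantities are what remains after that cancellation.
phi = np.pi / 2.0
for eps in (1e-2, 1e-4, 1e-6):
    v2 = 0.5 + eps
    u2 = 1.0 - v2
    den = u2 + v2 * np.exp(2j * phi)                  # -> 0 as eps -> 0
    rho = v2 * np.exp(2j * phi) / den
    k10 = np.sqrt(u2 * v2) * np.exp(2j * phi) / den
    k01 = np.sqrt(u2 * v2) / den
    print(eps,
          abs(rho**2 * den),                # EDF "self-interaction" piece: diverges
          abs(k10 * k01 * den),             # EDF "self-pairing" piece: diverges
          abs((rho**2 + k10 * k01) * den))  # Hamiltonian combination: stays finite
```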
One rigorous way to remove the spurious terms from an energy functional would be to remove all possible self-interactions in the total energy by forbidding the same summation index to appear more than once in a contribution to Eq. (\[E:mf:dft:2\]). This was, in fact, already used by Hartree in his seminal paper on the Hartree method[@Har28a]. Alternatively, one sums up explicitly all the contributions in which the same summation index appears two or more times and subtracts this as a self-energy correction from the total energy[@Per81a]. This leads, however, to enormous complications in the variational equations, particularly when extended to density-dependent interactions.
The origin of the finite step
-----------------------------
As pointed out by Stoitsov [*et al.*]{}[@Dob05a], there is also a finite step that appears when passing the divergence as a function of a collective coordinate. The analysis of its origin will be given elsewhere[@Ben07a]. Let us just outline the main arguments: With the substitution $z = e^{i\varphi}$, the integral over the real gauge angle $\varphi$ in Eq. (\[eq:proj:number\]) can be transformed into an integral in the complex plane, which can be analyzed with the tools of function theory[@Dob05a; @Ben07a]. In the Hamiltonian case, the integral has a pole at $z = 0$, whose order is given by the number of particles below the Fermi energy. The residue of this pole is proportional to the projected energy. In a projected EDF framework, there are two changes to this scenario: First, there is an additional contribution to the pole at $z = 0$, this time from all particles, and second, there appear additional poles at $z_\mu^\pm = \pm i |u_\mu|/|v_\mu|$ along the imaginary axis, which also contribute for all particles below the Fermi energy. Both originate from the same terms as the divergence. The two additional contributions to the projected energy are huge (of the order of several hundred MeV for [[$^{18}$]{}[O]{}]{}), but of opposite sign and nearly, but not exactly, canceling each other. The step appears when one of the poles at $\pm i |u_\mu|/|v_\mu|$ enters or leaves the integration contour as the single-particle energy of the corresponding particle crosses the Fermi energy.
Further discussion
==================
Density-dependent terms add further complications to the divergence, which we will not address here in detail. Let us just comment that, first, in a projected theory, the evaluation of a density-dependent term with non-integer power requires the evaluation of a (multivalued) root of a complex number, which leaves open the choice of the Riemann sheet. Second, when one expands the densities in the energy functional for density-dependent terms and combines the resulting terms in a way similar to Eq. (\[eq:E:proj:dft:2\]), one again finds terms which contain more than just one $v_{\mu} e^{2i\varphi}$ factor originating from $| {{\rm {HFB}}}(\varphi) \rangle$ (one of them with a usually non-integer power), which again introduces an unphysical dependence on $\varphi$ into the Hamiltonian kernel.
There are good and profound reasons to use different effective interactions in the particle-hole and particle-particle channels. The particle-hole and particle-particle channels of the effective interaction sum up different classes of diagrams[@Hen64a], which inevitably leads to different expressions for the effective interaction in the two channels. Short-range correlations are resummed into the functional, providing the two channels with different density-dependent forms. On the other hand, the long-range correlations from large-amplitude fluctuations around the mean-field states are either neglected in a pure mean-field approach, when they are assumed to be small, or otherwise described explicitly by projection and GCM-type configuration mixing. A correction that removes the divergent part and the finite step from the projected energy can be set up by identifying the terms with an unphysical dependence on the gauge angle through a comparison of the expressions obtained from the standard and the generalized Wick theorem[@Ben07a; @Lac07a].
While particle-number projection is the prominent example for the appearance of a divergence, similar divergences can be expected in any GCM calculation or projection on any other quantum number. As in these cases the mixed states have different canonical bases, the transition density matrix is usually not diagonal and the analysis is less obvious. The widely used collective Schr[ö]{}dinger equation and Bohr Hamiltonian that are set-up through a series of approximations (at some points including improvements) from exact configuration mixing cannot be expected to be free of problems stemming from the divergence either, although there they will be more difficult to identify.
Summary
=======
First, we have examined for the example of [[$^{120}$]{}[Sn]{}]{} the effect of a minimization after particle-number projection and of GCM ground state correlations from pairing vibrations. For [[$^{120}$]{}[Sn]{}]{}, the effect is visible, but not enormous, and clearly smaller than the current uncertainties from the parameterization of the effective pairing interaction. This might be, however, different for other nuclei.
Second, we have examined the divergence that appears in particle-number projected EDF. As its origin we identify the spurious self-pairing contribution contained in all common energy functionals for self-consistent mean-field models. On the mean-field level, the self-pairing as such adds a small spurious contribution to the total binding energy, similar to, but smaller than, the usual spurious centre-of-mass or rotational energies from broken symmetries (in fact, together with the self-energy it is the spurious energy from violating the Pauli principle in approximate EDF approaches). When generalizing the EDF to particle-number projection via the generalized Wick theorem, the self-pairing provides the Hamiltonian kernels with an unphysical dependence on the gauge angle, which ultimately leads to divergences and steps.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors thank K. Bennaceur, D. Lacroix, and P.-H. Heenen for many fruitful discussions on the theoretical and numerical treatment of dynamical pairing correlations, and J. Dobaczewski, W. Nazarewicz, M. V. Stoitsov, and L. Robledo for many inspiring and clarifying discussions on the appearance and interpretation of, and the threat posed by, the divergence and step in particle-number projected EDF.
[99]{}
M. Bender, P.-H. Heenen, and P.-G. Reinhard, Rev. Mod. Phys. **75**, 121 (2003).
M. Bender and P.-H. Heenen, Eur. Phys. J. A 25, s01 (2005) 519.
H. J. Mang, Phys. Rep. **18**, 327 (1975).
P. Ring and P. Schuck, *The Nuclear Many-Body Problem*, Springer Verlag, New York, Heidelberg, Berlin, 438 (1980).
J. Dobaczewski, H. Flocard, J. Treiner, Nucl. Phys. **422**, 103 (1984).
E. Chabanat, P. Bonche, P. Haensel, J. Meyer, and R. Schaeffer, Nucl. Phys. **A635**, 231 (1998); Nucl. Phys. **A643**, 441(E) (1998).
C. Rigollet, P. Bonche, H. Flocard, P.-H. Heenen, Phys. Rev. C **59**, 3120 (1999).
B. Gall, P. Bonche, J. Dobaczewski, H. Flocard, and P.-H. Heenen, Z. Phys. **A348**, 183 (1994).
H. Flocard and N. Onishi, Ann. Phys. **254** (1997) 275.
J. A. Sheikh and P. Ring, Nucl. Phys. **A665**, 71 (2000).
M. V. Stoitsov, J. Dobaczewski, R. Kirchner, W. Nazarewicz, J. Terasaki, preprint nucl-th/0610061.
P.-H. Heenen, P. Bonche, J. Dobaczewski, H. Flocard, Nucl. Phys. **A561**, 367 (1993).
V. N. Fomenko, J. Phys. (G.B) A **3**, 8 (1970).
D. L. Hill and J. A. Wheeler, Phys. Rev. **89**, 1106 (1953);\
J. J. Griffin and J. A. Wheeler, Phys. Rev. **108**, 311 (1957).
P. Bonche, J. Dobaczewski, H. Flocard, P.-H. Heenen and J. Meyer, Nucl. Phys. **A510**, 466 (1990).
N. Onishi and S. Yoshida, Nucl. Phys. **80**, 367 (1966).
R. Balian and E. Br[é]{}zin, Il Nuovo Cimento, B **64**, 37 (1969).
D. Baye and P.-H. Heenen, J. Phys. **A19**, 2041 (1986).
A. Valor, P.-H. Heenen, P. Bonche, Nucl. Phys. **A671**, 145 (2000).
M. Bender and P.-H. Heenen, Nucl. Phys. **A713**, 390 (2003).
M. Bender, H. Flocard, P.-H. Heenen, Phys. Rev. C **68**, 044321 (2003).
T. Duguet, M. Bender, P. Bonche, P.-H. Heenen, Phys. Lett. **B559**, 201 (2003).
M. Bender, P. Bonche, T. Duguet, P.-H. Heenen, Phys. Rev. C **69**, 064303 (2004).
M. Bender, P.-H. Heenen, P. Bonche, Phys. Rev. C **70**, 054304 (2004).
M. Bender, G. F. Bertsch, P.-H. Heenen, Phys. Rev. Lett. **94**, 102503 (2005);\
Phys. Rev. C **73**, 034322 (2006).
M. Bender, P. Bonche, P.-H. Heenen, Phys. Rev. C **74**, 024312 (2006).
R. R. Rodriguez-Guzman, J. L. Egido, and L. M. Robledo, Phys. Rev. C **62**, 054319 (2002); J. L. Egido, L.M. Robledo, Lecture Notes in Physics No. 641 (Springer, Berlin, 2004), p. 269.
T. Nik[š]{}i[ć]{}, D. Vretenar, and P. Ring, Phys. Rev. C **73**, 034308 (2006).
G. Ripka and R. Padjen, Nucl. Phys. **A132**, 489 (1969).
A. Faessler, F. Gr[ü]{}mmer, A. Plastino, F. Krmpotic, Nucl. Phys. **A217**, 420 (1973).
J. Meyer, P. Bonche, J. Dobaczewski, H. Flocard, P.-H. Heenen, Nucl. Phys. **A533**, 307 (1991).
P.-H. Heenen, A. Valor, M. Bender, P. Bonche, H. Flocard Eur. Phys. J. **A11**, 393 (2001) .
M. Bender, K. Bennaceur, and T. Duguet, unpublished.
F. D[ö]{}nau, Phys. Rev. C **58**, 872 (1998).
M. Anguiano, J. L. Egido, and L. M. Robledo, Nucl. Phys. **A696**, 467 (2001).
J. Dobaczewski, W. Nazarewicz, P.-G. Reinhard, and M. V. Stoitsov, in preparation.
M. Bender and T. Duguet, in preparation.
W. Kohn and L. J. Sham, Phys. Rev. **137**, A1697 (1964);\
W. Kohn, Rev. Mod. Phys. **71**, 1253 (1998).
L. N. Oliveira, E. K. U. Gross, and W. Kohn, Phys. Rev. Lett. **60**, 2430 (1988).
J. P. Perdew and A. Zunger, Phys. Rev. **B23**, 5048 (1981).
D. R. Hartree, Proc. Cambridge Philos. Soc. **24**, 89 (1928).
D. Lacroix, T. Duguet and M. Bender , in preparation.
E. M. Henley and L. Wilets, Phys. Rev. **133**, B 1118 (1964).
[^1]: present address: CEN Bordeaux Gradignan, France
---
abstract: 'In this paper, we study the synthesis of Gegenbauer processes using the wavelet packets transform. In order to simulate a $1$-factor Gegenbauer process, we introduce an original algorithm, inspired by the one proposed by Coifman and Wickerhauser [@CoiWic92], to adaptively search for the best-ortho-basis in the wavelet packet library where the covariance matrix of the transformed process is nearly diagonal. Our method clearly outperforms the one recently proposed by [@Whi01], is very fast, does not depend on the wavelet choice, and is not very sensitive to the length of the time series. From these first results we propose an algorithm to build bases to simulate $k$-factor Gegenbauer processes. Given its practical simplicity, we feel the general practitioner will be attracted to our simulator. Finally we evaluate the approximation due to the fact that we consider the wavelet packet coefficients as uncorrelated. An empirical study is carried out which supports our results.'
address:
- 'School of Economics and Finance, Queensland University of Technology, GP0 Box 2434, Brisbane QLD 4001, Australia.'
- 'Image Processing Group GREYC CNRS UMR 6072 14050 Caen Cedex France.'
author:
- 'J. J. Collet'
- 'M. J. Fadili'
bibliography:
- 'synthesisGG.bib'
title: Simulation of Gegenbauer Processes using Wavelet Packets
---
and
Gegenbauer process, Wavelet packet transform, Best-basis, Autocovariance.
Introduction
============
The simulation of long memory processes is an issue of paramount importance in many statistical problems. In the time domain, there exist different methods devoted to this task (see [@Ber94] for a non-exhaustive review). Alternative efficient approaches, which operate in the frequency domain, were also proposed (see [@Hosk84; @DavHar87; @Ber94]). More recently, owing to their scale-invariance property, wavelets have been widely adopted as a natural tool for analyzing and synthesizing $1/f$ long-memory processes. They were demonstrated to provide an almost Karhunen-Loève expansion of such processes [@Wor96].
The simulation of fractional differenced Gaussian noise (fdGn) using discrete wavelet transform (DWT) has been studied by [@MccWal96]. This kind of process is characterized by an unbounded power spectral density (PSD) at zero. The proposed method relies on the fact that the DWT approximately decorrelates long memory processes (see e.g. [@DerTew93; @TewKim92; @Wor96; @Jen99; @PerWal00]). The orthonormal wavelet decomposition “only” ensures approximate decorrelation. The quality of this approximation has been widely assessed in [@Wor90; @Flan92; @Dijk94; @TewKim92; @Wor96; @Jen99; @Jen00] for a variety of $1/f$ long memory processes.
The DWT is only adapted to processes whose PSD is unbounded at the origin. Gegenbauer processes (sometimes also called seasonal persistent processes) are also long memory processes and are characterized by an unbounded PSD. The main difference with the fdGn processes is that the singularities of the PSD of a Gegenbauer process can be located at one or several frequencies in the Nyquist domain, not necessarily at the origin. Therefore, a natural tool to analyze such processes is the [*wavelet packet transform*]{} (WPT), which is a generalization of the wavelet transform. The wavelet packets adaptively divide the frequency axis into separate dyadic intervals of various sizes. They provide an unconditional segmentation of the frequency axis and are uniformly translated in time. Moreover, a discrete time series of size $N$ admits more than $2^{N/2}$ wavelet packet (WP) bases. Among these bases, one is a very good candidate to whiten the series and hence almost diagonalize the covariance of the seasonal process.
Recently, Mallat, Zhang and Papanicolaou [@MalZhaPap98], and, following their work, Donoho, Mallat and von Sachs [@DonMalVon98], studied the idea of estimating the covariance of locally stationary processes by approximating the covariance of the process by a covariance which is almost diagonal in a specially constructed basis (cosine packets for their locally stationary processes) using an adaptation of Coifman-Wickerhauser (CW) best ortho-basis algorithm. To some extent (given that we are interested in synthesis and they were in estimation issues), our work here can be seen as the spectral dual of theirs, since we are interested in studying the covariance of seasonal processes in the WP domain.
To the best of our knowledge, the simulation of the Gegenbauer process using the Discrete WPT (DWPT) has been first studied in [@Whi01]. The DWPT creates a redundant collection of wavelet coefficients at each level of the transform organized in a binary tree structure, equipped with a natural inheritance property. Different methods exist to determine the best candidate orthonormal basis. The author in [@Whi01] used a method which depends on both the location of the singularity and the wavelet used in the DWPT. To simulate realizations of a Gegenbauer process, once the basis is found, it then remains to apply the (inverse) DWPT using the same approximation as in [@MccWal96].
This basis search method consists in first selecting the WP nodes whose associated wavelet filter has a square gain function that is sufficiently small at the Gegenbauer frequency. Then a pruning of this family is done to obtain the ortho-basis. The main advantage of this method is its simplicity. However, several points are still questionable and must be clarified. First, the notion of “sufficiently small” implies the introduction of a threshold which seems to depend both on the wavelet used and on the length of the simulated series. No indication is given on how to choose this threshold, which remains awkward to control. Furthermore, it is not clear why the basis should depend on the wavelet. Lastly, this method inherently leads to an over-partitioning of the spectrum which depends on the wavelet and the threshold considered (see e.g. Fig.\[basis\_spec\_freq\] and Fig.4 in [@Whi01]). Indeed, as the Gegenbauer process we consider here is stationary, it is known that the Karhunen-Loève basis is the Fourier basis. While over-partitioning, the approach of [@Whi01] inherently tries to approach the Fourier basis (more precisely, it tends to select most of the atoms from the Shannon wavelet packets at the deepest level). This makes the wavelet packet machinery of limited interest here. Furthermore, many important statistical tasks involving Gegenbauer processes would seriously suffer from such an over-partitioning, e.g. maximum likelihood estimation and resampling-based inference, to cite only a few examples.
To alleviate these intricacies, our belief is that it should be more beneficial to build, for each Gegenbauer process, a unique valid (almost whitening) basis for all wavelets with a reduced number of packets. This basis should only depend on the Gegenbauer frequencies, but neither on the long memory parameters nor on the wavelet used. The rationale behind these claims can be supported by different arguments. Indeed, wavelets are now widespread as an almost-diagonalizing expansion for $1/f$ processes, no matter what the long memory parameter and the wavelet are, although the latter parameters clearly influence the quality of the decorrelation, as was widely proven [@Wor96]. Our goal is then to mimic this behavior by extending and generalizing the aforementioned properties to Gegenbauer processes within the WP framework, with the desirable properties that (i) our basis tends to the dyadic wavelet basis (the 1-band WP) when the singularity frequency tends to $0$, and (ii) the provided basis should have a limited number of packets. To get a gist of the latter property, we can say that we are seeking a basis (and the corresponding tree) which attains the minimal diagonalization error penalized by the complexity of the tree in terms of the number of packets (i.e. the number of leaves of the tree) involved in the dyadic partition of the spectral axis provided by the selected WP basis. See [@DonMalVon98 Sec. 13] and [@Donoho97] for a more detailed discussion of complexity-penalized estimation and its relation to best-ortho-basis.
In this paper, we propose an alternative efficient way to determine the appropriate basis for the simulation of a $1$-factor Gegenbauer process, which we then extend to the simulation of $k$-factor Gegenbauer processes. To find this basis, we propose an algorithm which is an adaptation of the best-basis search algorithm of [@CoiWic92]. The main property of this algorithm is that it provides us with a (unique) basis that only depends on the location of the Gegenbauer frequency, unlike the construction method of [@Whi01] which provides bases depending both on the location of the singularity and on the wavelet. To point out the role played by the wavelet used, we will study the decorrelation properties of the WP coefficients of a Gegenbauer process when it is expressed in this basis. In particular, the influence of the wavelet regularity, the long memory parameter and the location of the singularity on the decorrelation decay speed will be established.
The organization of this paper is as follows. After some preliminaries and notations related to the WPT theory (Section \[sec:21\]) and to the Gegenbauer process (Section \[sec:22\]) are introduced, we will define the best-basis search algorithm and the cost function we propose (Sections \[sec:31\]-\[sec:32\]). Theoretical support to this cost function is also supplied. We then develop an algorithm to build an appropriate basis to simulate $1$-factor Gegenbauer process (Section \[sec:33\]). This method will then be extended to $k$-factor processes (Section \[sec:34\]). Theoretical evaluation of the approximation quality due to the fact that we consider the WP coefficient as uncorrelated is studied in Section \[sec:4\]. A simulation study is finally conducted to illustrate and discuss our results (Section \[sec:5\]).
Preliminaries
=============
The wavelet packet transform {#sec:21}
----------------------------
Wavelet packets were introduced by Coifman, Meyer and Wickerhauser [@CoiMeyWic92], by generalizing the link between multi-resolution approximations and wavelets. Consider the sequence of functions defined recursively as follows: $$\begin{aligned}
\psi_{j+1}^{2p}(t) & =\sum_{n=-\infty}^{\infty}h(n)\psi_j^p(t-2^jn) \\
\psi_{j+1}^{2p+1}(t) & =\sum_{n=-\infty}^{\infty}g(n)\psi_j^p(t-2^jn)\end{aligned}$$ for $j\in\mathbb{N}$ and $p=0,\dots,2^j-1$, where $h$ and $g$ are the conjugate pair of quadrature mirror filters (QMF). At the first scale, the functions $\psi_0$ and $\psi_1$ can be respectively identified with the father and the mother wavelets $\phi$ and $\psi_1$ with the classical properties (among others): $$\int \phi(t) = 1, \int \psi(t) = 0$$
The collection of translated, dilated and normalized functions $\psi^{p,n}_{j}\overset{\mathrm{def}}{=}2^{-j/2}\psi_p(2^{-j}t-n)$ makes up what we call the (multi-scale) wavelet packets associated to the QMFs $h$ and $g$. Here $j\in\mathbb{N}$ is the scale index, $p=0,\ldots,2^j-1$ can be identified with a frequency index and $n$ is the position index. It has been proved (see e.g. [@Wic94]) that if $\{\psi_j^{p,n}\}_{n\in\mathbb{Z}}$ is an orthonormal basis of a space $\mathbf{V}_j$, then the family $\{\psi_{j+1}^{2p,n},\psi_{j+1}^{2p+1,n}\}_{n\in\mathbb{Z}}$ is also an orthonormal basis of $\mathbf{V}_j$.
The recursive splitting of vector spaces is represented in a binary tree. To each node $(j,p)$, with $j\in\mathbb{N}$ and $p=0,\dots,2^j-1$, we associate a space $\mathbf{V}_j^p$ with the orthonormal basis $\{\psi_{j}^{p}(t-2^{j}n)\}_{n\in\mathbb{Z}}$. As the splitting relation creates two orthogonal bases, it is obvious that $\mathbf{V}_j^p=\mathbf{V}_{j+1}^{2p}\oplus\mathbf{V}_{j+1}^{2p+1}$.
The WP representation is overcomplete. That is, there are many subsets of wavelet packets which constitute orthonormal bases for the original space $\mathbf{V_0}$ (typically more than $2^{2^{J-1}}$ for a binary tree of depth $J$). While they form a large library, these bases can be easily organized in a binary tree and efficiently searched for extreme points of certain cost functions, see [@CoiWic92] for details. Such a search algorithm and associated cost function are at the heart of this paper.
In the following we call the collection $\mathcal{B}=\{\psi_j^{p,n}\}_{(j,p) \in \mathcal{T}, n \in \mathbb{Z}}$ the basis of $\mathbf{L^2}(\mathbb{R})$, and the tree $\mathcal{T}$ for which the collection of nodes $(j,p)$ are the leaves, the associated tree.
Given a basis $\mathcal{B}$ and its associated tree $\mathcal{T}$ it is possible to decompose any function $x$ of $\mathbf{L^2}(\mathbb{R})$ in $\mathcal{B}$. At each node $(j,p) \in \mathcal{T}$, the WP coefficients $W_{j}^{p}(n)$ of $x$ in the subspace $\mathbf{V}_{j}^{p}$ at position $n$ are given by the inner product: $$W_{j}^{p}(n)=\int\psi_{j}^{p}(t-2^{j}n)x(t)dt.$$ For a discrete signal of $N$ equally-spaced samples, the DWPT is calculated using a fast filter bank algorithm that requires $O(N\log N)$ operations. The interested reader may refer to the books of Mallat [@Mal98] and Wickerhauser [@Wic94] for more details about the DWPT.
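As a minimal illustration of one stage of this filter bank (our own sketch, with Haar QMFs; the paper itself does not prescribe a particular wavelet here), the two children packets of a node are obtained by filtering and keeping every second sample, and the split preserves energy since the transform is orthonormal:

```python
import numpy as np

# One splitting step of the wavelet packet filter bank, with Haar QMFs.
h = np.array([1.0, 1.0]) / np.sqrt(2.0)    # low-pass filter
g = np.array([1.0, -1.0]) / np.sqrt(2.0)   # high-pass filter

def split(w):
    """Node (j,p) -> children (j+1,2p), (j+1,2p+1): filter, then downsample."""
    blocks = np.asarray(w, dtype=float).reshape(-1, 2)
    return blocks @ h, blocks @ g

x = np.random.default_rng(0).normal(size=16)
a, b = split(x)          # level-1 packets W_1^0 and W_1^1
aa, ad = split(a)        # level-2 packets W_2^0 and W_2^1
print(a.shape, aa.shape)
print(np.isclose(np.sum(x**2), np.sum(a**2) + np.sum(b**2)))  # energy preserved
```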
Gegenbauer process {#sec:22}
------------------
The $k$-factor Gegenbauer process is a $1/f$-type process introduced in [@GraZhaWoo89; @Gray98]. The PSD $f$ of such a process $(X_t)_t$ is given, for all $|\lambda|\leq1/2$, by $$\label{GG_spect_dens}
f(\lambda)=\frac{\sigma_\varepsilon^2}{2\pi}\prod_{i=1}^{k}{\left(4{\left(\cos2\pi\lambda-\cos2\pi\nu_i\right)}^2\right)}^{-d_i}$$ where $k$ is a finite integer and $0<d_i<1/2$ if $0<|\nu_i|<1/2$ and $0<d_i<1/4$ if $|\nu_i|=0$, for $i=1,\dots,k$. The parameters $d_i$ and $\nu_i$ are called the memory parameter and the Gegenbauer frequency, respectively. The $k$-factor Gegenbauer process is a generalization of the fractionally differenced Gaussian white noise process (see [@Hos81] and [@GraJoy80]) in the sense that the PSD is unbounded at $k$ different frequencies not necessarily located at $0$.
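A direct transcription of this spectral density into code (purely illustrative; the frequencies and memory parameters below are arbitrary example values) reads:

```python
import numpy as np

# k-factor Gegenbauer spectral density, eq. (1).
def gegenbauer_psd(lam, nus, ds, sigma2_eps=1.0):
    """PSD at frequencies lam (|lam| <= 1/2), for Gegenbauer frequencies nus
    and memory parameters ds (one pair per factor)."""
    lam = np.asarray(lam, dtype=float)
    f = np.full_like(lam, sigma2_eps / (2.0 * np.pi))
    for nu, d in zip(nus, ds):
        f *= (4.0 * (np.cos(2 * np.pi * lam) - np.cos(2 * np.pi * nu)) ** 2) ** (-d)
    return f

lam = np.linspace(0.001, 0.499, 5)
print(gegenbauer_psd(lam, nus=[0.1, 0.3], ds=[0.2, 0.4]))
```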
The Gegenbauer process $(X_t)_t$ is related to a white noise process $(\varepsilon_t)_t$ with mean $0$ and variance $\sigma_\varepsilon^2$ through the relationship: $$\label{def2} \prod_{i=1}^{k} (I-2\eta_iB+B^2)^{d_i}X_t=\varepsilon_t,$$ where $BX_t=X_{t-1}$ and $\eta_i=\cos2\pi\nu_i$.\
The main characteristic of the Gegenbauer processes in the time domain is the slow decay of the autocovariance function. In the case of a $1$-factor Gegenbauer process, Gray [*et al.*]{} [@GraZhaWoo89] and then Chung [@Chu96] proved the asymptotic behavior of the autocovariance function: $$\rho(h)\sim h^{2d-1}\cos(2\pi\nu h) \quad \textrm{as} \quad h\rightarrow\infty.$$ A quick numerical check of this decay is sketched below. The next section is devoted to the construction of the best basis diagonalizing the covariance of an $N$-sample realization of a Gegenbauer process, with the convention $N=2^J$.
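The following side illustration (ours, not part of the paper's construction) checks this asymptotic behavior from the truncated moving-average representation of the process, whose coefficients are the Gegenbauer polynomials $C_j^{(d)}(\cos 2\pi\nu)$; the truncation is crude for $d$ close to $1/2$.

```python
import numpy as np

# MA coefficients psi_j = C_j^{(d)}(eta), eta = cos(2 pi nu), via the standard
# three-term Gegenbauer recurrence; the truncated autocovariance then shows the
# slow oscillatory decay quoted above (up to a constant factor).
d, nu, M = 0.2, 0.3, 50000
eta = np.cos(2 * np.pi * nu)
psi = np.empty(M)
psi[0], psi[1] = 1.0, 2.0 * d * eta
for n in range(2, M):
    psi[n] = (2 * eta * (n + d - 1) * psi[n - 1] - (n + 2 * d - 2) * psi[n - 2]) / n
for h in (101, 201, 401, 801):
    gamma_h = np.dot(psi[:M - h], psi[h:])            # truncated autocovariance
    print(h, gamma_h, h ** (2 * d - 1) * np.cos(2 * np.pi * nu * h))
```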
Simulation of Gegenbauer processes
==================================
This section is composed of two parts. The first one is devoted to the simulation procedure in the general case: no assumption is made concerning the basis, except that we have an appropriate basis. The second part concerns the construction of this appropriate basis.
Simulation procedure {#sec:31}
--------------------
Here we present the procedure to simulate a Gegenbauer process. Assume we would like to simulate a $k$-factor Gegenbauer process, $(X_t)_t$, with PSD $f$ as defined in (\[GG\_spect\_dens\]), with Gegenbauer frequencies $(\nu_1,\dots,\nu_k)$ and long memory parameters $(d_1,\dots,d_k)$. The length of the realization will be $N=2^J$.
We define the band-pass variance $\beta^2_{j,p}$ in the frequency interval $I^p_j=[\frac{p}{2^{j+1}},\frac{p+1}{2^{j+1}}]$ by: $$\label{Eq:Bjp}
\beta^2_{j,p}=2\int_{\frac{p}{2^{j+1}}}^{\frac{p+1}{2^{j+1}}}f(\lambda)d\lambda.$$ As in [@MccWal96] and [@Whi01], we assume that the PSD in each frequency interval $I^p_j$, for which the couple $(j,p)$ is a leaf of the tree $\mathcal{T}$ associated to the basis $\mathcal{B}$, is constant and equal to $\sigma^2_{j,p}$. Then the band-pass variance is (approximately) equal to: $$\beta^2_{j,p}=2\int_{\frac{p}{2^{j+1}}}^{\frac{p+1}{2^{j+1}}}\sigma^2_{j,p}d\lambda=2^{-j}\sigma^2_{j,p}.$$ Thus the variance of each WP coefficient is given by $\mathbb{V}[W_j^p(n)]=\sigma^2_{j,p}=2^j\beta^2_{j,p}$, where $\mathbb{V}$ denotes the variance operator. To simulate $N$ observations of a Gegenbauer process $(X_t)_{t=1,\ldots,N}$ with PSD $f$, we use the following procedure:
1. Given an appropriate basis $\mathcal{B}$ and its associated tree $\mathcal{T}$, calculate the band-pass variances $\beta^2_{j,p}$, $(j,p)\in\mathcal{T}$, as in (\[Eq:Bjp\]);

2. For each $(j,p)\in\mathcal{T}$, generate $2^{J-j}$ realizations of $W_j^p(n)$, independent Gaussian random variables with zero mean and variance equal to $\sigma^2_{j,p}$;

3. Organize the WP coefficients $W_j^p(n)$, for $(j,p)\in\mathcal{T}$ and $n=1,\dots,2^{J-j}$, in a vector $\mathbf{W}_\mathcal{B}$, and apply the inverse DWPT to obtain the observation vector $\mathbf{X}=(X_1,\dots,X_N)^T$.
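A hedged end-to-end sketch of these three steps is given below for a 1-factor process. It assumes PyWavelets for the inverse DWPT and, for simplicity, uses the uniform depth-4 partition in place of the adapted tree constructed in the next subsection; the parameter values ($d=0.4$, $\nu=1/12$, $N=2^8$, wavelet `'db10'`) are illustrative only.

```python
import numpy as np
import pywt
from scipy.integrate import quad

J, N = 8, 2 ** 8
d, nu, sigma_eps = 0.4, 1 / 12, 1.0
level = 4                                   # uniform tree: leaves (level, p), p = 0..2^level-1

def psd(lam):
    return sigma_eps ** 2 / (2 * np.pi) * \
        (4 * (np.cos(2 * np.pi * lam) - np.cos(2 * np.pi * nu)) ** 2) ** (-d)

def sigma2_jp(j, p):
    # step 1: band-pass variance beta^2_{j,p}, then Var[W_j^p(n)] = 2^j * beta^2_{j,p}
    a, b = p / 2 ** (j + 1), (p + 1) / 2 ** (j + 1)
    pts = [nu] if a < nu < b else None      # help quad across the integrable singularity
    beta2 = 2 * quad(psd, a, b, points=pts, limit=200)[0]
    return 2 ** j * beta2

rng = np.random.default_rng(1)
wp = pywt.WaveletPacket(data=np.zeros(N), wavelet='db10', mode='periodization', maxlevel=level)
for p, node in enumerate(wp.get_level(level, order='freq')):
    # step 2: i.i.d. zero-mean Gaussians with the packet variance, 2^(J-j) of them per packet
    wp[node.path] = rng.normal(0.0, np.sqrt(sigma2_jp(level, p)), size=N // 2 ** level)
X = wp.reconstruct(update=False)            # step 3: inverse DWPT gives X_1, ..., X_N
```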
In the following subsection, we examine the construction of what we term an appropriate basis $\mathcal{B}$.
Best-basis construction algorithm {#sec:32}
---------------------------------
### Approximate Diagonalization in a Best-Ortho-basis {#sec:BoBDon}
Let $(X_t)_t$ be a stationary Gegenbauer process and $\Gamma$ its covariance matrix. Let $\gamma_{i,j}{\left[{\mathcal B}\right]}$ denote the entries of $\Gamma{\left[\mathcal B\right]}$, the covariance matrix of the coordinates $\mathbf{W}_\mathcal{B}$ of $(X_t)_t$ in the ortho-basis ${\mathcal B}$. One can define diagonalization as an optimization of the functional [@DonMalVon98]: $$\max_{{\mathcal B}} {\mathcal E}({\mathcal B}) = \max_{{\mathcal B}} \sum_i e(\gamma_{ii}[{\mathcal B}])$$ where $e$ is taken as a strictly convex cost function. In practice, the optimization formulation of diagonalization over arbitrary bases is not widely used, presumably because it generally does not help in computing diagonalizations. Optimization of an objective ${\mathcal E}$ over finite libraries of orthogonal bases - the cosine packets library and the wavelet packets library - is, however, a problem with good algorithmic solutions. Wickerhauser [@Wic91] suggested applying these libraries in problems related to covariance estimation. He proposed the notion of selecting a “best basis” for representing a covariance by optimization of the “entropy functional” $e_H(\gamma) = -\log \gamma$ over all bases in a restricted library. The authors of [@MalZhaPap98] developed a proposal which uses the specific choice $e_2(\gamma) = \gamma^2$.
In the Wickerhauser formulation, one is optimizing over a finite library and there will not generally be a basis in this library which exactly diagonalizes $\Gamma$. Hence different strictly convex functions $e(\gamma)$ may end up picking different bases. For example, the quadratic cost function $e_2$ has a special interpretation in this context as it leads to a basis which best diagonalizes $\Gamma$ in a least-squares sense [@DonMalVon98], and is closely related to the Hilbert-Schmidt (HS) norm of the diagonalization error. Similarly, the -log “entropy functional” is connected to the Kullback-Leibler divergence [@DonMalVon98]. Although the approach developed in [@MalZhaPap98; @DonMalVon98] was specialized to the case of $e_2$, it is not really tied to this specific entropy measure; other additive convex measures, such as the $l_\alpha$ norm with $\alpha>2$ or the neg-entropy, can be accommodated, and the CW proposal applies equally well. This was the starting point of our work.
### Proposed Algorithm
The optimization problem of ${\mathcal E}$ over bases can be re-expressed as an optimization over trees, as follows. Set ${{\mathscr{E}_{\mathbb{V}}}}[W_{j}^{p}]=\sum_{n_j} e{\left(\mathbb{V}{\left[W_{j}^{p}(n_j)\right]}\right)}$. Then as $\sum_{\mu \in \mathcal{B}}=\sum_{(j,p) \in \mathcal{T}}\sum_{n_j}$, one is actually trying to optimize: $$\sum_{(j,p) \in \mathcal{T}} {{\mathscr{E}_{\mathbb{V}}}}[W_{j}^{p}]$$ over all recursive dyadic partitions of the spectral axis. The best basis $\mathcal{B}$ is then the one that maximizes some measure of the wavelet packets variances, among all the bases that can be constructed from the tree-structured library. The construction of the best basis can be accomplished efficiently using the recursive bottom-up CW algorithm defined by [@CoiWic92]: $$\label{algo_CW} \mathcal{B}_j^p=\begin{cases}
\mathcal{B}_{j+1}^{2p}\cup\mathcal{B}_{j+1}^{2p+1} & \textrm{if} \quad {{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p}] +
{{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}]>{{\mathscr{E}_{\mathbb{V}}}}[W_{j}^{p}],\\
\mathcal{B}_{j}^{p} & \textrm{if} \quad {{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p}] +
{{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}]\leq {{\mathscr{E}_{\mathbb{V}}}}[W_{j}^{p}].
\end{cases}$$
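In code, this bottom-up recursion takes the generic form sketched below; `cost` stands for the per-node functional ${{\mathscr{E}_{\mathbb{V}}}}[W_j^p]$ and must be supplied by the user, so this is a sketch of the recursion only, not of any particular cost measure.

```python
def cw_best_basis(cost, max_level):
    """Bottom-up Coifman-Wickerhauser search maximizing an additive cost over dyadic trees.

    cost(j, p) returns the value of the functional at node (j, p); the function returns
    the sorted list of leaves (j, p) of the selected tree, following the rule above.
    """
    def best(j, p):
        if j == max_level:                       # deepest admissible node: keep it
            return [(j, p)], cost(j, p)
        left, c_left = best(j + 1, 2 * p)
        right, c_right = best(j + 1, 2 * p + 1)
        c_parent = cost(j, p)
        if c_left + c_right > c_parent:          # children beat the parent: split
            return left + right, c_left + c_right
        return [(j, p)], c_parent                # otherwise keep the parent whole
    leaves, _ = best(0, 0)
    return sorted(leaves)
```

In a typical use, `cost` would be built from the packet variances, e.g. `cost = lambda j, p: sum(e(var) for var in packet_variances(j, p))` for some convex `e`; the next paragraph explains why such a crude choice is not adequate here.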
The chosen criterion relies on the comparison between some measure of the WP coefficient variances at the children nodes and at their parent. Besides the fact that these variances, and hence the basis, will depend on the long-memory parameter (or even on the wavelet), there is another, even more important, reason that prevents a crude use of such a search algorithm with the cost functions defined above (such as $e_2$). Indeed, the band-pass variance of any node is equal to the sum of those of its children. Hence, it is not difficult to check that any strictly convex cost functional such as those specified above, e.g. $e_2$ or $e_H$, will systematically select the basis corresponding to the finest partition of the spectral axis, which is clearly the worst in terms of complexity (i.e. number of wavelet packets). Again, this would make the wavelet packet machinery of limited interest.
Therefore, motivated by the above discussion, we were led to define, for the wavelet packets $W_{j+1}^{2p}$ and $W_{j+1}^{2p+1}$, a new type of WP variance cost measure as follows: $${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p}]=\begin{cases}
0 & \textrm{if} \quad \mathbb{V}[W_{j+1}^{2p}]\leq A_0\mathbb{V}[W_{j+1}^{2p+1}]\\
\mathbb{V}[W_{j+1}^{2p}] & \textrm{otherwise.}
\end{cases}$$ $${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}]=\begin{cases}
0 & \textrm{if} \quad \mathbb{V}[W_{j+1}^{2p+1}]\leq A_0\mathbb{V}[W_{j+1}^{2p}]\\
\mathbb{V}[W_{j+1}^{2p+1}] & \textrm{otherwise.}
\end{cases}$$ where $A_0$ is a fixed positive constant (its value will depend for instance on the singularity frequency and will be given in the proof of Proposition \[Prop\_1\_fact\]).
In the following, when we write (with a slight abuse of notation) that ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p}]=0$ or ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}]=0$, it will mean respectively that there exists a constant $A_0<1$ such that $\mathbb{V}[W_{j+1}^{2p}]\leq
A_0\mathbb{V}[W_{j+1}^{2p+1}]$ or $\mathbb{V}[W_{j+1}^{2p+1}]\leq A_0\mathbb{V}[W_{j+1}^{2p}]$. In these cases we will also use respectively the notations, $$\mathbb{V}[W_{j+1}^{2p}]\ll\mathbb{V}[W_{j+1}^{2p+1}]\ \ \ \ \ \textrm{and}\ \ \ \ \ \
\mathbb{V}[W_{j+1}^{2p+1}]\ll\mathbb{V}[W_{j+1}^{2p}]$$
Using the criterion defined above, algorithm (\[algo\_CW\]) becomes[^1]: $$\label{new_algo_CW}
\mathcal{B}_j^p=\begin{cases}
\mathcal{B}_{j+1}^{2p}\cup\mathcal{B}_{j+1}^{2p+1}, & \textrm{if} \quad
{{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p}]=0 ~\textrm{or}~{{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}]=0,\\
\mathcal{B}_{j}^{p}, & \textrm{otherwise.}
\end{cases}$$ In the following, we use this algorithm to build the best-ortho-basis for a Gegenbauer process.
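A direct transcription of this modified rule is sketched below. Here `var_child(j, p)` stands for $\mathbb{V}[W_j^p]$ (for instance computed from the band-pass variances of Section 3.1), the value of $A_0$ is an illustrative placeholder, and the tree is grown top-down, which yields the same leaves as the bottom-up aggregation since the test at a node only involves its two children.

```python
def split_node(j, p, var_child, A0=0.5):
    """Rule (new_algo_CW): split (j, p) iff one child's variance is negligible w.r.t. the other."""
    v_left = var_child(j + 1, 2 * p)
    v_right = var_child(j + 1, 2 * p + 1)
    return v_left <= A0 * v_right or v_right <= A0 * v_left

def adapted_tree(var_child, max_level, A0=0.5):
    """Grow the tree with the rule above and return its leaves (j, p)."""
    leaves, stack = [], [(0, 0)]
    while stack:
        j, p = stack.pop()
        if j < max_level and split_node(j, p, var_child, A0):
            stack += [(j + 1, 2 * p), (j + 1, 2 * p + 1)]
        else:
            leaves.append((j, p))
    return sorted(leaves)
```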
The $1$-factor case {#sec:33}
-------------------
It is natural to build the best basis according to the shape of the PSD of our process. More precisely, the basis is a function of the location of the singularities. This means that, in the case of a $1$-factor Gegenbauer process, the basis depends directly on the value of the Gegenbauer frequency. Using the notations defined in the previous section, the recursive construction is summarized in the following proposition.
\[Prop\_1\_fact\] If $(X_t)_t$ is a stationary $1$-factor Gegenbauer process, with parameters $(d,\nu,\sigma)$ then, at node $(j,p)$, if the frequency $\nu$ is in the interval $I_{j}^{p}=[\frac{p}{2^j},\frac{p+1}{2^j}[$, then: $${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p}]=0\ \ \ \ \ \textrm{or}\ \ \ \ \ {{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}]=0,$$ and consequently for algorithm (\[new\_algo\_CW\]): $$\mathcal{B}_j^p=\mathcal{B}_{j+1}^{2p}\cup\mathcal{B}_{j+1}^{2p+1}.$$ Furthermore, if the frequency $\nu$ is in the closure of the intervals $I_{j+1}^{2p}$ and $I_{j+1}^{2p+1}$, then: $${{\mathscr{E}_{\mathbb{V}}}}[W_{j+2}^{4p+1}]=0\ \ \ \ \ \textrm{and}\ \ \ \ \ {{\mathscr{E}_{\mathbb{V}}}}[W_{j+2}^{4p+2}]=0,$$ and consequently for algorithm (\[new\_algo\_CW\]): $$\mathcal{B}_j^p=\mathcal{B}_{j+2}^{4p}\cup\mathcal{B}_{j+2}^{4p+1}\cup\mathcal{B}_{j+2}^{4p+2}\cup
\mathcal{B}_{j+2}^{4p+3}.$$
Proof: [*See Appendix A*]{}.
To construct the best-ortho-basis of a $1$-factor Gegenbauer process, we propose Algorithm \[algo:1\] which proceeds according to Proposition \[Prop\_1\_fact\] and the aggregation relation defined in (\[new\_algo\_CW\]).
\[1factoralgo\]

*(Algorithm \[algo:1\]: construction of the best-ortho-basis of a $1$-factor Gegenbauer process. The pseudocode marks the binary-tree nodes by setting $Tree(j,p)$ to $0$ or $1$, first growing a family of candidate nodes around the Gegenbauer frequency and then pruning it, as described in the next paragraph and sketched in code below.)*
This algorithm is decomposed into two main loops. The first one builds a family in which the best-ortho-basis is included. The second loop prunes this family to obtain the best-ortho-basis; it corresponds to the second part of Proposition \[Prop\_1\_fact\].
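The sketch below reproduces the spirit of this variance-free construction: starting from the root, every packet whose frequency band touches the Gegenbauer frequency $\nu$ is split, down to a maximal depth $J$, and all other packets are kept as leaves. It uses the frequency convention $I_j^p=[p/2^{j+1},(p+1)/2^{j+1}]$ of Section 3.1 and is only a hedged rendition of Algorithm \[algo:1\], written from Proposition \[Prop\_1\_fact\] rather than from the original pseudocode; when $\nu$ falls on a dyadic boundary both flanking packets are split, which reproduces the second part of the proposition.

```python
def one_factor_tree(nu, J):
    """Leaves (j, p) of the adapted tree for a 1-factor process with Gegenbauer frequency nu."""
    leaves, stack = [], [(0, 0)]
    while stack:
        j, p = stack.pop()
        lo, hi = p / 2 ** (j + 1), (p + 1) / 2 ** (j + 1)
        if j < J and lo <= nu <= hi:        # packet touches the singularity: split it
            stack += [(j + 1, 2 * p), (j + 1, 2 * p + 1)]
        else:                               # packet away from nu (or maximal depth): leaf
            leaves.append((j, p))
    return sorted(leaves)

tree = one_factor_tree(nu=1 / 12, J=6)      # e.g. the first example discussed below
```

For $\nu=0.375$, which sits exactly on a dyadic boundary, this rule produces a tree with no leaf at scale $j=2$, in agreement with the second example below.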
The algorithm we propose is very fast, involving only simple comparisons, and it does not require the calculation of the variances of the WP coefficients. To illustrate its computational speed we provide in Fig.\[Tps\_construct\_basis\] some computation times needed to build bases using our method and the method of [@Whi01][^2]. In this example, we are only interested in the time needed to build the basis. These bases are built to simulate Gegenbauer processes with a singularity located at $1/12$ and length equal to $2^J$, with $J=6,\dots,13$. The solid line corresponds to the computation time of the algorithm we propose. The symbols $'+'$, $'\times'$ and $'.'$ correspond to the computation times using the method of [@Whi01] in the case of respectively $'db10'$ (Daubechies wavelet with $q=10$ vanishing moments), $'sym10'$ (Symmlet $q=10$) and $'coif5'$ (Coiflet $q=10$). In every case the computation time increases with the length of the process. However, it always increases much faster for [@Whi01] than for the present method (the computation time of the competing method is 10 to 300 times larger for series of length $64$ to $8192$). Typically, for a 8192-sample series, our algorithm takes 100 ms to find the best basis while the algorithm of [@Whi01] requires 30 s.
### Examples {#examples .unnumbered}
We give two examples of the construction of bases. Fig.\[basis\_spec\_freq\].(a) depicts the basis built using the first part of Algorithm \[algo:1\] to simulate a stationary Gegenbauer process with frequency $\nu=1/12$. Fig.\[basis\_spec\_freq\].(b) shows the basis constructed in the case of a stationary Gegenbauer process with $\nu=0.375$. The latter case corresponds to the second situation of Proposition \[Prop\_1\_fact\]. One may remark that, unlike the first case where the tree has at least one leaf at each scale, in this second case, because of the particular value of the Gegenbauer frequency, there exist scales for which the tree has no leaf (see scale $j=2$). For comparative purposes, observe that the basis provided by the approach of [@Whi01] (with a threshold 0.01) is highly dependent on the wavelet choice. For example in Fig.\[basis\_spec\_freq\].(c) ($'db3'$), one cannot get an idea of the singularity location. In Fig.\[basis\_spec\_freq\].(d) ($'coif5'$), two singularities are apparent while only one is relevant. In both of these cases, the basis is clearly over-partitioned.
The $k$-factor case {#sec:34}
-------------------
In this section we are interested in the general case: the construction of the appropriate basis to simulate a $k$-factor Gegenbauer process. To achieve this goal, let us consider $(X_t^1)_t$ and $(X_t^2)_t$, respectively a $(k-1)$-factor and a $1$-factor Gegenbauer process. We denote by $(d_1,\nu_1,\dots,d_{k-1}, \nu_{k-1})$ and $(d_{k},\nu_{k})$ the parameters of $(X_t^1)_t$ and $(X_t^2)_t$. Let $\mathcal{B}_1$ and $\mathcal{B}_2$ be the best-ortho-bases of $(X_t^1)_t$ and $(X_t^2)_t$. We denote respectively by $\mathcal{T}_1$ and $\mathcal{T}_2$ the trees associated with the bases $\mathcal{B}_1$ and $\mathcal{B}_2$.
Let $(X_t)_t$ be a $k$-factor Gegenbauer process with parameters $(d_1,\nu_1,\dots,d_{k},\nu_{k})$. We denote $\mathcal{B}$ the appropriate basis and $\mathcal{T}$ the associated tree. Let $\mathcal{B}'$ be the family equal to the union of the bases $\mathcal{B}_1$ and $\mathcal{B}_2$ and let $\mathcal{T}'$ be the associated tree. We are now ready to state the following,
\[Prop\_k\_fact\] Under the previous assumptions:
1. $\mathcal{B}\subset\mathcal{B}'$
2. Let $(j,p)$ be a node in the tree $\mathcal{T}'$ such that there exist $r^*=1,\dots,J-j$ and $s^*=0,\dots,2^{r^*}-1$ such that $(j+r^*,2^{r^*}p+s^*)$ is also in the tree $\mathcal{T}'$. Then: $$(j,p)\not\in\mathcal{T}\ \ \ \ \ \textrm{and}\ \ \ \ \ (j+r^*,2^{r^*}p+s^*)\in\mathcal{T}$$
Proof: [*See Appendix A*]{}.
According to this last proposition, the best-ortho-basis of a $k$-factor Gegenbauer process may be built from $k$ well-chosen best-ortho-bases of $1$-factor Gegenbauer processes. The steps outlined in Algorithm \[algo:2\] allow one to build the appropriate basis to simulate a $k$-factor Gegenbauer process. This procedure relies on Algorithm \[1factoralgo\] and on the results given in Proposition \[Prop\_k\_fact\].
\[kfactoralgo\]

1. For each factor $i=1,\dots,k$, construct the best-ortho-basis $\mathcal{B}_i$ and its associated tree $Tree_i$ using Algorithm \[algo:1\];

2. Merge the $k$ trees: $Tree=\cup_{i=1}^{k}Tree_i$ (implemented using e.g. the logical OR operator under R or Matlab);

3. Prune the merged tree: set $Tree(j,p)=0$ for every marked node $(j,p)$ that has a marked descendant, in accordance with the second part of Proposition \[Prop\_k\_fact\].
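A hedged sketch of this merge-and-prune procedure is given below; it repeats the `one_factor_tree` helper of the previous subsection so that the snippet is self-contained, and follows Proposition \[Prop\_k\_fact\] rather than the original pseudocode.

```python
def one_factor_tree(nu, J):
    # as in the previous subsection: split every packet whose band touches nu
    leaves, stack = [], [(0, 0)]
    while stack:
        j, p = stack.pop()
        lo, hi = p / 2 ** (j + 1), (p + 1) / 2 ** (j + 1)
        if j < J and lo <= nu <= hi:
            stack += [(j + 1, 2 * p), (j + 1, 2 * p + 1)]
        else:
            leaves.append((j, p))
    return sorted(leaves)

def k_factor_tree(nus, J):
    """Merge the k one-factor trees (logical OR on the marked leaves), then prune ancestors."""
    marked = set()
    for nu in nus:
        marked |= set(one_factor_tree(nu, J))
    def has_marked_descendant(j, p):
        # (jd, pd) is a descendant of (j, p) iff jd > j and pd // 2^(jd-j) == p
        return any(jd > j and (pd >> (jd - j)) == p for (jd, pd) in marked)
    return sorted((j, p) for (j, p) in marked if not has_marked_descendant(j, p))

tree = k_factor_tree([1 / 12, 1 / 24], J=6)   # the 2-factor example discussed below
```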
### Example {#example .unnumbered}
Here, we give an example of the construction of the best-ortho-basis for a $2$-factor Gegenbauer process $(X_t)_t$ with Gegenbauer frequencies $1/12$ and $1/24$. Fig.\[basis\_BoB\].(a) and \[basis\_BoB\].(b) show the best-ortho-bases $\mathcal{B}_1$ and $\mathcal{B}_2$ of the processes $(X^1_t)_t$ and $(X^2_t)_t$ (see the previous section for the construction of these bases). The family $\mathcal{B}^*$ equal to $\mathcal{B}_1\cup\mathcal{B}_2$ is given in Fig.\[basis\_BoB\].(c). This family is not a basis, since the intersections between its elements are not always empty: e.g. at depth $j=3$, the elements at $p=0$ and $p=1$ should not be considered as elements of the best-ortho-basis and must be pruned away. This is accomplished using the methodology developed above, and an appropriate basis for the process $(X_t)_t$ is obtained as represented in Fig.\[basis\_BoB\].(d).
Back to the original CW algorithm
---------------------------------
Our aim here is to shed light on our best-basis search algorithm by relating it to the original CW one. More precisely, we shall give an additive cost functional, which can be used within the CW algorithm, that is closely linked to our proposal in (\[new\_algo\_CW\]). Basically, the aggregation relation (\[new\_algo\_CW\]) can be thought of as a rule which at each level, enforces dyadic splitting of an interval $I^p_j$ if (and only if) the WP variance inside that interval is above a threshold, and the node corresponding to this interval is marked as a branch (non-terminal). Otherwise the interval is kept intact (marked as a leaf) and the children nodes in the tree are pruned away. Doing so, this procedure implicitly tries to track the packets that contain the singularities of the process. Hence, motivated by these observations, an additive variance cost functional satisfying this aggregation rule can be defined as: $$\label{Eq:costthresh1}
{{\mathscr{E}_{\mathbb{V}}}}[W^p_j]=\beta^2_{j,p} \mathds{1}{\left(\beta^2_{j,p} \geq \delta\right)}$$ where $\beta^2_{j,p}$ is the band-pass variance as before and $\delta$ is a strictly positive threshold. The original recursive (bottom-up) CW could then be used to minimize such a cost functional (termed as “Number above a threshold” functional in Wickerhauser book [@Wic94]). Unfortunately, the threshold remains an important issue to fix, and depends jointly on the singularity frequencies, the long memory parameter, the WP level and even its location. It is therefore awkward to choose and control in general. To circumvent such a difficulty, a condition involving the singularity frequencies can substitute for the thresholding condition in (\[Eq:costthresh1\]), that is: $$\label{Eq:costthresh2}
{{\mathscr{E}_{\mathbb{V}}}}[W^p_j]=\beta^2_{j,p} \mathds{1}{\left(\exists~l=1,\ldots,k~|~\nu_l \in I^p_j\right)}$$ From the above arguments, it turns out that minimizing the latter cost (with the CW algorithm) will provide us with the same basis as Algorithm \[algo:2\]. The main difference is that from a numerical standpoint, our construction algorithm is much faster and stable since there is no need to compute explicitly the band-pass variances, which avoids possible numerical integration problems (because of the PSD singularities).
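The sketch below implements this indicator-type cost together with the minimizing bottom-up CW recursion mentioned in the text (a node is split only when its children's total cost is strictly smaller than its own); `beta2(j, p)` stands for the band-pass variance and `nus` for the Gegenbauer frequencies. The claim that this yields the same basis as Algorithm \[algo:2\] is the text's argument, and the code is only a hedged illustration of it.

```python
def cw_min_tree(beta2, nus, max_level):
    """Leaves of the tree minimizing the indicator-type cost with the bottom-up CW recursion."""
    def cost(j, p):
        lo, hi = p / 2 ** (j + 1), (p + 1) / 2 ** (j + 1)
        return beta2(j, p) if any(lo <= nu <= hi for nu in nus) else 0.0
    def best(j, p):
        if j == max_level:
            return [(j, p)], cost(j, p)
        left, cl = best(j + 1, 2 * p)
        right, cr = best(j + 1, 2 * p + 1)
        if cl + cr < cost(j, p):            # children strictly cheaper: split
            return left + right, cl + cr
        return [(j, p)], cost(j, p)
    return sorted(best(0, 0)[0])
```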
Analysis of decorrelation properties {#sec:4}
====================================
One of the approximations adopted to simulate the Gegenbauer processes using the DWPT is that the coefficients inside each packet of the basis are uncorrelated. Strictly speaking, this is not true, although the expected range of correlation is rather weak as evidently shown by the numerical experiments in Fig.\[corrgg\], in contrast to the long-range dependence of the process in the original domain. This section provides a theoretical result that establishes the asymptotic behavior of the covariance between WP coefficients for a $1$-factor Gegenbauer process.
\[Cov\_Wave\] Assume that $\psi$ has $q\geq1$ vanishing moments with support $[(N_1-N_2+1)/2,(N_2-N_1+1)/2]$ and that $X(t)$ is a stationary $1$-factor Gegenbauer process with memory parameter $d$ and Gegenbauer frequency $\nu$. Then the covariance of the wavelet packet coefficients, $\textrm{Cov}(W_{j_1}^{p_1}(k_1),W_{j_2}^{p_2}(k_2))$, decays as:
-   $O\left(|2^{j_1}k_1-2^{j_2}k_2|^{2d-1-R_{p_1}-R_{p_2}}\right)$, if $p_1\neq0$ **and** $p_2\neq0$,

-   $O\left(|2^{j_1}k_1-2^{j_2}k_2|^{2d-1-R_{\max(p_1,p_2)}}\right)$, if $p_1=0$ **or** $p_2=0$,
- $O\left(|2^{j_1}k_1-2^{j_2}k_2|^{2d-1}\right)$, if $p_1=p_2=0$,
for all, $j_1$, $j_2$, $k_1$ and $k_2$ such that $|2^{j_1}k_1-2^{j_2}k_2|> (N^*+1)(2^{j_1}+2^{j_2})$, with $N^*=\max(N_1,N_2)$, and $R_p=q\sum_{k=0}^{j-1} p^k$ for $p \neq 0$, and $p={\left(p^{j-1} p^{j-2} \ldots p^1
p^0\right)}_2$ is the binary representation of $p$. In the last case, we note that $j_1=j_2=j$.
Proof: [*See Appendix B*]{}.
This theorem generalizes the results given by [@Jen99] and [@Jen00] for the case of the FARIMA process. It makes an interesting statement about the order of correlation between well-separated WP coefficients, by establishing that the covariance between $W_{j_1}^{p_1}(k_1)$ and $W_{j_2}^{p_2}(k_2)$ decays hyperbolically over time and scale space. More precisely, the decay rate for $p_1\neq0$ or $p_2\neq0$ depends on the regularity of the wavelet used, on the memory parameter of the process, and indirectly on the location of the singularity through the frequency indices $p_1$ and $p_2$. However, keeping the same notations as in Theorem \[Cov\_Wave\], the larger $q$, the wider the wavelet support and the fewer the wavelet packet coefficients that satisfy the support condition $|2^{j_1}k_1-2^{j_2}k_2|> (N^*+1)(2^{j_1}+2^{j_2})$. Thus, by choosing a wavelet with a large $q$, the rate of decay of the covariance increases, but only over a subset of WP coefficients. One must then avoid inferring a stronger statement. Nonetheless, the effective support of a wavelet is smaller than the provided bound (see Lemma \[lem\_support\]), and we expect a rapid decay of the WP coefficients' covariance for translations and dilations satisfying $|2^{j_1}k_1-2^{j_2}k_2|> (N^*+1)(2^{j_1}+2^{j_2})$. The following simulation study confirms these remarks.
Simulation results and discussion {#sec:5}
=================================
Exact correlation of DWPT transformed series
--------------------------------------------
Suppose we take $\mathbf{X}$ as our input stationary Gegenbauer process vector, whose covariance matrix is $\Gamma$. If $\mathcal{B}$ is the best-ortho-basis provided by our algorithm, it follows that the covariance matrix of the transformed series in the WP domain is: $$\label{Eq:exactGam}
\Gamma{\left[\mathcal{B}\right]} = \mathcal{W}^T_{\mathcal{B}} \Gamma \mathcal{W}_{\mathcal{B}}$$ where $\mathcal{W}_{\mathcal{B}}$ is the DWPT transform matrix operating on a vector $\mathbf{X}$, whose columns are the basis elements of $\mathcal{B}$. This equation gives the (exact) covariance structure for a given choice of wavelet (type, number of vanishing moments) and treatment of boundaries (e.g. periodic) since both are in $\mathcal{W}_{\mathcal{B}}$.
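A hedged numerical sketch of this computation is given below: the analysis operator $\mathcal{W}^T_{\mathcal{B}}$ is materialized column by column by transforming the canonical basis vectors with PyWavelets (an assumed dependency), after which $\Gamma[\mathcal{B}]$ follows by matrix products. `leaves` is the list of $(j,p)$ nodes of the chosen basis and `Gamma` the $N\times N$ covariance matrix of the process.

```python
import numpy as np
import pywt

def analysis_matrix(leaves, N, wavelet='db10'):
    def coeffs(x):
        wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode='periodization')
        # leaf (j, p) picks the p-th frequency-ordered packet at level j
        return np.concatenate([wp.get_level(j, order='freq')[p].data for (j, p) in leaves])
    # columns are the WP coefficient vectors of the canonical basis vectors e_0, ..., e_{N-1}
    return np.column_stack([coeffs(e) for e in np.eye(N)])

def wp_covariance(Gamma, leaves, wavelet='db10'):
    A = analysis_matrix(leaves, Gamma.shape[0], wavelet)   # A plays the role of W_B^T
    return A @ Gamma @ A.T                                 # Gamma[B] = W_B^T Gamma W_B
```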
Fig.\[corrgg\].(a) depicts the original correlation matrix $\Omega$ (resulting from $\Gamma$) for a Gegenbauer process vector ($N=64$) with parameters $d=0.4$ and $\nu=1/12$. In Fig.\[corrgg\].(b)-(e) are shown the exact correlation matrices resulting from (\[Eq:exactGam\]), using respectively the Daubechies ($q=10$), Symmlet ($q=10$), Coiflet ($q=10$) and Battle-Lemarié wavelets ($q=6$). There is essentially no correlation within the packets that are far from the singularities. The most prominent correlation occurs within the packets close to the singularity. This effect is mainly caused by the support condition stated in Theorem \[Cov\_Wave\], since packets near the singularity are those with the smallest length. There is also some correlation between wavelet packets. A significant part of the correlation between two different packets seems to be concentrated along the boundaries between contiguous WP. The latter effect is a consequence of the periodic boundary conditions. For example, the periodic boundary effect is higher for the Battle-Lemarié spline wavelet, whose support is 59 (compared to the series length of 64). But apart from boundary effects, this wavelet has the smallest between- and intra-packet correlation, particularly inside the WP close to the singularity. This can be interpreted as a result of the sharper band-pass localization of the Battle-Lemarié filters, while the other wavelets have side lobes that yield more energy leaks between bands.
To gain insight into these approximate diagonalizing capabilities of the DWPT, we conduct a larger scale experiment where four Gegenbauer processes with different frequencies and long memory parameters $(d,\nu)$ are studied: three 1-factor with $(0.4,1/12)$, $(0.2,1/12)$, $(0.3,0.016)$ and one 2-factor with $(0.3,1/40)-(0.3,1/5)$. Again the influence of the wavelet on the correlation matrix resulting from (\[Eq:exactGam\]) is assessed. For comparative purposes, our bases are systematically compared to those of [@Whi01] for each process (and wavelet, as the best basis of [@Whi01] also depends on the wavelet filter). We also need to consider a criterion to measure the quality of non-correlation. We here propose the Hilbert-Schmidt norm of the diagonalization error, which measures the sum of squares of the off-diagonal elements of the covariance matrix in the best-ortho-basis. As explained above, the method of [@Whi01] tends to over-partition the spectral axis, yielding too many packets. Hence, to penalize such configurations and make the comparison fair, we propose the following penalized criterion [@DonMalVon98; @Donoho97]: $$\label{Eq:S}
S(\mathcal{B})=\|\Omega{\left[\mathcal{B}\right]}-\Omega_0\|_{HS}^2+\lambda\#{\left(\mathcal{B}\right)}$$ where $\Omega{\left[\mathcal{B}\right]}$ is the correlation matrix resulting from (\[Eq:exactGam\]), $\Omega_0$ is the correlation matrix of a white noise, i.e. the identity matrix, and $\lambda$ is a weight parameter balancing the diagonalization error against the complexity of the tree associated to $\mathcal{B}$, as measured by $\#{\left(\mathcal{B}\right)}$, the number of WP (leaves of the tree) in the basis. The value of the weight $\lambda$ is determined by considering two extreme cases. On the one hand, in the Shannon basis, we can assume that the decorrelation of the covariance matrix of the Gegenbauer process is perfect, but the tree associated to this basis has the largest possible number of leaves and the penalty term is the highest; thus $S(\mathcal{B}_S)=2^J\lambda$. On the other hand, if one considers the basis $\mathcal{B}_0$ composed of a single leaf (i.e. the root packet $W^0_0$), there is no decorrelation of the covariance matrix, so that $S(\mathcal{B}_0)=\|\Omega-\Omega_0\|_{HS}^2+\lambda$, with $\Omega$ the correlation matrix of the Gegenbauer process whose variance-covariance matrix is $\Gamma$. Equating the scores of these two extreme cases yields the following weight: $$\lambda=\frac{\|\Omega-\Omega_0\|_{HS}^2}{2^J-1}.$$
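A small numpy sketch of this penalized score and of the calibrated weight is given below; `Omega` is the true correlation matrix of the process, `Omega_B` the WP-domain correlation matrix (e.g. obtained by normalizing $\Gamma[\mathcal{B}]$), and `n_packets` plays the role of $\#(\mathcal{B})$.

```python
import numpy as np

def hs_error(Omega_B):
    # squared Hilbert-Schmidt norm of the deviation from the identity (white-noise correlation)
    return np.sum((Omega_B - np.eye(Omega_B.shape[0])) ** 2)

def penalized_score(Omega, Omega_B, n_packets):
    J = int(np.log2(Omega.shape[0]))
    lam = hs_error(Omega) / (2 ** J - 1)     # weight calibrated on the two extreme bases
    return hs_error(Omega_B) + lam * n_packets
```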
Table.\[table:exact\] summarizes the scores $S$ obtained for each process as a function of the wavelet filter properties (type and number of vanishing moments). We here assumed time series of length $N=256$. For details about the calculation of the exact autocovariance function of Gegenbauer processes and hence its associated covariance matrix, see [@And86; @Chung96; @Lapsa97]. These tables show that:
-   The basis provided by our algorithm is systematically better than the one given by [@Whi01], whatever the wavelet and the process. Over-partitioning is clearly responsible for the bad performance of the approach in [@Whi01]. Meanwhile, the diagonalization error part (not shown here, but discussed in the next section) remains comparable for both bases. This means that our basis, with a reduced number of packets, does not sacrifice the diagonalization quality and yields a diagonalization error comparable to what would be obtained by over-partitioning. It is also worth pointing out that the approach of [@Whi01] fails to provide a basis for spline wavelets, and thus cannot be used in this case. The reason is that their best basis search algorithm strongly relies on a threshold on the wavelet packet filter gain, whose choice remains [*ad hoc*]{}.
- For a given process, the criterion $S$ decreases as the number of vanishing moments increases. This is in a very good agreement with our expectations as stated in Theorem \[Cov\_Wave\].
- From our experiments, we have also noticed that as the number of vanishing moments increases, the best basis provided by [@Whi01] tends towards the basis we propose.
- For all processes, and among all tested wavelets, the Battle-Lemarié spline wavelet appears to provide the best score. This confirms our previous observations. Nonetheless, the observed differences between wavelets become less salient at high number of vanishing moments.
Simulation of Gegenbauer processes
----------------------------------
This section is devoted to the illustration of some simulation examples of Gegenbauer processes. The same Gegenbauer processes as in the previous section are considered. For each process, wavelet type and number of vanishing moments, $M=500$ time series of length $N=256$ were generated according to Section \[sec:31\], using our basis and that provided by the method of [@Whi01]. For each simulated series, an unbiased estimate of the autocovariance function for the first $N/2$ lags was calculated. An average of the autocovariance function (over the $M$ estimates) was then obtained and the associated correlation matrix $\bar{\Omega}$ was constructed. Finally, the HS norm of the deviation between the true and averaged sample correlation matrices was computed: $$B(\mathcal{B})=\|\Omega-\bar{\Omega}\|_{HS}^2$$ As previously, a version of $B(\mathcal{B})$ penalized by the complexity of the tree associated to $\mathcal{B}$, as in (\[Eq:S\]), was also calculated (denoted $B_{\text{pen}}$)[^3]. In order to determine which part of the score $B_{\text{pen}}$ is the largest contributor to the performance, and in order not to favour our best basis construction algorithm, both $B$ and $B_{\text{pen}}$ are displayed. The score $B$ of the Hosking method [@Hosk84], which is an exact simulation scheme, is also reported. The results are summarized in Table.\[table:simu\].
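A hedged sketch of this Monte-Carlo score is given below: `simulate()` stands for one call to the simulation procedure of Section 3.1 and `true_acf` for the exact autocovariance sequence of the process; the sample autocovariances use the divisor $N-h$, the averaged correlation matrix is rebuilt by Toeplitz embedding of the first $N/2$ lags, and the exact conventions of the paper may differ in minor details.

```python
import numpy as np
from scipy.linalg import toeplitz

def mc_score(simulate, true_acf, N, M=500):
    """HS deviation between the true correlation matrix and the Monte-Carlo average."""
    acf_bar = np.zeros(N // 2)
    for _ in range(M):
        x = simulate()
        x = x - x.mean()
        acf_bar += np.array([np.sum(x[h:] * x[:N - h]) / (N - h)
                             for h in range(N // 2)]) / M
    Omega_bar = toeplitz(acf_bar / acf_bar[0])          # averaged sample correlation matrix
    Omega = toeplitz(np.asarray(true_acf)[:N // 2] / true_acf[0])
    return np.sum((Omega - Omega_bar) ** 2)
```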
As revealed by these tables, the deviation error part $B$ is comparable between the two best basis construction methods, but the penalized version differs significantly. This is caused by a fairly large difference in the “size” of the basis. Again, this backs up the statement that the method of [@Whi01] over-partitions the spectrum, and also agrees with the fact that, in terms of performance, our method generates Gegenbauer processes of a quality comparable to [@Whi01] with a smaller number of packets. This also clearly provides numerical support to our claim that a good quality DWPT-based best-basis search, and hence simulation, of Gegenbauer processes can be achieved without necessarily depending on the wavelet choice, just as has been extensively done for $1/f$ processes using the DWT. But one has to keep in mind that the quality of the reconstructed covariance structure (obtained by assuming almost complete decorrelation of the WP coefficients in the best-ortho-basis), compared to the true covariance of a Gegenbauer process, will still depend on the wavelet. From this point of view (decorrelation performance), the numerical results observed for simulated data essentially confirm those reported in the previous subsection.
Both the score $B$ and its penalized version exhibit a decreasing tendency with an increasing number of vanishing moments. This numerical evidence confirms the findings of the previous subsection and supports our claims in Theorem \[Cov\_Wave\]. The Battle-Lemarié spline wavelet seems to perform the best (in terms of both $B$ and $B_{\text{pen}}$), followed closely by the symmlets. The difference in performance between all wavelet types vanishes as $q$ increases.
Conclusion
==========
In this paper, we provided a new method to build approximate diagonalizing bases for $k$-factor Gegenbauer processes. Exploiting the intuitive fact that a wavelet packet library contains a basis in which a Gegenbauer process can be (almost) whitened, our best-ortho-basis search algorithm was first formulated in the case of a $1$-factor process, and the fast search algorithm of Coifman-Wickerhauser was adapted to find this best basis. Using this framework, our methodology was posed in a well-principled way and the uniqueness of the basis was guaranteed. Furthermore, unlike the approach of [@Whi01], it is very fast (see the simulations), does not depend on the wavelet choice, and is not very sensitive to the length of the time series. As the construction of the best basis for the simulation of a $k$-factor Gegenbauer process relies on the $1$-factor construction method, the same conclusions hold.
Then, we studied the error of diagonalization in the best-ortho-basis. Towards this goal, we established the decay speed of the correlation between two WP coefficients. These results generalize the work of [@Jen99] and [@Jen00] provided in the case of FARIMA processes. The numerical evidence shown by our experimental study confirmed these theoretical findings. It has also shown that the algorithm introduced in the paper is appealing in that it provides good quality simulated Gegenbauer processes with computational simplicity and reduced complexity bases independently of the wavelet, which is a clear improvement over the existing method in [@Whi01]. Owing to these appealing theoretical and empirical properties, and given its practical simplicity, we feel the general practitioner will be attracted to our simulator.
This new method of simulating Gegenbauer processes gives a new perspective for analyzing processes whose PSD singularities occur at any frequency in the Nyquist interval. In such a task, the basis can be obtained directly from the knowledge of the process parameters ($\nu$ in particular). Thus, our method has a direct application for bootstrap-based inference in the presence of Gegenbauer noise.
A remaining important open problem is how this work could be extended if the question of interest becomes that of estimating the parameters of a $k$-factor Gegenbauer process given one or more sample paths of this process. This estimation problem can be addressed in a maximum likelihood framework once the diagonalizing basis is found. In this case, the best-ortho-basis cannot be found by a naive straightforward application of Algorithm \[algo:2\]. Nevertheless, we have some promising directions that are now under investigation. Establishing the asymptotic behavior of such estimators also remains an open problem. One could also refine the estimation process by handling the residual correlation structure of the WP coefficients via explicit modeling by a low-order autoregressive process, as recently suggested in [@Craigmile04] for $1/f$ fractionally-differenced processes. Additional research is still required and our current work is focusing on these directions.
Appendix A {#appendix-a .unnumbered}
==========
[[***Proof Proposition \[Prop\_1\_fact\]:*** ]{}]{}
- Let’s consider the node $(j,p)$. We compute the variance of the WP coefficients at its two children: $(j+1,2p)$ and $(j+1,2p+1)$. Without loss of generality, we assume that the frequency $\nu$ is in the interval $I_{j+1}^{2p}=[\frac{2p}{2^{j+1}},\frac{2p+1}{2^{j+1}}[$. Then a good approximation of the variance of the WP coefficient is given by the integral over the interval $I_{j+1}^{2p}$ of the PSD. On this interval, a very good approximation to the PSD of the process $f(\lambda)=\frac{\sigma^2}{2\pi}|2(\cos2\pi\lambda-\cos2\pi\nu)|^{-2d}$ is given by $C_0|\lambda-\nu|^{-2d}$ with $C_0$ a positive constant.
Two different cases are then distinguished with associated values of $A_0$:
$\ast$ Case where $\nu$ lies inside $I_{j+1}^{2p}$, away from its right endpoint $\frac{2p+1}{2^{j+1}}$: $$\begin{aligned}
\mathbb{V}[W_{j+1}^{2p}] &=& C_0\int_{\frac{2p}{2^{j+1}}}^{\nu}|\nu-\lambda|^{-2d}d\lambda+C_0\int_{\nu}^{\frac{2p+1}{2^{j+1}}}|\lambda-\nu|^{-2d}d\lambda\\ &=&\frac{C_0}{1-2d}\left(\left(\nu-\frac{2p}{2^{j+1}}\right)\left(\left(\nu-\frac{2p}{2^{j+1}}\right)^2\right)^{-d}+\left(\frac{2p+1}{2^{j+1}}-\nu\right)\left(\left(\frac{2p+1}{2^{j+1}}-\nu\right)^2\right)^{-d}\right)\\
&=&\frac{C_0}{1-2d} u^{1-2d}\left(1-\left(1-\frac{1}{2^{j+1} u}\right)^{1-2d}\right), \ \ \text{where~} u=\frac{2p+1}{2^{j+1}}-\nu\end{aligned}$$ $$\begin{aligned}
&\geq&\frac{C_0}{2^{j+1}}u^{-2d}\left(1+\frac{2d}{2^{j+1}u}\right)\\ &\geq&\frac{C_0}{2^{j+1}}u^{-2d}\left(1+\frac{2d}{2^{j+1}\frac{1}{2^{j+1}}}\right)=\frac{C_0}{2^{j+1}}\left(\frac{2p+1}{2^{j+1}}-\nu\right)^{-2d}\left(1+2d\right)\\\end{aligned}$$
where the last inequality is a consequence of the fact that $u \leq 2^{-(j+1)}$ in this case.
To compute the variance of $W_{j+1}^{2p+1}$ we denote by $\lambda^*$ the location of the maximum of the PSD $f$ over the interval $I_{j+1}^{2p+1}$. As $f$ is a non-increasing function over $[\frac{2p+1}{2^{j+1}},\frac{2p+2}{2^{j+1}}]$, it follows that this variance is bounded by a rectangle area (to a good approximation in this case): $$\begin{aligned}
\mathbb{V}[W_{j+1}^{2p+1}]
&\leq& \frac{\sigma^2}{2\pi2^{j+1}}|2(\cos2\pi\lambda^*-\cos2\pi\nu)|^{-2d}.\end{aligned}$$ Using the same approximation of the PSD as previously, we obtain: $$\mathbb{V}[W_{j+1}^{2p+1}]\leq\frac{C_0}{2^{j+1}}\left(\frac{2p+1}{2^{j+1}}-\nu\right)^{-2d}.$$ Thus $$\mathbb{V}[W_{j+1}^{2p+1}]\leq\frac{1}{1+2d}\mathbb{V}[W_{j+1}^{2p}]=A_0\mathbb{V}[W_{j+1}^{2p}],~ 0<A_0<1$$ Therefore, in this case we can write that $\mathbb{V}[W_{j+1}^{2p+1}]\ll\mathbb{V}[W_{j+1}^{2p}]$, and following the criterion defined in section \[sec:31\] we have ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}]=0$. Consequently, at the node $(j,p)$, the algorithm (\[new\_algo\_CW\]) gives us: $$\mathcal{B}_j^p=\mathcal{B}_{j+1}^{2p}\cup\mathcal{B}_{j+1}^{2p+1}.$$
$\ast$ Case where $\frac{2p+1}{2^{j+1}}\sim\nu$: using the same steps as in the first case, we prove that: $$\begin{aligned}
\label{var_gauche}
\mathbb{V}[W_{j+1}^{2p}] &=&
\frac{C_0}{1-2d}\left(\frac{2p+1}{2^{j+1}}-\nu\right)^{1-2d}\left(1-\left(1-\frac{1}{2^{j+1}(\frac{2p+1}{2^{j+1}}-\nu)}\right)^{1-2d}\right).
\end{aligned}$$ When $\frac{2p+1}{2^{j+1}}\sim\nu$, the rectangle approximation is no longer valid. But a good approximation of the PSD in the interval $[\frac{2p+1}{2^{j+1}},\frac{2p+2}{2^{j+1}}]$ can be $C_0|\lambda-\nu|$, where the constant $C_0$ is the same as previously. Then, after some manipulations: $$\begin{aligned}
\label{var_droite}
\mathbb{V}[W_{j+1}^{2p+1}] &=&
\frac{C_0}{1-2d}\left(\frac{2p+1}{2^{j+1}}-\nu\right)^{1-2d}\left(1+\left(1-\frac{1}{2^{j+1}\left(\frac{2p+1}{2^{j+1}}-\nu\right)}\right)^{1-2d}\right).
\end{aligned}$$
As by assumption $\frac{2p+1}{2^{j+1}}\sim\nu$, we have that $0<\frac{2p+1}{2^{j+1}}-\nu<\frac{1}{2^{j+2}}$ and then, $$\left(1-\frac{1}{2^{j+1}\left(\frac{2p+1}{2^{j+1}}-\nu\right)}\right)^{1-2d}<0.$$ Finally, combining equations (\[var\_gauche\]) and (\[var\_droite\]), we obtain $\mathbb{V}[W_{j+1}^{2p+1}] \sim A_1\mathbb{V}[W_{j+1}^{2p}],$ where: $$A_1 = \frac{1+\left(1-\frac{1}{2^{j+1}\left(\frac{2p+1}{2^{j+1}}-\nu\right)}\right)^{1-2d}}{1-\left(1-\frac{1}{2^{j+1}\left(\frac{2p+1}{2^{j+1}}-\nu\right)}\right)^{1-2d}} < 1.$$ In this case we can write that $\mathbb{V}[W_{j+1}^{2p+1}]\ll\mathbb{V}[W_{j+1}^{2p}]$, and using the criterion defined in section \[sec:31\] we obtain ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}]=0$. Finally, at the node $(j,p)$, algorithm (\[new\_algo\_CW\]) gives us: $$\mathcal{B}_j^p=\mathcal{B}_{j+1}^{2p}\cup\mathcal{B}_{j+1}^{2p+1}.$$
-   In the case where the frequency $\nu$ is in the closure of the intervals $I_{j+1}^{2p}$ and $I_{j+1}^{2p+1}$, neither ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p}]=0$ nor ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}]=0$ holds, and one cannot conclude at this level. Fortunately, at depth $j+2$ we still have: $${{\mathscr{E}_{\mathbb{V}}}}[W_{j+2}^{4p+1}]=0\ \ \ \ \ \textrm{and}\ \ \ \ \
{{\mathscr{E}_{\mathbb{V}}}}[W_{j+2}^{4p+2}]=0.$$ Then we easily obtain that for algorithm (\[new\_algo\_CW\]): $$\mathcal{B}_j^p=\mathcal{B}_{j+2}^{4p}\cup\mathcal{B}_{j+2}^{4p+1}\cup\mathcal{B}_{j+2}^{4p+2}\cup\mathcal{B}_{j+2}^{4p+3}.$$
[[***Proof Proposition \[Prop\_k\_fact\]:*** ]{}]{}
1. Let $(j,p)$ be a node. We assume that this node is not in the tree $\mathcal{T}'$. This means that this node is neither in the tree $\mathcal{T}_1$ nor in $\mathcal{T}_2$, and in terms of the threshold criterion we have $${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p}(1)]=0\ \ \textrm{or}\ \
{{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}(1)]=0\ \ \ \ \ \ \
\ \ \textrm{\bf and\rm}\ \ \ \ \ \ \ \ \
{{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p}(2)]=0\ \ \textrm{or}\ \
{{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}(2)]=0.$$ As the tree $\mathcal{T}_2$ is associated to the best-ortho-basis of a $1$-factor Gegenbauer process, the fact that the node $(j,p)$ is not in the tree $\mathcal{T}_2$ means that the frequency $\nu_k$ is not in the interval $I_{j}^p=[\frac{p}{2^j},\frac{p+1}{2^j}]$. Then in this interval, the function $|2(\cos2\pi\lambda-\cos2\pi\nu_k)|^{-2d_k}$ is bounded and has a maximum at frequency $\lambda^*\in
I_j^p$. Then, $$\begin{aligned}
\label{Eq:varkfactor}
\mathbb{V}[W_{j+1}^{2p}]
&\leq&
\frac{\sigma^2}{2\pi}|2(\cos2\pi\lambda^*-\cos2\pi\nu_k)|^{-2d_k}\int_{\frac{2p}{2^{j+1}}}^{\frac{2p+1}{2^{j+1}}}\prod_{i=1}^{k-1}|2(\cos2\pi\lambda-\cos2\pi\nu_i)|^{-2d_i}d\lambda \nonumber \\
&=&
\frac{\sigma^2}{2\pi}|2(\cos2\pi\lambda^*-\cos2\pi\nu_k)|^{-2d_k}\mathbb{V}[W_{j+1}^{2p}(1)].
\end{aligned}$$ and, using the same argument, $$\mathbb{V}[W_{j+1}^{2p+1}]\leq\frac{\sigma^2}{2\pi}|2(\cos2\pi\lambda^*-\cos2\pi\nu_k)|^{-2d_k}\mathbb{V}[W_{j+1}^{2p+1}(1)].$$ Finally, as ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p}(1)]=0$ or ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}(1)]=0$, we have $\mathbb{V}[W_{j+1}^{2p}]\gg\mathbb{V}[W_{j+1}^{2p+1}]$ or $\mathbb{V}[W_{j+1}^{2p+1}]\gg\mathbb{V}[W_{j+1}^{2p}]$, which means $${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p}]=0\ \ \ \ \ \ \ \textrm{or}\ \ \ \ \ \ {{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}]=0.$$ Then $\mathcal{B}_j^p=\mathcal{B}_{j+1}^{2p}\cup\mathcal{B}_{j+1}^{2p+1}$, and so the node $(j,p)$ is not in the tree $\mathcal{T}$. Finally, $$\mathcal{B}\subset\mathcal{B}'.$$
2. Here $(j,p)$ and $(j+r^*,2^{r^*}p+s^*)$ (for $s^*=0,\dots,2^{r^*}-1$) are in the tree $\mathcal{T}'$. We denote $r$ the minimum value of $r^*$ for which there exists a $s$ ($s=0,\dots,2^r-1$) such that the node $(j+r,2^{r}p+s)$ is in the tree $\mathcal{T}'$.\
Then the fact that the nodes $(j,p)$ and $(j+r,2^rp+s)$ are in the tree $\mathcal{T}'$ means that $(j,p)$ is in $\mathcal{T}_1$ or in $\mathcal{T}_2$ and $(j+r,2^rp+s)$ is in $\mathcal{T}_2$ or in $\mathcal{T}_1$ (it is important to remark that $(j,p)$ and $(j+r,2^rp+s)$ cannot both be in $\mathcal{T}_1$ or both in $\mathcal{T}_2$). Without loss of generality, we assume that $(j,p)$ is in $\mathcal{T}_2$ and $(j+r,2^rp+s)$ is in $\mathcal{T}_1$. All the calculations made in the following remain valid if we consider that $(j,p)$ is in $\mathcal{T}_1$ and $(j+r,2^rp+s)$ is in $\mathcal{T}_2$. To simplify the notations, we assume also that there exists a $s$ which is even.\
We denote $W_j^p(1)$ and $W_j^p(2)$, for $j=0,\dots,J$ and $p=0,\dots,2^j-1$, the wavelet packet coefficients of respectively the processes $(X^1_t)_t$ and $(X^2_t)_t$. From these sub-processes, we have that, for the CW algorithm,
- for the tree $\mathcal{T}_1$: $$\mathcal{B}_{j+r-1}^{2^{r-1}p+\frac{s}{2}}(1)=\mathcal{B}_{j+r}^{2^rp+s}(1)\cup\mathcal{B}_{j+r}^{2^rp+s+1}(1)$$ because ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+r}^{2^rp+s}(1)]=0$ or ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+r}^{2^rp+s+1}(1)]=0$,
- for the tree $\mathcal{T}_2$: $$\mathcal{B}_j^p(2)=\mathcal{B}_j^p(2)$$ because ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p}(2)]=\mathbb{V}[W_{j+1}^{2p}(2)]$ and ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+1}^{2p+1}(2)]=\mathbb{V}[W_{j+1}^{2p+1}(2)]$,
We consider the intervals $I_{j+r}^{2^rp+s}=[\frac{2^rp+s}{2^{j+r}}, \frac{2^rp+s+1}{2^{j+r}}]$ and $I_{j+r}^{2^rp+s+1}=[\frac{2^rp+s+1}{2^{j+r}}, \frac{2^rp+s+2}{2^{j+r}}]$. As the node $(j,p)$ is in the tree $\mathcal{T}_2$, and as $\mathcal{T}_2$ is the tree of a basis, the frequency $\nu_k$ is not in the interval $I_{j+r}^{2^rp+s}\cup I_{j+r}^{2^rp+s+1}$.\
We denote $\lambda^*$ the location of the maximum of $|2(\cos2\pi\lambda-\cos2\pi\nu_k)|^{-2d_k}$ in the interval $I_{j+r-1}^{2^{r-1}p+s/2}$ (Note that the maximum is bounded). From (\[Eq:varkfactor\]), we have: $$\begin{aligned}
\mathbb{V}[W_{j+r}^{2^rp+s}]
&\leq&
\frac{\sigma^2|2(\cos2\pi\lambda^*-\cos2\pi\nu_k)|^{-2d_k}}{2\pi}\mathbb{V}[W_{j+r}^{2^rp+s}(1)],
\end{aligned}$$ $$\begin{aligned}
\mathbb{V}[W_{j+r}^{2^rp+s+1}]
&\leq&
\frac{\sigma^2|2(\cos2\pi\lambda^*-\cos2\pi\nu_k)|^{-2d_k}}{2\pi}\mathbb{V}[W_{j+r}^{2^rp+s+1}(1)].
\end{aligned}$$ Then, as ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+r}^{2^rp+s}(1)]=0$ or ${{\mathscr{E}_{\mathbb{V}}}}[W_{j+r}^{2^rp+s+1}(1)]=0$, we have $\mathbb{V}[W_{j+r}^{2^rp+s}]\gg\mathbb{V}[W_{j+r}^{2^rp+s+1}]$ or $\mathbb{V}[W_{j+r}^{2^rp+s+1}]\gg\mathbb{V}[W_{j+r}^{2^rp+s}]$, which means $${{\mathscr{E}_{\mathbb{V}}}}[W_{j+r}^{2^rp+s}]=0\ \ \ \ \ \ \ \textrm{or}\ \ \ \ \ \ {{\mathscr{E}_{\mathbb{V}}}}[W_{j+r}^{2^rp+s+1}]=0.$$ Then, because of the particular choice of $r$, $$\mathcal{B}_j^p=\bigcup_{i=0}^{2^r-1}\mathcal{B}_{j+r}^{2^rp+i}.$$ Finally, the node $(j,p)$ is not in the tree $\mathcal{T}$.\
However, the fact that the node $(j+r,2^rp+s)$ is in the tree $\mathcal{T}_1$ means that $${{\mathscr{E}_{\mathbb{V}}}}[W_{j+r+1}^{2^{r+1}p+2s}(1)]=\mathbb{V}[W_{j+r+1}^{2^{r+1}p+2s}(1)]\ \ \ \ \textrm{and}\ \ \ \ \ {{\mathscr{E}_{\mathbb{V}}}}[W_{j+r+1}^{2^{r+1}p+2s+1}(1)]=\mathbb{V}[W_{j+r+1}^{2^{r+1}p+2s+1}(1)].$$ As the frequency $\nu_k$ is not in the interval $I_{j+r}^{2^rp+s}$, we easily obtain that $${{\mathscr{E}_{\mathbb{V}}}}[W_{j+r+1}^{2^{r+1}p+2s}]=\mathbb{V}[W_{j+r+1}^{2^{r+1}p+2s}]\ \ \ \ \textrm{and}\ \ \ \ \ {{\mathscr{E}_{\mathbb{V}}}}[W_{j+r+1}^{2^{r+1}p+2s+1}]=\mathbb{V}[W_{j+r+1}^{2^{r+1}p+2s+1}].$$ Finally, the node $(j+r,2^rp+s)$ is in the tree $\mathcal{T}$. Using the same argument, we show that the node $(j+r^*,2^{r^*}p+s^*)$ is in the tree $\mathcal{T}$.
Appendix B {#appendix-b .unnumbered}
==========
To prove Theorem \[Cov\_Wave\], the following preliminary lemmas are needed.
\[lem\_moment\] Let $\psi$ be a wavelet with $q$ vanishing moments, and the associated high-pass QMF filters $h$ and $g$. Then, for all $j$ and $p=0,\dots,2^j-1$, the moments of the WP function $\psi^j_p$ are such that: $$\mathcal{M}_{j,p}(r)=\int_{\mathbb{R}} t^r \psi_j^p(t) dt = \delta(r)\delta(p), ~ for ~ 0 \leq r < R_p$$ where $R_0=1$ and $R_p=q\sum_{k=0}^{j-1} p_k$ for $p \neq 0$, and $p={\left(p_{j-1} p_{j-2} \ldots p_1 p_0\right)}_2$ is the binary representation of $p$.
[[***Proof Lemma \[lem\_moment\]:*** ]{}]{} Note that for $p=1$ (wavelet basis), our result specializes to the traditional relation $\mathcal{M}_{j,1}(r)=0$ for $0 \leq r < q$. The lemma can be proved either by induction in the original domain, or using explicit proof in the Fourier domain. We shall proceed according to the latter. By iterating the actions of the QMF filters $h$ or $g$, from the root of the binary tree, to extract the appropriate range of frequencies, one can write that: $$\label{Eq:psijp1}
\hat{\psi}_{j}^p(\omega) = {\left[\prod_{k=0}^{j-1}
F_{p_{j-k-1}}(2^{k}\omega)\right]} \hat{\phi}(\omega)$$ where the sequence of filters $F_{p_k}$ is chosen according to $p=2^{j-1}p_{j-1} + 2^{j-2} p_{j-2}+ \ldots + 2p_1 + p_0$: $$F_{p_k} = \begin{cases}
\hat{h} & if ~ p_k = 0 \\
\hat{g} & if ~ p_k = 1
\end{cases}$$ and $\hat{\phi}(0) \neq 0$.
For compactly supported wavelets with $q$ vanishing moments, the associated high-pass filter $\hat{g}$ has a zero of order $q$ at $\omega=0$: $$\label{Eqqmfg}
\hat{g}(\omega) = {\left(1-e^{-i\omega}\right)}^q P{\left(e^{i\omega}\right)}$$ where $P(.)$ is a trigonometric polynomial bounded around $\omega=0$. The number of vanishing moments of $\psi_j^p(t)$ is equivalently given by the number of vanishing derivatives of $\hat{\psi}_j^p(\omega)$ at $\omega=0$, that is: $$\mathcal{M}_{j,p}(r)={\left[{\left(\frac{1}{i}\partial_\omega\right)}^r \hat{\psi}_j^p(\omega)\right]}_{\omega=0} ~
\text{for} ~ r=0,\ldots,R_p-1$$
- If $p=0$, $\hat{\psi}_{j}^p(\omega)$ is just the product of low-pass filters, and $\psi_j^0(t)=\phi_j(t)$ the scaling function at depth $j$. Then, $\mathcal{M}_{j,0}(r)=\hat{\phi}_j(0)$, which is non-zero with $R_0=1$. If additional constraints are imposed on the wavelet choice (e.g. Coiflets), $\mathcal{M}_{j,0}(r)$ might be zero for $1 \leq r < q$.
- If $p\neq0$, from (\[Eq:psijp1\]) we can write: $$\hat{\psi}_{j}^p(\omega) = \prod_{k|p_{j-k-1}=1} {\left(1-e^{-i2^{k}\omega}\right)}^q Q(\omega)$$ where $Q(.)$ is again bounded around $\omega=0$. The number of vanishing moments is then given by the number of zeros at $\omega=0$ which is $R_p=q\sum_k p_k$. The lemma follows.
\[lem\_support\] If the QMF $h$ has a support in ${\left[N_1,N_2\right]}$, then the support of the WP function $\psi_j^p(t)$ at each node ${\left(j,p\right)}$ in the WP binary tree is always included in ${\left[-2^j{\left(N^*+1\right)},2^j{\left(N^*+1\right)}\right]}$, with $N^*=\max{\left(|N_1|,|N_2|\right)}$.
[[***Proof Lemma \[lem\_support\]:*** ]{}]{} This is proved by induction. We also use the fact that $\psi_0$ will be supported in the interval ${\left[N_1,N_2\right]}$ and $\psi_1$ in ${\left[\frac{N_1-N_2+1}{2},\frac{N_2-N_1+1}{2}\right]}$ (see e.g. Mallat (1998)[@Mal98], Proposition 7.2).
\[mom\_acf\] Let $\mathcal{I}$ be a collection of disjoint dyadic intervals $I^j_p$ whose union is the positive half line, and $\mathcal{B}=\{\psi_j^p(t-2^jk):0 \leq k < 2^{J-j}, I_j^p \in \mathcal{I}\}$ is the associated orthonormal basis. Let $h$ and $g$ the QMFs as defined in (\[Eqqmfg\]). The vanishing moments $\mathcal{M}_{j_1,j_2}^{p_1,p_2}(m)$ of the inter-correlation function $\Lambda_{j_1,j_2}^{p_1,p_2}(h)$ of $\psi^{p_1}_{j_1}(t)$ and $\psi^{p_2}_{j_2}(t) \in \mathcal{B}$ satisfy:
-------------------------------------------- ------------------------------------------ ----- --------------------------------
1\) $p_1 \neq 0$ [**and**]{} $p_2 \neq 0$: $\mathcal{M}_{j_1,j_2}^{p_1,p_2}(m) = 0$ for $0 \leq m<R_{p_1}+R_{p_2}$,
2\) $p_1 \neq 0$ [**or**]{} $p_2 \neq 0$: $\mathcal{M}_{j_1,j_2}^{p_1,p_2}(m) = 0$ for $0 \leq m< R_{\max(p_1,p_2)}$,
3\) $p_1=p_2=0$: $\mathcal{M}_{j_1,j_2}^{p_1,p_2}(m) = 0$ for $1 \leq m<2q$.
-------------------------------------------- ------------------------------------------ ----- --------------------------------
\
Furthermore, the support of $\Lambda_{j_1,j_2}^{p_1,p_2}(h)$ is included in ${\left[-{\left(N^*+1\right)}{\left(2^{j_1}+2^{j_2}\right)},{\left(N^*+1\right)}{\left(2^{j_1}+2^{j_2}\right)}\right]}$.
[[***Proof Lemma \[mom\_acf\]:*** ]{}]{} By definition of the inter-correlation function, we have: $$\Lambda_{j_1,j_2}^{p_1,p_2}(h)=\int \psi_{j_1}^{p_1}(t)\psi_{j_2}^{p_2}(t-h)dt$$ As these WP functions belong to the orthonormal basis $\mathcal{B}$, then at integer lags $\Lambda_{j_1,j_2}^{p_1,p_2}(n)=\delta{\left(j_1-j_2\right)}\delta{\left(p_1-p_2\right)}\delta{\left(n\right)}$.
As far as the support is concerned, it is not a difficult matter to see, using Lemma \[lem\_support\], that $\Lambda_{j_1,j_2}^{p_1,p_2}(h)$ is supported in ${\left[-{\left(N^*+1\right)}{\left(2^{j_1}+2^{j_2}\right)},{\left(N^*+1\right)}{\left(2^{j_1}+2^{j_2}\right)}\right]}$.
Let’s now turn to the moments of $\Lambda_{j_1,j_2}^{p_1,p_2}(h)$.
1. $p_1 \neq 0$ [**and**]{} $p_2 \neq 0$: \[sit1\]\
In this case, we know that $\psi_{j_1}^{p_1}$ and $\psi_{j_2}^{p_2}$ have respectively $R_{p_1}$ and $R_{p_2}$ vanishing moments as defined in Lemma \[lem\_moment\]. Then, $\Lambda_{j_1,j_2}^{p_1,p_2}(h)$ will have $R_{p_1}+R_{p_2}$ vanishing moments since, $$\begin{aligned}
\int h^m\Lambda_{j_1,j_2}^{p_1,p_2}(h)dh &= -\int\int(v-u)^m\psi_{j_1}^{p_1}(v)\psi_{j_2}^{p_2}(u)dudv \nonumber\\
&= -\sum_{n=0}^m {\left(-1\right)}^n {m\choose n} \int v^{m-n} \psi_{j_1}^{p_1}(v)dv \int u^n\psi_{j_2}^{p_2}(u)du = 0,\end{aligned}$$ for $0 \leq m<R_{p_1}+R_{p_2}$. Here, we used uniform convergence and continuity to invert the order of summation and integration. Note that the Fubini theorem allows us to invert the order of integrals.
2. $p_1 \neq 0$ [**or**]{} $p_2 \neq 0$: \[sit2\] Without loss of generality, assume that $p_1 \neq 0$ and $p_2=0$. The same reasoning as above can be adopted to conclude that for $0 \leq m<R_{p_1}$: $$\begin{aligned}
\label{Eqsit2}
\int h^m\Lambda_{j_1,j_2}^{p_1,0}(h)dh &= -\sum_{n=0}^m {\left(-1\right)}^n {m\choose n} \int v^{m-n} \psi_{j_1}^{p_1}(v)dv \int u^n\phi_{j_2}(u)du = 0.\end{aligned}$$
3. $p_1=p_2=0$: \[sit3\]\
In an orthonormal basis of wavelet packets, this situation is not possible unless $j_1 = j_2 = j$. Thus, $$\begin{aligned}
\int h^m\Lambda_{j,j}^{0,0}(h)dh &= 2^{-j}\int\int h^m\phi{\left(2^{-j}t\right)}\phi{\left(2^{-j}{\left(t-h\right)}\right)}dtdh \nonumber\\
&= 2^{j(m+1)}\int\int u^m\phi(v)\phi(v-u)dudv = 0,\end{aligned}$$ for $1 \leq m<2q$, where the latter result is proved in [@Bey92].
[[***Proof Theorem \[Cov\_Wave\]:*** ]{}]{} Here we are interested in the covariance between the WP coefficients $W_{j_1}^{p_1}(k_1)$ and $W_{j_2}^{p_2}(k_2)$. We have: $$\begin{aligned}
\textrm{Cov}\left[W_{j_1}^{p_1}(k_1),W_{j_2}^{p_2}(k_2)\right]
&=& \int\int\mathbb{E}[X(t)X(s)]\psi_{j_1}^{p_1}(t-2^{j_1}k_1)\psi_{j_2}^{p_2}(s-2^{j_2}k_2)dtds \nonumber\\
&=& \int\int\cos\left(\nu(t-s)\right)|t-s|^{2d-1}\psi_{j_1}^{p_1}(t-2^{j_1}k_1)\psi_{j_2}^{p_2}(s-2^{j_2}k_2)dtds\end{aligned}$$ After three changes of variables, $u=t-2^{j_1}k_1$ and $v=s-2^{j_2}k_2$, then $u=t'$ and $v=t'-h$ and finally $\alpha=2^{j_1}(k_1-2^{j_2-j_1}k_2)$, we obtain: $$\textrm{Cov}\left[W_{j_1}^{p_1}(k_1),W_{j_2}^{p_2}(k_2)\right]=\int\cos(\nu(h+\alpha))|h+\alpha|^{2d-1}\Lambda_{j_1,j_2}^{p_1,p_2}(h)dh.$$ From Lemma \[lem\_support\] we know that that the support of $\Lambda_{j_1,j_2}^{p_1,p_2}(h)$ is included in $[-(2^{j_1}+2^{j_2})(N^*+1),(2^{j_1}+2^{j_2})(N^*+1)]$. As $h$ is in the support of $\Lambda_{j_1j_2}^{p_1p_2}$ and by assumption $\alpha> (N^*+1)(2^{j_1}+2^{j_2})$, we have $h/\alpha<1$. Hence, using the binomial series expansion of $\left|1+\frac{h}{\alpha}\right|^{2d-1}$ and the fact that $\cos(\nu(\alpha+h)) \sim \cos(\nu\alpha)$ for large $\alpha$, it follows that: $$\label{eq_general} \textrm{Cov}\left[W_{j_1}^{p_1}(k_1),W_{j_2}^{p_2}(k_2)\right] \sim
|\alpha|^{2d-1}\cos(\nu\alpha)\left\{\int\Lambda_{j_1,j_2}^{p_1,p_2}(h)dh+\sum_{i=1}^{\infty}{2d-1\choose
i}\int\left(\frac{h}{\alpha}\right)^i\Lambda_{j_1,j_2}^{p_1,p_2}(h)dh\right\}.$$ We must then provide an upper bound on the integrals inside the braces. In the following we distinguish three different cases depending on the number of vanishing moments of $\Lambda_{j_1,j_2}^{p_1,p_2}$ according to Lemma \[mom\_acf\], that is:
1. If $p_1 \neq 0$ [**and**]{} $p_2 \neq 0$, then $\mathcal{M}^{j_1,j_2}_{p_1,p_2}(m) = 0$, for $0\leq
m<R_{p_1}+R_{p_2}$. We denote $q^*=R_{p_1}+R_{p_2}$. Then, using the fact that the $q^*$ first moments of $\Lambda_{j_1,j_2}^{p_1,p_2}$ are null, $$\textrm{Cov}\left[W_{j_1}^{p_1}(k_1),W_{j_2}^{p_2}(k_2)\right] \sim ~ C_1|\alpha|^{2d-1-q^*}+R_{q^*+1},$$ with $C_1$ a bounded constant, and: $$\begin{aligned}
{\left|R_{q^*+1}\right|} &=& {\left|\cos(\nu\alpha)\right|}|\alpha|^{2d-1}{\left|\sum_{i=q^*+1}^{\infty}{2d-1\choose
i}\int\left(\frac{h}{\alpha}\right)^{i}\Lambda_{j_1,j_2}^{p_1,p_2}(h)dh\right|} \nonumber\\
&\leq& |\alpha|^{2d-1}{2d-1\choose q^*}{\left|\sum_{i=q^*+1}^{\infty}\int\int\left(\frac{t-h}{\alpha}\right)^{i}\psi_{j_1}^{p_1}(t)\psi_{j_2}^{p_2}(h)dtdh\right|} \nonumber\\
&=& |\alpha|^{2d-1}{2d-1\choose q^*}\int\int\left|\psi_{j_1}^{p_1}(t)\psi_{j_2}^{p_2}(h)\right|dtdh\sum_{i=q^*+1}^{\infty}\beta^{i} \nonumber\\
&=& C_2|\alpha|^{2d-1}\sum_{i=1}^{\infty}\beta^{q^*+i}
\leq C_3|\alpha|^{2d-1-q^*-1}\end{aligned}$$ where $\beta=\sup_{t,h}\left|\frac{t-h}{\alpha}\right|$, $C_2$ and $C_3$ are finite constants. Finally, $$\textrm{Cov}\left[W_{j_1}^{p_1}(k_1),W_{j_2}^{p_2}(k_2)\right]=O\left(|2^{j_1}k_1-2^{j_2}k_2|^{2d-1-q^*}\right),$$ for $|2^{j_1}k_1-2^{j_2}k_2|>(N^*+1)(2^{j_1}+2^{j_2})$ and $q^*=R_{p_1}+R_{p_2}$.
2. If $p_1 \neq 0$ [**or**]{} $p_2 \neq 0$ then $\mathcal{M}^{j_1,j_2}_{p_1,p_2}(m) = 0,$ for $0\leq
m<R_{\max(p_1,p_2)}$. Following the same steps as above, we prove the second statement of Theorem \[Cov\_Wave\] with $q^*=R_{\max(p_1,p_2)}$.
3. $p_1=p_2=0$: $\mathcal{M}^{j_1,j_2}_{p_1,p_2}(m) = 0$ for $1\leq m<2q.$ In this particular case, we have necessarily $j_1=j_2=j$. We must then upper-bound the covariance. From (\[eq\_general\]), we have: $$\textrm{Cov}\left[W_{j}^{0}(k_1),W_{j}^{0}(k_2)\right] \sim ~ C_0|\alpha|^{2d-1}+C_1|\alpha|^{2d-1-2q}+R_{2q+1},$$ where $$C_0=\int\cos(\nu(h+\alpha))\Lambda_{j,j}^{0,0}(h)dh,\ \ \ \
C_1=\cos(\nu\alpha)\frac{(2d-1)!}{(2q)!(2d-1-2q)!}\int h^{2q}\Lambda_{j,j}^{0,0}(h)dh,$$ and $$\begin{aligned}
|R_{2q+1}| &\leq& |\cos(\nu\alpha)||\alpha|^{2d-1}|\sum_{i=2q+1}^{\infty}{2d-1\choose
i}\int\left(\frac{h}{\alpha}\right)^{i}\Lambda_{j,j}^{0,0}(h)dh| = O\left(|\alpha|^{2d-1-2q-1}\right).\end{aligned}$$ As previously, when $\alpha$ is large: $$\begin{aligned}
C_0 &\sim& \cos(\nu\alpha)\int\int\phi_j(t)\phi_j(t-h)dtdh = 2^j\cos(\nu\alpha)\left|\Phi(0)\right|^2=2^j\cos(\nu\alpha).\end{aligned}$$ Finally, using a similar argument as in the previous cases, we find that for $|k_1-k_2|>2(N^*+1)$: $$\begin{aligned}
\textrm{Cov}\left[W_{j}^{0}(k_1),W_{j}^{0}(k_2)\right] &=& O(|2^j(k_1-k_2)|^{2d-1}).\end{aligned}$$
![Computation time (in seconds) to build the best-ortho-basis. Solid line (our approach). The symbols $'+'$, $'\times'$ and $'.'$ correspond to the computation time using the method of [@Whi01] in the case of respectively $'db10'$ (Daubechies wavelet $q=10$), $'sym10'$ (Symmlet $q=10$) and $'coif5'$ (Coiflet $q=10$).[]{data-label="Tps_construct_basis"}](./temps_const_base.eps){width="80.00000%" height="6.9cm"}
![Best-orthos-basis for a Gegenbauer process, with $\nu=1/12$ (first column) and $\nu=0.375$ (second column). (a) Our basis $\nu=1/12$, (b) our basis $\nu=1/5$. Basis of [@Whi01] for $\nu=1/12$ with (c) $'db3'$ and (e) $'coif5'$ wavelets, and similarly for $\nu=0.375$ (d)-(f). Black rectangles correspond to the leaves of the binary tree, and then to the partition of the spectral axis.[]{data-label="basis_spec_freq"}](./basis_compare_1factor.eps){width="115.00000%"}
[p[4.5cm]{}p[4.1cm]{}p[4.1cm]{}p[4.1cm]{}]{} $d=0.4,\ \nu=1/12,\ \lambda=20.7084$ & $d=0.2,\ \nu=1/12,\ \lambda=0.7428$ & $d=0.3,\ \nu=0.016,\ \lambda=10.0526$ & $d_1=d_2=0.3,\ \nu_1=1/40, \nu_2=1/5,\ \lambda= 6.0472$\
[p[0.2cm]{}p[1.5cm]{}p[1.25cm]{}]{} $q$ & Whitcher basis & Our basis\
\
2 & 2728.6 & 1494.5\
4 & 1116.7 & 686.2\
6 & 750.4 & 441.8\
8 & 632.7 & 352.4\
10 & 421.7 & 308.2\
\
4 & 2211.0 & 677.0\
6 & 1371.0 & 444.7\
8 & 980.4 & 341.7\
10 & 693.1 & 297.3\
\
2 & 2697.3 & 1081.1\
4 & 1449.4 & 638.4\
6 & 841.1 & 412.7\
8 & 703.2 & 327.8\
10 & 581.2 & 287.7\
\
2 & - & 657.3\
4 & - & 267.5\
6 & - & 247.8\
&
[p[1.5cm]{}p[1.5cm]{}]{} Whitcher basis & Our basis\
\
105.1 & 52.3\
47.3 & 31.1\
31.9 & 23.3\
28.9 & 20.2\
21.9 & 18.4\
\
86.4 & 30.8\
54.2 & 23.4\
39.1 & 20.1\
28.2 & 18.2\
\
106.0 & 42.3\
58.9 & 29.6\
35.0 & 22.4\
29.0 & 19.5\
26.6 & 17.6\
\
- & 29.4\
- & 16.1\
- & 14.4\
&
[p[1.5cm]{}p[1.5cm]{}]{} Whitcher basis & Our basis\
\
230.6 & 141.4\
188.0 & 124.7\
141.8 & 116.7\
144.0 & 118.8\
140.3 & 115.0\
\
214.4 & 120.9\
210.4 & 116.2\
178.6 & 114.7\
138.8 & 113.5\
\
224.8 & 132.7\
215.2 & 121.0\
141.2 & 116.1\
140.1 & 114.9\
138.6 & 113.2\
\
- & 121.3\
- & 111.8\
- & 110.7\
&
[p[1.5cm]{}p[1.5cm]{}]{} Whitcher basis & Our basis\
\
958.5 & 639.1\
511.4 & 515.7\
444.1 & 422.5\
408.0 & 361.9\
400.5 & 315.3\
\
505.3 & 513.8\
445.0 & 423.5\
409.0 & 359.7\
401.2 & 313.7\
\
713.5 & 589.5\
467.1 & 500.7\
416.1 & 406.2\
363.3 & 342.7\
349.9 & 297.8\
\
- & 374.6\
- & 238.1\
- & 191.1\
![(a) Best-ortho-basis $\mathcal{B}_1$. (b) Best-ortho-basis $\mathcal{B}_2$. (c) Union of bases $\mathcal{B}_1$ and $\mathcal{B}_2$. (d) Best-ortho-basis for the two factor Gegenbauer process $(X_t)_t$.[]{data-label="basis_BoB"}](./basis_compare_kfactor.eps){width="\textwidth" height="9cm"}
[cc]{} $d=0.4,\ \nu=1/12,\ \lambda=20.7084$ & $d=0.2,\ \nu=1/12,\ \lambda=0.7428$\
[@p[0.2cm]{}p[1.25cm]{}@@p[1.25cm]{}@@p[1.25cm]{}@@p[1.25cm]{}]{}\
$q$ & $B^W$ & $B_{\text{pen}}^W$ & $B^{CF}$ & $B_{\text{pen}}^{CF}$\
\
2 & 2753.3 & 6977.8 & 2785.0 & 2992.1\
4 & 1517.6 & 3629.8 & 1578.2 & 1785.2\
6 & 1410.6 & 2777.4 & 1177.6 & 1384.7\
8 & 929.7 & 1840.8 & 996.4 & 1203.5\
10 & 1052.9 & 1694.9 & 784.6 & 991.6\
\
4 & 1700.4 & 3812.6 & 1590.9 & 1797.9\
6 & 1035.6 & 2402.4 & 1225.3 & 1432.4\
8 & 1180.8 & 2091.9 & 1079.6 & 1286.7\
10 & 718.9 & 1360.9 & 797.4 & 1004.5\
\
2 & 2194.3 & 5238.4 & 2336.7 & 2543.8\
4 & 1751.5 & 3118.3 & 1696.7 & 1903.8\
6 & 1213.8 & 1855.8 & 1154.0 & 1361.1\
8 & 1138.9 & 1656.7 & 1038.4 & 1245.5\
10 & 995.2 & 1409.3 & 986.3 & 1193.4\
\
2 & - & - & 1588.8 & 1795.9\
4 & - & - & 995.4 & 1202.4\
6 & - & - & 867.0 & 1074.0\
&
[p[1.25cm]{}@@p[1.25cm]{}@@p[1.25cm]{}@@p[1.25cm]{}]{}\
$B^W$ & $B_{\text{pen}}^W$ & $B^{CF}$ & $B_{\text{pen}}^{CF}$\
\
41.4 & 192.9 & 36.6 & 44.0\
15.6 & 91.3 & 13.0 & 20.4\
14.3 & 63.3 & 10.8 & 18.2\
10.9 & 43.5 & 6.9 & 14.3\
12.6 & 35.6 & 4.7 & 12.2\
\
25.5 & 101.3 & 16.2 & 23.7\
10.0 & 59.1 & 11.6 & 19.0\
8.8 & 41.5 & 11.6 & 19.0\
8.7 & 31.7 & 12.6 & 20.0\
\
27.8 & 137.0 & 34.8 & 42.2\
18.3 & 67.3 & 18.9 & 26.3\
10.2 & 33.2 & 10.8 & 18.3\
5.1 & 23.7 & 9.2 & 16.6\
11.8 & 26.7 & 6.4 & 13.8\
\
- & - & 18.0 & 25.4\
- & - & 9.0 & 16.4\
- & - & 5.6 & 13.0\
\
\
$d=0.3,\ \nu=0.016,\ \lambda=10.0526$ & $d_1=d_2=0.3,\ \nu_1=1/40, \nu_2=1/5,\ \lambda= 6.0472$\
[p[1.25cm]{}@@p[1.25cm]{}@@p[1.25cm]{}@@p[1.25cm]{}]{}\
$B^W$ & $B_{\text{pen}}^W$ & $B^{CF}$ & $B_{\text{pen}}^{CF}$\
\
499.6 & 751.0 & 280.0 & 380.6\
258.9 & 490.1 & 231.7 & 332.3\
299.5 & 440.3 & 246.9 & 347.5\
243.9 & 384.7 & 185.4 & 285.9\
310.6 & 451.3 & 345.3 & 445.8\
\
287.1 & 518.3 & 284.0 & 384.5\
232.6 & 373.3 & 312.2 & 412.7\
362.3 & 503.0 & 159.6 & 260.2\
218.4 & 359.2 & 225.2 & 325.7\
\
280.4 & 511.7 & 293.6 & 394.1\
208.0 & 348.7 & 166.0 & 266.5\
377.5 & 518.3 & 268.5 & 369.0\
273.5 & 414.2 & 198.6 & 299.1\
257.5 & 398.3 & 292.8 & 393.4\
\
- & - & 308.7 & 409.2\
- & - & 321.5 & 422.0\
- & - & 324.4 & 424.9\
&
[p[1.25cm]{}@@p[1.25cm]{}@@p[1.25cm]{}@@p[1.25cm]{}]{}\
$B^W$ & $B_{\text{pen}}^W$ & $B^{CF}$ & $B_{\text{pen}}^{CF}$\
\
382.1 & 1639.9 & 386.3 & 489.1\
301.9 & 906.6 & 291.7 & 394.5\
199.6 & 707.6 & 161.1 & 263.9\
157.1 & 628.7 & 217.2 & 320.0\
205.4 & 677.1 & 215.5 & 318.3\
\
274.9 & 879.6 & 282.7 & 385.5\
247.1 & 755.0 & 264.7 & 367.5\
202.5 & 674.2 & 174.7 & 277.5\
172.1 & 643.7 & 155.5 & 258.3\
\
381.7 & 1288.8 & 382.0 & 484.8\
284.8 & 792.7 & 305.7 & 408.5\
225.9 & 697.5 & 217.8 & 320.6\
127.6 & 526.7 & 218.6 & 321.4\
121.5 & 514.6 & 207.6 & 310.4\
\
- & - & 316.6 & 419.4\
- & - & 175.8 & 278.6\
- & - & 142.0 & 244.8\
[^1]: Strictly speaking, this is no longer a CW algorithm.
[^2]: The experiments were run under the R environment on a 2.4GHz PC with 512MB RAM.
[^3]: Note that for our best-basis algorithm, and for a given process, the score $B_{\text{pen}}$ is simply $B$ plus a constant for all wavelets, as the penalty part in $B_{\text{pen}}$ only depends on the singularity frequencies.
Revised on October 3, 2003\
Tom G. Mackay[^1]\
[*School of Mathematics, University of Edinburgh\
James Clerk Maxwell Building, The King’s Buildings\
Edinburgh EH9 3JZ, United Kingdom*]{}
Akhlesh Lakhtakia[^2]\
[*CATMAS — Computational and Theoretical Materials Sciences Group\
Department of Engineering Science and Mechanics\
Pennsylvania State University, University Park, PA 16802–6812, USA*]{}
[**Abstract.**]{} The propagation of plane waves in a Faraday chiral medium is investigated. Conditions for the phase velocity to be directed opposite to the direction of power flow are derived for propagation in an arbitrary direction; simplified conditions which apply to propagation parallel to the distinguished axis are also established. These negative phase–velocity conditions are explored numerically using a representative Faraday chiral medium, arising from the homogenization of an isotropic chiral medium and a magnetically biased ferrite. It is demonstrated that the phase velocity may be directed opposite to power flow, provided that the gyrotropic parameter of the ferrite component medium is sufficiently large compared with the corresponding nongyrotropic permeability parameters.
PACS number(s): 41.20.Jb, 42.25.Bs, 83.80.Ab
Introduction
============
Homogeneous mediums which support the propagation of waves with phase velocity directed opposite to the direction of power flow have attracted much attention lately Ziolkowski, LMW02, Veselago03. The archetypal example of such a medium is the isotropic dielectric–magnetic medium with simultaneously negative real permittivity and negative real permeability scalars, as first described by Veselago in the late 1960s Veselago68. A range of exotic and potentially useful electromagnetic phenomena, such as negative refraction, inverse Doppler shift, and inverse Cerenkov radiation, were predicted for this type of medium Veselago03, Veselago68. Recent experimental studies involving the microwave illumination of certain composite metamaterials Smith00, Shelby01 — which followed on from earlier works of Pendry *et al.* Pendry98, Pendry99 — are supportive of Veselago’s predictions and have prompted an intensification of interest in this area. In particular, a general condition — applicable to dissipative isotropic dielectric–magnetic mediums — has been derived for the phase velocity to be directed opposite to power flow MartinLW02; and this condition shows that the real parts of both the permittivity and the permeability scalars do not have to be negative.
A consensus has yet to be reached on terminology. For the present purposes, a medium supporting wave propagation with phase velocity directed opposite to power flow is most aptly referred to as a *negative phase–velocity* medium. However, the reader is alerted that alternative terms, such as left–handed material Veselago03, backward medium Lindell01, double–negative medium Ziolkowski, and negative–index medium Valanju, are also in circulation. A discussion of this issue is available elsewhere LMW03.
The scope for the phase velocity to be directed opposite to power flow may be greatly extended by considering non–isotropic mediums, as has been indicated by considerations of uniaxial dielectric–magnetic mediums Lindell01, Hu, Kark. The focus of the present communication is the propagation of negative phase–velocity plane waves in *Faraday chiral mediums* (FCMs) Engheta92, WL98. These mediums combine natural optical activity — as exhibited by isotropic chiral mediums Beltrami — with Faraday rotation — as exhibited by gyrotropic mediums Lax, Chen, Coll. A FCM may be theoretically conceptualized as a homogenized composite medium (HCM) arising from the blending together of an isotropic chiral medium with either a magnetically biased ferrite WLM98 or a magnetically biased plasma WM00. The HCM component mediums are envisioned as random particulate distributions. The homogenization process is justified provided that the particulate length scales in the mixture of components are small compared with electromagnetic wavelengths. A vast literature on the estimation of constitutive parameters of HCMs exists; see Refs. L96, M03, for example. The constitutive relations of FCMs have been rigorously established for some time WL98, although inappropriate use still occurs Fu.
In the following sections, wavenumbers and corresponding electric field phasors are delineated from eigenvalue/vector analysis for planewave propagation in an arbitrary direction. Simplified expressions for these quantities are derived for propagation parallel to the biasing (quasi)–magnetostatic field [@Chen p. 71]. A general condition for the phase velocity to be directed opposite to power flow is established. The theoretical analysis is illustrated by means of a representative numerical example: the constitutive parameters of FCMs arising from a specific homogenization scenario are estimated and then used to explore the wave propagation characteristics.
As regards notation, vectors are underlined whereas dyadics are double underlined. All electromagnetic field phasors and constitutive parameters depend implicitly on the circular frequency $\omega$ of the electromagnetic field. Unit vectors are denoted by the superscript $\, \hat{} \,$ symbol, while $\=I = \hat{\#x}\,\hat{\#x} +
\hat{\#y}\,\hat{\#y} + \hat{\#z}\,\hat{\#z}$ is the identity dyadic. The complex conjugate of a quantity $q$ is written as $q^*$; the real part of $q$ is written as $\mbox{Re} \, \{ q \}$. The free–space (i.e., vacuum) wavenumber is denoted by $\ko = \omega \sqrt{\epso \muo}$ where $\epso$ and $\muo$ are the permittivity and permeability of free space, respectively; and $\etao = \sqrt{\muo / \epso}$ represents the intrinsic impedance of free space.
Analysis
========
Preliminaries
-------------
The propagation of plane waves with field phasors $$\left.\begin{array}{l}
\#E(\#r) = \#E_0\, \exp (i \ko \tilde{k} \, \hat{\#u} \. \#r )\\[5pt]
\#H(\#r) = \#H_0\, \exp (i \ko \tilde{k} \, \hat{\#u} \. \#r )
\end{array}\right\}
\l{pw}$$ in a FCM is considered. Such a medium is characterized by the frequency–domain constitutive relations WL98 $$\left.
\begin{array}{l}
\#D (\#r) = \=\eps\.\#E (\#r) + \=\xi\.\#H (\#r) \,\\[5pt]
\#B (\#r) = - \=\xi\.\#E (\#r) + \=\mu\.\#H (\#r) \, \l{FCM_cr}
\end{array}
\right\},$$ with constitutive dyadics $$\left.
\begin{array}{l}
\=\eps =
\epso \les \, \eps \, \=I
- i \eps_g \, \hat{\#z} \times \=I +
\le \, \eps_z - \eps \, \ri \, \hat{\#z}\, \hat{\#z} \, \ris\\
\vspace{-2mm} \\
\=\xi =
i \sqrt{\epso \muo} \, \les \, \xi \, \=I
- i \xi_g \, \hat{\#z} \times \=I +
\le \, \xi_z - \xi \, \ri \,
\hat{\#z} \, \hat{\#z}
\ris\\
\vspace{-2mm} \\
\=\mu =
\muo \les \, \mu \, \=I
- i \mu_g \, \hat{\#z} \times \=I +
\le \, \mu_z - \mu \, \ri \, \hat{\#z}\, \hat{\#z} \, \ris
\end{array}
\right\}. \l{FCM}$$ Thus, the distinguished axis of the FCM is chosen to be the $z$ axis. For FCMs which arise as HCMs, the gyrotropic parameters $\eps_g$, $\xi_g$ and $\mu_g$ in develop due to the gyrotropic properties of the ferrite or plasma component mediums. Parenthetically, it is remarked that more general FCMs can develop through the homogenization of component mediums based on nonspherical particulate geometries WM00, MLW01a.
In general, the relative wavenumber $ \tilde{k} $ in is complex valued; i.e., $$\tilde{k} = \tilde{k}_R + i \tilde{k}_I, \qquad (\tilde{k}_R,
\tilde{k}_I \in \mathbb{R}).$$ It may be calculated from the planewave dispersion relation $$\begin{aligned}
\mbox{det} \, \les \, \=L(i \ko \tilde{k} \, \hat{\#u}) \, \ris = 0
\,, \l{disp}\end{aligned}$$ which arises from the vector Helmholtz equation $$\begin{aligned}
\=L(\nabla) \. \#E (\#r) = \#0\,, \l{Helm}\end{aligned}$$ wherein $$\begin{aligned}
\=L(\nabla) = \le \nabla \times \=I + i \omega \=\xi \ri
\. \=\mu^{-1} \. \le \nabla \times \=I + i \omega \=\xi \ri
- \omega^2 \=\eps \,. \l{L_nabla}\end{aligned}$$
Of particular interest is the orientation of the phase velocity, as specified by the direction of $ \tilde{k}_R \, \hat{\#u}$, relative to the direction of power flow given by the time–averaged Poynting vector $
\#P (\#r) =
\frac{1}{2} \mbox{Re} \, \les \, \#E (\#r) \times \#H^* (\#r) \,\ris
$. The combination of the constitutive relations with the source–free Maxwell curl postulates $$\left.
\begin{array}{l}
\nabla \times \#E (\#r) = i \omega \#B (\#r) \\[5pt]
\nabla \times \#H (\#r) = -i \omega \#D (\#r)
\end{array}
\right\}$$ yields $$\#P (\#r) =
\frac{1}{2} \, \exp \le - 2 \ko \tilde{k}_I \, \hat{\#u}\.\#r \ri \,
\mbox{Re} \, \lec \#E_0 \times
\les
(\=\mu^{-1})^* \.\le \sqrt{\epso \muo} \tilde{k}^* \, \hat{\#u} \times \#E^*_0 +
\=\xi^* \. \#E^*_0 \ri \ris \ric \l{P_pw}$$ for plane waves .
In the remainder of this section, the quantity $ \tilde{k}_R \, \hat{\#u} \. \#P(\#r) $ is derived for planewave propagation in an arbitrary direction; without loss of generality, $\hat{\#u}$ is taken to lie in the $xz$ plane (i.e., $\hat{\#u} = \hat{\#x}\, \sin \theta + \hat{\#z}\,\cos
\theta $). Further manipulations reveal the simple form $ \tilde{k}_R \, \hat{\#u} \. \#P (\#r) $ adopts for propagation along the FCM distinguished axis (i.e., $\hat{\#u} = \hat{\#z}$).
Propagation in the $xz$ plane {#2.xz}
------------------------------
For $ \hat{\#u} = \hat{\#x}\, \sin \theta + \hat{\#z}\,\cos
\theta $, the dispersion relation may be represented by the quartic polynomial $$a_4 \tilde{k}^4 + a_3 \tilde{k}^3 + a_2 \tilde{k}^2 + a_1 \tilde{k} + a_0 =0\,,
\l{quartic}$$ with coefficients $$\begin{aligned}
a_4 &=&
\le \eps \sin^2 \theta + \eps_z \cos^2 \theta \ri \le \mu \sin^2
\theta + \mu_z \cos^2 \theta \ri
- \le \xi \sin^2 \theta + \xi_z \cos^2 \theta \ri^2, \\
a_3 &=&
2 \cos \theta \big\{
\sin^2 \theta \les
\mu_g \le \eps \xi_z - \eps_z \xi \ri + \eps_g \le \mu \xi_z - \mu_z
\xi \ri + \xi_g \le \mu \eps_z + \eps \mu_z - 2 \xi \xi_g \ri \ris
\nonumber
\\
&&
+ 2 \cos^2 \theta \xi_g \le \eps_z \mu_z - \xi^2_z \ri \big\}, \\
a_2 &=&
\sin^2 \theta \Big\{
\mu \mu_z \le \eps^2_g - \eps^2 \ri + \le \xi^2 + \xi^2_g \ri
\le \mu \eps_z + \eps \mu_z \ri -
2 \xi \les \xi_z \le \xi^2_g - \xi^2 \ri + \mu_g \eps_z \xi_g \ris
\nonumber \\ &&
- 2 \eps_g \les \xi_z \le \mu_g \xi - \mu \xi_g \ri + \mu_z \xi \xi_g
\ris
- \eps \les \eps_z \le \mu^2 - \mu^2_g \ri + 2 \xi_z \le \mu \xi - \mu_g
\xi_g \ri \ris \Big\} \nonumber \\ &&
+ 2 \cos^2 \theta \le \eps_z \mu_z - \xi^2_z \ri
\le 3 \xi^2_g - \xi^2 - \eps_g \mu_g - \eps \mu \ri , \\
a_1 &=& 4 \cos \theta \le \eps_z \mu_z - \xi^2_z \ri \Big[ \xi
\le \eps_g \mu + \eps \mu_g \ri
+ \xi_g \le
\xi^2_g - \xi^2
- \eps \mu - \eps_g \mu_g \ri \Big],\\
a_0 & = &
\le \eps_z \mu_z - \xi^2_z \ri \Big[ \le \eps^2 - \eps^2_g \ri \le
\mu^2 - \mu^2_g \ri + \le \xi^2_g - \xi^2 \ri^2
- 2 \le \xi^2_g + \xi^2 \ri \le \eps \mu + \eps_g \mu_g \ri
\nonumber \\ &&
+ 4 \xi \xi_g \le \eps \mu_g + \mu \eps_g \ri \Big].\end{aligned}$$ Hence, four relative wavenumbers $\tilde{k} = \kappa_i$, $\kappa_{ii}$, $\kappa_{iii}$ and $\kappa_{iv}$ may be extracted — either algebraically or numerically AS — as the roots of .
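In practice the numerical route is straightforward. The following sketch is not part of the analysis: it uses arbitrary placeholder values for the constitutive scalars and for $\theta$, assembles the coefficients $a_0,\dots,a_4$ above, and calls a standard polynomial solver (numpy.roots).

```python
import numpy as np

# Placeholder constitutive values and propagation angle (illustration only):
eps, eps_g, eps_z = 3.0, 0.5, 2.8
xi,  xi_g,  xi_z  = 1.2, 0.3, 1.0
mu,  mu_g,  mu_z  = 2.5, 0.8, 1.5
theta = np.pi / 6
s, c = np.sin(theta), np.cos(theta)

a4 = (eps*s**2 + eps_z*c**2)*(mu*s**2 + mu_z*c**2) - (xi*s**2 + xi_z*c**2)**2
a3 = 2*c*(s**2*(mu_g*(eps*xi_z - eps_z*xi) + eps_g*(mu*xi_z - mu_z*xi)
                + xi_g*(mu*eps_z + eps*mu_z - 2*xi*xi_g))
          + 2*c**2*xi_g*(eps_z*mu_z - xi_z**2))
a2 = (s**2*(mu*mu_z*(eps_g**2 - eps**2) + (xi**2 + xi_g**2)*(mu*eps_z + eps*mu_z)
            - 2*xi*(xi_z*(xi_g**2 - xi**2) + mu_g*eps_z*xi_g)
            - 2*eps_g*(xi_z*(mu_g*xi - mu*xi_g) + mu_z*xi*xi_g)
            - eps*(eps_z*(mu**2 - mu_g**2) + 2*xi_z*(mu*xi - mu_g*xi_g)))
      + 2*c**2*(eps_z*mu_z - xi_z**2)*(3*xi_g**2 - xi**2 - eps_g*mu_g - eps*mu))
a1 = 4*c*(eps_z*mu_z - xi_z**2)*(xi*(eps_g*mu + eps*mu_g)
                                 + xi_g*(xi_g**2 - xi**2 - eps*mu - eps_g*mu_g))
a0 = (eps_z*mu_z - xi_z**2)*((eps**2 - eps_g**2)*(mu**2 - mu_g**2)
                             + (xi_g**2 - xi**2)**2
                             - 2*(xi_g**2 + xi**2)*(eps*mu + eps_g*mu_g)
                             + 4*xi*xi_g*(eps*mu_g + mu*eps_g))

# The four relative wavenumbers kappa_i, ..., kappa_iv:
print(np.roots([a4, a3, a2, a1, a0]))
```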
Upon substituting $\hat{\#u} = \hat{\#x}\, \sin \theta + \hat{\#z}\,\cos
\theta $ into and combining with , the component of $\#P (\#r)$ aligned with $\hat{\#u}$ emerges straightforwardly as $$\begin{aligned}
\hat{\#u} \. \#P (\#r) &=&
\frac{1}{2 \etao} \exp \le - 2 \ko \tilde{k}_I \, \hat{\#u}\.\#r \ri
\,
\mbox{Re} \, \Bigg\{
\, \frac{1}{\mu^*_z} \le \tilde{k}^* \sin \theta | E_{0y}|^2 -
i \xi^*_z E_{0y} E^*_{0z} \ri \, \sin \theta
\nonumber \\ &&
+ \frac{1}{(\mu^*)^2 - (\mu^*_g)^2} \, \Bigg[ \tilde{k}^* \Bigg(
\les
\mu^* \le |E_{0x}|^2 + |E_{0y}|^2 \ri
+
i \mu^*_g \le E_{0x} E^*_{0y} - E_{0y} E^*_{0x} \ri \ris \,
\cos^2 \theta \nonumber \\ && +
\mu^* | E_{0z} |^2 \sin^2 \theta
- \les \mu^* \le E_{0z} E^*_{0x} + E_{0x}
E^*_{0z} \ri + i \mu^*_g \le E_{0z}E^*_{0y} - E_{0y} E^*_{0z} \ri \ris
\, \sin \theta \cos \theta \,
\Bigg)
\nonumber \\ &&
+ \le \mu^* \xi^*_g - \mu^*_g \xi^* \ri \les \le
|E_{0x}|^2 + |E_{0y}|^2 \ri \, \cos \theta - E_{0z} E^*_{0x} \sin \theta \ris
\nonumber \\ &&
- i \le \mu^* \xi^* - \mu^*_g \xi^*_g \ri \les
\le E_{0x} E^*_{0y} - E^*_{0x} E_{0y}
\ri \, \cos \theta - E_{0z} E^*_{0y} \sin \theta \ris \Bigg] \Bigg\}, \l{kP_z}\end{aligned}$$ wherein $(E_{0x}, E_{0y}, E_{0z}) = \#E_0$.
Let the quantity $$w =
2 \etao
\exp \le 2 \ko \tilde{k}_I \, \hat{\#u}\.\#r\, \ri
|E_{0y}|^{-2} \, \tilde{k}_R \, \hat{\#u} \. \#P (\#r)\,
\l{wz}$$ be introduced such that the fulfilment of the negative phase–velocity condition $$\tilde{k}_R \, \hat{\#u} \. \#P (\#r) < 0$$ is signaled by $w < 0$.
Substitution of in yields the expression $$\begin{aligned}
w &=& \tilde{k}_R \,
\mbox{Re} \, \Bigg\{
\,\frac{1}{\mu^*_z}
\le \tilde{k}^* \sin \theta -
i \xi^*_z \beta^* \ri \,\sin \theta
\nonumber \\ &&
+ \frac{1}{(\mu^*)^2 - (\mu^*_g)^2} \, \Bigg[ \tilde{k}^* \Bigg(
\les
\mu^* \le |\alpha|^2 +1 \ri
+
i \mu^*_g \le \alpha - \alpha^* \ri \ris \, \cos^2 \theta \nonumber \\ && +
\mu^* | \beta |^2 \sin^2 \theta
- \les \mu^* \le \alpha^* \beta + \alpha \beta^*
\ri
+ i \mu^*_g \le \beta - \beta^* \ri \ris \, \sin \theta \cos \theta
\Bigg)
\nonumber \\ &&
+ \le \mu^* \xi^*_g - \mu^*_g \xi^* \ri \les \le
|\alpha|^2 + 1 \ri \, \cos \theta - \alpha^* \beta \, \sin \theta \ris
\nonumber \\ &&
- i \le \mu^* \xi^* - \mu^*_g \xi^*_g \ri \les
\le \alpha - \alpha^*
\ri\, \cos \theta - \beta \, \sin \theta \ris \Bigg] \Bigg\}
\,. \l{w_xz}\end{aligned}$$ The ratios of electric field components $$\left.
\begin{array}{l}
\alpha = E_{0x} / E_{0y} \\[5pt]
\beta = E_{0z} / E_{0y}
\end{array}
\right\}$$ in are derived as follows: As a function of $\theta$, the dyadic operator $\=L$ of has the form $$\begin{aligned}
\=L &=&
i \ko \Big\{
\les \, \=L \, \ris_{11}
\hat{\#x}\, \hat{\#x} +
\les \, \=L \, \ris_{22}
\hat{\#y}\, \hat{\#y}
+ \les \, \=L \, \ris_{33}
\hat{\#z}\, \hat{\#z}
+\les \, \=L \, \ris_{12}
\le \, \hat{\#x}\, \hat{\#y}
- \hat{\#y}\, \hat{\#x} \, \ri \nonumber \\ &&
+\les \, \=L \, \ris_{13}
\le \, \hat{\#x}\, \hat{\#z}
+ \hat{\#z}\, \hat{\#x} \, \ri
+\les \, \=L \, \ris_{23}
\le \, \hat{\#y}\, \hat{\#z}
- \hat{\#z}\, \hat{\#y} \, \ri \Big\},\end{aligned}$$ with components $$\begin{aligned}
\les \, \=L \, \ris_{11} \l{L_11}
&=&
\eps + \frac{2 \mu_g \xi \Gamma - \mu \le \xi^2 + \Gamma^2 \ri}{\mu^2
- \mu^2_g},
\\
\les \, \=L \, \ris_{22}
&=&
\eps - \frac{\tilde{k}^2 \sin^2 \theta}{\mu_z} + \frac{ 2 \mu_g \xi \Gamma - \mu \le \xi^2 + \Gamma^2
\ri }{\mu^2
- \mu^2_g},\\
\les \, \=L \, \ris_{33}
&=& \eps_z - \frac{\xi^2_z}{\mu_z} - \frac{\mu \tilde{k}^2 \sin^2 \theta}{\mu^2
- \mu^2_g},\\
\les \, \=L \, \ris_{12} \l{L_12}
&=& i \le
\eps_g + \frac{\mu_g \le \xi^2 + \Gamma^2 \ri - 2 \mu \xi \Gamma}{\mu^2
- \mu^2_g} \ri,\\
\les \, \=L \, \ris_{13}
&=& \frac{ \mu \Gamma - \mu_g \xi }{\mu^2
- \mu^2_g} \, \tilde{k} \, \sin \theta \, ,\\
\les \, \=L \, \ris_{23}
&=& i \le
\frac{\mu_g \Gamma - \mu \xi }{\mu^2
- \mu^2_g}- \frac{\xi_z}{\mu_z} \ri \, \tilde{k} \, \sin \theta \,,\end{aligned}$$ where $\Gamma = \xi_g + \tilde{k} \, \cos \theta $. It follows from the vector Helmholtz equation that $$\left.
\begin{array}{l}
\alpha = \displaystyle{ \frac{ \les \, \=L \, \ris_{12} \les \, \=L \,
\ris_{33} + \les \, \=L \, \ris_{13}
\les \, \=L \, \ris_{23} }
{ \les \, \=L \, \ris_{13} \les \, \=L \, \ris_{13}
- \les \, \=L \, \ris_{11} \les \, \=L \, \ris_{33}}} \\
\vspace{-3mm}
\\
\beta =
\displaystyle{
\frac{ \les \, \=L \, \ris_{12} \les \, \=L \, \ris_{23} -
\les \, \=L \, \ris_{13} \les \, \=L \, \ris_{22} }
{ \les \, \=L \, \ris_{13} \les \, \=L \, \ris_{23} +
\les \, \=L \, \ris_{12} \les \, \=L \, \ris_{33} }}
\end{array}
\right\}. \l{alp_bet}$$
Propagation along the $z$ axis {#2.z}
-------------------------------
The results of the preceding analysis simplify considerably for planewave propagation along the $z$ axis (i.e., $ \theta = 0$). The quartic dispersion relation yields the four relative wavenumbers $$\left.
\begin{array}{l}
\kappa_{i} = \sqrt{ \eps +
\eps_g } \sqrt{ \mu + \mu_g} - \xi - \xi_g \\
\kappa_{ii} = - \sqrt{ \eps +
\eps_g } \sqrt{ \mu + \mu_g} - \xi - \xi_g \\
\kappa_{iii} = \sqrt{ \eps -
\eps_g } \sqrt{ \mu - \mu_g}
+ \xi - \xi_g
\, \\
\kappa_{iv} = - \sqrt{ \eps -
\eps_g } \sqrt{ \mu - \mu_g}
+ \xi - \xi_g
\end{array}
\right\}\,; \l{roots_kz}$$ and reduces to $$\begin{aligned}
w = \tilde{k}_R \, \mbox{Re} \lec
\frac{ \le
| \alpha |^2 + 1 \ri \le \,
\tilde{k}^* \mu^*
- \mu^*_g \xi^* + \mu^* \xi^*_g \ri
+ i
\le \alpha - \alpha^* \ri
\le \,
\tilde{k}^* \mu^*_g -
\mu^* \xi^* + \mu^*_g \xi^*_g \ri}
{(\mu^*)^2 - (\mu^*_g)^2} \, \ric \,. \l{cond_z}\end{aligned}$$ Since the dyadic operator components $\les\,\=L \,\ris_{13}$ and $\les\,\=L \,\ris_{23}$ are null–valued for $\theta = 0$, the electric field ratios are given as $$\left.
\begin{array}{l}
\alpha = - \les \, \=L \, \ris_{12} / \les \,
\=L \, \ris_{11}\\
\beta = 0
\end{array}
\right\}.$$ Note that a further consequence of $\les\,\=L \,\ris_{13} =
\les\,\=L \,\ris_{23} = 0$ is that the time–averaged Poynting vector is parallel to the $z$ axis.
By substituting into and , the ratio $\alpha$ emerges as $$\alpha =
\left\{
\begin{array}{lcccl}
i && \mbox{for} && \tilde{k} = \kappa_{i}, \kappa_{ii}, \\
-i && \mbox{for} && \tilde{k} = \kappa_{iii}, \kappa_{iv}.
\end{array}
\right. \l{alpha_val}$$ Hence, negative–phase velocity propagation along the $z$ axis occurs provided $w < 0$ where $$\begin{aligned}
&& w = w_i = 2\, \mbox{Re}\lec
\sqrt{\eps + \eps_g} \sqrt{\mu + \mu_g} - \xi -
\xi_g \ric \, \mbox{Re} \lec \frac{ \sqrt{\eps^* + \eps^*_g}}{ \sqrt{\mu^* + \mu^*_g}} \ric
\qquad \mbox{for} \qquad \tilde{k} = \kappa_{i}, \nonumber \\ && \\
&& w = w_{ii} = 2\, \mbox{Re}\lec
\sqrt{\eps + \eps_g} \sqrt{\mu + \mu_g} + \xi +
\xi_g \ric \, \mbox{Re} \lec \frac{ \sqrt{\eps^* + \eps^*_g}}{
\sqrt{\mu^* + \mu^*_g}} \ric
\qquad \mbox{for} \qquad \tilde{k} = \kappa_{ii}, \nonumber \\ && \\
&& w = w_{iii} = 2\,
\mbox{Re}\lec
\sqrt{\eps - \eps_g} \sqrt{\mu - \mu_g} + \xi -
\xi_g \ric \, \mbox{Re} \lec \frac{ \sqrt{\eps^* - \eps^*_g}}{ \sqrt{\mu^* - \mu^*_g}} \ric
\qquad \mbox{for} \qquad \tilde{k} = \kappa_{iii}, \nonumber \\ &&
\\
&& w = w_{iv} = 2\, \mbox{Re}\lec
\sqrt{\eps - \eps_g} \sqrt{\mu - \mu_g} - \xi +
\xi_g \ric \, \mbox{Re} \lec \frac{ \sqrt{\eps^* - \eps^*_g}}{
\sqrt{\mu^* - \mu^*_g}} \ric
\qquad \mbox{for} \qquad \tilde{k} = \kappa_{iv}. \nonumber \\\end{aligned}$$
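These four quantities are immediate to evaluate for any given set of constitutive scalars. A minimal sketch follows; the values used are arbitrary placeholders (with complex entries standing in for a small amount of dissipation) rather than the homogenized estimates of Section 3.

```python
import numpy as np

# Placeholder constitutive values (illustration only); imaginary parts model loss.
eps, eps_g = 3.0 + 0.05j, 0.5 + 0.01j
mu,  mu_g  = 2.5 + 0.05j, 2.6 + 0.02j
xi,  xi_g  = 1.2 + 0.00j, 0.3 + 0.00j

sp = np.sqrt(eps + eps_g) * np.sqrt(mu + mu_g)            # "+" branch
sm = np.sqrt(eps - eps_g) * np.sqrt(mu - mu_g)            # "-" branch
rp = np.sqrt(np.conj(eps + eps_g)) / np.sqrt(np.conj(mu + mu_g))
rm = np.sqrt(np.conj(eps - eps_g)) / np.sqrt(np.conj(mu - mu_g))

w_i   = 2 * np.real(sp - xi - xi_g) * np.real(rp)
w_ii  = 2 * np.real(sp + xi + xi_g) * np.real(rp)
w_iii = 2 * np.real(sm + xi - xi_g) * np.real(rm)
w_iv  = 2 * np.real(sm - xi + xi_g) * np.real(rm)

# A negative entry signals phase velocity directed opposite to the power flow:
print(w_i, w_ii, w_iii, w_iv)
```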
Numerical results
=================
In order to further examine the negative phase–velocity conditions derived in Sections \[2.xz\] and \[2.z\], let us consider a Faraday chiral medium (FCM) produced by mixing (a) an isotropic chiral medium described by the constitutive relations Beltrami $$\left.
\begin{array}{l}
\#D = \epso \eps^a \, \#E + i \sqrt{\epso \muo}\, \xi^a \, \#H \,\\[5pt]
\#B = - i \sqrt{\epso \muo} \, \xi^a \, \#E + \muo \mu^a \,\#H \,
\l{chiral}
\end{array}
\right\}$$ and (b) a magnetically biased ferrite described by the constitutive relations [@Chen Ch. 7] $$\left.
\begin{array}{l}
\#D = \epso \eps^b \, \#E \\[5pt]
\#B =
\muo \les \, \mu^b \, \=I
- i \mu^b_g \, \hat{\#z} \times \=I +
\le \, \mu^b_z - \mu^b \, \ri \, \hat{\#z}\, \hat{\#z} \, \ris
\.\#H \,
\l{mag}
\end{array}
\right\}.$$ Both component mediums are envisioned as random distributions of electrically small, spherical particles. The resulting homogenized composite medium (HCM) is a FCM characterized by the constitutive dyadics $$\left.
\begin{array}{l}
\=\eps^{HCM} =
\epso \les \, \eps^{HCM} \, \=I
- i \eps^{HCM}_g \, \hat{\#z} \times \=I +
\le \, \eps^{HCM}_z - \eps^{HCM} \, \ri \, \hat{\#z}\, \hat{\#z} \, \ris\\
\vspace{-2mm} \\
\=\xi^{HCM} =
i \sqrt{\epso \muo} \, \les \, \xi^{HCM} \, \=I
- i \xi^{HCM}_g \, \hat{\#z} \times \=I +
\le \, \xi^{HCM}_z - \xi^{HCM} \, \ri \,
\hat{\#z} \, \hat{\#z}
\ris\\
\vspace{-2mm} \\
\=\mu^{HCM} =
\muo \les \, \mu^{HCM} \, \=I
- i \mu^{HCM}_g \, \hat{\#z} \times \=I +
\le \, \mu^{HCM}_z - \mu^{HCM} \, \ri \, \hat{\#z}\, \hat{\#z} \, \ris
\end{array}
\right\}. \l{HCM}$$ Incidentally, a FCM with constitutive dyadics of the form may also be developed via the homogenization of an isotropic chiral medium and a magnetically biased plasma WM00.
The constitutive dyadics $ \=\eps^{HCM}$, $ \=\xi^{HCM}$ and $ \=\mu^{HCM}$ are estimated using the Bruggeman homogenization formalism for a representative example. Comprehensive details of the Bruggeman formalism Ward, M03 and its implementation in the context of FCMs WLM98, WM00, MLWM01 are available elsewhere. Initially, we restrict our attention to nondissipative FCMs; the influence of dissipation is considered later in this section.
Nondissipative FCMs
-------------------
The parameter values selected for nondissipative component mediums are as follows: $$\begin{aligned}
&& \eps^a = 3.2, \:\: \xi^a = 2.4, \:\: \mu^a = 2; \:\: \eps^b = 2.2, \:\:
\mu^b = 3.5, \:\: \mu^b_z = 1, \:\: \mu^b_g \in [0,4].\end{aligned}$$ The permeability parameters for component medium $b$ may be viewed in terms of the semi–classical ferrite model as $$\left.
\begin{array}{l}
\mu^b = \displaystyle{1 + \frac{\omegao \, \omega_m}{\omegao^2 -
\omega^2}}\\[8pt]
\mu^b_g = \displaystyle{ \frac{\omega \, \omega_m}{\omegao^2 -
\omega^2}}\\[8pt]
\mu^b_z = 1
\end{array}
\right\},$$ wherein $\omegao$ is the Larmor precessional frequency of spinning electrons and $\omega_m$ is the saturated magnetization frequency Lax, Chen. Thus, the parameter values $\mu^b = 3.5$ and $\mu^b_g \in [0,4]$ correspond to the relative frequency range $\le \omegao / \omega \ri \in [0.625, \infty )$.
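Indeed, the quoted range may be checked directly: taking the ratio of the expressions for $\mu^b-1$ and $\mu^b_g$ above eliminates $\omega_m$, so that $$\frac{\omegao}{\omega}=\frac{\mu^b-1}{\mu^b_g}=\frac{2.5}{\mu^b_g}\geq\frac{2.5}{4}=0.625, \qquad 0<\mu^b_g\leq 4,$$ with $\omegao/\omega\to\infty$ as $\mu^b_g\to 0$.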
Let $f_a$ denote the volume fraction of the isotropic chiral component medium $a$. In figure 1, the estimated constitutive parameters of the HCM are plotted as functions of $f_a$ for $\mu^b_g = 4$. The uniaxial and gyrotropic characteristics of a FCM are clearly reflected by the constituents of the permeability dyadic $\=\mu^{HCM}$ and the magnetoelectric dyadic $\=\xi^{HCM}$. In contrast, the HCM is close to being isotropic with respect to its dielectric properties. Significantly, eight of the nine scalars appearing in are positive, while $\eps_g^{HCM}$ is negative only for $f_a < 0.32$; however, $\vert\eps_g^{HCM}\vert << \vert\eps^{HCM}\vert$ and $\vert\eps_g^{HCM}\vert << \vert\eps_z^{HCM}\vert$ for all values of $f_a \in [0,1]$.
The permeability parameters $\mu^{HCM}$ and $\mu^{HCM}_g$ are equal at $f_a \approx 0.25$, it being clear from the right side of that this equality has an important bearing on the stability of $w$. Further calculations with $\mu^b_g = 2$ and $\mu^b_g = 3$ have confirmed that $\mu^{HCM} \neq
\mu^{HCM}_g$ for all volume fractions $f_a \in [0,1]$. This matter is pursued in figure 2 where the estimated constitutive parameters of the HCM are graphed as functions of $\mu^b_g$ for $f_a = 0.35$. The HCM gyrotropic parameters $\xi^{HCM}_g$ and $\mu^{HCM}_g$ are observed to increase steadily as $\mu^b_g$ increases; $\eps^{HCM}_g$, $\xi^{HCM}_g$ and $\mu^{HCM}_g$ all vanish in the limit $\mu^b_g \rightarrow 0$. Also, as $\mu^b_g$ increases, the degree of uniaxiality (with respect to the $z$ axis) increases for $\=\xi^{HCM}$ but decreases for $\=\mu^{HCM}$.
The relative wavenumbers $\tilde{k} = \kappa_{i-iv}$ for propagation along the $z$ axis, as specified in , are displayed in figure 3 as functions of $f_a$, for $\mu^b_g = 2, 3$ and $4$. The relative wavenumbers $\kappa_i > 0 $ and $\kappa_{ii} < 0$ for all $f_a \in [0,1]$ for $\mu^b_g = 2, 3$ and $4$. Similarly, for $\mu^b_g = 2$ and $3$, the relative wavenumbers $\kappa_{iii} > 0$ and $\kappa_{iv} < 0$.
However, the equality $\mu^{HCM} =
\mu^{HCM}_g$, which occurs at $f_a \approx 0.25$ for $\mu^b_g = 4$, results in $\kappa_{iii}$ and $\kappa_{iv}$ acquiring nonzero imaginary parts as $f_a$ falls below $0.25$ for $\mu^b_g = 4$. Only the real parts of these complex–valued relative wavenumbers are plotted in figure 3.
Observe that $\kappa_i$, $\kappa_{iii}$ and $\kappa_{iv} > 0$ in figure 3, whereas $\kappa_{ii} < 0$ in the volume fraction range $ 0.25 < f_a < 0.42$ with $\mu^b_g = 4$. Furthermore, $\kappa_i$, $\mbox{Re} \, \lec
\kappa_{iii} \, \ric$ and $\mbox{Re} \, \lec
\kappa_{iv} \, \ric > 0$ while $\kappa_{ii} < 0$ for $ f_a < 0.25$ with $\mu^b_g = 4$. In the limit $f_a \rightarrow 0$, the relative wavenumbers $ \kappa_{i-iv} \rightarrow \pm \sqrt{\eps^b}\sqrt{\mu^b
\pm \mu^b_g}$ (i.e., the relative wavenumbers of a ferrite biased along the $z$ axis Chen). Also, as $f_a \rightarrow 1$, the relative wavenumbers $ \kappa_{i-iv} \rightarrow \pm \sqrt{\eps^a
\mu^a} \pm \xi^a$ (i.e., the relative wavenumbers of an isotropic chiral medium).
The values of $w$ corresponding to the relative wavenumbers $\kappa_{i-iv}$ of figure 3, namely $w_{i-iv} $, are plotted against $f_a$ in figure 4 for $\mu^b_g = 2, 3$ and $4$. The quantities $w_{i-iii} \ge 0$ for all volume fractions $f_a \in [0,1]$ with $\mu^b_g = 2, 3$ and $4$. Thus, for the relative wavenumbers $\kappa_{i-iii}$, power flows in the same direction as the phase velocity. This is the case regardless of whether the phase velocity is directed along the positive $z$ axis (as in modes $\kappa_{i}$ and $\kappa_{iii}$) or directed along the negative $z$ axis (as in mode $\kappa_{ii}$). Both $w_{iii}$ and $w_{iv}$ are null valued in those regions where the corresponding relative wavenumbers, $\kappa_{iii}$ and $\kappa_{iv}$, respectively, have nonzero imaginary parts. In addition, $w_{iii} \rightarrow \infty$ and $w_{iv} \rightarrow - \infty$ in the vicinity of $f_a = 0.25$ for $\mu^b_g = 4$.
Significantly, $w_{iv} < 0$ for $\mu^b_g = 4$ at volume fractions $f_a \in (0.25, 0.42)$ in figure 4. This means that [*the negative phase–velocity condition then holds in the chosen FCM*]{} which has been conceptualized as a homogenized composite medium.
In figure 5, the relative wavenumbers $\kappa_i$ and $\kappa_{iii}$ for $\theta = \pi /2$ (i.e., propagation along the $x$ axis) are plotted against $f_a$ for $\mu^b_g = 2, 3$ and $4$. The graphs of $\kappa_{ii}$ and $\kappa_{iv}$ need not be presented since $\kappa_{i} = - \kappa_{ii}$ and $\kappa_{iii} = -
\kappa_{iv}$.[^3] For all $f_a \in [0,1]$ with $\mu^b_g = 2$ and $3$, the relative wavenumbers $\kappa_{i} > 0$ and $\kappa_{iii} < 0$. Similarly, $\kappa_i > 0$ for $\mu^b_g = 4$. However, when $\mu^b_g = 4$, it is found that $\kappa_{iii} < 0$ for $f_a >
0.42$ but $\kappa_{iii}$ possesses a nonzero imaginary part and $\mbox{Re} \, \lec \kappa_{iii} \ric = 0 $ for $f_a < 0.42$. In the limit $f_a \rightarrow 0$, the relative wavenumbers $\kappa_{i-iv} \rightarrow \pm \sqrt{\eps^b / \mu^b
}\sqrt{\le \mu^b \ri^2 -
\le \mu^b_g \ri^2}$ and $\pm\sqrt{\eps^b \mu^b_z}$ (i.e., the relative wavenumbers of a ferrite biased along the $x$ axis Chen). Also, as $f_a \rightarrow 1$, the relative wavenumbers $\kappa_{i-iv} \rightarrow \pm \sqrt{\eps^a
\mu^a} \pm \xi^a$ (i.e., the relative wavenumbers of an isotropic chiral medium).
Figure 6 shows the plots of $w_{i,iii} $ corresponding to the relative wavenumbers $\kappa_{i,iii}$ of figure 5. The graphs of $w = w_{ii,iv} $ corresponding to the relative wavenumbers $\kappa_{ii,iv}$ are not displayed since the equalities $w_i = w_{ii}$ and $w_{iii} = w_{iv}$ hold for $\theta = \pi/2$ — as may be inferred from –. The quantities $w_{i,iii} \ge 0$ at all volume fractions $f_a \in [0,1]$ with $\mu^b_g = 2, 3$ and $4$. As remarked earlier for $\kappa_{i-iii}$ propagation along the $z$ axis, here we have that power flows in the same direction as the phase velocity, regardless of whether the phase velocity is directed along the positive $x$ axis (mode $\kappa_{i}$) or along the negative $x$ axis (mode $\kappa_{iii}$). Furthermore, it is found that $w_{iii} = 0$ in the region where the corresponding relative wavenumber $\kappa_{iii}$ is purely imaginary (i.e., for $f_a < 0.42$ with $\mu^b_g = 4$).
Dissipative FCMs
----------------
The scope of these numerical investigations is now broadened by considering (i) the effects of dissipation or loss; and (ii) propagation in an arbitrary direction. Let a small amount of loss be incorporated into component medium $b$ by selecting the constitutive parameters of the component mediums as $$\begin{aligned}
&& \eps^a = 3.2, \:\: \xi^a = 2.4, \:\: \mu^a = 2; \:\: \eps^b = 2.2 +
i \, \delta , \:\:
\mu^b = 3.5 + i \, \delta , \:\: \mu^b_z = 1 + i \, 0.5 \delta , \:\:
\mu^b_g = 4 + i \, 2 \delta,\end{aligned}$$ where the dissipation parameter $\delta \in [0,0.2]$. We focus attention on the region of negative phase–velocity propagation along the $z$ axis with relative wavenumber $\kappa_{iv}$, as illustrated by $w_{iv} < 0$ at $0.25 < f_a < 0.42 $ in figure 4.
Real parts of the relative wavenumber $\kappa_{iv}$, calculated at the volume fraction $f_a = 0.35$ with $\delta = 0, 0.1$ and $0.2$, are graphed as functions of $\theta$ in figure 7. The relative wavenumber $\kappa_{iv}$ for the nondissipative FCM (i.e., $\delta = 0$) is real–valued for $\theta < 52^\circ$ but has a nonzero imaginary part for $\theta > 52^\circ$. The relative wavenumbers $\kappa_{iv}$ for $\delta = 0.1$ and $0.2$ have nonzero imaginary parts for all values of $\theta$. Note that the real part of $\kappa_{iv}$ falls to zero at $\theta = \pi/2$ in the absence of dissipation (i.e., $\delta = 0$).
Plots of the quantity $w = w_{iv}$, corresponding to the relative wavenumber $\kappa_{iv}$ of figure 7, are provided in figure 8.
*The negative phase–velocity condition $w_{iv} < 0$ is satisfied*
- for $\theta < 52^\circ$ when $\delta = 0$,
- for $\theta < 76^\circ$ when $\delta =0.1$, and
- for $\theta < 38^\circ$ when $\delta = 0.2$.
Discussion and Conclusion
=========================
In isotropic dielectric–magnetic mediums, plane waves can propagate with phase velocity directed opposite to the direction of power flow under certain, rather restrictive, conditions MartinLW02. However, the constitutive parameter space associated with anisotropic and bianisotropic mediums provides a wealth of opportunities for observing and exploiting negative phase–velocity behavior. General conditions are established here for the phase velocity to be directed opposite to power flow for a particular class of bianisotropic mediums, namely Faraday chiral mediums. The theory has been explored by means of a representative example of FCMs, arising from the homogenization of an isotropic chiral medium and a magnetically biased ferrite. For our representative example, the negative phase–velocity conditions have been found to hold for propagation in arbitrary directions — for both nondissipative and dissipative FCMs — provided that the gyrotropic parameter of the ferrite component medium is sufficiently large compared with the corresponding nongyrotropic permeability parameters.
Previous studies Ziolkowski–Hu have emphasized the importance of the signs of constitutive (scalar) parameters in establishing the conditions for negative phase–velocity propagation in homogeneous mediums.[^4] In the absence of dissipation, negative phase–velocity propagation has been predicted in
- isotropic dielectric–magnetic mediums provided that both the permittivity and permeability scalars are negative Veselago03, and
- uniaxial dielectric–magnetic mediums when only one of the four constitutive scalars is negative Lindell01.
Also, the conditions for negative phase–velocity propagation may be fulfilled by dissipative isotropic dielectric–magnetic mediums when only one of the two constitutive scalars has a negative real part MartinLW02. The present study demonstrates that [*the condition for negative phase–velocity propagation can be satisfied by nondissipative FCMs with constitutive scalars that are all positive*]{}. Furthermore, these conditions continue to be satisfied after the introduction of a small amount of dissipation.
For the particular case of propagation parallel to the ferrite biasing field, the components of the time–averaged Poynting vector are null–valued in directions perpendicular to the propagation direction. In contrast, for general propagation directions, the time–averaged Poynting vector has nonzero components perpendicular to the direction of propagation. Further studies are required to explore the consequences of the negative phase–velocity condition $ \tilde{k}_R \, \hat{\#u} \. \#P (\#r) < 0 $ for such general propagation directions.
To conclude, more general bianisotropic mediums, particularly those developed as HCMs based on nonspherical particulate components, offer exciting prospects for future studies of negative phase–velocity propagation.
[99]{}
R.W. Ziolkowski and E. Heyman, Phys. Rev. E [**64**]{}, 056625 (2001).
A. Lakhtakia, M.W. McCall, and W.S. Weiglhofer, Arch. Elektr. Übertrag. [**56**]{}, 407 (2002).
V.G. Veselago, in *Advances in Electromagnetics of Complex Media and Metamaterials*, edited by S. Zouhdi, A. Sihvola and M. Arsalane (Kluwer, Dordrecht, The Netherlands, 2003), p. 83.
V.G. Veselago, Sov. Phys. Usp. [**10**]{}, 509 (1968).
D.R. Smith *et al.*, Phys. Rev. Lett. [**84**]{}, 4184 (2000).
R.A. Shelby, D.R. Smith, and S. Schultz, Science [**292**]{}, 77 (2001).
J.B. Pendry *et al.*, J. Phys.: Condens. Matter [**10**]{}, 4785 (1998).
J.B. Pendry *et al.*, IEEE Trans. Microwave Theory Tech. [**47**]{}, 2075 (1999).
M.W. McCall, A. Lakhtakia, and W.S. Weiglhofer, Eur. J. Phys. [**23**]{}, 353 (2002).
I.V. Lindell *et al.*, Microw. Opt. Technol. Lett. [**31**]{}, 129 (2001).
P.M. Valanju, R.M. Walser, and A.P. Valanju, Phys. Rev. Lett. [**88**]{}, 187401 (2002).
A. Lakhtakia, M.W. McCall, and W.S. Weiglhofer, in *Introduction to Complex Mediums for Optics and Electromagnetics*, edited by W.S. Weiglhofer and A. Lakhtakia (SPIE Optical Engineering Press, Bellingham, WA, in press).
L. Hu and Z. Lin, *Physics Letters A* [**313**]{}, 316 (2003).
M.K. Kärkkäinen, *Phys. Rev. E* [**68**]{}, 026602 (2003).
E. Engheta, D.L. Jaggard, and M.W. Kowarz, IEEE Trans. Antennas Propagat. [**40**]{}, 367 (1992).
W.S. Weiglhofer and A. Lakhtakia, Microw. Opt. Technol. Lett. [**17**]{}, 405 (1998).
A. Lakhtakia, *Beltrami Fields in Chiral Media*, (World Scientific, Singapore, 1994).
B. Lax and K.J. Button, *Microwave Ferrites and Ferrimagnetics*, (McGraw–Hill, New York, NY, 1962).
H.C. Chen, *Theory of Electromagnetic Waves*, (McGraw–Hill, New York, NY, 1983).
R.E. Collin, *Foundations for Microwave Engineering*, (McGraw–Hill, New York, NY, 1966), Chap. 6
W.S. Weiglhofer, A. Lakhtakia, and B. Michel, Microwave Opt. Technol. Lett. [**18**]{}, 342 (1998).
W.S. Weiglhofer and T.G. Mackay, Arch. Elektr. Übertrag. [**54**]{}, 259 (2000).
A. Lakhtakia (ed), [*Selected Papers on Linear Optical Composite Materials*]{}, (SPIE Optical Engineering Press, Bellingham, WA, 1996).
T.G. Mackay, in *Introduction to Complex Mediums for Optics and Electromagnetics*, edited by W.S. Weiglhofer and A. Lakhtakia (SPIE Optical Engineering Press, Bellingham, WA, USA, in press).
Z. Fu, H. Zhou, and K. Zhang, Int. J. Infrared Millim. Waves [**24**]{}, 239 (2003).
T.G. Mackay, A. Lakhtakia, and W.S. Weiglhofer, Arch. Elektr. Übertrag. [**55**]{}, 243 (2001).
M. Abramowitz and I.A. Stegun (eds.), *Handbook of Mathematical Functions*, (Dover, New York, NY, 1965).
L. Ward, *The Optical Constants of Bulk Materials and Films*, (Adam Hilger, Bristol, UK, 1988).
B. Michel *et al.*, Compos. Sci. Technol. [**61**]{}, 13 (2001).
M. Notomi, Opt. Quantum Electron. [**34**]{}, 133 (2002).
C. Luo *et al.*, Phys. Rev. B [**65**]{}, 201104 (2002).
[^1]: Fax: +44 131 650 6553; e–mail: [email protected]
[^2]: Fax: +1 814 863 7967; e–mail: [email protected]
[^3]: When $\theta = \pi/2$, the dispersion relation reduces to a quadratic polynomial in $\tilde{k}^2$.
[^4]: Parenthetically, negative refraction is also displayed by certain purely dielectric mediums, but they must be nonhomogeneous Notomi,LJJP.
---
abstract: 'We establish the strong $\mathcal{L}^2(\mathcal{P})$-convergence of properly rescaled Wick powers as the power index tends to infinity. The explicit representation of such limit will also provide the convergence in distribution to normal and log-normal random variables. The proofs rely on some estimates for the $\mathcal{L}^2(\mathcal{P})$-norm of Wick products and on the properties of second quantization operators.'
author:
- Alberto Lanconelli
title: Some limit theorems for rescaled Wick powers
---
-- ---------------------------------
Dipartimento di Matematica
Universita’ degli Studi di Bari
Via E. Orabona, 4
70125 Bari - Italia
E-mail: [email protected]
-- ---------------------------------
Key words and phrases: Wick product, second quantization operator, convergence in distribution.\
AMS 2000 classification: 60H40, 60F25, 60H15.
Introduction
==============
In the last decade several authors have identified the Wick product as a necessary tool for the study of certain types of stochastic partial differential equations (SPDEs) or for the solution to some related problems. This is motivated by the crucial features of the Wick product: firstly it represents a bridge between stochastic and classical integration theories; secondly it provides an efficient way of multiplying infinite dimensional distributions. Since several SPDEs of interest do not admit classical solutions, the possibility of treating nonlinearities becomes fundamental. Important examples in these regards are the stochastic quantization equation, which was studied among others in [@DD],[@DT] and [@GG], and the KPZ equation, studied for instance in [@BG] and [@C]. Also the problem of finding Itô’s type formulas for SPDEs leads in a natural way to the use of Wick powers as a renormalization technique. This is shown in [@Z] and [@L]. We also mention the book [@HOUZ] which proposes a systematic use of Wick products for the formulation and the study of a variety of SPDEs.\
Let us briefly introduce the Wick product (see the next section for precise definitions). Consider two multiple Itô integrals $I_n(h_n)$ and $I_m(g_m)$ where $h_n\in\mathcal{L}^2([0,T]^n)$ and $g_m\in\mathcal{L}^2([0,T]^m)$. The well-known Hu-Meyer formula establishes that $$\begin{aligned}
I_n(h_n)\cdot I_m(g_m)=\sum_{r=0}^{n\wedge m}r!{n \choose r}{m
\choose r}I_{n+m-2r}(h_n\hat{\otimes}_rg_m),\end{aligned}$$ where $$\begin{aligned}
(h_n\otimes_rg_m)(t_1,...,t_{n+m-2r}):=\int_{[0,T]^r}h_n(t_1,...,t_{n-r},\bold{s})
g_m(t_{n-r+1},...,t_{n+m-2r},\bold{s})d\bold{s},\end{aligned}$$ and $\hat{}$ stands for symmetrization. If $h_n$ and $g_m$ are only tempered distributions then $I_n(h_n)$ and $I_m(g_m)$ become generalized random variables. In this case the pointwise product (1.1) is not anymore well defined due to the presence of the trace terms (1.2) that do not make sense in this new situation. To overcome this problem one can drop these problematic terms and define a new product obtained keeping only the term with $r=0$ in the sum (1.1), namely: $$\begin{aligned}
I_n(h_n)\diamond I_m(g_m):=I_{n+m}(h_n\hat{\otimes} g_m).\end{aligned}$$ This is called *Wick product* of $I_n(h_n)$ and $I_m(g_m)$. If for example we denote by $W_t$ the time derivative of a Brownian motion $B_t$ then we can write $W_t=I_1(\delta_t)$ ($\delta_t$ stands for the Dirac’s delta function concentrated in $t$) and obtain $$\begin{aligned}
W_t\diamond W_t=I_2(\delta_t\otimes\delta_t).\end{aligned}$$ Observe that applying formally (1.1) we get $$\begin{aligned}
W_t\diamond W_t="W_t^2-\int_0^T\delta_t^2(s)ds".\end{aligned}$$ The above mentioned bridge between stochastic and classical integration theories can be now formalized precisely as $$\begin{aligned}
\int_0^T\xi_tdB_t=\int_0^T\xi_t\diamond W_t dt,\end{aligned}$$ where the left hand side denotes the Itô integral of the stochastic process $\xi_t$.\
The aim of the present paper is the investigation of the limiting behavior of the sequence $$\begin{aligned}
X^{\diamond n}:=X\diamond\cdot\cdot\cdot\diamond X,\quad n\geq 1,\end{aligned}$$ as $n$ goes to infinity. The motivation for doing this is twofold: on one hand Wick powers of the type (1.3) appear in the formulation of the stochastic quantization equation (see [@DD]); on the other hand the Wick product, as suggested in [@KSS], can be viewed as a convolution between (generalized) random variables. Therefore a theorem about the limiting behavior of the sequence in (1.3) constitutes a result in the spirit of the central limit theorem for the convolution $\diamond$.\
We will prove that for any square integrable $X$ a properly rescaled version of the sequence in (1.3) converges in the strong topology of $\mathcal{L}^2(\mathcal{P})$ to a so-called stochastic exponential; this result will imply the convergence in distribution to log-normal random variables. We will also show that under the assumption of the positivity of $X$ the logarithm of the above mentioned sequence converges in distribution to a normal random variable.\
The paper is organized as follows: Section 2 recalls some classical background information and introduces the necessary definitions. We refer the reader to the book [@N] for more detailed material. Section 3 presents the main results of the paper together with some important corollaries concerning convergence in distribution.
Preliminaries
===============
Let $(\Omega,\mathcal{F},\mathcal{P})$ be the classical Wiener space over the time interval $[0,T]$ and denote by $B_t(\omega):=\omega(t), t\in [0,T]$ the coordinate process which is a Brownian motion under the measure $\mathcal{P}$. Set as usual $$\begin{aligned}
\mathcal{L}^2(\mathcal{P}):=\Big\{X:\Omega\to\mathbb{R}\mbox{
measurable s.t. }
E[|X|^2]:=\int_{\Omega}|X(\omega)|^2d\mathcal{P}(\omega)<+\infty\Big\},\end{aligned}$$ and $$\begin{aligned}
\Vert X\Vert:=(E[|X|^2])^{\frac{1}{2}}.\end{aligned}$$ According to the Wiener-Itô chaos representation theorem any $X\in\mathcal{L}^2(\mathcal{P})$ can be written uniquely as $$\begin{aligned}
X=\sum_{n\geq 0}I_n(h_n),\end{aligned}$$ where $I_0(h_0)=E[X]$ and for $n\geq 1$, $h_n\in\mathcal{L}^2([0,T]^n)$ is a symmetric deterministic function which is called the *n-th order kernel of X*. Moreover for $n\geq 1$, $I_n(h_n)$ stands for the $n$-th order multiple Itô integral of $h_n$ w.r.t. the Brownian motion $\{B_t\}_{0\leq t\leq
T}$.\
By means of this representation the $\mathcal{L}^2(\mathcal{P})$-norm of $X$ takes the following form: $$\begin{aligned}
\Vert X\Vert^2=\sum_{n\geq 0}n!|h_n|_{\mathcal{L}^2([0,T]^n)}^2.\end{aligned}$$
Given two square integrable random variables $X$ and $Y$ with chaotic representations: $$\begin{aligned}
X=\sum_{n\geq 0}I_n(h_n)\mbox{ and }Y=\sum_{n\geq 0}I_n(g_n),\end{aligned}$$ we call *Wick product of X and Y* the following quantity: $$\begin{aligned}
X\diamond Y:=\sum_{n\geq 0}I_n(k_n),\mbox{ with
}k_n:=\sum_{j=0}^nh_j\hat{\otimes}g_{n-j},\end{aligned}$$ where $\hat{\otimes}$ denotes the symmetric tensor product. We also denote $$\begin{aligned}
X^{\diamond n}:=X\diamond\cdot\cdot\cdot\diamond X\mbox{
($n$-times). }\end{aligned}$$
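For instance, if $X=I_1(h)$ with $h\in\mathcal{L}^2([0,T])$, $h\neq 0$, then the only non zero kernel of $X^{\diamond n}$ is the $n$-th one, namely $h^{\otimes n}$, and the well known identity relating multiple Itô integrals and Hermite polynomials (see e.g. [@N]) gives $$X^{\diamond n}=I_n(h^{\otimes n})=|h|^n_{\mathcal{L}^2([0,T])}\,H_n\Big(\frac{I_1(h)}{|h|_{\mathcal{L}^2([0,T])}}\Big),$$ where $H_n$ denotes the $n$-th Hermite polynomial with leading coefficient one: Wick powers of elements of the first chaos are the classical Hermite renormalizations of ordinary powers.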
In general $X\diamond Y$ does not belong to $\mathcal{L}^2(\mathcal{P})$ since it may happen that $$\begin{aligned}
\Vert X\diamond Y\Vert^2=\sum_{n\geq
0}n!|k_n|_{\mathcal{L}^2([0,T]^n)}^2=+\infty.\end{aligned}$$
The next inequality is a straightforward generalization of Theorem 9 in [@KSS] where the Wick product of two random variables is considered. This result, that will be of crucial importance in our proofs, provides a sufficient condition for the Wick product of random variables to be square integrable.\
First we need to recall that for $\lambda\in\mathbb{R}$ we denote by $\Gamma(\lambda)$ the following operator: $$\begin{aligned}
\Gamma(\lambda)X=\Gamma(\lambda)\sum_{n\geq 0}I_n(h_n):=\sum_{n\geq
0}\lambda^nI_n(h_n).\end{aligned}$$
Let $X_1,X_2,...,X_n\in\mathcal{L}^2(\mathcal{P})$. Then $$\begin{aligned}
\Vert X_1\diamond\cdot\cdot\cdot\diamond X_n\Vert\leq
\Vert\Gamma(\sqrt{n})X_1\Vert\cdot\cdot\cdot\Vert\Gamma(\sqrt{n})X_n\Vert,\end{aligned}$$ or equivalently $$\begin{aligned}
\Big\Vert
\Gamma\Big(\frac{1}{\sqrt{n}}\Big)\Big(X_1\diamond\cdot\cdot\cdot\diamond
X_n\Big)\Big\Vert\leq \Vert X_1\Vert\cdot\cdot\cdot\Vert X_n\Vert.\end{aligned}$$ In particular for $X_1=\cdot\cdot\cdot=X_n=X$, we get $$\begin{aligned}
\Vert X^{\diamond n}\Vert\leq \Vert\Gamma(\sqrt{n})X\Vert^n,\end{aligned}$$ or equivalently $$\begin{aligned}
\Big\Vert \Gamma\Big(\frac{1}{\sqrt{n}}\Big)X^{\diamond
n}\Big\Vert\leq \Vert X\Vert^n.\end{aligned}$$
If $\lambda\in ]0,1]$, the operator $\Gamma(\lambda)$ can be expressed in terms of the Ornstein-Uhlenbeck semigroup. In fact if we write for $t\geq 0$, $$\begin{aligned}
(P_tX)(\omega):=\int_{\Omega}X(e^{-t}\omega+\sqrt{1-e^{-2t}}\tilde{\omega})d\mathcal{P}(\tilde{\omega}),\end{aligned}$$ then $$\begin{aligned}
\Gamma(\lambda)=\Gamma(e^{\log\lambda})=P_{-\log\lambda}.\end{aligned}$$ In particular $$\begin{aligned}
\Gamma\Big(\frac{1}{\sqrt{n}}\Big)=P_{\frac{1}{2}\log n}.\end{aligned}$$
We conclude this section observing that for $\lambda,\mu\in\mathbb{R}$, $$\begin{aligned}
\Gamma(\mu)\Gamma(\lambda)=\Gamma(\mu\lambda),\end{aligned}$$ and that for $X,Y\in\mathcal{L}^2(\mathcal{P})$, $$\begin{aligned}
\Gamma(\lambda)(X\diamond
Y)=\Gamma(\lambda)X\diamond\Gamma(\lambda)Y.\end{aligned}$$
Main results
==============
We are now ready to state one of the main results of this paper.
Let $X\in\mathcal{L}^2(\mathcal{P})$ with $E[X]\neq 0$ and denote by $h_1\in\mathcal{L}^2([0,T])$ the first-order kernel in the chaos decomposition of $X$. Then $$\begin{aligned}
\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
n}\in\mathcal{L}^2(\mathcal{P})\mbox{ for any }n\geq 1,\end{aligned}$$ and $$\begin{aligned}
\lim_{n\to\infty}\frac{\Gamma(\frac{1}{n})X^{\diamond
n}}{E[X]^n}=\exp\Big\{\int_0^Th_1(s)dB_s-\frac{1}{2}\int_0^Th_1^2(s)ds\Big\},\end{aligned}$$ where the convergence is in the strong topology of $\mathcal{L}^2(\mathcal{P})$.
To ease the notation for $h\in\mathcal{L}^2([0,T])$ we set $$\begin{aligned}
\mathcal{E}(h):=\exp\Big\{\int_0^Th(s)dB_s-\frac{1}{2}\int_0^Th^2(s)ds\Big\}.\end{aligned}$$ The random variable $\mathcal{E}(h)$ belongs to $\mathcal{L}^2(\mathcal{P})$ and its chaotic representation is $$\begin{aligned}
\mathcal{E}(h)=\sum_{n\geq 0}I_n\Big(\frac{h^{\otimes n}}{n!}\Big).\end{aligned}$$ From this identity one can easily derive the following properties: $$\begin{aligned}
\Vert\mathcal{E}(h)\Vert\!\!&=&\!\!\exp\Big\{\frac{1}{2}|h|^2_{\mathcal{L}^2([0,T])}\Big\};\\
\Gamma(\lambda)\mathcal{E}(h)\!\!&=&\!\!\mathcal{E}(\lambda
h);\\
\mathcal{E}(h)\diamond\mathcal{E}(g)\!\!&=&\!\!\mathcal{E}(h+g).\end{aligned}$$
First of all note that since $E[X]$ is a constant we can write $$\begin{aligned}
\frac{\Gamma(\frac{1}{n})X^{\diamond
n}}{E[X]^n}=\Gamma\Big(\frac{1}{n}\Big)\Big(\frac{X}{E[X]}\Big)^{\diamond
n};\end{aligned}$$ Therefore we can assume without loss of generality that $E[X]=1$ and prove that $$\begin{aligned}
\lim_{n\to\infty}\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
n}=\mathcal{E}(h_1).\end{aligned}$$
For any $n\geq 1$, $$\begin{aligned}
\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
n}\in\mathcal{L}^2(\mathcal{P}).\end{aligned}$$ In fact according to Theorem 2.1, $$\begin{aligned}
\Big\Vert\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
n}\Big\Vert&=&\Big\Vert\Gamma\Big(\frac{1}{\sqrt{n}}\Big)\Gamma\Big(\frac{1}{\sqrt{n}}\Big)X^{\diamond
n}\Big\Vert\\
&\leq&\Big\Vert\Gamma\Big(\frac{1}{\sqrt{n}}\Big)X^{\diamond
n}\Big\Vert\\
&\leq&\Vert X\Vert^n.\end{aligned}$$ Moreover for $n\geq 1$, $$\begin{aligned}
\mathcal{E}(h_1)&=&\mathcal{E}\Big(\underbrace{\frac{h_1}{n}+\cdot\cdot\cdot+\frac{h_1}{n}}_{n-times}\Big)\\
&=&\underbrace{\mathcal{E}\Big(\frac{h_1}{n}\Big)\diamond\cdot\cdot\cdot\diamond\mathcal{E}\Big(\frac{h_1}{n}\Big)}_{n-times}\\
&=&\mathcal{E}\Big(\frac{h_1}{n}\Big)^{\diamond n}.\end{aligned}$$
We have to prove that $$\begin{aligned}
\lim_{n\to\infty}\Big\Vert\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
n}-\mathcal{E}(h_1)\Big\Vert=0.\end{aligned}$$ Since the Wick product is commutative, associative and distributive with respect to the sum, the following identity holds: $$\begin{aligned}
Y^{\diamond n}-Z^{\diamond
n}=(Y-Z)\diamond\Big(\sum_{j=0}^{n-1}Y^{\diamond j}\diamond
Z^{\diamond (n-1-j)}\Big).\end{aligned}$$ Therefore from Theorem 2.1 and properties (3.2)-(3.4) we obtain $$\begin{aligned}
\Big\Vert\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
n}-\mathcal{E}(h_1)\Big\Vert&=&\Big\Vert\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
n}-\mathcal{E}\Big(\frac{h_1}{n}\Big)^{\diamond n}\Big\Vert\\
&=&\Big\Vert\Big(\Gamma\Big(\frac{1}{n}\Big)X-\mathcal{E}\Big(\frac{h_1}{n}\Big)\Big)\diamond
\Big(\sum_{j=0}^{n-1}\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
j}\diamond \mathcal{E}\Big(\frac{h_1}{n}\Big)^{\diamond
(n-1-j)}\Big)\Big\Vert\\
&\leq&\Big\Vert\Gamma(\sqrt{2})\Big(\Gamma\Big(\frac{1}{n}\Big)X-\mathcal{E}\Big(\frac{h_1}{n}\Big)\Big)\Big\Vert\\
&&\times\Big\Vert\Gamma(\sqrt{2})\Big(\sum_{j=0}^{n-1}\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
j}\diamond \mathcal{E}\Big(\frac{h_1}{n}\Big)^{\diamond
(n-1-j)}\Big)\Big\Vert\\
&=&\Big\Vert\Gamma\Big(\frac{\sqrt{2}}{n}\Big)X-\mathcal{E}\Big(\frac{\sqrt{2}h_1}{n}\Big)\Big\Vert\\
&&\times\Big\Vert\sum_{j=0}^{n-1}\Gamma\Big(\frac{\sqrt{2}}{n}\Big)X^{\diamond
j}\diamond \mathcal{E}\Big(\frac{\sqrt{2}h_1}{n}\Big)^{\diamond
(n-1-j)}\Big\Vert\\
&\leq&\Big\Vert\Gamma\Big(\frac{\sqrt{2}}{n}\Big)X-\mathcal{E}\Big(\frac{\sqrt{2}h_1}{n}\Big)\Big\Vert\\
&&\times\sum_{j=0}^{n-1}\Big\Vert\Gamma\Big(\frac{\sqrt{2}}{n}\Big)X^{\diamond
j}\diamond \mathcal{E}\Big(\frac{\sqrt{2}h_1}{n}\Big)^{\diamond
(n-1-j)}\Big\Vert\\
&\leq&\Big\Vert\Gamma\Big(\frac{\sqrt{2}}{n}\Big)X-\mathcal{E}\Big(\frac{\sqrt{2}h_1}{n}\Big)\Big\Vert\\
&&\times\sum_{j=0}^{n-1}\Big\Vert\Gamma(\sqrt{n-1})\Gamma\Big(\frac{\sqrt{2}}{n}\Big)X\Big\Vert^j
\Big\Vert\Gamma(\sqrt{n-1})\mathcal{E}\Big(\frac{\sqrt{2}h_1}{n}\Big)\Big\Vert^{n-1-j}\\
&=&\Big\Vert\Gamma\Big(\frac{\sqrt{2}}{n}\Big)X-\mathcal{E}\Big(\frac{\sqrt{2}h_1}{n}\Big)\Big\Vert\\
&&\times\sum_{j=0}^{n-1}\Big\Vert\Gamma\Big(\frac{\sqrt{2(n-1)}}{n}\Big)X\Big\Vert^j
\Big\Vert\mathcal{E}\Big(\frac{\sqrt{2(n-1)}h_1}{n}\Big)\Big\Vert^{n-1-j}.\end{aligned}$$ For $0\leq j\leq n-1$, $$\begin{aligned}
\Big\Vert\mathcal{E}\Big(\frac{\sqrt{2(n-1)}h_1}{n}\Big)\Big\Vert^{n-1-j}&=&\exp\Big\{\frac{n-1}{n^2}(n-1-j)|h_1|^2\Big\}\\
&\leq&\exp\Big\{\frac{(n-1)^2}{n^2}|h_1|^2\Big\}\\
&\leq&\exp\{|h_1|^2\}.\end{aligned}$$ Therefore $$\begin{aligned}
\Big\Vert\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
n}-\mathcal{E}(h_1)\Big\Vert &\leq&
e^{|h_1|^2}\Big\Vert\Gamma\Big(\frac{\sqrt{2}}{n}\Big)X-\mathcal{E}\Big(\frac{\sqrt{2}h_1}{n}\Big)\Big\Vert\cdot
\sum_{j=0}^{n-1}\Big\Vert\Gamma\Big(\frac{\sqrt{2(n-1)}}{n}\Big)X\Big\Vert^j\\
&=&e^{|h_1|^2}\Big\Vert\Gamma\Big(\frac{\sqrt{2}}{n}\Big)X-\mathcal{E}\Big(\frac{\sqrt{2}h_1}{n}\Big)\Big\Vert
\frac{\Big\Vert\Gamma\Big(\frac{\sqrt{2(n-1)}}{n}\Big)X\Big\Vert^n-1}{\Big\Vert\Gamma\Big(\frac{\sqrt{2(n-1)}}{n}\Big)X\Big\Vert-1}.\end{aligned}$$ Now, $$\begin{aligned}
\lim_{n\to\infty}\Big\Vert\Gamma\Big(\frac{\sqrt{2(n-1)}}{n}\Big)X\Big\Vert^n&=&
\lim_{n\to\infty}\Big(1+\frac{2(n-1)}{n^2}|h_1|^2+o\Big(\frac{1}{n}\Big)\Big)^{\frac{n}{2}}\\
&=&\lim_{n\to\infty}\Big(1+\frac{2}{n}|h_1|^2+o\Big(\frac{1}{n}\Big)\Big)^{\frac{n}{2}}\\
&=&\exp{|h_1|^2},\end{aligned}$$ and $$\begin{aligned}
\lim_{n\to\infty}\frac{\Big\Vert\Gamma\Big(\frac{\sqrt{2}}{n}\Big)X-\mathcal{E}\Big(\frac{\sqrt{2}h_1}{n}\Big)\Big\Vert}
{\Big\Vert\Gamma\Big(\frac{\sqrt{2(n-1)}}{n}\Big)X\Big\Vert-1}&=&
\lim_{n\to\infty}\frac{\Big(2!\frac{4}{n^4}|h_2-h_1^{\otimes
2}|^2+o\Big(\frac{1}{n^4}\Big)\Big)^{\frac{1}{2}}}{\Big(1+\frac{2(n-1)}{n^2}|h_1|^2+o\Big(\frac{1}{n}\Big)\Big)^{\frac{1}{2}}-1}\\
&=&0.\end{aligned}$$ Hence we can conclude that $$\begin{aligned}
\lim_{n\to\infty}\Big\Vert\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
n}-\mathcal{E}(h_1)\Big\Vert=0.\end{aligned}$$
According to the observation of Remark 2.2 the statement of Theorem 3.1 can be formulated as follows: $$\begin{aligned}
\lim_{n\to\infty}\frac{P_{\log n}X^{\diamond
n}}{E[X]^n}=\exp\Big\{\int_0^Th_1(s)dB_s-\frac{1}{2}\int_0^Th_1^2(s)ds\Big\}.\end{aligned}$$
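The convergence can also be checked numerically in the simplest possible situation. The short sketch below is a pure illustration with assumed inputs ($E[X]=1$, a single first-order kernel of norm $c$, and an arbitrary truncation order) and is not part of the argument above; it evaluates the squared $\mathcal{L}^2(\mathcal{P})$ distance between $\Gamma\big(\frac{1}{n}\big)X^{\diamond n}$ and $\mathcal{E}(h_1)$ term by term in the chaos decomposition, using $\Vert I_k(h_1^{\otimes k})\Vert^2=k!\,|h_1|^{2k}$.

```python
# Illustrative sketch (assumed toy model, not the paper's setting): X = 1 + I_1(h_1)
# with |h_1| = c, so Gamma(1/n) X^{diamond n} has k-th kernel C(n,k) (c/n)^k h^{(x)k}
# while E(h_1) has kernel (c^k/k!) h^{(x)k}.  The squared L^2 distance is therefore
#   sum_k k! ( C(n,k)/n^k - 1/k! )^2 c^{2k},
# which the code below evaluates with an arbitrary truncation kmax.
from math import comb, factorial

def l2_distance_sq(n, c=1.0, kmax=60):
    total = 0.0
    for k in range(kmax + 1):
        coeff_n = comb(n, k) / n**k if k <= n else 0.0
        total += factorial(k) * (coeff_n - 1.0 / factorial(k))**2 * c**(2 * k)
    return total

for n in (1, 2, 5, 10, 50, 200, 1000):
    print(n, l2_distance_sq(n))
```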
Theorem 3.1 assumes that $E[X]\neq 0$. The case of zero mean random variables is treated in the following theorem.
Let $X\in\mathcal{L}^2(\mathcal{P})$ with $E[X]=0$. Assume that we can find a sequence of real numbers $\{a_n\}_{n\geq 1}$ such that $$\begin{aligned}
\Gamma(a_n)X^{\diamond n}\in\mathcal{L}^2(\mathcal{P})\mbox{ for any
}n\geq 1,\end{aligned}$$ and $$\begin{aligned}
\lim_{n\to\infty}\Gamma(a_n)X^{\diamond n}\end{aligned}$$ exists in the strong topology of $\mathcal{L}^2(\mathcal{P})$. Then the limit must be zero.
Suppose there exist a sequence of real numbers $\{a_n\}_{n\geq 1}$ and $Z\in\mathcal{L}^2(\mathcal{P})$, $Z\neq 0$ such that $$\begin{aligned}
\Gamma(a_n)X^{\diamond n}\in\mathcal{L}^2(\mathcal{P})\mbox{ for any
}n\geq 1,\end{aligned}$$ and $$\begin{aligned}
\lim_{n\to\infty}\Big\Vert\Gamma(a_n)X^{\diamond n}-Z\Big\Vert=0.\end{aligned}$$ Define $$\begin{aligned}
n_0:=\min\{n\geq 0: z_n\neq 0\},\end{aligned}$$ where $z_0=E[Z]$ and for $n\geq 1$, $z_n\in\mathcal{L}^2([0,T]^n)$ is the $n$-th order kernel in the Wiener-Itô chaos decomposition of $Z$.\
Since $Z$ is the strong $\mathcal{L}^2(\mathcal{P})$-limit of the sequence $\Gamma(a_n)X^{\diamond n}$, it is also its weak limit, that means $$\begin{aligned}
\lim_{n\to\infty}E[\Gamma(a_n)X^{\diamond n}U]=E[ZU],\end{aligned}$$ for all $U\in\mathcal{L}^2(\mathcal{P})$.\
Since $E[X]=0$ the first non zero term in the Wiener-Itô chaos decomposition of $\Gamma(a_n)X^{\diamond n}$ is at least of order $n$ and therefore for any $n>n_0$ we have $$\begin{aligned}
E[\Gamma(a_n)X^{\diamond n}I_{n_0}(z_{n_0})]=0,\end{aligned}$$ (by the orthogonality of homogeneous chaos of different orders) and hence $$\begin{aligned}
\lim_{n\to\infty}E[\Gamma(a_n)X^{\diamond n}I_{n_0}(z_{n_0})]=0.\end{aligned}$$ On the other hand $$\begin{aligned}
E[ZI_{n_0}(z_{n_0})]=n_0!|z_{n_0}|^2>0,\end{aligned}$$ by the definition of $n_0$. This means that $Z$ is not the weak limit of the sequence $\Gamma(a_n)X^{\diamond n}$. This contradiction completes the proof.
If $\{X_n\}_{n\geq 1}$ is a sequence of random variables, by the symbol $$\begin{aligned}
X_n\Rightarrow X\mbox{ as }n\to\infty,\end{aligned}$$ we mean that the sequence $\{X_n\}_{n\geq 1}$ converges in distribution as $n$ goes to infinity to the random variable $X$.
Since convergence in $\mathcal{L}^2(\mathcal{P})$ is stronger than convergence in distribution we have the following result.
Let $X\in\mathcal{L}^2(\mathcal{P})$ with $E[X]\neq 0$ and denote by $h_1\in\mathcal{L}^2([0,T])$ the first-order kernel in the chaos decomposition of $X$. Then for any $n\geq 2$ the distribution of the random variable $$\begin{aligned}
\Gamma\Big(\frac{1}{n}\Big)X^{\diamond n},\end{aligned}$$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$. Moreover $$\begin{aligned}
\frac{\Gamma\Big(\frac{1}{n}\Big)X^{\diamond n}}{E[X]^n}\Rightarrow
Y\mbox{ as }n\to\infty,\end{aligned}$$ where $Y$ is either a log-normal random variable (more precisely, $\ln Y$ is a normal random variable with mean $-\frac{1}{2}|h_1|_{\mathcal{L}^2([0,T])}^2$ and variance $|h_1|_{\mathcal{L}^2([0,T])}^2$) or $Y=1$.
Observe that $$\begin{aligned}
\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
n}&=&\Gamma\Big(\frac{1}{\sqrt{n}}\Big)\Gamma\Big(\frac{1}{\sqrt{n}}\Big)X^{\diamond
n}\\
&=&\Gamma\Big(\frac{1}{\sqrt{n}}\Big)Z\end{aligned}$$ where we set $Z:=\Gamma\Big(\frac{1}{\sqrt{n}}\Big)X^{\diamond n}$. From Theorem 2.1 we know that $Z\in\mathcal{L}^2(\mathcal{P})$ and therefore $\Gamma\Big(\frac{1}{n}\Big)X^{\diamond n}$ can be written as the image through the operator $\Gamma(\frac{1}{\sqrt{n}})$ of a square integrable random variable. By Theorem 4.24 in [@J] this implies the absolute continuity of the distribution of $\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
n}$.\
The rest of the proof follows from Theorem 3.1 and the fact that if $h_1\neq 0$ then $\int_0^Th_1(s)dB_s$ is a zero-mean Gaussian random variable with variance $|h_1|_{\mathcal{L}^2([0,T])}^2$.
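In the same toy model the distributional statement of Corollary 3.4 can be visualised by simulation. The sketch below is again only an illustration (the parameters $c$, $n$ and the sample size are arbitrary choices, not quantities from the corollary): $\Gamma\big(\frac{1}{n}\big)X^{\diamond n}$ is evaluated through the Hermite recursion $He_{k+1}(x)=x\,He_k(x)-k\,He_{k-1}(x)$, and its empirical distribution function is compared with the log-normal limit $\Phi\big((\ln y+\tfrac{c^2}{2})/c\big)$.

```python
# Hedged Monte-Carlo sketch for the toy model X = 1 + c*xi, xi standard Gaussian.
# Using I_k(h^{(x)k}) = He_k(I_1(h)) for a unit vector h, Gamma(1/n) X^{diamond n}
# equals sum_k C(n,k) (c/n)^k He_k(xi); the corollary predicts a log-normal limit.
import random
from math import comb, erf, log, sqrt

def wick_power(xi, coeffs):
    # sum_k coeffs[k] * He_k(xi), with He_{k+1}(x) = x He_k(x) - k He_{k-1}(x)
    h_prev, h_curr, total = 0.0, 1.0, 0.0
    for k, a in enumerate(coeffs):
        total += a * h_curr
        h_prev, h_curr = h_curr, xi * h_curr - k * h_prev
    return total

c, n, samples = 1.0, 200, 20000          # arbitrary illustrative choices
coeffs = [comb(n, k) * (c / n) ** k for k in range(n + 1)]
random.seed(0)
values = [wick_power(random.gauss(0.0, 1.0), coeffs) for _ in range(samples)]
lognormal_cdf = lambda y: 0.5 * (1.0 + erf((log(y) + 0.5 * c * c) / (c * sqrt(2.0))))
for y in (0.25, 0.5, 1.0, 2.0, 4.0):
    empirical = sum(v <= y for v in values) / samples
    print(y, round(empirical, 3), round(lognormal_cdf(y), 3))
```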
If $X$ is assumed to be non negative then Corollary 3.4 can be reformulated as follows.
Let $X\in\mathcal{L}^2(\mathcal{P})$ be a non negative random variable with $\mathcal{P}(X>0)>0$ and denote by $h_1\in\mathcal{L}^2([0,T])$ the first-order kernel in the chaos decomposition of $X$. Then for any $n\geq 2$, $$\begin{aligned}
\mathcal{P}\Big(\Gamma\Big(\frac{1}{n}\Big)X^{\diamond n}>0\Big)=1,\end{aligned}$$ and $$\begin{aligned}
\ln\Gamma\Big(\frac{1}{n}\Big)X^{\diamond n}-n\ln E[X]\Rightarrow
Z\mbox{ as }n\to\infty,\end{aligned}$$ where $Z$ is either a normal random variable with mean $-\frac{1}{2}|h_1|_{\mathcal{L}^2([0,T])}^2$ and variance $|h_1|_{\mathcal{L}^2([0,T])}^2$ or $Z=0$.
The second assertion is a straightforward consequence of Corollary 3.4 since convergence in distribution is preserved under the action of continuous functions.\
We have to prove that for any $n\geq 2$, $$\begin{aligned}
\mathcal{P}\Big(\Gamma\Big(\frac{1}{n}\Big)X^{\diamond n}>0\Big)=1.\end{aligned}$$ Let us write as before $$\begin{aligned}
\Gamma\Big(\frac{1}{n}\Big)X^{\diamond
n}=\Gamma\Big(\frac{1}{\sqrt{n}}\Big)Z,\end{aligned}$$ where $$\begin{aligned}
Z=\Gamma\Big(\frac{1}{\sqrt{n}}\Big)X^{\diamond n}.\end{aligned}$$ We want to prove that $Z$ is non negative; according to Theorem 4.1 in [@NZ] this is equivalent to proving that the function $$\begin{aligned}
h\in\mathcal{L}^2([0,T])\mapsto
E[Z\mathcal{E}(ih)]e^{-\frac{1}{2}|h|^2_{\mathcal{L}^2([0,T])}}\in\mathbb{C},\end{aligned}$$ where $i$ is the imaginary unit, is positive definite. Since $\Gamma\Big(\frac{1}{\sqrt{n}}\Big)$ is self-adjoint in $\mathcal{L}^2(\mathcal{P})$ and for any $h\in\mathcal{L}^2([0,T])$, $$\begin{aligned}
E[(X\diamond Y)\mathcal{E}(h)]=E[X\mathcal{E}(h)]E[Y\mathcal{E}(h)],\end{aligned}$$ we get $$\begin{aligned}
E[Z\mathcal{E}(ih)]e^{-\frac{1}{2}|h|^2_{\mathcal{L}^2([0,T])}}&=&
E\Big[\Gamma\Big(\frac{1}{\sqrt{n}}\Big)X^{\diamond
n}\mathcal{E}(ih)\Big]e^{-\frac{1}{2}|h|^2_{\mathcal{L}^2([0,T])}}\\
&=&E\Big[X^{\diamond
n}\mathcal{E}\Big(\frac{ih}{\sqrt{n}}\Big)\Big]e^{-\frac{1}{2}|h|^2_{\mathcal{L}^2([0,T])}}\\
&=&\Big(E\Big[X\mathcal{E}\Big(\frac{ih}{\sqrt{n}}\Big)\Big]\Big)^ne^{-\frac{1}{2}|h|^2_{\mathcal{L}^2([0,T])}}\\
&=&\Big(E\Big[X\mathcal{E}\Big(\frac{ih}{\sqrt{n}}\Big)\Big]
e^{-\frac{1}{2n}|h|^2_{\mathcal{L}^2([0,T])}}e^{\frac{1}{2n}|h|^2_{\mathcal{L}^2([0,T])}}\Big)^ne^{-\frac{1}{2}|h|^2_{\mathcal{L}^2([0,T])}}\\
&=&\Big(E\Big[X\mathcal{E}\Big(\frac{ih}{\sqrt{n}}\Big)\Big]
e^{-\frac{1}{2n}|h|^2_{\mathcal{L}^2([0,T])}}\Big)^ne^{\frac{1}{2}|h|^2_{\mathcal{L}^2([0,T])}}e^{-\frac{1}{2}|h|^2_{\mathcal{L}^2([0,T])}}\\
&=&\Big(E\Big[X\mathcal{E}\Big(\frac{ih}{\sqrt{n}}\Big)\Big]
e^{-\frac{1}{2n}|h|^2_{\mathcal{L}^2([0,T])}}\Big)^n.\end{aligned}$$ Since $X$ is by assumption non negative, the function $$\begin{aligned}
E\Big[X\mathcal{E}\Big(\frac{ih}{\sqrt{n}}\Big)\Big]
e^{-\frac{1}{2n}|h|^2_{\mathcal{L}^2([0,T])}}\end{aligned}$$ is positive definite as well as its $n$-th power, proving the non negativity of $Z$.\
The image through the operator $\Gamma\Big(\frac{1}{\sqrt{n}}\Big)$ of a non negative random variable is, by virtue of Corollary 4.29 in [@J], a $\mathcal{P}$-a.s. positive random variable. Since $\Gamma\Big(\frac{1}{n}\Big)X^{\diamond n}=\Gamma\Big(\frac{1}{\sqrt{n}}\Big)Z$ the previous assertion applies to $\Gamma\Big(\frac{1}{n}\Big)X^{\diamond n}$ and completes the proof.
[99]{}
Bertini,L. and Giacomin,G.: Stochastic Burgers and KPZ equations from particle systems. Comm. Math. Phys., 183 (1997), no. 3, pp. 571-607.
Chan,T.: Scaling limits of Wick ordered KPZ equation. Comm. Math. Phys. 209 (2000), no. 3, pp. 671-690.
Da Prato,G. and Debussche,A.: Strong solutions to the stochastic quantization equations. Ann. Probab. 31 (2003), no. 4, pp. 1900-1916.
Da Prato,G. and Tubaro,L.: Wick powers in stochastic PDEs: an introduction. Technical Report UTM 711, March 2007, Matematica, University of Trento
Gatarek,D. and Goldys,B.: Existence, uniqueness and ergodicity for the stochastic quantization equation. Studia Math. 119 (1996), no. 2, pp. 179-193.
Holden,H., [Ø]{}ksendal,B., Ub[ø]{}e,J. and Zhang,T.-S.: *Stochastic Partial Differential Equations - A Modeling, White Noise Functional Approach*. Birkhäuser, Boston, 1996.
Janson, S.: *Gaussian Hilbert spaces*. Cambridge Tracts in Mathematics, 129. Cambridge University Press, Cambridge, 1997.
Kuo,H.H., Saito,K. and Stan,A.I.: A Hausdorff-Young inequality for white noise analysis. Quantum information, IV (Nagoya, 2001), World Sci. Publ., River Edge, NJ, pp. 115-126.
Lanconelli,A.: White noise approach to the Itô formula for the stochastic heat equation. Communication in Stochastic Analysis, Vol. 1, n. 2, 2007, pp. 311-320.
Nualart,D.: [*The Malliavin calculus and related topics*]{}, Springer-Verlag, 1995.
Nualart,D. and Zakai,M.: Positive and strongly positive Wiener functionals. Barcelona Seminar on Stochastic Analysis (St. Feliu de Guíxols, 1991), Progr. Probab., 32, Birkhäuser, Basel, pp. 132-146.
Zambotti,L.: Itô-Tanaka’s formula for stochastic partial differential equations driven by additive space-time white noise. SPDEs and applications VII, Lect. Notes Pure Appl. Math. 245 Boca Raton, 2006, pp. 337-347.
---
abstract: 'One of the hallmark experiments of quantum transport is the observation of the quantized resistance in a point contact formed with split gates in GaAs/AlGaAs heterostructures [@van1991quantum; @wharam1988one]. Being carried out on a single material, they represent in an ideal manner equilibrium reservoirs which are connected only through a few electron mode channel with certain transmission coefficients [@van1991quantum]. It has been a long standing goal to achieve similar experimental conditions also in superconductors [@beenakker1991josephson], only reached in atomic scale mechanically tunable break junctions of conventional superconducting metals, but here the Fermi wavelength is so short that it leads to a mixing of quantum transport with atomic orbital physics [@scheer1997conduction]. Here we demonstrate for the first time the formation of a superconducting quantum point contact (SQPC) with split gate technology in a superconductor, utilizing the unique gate tunability of the two dimensional superfluid at the LaAlO$_3$/SrTiO$_3$ (LAO/STO) interface [@reyren2007superconducting; @caviglia2008electric; @goswami2016quantum]. When the constriction is tuned through the action of metallic split gates we identify three regimes of transport: (i) SQPC for which the supercurrent is carried only by a few quantum transport channels. (ii) Superconducting island strongly coupled to the equilibrium reservoirs. (iii) Charge island with a discrete spectrum weakly coupled to the reservoirs. Our experiments demonstrate the feasibility of a new generation of mesoscopic all-superconductor quantum transport devices.'
author:
- Holger Thierschmann
- Emre Mulazimoglu
- Nicola Manca
- Srijit Goswami
- 'Teun M. Klapwijk'
- 'Andrea D. Caviglia'
bibliography:
- 'Bibliography\_full.bib'
title: 'Superconducting quantum point contact with split gates in the two dimensional LaAlO$_3$/SrTiO$_3$ superfluid'
---
[^1]
[^2]
![**Split gate device: (a)** Generalized sketch of the device. At the interface between the STO substrate and the 12 unit cell (u.c.) layer of crystalline LAO (c-LAO) the superconducting 2DES (blue) is formed, which can be tuned insulating (shaded blue) locally under the split gates (yellow), thus forming a superconducting constriction. **(b)** False color AFM image showing the device layout. The potential of the split gates (yellow) L and R is controlled with the voltages V$_L$ and V$_R$, respectively. V$_L$ is kept at -1V. The conductive 2DES is formed in regions with c-LAO (blue). In areas which are protected with an AlO$_2$ hard mask, LAO growth is amorphous (a-LAO, turquoise). The thin gate spanning the channel is not used in the experiments. The scale bar corresponds to 1 $\mu$m. **(c)** High bias (I=10 nA) differential resistance $r$ as a function of V$_R$. (o) - (iii) indicate the different regimes of transport (see text). V$_c$ denotes the formation of the constriction. []{data-label="fig:1"}](Fig1){width="1\linewidth"}
Various attempts have been made to combine the desired gate-tunability of the low electron density semiconductor with the use of conventional superconductors. However, these hybrid devices have introduced, compared to the GaAs/AlGaAs normal quantum transport case, the very important and yet very difficult to control influence of the interface between the two dissimilar materials [@takayanagi1995observation]. This makes the results dependent on the complexities of the proximity effect and thus complicates their interpretation. In principle, a new path has become available when it was discovered that in the two dimensional electronic system (2DES) at the LAO/STO interface superconductivity becomes suppressed when the electron density $n$ is reduced below a critical value $n_\text{c}$, for example by means of a gate voltage [@caviglia2008electric; @goswami2015nanoscale]. The Fermi-wavelength $\lambda_F$ in this system can be as large as $ 30 \text{ to } 50 $ nm [@tomczyk2016micrometer] and ballistic transport in the normal state has been demonstrated [@gallagher2014gate; @tomczyk2016micrometer]. The superconducting coherence length is about $\xi = 100 $ nm [@reyren2007superconducting]. This corresponds to spatial dimensions which are commonly achieved with present day lithography techniques. The creation of an SQPC with split gates in LAO/STO should therefore be within reach \[cf. Fig.\[fig:1\](a)\]. This approach can also offer insight into the nature of superconducting pairing at oxide interfaces. Unconventional pairing was recently suggested [@stornaiuolo2017signatures; @fidkowski2013magnetic] in light of a number of experimental observations, including strong spin orbit coupling [@caviglia2010tunable], co-existence of ferromagnetism and superconductivity [@bert2011direct; @dikin2011coexistence; @li2011coexistence], indications for electron pairing without macroscopic phase coherence [@cheng2015electron; @cheng2016tunable; @richter2013interface; @tomczyk2016micrometer] and a non-trivial relation between the critical temperature $T_\text{c}$ and charge carrier density $n$ [@caviglia2008electric; @richter2013interface].
Here we present experiments that demonstrate the formation of an SQPC with split gates in the LAO/STO superfluid. Our sample is fabricated following the procedure described by Goswami et al. [@goswami2016quantum] \[see Methods\]. Figure \[fig:1\](b) presents a false color atomic force microscope (AFM) image of the device layout. The metallic split gates (yellow) L and R cover the full width of the 5 $\mu$m wide 2DES (blue), except for a 150 nm region at its center. Transport experiments are performed in a current bias configuration (unless stated otherwise) at temperature $T_\text{base} <$ 40 mK. The resistively measured transition to the superconducting state is observed at $T_c \approx$ 100 mK \[see Methods and SI\]. Because of gate history effects, we carry out the experiment by putting electrode L on a fixed gate voltage ($V_\text{L}$ = -1V) to ensure depletion and we tune the constriction by only varying the voltage $V_R$ applied to gate R \[see SI\].
We expect the following scenario: When $V_\text{R}$ is changed towards negative values the charge carrier density $n$ gets reduced locally underneath the gate and gets closer to the critical density $n_c$ at which superconductivity becomes suppressed. At a certain gate voltage $V_\text{R} = V_\text{c}$ the condition $n = n_\text{c}$ is reached and a supercurrent can flow only through the constriction between the tips of gates, thus forming a weak link between the superconducting reservoirs. Outside this weak link, under the gates, the system acts as an insulator [@caviglia2008electric]. The number of transport modes available in the weak link is determined by its effective width. The constriction width is reduced when V$_R$ is further decreased and therefore the number of transmission channels decreases which is expected to lead to a step-wise reduction of the critical current $I_\text{c}$ [@beenakker1991josephson]. For V$_R \ll V_c$ transport will be dominated by a low transmissivity and the current is pinched off.
![ **Transport regimes in the constriction** **(a)** Differential resistance $r$ versus current $I$ for gate voltage $V_\text{R} = 0$ to $-3$V. Gate L is kept at $V_L = -1$ V. The four regimes of transport (o) to (iii) are indicated. I$_P$, reminiscent of the critical current $I_c$ of the weak link, is indicated. **(b)** Same data as in (a) but with voltage drop $V$ on the vertical axis and differential conductance $g$ represented by the color scale. V$_P$ denotes the voltage drop at I$_P$. Dotted lines in regime (iii) indicate conductance diamonds.[]{data-label="fig:overview"}](Fig2){width="1\linewidth"}
In order to study this scenario, we record a series of $V$-$I$ curves and vary $V_\text{R}$ from 0 to -3V. Panoramic overviews of the results are given in Fig. \[fig:overview\] (a) and (b) in color plots. Figure \[fig:overview\](a) presents the differential resistance $r=$d$V/$d$I$ and Fig. \[fig:overview\](b) shows the differential conductance $g=$d$I$/d$V$ with the current $I$ and voltage drop $V$ on the vertical axis, respectively. It can be seen that the constriction undergoes four different regimes of transport \[labelled (o) and (i) to (iii) in the figures\] as $V_\text{R}$ is varied from 0 to -3 V. Regime (o) (ranging from $V_\text{R}$ = 0 to -0.9 V) corresponds to the open current path configuration with $V_\text{R}>V_\text{c}$. A sharp peak in $r$ is visible at $I = \pm$ 5 nA, labelled $I_P$, which is reminiscent of a critical current $I_c$. Correspondingly, a dip occurs in $g$ at $V_\text{p}=\pm 44~\mu$V (Fig.\[fig:overview\](b)). At V$_R = V_\text{c} \approx -0.9$ V the critical density $n_\text{c}$ is reached. Here $I_p$ drops significantly because the current path becomes confined. At high currents in Fig.\[fig:overview\](a), as shown for $I = 10$ nA in Fig.\[fig:1\](c), this point of confinement is apparent in a step increase in $r$, similar to the well-known behavior in semiconductor heterostructures [@zheng1986gate]. It marks the transition to regime (i) ($V_\text{R}$ = -0.9 to -1.6 V). In this regime $I_P$ decreases when $V_\text{R}$ is reduced indicating the gate tunable weak link. In regime (ii) ($V_\text{R}$ = -1.6 to -2.4 V) regions of high resistance at zero bias appear and disappear periodically. As we will show below, this can be attributed to the emergence of a conductive island which dominates transport through the constriction. In regime (iii) ($V_\text{R}$ = -2.4 to -3 V) the device always exhibits a high resistance at zero bias. Figure \[fig:overview\](b) reveals that this regime is controlled by conductance diamonds (indicated with dashed lines).
Let us start the discussion with the weak link regime (i). Here we observe a rounded supercurrent and an excess current $I_\text{exc} \approx$ 1nA \[Fig.\[fig:SQPC\] (a)\]. $I_P$, reminiscent of the critical current $I_c$, changes from 3.7 to 3.0 nA \[Fig.\[fig:SQPC\](b)\] when V$_\text{R}$ is varied. The voltage $V_P = 44~\mu$V \[cf. Fig. \[fig:SQPC\](c)\] can be related to the superconducting gap $V_P/2 \approx \Delta \approx 22 ~\mu$eV, which is compatible with the value inferred from the resistively measured $T_c$, $\Delta_{Tc}=1.76k_BT_c = 15~\mu$eV. The high bias conductance $g_\text{n}$ \[Fig. \[fig:SQPC\](d)\] is of the order of half the quantum of conductance, changing with V$_R$ from 0.6 to 0.47 (2$e^2$/h) \[20 to 28 k$\Omega$\]. As shown by Monteiro et al. [@monteiro2017side] the phase correlation length in our 2DES is about 170 nm, whereas the lithographically determined channel-width is about 150 nm. It is therefore reasonable to interpret the data from a quantum transport perspective. For low carrier densities the Fermi wavelength $\lambda_F$ is several 10 nm [@tomczyk2016micrometer]. This and the relatively low value of $g_\text{n}$ suggest that we have only a few modes with a finite transmissivity in the channel. With increasing $V_R$ we do not observe the expected quantum transport step-like features in $g_\text{n}$, although the trace in Fig.\[fig:1\](e) is obviously not monotonous. In order to extract the transmissivity of the weak link we calculate from $I_\text{exc}$ and $g_\text{n}$ the barrier strength Z as a function of V$_R$ using the BTK-formalism for an S-S interface [@blonder1982transition]. Z is related to the normal state transmission probability $\tau$ by $\tau = (1+Z^2)^{-1}$. In this manner we obtain $Z\approx 0.8$ and, correspondingly, $\tau \approx 0.6$ \[Fig.\[fig:SQPC\](e)\]. Comparison with the measured g$_\text{n}$ thus suggests a total mode conductance of $2e^2/h$, such that $g(\tau=0.6) = 0.6\times2e^2/h$, close to the measured values. If we follow recent experiments by Gallagher et al. [@gallagher2014gate] who observed e$^2$/h modes in a normal state QPC, we could also consider only one mode with a higher transmissivity. However, this would require a re-analysis of the excess current based on an unconventional order parameter.
If we continue the discussion in the conventional picture, for a SQPC with perfect transmission ($\tau =1$) Beenakker and van Houten [@beenakker1991josephson] found that the critical current is given by $I_c=N e\Delta(\hbar)^{-1}$, where $N$ was chosen to represent the number of spin degenerate modes (which contribute each 2$e^2/h$ to the normal conductance). Using this relation and including the obtained $\tau$ as a pre-factor, we can calculate the maximum supercurrent expected for our device, which yields $I_c \approx 3$ nA. This is in good agreement with the measured $I_P$, as can be seen in the bottom panel in Fig. \[fig:SQPC\](e). For comparison we also plot the expected $I_c$ for a diffusive junction [@beenakker1992three], which clearly gives much smaller values. The critical current $I_c\approx 3$ nA implies a Josephson coupling energy $E_J= 6.2~\mu $eV. This is comparable to the bath temperature, $k_BT_\text{base}=3.4~ \mu $eV. Therefore, as for the few-mode atomic scale point contacts [@goffman2000supercurrent; @chauvin2007crossover], the supercurrent is rounded.
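The quoted numbers can be reproduced with a short back-of-the-envelope computation; the sketch below assumes $\Delta=22~\mu$eV, $\tau=0.6$ and a single spin-degenerate mode, and is meant only to recover the order of magnitude of the values cited above, not to reproduce the analysis behind the figures.

```python
# Back-of-the-envelope check of the SQPC numbers (assumed inputs, illustration only).
from math import pi

e, hbar, k_B = 1.602e-19, 1.055e-34, 1.381e-23
Delta = 22e-6 * e            # superconducting gap in J
tau, N = 0.6, 1              # transmissivity and number of spin-degenerate modes
G_n = tau * N * 7.748e-5     # normal-state conductance tau*N*(2e^2/h), in S

I_c_ballistic = tau * N * e * Delta / hbar          # Beenakker & van Houten, tau prefactor added
I_c_diffusive = 1.32 * pi * Delta / (2 * e) * G_n   # diffusive point contact (Beenakker)
E_J = hbar * I_c_ballistic / (2 * e)                # Josephson coupling energy
print("I_c (ballistic) = %.1f nA" % (I_c_ballistic * 1e9))
print("I_c (diffusive) = %.1f nA" % (I_c_diffusive * 1e9))
print("E_J = %.1f ueV,  k_B*T_base = %.1f ueV (T_base = 40 mK)"
      % (E_J / e * 1e6, k_B * 0.040 / e * 1e6))
```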
![**Regimes (iii) and (ii): Conductive island.** **(a)** Conductance diamonds measured in regime (iii) with voltage bias V on the vertical axis. $E$ denotes the addition energy of the island. The diamonds exhibit a gap of $V_\text{gap} \approx \pm$ 30 $\mu V$. Green arrows indicate signatures of excited states due to quantum confinement. Note that due to gate history effects, this regime occurs at a lower V$_\text{R}$ range. **(b)** $g$ measured regime (ii). For small voltages $g$ exhibits peaks with periodicity $\Delta V_s = $ 70 mV. For $V>15~\mu $V the periodicity increases by a factor 2, $\Delta V_n =$ 35 mV (dashed white lines). Black, red triangles and blue, green triangle indicate the line cuts shown in (c) and (d), respectively. **(c)** vertical line cuts from (b) at V$_\text{R}$ = -1.88 V and V$_\text{R}$ = -1.92 V. **(d)** Horizontal line cuts from (b) at V=0 (top panel) and V$= 20~ \mu$V (bottom panel). The change in periodicity by a factor 2 suggests a change in number of transferred charges from N=2 to N=1, indicative for a superconducting island. []{data-label="fig:dot"}](Fig3){width="1\linewidth"}
Let us now turn to the regime of conductance diamonds (CDs), regime (iii). Figure \[fig:dot\](a) presents a detailed measurement of $g$ in this region. Note that this measurement was carried out in a voltage bias configuration. We observe a series of CDs whose size $E$ on the (vertical) voltage axis is of the order of $80$ to $150~\mu$V. In gating experiments with non-superconducting materials, for instance in narrow semiconductor channels or graphene nano ribbons, CDs are known to occur in the low density limit because of puddles of charge carriers which form due to small inhomogeneities in the potential landscape, thus leading to quantum dot-like transport behavior [@staring1992coulomb; @mceuen1999disorder; @escott2010resonant; @zwanenburg2013silicon; @todd2008quantum; @liu2009electrostatic]. From this analogy we infer that in regime (iii) the superfluid inside the constriction is at the transition to full depletion. The size of the CDs directly reflects the addition energy $E$ that has to be paid in order to change the island occupation number and thus, to enable transport. $E$ is composed of various contributions of which the most dominant ones typically are the Coulomb charging energy $U = Ne^2(2C_\Sigma)^{-1}$ \[with $C_\Sigma$ being the total capacitance of the island and $N$ the number of charges to be added or removed\] and the energy level quantization due to quantum confinement $\delta\varepsilon$. For quantum dots in LAO/STO [@maniv2016tunneling; @cheng2015electron; @cheng2016tunable], Coulomb contributions are small because the STO substrate exhibits an extremely large dielectric constant $\epsilon_r=25000$ at low T and for small electric fields [@neville1972permittivity] which suppresses Coulomb repulsion. For our device, however, the fields originating from the split gates can not be neglected [@monteiro2017side]. We have performed simulations of the dielectric environment in the region surrounding the constriction using finite element techniques \[see SI\]. Our results indicate that the geometry of the gates leads to a strong field focusing effect which reduces $\epsilon_r$ in the constriction such that Coulomb repulsion becomes relevant. The numerical simulations yield charging energies of $U\approx 100~\mu$eV for an island with $\sim$ 50 nm radius, compatible with our experiment. The data in Fig. \[fig:dot\](a) further show signatures of transport through excited states originating from quantum confinement, as can be seen from the fine structure of conductance lines parallel to the diamond edges between two adjacent diamonds \[green arrows in Fig.\[fig:dot\](a)\] [@hanson2007spins; @zwanenburg2013silicon]. This allows us to estimate $\delta\varepsilon \approx 10 \text{ to } 20~ \mu$eV, which would lead to an island size of $\sim$ 80 nm, similar to the size obtained from the finite element simulations of the electrostatic properties. These values are also compatible with the electronic inhomogeneities typically observed in LAO/STO, which correlate with structural effects [@bert2011direct; @honig2013local; @kalabukhov2009cationic].
The island couples to superconducting reservoirs, which can be inferred from the voltage gap $V_\text{gap} \approx \pm 30~ \mu$V that separates the CDs in positive and negative bias direction [@tuominen1992experimental; @de2010hybrid]. As expected, $V_\text{gap}$ vanishes when a perpendicular magnetic field B=1T is applied \[see SI\]. We further observe pronounced negative differential conductance (NDC) along the edges of the CDs, which can be related to the sharp changes in density of states in the superconducting reservoirs around $\pm \Delta$. Since NDC occurs symmetrically for both positive and negative bias, we conclude that both reservoirs exhibit a superconducting energy gap \[see SI\]. When we compare the value of $V_\text{gap}=2\Delta$ with the superconducting gap in the reservoirs, $\Delta \approx 22~\mu$eV, we obtain reasonable agreement. We note that in this regime (iii) the level spacing of quantum states on the island is of the same order as the superconducting gap, $\delta \varepsilon\sim\Delta$. We are therefore in the limit of Anderson’s criterion of superconductivity at small scales ($\delta \varepsilon < \Delta$) [@anderson1959theory; @van2001superconductivity].
Finally we turn to the strong coupling regime (ii) \[Fig. \[fig:dot\](b)\]. The pattern of gapped CDs is not visible here. Instead we observe zero bias conductance peaks which are of the order of the quantum of conductance, $g \geq (2e^2/h)$, \[cf. Fig.\[fig:dot\](c), red curve\]. They alternate with regions where $g$ is suppressed. This suggests that the island is more transparent in this regime, allowing for Cooper pair transport [@tinkham1996introduction] at zero bias. The peaks in $g$ occur periodically in $\Delta V_\text{R}$, with a periodicity $\Delta V_\text{s} = 70$ mV \[Fig. \[fig:dot\](d), top panel\]. Above a certain bias voltage $V\approx \pm 15 \mu$V, the periodicity changes by a factor 2, $\Delta V_\text{n} = 35$ mV \[Fig. \[fig:dot\](d), bottom panel\]. This suggests that the parity of the island influences its energy state, as expected for a superconducting island [@tuominen1992experimental]. In its ground state the island hosts Cooper pairs (even parity) and thus exhibits a charging energy $2U$, reflecting the Cooper pair’s charge 2$e$ ($N$=2). Above a critical bias voltage the odd-parity state becomes available for quasi particles in the reservoirs thus enabling single electron transport across the island ($N$=1). This results in period doubling of the Coulomb blockade oscillations. Our data therefore suggest that in the strong coupling regime (ii) the island is in a superconducting state, thus forming a superconducting quantum dot (SQD).
We conclude that we have realized for the first time a superconducting quantum point contact with a split-gate technique, whose superconducting and normal-state transport is independent of unknown material interfaces. The present technology can serve as a basis for future experiments which will make it possible to evaluate the microscopic properties of the LAO/STO interface superconductivity and the properties of genuine superconducting quantum point contacts as originally envisioned [@beenakker1991josephson]. It may furthermore enable the investigation of nanoscale superconductivity in few-electron quantum dots.
Acknowledgments {#acknowledgments .unnumbered}
=================
We thank L.M.K. Vandersypen for comments on our manuscript. This work was supported by The Netherlands Organisation for Scientific Research (NWO/OCW) as part of the Frontiers of Nanoscience program, the Dutch Foundation for Fundamental Research on Matter (FOM) and the European research council (METIQUM, grant no. 339306). T.M.K. further acknowledges support from the Ministry of Education and Science of the Russian Federation under Contract No. 14.B25.31.007.
Author contributions {#author-contributions .unnumbered}
====================
S.G., T.M.K. and A.D.C. conceived the experiment. E.M. fabricated the samples. E.M. and H.T. carried out the experiments. H.T. and T.M.K. analyzed the data with input from E.M. N.M. carried out the finite element simulations. H.T., E.M. and T.M.K. wrote the manuscript. All authors commented on the manuscript. A.D.C. supervised the project.
Methods
=======
Device Fabrication
------------------
We use single crystal TiO$_2 $ terminated, (001) oriented SrTiO$_{3} $ (Crystec GmbH) as a substrate without further modification. The fabrication involves three electron beam lithography steps (EBL). The first EBL defines the positions of reference markers which are obtained by Tungsten *(W)* sputtering and consecutive lift-off. The second EBL step patterns the geometry of the device: Those regions which are to remain insulating are covered with 20 nm of sputtered AlO$_{2}$ (lift-off process in warm (50 $^\circ$C) acetone). Next, the LaAlO$_{3}$ (LAO) layer is grown by means of pulsed laser deposition (PLD) at 770 $^\circ$C with an O$_{2}$ pressure of $p_\text{O2}= 6 \times 10^{-5}$ mbar. Only in those regions which are not covered by the AlO$_2$ hard mask growth is crystalline such that the STO surface is covered with a 12 unit cell (5 nm) LAO layer, giving rise to the 2DES at the interface. In all other regions the AlO$_2$ mask prevents the formation of the 2DES and the LAO layer is amorphous. Growth is monitored in-situ by reflection high energy electron diffraction *(RHEED)* which confirms layer-by-layer growth. After LAO deposition, the sample is annealed for one hour at 600 $^\circ$C and at a pressure of $p_\text{O2}=300$ mbar in order to suppress the formation of O$_{2}$ vacancies. The final EBL step defines the pattern of gate electrodes. Polymer residuals are removed with an oxygen plasma. Evaporation of 100 nm gold *(Au)* is followed by gentle lift-off in acetone. The sample is mounted in a chip carrier with silver paint, serving as a back gate. Ultrasonic wedge bonding provides Ohmic contacts to the 2-dimensional electron system (2DES).
Electrical measurement setup and device characterization
--------------------------------------------------------
All measurements (unless stated otherwise) are performed using dc electronics, with the current sourced at reservoir S of the sample and drained at reservoir D. The resulting voltage drop V is probed at separate contacts in the respective reservoirs. The dilution refrigerator is equipped with copper powder filters, which are thermalized at the mixing chamber, and Pi-filters at room temperature.
The carrier density in the 2DES is adjusted globally by applying a negative back gate voltage $V_\text{BG} = -1.875$ V, which corresponds to a reduced density compared to $V_\text{BG}=0$. We determine the carrier density from Hall measurements performed at 300 mK using voltage probes on opposite sides of the reservoir with width w = 150 $\mu$m. The longitudinal resistance is determined from voltage measurements between probes separated by l= 112.5 $\mu$m. This yields a carrier density $n \approx 3 \times 10^{13}$ cm$^{-2}$ and a mobility $\mu \approx$ 800 cm$^2$(Vs)$^{-1}$. For this carrier density we observe the resistively measured superconducting transition at $T_c\approx 100$ mK, which corresponds to a BCS gap $\Delta_{Tc} = 15~\mu$eV.
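These characterization numbers can be cross-checked with a short calculation; the sheet resistance below is a derived, illustrative quantity that is not quoted in the text.

```python
# Consistency sketch of the quoted characterization values (derived numbers, illustration only).
e, k_B = 1.602e-19, 1.381e-23

n = 3e13 * 1e4          # carrier density, m^-2
mu = 800 * 1e-4         # mobility, m^2/(V s)
T_c = 0.100             # K

R_sheet = 1.0 / (n * e * mu)     # sheet resistance implied by n and mu (not stated in the text)
Delta_BCS = 1.76 * k_B * T_c     # BCS gap estimate from the resistively measured T_c
print("sheet resistance ~ %.0f Ohm/sq" % R_sheet)
print("Delta_Tc = 1.76 k_B T_c = %.1f ueV" % (Delta_BCS / e * 1e6))
```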
Supplementary Information
=========================
Device Fabrication
------------------
![**Device fabrication:** **(a)** Fabrication flow. The color legend indicates the deposited materials at the different stages. **(b)** The in-situ RHEED oscillations monitored during growth. They confirm layer-by-layer growth. The insets show the RHEED diffraction pattern before and after the deposition of the LAO layers. **(c)** Optical image of the device after the lift off process. Ohmic contacts and the gate electrodes are labelled. **(d)** Atomic Force Microscopy (AFM) image showing the top gate architecture of the device.[]{data-label="fig:SI_Fab"}](FigSI_Fab){width="0.9\linewidth"}
Resistively measured T$_c$
--------------------------
Additional data on the weak link regime (i)
-------------------------------------------
![**Additional data** from the weak link regime (i) discussed in the main text, showing a set of full V-I curves for different V$_\text{R}$. []{data-label="fig:WL"}](FigWL){width="0.5\linewidth"}
Calculation of Z, $\tau$ and I$_c$ in the weak link regime (i)
--------------------------------------------------------------
We use the Blonder-Tinkham-Klapwijk (BTK) formalism described in Ref. [@blonder1982transition] to calculate the barrier parameter Z from the excess current and the high bias conductance. The excess current $I_\text{exc}$ at an S-S interface is related to the Z parameter by
$$I_{exc} = 2\frac{g_n}{e(1-B(\infty))} \times \int_0^\infty dE (A(E)-B(E)+B(\infty)),$$
with $$\begin{aligned}
&A=\frac{\Delta^2}{E^2+(\Delta^2-E^2)(1+2Z^2)^2}, \\
&B=1-A,\end{aligned}$$ for $E<\Delta$, and $$\begin{aligned}
&A=\frac{u_0^2 v_0^2}{\gamma^2},\\
&B=\frac{(u_0^2-v_0^2)^2Z^2(1+Z^2)}{\gamma^2},\end{aligned}$$ for $E>\Delta$. Furthermore, $B(\infty)=\frac{Z^2}{1+Z^2}$, and
$$\begin{aligned}
&u_0^2 = \frac{1}{2}\Big(1+\big((E^2-\Delta^2)/E^2\big)^{1/2}\Big), \\
&v_0^2 = 1-u_0^2,\\
&\gamma^2 = \big(u_0^2 +Z^2(u_0^2-v_0^2)\big)^2.\end{aligned}$$
We determine $I_\text{exc}$ from the experimental data by extrapolating the high bias conductance $g_n$ at I = 9 nA towards V=0. This yields the data shown in the top panel of Fig. 4(e) in the main text. Combining this with the respective $g_n$ for each gate voltage and using $\Delta$=22 $\mu$eV, as extracted from the dI/dV vs V curves, allows us to calculate the corresponding Z parameter, which leads to the curve shown in Fig. \[fig:Z\]. By comparison we are then able to find the Z parameter as a function of $V_R$ and calculate the corresponding transmission coefficient in the normal state, $\tau = 1/(1+Z^2)$ [@blonder1982transition].
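The sketch below is an independent, purely illustrative re-implementation of this procedure (it is not the code used to produce the figure; the integration cutoff and grid are arbitrary choices). It tabulates $I_\text{exc}/g_n$ as a function of Z for $\Delta=22~\mu$eV, from which Z can be read off by comparison with the measured values.

```python
# Rough numerical sketch of the Z -> I_exc/g_n mapping from the S-S expressions above.
def A_B(E, Z, Delta=1.0):
    # BTK-type coefficients; E is measured in units of Delta
    if E < Delta:
        A = Delta**2 / (E**2 + (Delta**2 - E**2) * (1 + 2 * Z**2)**2)
        return A, 1.0 - A
    u0sq = 0.5 * (1.0 + ((E**2 - Delta**2) / E**2) ** 0.5)
    v0sq = 1.0 - u0sq
    gamma_sq = (u0sq + Z**2 * (u0sq - v0sq)) ** 2
    return u0sq * v0sq / gamma_sq, (u0sq - v0sq)**2 * Z**2 * (1 + Z**2) / gamma_sq

def iexc_over_gn(Z, Delta_eV=22e-6, Emax=200.0, steps=400000):
    # returns I_exc/g_n in volts:  (2 / (e (1 - B_inf))) * integral (A - B + B_inf) dE
    B_inf = Z**2 / (1 + Z**2)
    dE = Emax / steps
    total = 0.0
    for i in range(steps):
        E = (i + 0.5) * dE                  # midpoint rule avoids E = Delta exactly
        A, B = A_B(E, Z)
        total += (A - B + B_inf) * dE
    return 2.0 * total * Delta_eV / (1.0 - B_inf)

for Z in (0.2, 0.5, 0.8, 1.0, 1.5):
    print(Z, round(iexc_over_gn(Z) * 1e6, 1), "uV")
```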
![**Calculated Z-parameter:** Experimentally determined $I_\text{exc}/g_n$ and the corresponding Z-parameter determined for an S-S interface using BTK theory.[]{data-label="fig:Z"}](Zparameter "fig:"){width="0.4\linewidth"}
For calculation of the critical current $I_c$ of a disordered point contact in the diffusive transport regime we apply the equation provided by Beenakker [@beenakker1992three]
$$I_c = 1.32\frac{\pi \Delta}{2e} \langle G\rangle$$
where we have used the experimentally determined $g_n$ for the average conductance $\langle G\rangle$ and $\Delta=22~\mu$eV. This yields the blue dashed curve shown in the bottom panel in Fig.4(e) in the main text.
Estimating the size of the island
---------------------------------
### Numerical simulations of the electrostatic environment in the constriction
We model the dielectric environment of the constriction by finite element analysis. For that purpose we developed a 3D model of our device in COMSOL5.2. The simulation is developed similarly to that reported in [@monteiro2017side], based on the electrostatic module. The structure is modeled with 3 layers stacked on top of each other. At the top there is a 5 nm-thick LaAlO$_3$ layer, having a dielectric constant of 24 [@krupka1994dielectric] and insulating character. The middle layer is the 2DES, modelled as a 10-nm thick metal with conductivity calculated from the experimental data. The bottom layer is the 1$\mu$m-thick SrTiO$_3$, which is an insulator with a field-dependent dielectric constant described by the Landau-Ginzburg-Devonshire theory [@landau1984electrodynamics; @ang2004dc]:
$$\epsilon_{STO} = 1+ \frac{B}{[1+(E/E_0)^2]^{2/3}},$$
with E being the local electric field, $B = 25000$ and $E_0 = 82000$ V/m [@stornaiuolo2014weak]. The split gates L and R are modelled as 100 nm thick triangular Au electrodes. The tips of the split gate are separated from each other by the distance $D=150$ nm. Gate voltages $V_L$ and $V_R$ are applied to the respective gates with respect to the drain reservoir, which is kept at ground potential. The source reservoir is voltage biased. The island is modelled as a conductive disc with diameter $d$, which is separated from source and drain by gaps of width $g$.
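The field dependence entering the model can be evaluated directly; the field values in the following sketch are arbitrary sample points chosen for illustration and are not output of the finite element simulation.

```python
# Illustrative evaluation of eps_STO(E) = 1 + B/[1 + (E/E0)^2]^(2/3) with B = 25000, E0 = 8.2e4 V/m.
B, E0 = 25000.0, 8.2e4

def eps_sto(E):
    return 1.0 + B / (1.0 + (E / E0) ** 2) ** (2.0 / 3.0)

for E in (0.0, 1e4, 1e5, 1e6, 1e7):   # sample local electric fields, V/m
    print("E = %.0e V/m  ->  eps_r = %.0f" % (E, eps_sto(E)))
```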
![**Finite element analysis model of the device.** **(a)** Layer stack and geometry.**(b)** spacial map of the dielectric constant in the constriction at low temperature for gate voltages $V_\text{L}=-1$V, $V_\text{R}=0$V and **(c)** for $V_\text{L}=-1$V, $V_\text{R}=0$V.[]{data-label="fig:Model"}](FigModel){width="0.6\linewidth"}
Figure \[fig:Model\](b) shows the spatial map of $\epsilon_r$ at the LAO/STO interface for $V_L = -1V$ and $V_R=0$. This visualizes the huge $\epsilon_r$ in large parts of the sample. In Fig.\[fig:Model\](c) the $\epsilon_r$ -map is shown for both gates at the same voltage ($V_L = V_R=-1V$). This results in reduction of the dielectric constant by more than one order of magnitude in the constriction and thus in the vicinity of the island.
The single electron charging energy of the island $E_C$ is extracted by calculating the voltage between source and drain that is required to change the polarization on the island by $e$ (electronic charge) while $V_{L,R} = -1 V$. This calculation is carried out for different values of the island’s diameter $d$ and the gap $g$. The result is shown in Fig. \[fig:Model2\](a). Charging energies of 60 $\mu$eV \[marked by the dashed line in Fig. \[fig:Model2\](a)\] to 100 $\mu$eV are obtained for island diameters of 70 nm to 120 nm if one allows $g$ to vary between approximately 3 nm and 6 nm.
![**Analysis of the Coulomb Diamonds:** **(a)** Results of the numerical simulation for $V_\text{L}=-1$ V, $V_\text{R}=-1$ V showing the single electron charging energy as a function of island diameter. Colors indicate the assumed gap size ranging from $g=2$ to $6$ nm. **(b)** Close-up of the conductive region between two adjacent Coulomb diamonds, extracted from the main text. The borders of the diamonds are denoted with blue. Green lines indicate transport signatures of excited states on the island due to quantum confinement. The energy separations between different states are indicated with $a=14~\mu$eV, $b=17~\mu$eV, $c=21~\mu$eV.[]{data-label="fig:Model2"}](FigModel_n1V){width="0.7\linewidth"}
### Analysis of excited states signatures
We can further use the transport signatures of (excited) quantum states observed in the conductance diamond regime (iii) to estimate the size of the island.
A close up of the voltage range where these features are observed is depicted in Fig. \[fig:Model2\](b). Only positive bias voltages are shown. The delimiting lines of the Coulomb diamonds are indicated with blue lines. Transport signatures of excited states of the island due to quantum confinement can be observed in the region between two adjacent Coulomb diamonds. They appear as lines of enhanced conductance which run in parallel with the borders of the Coulomb diamonds [@hanson2007spins; @zwanenburg2013silicon]. In Fig. \[fig:Model2\](b) they are denoted with green lines. The separation between these lines along the (vertical) bias voltage axis indicates their difference in energy, $\delta \varepsilon$. As an example, the energy separation of 4 such lines is indicated in Fig. \[fig:Model2\](b) with $a=14~\mu$eV, $b=17~\mu$eV, $c=21~\mu$eV. Using a simple particle-in-a-box picture, we can estimate the spatial dimension $d$ required to obtain quantization energies of this order,
$$d=\sqrt{\frac{h^2}{8m\delta \varepsilon}},$$
where $m = 0.7m_e$ is the effective electron mass in the LAO/STO 2DES, $m_e$ is the bare electron mass and $h$ is Planck’s constant. Approximating by using $\delta \varepsilon = 20~\mu$eV yields an island radius of approximately 80 nm. This is in the same range as the result obtained from the purely electrostatic considerations above.
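The arithmetic of this estimate is collected in the short sketch below (standard constants and the simple one-dimensional box model assumed in the text).

```python
# Quick check of the island size implied by the observed excited-state spacings.
from math import sqrt

h = 6.626e-34           # Planck's constant, J s
m = 0.7 * 9.109e-31     # effective mass, 0.7 m_e
e = 1.602e-19

for delta_eps_ueV in (14.0, 17.0, 21.0):
    delta_eps = delta_eps_ueV * 1e-6 * e
    d = sqrt(h**2 / (8 * m * delta_eps))
    print("delta_eps = %.0f ueV  ->  d = %.0f nm (radius ~ %.0f nm)"
          % (delta_eps_ueV, d * 1e9, d * 1e9 / 2))
```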
This analysis clearly shows that the energy scale of the conductance diamonds is at least a factor of 3 larger than that observed for the electronic orbital contributions originating from quantum confinement. Moreover, the numerical simulations clearly show that Coulomb repulsion cannot be neglected in the device. The results strongly suggest that Coulomb blockade is the main contribution to the observed conductance diamonds. Using both signatures to estimate island size independently yields consistent results.
Coulomb diamond regime (iii) for large magnetic field
-----------------------------------------------------
Figure \[fig:Bfield\] (a) shows the series of Coulomb diamonds discussed in the main text with a perpendicular magnetic field B=1 T applied. Since $B$ is much larger than the typical critical magnetic field in superconducting LAO/STO 2DES, $B_c \approx 0.2T$, superconducting transport in the leads is suppressed. Therefore, the voltage gap $V_\text{gap} \approx 30~\mu$eV observed for $B=0$ vanishes. The zero bias conductance is given in Fig. \[fig:Bfield\](b). Interestingly, the amplitudes appear to alternate in an odd-even manner,
resembling the parity effect in superconducting islands with two normal electrodes due to quasi-particle poisoning [@hergenrother1994charge] or photon-assisted tunneling processes from radiation leaking through an imperfect shielding [@hergenrother1995photon]. This could suggest that despite the high magnetic field paired electrons are still present on the island. In the light of recent publications [@cheng2015electron; @cheng2016tunable; @tomczyk2016micrometer] this could hint at another signature of electron pairing without macroscopic superconductivity in LAO/STO.
![**Coulomb diamonds in a magnetic field:** **(a)** Coulomb diamonds discussed in the main text with a perpendicular magnetic field B=1T applied. It can be seen that the gap $V_\text{gap}$ has vanished. **(b)** Zero bias conductance as a function of V$_R$ extracted from (a).[]{data-label="fig:Bfield"}](Fig_Bfield){width="0.6\linewidth"}
Negative differential resistance and sub gap features in the Coulomb diamond regime (iii)
-----------------------------------------------------------------------------------------
We observe clear signatures of negative differential conductance (NDC) at the edges of the Coulomb diamonds. They occur for both positive and negative bias voltage, which indicates the presence of a superconducting gap in both reservoirs. As an example, a close up of the X shaped structure observed around $V_R=-0.75$ V is presented in Fig.\[fig:X\](a). Similar features have been observed recently by Cheng et al [@cheng2016tunable], who pointed out a connection to tunable electron-electron interactions on LAO/STO quantum dots. It can clearly be seen from Fig.\[fig:X\](a) that the four ’arms’ of the X intersect around V=0 ($\bigcirc$). This suggests that conductance originates from alignment of the island state with states close to the Fermi levels in the reservoirs (cf. the corresponding energy diagram in \[fig:X\](c). Each of the ’arms’ exhibits a pronounced negative differential conductance (NDC). This becomes highlighted in Fig.\[fig:X\](b) where traces of $g$ are shown that are obtained from the vertical line cuts denoted $\alpha$ and $\beta$ in Fig.\[fig:X\](a). When going from V=0 towards positive or negative bias, $g$ exhibits first a positive peak at the intersections with the X structure, followed by a change of sign and a subsequent negative signal of approximately equal magnitude \[black arrows in Fig.\[fig:X\](b)\].
![ **Negative differential resistance in the Coulomb diamonds of regime (iii)** **(a)** a close-up from Fig.4a) in the main text around $V_R=-0.75$ V. $\alpha$ and $\beta$ denote vertical line cuts shown in (b) which highlight the NDC features occurring symmetric with respect to bias voltage. The symbols ($\bigcirc$, $\Box$,$\triangle$) indicate the energy level configurations sketched in (c) and (d). []{data-label="fig:X"}](FigX){width="0.7\linewidth"}
Generally, NDC reflects the alignment of the island state with a sharp DOS peak in the reservoirs. This becomes clear when we consider the energy diagrams presented in Fig.\[fig:X\](d) which sketch the configurations indicated with $\triangle$ and $\Box$ in Fig. \[fig:X\](a). For both configurations increasing the bias voltage misaligns the island state with the DOS peak in the reservoir and thus reduces the current, leading to NDC. From the symmetric occurrence of NDC with respect to both $V$ and $V_R$ we infer that both reservoirs exhibit a sharp DOS peak. The location of the peaks close to the Fermi level could indicate that it is related to the Cooper pair DOS. Recent tunneling experiments, on the other hand, show indications of quasi particle sub gap states [@kuerten2017in], which might give rise to features similar to the ones observed here. We note that NDC at the boundaries of the Coulomb diamonds disappears if a magnetic field is applied, cf. Fig.\[fig:Bfield\]. This further confirms that the NDC observed here originates from superconductivity.
Pre-Characterization of the Gates
---------------------------------
![**Pre-characterization of the gates:** (a) Leakage currents of the gates measured at T=1K. (b) Pinch off curve at 1K for both gates L and R tuned simultaneously at 1K. **(c)** Pinch off curves when a voltage is applied only to one of the gates while the respective other one is grounded.[]{data-label="fig:Gates"}](FigGates){width="0.8\linewidth"}
Figure \[fig:Gates\] (a) shows measurements of the gate leakage currents I$ _{leak}$ measured at $T\sim 1$ K by recording the dc drain current while varying the voltage applied to the respective gate. We find that the leakage current remains small, $|I_{leak}| < 10$ pA, even at V$_\text{L,R} = -4$ V.
In order to characterize the influence of the gates, we bias the 2DES with a constant voltage ($ V $=1 mV) and measure the drain current (I$ _{D} $) while changing the applied gate voltage. Note that this was done at finite back gate voltage to reduce the global carrier density such that the insulating state (pinch-off) is reached within the available gate voltage range. Figure \[fig:Gates\] (b) shows the result obtained for applying a voltage to gates L and R simultaneously. At low temperature, we observe the behaviour shown in Fig. \[fig:Gates\](c). Varying gate R alone enables us to block transport through the channel. Gate L, in contrast, does not pinch off the current, even at $V_{L}=-4V$. This indicates that the current does not flow through the 2DES underneath gate L, even at $V_L$=0, suggesting that it is already depleted at this stage, which explains the weak effect of gate L on transport. Note that superconductivity is suppressed in these measurements due to a perpendicular magnetic field.
[^1]: these authors contributed equally; [email protected]
[^2]: these authors contributed equally; [email protected]
---
abstract: 'We present the design and performance of the LIGO Input Optics subsystem as implemented for the sixth science run of the LIGO interferometers. The Initial LIGO Input Optics experienced thermal side effects when operating with 7 W input power. We designed, built, and implemented improved versions of the Input Optics for Enhanced LIGO, an incremental upgrade to the Initial LIGO interferometers, designed to run with 30 W input power. At four times the power of Initial LIGO, the Enhanced LIGO Input Optics demonstrated improved performance including better optical isolation, less thermal drift, minimal thermal lensing and higher optical efficiency. The success of the Input Optics design fosters confidence for its ability to perform well in Advanced LIGO.'
author:
- 'Katherine L. Dooley'
- 'Muzammil A. Arain'
- David Feldbaum
- 'Valery V. Frolov'
- Matthew Heintze
- Daniel Hoak
- 'Efim A. Khazanov'
- Antonio Lucianetti
- 'Rodica M. Martin'
- Guido Mueller
- Oleg Palashov
- Volker Quetschke
- 'David H. Reitze'
- 'R. L. Savage'
- 'D. B. Tanner'
- 'Luke F. Williams'
- Wan Wu
title: Characterization of thermal effects in the Enhanced LIGO Input Optics
---
Introduction
============
The field of ground-based gravitational-wave (GW) physics is rapidly approaching a state with a high likelihood of detecting GWs for the first time in the latter half of this decade. Such a detection will not only validate part of Einstein’s general theory of relativity, but initiate an era of astrophysical observation of the universe through GWs. Gravitational waves are dynamical strains in space-time, $h =
\Delta L/L$, that travel at the speed of light and are generated by non-axisymmetric acceleration of mass. A first detection is expected to witness an event such as a binary black hole/neutron star merger [@Abadie2010Predictions].
The typical detector configuration used by current generation gravitational-wave observatories is a power-recycled Fabry-Perot Michelson laser interferometer featuring suspended test masses in vacuum as depicted in Figure \[fig:IFOschematic\]. A diode-pumped, power amplified, and intensity and frequency stabilized Nd:YAG laser emits light at $\lambda = 1064$ nm. The laser is directed to a Michelson interferometer whose two arm lengths are set to maintain destructive interference of the recombined light at the anti-symmetric (AS) port. An appropriately polarized gravitational wave will differentially change the arm lengths, producing signal at the AS port proportional to the GW strain and the input power. The Fabry-Perot cavities in the Michelson arms and a power recycling mirror (RM) at the symmetric port are two modifications to the Michelson interferometer that increase the laser power in the arms and therefore improve the detector’s sensitivity to GWs.
![(Color online) Optical layout of a Fabry-Perot Michelson laser interferometer, showing primary components. The four test masses, beam splitter and power recycling mirror are physically located in an ultrahigh vacuum system and are seismically isolated. A photodiode at the anti-symmetric port detects differential arm length changes.[]{data-label="fig:IFOschematic"}](figures/IFOsimple_thesis.pdf){width="1.0\columnwidth"}
A network of first generation kilometer scale laser interferometer gravitational-wave detectors completed an integrated 2-year data collection run in 2007, called Science Run 5 (S5). The instruments were: the American Laser Interferometer Gravitational-wave Observatories (LIGO)[@Abbott2009LIGO], one in Livingston, LA with 4 km long arms and two in Hanford, WA with 4 km and 2 km long arms; the 3 km French-Italian detector VIRGO[@Acernese2008Virgo] in Cascina, Italy; and the 600 m German-British detector GEO[@Luck2006Status] located near Hannover, Germany. Multiple separated detectors increase detection confidence through signal coincidence and improve source localization via waveform reconstruction.
The first generation of LIGO, now known as Initial LIGO, achieved its design goal of sensitivity to GWs in the 40–7000 Hz band, including a record strain sensitivity of $2\times10^{-23}/\sqrt{\mathrm{Hz}}$ at 155 Hz. However, only nearby sources produce enough GW strain to appear above the noise level of Initial LIGO and no gravitational wave has yet been found in the S5 data. A second generation of LIGO detectors, Advanced LIGO, has been designed to be at least an order of magnitude more sensitive at several hundred Hz and above and to give an impressive increase in bandwidth down to 10 Hz. Advanced LIGO is expected to open the field of GW astronomy through the detection of many events per year [@Abadie2010Predictions]. To test some of Advanced LIGO’s new technologies and to increase the chances of detection through a more sensitive data taking run, an incremental upgrade to the detectors was carried out after S5 [@Adhikari2006Enhanced]. This project, Enhanced LIGO, culminated with the S6 science run from July 2009 to October 2010. Currently, construction of Advanced LIGO is underway. Simultaneously, VIRGO and GEO are both undergoing their own upgrades [@Acernese2008Virgo; @Luck2010Upgrade].
The baseline Advanced LIGO design [@AdvLigoSysDesign] improves upon Initial LIGO by incorporating improved seismic isolation [@Robertson2004Seismic], the addition of a signal recycling mirror at the output port [@Meers1988Recycling], homodyne readout, and an increase in available laser power from 8 W to 180 W. The substantial increase in laser power improves the shot-noise-limited sensitivity, but introduces a multitude of thermally induced side effects that must be addressed for proper operation.
Enhanced LIGO tested portions of the Advanced LIGO designs so that unforeseen difficulties could be addressed and so that a more sensitive data taking run could take place. An output mode cleaner was designed, built and installed, and DC readout of the GW signal was implemented [@Fricke2011DC]. An Advanced LIGO active seismic isolation table was also built, installed, and tested [@KisselThesis Chapter 5]. In addition, the 10 W Initial LIGO laser was replaced with a 35 W laser [@Frede2007Fundamental]. Accompanying the increase in laser power, the test mass Thermal Compensation System [@Willems2009Thermal], the Alignment Sensing and Control [@DooleyAngular], and the Input Optics were modified.
This paper reports on the design and performance of the LIGO Input Optics (IO) subsystem in Enhanced LIGO, focusing specifically on its operational capabilities as the laser power is increased to 30 W. Substantial improvements in the IO power handling capabilities with respect to Initial LIGO performance are seen. The paper is organized as follows. First, in Section \[sec:role\] we define the role of the IO subsystem and detail the function of each of the major IO subcomponents. Then, in Section \[sec:problems\] we describe thermal effects which impact the operation of the IO and summarize the problems experienced with the IO in Initial LIGO. In Section \[sec:design\] we present the IO design for Advanced LIGO in detail and describe how it addresses these problems. Section \[sec:performance\] presents the performance of the prototype Advanced LIGO IO design as tested during Enhanced LIGO. Finally, we extrapolate from these experiences in Section \[sec:aLIGO\] to discuss the expected IO performance in Advanced LIGO. The paper concludes with a summary in Section \[sec:summary\].
Function of the Input Optics {#sec:role}
============================
The Input Optics is one of the primary subsystems of the LIGO interferometers. Its purpose is to deliver an aligned, spatially pure, mode-matched beam with phase-modulation sidebands to the power-recycled Fabry-Perot Michelson interferometer. The IO also prevents reflected or backscattered light from reaching the laser and distributes the reflected field from the interferometer (designated the *reflected port*) to photodiodes for sensing and controlling the length and alignment of the interferometer. In addition, the IO provides an intermediate level of frequency stabilization and must have high overall optical efficiency. It must perform these functions without limiting the strain sensitivity of the LIGO interferometer. Finally, it must operate robustly and continuously over years of operation. The conceptual design is found in Ref. [@Camp1996InputOutput].
As shown in Fig. \[fig:IOblock\], the IO subsystem consists of four principal components located between the pre-stabilized laser and the power recycling mirror:
- electro-optic modulator (EOM)
- mode cleaner cavity (MC)
- Faraday isolator (FI)
- mode-matching telescope (MMT)
Each element is a common building block of many optical experiments and not unique to LIGO. However, their roles specific to the successful operation of interferometry for gravitational-wave detection are of interest and demand further attention. Here, we briefly review the purpose of each of the IO components; further details about the design requirements are in Ref. [@Camp1997Input].
Electro-optic modulator
-----------------------
The Length Sensing and Control (LSC) and Angular Sensing and Control (ASC) subsystems require phase modulation of the laser light at RF frequencies. This modulation is produced by an EOM, generating sidebands of the laser light which act as references against which interferometer length and angle changes are measured [@Fritschel2001Readout]. The sideband light must be either resonant only in the recycling cavity or not resonant in the interferometer at all. The sidebands must be offset from the carrier by integer multiples of the MC free spectral range to pass through the MC.
Mode cleaner
------------
Stably aligned cavities, limited non-mode-matched (junk) light, and a frequency and amplitude stabilized laser are key features of any ultra sensitive laser interferometer. The MC, at the heart of the IO, plays a major role.
A three-mirror triangular ring cavity, the MC suppresses laser output not in the fundamental TEM$_{00}$ mode, serving two major purposes. It enables the robustness of the ASC because higher order modes would otherwise contaminate the angular sensing signals of the interferometer. Also, all non-TEM$_{00}$ light on the length sensing photodiodes, including those used for the GW readout, contributes shot noise but not signal and therefore diminishes the signal to noise ratio. The MC is thus largely responsible for achieving an aligned, minimally shot-noise-limited interferometer.
The MC also plays an active role in laser frequency stabilization [@Fritschel2001Readout], which is necessary for ensuring that the signal at the anti-symmetric port is due to arm length fluctuations rather than laser frequency fluctuations. In addition, the MC passively suppresses beam jitter at frequencies above 10 Hz.
Faraday isolator
----------------
Faraday isolators are four-port optical devices which utilize the Faraday effect to allow for non-reciprocal polarization switching of laser beams. Any backscatter or reflected light from the interferometer (due to impedance mismatch, mode mismatch, non-resonant sidebands, or signal) needs to be diverted to protect the laser from back propagating light, which can introduce amplitude and phase noise. This diversion of the reflected light is also necessary for extracting length and angular information about the interferometer’s cavities. The FI fulfils both needs.
Mode-matching telescope
-----------------------
The lowest-order MC and arm cavity spatial eigenmodes need to be matched for maximal power buildup in the interferometer. The mode-matching telescope is a set of three suspended concave mirrors between the MC and interferometer that expand the beam from a radius of 1.6 mm at the MC waist to a radius of 33 mm at the arm cavity waist. The MMT should play a passive role by delivering properly shaped light to the interferometer without introducing beam jitter or any significant aberration that can reduce mode coupling.
Thermal problems in Initial LIGO {#sec:problems}
================================
The Initial LIGO interferometers were equipped with a 10 W laser, yet operated with only 7 W input power due to power-related problems with other subsystems. The EOM was located in the 10 W beam and the other components experienced anywhere up to 7 W power. The 7 W operational limit was not due to the failure of the IO; however, many aspects of the IO performance did degrade with power.
One of the primary problems of the Initial LIGO IO [@Adhikari1998Input] was thermal deflection of the back propagating beam due to thermally-induced refractive index gradients in the FI. A significant beam drift between the interferometer’s locked and unlocked states led to clipping of the reflected beam on the photodiodes used for length and alignment control (see Fig. \[fig:IOschematic\]). Our measurements determined a deflection of approximately 100 $\mu$rad/W in the FI. This problem was mitigated at the time by the design and implementation of an active beam steering servo on the beam coming from the isolator.
There were also known limits to the power the IO could sustain. Thermal lensing in the FI optics began to alter significantly the beam mode at powers greater than 10 W, leading to a several percent reduction in mode matching to the interferometer [@UFLIGOGroup2006Upgrading]. Additionally, absorptive FI elements would create thermal birefringence, degrading the optical efficiency and isolation ratio with power [@Khazanov1999Investigation]. The Initial LIGO New Focus EOMs had an operational power limit of around 10 W. There was a high risk of damage to the crystals under the stress of the 0.4 mm radius beam. Also, anisotropic thermal lensing with focal lengths as severe as 3.3 m at 10 W made the EOMs unsuitable for much higher power. Finally, the MC mirrors exhibited high absorption (as much as 24 ppm per mirror)–enough that thermal lensing of the MC optics at Enhanced LIGO powers would induce higher order modal frequency degeneracy and result in a power-dependent mode mismatch into the interferometer [@Bullington2008Modal; @Arain2007Note]. In fact, as input power increased from 1 W to 7 W the mode matching decreased from 90% to 83%.
In addition to the thermal limitations of the Initial LIGO IO, optical efficiency in delivering light from the laser into the interferometer was not optimal. Of the light entering the IO chain, only 60% remained by the time it reached the power recycling mirror. Moreover, because at best only 90% of the light at the recycling mirror was coupled into the arm cavity mode, room was left for improvement in the implementation of the MMT.
Enhanced LIGO Input Optics Design {#sec:design}
=================================
The Enhanced LIGO IO design addressed the thermal effects that compromised the performance of the Initial LIGO IO, and accommodated up to four times the power of Initial LIGO. Also, the design was a prototype for handling the 180 W laser planned for Advanced LIGO. Because the adverse thermal properties of the Initial LIGO IO (beam drift, birefringence, and lensing) are all attributable primarily to absorption of laser light by the optical elements, the primary design consideration was finding optics with lower absorption [@UFLIGOGroup2006Upgrading]. Both the EOM and the FI were replaced for Enhanced LIGO. Only minor changes were made to the MC and MMT. A detailed layout of the Enhanced LIGO IO is shown in Figure \[fig:IOschematic\].
Electro-optic modulator design
------------------------------
We replaced the commercially-made New Focus 4003 resonant phase modulator of Initial LIGO with an in-house EOM design and construction. Both a new crystal choice and architectural design change allow for superior performance.
The Enhanced LIGO EOM design uses a crystal of rubidium titanyl phosphate (RTP), which has at most 1/10 the absorption coefficient at 1064 nm of the lithium niobate (LiNbO$_3$) crystal from Initial LIGO. At 200 W the RTP should produce a thermal lens of 200 m and higher order mode content of less than 1%, compared to the 3.3 m lens the LiNbO$_3$ produces at 10 W. The RTP has a minimal risk of damage, because it has both twice the damage threshold of LiNbO$_3$ and is subjected to a beam twice the size of that in Initial LIGO. RTP and LiNbO$_3$ have similar electro-optic coefficients. Also, RTP’s $dn/dT$ anisotropy is 50% smaller. Table \[tab:EOMcrystals\] compares the properties of most interest of the two crystals.
                                            units         LiNbO$_3$   RTP
  --------------------------------------- ------------- ----------- ---------
  damage threshold                          MW/cm$^2$     280         $>600$
  absorption coeff. at 1064 nm              ppm/cm        $< 5000$    $< 500$
  electro-optic coeff. ($n_z^3 r_{33}$)     pm/V          306         239
  $dn_y/dT$                                 10$^{-6}$/K   5.4         2.79
  $dn_z/dT$                                 10$^{-6}$/K   37.9        9.24
\[tab:EOMcrystals\]
We procured the RTP crystals from Raicol and packaged them into specially-designed, custom-built modulators. The crystal dimensions are $4 \times 4 \times 40$ mm and their faces are wedged by $2.85^\circ$ and anti-reflection (AR) coated. The wedge serves to separate the polarizations and prevents an etalon effect, resulting in a suppression of amplitude modulation. Only one crystal is used in the EOM in order to reduce the number of surface reflections. Three separate pairs of electrodes, each with its own resonant LC circuit, are placed across the crystal in series, producing the three required sets of RF sidebands: 24.5 MHz, 33.3 MHz and 61.2 MHz. A diagram is shown in Fig. \[fig:EOM\]. Reference [@Quetschke2008ElectroOptic] contains further details about the modulator architecture.
![(Color online) Electro-optic modulator design. (a) The single RTP crystal is sandwiched between three sets of electrodes that apply three different modulation frequencies. The wedged ends of the crystal separate the polarizations of the light. The p-polarized light is used in the interferometer. (b) A schematic for each of the three impedance matching circuits of the EOM. For the three sets of electrodes, each of which creates its own $C_{crystal}$, a capacitor is placed parallel to the LC circuit formed by the crystal and a hand-wound inductor. The circuits provide 50 $\Omega$ input impedance on resonance and are housed in a separate box from the crystal.[]{data-label="fig:EOM"}](figures/EOMthesis.pdf "fig:") ![(Color online) Electro-optic modulator design. (a) The single RTP crystal is sandwiched between three sets of electrodes that apply three different modulation frequencies. The wedged ends of the crystal separate the polarizations of the light. The p-polarized light is used in the interferometer. (b) A schematic for each of the three impedance matching circuits of the EOM. For the three sets of electrodes, each of which creates its own $C_{crystal}$, a capacitor is placed parallel to the LC circuit formed by the crystal and a hand-wound inductor. The circuits provide 50 $\Omega$ input impedance on resonance and are housed in a separate box from the crystal.[]{data-label="fig:EOM"}](figures/EOMcircuit_thesis.pdf "fig:")
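Each modulation frequency imprints a pair of phase-modulation sidebands on the carrier. As a rough illustration of the resulting spectrum, the Jacobi-Anger expansion gives the power split between carrier and first-order sidebands; the modulation depth used below is an assumed, purely illustrative value, since the operating depths are not quoted here.

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind

# Phase modulation at a single frequency Omega with depth m:
#   E = E0*exp(i(w t + m sin(Omega t))) = E0 * sum_k J_k(m) exp(i(w + k*Omega) t),
# so sidebands appear at offsets of +/- k*Omega about the carrier.
mod_freqs_MHz = np.array([24.5, 33.3, 61.2])  # the three modulation frequencies above
m = 0.1                                       # assumed modulation depth (illustrative only)

carrier = jv(0, m) ** 2      # power fraction left in the carrier, per modulation
first_sb = jv(1, m) ** 2     # power fraction in each first-order sideband

print(f"carrier fraction per modulation  : {carrier:.4f}")
print(f"each first-order sideband        : {first_sb:.2e}")
print("first-order sideband offsets (MHz):",
      np.sort(np.concatenate([-mod_freqs_MHz, mod_freqs_MHz])))
```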
Mode cleaner design
-------------------
The MC is a suspended 12.2 m long triangular ring cavity with finesse $\mathcal{F}$=1280 and free spectral range of 12.243 MHz. The three mirror architecture was selected over the standard two mirror linear filter cavity because it acts as a polarization filter and because it eliminates direct path back propagation to the laser [@Raab1992Estimation]. A pick-off of the reflected beam is naturally facilitated for use in generating control signals. A potential downside to the three mirror design is the introduction of astigmatism, but this effect is negligible due to the small opening angle of the MC.
The MC has a round-trip length of 24.5 m. The beam waist has a radius of 1.63 mm and is located between the two 45$^\circ$ flat mirrors, MC1 and MC3. See Figure \[fig:IOschematic\]. A concave third mirror, MC2, 18.15 m in radius of curvature, forms the far point of the mode cleaner’s isosceles triangle shape. The power stored in the MC is 408 times the amount coupled in, equivalent to about 2.7 kW in Initial LIGO and at most 11 kW for Enhanced LIGO. The peak irradiances are 32 kW/cm$^2$ and 132 kW/cm$^2$ for Initial LIGO and Enhanced LIGO, respectively.
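As a quick numerical cross-check, the quoted round-trip length, buildup factor, and waist radius reproduce the free spectral range and peak irradiances above. The irradiance here is computed as $P_{\mathrm{stored}}/\pi w_0^2$, which is the convention that matches the quoted values (a peak-of-Gaussian definition would be twice this).

```python
import numpy as np

c = 2.998e8        # speed of light, m/s
L_rt = 24.5        # MC round-trip length, m
buildup = 408      # stored power / coupled-in power, as quoted above
w0 = 1.63e-3       # MC waist radius, m

print(f"free spectral range: {c / L_rt / 1e6:.2f} MHz")   # cf. the 12.243 MHz quoted above

for label, p_stored in (("Initial LIGO", 2.7e3), ("Enhanced LIGO", 11e3)):
    p_coupled = p_stored / buildup                  # implied power coupled into the MC
    irradiance = p_stored / (np.pi * w0**2)         # W/m^2
    print(f"{label}: ~{p_coupled:.0f} W coupled in, "
          f"peak irradiance ~{irradiance / 1e7:.0f} kW/cm^2")
```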
The MC mirrors are 75 mm in diameter and 25 mm thick. The substrate material is fused silica and the mirror coating is made of alternating layers of silica and tantala. In order to reduce the absorption of light in these materials and therefore improve the transmission and modal quality of the beam in the MC, we removed particulate by drag wiping the surface of the mirrors with methanol and optical tissues. The MC was otherwise identical to that in Initial LIGO.
Faraday isolator design
-----------------------
The Enhanced LIGO FI design required not only the use of low absorption optics, but additional design choices to mitigate any residual thermal lensing and birefringence. In addition, trade-offs between optical efficiency in the forward direction, optical isolation in the backwards direction, and feasibility of physical access of the return beam for signal use were considered. The result is that the Enhanced LIGO FI needed a completely new architecture and new optics compared to both the Initial LIGO FI and commercially available isolators.
Figure \[fig:FI\] shows a photograph and a schematic of the Enhanced LIGO FI. It begins and ends with low absorption calcite wedge polarizers (CWP). Between the CWPs is a thin film polarizer (TFP), a deuterated potassium dihydrogen phosphate (DKDP) element, a half-wave plate (HWP), and a Faraday rotator. The rotator is made of two low absorption terbium gallium garnet (TGG) crystals sandwiching a quartz rotator (QR) inside a 7-disc magnet with a maximum field strength of 1.16 T. The forward propagating beam upon passing through the TGG, QR, TGG, and HWP elements is rotated by $+22.5^\circ -
67.5^\circ + 22.5^\circ + 22.5^\circ = 0^\circ$. In the reverse direction, the rotation through HWP, TGG, QR, TGG is $-22.5^\circ +
22.5^\circ + 67.5^\circ + 22.5^\circ = 90^\circ$. The TGG crystals are non-reciprocal devices while the QR and HWP are reciprocal.
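The angle bookkeeping above can be verified with simple $2\times 2$ rotation matrices acting on the polarization vector. The sketch below treats every element as a pure rotation (a simplification; a half-wave plate is not literally a rotator): the TGG rotations keep their sign because they are set by the magnetic field, while the quartz rotator and half-wave plate contributions flip sign when traversed in the reverse direction.

```python
import numpy as np

def rot(deg):
    """2D rotation matrix acting on the (p, s) polarization vector."""
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

# Forward-direction rotations, in traversal order: TGG, QR, TGG, HWP.
# Non-reciprocal elements (TGG) keep their sign on reversal; reciprocal ones flip.
elements = [("TGG", +22.5, False), ("QR", -67.5, True),
            ("TGG", +22.5, False), ("HWP", +22.5, True)]

p_in = np.array([1.0, 0.0])   # p-polarized input

forward = p_in.copy()
for _, angle, _ in elements:
    forward = rot(angle) @ forward

reverse = p_in.copy()
for _, angle, reciprocal in reversed(elements):
    reverse = rot(-angle if reciprocal else angle) @ reverse

print("forward output (p, s):", np.round(forward, 3))   # ~(1, 0): polarization unchanged
print("reverse output (p, s):", np.round(reverse, 3))   # ~(0, 1): rotated by 90 degrees
```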
![(Color online) Faraday isolator photograph and schematic. The FI preserves the polarization of the light in the forward-going direction and rotates it by 90 degrees in the reverse direction. Light from the MC enters from the left and exits at the right towards the interferometer. It is ideally p-polarized, but any s-polarization contamination is promptly diverted $\sim 10$ mrad by the CWP and then reflected by the TFP and dumped. The p-polarized reflected beam from the interferometer enters from the right and is rotated to s-polarized light which is picked-off by the TFP and sent to the Interferometer Sensing and Control (ISC) table. Any imperfections in the Faraday rotation of the interferometer return beam results in p-polarized light traveling backwards along the original input path.[]{data-label="fig:FI"}](figures/FI_cropped2.jpg "fig:"){width="0.9\columnwidth"} ![(Color online) Faraday isolator photograph and schematic. The FI preserves the polarization of the light in the forward-going direction and rotates it by 90 degrees in the reverse direction. Light from the MC enters from the left and exits at the right towards the interferometer. It is ideally p-polarized, but any s-polarization contamination is promptly diverted $\sim 10$ mrad by the CWP and then reflected by the TFP and dumped. The p-polarized reflected beam from the interferometer enters from the right and is rotated to s-polarized light which is picked-off by the TFP and sent to the Interferometer Sensing and Control (ISC) table. Any imperfections in the Faraday rotation of the interferometer return beam results in p-polarized light traveling backwards along the original input path.[]{data-label="fig:FI"}](figures/FI_thesis.pdf "fig:")
### Thermal birefringence
Thermal birefringence is addressed in the Faraday rotator by the use of the two TGG crystals and one quartz rotator rather than the typical single TGG [@Khazanov2000Suppression]. In this configuration, any thermal polarization distortions that the beam experiences while passing through the first TGG rotator will be mostly undone upon passing through the second. The multiple elements in the magnet required a larger magnetic field than in Initial LIGO. The 7-disc magnet is 130 mm in diameter and 132 mm long and placed in housing 155 mm in diameter and 161 mm long. The TGG diameter is 20 mm.
### Thermal lensing
Thermal lensing in the FI is addressed by including DKDP, a negative $dn/dT$ material, in the beam path. Absorption of light in the DKDP results in a de-focusing of the beam, which partially compensates for the thermal focusing induced by absorption in the TGGs [@Mueller2002Method; @Khazanov2004Compensation]. The optical path length (thickness) of the DKDP is chosen to slightly over-compensate the positive thermal lens induced in the TGG crystals, anticipating other positive thermal lenses in the system.
### Polarizers
The polarizers used (two CWPs and one TFP) each offer advantages and disadvantages related to optical efficiency in the forward-propagating direction, optical isolation in the reflected direction, and thermal beam drift. The CWPs have very high extinction ratios ($>10^5$) and high transmission ($>$ 99%) contributing to good optical efficiency and isolation performance. However, the angle separating the exiting orthogonal polarizations of light is very small, on the order of 10 mrad. This small angle requires the light to travel relatively large distances before we can pick off the beams needed for interferometer sensing and control. In addition, thermally induced index of refraction gradients due to the 4.95$^{\circ}$ wedge angle of the CWPs result in thermal drift. However, the CWPs for the Enhanced LIGO FI have a measured low absorption of 0.0013 cm$^{-1}$ with an expected thermal lens of 60 m at 30 W and drift of less than 1.3 $\mu$rad/W [@UFLIGOGroup2006Upgrading].
The advantages of the thin film polarizer over the calcite wedge polarizer are that it exhibits negligible thermal drift when compared with CWPs and it operates at the Brewster angle of 55$^\circ$, thus diverting the return beam in an easily accessible way. However, the TFP has a lower transmission than the CWP, about 96%, and an extinction ratio of only 10$^3$.
Thus, the combination of CWPs and a TFP combines the best of each to provide a high extinction ratio (from the CWPs) and ease of reflected beam extraction (from the TFP). The downsides that remain when using both polarizers are that there is still some thermal drift from the CWPs. Also the transmission is reduced due to the TFP and to the fact that there are 16 surfaces from which light can scatter.
### Heat conduction {#sec:heatconduction}
Faraday isolators operating in a vacuum environment suffer from increased heating with respect to those operating in air. Convective cooling at the faces of the optical components is no longer an effective heat removal channel, so proper heat sinking is essential to minimize thermal lensing and depolarization. It has been shown that Faraday isolators carefully aligned in air can experience a dramatic reduction in isolation ratio ($>$ 10-15 dB) when placed in vacuum [@TheVIRGOCollaboration2008Invacuum]. The dominant cause is the coupling of the photoelastic effect to the temperature gradient induced by laser beam absorption. Also of importance is the temperature dependence of the Verdet constant–different spatial parts of the beam experience different polarization rotations in the presence of a temperature gradient [@Barnes1992Variation].
To improve heat conduction away from the Faraday rotator optical components, we designed a housing for the TGG and quartz crystals that provided improved heat sinking to the Faraday rotator. We wrapped the TGGs with indium foil that made improved contact with the housing and we cushioned the DKDP and the HWP with indium wire in their aluminum holders. This has the additional effect of avoiding the development of thermal stresses in the crystals, an especially important consideration for the very fragile DKDP.
Mode-matching telescope design
------------------------------
The mode matching into the interferometer (at Livingston) was measured to be at best 90% in Initial LIGO. Because of the stringent requirements placed on the LIGO vacuum system to reduce phase noise through scattering by residual gas, standard opto-mechanical translators are not permitted in the vacuum; it is therefore not possible to physically move the mode matching telescope mirrors while operating the interferometer. Through a combination of needing to move the MMTs in order to fit the new FI on the in-vacuum optics table and additional measurements and models to determine how to improve the coupling, a new set of MMT positions was chosen for Enhanced LIGO. Fundamental design considerations are discussed in Ref. [@Delker1997Design].
Performance of the Enhanced LIGO Input Optics {#sec:performance}
=============================================
The most convincing figure of merit for the IO performance is that the Enhanced LIGO interferometers achieved low-noise operation with 20 W input power without thermal issues from the IO. Additionally, the IO were operated successfully up to the available 30 W of power. (Instabilities with other interferometer subsystems limited the Enhanced LIGO science run operation to 20 W.)
We present in this section detailed measurements of the IO performance during Enhanced LIGO. Specific measurements and results presented in figures and the text come from Livingston; performance at Hanford was similar and is included in tables summarizing the results.
Optical efficiency
------------------
The optical efficiency of the Enhanced LIGO IO from EOM to recycling mirror was 75%, a marked improvement over the approximate 60% that was measured for Initial LIGO. A substantial part of the improvement came from the discovery and subsequent correction of a 6.5% loss at the second of the in-vacuum steering mirrors directing light into the MC (refer to Fig. \[fig:IOschematic\]). A 45$^\circ$ reflecting mirror had been used for a beam with an 8$^\circ$ angle of incidence. Losses attributable to the MC and FI are described in the following sections. A summary of the IO power budget is found in Table \[tab:pwrbudget\].
                              Livingston   Hanford
  --------------------------- ------------ -----------
  MC visibility               92%          97%
  MC transmission             88%          90%
  Composite MC transmission   81% (72%)    87%
  FI transmission             93% (86%)    94% (86%)
  - TFP loss                  4.0%         2.7%
  IO efficiency (PSL to RM)   75% (60%)    82%
\[tab:pwrbudget\]
### Mode cleaner losses
The MC was the greatest single source of power loss in both Initial and Enhanced LIGO. The MC visibility, $$V = \frac{P_{\mathrm{in}} - P_{\mathrm{refl}}}{P_{\mathrm{in}}},
\label{eq:vis}$$ where $P_{\mathrm{in}}$ is the power injected into the MC and $P_{\mathrm{refl}}$ the power reflected, was 92%. Visibility reduction is the result of higher order mode content of $P_{\mathrm{in}}$ and mode mismatch into the MC. The visibility was constant within 0.04% up to 30 W input power at both sites, providing a positive indication that thermal aberrations in the MC and upstream were negligible.
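For concreteness, the visibility and the composite throughput of Table \[tab:pwrbudget\] follow directly from Eq. \[eq:vis\], taking the composite MC throughput as the product of visibility and transmission; the powers below are illustrative values chosen to reproduce the quoted 92%.

```python
def visibility(p_in, p_refl):
    """MC visibility V = (P_in - P_refl) / P_in, Eq. (eq:vis)."""
    return (p_in - p_refl) / p_in

p_in, p_refl = 1.00, 0.08   # watts (illustrative)
V = visibility(p_in, p_refl)
T_mc = 0.88                 # MC transmission for the coupled-in light

print(f"visibility             : {V:.2f}")          # 0.92
print(f"composite MC throughput: {V * T_mc:.2f}")   # ~0.81, cf. Table [tab:pwrbudget]
```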
88% of the light coupled into the MC was transmitted. 2.6% of these losses were caused by poor AR coatings on the second surfaces of the $45^\circ$ MC mirrors. The measured surface microroughness of $\sigma_{rms}< 0.4$ nm [@1998Component] caused scatter losses of $[4 \pi \sigma_{rms}/\lambda]^2 < 22$ ppm per mirror inside the MC, or a total of 2.7% losses in transmission.
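The scatter-loss estimate can be reproduced in a few lines by evaluating $[4\pi\sigma_{rms}/\lambda]^2$ per mirror and scaling the round-trip loss by the power buildup factor:

```python
import numpy as np

sigma_rms = 0.4e-9        # mirror surface microroughness upper bound, m
lam = 1064e-9             # laser wavelength, m
buildup = 408             # MC circulating power / coupled-in power
n_mirrors = 3

loss_per_mirror = (4 * np.pi * sigma_rms / lam) ** 2
round_trip_loss = n_mirrors * loss_per_mirror
# fraction of the coupled-in power dissipated by scatter ~ round-trip loss x buildup
total_scatter_loss = round_trip_loss * buildup

print(f"scatter loss per mirror : {loss_per_mirror * 1e6:.0f} ppm")   # ~22 ppm
print(f"total transmission loss : {total_scatter_loss * 100:.1f} %")  # ~2.7 %
```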
Another source of MC losses is via absorption of heat by particulates residing on the mirror’s surface. We measured the absorption with a technique that makes use of the frequency shift of the thermally driven drumhead eigenfrequencies of the mirror substrate [@Punturo2007Mirror]. The frequency shift directly correlates with the MC absorption via the substrate’s change in Young’s modulus with temperature, $dY/dT$. A finite element model (COMSOL [@COMSOL]) was used to compute the expected frequency shift from a temperature change of the substrate resulting from the mirror coating absorption. The measured eigenfrequencies for each mirror at room temperature are 28164 Hz, 28209 Hz, and 28237 Hz, respectively.
We cycled the power into the MC between 0.9 W and 5.1 W at 3 hour intervals, allowing enough time for a thermal characteristic time constant to be reached. At the same time, we recorded the frequencies of the high Q drumhead mode peaks as found in the mode cleaner frequency error signal, heterodyned down by 28 kHz. See Figure \[fig:MCabsorption\]. Correcting for ambient temperature fluctuations, we find a frequency shift of 0.043, 0.043, and 0.072 Hz/W. As a result of drag-wiping the mirrors, the absorption decreased for all but one mirror, as shown for both Hanford and Livingston in Table \[tab:MCabsorption2\].
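Converting the measured slopes into coating absorptions requires the finite-element calibration of frequency shift per watt of absorbed power. That constant is not quoted here, so the sketch below simply assumes a value of roughly 50 Hz per absorbed watt, chosen to be consistent with the quoted numbers, together with the cavity buildup factor of 408.

```python
# Convert measured drumhead-frequency shifts (Hz per watt of *input* power)
# into coating absorption of the circulating power.  The calibration constant
# below is an assumption (in practice it comes from the COMSOL model).
calib_hz_per_abs_watt = 50.0   # assumed: Hz of frequency shift per absorbed watt
buildup = 408                  # circulating power / input power in the MC

slopes_hz_per_W = {"MC1": 0.043, "MC2": 0.043, "MC3": 0.072}  # measured (Livingston)

for mirror, slope in slopes_hz_per_W.items():
    absorbed_per_input_W = slope / calib_hz_per_abs_watt   # watts absorbed per input watt
    absorption_ppm = absorbed_per_input_W / buildup * 1e6  # fraction of circulating power
    print(f"{mirror}: ~{absorption_ppm:.1f} ppm")           # cf. Table [tab:MCabsorption2]
```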
![(Color online) Data from the MC absorption measurement. Power into the MC was cycled between 0.9 W and 5.1 W at 3 hour intervals (bottom frame) and the change in frequency of the drumhead mode of each mirror was recorded (top frame). The ambient temperature (middle frame) was also recorded in order to correct for its effects.[]{data-label="fig:MCabsorption"}](figures/MCdrumhead_fixed.pdf){width="1.0\columnwidth"}
  mirror   Livingston           Hanford
  -------- -------------------- --------------------
  MC1      2.1 ppm (18.7 ppm)   5.8 ppm (6.1 ppm)
  MC2      2.0 ppm (5.5 ppm)    7.6 ppm (23.9 ppm)
  MC3      3.4 ppm (12.8 ppm)   15.6 ppm (12.5 ppm)
: Absorption values for the Livingston and Hanford mode cleaner mirrors before (in parentheses) and after drag wiping. The precision is $\pm 10\%$.
\[tab:MCabsorption2\]
### Faraday isolator losses
The FI was the second greatest source of power loss with its transmission of 93%. This was an improvement over the 86% transmission of the Initial LIGO FI. The most lossy element in the FI is the thin film polarizer, accounting for 4% of total losses. The integrated losses from AR coatings and absorption in the TGGs, CWPs, HWP, and DKDP account for the remaining 3% of missing power.
Faraday isolation ratio
-----------------------
The isolation ratio is defined as the ratio of power incident on the FI in the reverse direction (the light reflected from the interferometer) to the power transmitted in the reverse direction and is often quoted in decibels: isolation ratio = $10
\log_{10}(P_{\mathrm{in-reverse}}/P_{\mathrm{out-reverse}})$. We measured the isolation ratio of the FI as a function of input power both in air prior to installation and *in situ* during Enhanced LIGO operation.
To measure the in-vacuum isolation ratio, we misaligned the interferometer arms so that the input beam would be promptly reflected off of the $97\%$ reflective recycling mirror. This also has the consequence that the FI is subjected to twice the input power. Our isolation monitor was a pick-off of the backwards transmitted beam taken immediately after transmission through the FI that we sent out of a vacuum chamber viewport. Refer to the “isolation check beam” in Fig. \[fig:IOschematic\]. The in air measurement was done similarly, except in an optics lab with a reflecting mirror placed directly after the FI.
![Faraday isolator isolation ratio as measured in air prior to installation and *in situ* in vacuum. The isolation worsens by a factor of 6 upon placement of the FI in vacuum. The linear fits to the data show a constant in-air isolation ratio and an in-vacuum isolation ratio degradation of 0.02 dB/W.[]{data-label="fig:IR"}](figures/FaradayIR.pdf){width="1.0\columnwidth"}
Figure \[fig:IR\] shows our isolation ratio data. Most notably, we observe an isolation decrease of a factor of six upon placing the FI in vacuum, a result consistent with that reported by Ref. [@TheVIRGOCollaboration2008Invacuum]. In air the isolation ratio is a constant 34.46 $\pm$ 0.04 dB from low power up to 47 W, and in vacuum the isolation ratio is 26.5 dB at low power. The underlying cause is the absence of cooling by air convection. If we attribute the loss to the TGGs, then based on the change in TGG polarization rotation angle necessary to produce the measured isolation drop of 8 dB and the temperature dependence of the TGG’s Verdet constant, we can put an upper limit of 11 K on the crystal temperature rise from air to vacuum. Furthermore, a degradation of 0.02 dB/W is measured in vacuum.
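Expressed as leaked power fractions rather than decibels, and extrapolating the measured in-vacuum slope linearly, the quoted numbers give:

```python
def db_to_fraction(db):
    """Convert an isolation ratio in dB into the leaked power fraction."""
    return 10 ** (-db / 10)

iso_air_db = 34.46      # constant in air
iso_vac_db = 26.5       # in vacuum, at low power
slope_db_per_W = 0.02   # in-vacuum degradation with input power

print(f"leakage in air   : {db_to_fraction(iso_air_db):.1e}")
print(f"leakage in vacuum: {db_to_fraction(iso_vac_db):.1e}")

# simple linear extrapolation of the measured in-vacuum slope
for p_in in (30, 150):
    iso = iso_vac_db - slope_db_per_W * p_in
    print(f"{p_in:3d} W input: isolation ~ {iso:.1f} dB "
          f"({db_to_fraction(iso):.1e} leakage)")
```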
Thermal steering
----------------
We measured the *in situ* thermal angular drift of both the beam transmitted through the MC and of the reflected beam from the FI with up to 25 W input power. Just as for the isolation ratio measurement, we misaligned the interferometer arms so that the input beam would be promptly reflected off of the recycling mirror. The Faraday rotator was thus subjected to up to 50 W total and the MC to 25 W.
Pitch and yaw motion of the MC transmitted and interferometer reflected beams were recorded using the quadrant photodiode (QPD) on the IO table and the RF alignment detectors on the Interferometer Sensing and Control table (see Fig. \[fig:IOschematic\]). There are no lenses between the MC waist and its measurement QPD, so only the path length between the two were needed to calibrate in radians the pitch and yaw signals on the QPD. The interferometer reflected beam, however, passes through several lenses. Thus, ray transfer matrices and the two alignment detectors were necessary to determine the Faraday drift calibration.
![(Color online) Mode cleaner and Faraday isolator thermal drift data. (a) Angular motion of the beam at the MC waist and FI rotator as the input power is stepped. The beam is double-passed through the Faraday isolator, so it experiences twice the input power. (b) Average beam angle per power level in the MC and FI. Linear fits to the data are also shown. The slopes for MC yaw, MC pitch, FI yaw, and FI pitch, respectively, are 0.0047, 0.44, 1.8, and 3.2 $\mu$rad/W.[]{data-label="fig:drift"}](figures/forthesis_refldriftx10.pdf "fig:"){width="1.0\columnwidth"} ![(Color online) Mode cleaner and Faraday isolator thermal drift data. (a) Angular motion of the beam at the MC waist and FI rotator as the input power is stepped. The beam is double-passed through the Faraday isolator, so it experiences twice the input power. (b) Average beam angle per power level in the MC and FI. Linear fits to the data are also shown. The slopes for MC yaw, MC pitch, FI yaw, and FI pitch, respectively, are 0.0047, 0.44, 1.8, and 3.2 $\mu$rad/W.[]{data-label="fig:drift"}](figures/alldrift.pdf "fig:"){width="1.0\columnwidth"}
Figure \[fig:drift\] shows the calibrated beam steering data. The angle of the beam out of the MC does not change measurably as a function of input power in yaw (4.7 nrad/W) and changes by only 440 nrad/W in pitch. For the FI, we record a beam drift originating at the center of the Faraday rotator of 1.8 $\mu$rad/W in yaw and 3.2 $\mu$rad/W in pitch. Therefore, when ramping the input power up to 30 W during a full interferometer lock, the upper limit on the drift experienced by the reflected beam is about 100 $\mu$rad. This is a thirty-fold reduction with respect to the Initial LIGO FI and represents a fifth of the beam’s divergence angle, $\theta_{div}$ = 490 $\mu$rad.
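The closing numbers of this paragraph follow from simple scaling of the measured drift rates against the Initial LIGO value of roughly 100 $\mu$rad/W quoted earlier:

```python
drift_rate_pitch = 3.2e-6   # rad per watt of input power (FI, pitch, worst axis)
p_max = 30.0                # maximum input power considered, W
theta_div = 490e-6          # beam divergence angle, rad

drift = drift_rate_pitch * p_max
print(f"reflected-beam drift at 30 W: {drift * 1e6:.0f} urad "
      f"({drift / theta_div:.2f} of the divergence angle)")

# comparison with the ~100 urad/W drift rate of the Initial LIGO FI
print(f"reduction in drift rate vs. Initial LIGO: ~{100e-6 / drift_rate_pitch:.0f}x")
```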
Thermal lensing
---------------
We measured the profiles of both the beam transmitted through the mode cleaner and the reflected beam picked off by the FI at low ($\sim$ 1 W) and high ($\sim$ 25 W) input powers to assess the degree of thermal lensing induced in the MC and FI. Again, we misaligned the interferometer arms so that the input beam would be promptly reflected off the recycling mirror. We picked off a fraction of the reflected beam on the Interferometer Sensing and Control table and of the mode cleaner transmitted beam on the IO table (refer to Fig. \[fig:IOschematic\]), placed lenses in each of their paths, and measured the beam diameters at several locations on either side of the waists created by the lenses. A change in the beam waist size or position as a function of laser power indicates the presence of a thermal lens.
![(Color online) Profile at high and low powers of a pick-off of the beam transmitted through the MC. The precision of the beam profiler is $\pm 5\%$. Within the error of the measurement, there are no obvious degradations.[]{data-label="fig:MC_lensing"}](figures/MCTrans_datafit.pdf){width="1.0\columnwidth"}
![(Color online) Faraday isolator thermal lensing data. With 25 W into the Faraday isolator (corresponding to 50 W in double pass), the beam has a steeper divergence than a pure TEM$_{00}$ beam, indicating the presence of higher order modes. Errors are $\pm
5.0\%$ for each data point.[]{data-label="fig:FI_lensing"}](figures/REFL_datafit.pdf){width="1.0\columnwidth"}
As seen in Fig. \[fig:MC\_lensing\] and \[fig:FI\_lensing\], the waists of the two sets of data are collocated: no thermal lens is measured. For the FI, the divergence of the low and high power beams differs, indicating that the beam quality degrades with power. The $M^2$ factor at 1 W is 1.04 indicating the beam is nearly perfectly a TEM$_{00}$ mode. At 25 W, $M^2$ increases to 1.19, corresponding to increased higher-order-mode content. The percentage of power in higher-order modes depends strongly on the mode order and relative phases of the modes, and thus cannot be determined from this measurement [@Kwee2007Laser].
The results for the MC are consistent with no thermal lensing. The high and low power beam profiles are within each other’s error bars and well below our requirements.
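The $M^2$ values quoted above come from fitting the measured beam radii to the embedded-Gaussian propagation law. A generic fitting sketch, with placeholder position and radius arrays standing in for the measured profiles shown in the figures, might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

lam = 1064e-9   # laser wavelength, m

def beam_radius(z, w0, z0, M2):
    """Embedded-Gaussian propagation: w(z) for an M^2-degraded beam."""
    zR = np.pi * w0**2 / (M2 * lam)          # effective Rayleigh range
    return w0 * np.sqrt(1 + ((z - z0) / zR) ** 2)

# Placeholder data: positions (m) and measured beam radii (m) along the profiled path.
z_data = np.array([0.10, 0.20, 0.30, 0.40, 0.50])
w_data = np.array([450e-6, 320e-6, 300e-6, 350e-6, 480e-6])

popt, _ = curve_fit(beam_radius, z_data, w_data, p0=(300e-6, 0.3, 1.0))
w0_fit, z0_fit, M2_fit = popt
print(f"waist {w0_fit * 1e6:.0f} um at z = {z0_fit:.2f} m, M^2 = {M2_fit:.2f}")
```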
We also measured the thermal lensing of the EOM prior to its installation in Enhanced LIGO by comparing beam profiles of a 160 W beam with and without the EOM in its path. The data for both cross-sections of the beam are presented in Fig. \[fig:EOMlensing\]. We observe no significant thermal lensing in the y-direction and a small effect in the x-direction. A lower limit of 4 m can be placed on the focal length of the thermal lens in the x-direction, which is 10 times larger than the Rayleigh range of the spatial mode. The mode matching degradation is therefore less than 1%. Although primarily a test at Advanced LIGO power levels, this measurement also demonstrates the effectiveness of the EOM design at Enhanced LIGO powers.
![(Color online) EOM thermal lensing data. The x- and y-direction beam profiles with 160 W through the EOM (closed circles and squares) place a lower limit of 4 m on the induced thermal lens when compared to the beam profiles without the EOM (open circles and squares).[]{data-label="fig:EOMlensing"}](figures/EOMlensing.pdf){width="1.0\columnwidth"}
Mode-matching
-------------
We measured the total interferometer visibility (refer to Eq. \[eq:vis\]) as an indirect way of determining the carrier mode-matching to the interferometer. In this case, $P_{\mathrm{in}}$ is the power in the reflected beam when the interferometer cavities are unlocked and $P_{\mathrm{refl}}$ is the power in the reflected beam when all of the interferometer cavities are on resonance.
The primary mechanisms that serve to reduce the interferometer visibility from unity are: carrier mode-matching, carrier impedance matching, and sideband light. We measured the impedance matching at LLO to be $>$ 99.5%; impedance matching therefore makes a negligible contribution to the power in the reflected beam. We also measured that due to the sidebands, the carrier makes up 86% of the power in the reflected beam with the interferometer unlocked and 78% with the interferometer locked; to compensate, we reduce the total $P_{\mathrm{refl}}/P_{\mathrm{in}}$ ratio by 10%. With the interferometer unlocked, there is also a 2.7% correction for the transmission of the RM.
Initially, anywhere between 10% and 17% of the light was rejected by the interferometer due to poor, power-dependent mode matching. After translating the mode-matching telescope mirrors during a vacuum chamber incursion and upgrading the other IO components, the mode mismatch we measured was 8% and independent of input power. The MMT thus succeeds in coupling 92% of the light into the interferometer at all times, marking both an improvement in MMT mirror placement and success in eliminating measurable thermal issues.
Implications for Advanced LIGO {#sec:aLIGO}
==============================
As with other Advanced LIGO interferometer components, Enhanced LIGO served as a technology demonstrator for the Advanced LIGO Input Optics, albeit at lower laser powers than will be used there. The performance of the Enhanced LIGO IO components at 30 W of input power allows us to infer their performance in Advanced LIGO. The requirements for the Advanced LIGO IO demand similar performance to Enhanced LIGO, but with almost 8 times the laser power.
The Enhanced LIGO EOM showed no thermal lensing, degraded transmission, or damage in over 17,000 hours of sustained operation at 30 W of laser power. Measurements of the thermal lensing in RTP at powers up to 160 W show a relative power loss of $< 0.4\%$, indicating that thermal lensing should be negligible in Advanced LIGO. Peak irradiances in the EOM will be approximately four times those of Enhanced LIGO (a 45% larger beam diameter will somewhat offset the increased power). Testing of RTP at 10 times the expected Advanced LIGO irradiance over 100 hours shows no signs of damage or degraded transmission.
The MC showed no measurable change in operational state as a function of input power. This bodes well for the Advanced LIGO mode cleaner. The Advanced LIGO MC is designed with a lower finesse (520) than the Initial and Enhanced LIGO MC (1280). For 150 W input power, the Advanced LIGO MC will operate with 3 times greater stored power than Initial LIGO. The corresponding peak irradiance is 400 kW/m$^2$, well below the continuous-wave coating damage threshold. Absorption in the Advanced LIGO MC mirror optical coatings has been measured at 0.5 ppm, roughly four times lower than the best mirror coating absorption in Enhanced LIGO, so the expected thermal loading due to coating absorption should be reduced in Advanced LIGO. The larger Advanced LIGO MC mirror substrates and higher input powers result in a significantly higher contribution from bulk absorption, roughly 20 times that of Enhanced LIGO; however, the expected thermal lensing leads to only a small change ($< 0.5 \%$) in the output mode [@Arain2007Note].
The Enhanced LIGO data obtained from the FI allows us to make several predictions about how it will perform in Advanced LIGO. The measured isolation ratio decrease of 0.02 dB/W will result in a loss of 3 dB for a 150 W power level expected for Advanced LIGO relative to its cold state. However, the Advanced LIGO FI will employ an *in situ* adjustable half wave plate which will allow for a partial restoration of the isolation ratio. In addition, a new FI scheme to better compensate for thermal depolarization and thus yield higher isolation ratios will be implemented [@Snetkov2011Compensation]. The maximum thermally induced angular steering expected is 480 $\mu$rad (using a drift rate of 3.2 $\mu$rad/W), approximately equal to the beam divergence angle. This has some implications for the Advanced LIGO length and alignment sensing and control system, as the reflected FI beam is used as a sensing beam. Operation of Advanced LIGO at high powers will likely require the use of a beam stabilization servo to lock the position of the reflected beam on the sensing photodiodes. Although no measurable thermal lensing was observed (no change in the beam waist size or position), the measured presence of higher order modes in the FI at high powers is suggestive of imperfect thermal lens compensation by the DKDP. This fault potentially can be reduced by a careful selection of the thickness of the DKDP to better match the absorbed power in the TGG crystals.
Summary {#sec:summary}
=======
In summary, we have presented a comprehensive investigation of the Enhanced LIGO IO, including the function, design, and performance of the IO. Several improvements to the design and implementation of the Enhanced LIGO IO over the Initial LIGO IO have led to improved optical efficiency and coupling to the main interferometer through a substantial reduction in thermo-optical effects in the major IO optical components, including the electro-optic modulators, mode cleaner, and Faraday isolator. The IO performance in Enhanced LIGO enables us to infer its performance in Advanced LIGO, and indicates that high power interferometry will be possible without severe thermal effects.
The authors thank R. Adhikari for his wisdom and guidance, B. Bland for providing lessons to K. Dooley and D. Hoak on how to handle the small optics suspensions, K. Kawabe and N. Smith-Lefebvre for their support at LHO, T. Fricke for engaging in helpful discussions, and V. Zelenogorsky and D. Zheleznov for their assistance in preparing for the Enhanced LIGO IO installation. Additionally, the authors thank the LIGO Scientific Collaboration for access to the data. This work was supported by the National Science Foundation through grants PHY-0855313 and PHY-0555453. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation and operates under cooperative agreement PHY-0757058. This paper has LIGO Document Number LIGO-P1100056.
[10]{}
Abadie, J., et al., “[Predictions for the rates of compact binary coalescences observable by ground-based gravitational-wave detectors]{},” Classical and Quantum Gravity **27**, 173001+ (2010).
Abbott, B. P., et al., “[LIGO: the Laser Interferometer Gravitational-Wave Observatory]{},” Reports on Progress in Physics **72**, 076901+ (2009).
Acernese, F., et al., “[The Virgo 3 km interferometer for gravitational wave detection]{},” Journal of Optics A: Pure and Applied Optics **10**, 064009+ (2008).
Lück, H., et al., “[Status of the GEO600 detector]{},” Classical and Quantum Gravity **23**, S71–S78 (2006).
Adhikari, R., P. Fritschel, and S. Waldman, “[Enhanced LIGO]{},” Tech. Rep. T060156, LIGO Laboratory (2006).
Lück, H., et al., “[The upgrade of GEO 600]{},” Journal of Physics: Conference Series **228**, 012012+ (2010).
“[Advanced LIGO Systems Design]{},” Tech. Rep. T010075, LIGO Laboratory (2009).
Robertson, N. A., et al., “[Seismic isolation and suspension systems for Advanced LIGO]{},” in *Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series*, vol. 5500, Hough, J., and G. H. Sanders, eds. (2004), pp. 81–91.
Meers, B. J., “[Recycling in laser-interferometric gravitational-wave detectors]{},” Physical Review D **38**, 2317–2326 (1988).
Fricke, T., et al., “[DC Readout Experiment in Enhanced LIGO]{},” Classical and Quantum Gravity, accepted for publication (2011).
Kissel, J. S., “Calibrating and improving the sensitivity of the [LIGO]{} detectors,” Ph.D. thesis, Louisiana State University (2010).
Frede, M., B. Schulz, R. Wilhelm, P. Kwee, F. Seifert, B. Willke, and D. Kracht, “[Fundamental mode, single-frequency laser amplifier for gravitational wave detectors]{},” Opt. Express **15**, 459–465 (2007).
Willems, P., A. Brooks, M. Mageswaran, V. Sannibale, C. Vorvick, D. Atkinson, R. Amin, and C. Adams, “[Thermal Compensation in Enhanced LIGO]{},” (2009).
Dooley, K., et al., “[Angular Sensing and Control of the Enhanced LIGO Interferometers]{},” in preparation.
Camp, J., D. Reitze, and D. Tanner, “[Input/Output Optics Conceptual Design]{},” Tech. Rep. T960170, LIGO Laboratory (1996).
Camp, J., D. Reitze, and D. Tanner, “[Input Optics Design Requirements Document]{},” Tech. Rep. T960093, LIGO Laboratory (1997).
Fritschel, P., R. Bork, G. González, N. Mavalvala, D. Ouimette, H. Rong, D. Sigg, and M. Zucker, “[Readout and Control of a Power-Recycled Interferometric Gravitational-Wave Antenna]{},” Appl. Opt. **40**, 4988–4998 (2001).
Adhikari, R., A. Bengston, Y. Buchler, T. Delker, D. Reitze, Q.-z. Shu, D. Tanner, and S. Yoshida, “[Input Optics Final Design]{},” Tech. Rep. T980009, LIGO Laboratory (1998).
[UF LIGO Group]{} and [IAP Group]{}, “[Upgrading the Input Optics for High Power Operation]{},” Tech. Rep. E060003, LIGO Laboratory (2006).
Khazanov, E. A., O. V. Kulagin, S. Yoshida, D. B. Tanner, and D. H. Reitze, “[Investigation of self-induced depolarization of laser radiation in terbium gallium garnet]{},” IEEE Journal of Quantum Electronics **35**, 1116–1122 (1999).
Bullington, A. L., B. T. Lantz, M. M. Fejer, and R. L. Byer, “[Modal frequency degeneracy in thermally loaded optical resonators]{},” Appl. Opt. **47**, 2840–2851 (2008).
Arain, M., “[A Note on Substrate Thermal Lensing in Mode Cleaner]{},” Tech. Rep. T070095, LIGO Laboratory (2007).
Quetschke, V., “[Electro-Optic Modulators and Modulation for Enhanced LIGO and Beyond]{},” Coherent Optical Technologies and Applications pp. CMC1+ (2008).
Raab, F., and S. Whitcomb, “[Estimation of Special Optical Properties of a Triangular Ring Cavity]{},” Tech. Rep. T920004, LIGO Laboratory (1992).
Khazanov, E., N. Andreev, A. Babin, A. Kiselev, O. Palashov, and D. H. Reitze, “[Suppression of self-induced depolarization of high-power laser radiation in glass-based Faraday isolators]{},” J. Opt. Soc. Am. B **17**, 99–102 (2000).
Mueller, G., R. S. Amin, D. Guagliardo, D. McFeron, R. Lundock, D. H. Reitze, and D. B. Tanner, “[Method for compensation of thermally induced modal distortions in the input optical components of gravitational wave interferometers]{},” Classical and Quantum Gravity **19**, 1793+ (2002).
Khazanov, E., et al., “[Compensation of thermally induced modal distortions in Faraday isolators]{},” IEEE Journal of Quantum Electronics **40**, 1500–1510 (2004).
[The VIRGO Collaboration]{}, “[In-vacuum optical isolation changes by heating in a Faraday isolator]{},” Appl. Opt. **47**, 5853–5861 (2008).
Barnes, N. P., and L. B. Petway, “[Variation of the Verdet constant with temperature of terbium gallium garnet]{},” J. Opt. Soc. Am. B **9**, 1912–1915 (1992).
Delker, T., R. Adhikari, S. Yoshida, and D. Reitze, “[Design Considerations for LIGO Mode-Matching Telescopes]{},” Tech. Rep. T970143, LIGO Laboratory (1997).
“[Component Specification: Substrate, Mode Cleaner Flat Mirror]{},” Tech. Rep. E970148, LIGO Laboratory (1998).
Punturo, M., “[The mirror resonant modes method for measuring the optical absorption]{},” Tech. Rep. VIR-001A-07, VIRGO (2007).
“[COMSOL]{},” http://www.comsol.com.
Kwee, P., F. Seifert, B. Willke, and K. Danzmann, “[Laser beam quality and pointing measurement with an optical resonator.]{}” The Review of scientific instruments **78** (2007).
Snetkov, I., I. Mukhin, O. Palashov, and E. Khazanov, “[Compensation of thermally induced depolarization in Faraday isolators for high average power lasers]{},” Opt. Express **19**, 6366–6376 (2011).
---
abstract: 'We consider two interacting quantum dots coupled by standard *s-wave* superconductors. We derive an effective Hamiltonian, and show that over a wide parameter range a degenerate ground state can be obtained. An exotic form of Majorana bound states are supported at these degeneracies, and the system can be adiabatically tuned to a limit in which it is equivalent to the one-dimensional wire model of Kitaev. We give the form of a Majorana bound state in this system in the strong interaction limit in the many-particle picture. We also study the Josephson current in this system, and demonstrate that a double slit-like pattern emerges in the presence of an extra magnetic field. This pattern is shown to disappear with increasing interaction strength, which is due to the current being carried by chargeless Majorana bound states.'
author:
- 'Thomas E. O’Brien'
- 'Anthony R. Wright'
- Menno Veldhorst
title: 'Many-particle Majorana bound states: derivation and signatures in superconducting double quantum dots'
---
\[sec:Introduction\]Introduction
================================
The idea of protected ground state degeneracies in condensed matter systems has attracted much interest recently. It is proposed that in a $p$-wave superconductor where both spin and particle-hole degeneracy are absent, a zero-energy state called the Majorana bound state can appear. This elusive quasiparticle obeys non-Abelian statistics [@Ivanov], and thus could serve as a qubit building block for topological quantum computation [@Freedman; @SimonReview; @AliceaReview]. While $p$-wave superconductors are rare, nanotechnology opens the possibility to design this unconventional superconducting state using suitable combinations of materials.
With these prospects in mind, the hunt for Majorana bound states is stronger than ever, and there is much experimental activity to realize proposed schemes that contain these particles. Recent work includes superconductor-topological insulator systems [@FuKane2009] and semiconductor nanowires in the presence of a strong Zeeman and Rashba spin orbit field [@Alicea; @Lutchyn; @Oreg] or a quantum dot system [@SauDasSarma; @Fulga2013; @LeijnseFlensberg]. Experiments demonstrating zero bias conductance peaks in nanowire systems [@Mourik; @Das], and supercurrents [@Sacepe2011], Fraunhofer patterns and Shapiro steps [@VeldhorstJJ], and SQUIDs [@Veldhorstsquid] in superconductor-topological insulator devices are steps towards the definitive detection of a Majorana bound state. More recent experiments [@Yazdani2014] have provided even stronger evidence for Majoranas in fabricated iron atomic chains on superconducting lead, however no evidence of braiding has yet been demonstrated.
![Schematic design of the hybrid superconductor quantum dot system. A Josephson junction is formed by connecting a double quantum dot on both sides to superconducting leads. The strength of the Zeeman spin splitting in the quantum dots is determined by the external field and the stray field of the nanomagnet. We note that fields $E_Z \approx k_B T$ are sufficient for the realization of Majorana bound states. The superconducting leads form a loop, as in a RF SQUID geometry, which allows to perform a current-phase relationship measurement. The possible trajectories in the junction that determine the total supercurrent are shown in the upper left. Axes are defined in the bottom-left corner for future use.[]{data-label="fig:Schematic"}](Overview_2.eps){width="8.5cm"}
Superconductor-quantum dot devices have several advantages over other proposed systems to support Majorana bound states. These systems can be lithographically defined, are strongly gate tunable and are readily operated in the few electron regime. The superconducting proximity effect is also gate tunable, which allows the stringent conditions for the presence of Majorana bound states to be satisfied. Unfortunately, Majorana bound states in the single particle formalism rely on a strong Zeeman field in combination with strong spin-orbit coupling [@SauDasSarma; @Fulga2013] (or a strong magnetic field gradient between the dots mimicking the spin-orbit coupling [@LeijnseFlensberg]), making the experimental realization a serious challenge. Specifically, the Zeeman splitting biases towards the spin-polarised Kitaev state [@Kitaev], while the induced $s$-wave superconducting gap $\Delta$ biases against this state. This then requires that the Zeeman splitting is large enough, so that the bias towards the Kitaev state dominates. Recent results [@WolmsSternFlensberg] suggest the possibility of braiding Kramers pairs of Majorana bound states. This relaxes the requirement for time-reversal symmetry breaking, but requires protection from perturbations that mix these pairs instead.
Fortunately, these requirements can be strongly relaxed as shown recently by Wright and Veldhorst [@TonyMennoPaper], who considered the effect of strong correlations for the engineering of Majorana bound states. They demonstrated that only vanishingly small anisotropic Zeeman splitting, $\sim k_B T$, is required. The resulting Majorana bound states are described in the many-particle formalism, and although not strictly localized, they have the property of a relaxed form of localization. Specifically, the operator corresponding to a Majorana bound state is localized to a single site with respect to single creation or annihilation operator terms, but is necessarily non-local with respect to number operator terms. Since number operators do not gain a phase during braiding operations, the Majorana bound states retain a local relative phase, which is responsible for their non-Abelian behaviour. These results open new perspectives for the realization of Majorana bound states. Therefore, a rigorous derivation of how the Majorana bound states behave in transport measurements and in the presence of magnetic fields is highly valuable.
In this paper, we explore the superconductor-double quantum dot system in detail, and in particular the regions where Majorana bound states are predicted to appear. We start by considering two quantum dots proximity coupled to superconducting leads in the presence of an anisotropic magnetic field and derive an effective Hamiltonian. We investigate thoroughly the form of the excitation operators at a degeneracy point, and present concrete arguments for their being Majorana bound states. This discussion is based on the many-particle requirements for a Majorana bound state, and the existence of a continuous transition between this system and Kitaev’s ground-breaking model. Finally, we investigate the Josephson supercurrent that may provide experimental evidence for Majorana bound states in these systems.
\[sec:Layout\]Device layout
===========================
Fig. \[fig:Schematic\] shows a schematic design of the considered system. A Josephson junction is realized by connecting a double quantum dot on both sides to a superconducting lead. Two magnetic fields are present; a ’local’ magnetic field produced by the stray fields of the nanomagnet together with an ’external’ field from another source. Together, these determine the Zeeman splitting $E_Z$ in the dots and define the spin quantization axis, as well as determining the flux $\Phi_J$ through the quantum dot Josephson junction and the flux $\Phi_S$ through the superconducting ring. Anisotropy in the local magnetic field due to the position and strength of the nanomagnet allows for a nonzero angle $\theta$ between the spin axis of the quantum dots. We will show that Majorana bound states appear in small magnetic fields, $E_Z \approx k_B T$, and for any non-zero $\theta$.
An important measure of the system is the supercurrent that can flow through the quantum dots via the superconducting proximity effect. This supercurrent is dependent on the flux $\Phi_J$ through the junction, and the superconducting phase difference $\phi_-=\phi_1-\phi_2$. The superconducting phase $\phi_-$ can be tuned via the flux $\Phi_S$ through the loop of the superconducting leads. This layout is similar in design to an RF SQUID consisting of a Josephson junction, and has often been used to measure the current phase relation (CPR). Since the protection of the Majorana bound states in this system is via parity conservation, this arrangement of a closed loop without macroscopic leads to the outside world should minimize quasiparticle poisoning.
The possible trajectories of quasiparticles in the system that determine the supercurrent through the junction are shown in Fig. \[fig:Schematic\]. A Cooper pair can tunnel from the superconducting lead to a quantum dot via Andreev reflection (AR) or split over the two quantum dots via crossed Andreev reflection (CAR). Quasiparticle tunneling between the quantum dots, via the superconducting leads, is called elastic co-tunneling (EC). Interestingly, CAR is unaffected by the magnetic field through the junction, such that the supercurrent dependence on $\Phi_J$ is determined by AR and EC. The superconducting phase difference $\phi_-$ controlled by $\Phi_S$, however, only affects AR and CAR, but not EC. We will show that these dependencies lead to novel current-phase relationships and results in a strong flexibility to realize Majorana bound states.
The requirement for only small Zeeman splitting gives several advantages over other quantum dot Majorana bound state proposals. Firstly, it opens a wider range of suitable materials. Experimentally, supercurrents through quantum dots formed in carbon nanotubes [@Herrero2006], InAs nanowires [@Dam2006], InAs quantum dots [@Buizert2007], and graphene [@Dirks2011] have been observed, making them potential candidates in which to observe the many-particle Majorana bound states. Secondly, the experimental conditions are strongly relaxed, since the regime where Majorana modes arise is easier to reach and CPR measurements are greatly simplified.
\[sec:EffHam\]Derivation of effective Hamiltonian
=================================================
The Hamiltonian describing the system shown in Fig. \[fig:Schematic\] is given by
$${\mathcal{H}}={\mathcal{H}}_S+{\mathcal{H}}_D+{\mathcal{H}}_U+{\mathcal{H}}_T.
\label{eqn:HamOverview}$$
We will describe each term in turn. The superconducting loop is modelled as a linear chain with chemical potential $\mu$, hopping strength $t_S$, and superconducting order parameter $\Delta_Se^{i\phi_j}$. Note that whilst the magnitude $\Delta_S$ is expected to be constant throughout the superconductor, the superconducting phase $\phi_j$ will change as we wind around the loop enclosing the flux $\Phi_S$. We write
$$\begin{aligned}
{\mathcal{H}}_S&=-t_s\sum_{{\langle}i,j{\rangle},\sigma}{\hat{f}^{\dag}}_{i\sigma}{\hat{f}}_{j\sigma}+\mu\sum_{j,\sigma}{\hat{f}^{\dag}}_{j\sigma}{\hat{f}}_{j\sigma}\nonumber\\
&\;\;\;\;\;+\sum_{j}(\Delta_S e^{i\phi_{j}}{\hat{f}^{\dag}}_{j{\uparrow}}{\hat{f}^{\dag}}_{j{\downarrow}}+\text{h.c.}).\end{aligned}$$
Here, ${\langle}i,j{\rangle}$ denotes pairs of nearest neighbors. We will ultimately only be concerned with the phases on the ends of the superconductors - let us label these $\phi_1$ and $\phi_N$, and $\phi_{\pm}=\phi_1\pm\phi_N$. Importantly, the phase difference $\phi_-$ can be tuned to a high degree of accuracy by adjusting $\Phi_S$.
The dots (${\mathcal{H}}_D$) have an on-site potential $\epsilon_j$, and are considered in the presence of a small (but non-zero) magnetic field, leading to a Zeeman energy splitting $E_Z$
$${\mathcal{H}}_D=\sum_{j=1}^2\sum_{\sigma}\epsilon_j{\hat{c}^{\dag}}_{j\sigma}{\hat{c}}_{j\sigma}-E_Z\sum_{j}({\hat{c}^{\dag}}_{j{\uparrow}}{\hat{c}}_{j{\uparrow}}-{\hat{c}^{\dag}}_{j{\downarrow}}{\hat{c}}_{j{\downarrow}}).$$
We consider the quantum dots to be operated in the few electron regime, where a strong Coulombic repulsion is present between two electrons on the same dot. We model this as a Hubbard-style interaction
$${\mathcal{H}}_U=U\sum_{j=1}^2{\hat{c}^{\dag}}_{j{\uparrow}}{\hat{c}}_{j{\uparrow}}{\hat{c}^{\dag}}_{j{\downarrow}}{\hat{c}}_{j{\downarrow}}.
\label{eqn:HamU}$$
The two superconductors are coupled to the dots via the proximity effect, which allows for electron tunneling between either end of the superconductor and either dot:
$${\mathcal{H}}_T=\sum_{n=\{1,N\}}\sum_{j=\{1,2\}}\sum_{\sigma}\Gamma_{n,j}({\hat{f}^{\dag}}_{n\sigma}{\hat{c}}_{j\sigma}+\text{h.c.}).
\label{eqn:HamT}$$ We assume that the tunnel couplings $\Gamma_{n,j}$ to the two dots all have the same amplitude $\Gamma$, and study the effect of a finite phase difference. In practice this phase difference is realized by the magnetic flux through the quantum dot Josephson junction, as shown in Fig. \[fig:Schematic\]. Within the Peierls substitution, the tunneling amplitude acquires a phase given by the line integral of the vector potential
$$\Gamma_{n,j}=\Gamma\exp\left(-i\frac{\pi}{\Phi_0}\int_j^n\mathbf{A}\cdot d\mathbf{r}\right).$$
Here, $\Phi_0=h/2e$ is the magnetic flux quantum, and we integrate along lines between the superconductors and the dots (see Fig. \[fig:Schematic\]). This is a slight simplification, as the electrons do not strictly travel along any given line, but the result is essentially the same. Choosing the gauge $\mathbf{A}=-By\hat{x}$ (refer to Fig. \[fig:Schematic\] for the axes), we calculate
$$\Gamma_{n,j}=\Gamma\exp\left(\pm i \pi \frac{\Phi_J}{\Phi_0}\right),$$
where $\Phi_J$ is the enclosed flux, and the positive sign is taken when the path travels anticlockwise about the origin.
As all terms in the Hamiltonian involving the superconductors are quadratic, they may be integrated out, yielding an effective Hamiltonian for the dots, via a Gaussian integration over Grassmann variables [@AltlandSimons]. To do this, we write down the partition function of the system
$$\begin{aligned}
{\mathcal{Z}}&=\int D[\bar{\Psi},\Psi]\exp(-S[\bar{\Psi},\Psi]),\label{PartitionFunction}\\
S[\bar{\Psi},\Psi]&=\int_0^{\beta}d\tau\left[\bar{\Psi}\partial_{\tau}\Psi+{\mathcal{H}}[\bar{\Psi},\Psi]\right],\end{aligned}$$
where ${\mathcal{H}}$ is the functional form of the Hamiltonian. Here, $\Psi$ and $\bar{\Psi}$ are vectors of Grassmann variables, which are the eigenvalues of the annihilation operators for some fermionic coherent state $|\psi{\rangle}$ [@AltlandSimons]. We use the following notation to separate the Grassmann variables associated with the dots from those associated with the superconductor,
$${\hat{c}}_{i\sigma}|\psi{\rangle}=\psi_{i\sigma}|\psi{\rangle},\;\;{\hat{f}}_{j\sigma}|\psi{\rangle}=\phi_{j\sigma}|\psi{\rangle}.$$
Then we can define
$$\Psi=\left(\begin{array}{c}\psi\\\phi\end{array}\right),\;\;\bar{\Psi}=\left(\begin{array}{cc}\bar{\psi}&\bar{\phi}\end{array}\right).$$
Here $\psi$($\phi$) contains the $\psi_{i\sigma}$($\phi_{j\sigma}$) terms, and $\bar{\psi}$ ($\bar{\phi}$) contains their adjoints, which are defined by ${\langle}\psi|{\hat{c}^{\dag}}_{i\sigma}={\langle}\psi|\bar{\psi}_{i\sigma}$ and ${\langle}\psi|{\hat{f}^{\dag}}_{j\sigma}={\langle}\psi|\bar{\phi}_{j\sigma}$. As we are dealing with a superconducting system, it is necessary to use an electron-hole Nambu basis, and include the adjoint variables $\bar{\psi}_{i\sigma}$ ($\bar{\phi}_{j\sigma}$) in the $\psi$ ($\phi$) vector (which doubles in size). We can then expand ${\mathcal{H}}[\bar{\Psi},\Psi]$ into the terms from equation \[eqn:HamOverview\], replacing $\Psi$ with either $\psi$ or $\phi$ depending on which species is being considered. Furthermore, quadratic Hamiltonian terms can be rewritten as matrix products, for example ${\mathcal{H}}_D[\bar{\psi},\psi]\rightarrow \bar{\psi}H_D\psi$. In this notation, our action becomes $$\begin{aligned}
S[\bar{\Psi},\Psi]&=\int_0^{\beta}d\tau\left[\bar{\psi}\partial_{\tau}\psi+{\mathcal{H}}_U[\bar{\psi},\psi]+\bar{\psi}H_D\psi\right.\nonumber\\&\left.+\bar{\phi}{\mathcal{G}}^{-1}\phi+\bar{\psi}M\phi+\bar{\phi}M^{\dag}\psi\right].
\label{eqn:EffAction}\end{aligned}$$
Here, we have defined ${\mathcal{G}}^{-1}=\partial_{\tau}+H_S$, and split the terms from ${\mathcal{H}}_T$ into $M$, which contains the information on tunneling from the superconductors to the dot, and its adjoint $M^{\dag}$. Note that we are treating each Grassmann variable as independent of its corresponding adjoint. We now shift all $\phi$ dependence to a single term by completing the square
$$\begin{aligned}
S[\bar{\Psi},\Psi]&=\int_0^{\beta}d\tau\left[\bar{\psi}\partial_{\tau}\psi+{\mathcal{H}}_U[\bar{\psi},\psi]+\bar{\psi}H_D\psi\right.\nonumber\\
&\left.+(\bar{\phi}{\mathcal{G}}^{-1}+\bar{\psi}M){\mathcal{G}}({\mathcal{G}}^{-1}\phi+ M^{\dag}\psi)-\bar{\psi}M{\mathcal{G}}M^{\dag}\psi\right].\end{aligned}$$
The term containing the $\phi$ dependence may be integrated out to give a constant [@AltlandSimons]. Our effective Hamiltonian then consists of the original ${\mathcal{H}}_D$ and ${\mathcal{H}}_U$ terms, and a new term which was the remainder from completing the square
$${\mathcal{H}}_{new}[\psi,\bar{\psi}]=-\bar{\psi}M{\mathcal{G}}M^{\dag}\psi.$$
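For reference, this step uses the standard Gaussian integral over Grassmann fields (see, e.g., [@AltlandSimons]); schematically, with $A={\mathcal{G}}^{-1}$, $\eta=M^{\dag}\psi$ and $\bar\eta=\bar\psi M$,
$$\int D[\bar{\phi},\phi]\;e^{-\bar{\phi}A\phi-\bar{\phi}\eta-\bar{\eta}\phi}=\det(A)\,e^{\,\bar{\eta}A^{-1}\eta},$$
so that, up to the field-independent factor $\det{\mathcal{G}}^{-1}$, the exponent of the dot action acquires exactly the term $-\bar{\psi}M{\mathcal{G}}M^{\dag}\psi$ quoted above.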
This term can be evaluated via matrix multiplication, but we must first specify a basis, which we do by explicitly writing down the vectors $\psi$ and $\phi$. The coupling Hamiltonian ${\mathcal{H}}_T$ only includes operators from the end sites ($1$ and $N$) of the superconducting loop, and thus non-zero contributions will come only from products of matrix elements involving these sites. Equivalently, for our purposes we can use a basis for $\phi$ which only includes these end-site terms, reducing it to a manageable size. We write
$$\begin{aligned}
\bar{\psi}&=\left(\bar{\psi}_{1{\uparrow}},\bar{\psi}_{2{\uparrow}},\bar{\psi}_{2{\downarrow}},\bar{\psi}_{1{\downarrow}},\psi_{1{\uparrow}},\psi_{2{\uparrow}},\psi_{2{\downarrow}},\psi_{1{\downarrow}}\right),\\
\bar{\phi}&=\left(\bar{\phi}_{1{\uparrow}},\bar{\phi}_{N{\uparrow}},\bar{\phi}_{N{\downarrow}},\bar{\phi}_{1{\downarrow}},\phi_{1{\uparrow}},\phi_{N{\uparrow}},\phi_{N{\downarrow}},\phi_{1{\downarrow}}\right).\end{aligned}$$
We assume for now that no flux passes between the dots ($\Gamma_{n,j}=\Gamma$), and then our coupling matrix $M$ is
$$\begin{aligned}
M=M^{\dag}&=\Gamma\left(\begin{array}{cc}M_{AA}&0\\0&-M_{AA}\end{array}\right),\\
M_{AA}&=\left(\begin{array}{cccc}1&1&0&0\\1&1&0&0\\0&0&1&1\\0&0&1&1\end{array}\right).\nonumber\end{aligned}$$
To calculate ${\mathcal{G}}$, we use the Matsubara representation. We make the following substitution $$\psi(\tau)=\frac{1}{\sqrt{\beta}}\sum_{\omega_n}\psi_ne^{-i\omega_n\tau},$$ where $\omega_n$ are the Matsubara frequencies $\omega_n=(2n+1)\frac{\pi}{\beta}$. Then, we have $\partial_{\tau}\rightarrow -i\omega_n$, and we can calculate
$$\begin{aligned}
{\mathcal{G}}^{-1}&=\left(\begin{array}{cc}G_{AA}&G_{AB}\\G_{AB}^{\dag}&G_{BB}\end{array}\right),\nonumber\\&G_{AA}=(\mu-i\omega_n)I,\;\;G_{BB}=-(\mu+i\omega_n)I,\nonumber\\
G_{AB}&=\Delta_S\left(\begin{array}{cccc}0&0&0&e^{i\phi_1}\\0&0&e^{i\phi_2}&0\\0&-e^{i\phi_2}&0&0\\-e^{i\phi_1}&0&0&0\end{array}\right).\nonumber\end{aligned}$$
The inverse to this matrix is
$${\mathcal{G}}=\frac{1}{\mu^2+\omega_n^2+\Delta_S^2}\left(\begin{array}{cc}-G_{BB}&G_{AB}\\G_{AB}^{\dag}&-G_{AA}\end{array}\right),$$
and from this we calculate
$$\begin{aligned}
&-M{\mathcal{G}}M^{\dag}=\frac{-\Gamma^2}{\mu^2+\omega_n^2+\Delta_S^2}\\&\times\left(\begin{array}{cc}M_{AA}(i\omega_n+\mu)M_{AA}&-M_{AA}G_{AB}M_{AA}\\-M_{AA}G_{AB}^{\dag}M_{AA}&M_{AA}(i\omega_n-\mu)M_{AA}\end{array}\right),\\
&M_{AA}^2=2M_{AA},\\&M_{AA}G_{AB}M_{AA}=2(e^{i\phi_1}+e^{i\phi_2})\left(\begin{array}{cccc}0&0&1&1\\0&0&1&1\\-1&-1&0&0\\-1&-1&0&0\end{array}\right).\end{aligned}$$
It remains to invert the Fourier transform, but this will return an action which is non-local in time $$\begin{aligned}
&-\frac{1}{\beta}\sum_n\bar{\psi}_n M{\mathcal{G}}M^{\dag}\psi_n\nonumber\\
&\;\;\;\;\;=-\frac{1}{\beta^2}\int_0^\beta d\tau d\tau'\sum_n\bar{\psi}(\tau)\left(M{\mathcal{G}}M^{\dag}\right)_n\psi(\tau')e^{i\omega_n(\tau'-\tau)}.\end{aligned}$$
Only one summation over $n$ needs to be computed, and this can be performed using complex integration techniques [@AltlandSimons]: $$\begin{aligned}
&\sum_n\frac{1}{\mu^2+\omega_n^2+\Delta_S^2}e^{i\omega_n(\tau-\tau')}\\
&\;\;\;\;\;=\frac{\beta}{e^{-K\beta}+1}\frac{1}{K}e^{-K|\tau-\tau'|},\end{aligned}$$ where $K=\sqrt{\Delta_S^2+\mu^2}$. We then approximate this by the following function $$\frac{\beta}{e^{-K\beta}+1}\frac{1}{K^2}\delta(\tau-\tau').$$
This allows us to write the action as
$$\begin{aligned}
S[\bar{\psi},\psi]&=S_{eff}[\bar{\psi},\psi]+S_{pert}[\bar{\psi},\psi]\\
S_{eff}[\bar{\psi},\psi]&=\int_0^\beta d\tau\left[\bar{\psi}\partial_\tau\psi+{\mathcal{H}}_U[\bar{\psi},\psi]+\bar{\psi} H_D\psi-\bar{\psi}\frac{\Gamma^2}{K^2(e^{-K\beta}+1)}{\mathcal{M}}\psi\right]\\
S_{pert}[\bar{\psi},\psi]&=\int_0^{\beta}d\tau d\tau'\bar{\psi}\left(\frac{\Gamma^2}{K^2(e^{-K\beta}+1)}\delta(\tau-\tau')-\frac{\Gamma^2}{K(e^{-K\beta}+1)}e^{-K|\tau-\tau'|}\right){\mathcal{M}}\psi\label{eqn:SPert}\\
{\mathcal{M}}&=\left(\begin{array}{cc}M_{AA}(-K+\mu)M_{AA}&-M_{AA}G_{AB}M_{AA}\\-M_{AA}G_{AB}^{\dag}M_{AA}&M_{AA}(-K-\mu)M_{AA}\end{array}\right)\end{aligned}$$
Our effective action ${\mathcal{S}}_{eff}$ is local in time, and so we can extract an effective Hamiltonian by undoing the procedure used to write equation \[eqn:EffAction\]. We obtain an elastic co-tunneling (EC) term
$${\mathcal{H}}_{ct}=t\sum_{\sigma}{\hat{c}^{\dag}}_{1,\sigma}{\hat{c}}_{2,\sigma} + \text{h.c.},$$
and crossed (CAR) and normal (AR) Andreev reflection terms
$${\mathcal{H}}_{Ar}=\Delta e^{i\phi_+/2}\cos(\phi_-/2)\sum_{i,j}{\hat{c}^{\dag}}_{i{\uparrow}}{\hat{c}^{\dag}}_{j{\downarrow}}+\text{h.c.},$$
where $\phi_{\pm}=\phi_1\pm\phi_2$, consistent with the phase difference defined above. $t$ and $\Delta$ can be read off immediately from the effective action $$\begin{aligned}
t=\frac{-2\Gamma^2\mu}{K^2(e^{-K\beta}+1)}\\
\Delta=\frac{-2\Delta_S\Gamma^2}{K^2(e^{-K\beta}+1)}.\end{aligned}$$
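As a rough numerical illustration of the structure of these induced couplings (not of their thermal prefactors), one can evaluate $-M{\mathcal{G}}M^{\dag}$ at zero Matsubara frequency using the matrices defined above; the sketch below is our own, with arbitrary parameter values, and is not part of the derivation:

```python
import numpy as np

# Static (omega_n = 0) evaluation of -M G M^dag in the dot Nambu basis defined above.
# Particle-hole-diagonal blocks generate elastic co-tunneling (plus on-site shifts);
# off-diagonal blocks generate local and crossed Andreev reflection. Values are illustrative.
Gamma, mu, Delta_S, phi1, phi2 = 0.2, 0.5, 1.0, 0.3, -0.3
M_AA = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]], dtype=complex)
G_AB = Delta_S*np.array([[0, 0, 0, np.exp(1j*phi1)],
                         [0, 0, np.exp(1j*phi2), 0],
                         [0, -np.exp(1j*phi2), 0, 0],
                         [-np.exp(1j*phi1), 0, 0, 0]])
Ginv = np.block([[mu*np.eye(4), G_AB],
                 [G_AB.conj().T, -mu*np.eye(4)]])
M = Gamma*np.block([[M_AA, np.zeros((4, 4))], [np.zeros((4, 4)), -M_AA]])
Sigma = -M @ np.linalg.inv(Ginv) @ M.conj().T
# hopping-like entries scale with mu; pairing-like entries with Delta_S*(e^{i phi_1}+e^{i phi_2})
print(np.round(Sigma, 3))
```

The pairing entries carry the combination $e^{i\phi_1}+e^{i\phi_2}=2e^{i\phi_+/2}\cos(\phi_-/2)$, which is the origin of the phase structure of ${\mathcal{H}}_{Ar}$.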
Corrections to the effective action can be calculated via a cumulant expansion [@MengPaper; @RozhkovArovas]. To do this, we write $${\mathcal{Z}}\approx\int{\mathcal{D}}[\bar{\psi},\psi]e^{-S_{eff}}(1-S_{pert}+\frac{1}{2}S_{pert}^2-\ldots).
\label{eqn:CumulantExpansion}$$ The first-order correction is the thermal expectation value of $S_{pert}$ in our effective approximation. This can be written $${\langle}S_{pert}{\rangle}_{eff}=\int_0^{\beta}d\tau {\mathcal{G}}_{pert}(\tau)_{i,j}{\langle}{\mathcal{T}}_{\tau}\bar{\psi}_i(\tau)\psi_j(0){\rangle}_{eff}.$$ Here, ${\mathcal{G}}_{pert}$ is taken from the perturbation term (Eq. \[eqn:SPert\]) by writing $S_{pert}[\bar{\psi},\psi]=\int_0^{\beta}d\tau d\tau'\bar{\psi}{\mathcal{G}}_{pert}(\tau-\tau')\psi$. Exact calculations of ${\langle}{\mathcal{T}}_{\tau}\bar{\psi}_i(\tau)\psi_j(0){\rangle}_{eff}$ are done in [@MengPaper] for a similar system to ours, and are of the order of $e^{-|\Delta E|}$, where $\Delta E$ is the spacing between the lowest energy levels. As we are interested in our system at degeneracy, this will be $\approx 1$. The integral over $\tau$ can be calculated by substituting in ${\mathcal{G}}_{pert}$ $$\int_0^\beta d\tau{\mathcal{G}}_{pert}=\frac{\Gamma^2}{K^2(e^{-K\beta}+1)}e^{-K\beta}{\mathcal{M}}.$$ This becomes exponentially small in the large $\beta$ limit, justifying our approximation at low temperatures. Higher-order terms in the cumulant expansion scale with higher powers of $e^{-K\beta}$. The approximation is thus acceptable at low temperatures, but we need to be aware of two sources of error that come with it. Firstly, it will cause corrections to the relative energy levels (as in [@MengPaper]), and secondly it will give any quasiparticles a finite lifetime on the order of $\hbar/{\langle}S_{pert}{\rangle}_{eff}$. The energy corrections will not mix the even and odd parity sectors of the effective Hamiltonian, and as we will show in the next section, this implies they do not prevent the existence of Majorana bound states. However, the finite quasiparticle lifetime will need to be accounted for in any experimental design.
Previous results in the literature show that, to lowest order in the tunneling amplitude, elastic co-tunneling is equal in magnitude and opposite in sign to crossed Andreev reflection [@Falci2001]. When higher-order terms are included, EC has a larger contribution [@Kalenkov2007]; however, the electromagnetic environment [@Yeyati2007] and Coulomb interactions [@Recher2001] can result in CAR being dominant instead. This is demonstrated in recent experiments [@Hofstetter2009; @Herrmann2010].
An effective spin-orbit coupling is obtained by rotating the local magnetic field on dot $2$, using the nanomagnet shown in Fig. \[fig:Schematic\]. This can be treated as a uniform spin rotation on the respective site, sending ${\hat{c}}^{(\dag)}_{2\sigma}\rightarrow\cos(\theta/2){\hat{c}}^{(\dag)}_{2\sigma}+\sigma\sin(\theta/2){\hat{c}}^{(\dag)}_{2\bar{\sigma}}$. The effective Hamiltonian is then [@TonyMennoPaper]:
$$\begin{aligned}
{\mathcal{H}}_{eff}&=\sum_{j,\sigma}\epsilon'_j{\hat{c}^{\dag}}_{j,\sigma}{\hat{c}}_{j,\sigma}-E_Z\sum_{j}({\hat{c}^{\dag}}_{j{\uparrow}}{\hat{c}}_{j{\uparrow}}-{\hat{c}^{\dag}}_{j{\downarrow}}{\hat{c}}_{j{\downarrow}})+U\sum_j{\hat{c}^{\dag}}_{j{\uparrow}}{\hat{c}}_{j{\uparrow}}{\hat{c}^{\dag}}_{j{\downarrow}}{\hat{c}}_{j{\downarrow}}+t\sum_{\sigma}\left(\cos(\theta/2){\hat{c}^{\dag}}_{1,\sigma}{\hat{c}}_{2,\sigma}+\sigma\sin(\theta/2){\hat{c}^{\dag}}_{1,\sigma}{\hat{c}}_{2,\bar{\sigma}}+\text{h.c.}\right)\nonumber\\&+\Delta e^{i\phi_+/2}\cos(\phi_-/2)\left(\sum_i{\hat{c}^{\dag}}_{i{\uparrow}}{\hat{c}^{\dag}}_{i{\downarrow}}+\sum_{\sigma}(\sigma\cos(\theta/2){\hat{c}^{\dag}}_{1\sigma}{\hat{c}^{\dag}}_{2\bar{\sigma}}-\sin(\theta/2){\hat{c}^{\dag}}_{1\sigma}{\hat{c}^{\dag}}_{2\sigma}+\text{h.c.})\right).
\label{EffectiveHamiltonian}\end{aligned}$$
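To make the structure of Eq. \[EffectiveHamiltonian\] concrete, the sketch below is our own numerical transcription of it as a $16\times16$ many-body matrix; the Hermitian conjugates are added explicitly for the full tunneling and pairing blocks, the function and variable names are ours, and parameter values are supplied by the user.

```python
import numpy as np

# Fermionic operators for four spin-orbitals (dot, spin) via a Jordan-Wigner construction.
modes = {(1, +1): 0, (1, -1): 1, (2, +1): 2, (2, -1): 3}   # spin +1 = up, -1 = down
I2, sz = np.eye(2), np.diag([1.0, -1.0])
cr = np.array([[0.0, 0.0], [1.0, 0.0]])                     # single-mode c^dag on {|0>, |1>}

def cdag(dot, spin):
    k = modes[(dot, spin)]
    ops = [sz]*k + [cr] + [I2]*(3 - k)
    m = ops[0]
    for o in ops[1:]:
        m = np.kron(m, o)
    return m

def c(dot, spin):
    return cdag(dot, spin).conj().T

def n(dot, spin):
    return cdag(dot, spin) @ c(dot, spin)

def build_H(eps, EZ, U, t, Delta, theta, phi_plus, phi_minus):
    """16x16 matrix of the effective double-dot Hamiltonian of Eq. (EffectiveHamiltonian)."""
    H = np.zeros((16, 16), dtype=complex)
    for j in (1, 2):
        for s in (+1, -1):
            H += eps[j-1]*n(j, s)                       # on-site energies eps'_j
        H += -EZ*(n(j, +1) - n(j, -1)) + U*(n(j, +1) @ n(j, -1))
    # spin-conserving and spin-flip tunneling (effective spin-orbit from the tilted local field)
    T = np.zeros_like(H)
    for s in (+1, -1):
        T += t*(np.cos(theta/2)*cdag(1, s) @ c(2, s)
                + s*np.sin(theta/2)*cdag(1, s) @ c(2, -s))
    # local (AR) and crossed (CAR) Andreev pairing with the common phase prefactor
    pref = Delta*np.exp(1j*phi_plus/2)*np.cos(phi_minus/2)
    P = np.zeros_like(H)
    for i in (1, 2):
        P += pref*cdag(i, +1) @ cdag(i, -1)
    for s in (+1, -1):
        P += pref*(s*np.cos(theta/2)*cdag(1, s) @ cdag(2, -s)
                   - np.sin(theta/2)*cdag(1, s) @ cdag(2, s))
    return H + T + T.conj().T + P + P.conj().T
```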
The effect of the magnetic flux through the quantum dot Josephson junction can be found by altering $\Gamma_{n,j}$ in the coupling matrix $M$. This results in the elastic co-tunneling term being universally multiplied by a factor that comes from interference between the two possible paths from one dot to the other:
$$t\rightarrow t\cos\left(\frac{\pi}{2}\frac{\Phi_J}{\Phi_0}\right).$$
Crossed Andreev reflection is unaffected by the flux through the junction, as the phases acquired by the electron and its time-reversed hole partner cancel. For normal Andreev reflection, however, this new source of interference adds to the phase difference between the possible parent superconductors. This changes
$$\cos(\phi_-/2){\hat{c}^{\dag}}_{i{\uparrow}}{\hat{c}^{\dag}}_{i{\downarrow}}\rightarrow\cos\left(\phi_-/2\pm \frac{\pi}{2}\frac{\Phi_J}{\Phi_0}\right){\hat{c}^{\dag}}_{i{\uparrow}}{\hat{c}^{\dag}}_{i{\downarrow}},$$
where the positive sign is taken for $i=1$ and negative sign for $i=2$.
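Assuming these substitutions, the junction flux can be folded into the numerical sketch of Eq. \[EffectiveHamiltonian\] given above by modifying only the tunneling and Andreev terms; in this illustration of ours, $\Phi_J$ is measured in units of $\Phi_0$.

```python
def build_H_flux(phiJ, eps, EZ, U, t, Delta, theta, phi_plus, phi_minus):
    """Variant of build_H including the flux Phi_J (in units of Phi_0) through the junction."""
    f = np.pi*phiJ/2
    # dot-only terms (on-site energies, Zeeman splitting, interaction) are unchanged
    H = build_H(eps=eps, EZ=EZ, U=U, t=0.0, Delta=0.0, theta=theta,
                phi_plus=phi_plus, phi_minus=phi_minus)
    T = np.zeros((16, 16), dtype=complex)
    for s in (+1, -1):      # elastic co-tunneling acquires the interference factor cos(f)
        T += t*np.cos(f)*(np.cos(theta/2)*cdag(1, s) @ c(2, s)
                          + s*np.sin(theta/2)*cdag(1, s) @ c(2, -s))
    P = np.zeros_like(T)
    pref = Delta*np.exp(1j*phi_plus/2)
    for i, sgn in ((1, +1), (2, -1)):   # local AR with the shifted phase argument
        P += pref*np.cos(phi_minus/2 + sgn*f)*cdag(i, +1) @ cdag(i, -1)
    for s in (+1, -1):                  # CAR is unaffected by the junction flux
        P += pref*np.cos(phi_minus/2)*(s*np.cos(theta/2)*cdag(1, s) @ cdag(2, -s)
                                       - np.sin(theta/2)*cdag(1, s) @ cdag(2, s))
    return H + T + T.conj().T + P + P.conj().T
```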
\[sec:Characterization\]Degenerate ground states
================================================
For this section we will assume that the effective on-site potential of the two dots has been tuned to the chemical potential of the superconductors, which we define as our zero of energy ($\epsilon_1+t=\epsilon_2+t=0$). A discussion of the effects of the on-site energies deviating from this ‘sweet spot’ has been presented elsewhere [@LeijnseFlensberg]. We will also assume for this section that there is no magnetic flux passing between the dots ($\Phi_J=0$, i.e. $\Lambda=0$).
To investigate the appearance of Majorana bound states in this system, we first investigate what freedom we have to tune to a ground state degeneracy. Here, we explicitly require this degeneracy to be between states with different particle number parity, as these are protected against mixing. Due to this protection, we can separate our basis states into even and odd sectors, reducing the Hamiltonian to an $8\times 8$ matrix for each. These sectors cannot be split further by particle number or spin when the superconducting or anisotropic Zeeman terms are respectively present.
To characterise the system, in Fig. \[fig:CharacterPlots\] we present surface plots of the difference in energy between the lowest energy even and odd particle number eigenstates (which we call the even-odd excitation energy), for various sets of parameters. For fixed values of $t$ and $\Delta$, there exists a minimum value of $U$ required for the degeneracy we require (at $t=\Delta$ and $E_Z=0$, this is at $U=3t$; see Fig. \[fig:CharacterPlots\].a). It should be noted that there exists no degeneracy at $U=0$, $E_Z=0$ for any $t$ or $\Delta$; crossed Andreev reflection prevents our system from realising a spinful Kitaev chain in the non-interacting limit.
Above the minimum $U$ value, the degeneracy can always be realized by tuning the superconducting phase difference $\phi_-$. This is important, as whilst other parameters will be relatively constrained in an experiment, the relative superconducting phases are freely tunable via changing the flux $\Phi_S$ through the superconducting loop, as shown in Fig. \[fig:Schematic\]. Furthermore, higher order corrections from the cumulant expansion in Sec.\[sec:EffHam\] will not mix the even and odd sectors, and thus, though they might cause corrections to the position of the lines of degeneracy, they will not remove the ability to tune to them.
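The even-odd excitation energy itself is straightforward to evaluate from the sketch of Eq. \[EffectiveHamiltonian\] above, by diagonalizing the two parity blocks separately; the parameters below are purely illustrative and are not those used for Fig. \[fig:CharacterPlots\].

```python
def even_odd_gap(H):
    """Odd minus even ground-state energy; H conserves fermion parity, so the blocks decouple."""
    parity = np.array([bin(i).count("1") % 2 for i in range(16)])
    E = []
    for p in (0, 1):
        idx = np.where(parity == p)[0]
        E.append(np.linalg.eigvalsh(H[np.ix_(idx, idx)])[0])
    return E[1] - E[0]

phis = np.linspace(0, 2*np.pi, 201)
gaps = [even_odd_gap(build_H(eps=[-1.0, -1.0], EZ=0.05, U=10.0, t=1.0, Delta=1.0,
                             theta=np.pi/2, phi_plus=0.0, phi_minus=p)) for p in phis]
# zeros of `gaps` locate the even-odd degeneracies that can be reached by tuning Phi_S
```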
It is also important to note that the even-odd degeneracy can be reached for $E_Z$ at and near $0$. When $E_Z\approx \pm k_BT$, we expect a single MBS to be present on either dot; the MBS will be different depending on the sign of $E_Z$. When $E_Z=0$, the well-known Kramers degeneracy is also present in the odd parity states. The two species of MBS on each dot then should form Kramers pairs, as outlined in [@WolmsSternFlensberg]. However, our system as described does not have the means for protecting against the mixing of the Kramers pairs, and so gapping out one of the species by a small magnetic field $E_Z>k_B T$ is preferable. As we are requiring only small $E_Z$, the spin-orbit coupling angle $\theta$ will have negligible impact on the eigenenergies (see Fig.\[fig:CharacterPlots\].b).
In the limit that $U\rightarrow\infty$, the system can no longer support states containing two electrons on a single dot. This reduces our basis to five even particle number and four odd particle number states. We break the Kramers degeneracy and diagonalize our Hamiltonian for non-zero $E_Z$, but then consider the form of the wavefunctions as $E_Z\rightarrow 0$. This provides a good approximation for the negligible $E_Z$ case, where the Kramers degeneracy is only broken on the order of the temperature. We will detail the $E_Z>0$ results here - our procedure easily generalises to the $E_Z<0$ sector, and to the $E_Z=0$ sector with the mixing of the Kramers pairs.
At $E_Z=0^+$, we find an even particle number state $|\Psi_E{\rangle}$ with energy $\epsilon=-\sqrt{2}\Delta|\cos(\phi_-)|$, and an odd particle number state $|\Psi_O{\rangle}$ with energy $\epsilon=-t$. The requirement for a degenerate ground state then is that $\sqrt{2}\Delta|\cos(\phi_-)|=t$. This requirement will be satisfied at some $\phi_-$ whenever $\sqrt{2}\Delta > t$. From Fig. \[fig:CharacterPlots\], we see that this upper bound is lowered if $U$ is finite. When $\cos(\phi_-)>0$, the two lowest energy eigenstates take the form
$$\begin{aligned}
|\Psi_E{\rangle}&=\frac{1}{2}\left[\sqrt{2}e^{i\phi_+/4}-e^{-i\phi_+/4}\cos(\theta/2)({\hat{c}^{\dag}}_{1{\uparrow}}{\hat{c}^{\dag}}_{2{\downarrow}}+{\hat{c}^{\dag}}_{2{\uparrow}}{\hat{c}^{\dag}}_{1{\downarrow}})\right.\nonumber\\&\left.\hspace{0.8cm}+e^{-i\phi_+/4}\sin(\theta/2)({\hat{c}^{\dag}}_{1{\uparrow}}{\hat{c}^{\dag}}_{2{\uparrow}}+{\hat{c}^{\dag}}_{1{\downarrow}}{\hat{c}^{\dag}}_{2{\downarrow}})\right]|v{\rangle},\\
|\Psi_O{\rangle}&=\frac{1}{\sqrt{2}}\left[\cos(\theta/4)({\hat{c}^{\dag}}_{2{\uparrow}}-{\hat{c}^{\dag}}_{1{\uparrow}})\right.\nonumber\\&\left.\hspace{2.5cm}+\sin(\theta/4)({\hat{c}^{\dag}}_{2{\downarrow}}+{\hat{c}^{\dag}}_{1{\downarrow}})\right]|v{\rangle}.\end{aligned}$$
\[sec:Evidence for Majorana bound states\] Evidence for Majorana bound states
=============================================================================
We now present evidence for the existence of Majorana bound states when our system is close to a degeneracy. We continue to take the $U\rightarrow\infty$ limit, which is equivalent to an assumption that doublons are not present. In Fig. \[fig:DoublonDensity\], we see that for $U$ much greater than the critical value the doublon density has dropped to a negligible amount. This implies that the following arguments should hold for a large range of finite $U$ also. Experimentally, quantum dots can have charging energies of several meV, so that $U$ will typically be large.
![Doublon (doubly occupied site) density for the lowest energy even and odd states of the double quantum dot as $U$ increases. Other parameters are $\Delta=t$, $\theta=\phi_-=\phi_+=0$, $E_Z=0$. We see that the doublon density is fairly small for most $U>3t$ (to the right of the dotted line), which is where the ground state degeneracy is present. Thus the infinite $U$ approximation should be reasonably accurate for most systems where a degenerate ground state is present.[]{data-label="fig:DoublonDensity"}](EigenstatePlot.eps){width="8.5cm"}
As in any interacting problem, the form of the excitations between two states is difficult to determine. It is possible to show for this system that no excitation operators between $|\Psi_E{\rangle}$ and $|\Psi_O{\rangle}$ may consist only of single creation or annihilation operators. To see this, consider the part of the operator that would excite $|\Psi_O{\rangle}$ to the two-particle basis state components of $|\Psi_E{\rangle}$. This must not generate doublons under our assumption, and so it must take the form
$$\begin{aligned}
e^{-i\phi_+/4}A\left[-\frac{\cos(\theta/2)}{\cos(\theta/4)}{\hat{c}^{\dag}}_{2{\downarrow}}-\frac{\cos(\theta/2)}{\sin(\theta/4)}{\hat{c}^{\dag}}_{2{\uparrow}}\right]\\
+e^{-i\phi_+/4}B\left[\frac{\cos(\theta/2)}{\cos(\theta/4)}{\hat{c}^{\dag}}_{1{\downarrow}}-\frac{\cos(\theta/2)}{\sin(\theta/4)}{\hat{c}^{\dag}}_{1{\uparrow}}\right].\end{aligned}$$
Then, evaluating the action of our excitations upon $|\Psi_O{\rangle}$, we find the following two equations which need to be fulfilled: $$\begin{aligned}
(A+B)\cos(\theta/2)\tan(\theta/4)&=\frac{1}{\sqrt{2}}\sin(\theta/2)\\
(A+B)\cos(\theta/2)\cot(\theta/4)&=-\frac{1}{\sqrt{2}}\sin(\theta/2).\end{aligned}$$ However, these can never be satisfied simultaneously! As such, an excitation made out of single products of creation and annihilation operators is not possible, which forces our solutions to differ significantly from those studied in [@Alicea].
![Surface plot of the even-odd excitation energy showing the spinless Kitaev phase at large $\epsilon_1=\epsilon_2=E_Z$, and the DQD phase at vanishing $E_Z$. Our other parameters are $\Delta=\sqrt{2}t$, $\theta=\pi/2$, and $U=10$. We see that we can continuously transition from a system supporting Kitaev Majorana bound states (which are achieved as $E_Z\rightarrow\infty$ and $\Delta\cos(\phi_-/2)=t$) to the Majorana bound states we study in this paper (where $E_Z/t\approx k_B T$), whilst maintaining a degeneracy (white lines) at all times. This gives further evidence that our description of the system as containing Majorana bound states is accurate.[]{data-label="fig:ChangingPhase"}](SamePhasePlot.eps){width="8.5cm"}
This result is not surprising. In general only non-interacting systems can be guaranteed to have excitations consisting of single products of creation and annihilation operators. However, without this restriction, there is a large degree of freedom in our choice of possible operators that excite between our ground states. The question then is: if we were to write down an operator that has the form of a Majorana bound state, would we be correct in doing so?
The properties of Majorana bound states in non-interacting systems are well-known [@SimonReview; @AliceaReview; @Kitaev]. We wish to interpret the properties of these non-interacting excitations in terms of the states they excite between. Then, extending to the interacting case, if our system has these properties the Majorana picture will be the correct way to view it.
Firstly, to have a zero energy excitation, we require $|\Psi_O{\rangle}$ and $|\Psi_E{\rangle}$ to be degenerate, as has been discussed in the previous section. Then, we note that a general non-interacting Majorana can be written $\gamma=\hat{C}+\hat{C}^{\dag}$, where $\hat{C}^{\dag}$ is a sum of creation operators, and $\hat{C}$ is a sum of annihilation operators. We want both of these terms to excite between the ground states, as otherwise $\hat{C}$ and $\hat{C}^{\dag}$ are the excitations themselves, not $\gamma$. As such, we require *both* ${\langle}\Psi_E|\hat{C}^{\dag}|\Psi_O{\rangle}$ and ${\langle}\Psi_E|\hat{C}|\Psi_O{\rangle}$ to be non-zero. In general, this will be satisfied as long as ${\langle}\Psi_E|{\hat{c}^{\dag}}_i|\Psi_O{\rangle}$ and ${\langle}\Psi_E|{\hat{c}}_i|\Psi_O{\rangle}$ are both themselves non-zero for $i$ within the region our MBS is localised to.
For our system, we want to consider excitations on either the first or the second site, and thus we calculate ${\langle}\Psi_E|{\hat{c}^{\dag}}_{1{\uparrow}}|\Psi_O{\rangle}=\frac{1}{2\sqrt{2}}\sin(\theta/4)$, ${\langle}\Psi_E|{\hat{c}}_{1{\uparrow}}|\Psi_O{\rangle}=\frac{1}{2}e^{-i\phi_+/4}\cos(\theta/4)$. Similar results are found for the other spin species and sites. We see that this condition does hold for our system, except at $\theta=0$ (up to rotations of $2\pi$). This is expected, as spin-orbit coupling is known to be required for Majorana bound states to exist [@AliceaReview].
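These matrix elements can also be checked directly on numerically obtained ground states, using the many-body sketch above; $\phi_-$ below should be tuned to an even-odd degeneracy found as in the previous section, and all values are again illustrative.

```python
def lowest_states(H):
    """Lowest even- and odd-parity eigenstates of H, embedded back into the full Fock space."""
    parity = np.array([bin(i).count("1") % 2 for i in range(16)])
    out = []
    for p in (0, 1):
        idx = np.where(parity == p)[0]
        w, v = np.linalg.eigh(H[np.ix_(idx, idx)])
        full = np.zeros(16, dtype=complex)
        full[idx] = v[:, 0]
        out.append(full)
    return out   # [|Psi_E>, |Psi_O>]

H = build_H(eps=[-1.0, -1.0], EZ=0.05, U=50.0, t=1.0, Delta=1.0,
            theta=np.pi/2, phi_plus=0.0, phi_minus=1.2)   # tune phi_minus to the degeneracy
psi_E, psi_O = lowest_states(H)
for dot in (1, 2):
    a = psi_E.conj() @ cdag(dot, +1) @ psi_O    # <Psi_E| c^dag_{dot,up} |Psi_O>
    b = psi_E.conj() @ c(dot, +1) @ psi_O       # <Psi_E| c_{dot,up}     |Psi_O>
    print(dot, abs(a), abs(b))                  # both finite for theta != 0
```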
Finally, as in [@Kitaev], we require that our Majorana bound states come in spatially separated pairs (i.e. pairs which are separated by a distance greatly exceeding the exponential localisation length). In our system spatial separation is limited, as we only have two sites. This problem is naturally dealt with in larger arrays of quantum dots, which will be required to demonstrate braiding (which cannot be achieved with only two MBSs [@SimonReview]). For experimental realisation of localisation in our system, we need to fine-tune our parameters enough that the localisation length (which is a function of fluctuations in the ground state energy gap) is much less than a single site. This is especially important, as unlike the quadratic protection found in the system of [@LeijnseFlensberg], the band gap here grows linearly with our shift from the degeneracy. However, we have great control over the superconducting phase difference $\phi_-$ through the flux $\Phi_S$. For small deviations from the degeneracy ($\Delta E\ll E_Z$), we would expect the MBSs to delocalise exponentially over the system in a manner similar to the Kitaev chain [@SauDasSarma]. At deviations larger than this, mixing of Kramers pairs will be the biggest concern, as our proposal does not protect against this in the way of [@WolmsSternFlensberg].
We can relax the localisation condition somewhat, as suggested in [@TonyMennoPaper]. To do so, we note that a braiding consists of evolving a creation operator by a phase $e^{i\phi}$. The corresponding annihilation operator must then evolve by the phase $e^{-i\phi}$, and so the number operator will be invariant. We should thus be able to use any number operators we wish to describe our Majorana bound state, whilst retaining the localization for the purposes of braiding by insisting that single products of creation and annihilation operators are restricted to a small number of neighbouring sites. This argument is true for any spin rotation (as we cannot spatially separate spins). As such, we define the rotated number operators $\hat{n}_{i\sigma\rho}$ by
$$\begin{aligned}
\hat{n}_{i\sigma\rho} &= \cos^2(\rho/2)\hat{n}_{i\sigma_z}+\sin^2(\rho/2)\hat{n}_{i\bar{\sigma}_z}\nonumber\\&\hspace{2cm}+\frac{1}{4}\sin(\rho)(\hat{n}_{i\sigma_x}-\hat{n}_{i\bar{\sigma}_x}),\end{aligned}$$
which have corresponding rotated creation operators ${\hat{c}^{\dag}}_{i\sigma\rho}=\cos(\rho/2){\hat{c}^{\dag}}_{i\sigma}+\sigma\sin(\rho/2){\hat{c}^{\dag}}_{i\bar{\sigma}}$.
Our Majorana bound state is then a self-adjoint operator that excites each ground state to the other, made up of products of these rotated number operators, and the creation and annihilation operators from a single site. In [@TonyMennoPaper] it was required only that we consider the effect of this excitation within the ground state subspace, but it can be generalised to the entire $16$ dimensional Hilbert space. If we set $\rho=\frac{\pi}{2}-\frac{\theta}{2}$ and $\eta=\frac{\pi}{2}+\frac{\theta}{2}$, and define
$$\begin{aligned}
\hat{z}_{i\sigma}&=1-\hat{n}_{i\sigma},\\
\hat{g}_{i\sigma}&=e^{-i\phi_+/4}{\hat{c}^{\dag}}_{i\sigma}+e^{i\phi_+/4}{\hat{c}}_{i\sigma},\label{g1Eqn}\\
\hat{g}'_{i\sigma}&=\frac{1}{\sqrt{2}}(e^{i\phi_+/4}{\hat{c}^{\dag}}_{i\sigma}+e^{-i\phi_+/4}{\hat{c}}_{i\sigma}),\label{g2Eqn}\end{aligned}$$
then we can write operators $\gamma_1$ and $\gamma_2$ localized to sites $1$ and $2$ respectively. These correspond to quasiparticles with finite lifetimes, as discussed in section \[sec:EffHam\].
$$\begin{aligned}
\gamma_1&={\hat{z}}_{1{\downarrow}}\{-\hat{g}_{1{\uparrow}}\cos(\theta/4){\hat{z}}_{2{\uparrow}\eta}{\hat{z}}_{2{\downarrow}\eta}+\hat{g}'_{1{\uparrow}}[\sin(\theta/4)(\hat{n}_{1{\uparrow}\eta}{\hat{z}}_{2{\downarrow}\eta}+\hat{n}_{2{\downarrow}\eta}{\hat{z}}_{2{\uparrow}\eta})+\cos(\theta/4)(\hat{n}_{2{\downarrow}\eta}-\hat{n}_{2{\uparrow}\eta})]\}\nonumber\\
&+{\hat{z}}_{1{\uparrow}}\{\hat{g}_{1{\downarrow}}\sin(\theta/4){\hat{z}}_{2{\uparrow}\eta}{\hat{z}}_{2{\downarrow}\eta}+\hat{g}'_{1{\downarrow}}[\cos(\theta/4)(\hat{n}_{2{\uparrow}\eta}{\hat{z}}_{2{\downarrow}\eta}+\hat{n}_{2{\downarrow}\eta}{\hat{z}}_{2{\uparrow}\eta})+\cos(\theta/4)(\hat{n}_{2{\downarrow}\eta}-\hat{n}_{2{\uparrow}\eta})]\}\\
\gamma_2 &={\hat{z}}_{2{\downarrow}}\{\hat{g}_{2{\uparrow}}\cos(\theta/4){\hat{z}}_{1{\uparrow}\rho}{\hat{z}}_{1{\downarrow}\rho}+\hat{g}'_{2{\uparrow}}[\sin(\theta/4)(\hat{n}_{1{\uparrow}\rho}{\hat{z}}_{1{\downarrow}\rho}+\hat{n}_{1{\downarrow}\rho}{\hat{z}}_{1{\uparrow}\rho})+\cos(\theta/4)(\hat{n}_{1{\uparrow}\rho}-\hat{n}_{1{\downarrow}\rho})]\}\nonumber\\
&+{\hat{z}}_{2{\uparrow}}\{\hat{g}_{2{\downarrow}}\sin(\theta/4){\hat{z}}_{1{\uparrow}\rho}{\hat{z}}_{1{\downarrow}\rho}+\hat{g}'_{2{\downarrow}}[-\cos(\theta/4)(\hat{n}_{1{\uparrow}\rho}{\hat{z}}_{1{\downarrow}\rho}+\hat{n}_{1{\downarrow}\rho}{\hat{z}}_{1{\uparrow}\rho})+\sin(\theta/4)(\hat{n}_{1{\uparrow}\rho}-\hat{n}_{1{\downarrow}\rho})]\}.\end{aligned}$$
Note that these operators are not unique; for example the operator $\hat{n}_{1{\uparrow}\rho}\hat{n}_{1{\downarrow}\rho}$ does not act on $|\Psi_E{\rangle}$ or $|\Psi_O{\rangle}$ in this limit, and so terms containing this operator can be removed, allowing us to rewrite our Majorana bound state in terms of products of no more than $5$ creation and annihilation operators. These specific operators were chosen as they act only on the finite energy states, and have other properties which will be discussed in the next section.
![(main) Josephson current in the double quantum dot, assuming parity is not conserved. Vertical lines indicate the values of $\phi_-$ required for a ground state degeneracy in the system of matching colour. If a measurement was made slowly enough, fluctuations in the parity of either superconductor would ensure this is the case. (inset) the same plots, but with conservation of parity, assuming we are either in the odd (top) or the even (bottom) sector. A real measurement would likely fall between the two, giving a measure of how well parity is conserved.[]{data-label="JosephsonNoParity"}](JosephsonNoParity.eps){width="8.5cm"}
![Josephson current results for the even particle number sector with increasing $E_Z$, as we head towards the Kitaev regime ($\theta=\pi/2$, $E_Z\rightarrow\infty$). Vertical lines indicate the values of $\phi_-$ required for a ground state degeneracy in the system of matching colour. We see that this has a similar effect to increasing $U$, but the current through the even sector drops to one quarter of the strength. This is due to the Kitaev regime not being able to access as many even particle number states to transmit current. (Inset) the odd particle number sector is similar to the increasing $U$ results also.[]{data-label="JosephsonZFig"}](JosephsonZFig.eps){width="8.5cm"}
In order to provide further evidence that these operators should correspond to Majorana bound states, we demonstrate a method by which our system can be continuously tuned to the one-dimensional wire model of Kitaev. If our on-site energies are locked to the energy of the Zeeman field ($\epsilon_1+t=\epsilon_2+t=E_Z$), then we effectively have a zero-energy on-site potential for spin-up electrons, and an on-site potential for spin down electrons equal to $2E_Z$. In the limit as $E_Z\rightarrow\infty$, this removes the possibility of spin down excitations. An effective Hamiltonian can then be written for the remaining states by removing all terms that contain creation or annihilation operators for spin down electrons, leaving
$$\begin{aligned}
{\mathcal{H}}_{eff}&=t\cos(\theta/2)\cos(2\Lambda){\hat{c}^{\dag}}_{1{\uparrow}}{\hat{c}}_{2{\uparrow}}\nonumber\\&-\Delta e^{i\phi_+/2}\cos(\phi_-/2)\sin(\theta/2){\hat{c}^{\dag}}_{1{\uparrow}}{\hat{c}^{\dag}}_{2{\uparrow}}+\text{h.c.}\end{aligned}$$
We see that this is the superconducting wire model of Kitaev for two sites. This system will then support Majorana bound states when $t\cos(\theta/2)\cos(2\Lambda)=\pm\Delta \cos(\phi_-/2)\sin(\theta/2)$. In Fig. \[fig:ChangingPhase\], we demonstrate the possibility to tune between this limit and the previously considered system with vanishing $E_Z$, whilst retaining a degeneracy at all times. This provides further evidence for the existence of Majorana bound states in the small $E_Z$ limit, as we do not see any evidence for a phase transition during this tuning.
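The two-site Kitaev limit can be checked in a few lines: writing $t'=t\cos(\theta/2)\cos(2\Lambda)$ and $\Delta'=\Delta\cos(\phi_-/2)\sin(\theta/2)$ (phases dropped for brevity), the even and odd ground states of the reduced Hamiltonian are degenerate exactly when $|t'|=|\Delta'|$. The snippet below is a minimal illustration of ours, with arbitrary parameter choices and the flux factor set to one.

```python
def kitaev_two_site_gap(tp, dp):
    """Odd minus even ground energy of H = t' c1^dag c2 - Delta' c1^dag c2^dag + h.c."""
    H_even = np.array([[0.0, -np.conj(dp)], [-dp, 0.0]])   # basis {|00>, c1^dag c2^dag |00>}
    H_odd = np.array([[0.0, tp], [np.conj(tp), 0.0]])       # basis {c1^dag|00>, c2^dag|00>}
    return np.linalg.eigvalsh(H_odd)[0] - np.linalg.eigvalsh(H_even)[0]

t, Delta, theta, phi_minus = 1.0, 1.0, np.pi/2, 0.0   # illustrative values
tp = t*np.cos(theta/2)
dp = Delta*np.cos(phi_minus/2)*np.sin(theta/2)
print(kitaev_two_site_gap(tp, dp))    # ~0: the Kitaev condition |t'| = |Delta'| is met here
```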
If the large $E_Z$ limit could be achieved in an experiment, it may have some advantages over wire systems in which the signatures of Majorana bound states were previously measured. As the magnitude of the effective order parameter can be tuned, it should be relatively easy to find a region where Majorana bound states are present. Also, as our dots are discrete, we should be able to measure the localization of the Majorana bound states at the ends of a line of dots (where a similar limit presents itself), since the intermediate dots should show minimal conductance.
\[sec:Josephson\] Current-phase relationship and Fraunhofer pattern
===================================================================
In this section we discuss the Josephson supercurrent through the double quantum dot. Specifically, we study the dependence of the supercurrent on the phase difference $\phi_-$, and its behaviour when the magnetic flux passing between the dots is changed. The Josephson supercurrent can be calculated as the derivative of the free energy with respect to the superconducting phase difference [@Droste], $J=\partial_{\phi_-}\left(-T\ln\sum_i e^{-E_i/T}\right)$. In the limit of infinite $U$, supercurrent is absent in the odd parity state, as neither local nor crossed Andreev reflection is possible [@TonyMennoPaper]. By comparison, a Josephson current exists for the even number parity sector, and is $4\pi$ periodic in the absence of relaxation. A measurement of the periodicity of the Josephson current could then be used to determine how well parity is conserved. If we start at some value of $\phi_-$ at which the system is non-degenerate, and tune it through a degeneracy, we shift the energy levels until they cross. If our system conserved parity perfectly, our now-excited state would not be able to relax into the new ground state, and the Josephson current would either be $4\pi$ periodic or flat, depending on the initial state of the system. However, as we are still connected to the superconducting leads, we would realistically expect some perturbation from these to occur after a finite period of time that would break this parity conservation. This would then correspond to a sudden jump to the ground state, and a corresponding change in the Josephson current. As the perturbation frequency increases, the free energy would become a function of the entire system rather than one parity sector. In Fig. \[JosephsonNoParity\], we plot this for various values of $U$ (with parity-conserving counterparts inset, and vertical lines to indicate the values of $\phi_-$ required for a ground state degeneracy). Measurements of deviations from this plot as the sweep time (across $\phi_-$) is decreased would then give a measure of the coherence time of our system’s parity conservation.
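The free-energy derivative is simple to evaluate numerically from the many-body sketch introduced earlier, up to the conventional prefactor $2e/\hbar$, which we do not restore; the snippet below is only illustrative and can be restricted to a single parity sector to mimic perfect parity conservation.

```python
def free_energy(phi_minus, T, parity=None, **pars):
    """F = -T ln Z, optionally restricted to one fermion-parity sector (parity = 0 or 1)."""
    H = build_H(phi_minus=phi_minus, **pars)
    if parity is None:
        E = np.linalg.eigvalsh(H)
    else:
        idx = np.where(np.array([bin(i).count("1") % 2 for i in range(16)]) == parity)[0]
        E = np.linalg.eigvalsh(H[np.ix_(idx, idx)])
    return E.min() - T*np.log(np.sum(np.exp(-(E - E.min())/T)))

pars = dict(eps=[-1.0, -1.0], EZ=0.05, U=10.0, t=1.0, Delta=1.0, theta=np.pi/2, phi_plus=0.0)
T, dphi = 0.05, 1e-4
phis = np.linspace(0, 4*np.pi, 400)
J = [(free_energy(p + dphi, T, **pars) - free_energy(p - dphi, T, **pars))/(2*dphi)
     for p in phis]    # pass parity=0 (even) or parity=1 (odd) for the fixed-parity CPR
```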
If we measure the Josephson current as we increase $E_Z$ whilst holding $\epsilon_i=E_Z$ (which takes us towards the effective Kitaev wire described previously), we find that the behaviour mimics that for increasing $U$, except for one difference. At large $E_Z$, all four doubly occupied states are gapped out save for one (where both electrons are spin-up), and our current is thus reduced four-fold. This is seen in Fig. \[JosephsonZFig\]. The similarity in behaviour between increasing $U$ and increasing $E_Z$ provides further evidence that the Majorana bound state picture is correct for the large $U$ limit, as it has similar characteristics to the large $E_Z$ limit where Majorana bound states are expected to appear.
In Fig. \[DoubleSlitDiffraction\] we show the dependence of the supercurrent on the magnetic flux piercing the Josephson junction (see Fig. \[fig:Schematic\]). The standard Fraunhofer pattern [@FraunhoferPaper] arises from interference of the supercurrent density in a junction, which can be thought of as a single slit of finite width. By contrast, we are considering a magnetic flux tube between the dots, and only allowing electrons to pass through either dot. This can be roughly considered a double-slit for electron transport. As such, we expect a double slit-like pattern in the critical current. We show this in Fig. \[DoubleSlitDiffraction\] for both the even and odd sectors, and the entire system.
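A corresponding interference pattern can be generated from the flux-dependent sketch given at the end of Sec. \[sec:EffHam\], by maximizing the numerically obtained current over $\phi_-$ at each value of $\Phi_J$; this is an illustration of ours, not the calculation behind Fig. \[DoubleSlitDiffraction\].

```python
def critical_current(phiJ, T=0.05, dphi=1e-4, **pars):
    """Maximum |dF/dphi_-| over phi_- at fixed junction flux phiJ (in units of Phi_0)."""
    def F(pm):
        E = np.linalg.eigvalsh(build_H_flux(phiJ, phi_minus=pm, **pars))
        return E.min() - T*np.log(np.sum(np.exp(-(E - E.min())/T)))
    return max(abs((F(p + dphi) - F(p - dphi))/(2*dphi)) for p in np.linspace(0, 4*np.pi, 200))

pars = dict(eps=[-1.0, -1.0], EZ=0.05, U=10.0, t=1.0, Delta=1.0, theta=np.pi/2, phi_plus=0.0)
Ic = [critical_current(f, **pars) for f in np.linspace(-4, 4, 81)]   # double-slit-like pattern
```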
As $U$ is increased, we see that the diffraction patterns for either number parity sector saturate, but at different levels. This is to be expected, as the even ground state permits crossed Andreev reflection only, which has no dependence on the flux between the dots, whereas the odd ground state permits elastic co-tunneling only, which does not permit Josephson current. However, it can also be explained by tunneling through the Majorana bound states. To see this, consider the product $\gamma_1\gamma_2$ of our Majorana operators. This can be written as the sum of two parts, $(\gamma_1\gamma_2)_O+(\gamma_1\gamma_2)_E$, which act on the odd and even sectors of the Hilbert space individually. Each term in the odd sector must contain a lone creation or annihilation operator from each site with either spin, and as our operators only act on the finite energy states, where fermion number is conserved, all terms will take the form of ${\hat{c}^{\dag}}_1{\hat{c}}_2$ (or the Hermitian conjugate), multiplied by an appropriate number operator. This excitation then describes only elastic co-tunneling on the odd states. For the even states, we calculate explicitly $$\begin{aligned}
(\gamma_1\gamma_2)_E&=\frac{e^{i\phi_+/2}}{\sqrt{2}}(\cos(\theta/2)({\hat{z}}_{1{\downarrow}}{\hat{z}}_{2{\uparrow}}{\hat{c}}_{2{\downarrow}}{\hat{c}}_{1{\uparrow}}+{\hat{z}}_{1{\uparrow}}{\hat{z}}_{2{\downarrow}}{\hat{c}}_{1{\downarrow}}{\hat{c}}_{2{\uparrow}})\nonumber\\&+\sin(\theta/2)({\hat{z}}_{1{\downarrow}}{\hat{z}}_{2{\downarrow}}{\hat{c}}_{2{\uparrow}}{\hat{c}}_{1{\uparrow}}+{\hat{z}}_{1{\uparrow}}{\hat{z}}_{2{\uparrow}}{\hat{c}}_{2{\downarrow}}{\hat{c}}_{1{\downarrow}}))+\text{h.c.}\end{aligned}$$ We see here that the excitation describes only crossed Andreev reflection for the even states. The Majorana picture then gives the expected result for Josephson current, justifying it further.
The Josephson current through the total system (without parity conservation) displays an interesting trait here, as it retains the double-slit pattern, but picks up a $\pi$ phase shift at $U\rightarrow\infty$. This is due to the Josephson current in the even number parity ground state being the highest when the state is the highest energy, which then requires the odd number parity ground state to be higher energy still. This energy is $\Phi_J$-dependent in the manner shown above. As such, this is not a pattern caused by interference between electrons.
\[sec:conclusion\]Conclusion
============================
We have investigated the double quantum dot model, first proposed in [@TonyMennoPaper] as a potential system to support Majorana bound states. We have derived an effective Hamiltonian of the system, and modeled the spectra, showing a range of parameters for which the system should be easily tunable to a degeneracy. In the limit as $U\rightarrow\infty$, we have written down the form of a Majorana bound state that excites between the ground states. We have shown that the degeneracy can be tuned continuously to a system equivalent to the one-dimensional wire model of Kitaev. Finally, we have discussed how measurements of the Josephson current can display the conservation of parity in the system, and a Fraunhofer-type effect in the Josephson current that disappears when current travels through chargeless modes.
To the best of the authors’ knowledge this model presents the first example of Majorana bound states in a system with strong correlations, where they do not present themselves as single-particle excitations. This makes it important to justify the fact that these excitations are Majorana-like in nature. The similarities to the Kitaev model, and the existence of a continuous transition to this model presents a strong case, which is backed up by the results from the Josephson current. While the two-dot setup as proposed here is the simplest system to construct Majorana bound states, observing non-Abelian statistics from the many-particle Majorana bound states demands larger systems to move the Majorana bound states around each other. These larger systems will also provide topological protection as the Majorana modes are separated by the number of dots [@SauDasSarma; @Fulga2013].
M. Freedman, M. Larsen, and Z. Wang, Comm. Math. Phys. **227**, 605 (2002).
C. Nayak, S. Simon, A. Stern, M. Freedman, and S. Das Sarma, Rev. Mod. Phys. **80**, 1083 (2008).
Roman M. Lutchyn, Jay D. Sau, and S. Das Sarma, Phys. Rev. Lett. **105**, 077001 (2010). Yuval Oreg, Gil Refael, and Felix von Oppen, Phys. Rev. Lett. **105**, 177002 (2010). I.C. Fulga, A. Haim, A.R. Akhmerov, and Y. Oreg, New J. Phys. **15**, 045020 (2013).
Sacépé, B. *et al.* Gate-tuned normal and superconducting transport at the surface of a topological insulator. Nature Comm. **2**, 575 (2011). M. Veldhorst, M. Snelder, M. Hoek, T. Gang, X.L. Wang, V.K. Guduru, U. Zeitler, W.G. v.d. Wiel, A.A. Golubov, H. Hilgenkamp, and A. Brinkman, Nature Materials **11**, 417 (2011).
M. Veldhorst, C.G. Molenaar, X.L. Wang, H. Hilgenkamp, and A. Brinkman, Appl. Phys. Lett. **100**, 072602 (2012).
Nadj-Perge, Stevan and Drozdov, Ilya K. and Li, Jian and Chen, Hua and Jeon, Sangjun and Seo, Jungpil and MacDonald, Allan H. and Bernevig, B. Andrei and Yazdani, Ali, Science, **346**, 6209, 602-607 (2014).
I.C. Fulga, A. Haim, A.R. Akhmerov, and Y. Oreg, New. J. Phys. **15**, 045020 (2013).
A.Y. Kitaev, Physics-Uspekhi **44**, 131(2001).
K. Wölms, A. Stern and K. Flensberg, Phys. Rev. Lett. **113**, 246401 (2014).
A.R. Wright and M. Veldhorst, Phys. Rev. Lett. **111**, 096801 (2013).
P. Jarillo-Herrero, J.A. van Dam, and L.P. Kouwenhoven, Nature **439**, 953 (2006). J.A. van Dam, Y.V. Nazarov, E.P.A.M. Bakkers, S. De Franceschi, and L.P. Kouwenhoven, Nature **442**, 667 (2006). C. Buizert, A. Oiwa, K. Shibata, K. Hirakawa, and S. Tarucha, Phys. Rev. Lett. **99**, 136806 (2007). T. Dirks, T.L. Hughes, S. Lal, B. Uchoa, Y.F. Chen, C. Chialvo, P.M. Goldbart, and N. Mason, Nat. Phys. **7**, 386 (2011).
A. Altland and B. Simons, *Condensed Matter Field Theory*, Cambridge books online (Cambridge University Press, 2010).
T. Meng, S. Florens, and P. Simon, Phys. Rev. B. **79**, 224521 (2009). A.V. Rozhkov, and D.P. Arovas, Phys. Rev. Lett **82**(13), 2788 (1999).
L. Hofstetter, S. Csonka, J. Nygård, and C. Schönenberger, Nature **461**, 960 (2009).
L. G. Herrmann, F. Portier, P. Roche, A.L. Yeyati, T. Kontos, and C. Strunk, Phys. Rev. Lett. **104**, 026801 (2010).
G. Falci, D. Feinberg, and F.W.J. Hekking, Europhys. Lett. **54**, 255 (2001). M.S. Kalenkov and A.D. Zaikin, Phys. Rev. B **75**, 172503 (2007). A. Levy Yeyati, F.S. Bergeret, A. Martín-Rodero, and T.M. Klapwijk, Nature Phys. **3**, 455 (2007). P. Recher, E.V. Sukhorukov, and D. Loss, Phys. Rev. B **63**, 165314 (2001).
C. Chamon, R. Jackiw, Y. Nishida, S.-Y. Pi, and L. Santos, Phys. Rev. B **81**, 224515 (2010). C.W.J. Beenakker, ArXiv:1312.2001 (2013).
---
abstract: 'We study the existence of steady states to the Keller-Segel system with linear chemotactic sensitivity function on a smooth bounded domain in ${\mathbb{R}}^N,$ $N\ge3,$ having rotational symmetry. We find three different types of chemoattractant concentration which concentrate along suitable $(N-2)-$dimensional minimal submanifolds of the boundary. The corresponding density of the cellular slime molds exhibits in the limit one or more Dirac measures supported on those boundary submanifolds.'
address:
- 'O. Agudelo - NTIS Department of Mathematics, Západočeská Univerzita v Plzni'
- 'A. Pistoia - Dipartimento di Scienze Base e Applicate La Sapienza Universitá di Roma.'
author:
- Oscar Agudelo
- Angela Pistoia
title: 'Boundary concentration phenomena for the higher-dimensional Keller-Segel system'
---
Introduction
============
In 1970 Keller and Segel [@KELLERSEGEL] presented a system of two strongly coupled parabolic PDE’s to describe the aggregation of cellular slime molds like Dictyostelium Discoidem. Assuming the whole process to take place on a suitable bounded region $D$ in ${\mathbb{R}}^N,$ $N\ge1$, with no flux across the boundary, the myxoamoebae density of the cellular slime molds $w(t,x)$ and the chemoattractant concentration $v(t,x)$ at time $t$ in a point $x$ in $D$ satisfy the system $$\label{ks}
\left\{\begin{aligned}
&\partial_t w=\nabla\left[\mu(w,v)\nabla w-\chi(w,v)\nabla v\right]\ \hbox{in}\ {\mathbb{R}}\times D\\
&\partial_t v=\gamma_0\Delta v+k(w,v)\ \hbox{in}\ {\mathbb{R}}\times D\\
&\partial_\nu w=\partial_\nu v=0\ \hbox{in}\ {\mathbb{R}}\times\partial D\\
&w(0,x)=w_0(x),\ v(0,x)=v_0(x)\ \hbox{in}\ D,
\end{aligned}
\right.$$ where
- $\mu(w,v)$ is the random motility coefficient
- $\chi(w,v)=\chi_0\mu(w,v)w \nabla \Phi(v)$ is the total chemotactic flux, where $\chi_0$ is a constant and $\Phi$ is a smooth increasing function called the [*chemotactic sensitivity function*]{}
- $\gamma_0>0$ is a constant diffusion coefficient
- $k(w,v)$ models the reaction, which commonly is $k(w,v)=\gamma_0(\alpha w-\beta v) $ for some constants $\alpha$ and $\beta$.
- $\nu$ is the unit inner normal derivative at the boundary.
This model has attracted the attention of many mathematicians, since it has led to a variety of stimulating problems. Many contributions have been made towards understanding analytical aspects of system . We refer the reader to [@BILER; @BRENNERCONSTANTINKADANOFFSCHENKELVENKATARAMANI; @CHILDRESS; @CORRIASPERTHAMEZAAQ; @DOLBEAULTPERTHAME; @GUERRAPELETIER; @HERREROVELAZQUEZ1; @HERREROVELAZQUEZ2; @HERREROVELAZQUEZ3; @HORSTMANN; @JAGERWAND] and references therein. In particular, we quote the recent survey [@BBTW] which focuses on the original model and some of its developments and is devoted to the qualitative analysis of analytic problems, such as the existence of solutions, blow-up and asymptotic behavior.\
The understanding of the global dynamics of the system is strongly related to the existence of steady states, namely solutions to the system $$\label{kss}
\left\{\begin{aligned}
& \nabla\left[\mu(w,v)\nabla w-\chi(w,v)\nabla v\right]=0\ \hbox{in}\ D\\
& \gamma_0\Delta v+k(w,v)=0\ \hbox{in}\ D\\
&\partial_\nu w=\partial_\nu v=0\ \hbox{in}\ \partial D.\\
\end{aligned}
\right.$$
The present paper deals with the existence of stationary solutions to the problem when the chemotactic sensitivity function $\Phi$ is linear, i.e. $\Phi(v)=v$. In such a case the study of the system reduces to a single equation. Indeed, the function $w(x)=\lambda e^{\chi_0 v}$ solves the first equation and the second equation reduces to $$\label{lin-sen}
\Delta v+\alpha \lambda e^{\chi_0 v}-\beta v=0\ \hbox{in}\ D,\quad \partial_\nu v=0\ \hbox{on}\ \partial D.$$
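To see that the ansatz $w=\lambda e^{\chi_0 v}$ does solve the first equation, note that, with the linear sensitivity and the usual convention that the chemotactic part of the flux is $\chi_0\mu(w,v)\,w\nabla v$, the total flux vanishes identically:
$$\mu(w,v)\nabla w-\chi_0\mu(w,v)\,w\nabla v=\mu(w,v)\left(\chi_0\lambda e^{\chi_0 v}\nabla v-\chi_0\lambda e^{\chi_0 v}\nabla v\right)=0,$$
so both the first equation of the system and the no-flux boundary condition for $w$ hold automatically.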
Here we study this problem, which without loss of generality can be rewritten as $$\label{p}
\Delta v+ {\varepsilon}^2 e^{ v}- v=0\ \hbox{in}\ D,\quad \partial_\nu v=0\ \hbox{on}\ \partial D.$$ where $ D$ is a bounded domain in ${\mathbb{R}}^N,$ $N\ge1$, $\nu$ is the inner unit normal vector to ${\partial}D$ and ${\varepsilon}$ is a small parameter.\
The one-dimensional version of the equation was first treated by Schaaf in [@SCHAAF]. In the two-dimensional case, Wang and Wei [@WANGWEI] and Senba and Suzuki [@SENBASUZUKI] proved the existence of a non-constant solution for ${\varepsilon}\le {\varepsilon}_0$, for some ${\varepsilon}_0$, with ${\varepsilon}$ possibly avoiding certain values depending on the domain. Subsequently, del Pino and Wei in [@DELPINOWEI] constructed solutions which concentrate (as ${\varepsilon}$ goes to zero) at $\kappa$ different points $\xi_1,\dots,\xi_\kappa$ on the boundary of $ D$ and $\ell$ different points $\xi_{\kappa+1},\dots,\xi_{\kappa+\ell}$ inside the domain $ D.$ In particular far away from those points the leading behavior of $v_{\varepsilon}$ is given by $$\label{kk}
v_{\varepsilon}(x) \to \sum\limits_{i=1}^\kappa {1\over 2} G(x,\xi_i) + \sum\limits_{i=1}^\ell G(x, \xi_{\kappa+i})$$ where $G(\cdot,\xi)$ is the Green function for the problem $$\left\{
\begin{aligned}
&-\Delta G + G = 8\pi \delta_\xi, \ \mbox{in}\ D,\\
& \frac{\partial G}{\partial\nu}=0\ \mbox{on}\ \partial D.
\end{aligned}
\right.$$ Here ${\delta}_ {{{\boldsymbol \zeta}}}$ represents the Dirac mass concentrated at the point $\zeta$. The corresponding solution $u_{\varepsilon}(x)={\varepsilon}^2e^{v_{\varepsilon}}$ of the first equation of the system exhibits, in the limit, $\kappa$ Dirac measures on the boundary of the domain and $\ell$ Dirac measures inside the domain with weights $4\pi$ and $8\pi$ respectively, namely $$u_{\varepsilon}\rightharpoonup \sum\limits_{i=1}^\kappa 4\pi\delta_{\xi_i}+\sum\limits_{i=1}^\ell 8\pi\delta_{\xi_{\kappa+i}}$$
Recently, Del Pino, Pistoia and Vaira in [@dpv] built a solution to the problem which concentrates along the whole boundary. In particular, far away from the boundary the leading behavior of $v_{\varepsilon}$ is given by $${1\over|\ln{\varepsilon}|}v_{\varepsilon}(x)\to\mathcal G(x)$$ where $ \mathcal G$ is the unique solution of the problem $$\label{G}
- \Delta \mathcal G + \mathcal G =0 \hbox{ in } D, \quad \mathcal G = 1 \hbox{ on } \partial D.$$
The corresponding solution $u_{\varepsilon}={\varepsilon}^2e^{v_{\varepsilon}}$ of the first equation of the system exhibits in the limit a Dirac measure supported on the boundary with a suitable weight, namely $${1\over|\ln{\varepsilon}|}u_{\varepsilon}\rightharpoonup - \partial_\nu \mathcal G\ \delta_{\partial D} .$$ ($\partial_\nu \mathcal G <0$ because of the maximum principle and Hopf’s Lemma.)
As far as we know, the only results dealing with higher-dimensional cases concern the case when $D$ is a ball. Biler in [@BILER] established the existence of a strictly decreasing radial solution, while Pistoia and Vaira in [@pv] found a second radial solution which is increasing and concentrates along the whole boundary of the domain as ${\varepsilon}$ approaches zero.\
Clearly, a natural question arises.\
[*Do there exist solutions to the problem in higher-dimensional domains? In particular, is it possible to find solutions which concentrate on suitable submanifolds of the boundary as the parameter ${\varepsilon}$ approaches zero?*]{}\
In the present paper we give a positive answer when the domain has a rotational symmetry. Let $n=1,2,$ be fixed. Let $\Omega$ be a smooth open bounded domain in ${\mathbb{R}}^2 $ such that $$\overline {\Omega} \subset \{\left( x_{1},\dots,x_{n},x^{\prime }\right) \in \mathbb{R}^{n}\times \mathbb{R}^{2-n}:x_{i}>0,\ i=1,\dots,n\}.$$ Let $M=\sum\limits_{i=1}^n M_i,$ $M_{i}\geq 2,$ and set $$\label{D}
{D}:=\{(y_{1},y_{n},x^{\prime })\in \mathbb{R}^{M_{1}}\times \mathbb{R}^{M_{n}}\times \mathbb{R}^{2-n}:\left( \left\vert
y_{1}\right\vert ,\left\vert y_{n}\right\vert ,x^{\prime }\right)
\in \Omega\}.$$Then $ {D}$ is a smooth bounded domain in $\mathbb{R}^{N}$, $N:=M+2-n.$
The solutions we are looking for are $\Gamma-$invariant for the action of the group $\Gamma:=\mathcal{O}(M_1)\times\dots\times\mathcal{O}(M_n)$ on ${\mathbb{R}}^N$ given by $$(g_{1},\dots,g_{n})(y_{1},\dots,y_{n},x^{\prime }):=(g_{1}y_{1}
,\dots,g_{n}y_{n},x^{\prime }).$$Here $\mathcal{O}(M_i)$ denotes the group of linear isometries of ${\mathbb{R}}^{M_i}.$
A simple calculation shows that a function $v$ of the form $$\label{simmetria}v(y_{1}
,\dots,y_{n},x^{\prime })=u\left( \left\vert y_{1}\right\vert ,\dots,\left\vert
y_{n}\right\vert ,x^{\prime }\right)$$ solves problem if and only if $u$ solves the problem $$-\Delta u-\sum_{i=1}^{n}\frac{M_{i}-1}{x_{i}}\frac{\partial u}{\partial
x_{i}}+u={\varepsilon}^2 e^u\quad \text{in}\ \Omega,\qquad \partial_\nu u=0\quad \text{on}\
\partial \Omega,$$which can be rewritten as$$-\text{div}(a(x)\nabla u)+a(x)u= {\varepsilon}^2 a(x)e^{u}\quad \text{in}\ \Omega ,\qquad \partial_\nu u=0\quad \text{on}\ \partial \Omega,$$where$$\label{a}
a(x_{1},\dots,x_{n}):=x_{1}^{M_{1}-1}\cdots x_{n}^{M_{n}-1}.$$ Thus, we are led to study the more general anisotropic equation
$$\label{emdenfowlerhighdim}
-{{\rm div}}(a(x)\nabla u) +a(x)u = {\varepsilon}^2 a(x)e^{u} \quad \hbox{in} \quad \Omega ,\quad \quad \frac{{\partial}u}{{\partial}\nu}=0 \quad \hbox{on} \quad {\partial}\Omega,$$
where $\Omega\subset {\mathbb{R}}^2$ is a smooth bounded domain, $a: \overline{\Omega}\to {\mathbb{R}}$ is a strictly positive smooth function and ${\varepsilon}>0$ is a small parameter. Here $\nu$ stands for the inner unit normal vector to ${\partial}\Omega$.
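For the reader's convenience, let us recall the two elementary identities behind the reduction above: if $v$ is as in and $x_i=\left\vert y_i\right\vert$, then $$\Delta_{{\mathbb{R}}^{N}} v=\Delta u+\sum_{i=1}^{n}\frac{M_{i}-1}{x_{i}}\frac{\partial u}{\partial x_{i}}\qquad\hbox{and}\qquad \frac{1}{a}\,{{\rm div}}\left(a\nabla u\right)=\Delta u+\nabla(\ln a)\cdot\nabla u,$$ with $a$ as in , which shows that the radial form of the problem and its divergence form coincide.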
Our goal is to construct solutions to problem which concentrate at points $\zeta_1,\dots,\zeta_m$ of the boundary of $\Omega$ as ${\varepsilon}$ goes to $0.$ They correspond via to $\Gamma-$invariant solutions to problem with layers which concentrate along the $\Gamma-$orbit $\Xi(\zeta_i)$ of $\zeta_i,$ for $i=1,\dots,m$ as ${\varepsilon}$ approaches zero. Here $$\label{orbit}
\Xi(\zeta_i):=\{(y_{1},\dots,y_{n},x^{\prime })\in \partial D:\left( \left\vert
y_{1}\right\vert ,\dots,\left\vert y_{n}\right\vert ,x^{\prime }\right)=\zeta_i
\in \partial\Omega\}$$ is a $(N-2)-$dimensional minimal submanifold of the boundary of $D $ diffeomorphic to $ \mathbb{S}^{M_1-1}\times\dots\times\mathbb{S}^{M_n-1} $ where $\mathbb{S}^{M_i-1}$ is the unit sphere in ${\mathbb{R}}^{M_i}.$\
\
In order to state our main result, we need to introduce some tools.
The basic cells in our construction are the so-called [*standard bubbles*]{} $$\label{bubble}
U_{\mu,\zeta}(x):=\ln \left(8\mu^2\over(\mu^2+|x-\zeta|^2)^2\right),\ x,\zeta\in{\mathbb{R}}^2,\ \mu>0,$$ which solve the Liouville equation $$\label{liu}
\Delta U+e^U=0\ \hbox{in}\ {\mathbb{R}}^2.$$ To get a good approximation, we need to project the bubbles in order to fit the Neumann boundary condition with the linear differential operator $$\label{linearpartanisotropiceqn}
{\mathcal{L}}_a u:= \Delta u+ \nabla (\ln a) \cdot \nabla u-u,$$ namely $${\mathcal{L}}_a PU_{\mu,\zeta}={\mathcal{L}}_a U_{\mu,\zeta}\ \hbox{in}\ \Omega,\ \partial_\nu PU_{\mu,\zeta}=0\ \hbox{on}\ {\partial}\Omega.$$ To compute the error given by the projected bubble, we need to perform a careful analysis of the regularity and asymptotic behavior of the Green’s function $G_a(\cdot,\zeta)$ associated with ${\mathcal{L}}_a$ with Neumann boundary condition, namely $$\label{greensfunctioneqn0}\left\{\begin{aligned}
&{\mathcal{L}}_a \,G_a(\cdot,\zeta) + 8\pi{\delta}_ {{{\boldsymbol \zeta}}}=0\quad \quad \hbox{in}\quad \Omega,\\
&
\frac{{\partial}G_a(\cdot,\zeta)}{{\partial}\nu}=0\quad \quad \hbox{on} \quad {\partial}\Omega,
\end{aligned}\right.$$ for every $\zeta\in \overline{\Omega}$. The regular part of $G_a(\cdot,\zeta)$ is defined for $x\in \Omega$ as $$\label{regularpartGreen1}
H_a(x,\zeta):=
\left\{
\begin{array}{ccc}
G_a(x,\zeta) + 4 \ln \left(|x-\zeta|\right),&\quad
\zeta \in \Omega,\\
\\
G_a(x,\zeta) + 8 \ln\left(|x-\zeta|\right),&\quad \zeta\in {\partial}\Omega.
\end{array}
\right.$$
Now, we can state our main results.
Our first result concerns the existence of solutions whose concentration points lie inside the domain and approach different points of the boundary as ${\varepsilon}$ goes to zero.
\[theo2\] Assume
- there exist $m$ different points $\zeta^*_1,\ldots,\zeta^*_m \in {\partial}\Omega$ such that $\zeta^*_i$ is either a strict local maximum or a strict local minimum point of $a $ restricted to ${\partial}\Omega$, satisfying that $${\partial}_{\nu}a(\zeta^*_i ):= \nabla a(\zeta^*_i )\cdot \nu(\zeta^*_i )<0, \quad \forall i=1,\ldots,m.$$
Then if ${\varepsilon}>0$ is small enough there exist $m$ points $\zeta^{{\varepsilon}}_1,\ldots,\zeta^{{\varepsilon}}_m \in \Omega$, $m$ positive real numbers $d^{{\varepsilon}}_1,\ldots,d^{{\varepsilon}}_m$ and a positive solution $u_{{\varepsilon}}$ of equation such that $$\label{profileof thesolutiontheo2}
u_{{\varepsilon}}(x)=\sum_{i=1}^m \ln\left(\frac{1}{\left( {d_i^{\varepsilon}}^2 + |x-\zeta^{{\varepsilon}}_i|^2\right)^2}\right)+ H_a(x,\zeta^{{\varepsilon}}_i) + o(1), \quad \forall x \in \overline{\Omega},$$ where, as ${\varepsilon}\to 0$, $$\zeta^{{\varepsilon}}_i \to \zeta^*_i, \quad {{\rm dist}}(\zeta^{{\varepsilon}}_i ,{\partial}\Omega)={\mathcal{O}}\left(|\ln \left({{\varepsilon}}\right)|^{-1}\right)$$ and $$c\ln \left(\frac{1}{{\varepsilon}}\right) \leq d^{{\varepsilon}}_i \leq C\ln \left(\frac{1}{{\varepsilon}}\right)$$ for some positive constants $c$ and $C.$\
In particular, [$$u_{\varepsilon}(x) \to \sum\limits_{i=1}^m G_a(x,\zeta^*_i)\ \hbox{in}\ \Omega\setminus\operatorname{\cup}\limits_{i=1}^mB(\zeta_i^*,r)\ \hbox{as}\ {\varepsilon}\to0$$ for some $r>0$ and $${\varepsilon}^2\int\limits_\Omega e^{u_{\varepsilon}(x)}dx\to 8\pi m\ \hbox{as}\ {\varepsilon}\to0.$$]{}
Our second result concerns the existence of solutions whose concentration points lie on the boundary and are far away from each other as ${\varepsilon}$ goes to zero.
\[theo3\] Assume
- there exist $m$ different points $\zeta^*_1,\ldots,\zeta^*_m\in {\partial}\Omega$ such that $\zeta^*_i$ is either a strict local maximum or a strict local minimum point of $a $ restricted to ${\partial}\Omega.$
Then if ${\varepsilon}>0$ is small enough there exist $m$ points $\zeta^{{\varepsilon}}_1,\ldots,\zeta^{{\varepsilon}}_m \in {\partial}\Omega$, $m$ positive real numbers $d^{{\varepsilon}}_1,\ldots,d^{{\varepsilon}}_m$ and a positive solution $u_{{\varepsilon}}$ of equation such that $$\label{profileof thesolutiontheo3}
u_{{\varepsilon}}(x)=\sum_{i=1}^m \ln\left(\frac{1}{\left( {d^{{\varepsilon}}_i}^2 + |x-\zeta^{{\varepsilon}}_i|^2\right)^2}\right)+ \frac{1}{2}H_a(x,\zeta^{{\varepsilon}}_i) + o(1), \quad \forall x \in \overline{\Omega},$$ where, as ${\varepsilon}\to 0$, $$\zeta^{{\varepsilon}}_i \to \zeta_i^* \quad \hbox{and} \quad
c \leq d^{{\varepsilon}}_i \leq C,$$ for some positive constants $c$ and $C.$\
In particular, [$$u_{\varepsilon}(x) \to \sum\limits_{i=1}^m {1\over2} G_a(x,\zeta^*_i)\ \hbox{in}\ \Omega\setminus\operatorname{\cup}\limits_{i=1}^mB(\zeta_i^*,r)\ \hbox{as}\ {\varepsilon}\to0$$ for some $r>0$ and $${\varepsilon}^2\int\limits_\Omega e^{u_{\varepsilon}(x)}dx\to 4\pi m\ \hbox{as}\ {\varepsilon}\to0.$$]{}
Our last existence result concerns the existence of solutions whose concentration points lie on the boundary and collapse to the same point as ${\varepsilon}$ goes to zero.
\[theo4\] Assume
- $\zeta_0 \in {\partial}\Omega$ is a strict local maximum point of $a $ restricted to ${\partial}\Omega$.
Then, for any integer $m\geq 1$ and ${\varepsilon}>0$ small enough, there exist $m$ points $\zeta^{{\varepsilon}}_1,\ldots,\zeta^{{\varepsilon}}_m \in {\partial}\Omega$, positive real numbers $d^{{\varepsilon}}_1,\ldots,d^{{\varepsilon}}_m$ and a positive solution $u_{{\varepsilon}}$ of equation such that $$\label{profileof thesolutiontheo4}
u_{{\varepsilon}}(x)=\sum_{i=1}^m \ln\left(\frac{1}{\left( {d^{{\varepsilon}}_i}^2 + |x-\zeta^{{\varepsilon}}_i|^2\right)^2}\right)+ \frac{1}{2}H_a(x,\zeta^{{\varepsilon}}_i) + o(1), \quad \forall x \in \overline{\Omega},$$ where, as ${\varepsilon}\to 0$, $$\zeta^{{\varepsilon}}_i \to \zeta_0\quad \hbox{and} \quad
c\ln \left(\frac{1}{{\varepsilon}}\right) \leq d^{{\varepsilon}}_i \leq C\ln \left(\frac{1}{{\varepsilon}}\right),$$ for some positive constants $c$ and $C.$
In particular, [$$u_{\varepsilon}(x) \to {m\over2} G_a(x,\zeta_0)\ \hbox{in}\ \Omega\setminus B(\zeta_0,r)\ \hbox{as}\ {\varepsilon}\to0$$ for some $r>0$ and $${\varepsilon}^2\int\limits_\Omega e^{u_{\varepsilon}(x)}dx\to 4\pi m\ \hbox{as}\ {\varepsilon}\to0.$$]{}
All the previous arguments yield immediately the following existence result for the higher-dimensional problem .
\[main\] Assume $D$ is as described in .
- If (A1) of Theorem holds, then for every ${\varepsilon}>0$ small enough, there exists a $\Gamma-$invariant solution $v_{\varepsilon}$ to problem with $m$ layers which concentrate at $m$ distinct $(N-2)-$dimensional minimal submanifolds of the boundary of $D $ as ${\varepsilon}\to0.$ [Moreover (see ) $${\varepsilon}^2\int\limits_{D} e^{v_{\varepsilon}(x)}dx\to 8\pi \sum\limits_{i=1}^m |\Xi(\zeta^*_i)|\ \hbox{as}\ {\varepsilon}\to0.$$]{}
- If (A2) of Theorem holds, then for every ${\varepsilon}>0$ small enough, there exists a $\Gamma-$invariant solution $v_{\varepsilon}$ to problem with $m$ layers which concentrate at $m$ distinct $(N-2)-$dimensional minimal submanifolds of the boundary of $D $ as ${\varepsilon}\to0.$ [Moreover (see ) $${\varepsilon}^2\int\limits_{D} e^{v_{\varepsilon}(x)}dx\to 4\pi \sum\limits_{i=1}^m|\Xi(\zeta^*_i)|\ \hbox{as}\ {\varepsilon}\to0.$$]{}
- If (A3) of Theorem holds, then for any integer $m\ge1$ and every ${\varepsilon}>0$ small enough, there exists a $\Gamma-$invariant solution $v_{\varepsilon}$ to problem with $m$ layers which concentrate at the same $(N-2)-$dimensional minimal submanifold of the boundary of $D $ as ${\varepsilon}\to0.$ [Moreover (see ) $${\varepsilon}^2\int\limits_{D} e^{v_{\varepsilon}(x)}dx\to 4\pi m|\Xi(\zeta_0)|\ \hbox{as}\ {\varepsilon}\to0.$$]{}
Let us make some comments.\
First, we strongly believe that our results hold true even if we drop the symmetry assumption. In particular, we conjecture that (i) and (ii) of Theorem \[main\] can be rephrased in the more general form
- [*if $D$ is a general bounded domain in ${\mathbb{R}}^N$ with $N\ge3$ and $\Xi$ is a $(N-2)-$dimensional minimal submanifold (possibly non-degenerate) of the boundary of $D$ with a suitable sign on the sectional curvatures, then problem has a solution with an interior layer concentrating along $\Xi$ as ${\varepsilon}$ goes to zero.*]{}\
- [*if $D$ is a general bounded domain in ${\mathbb{R}}^N$ with $N\ge3$ and $\Xi$ is a $(N-2)-$dimensional minimal submanifold (possibly non-degenerate) of the boundary of $D$, then problem has a solution with a boundary layer concentrating along $\Xi$ as ${\varepsilon}$ goes to zero.*]{}\
Second, the proof of our result relies on a well-known Lyapunov–Schmidt procedure. The same strategy has been used by Wei, Ye and Zhou [@WEIYEZHOU1; @WEIYEZHOU2] to find concentrating solutions for the anisotropic Dirichlet problem $$-{{\rm div}}(a(x)\nabla u) = {\varepsilon}^2 a(x)e^{u} \quad \hbox{in} \quad \Omega ,\quad \quad u=0 \quad \hbox{on} \quad {\partial}\Omega,$$ where $\Omega\subset {\mathbb{R}}^2$ is a smooth bounded domain, $a: \overline{\Omega}\to {\mathbb{R}}$ is a strictly positive smooth function and ${\varepsilon}>0$ is a small parameter.
The structure of the paper is the following. In section \[anisotropic robin’s function\] we perform a careful study of the Green’s function introduced in . In section \[approximation\] we provide the approximation of the solutions predicted by our existence Theorems and compute the error created by this approximation. Section \[Reductionscheme\] concerns the finite-dimensional reduction scheme which is the first step in the proof of our existence results. In section \[energy estimates\] we find precise energy estimates for the approximation found in section \[approximation\]. Finally, in section \[proofoftheorems\] we provide the detailed proof of our existence Theorems using variational and topological arguments.
Anisotropic Green’s function {#anisotropic robin's function}
============================
In this part we analyze the asymptotic boundary behavior of the functions $G(\cdot,\zeta)=G_a(\cdot,\zeta)$ and $H(\cdot,\zeta)=H_a(\cdot,\zeta)$ introduced in and , respectively.
We begin by recalling some well-known facts about Sobolev spaces and refer the reader to [@HAIMBREZIS; @DINEZZAPALATUCCIVALDINOCI; @GILBARGTRUDINGER], and references therein, for an exhaustive description of these spaces and of the related results presented here.
Let $\Omega\subset {\mathbb{R}}^2$ be a bounded domain with smooth boundary. The space $L^{p}(\Omega)$ is the space of measurable functions $v:\Omega \to {\mathbb{R}}$ for which the norm $$\|v\|_{L^{p}(\Omega)}:=\left\{
\begin{array}{ccc}
\left(\int_{\Omega}|v(x)|^p\,dx\right)^{\frac{1}{p}},& 1 \leq p < \infty\\
\\
\sup_{x\in \Omega}|v(x)|,& p=\infty
\end{array}
\right.$$ is finite. The Sobolev space $W^{k,p}(\Omega)$ is the space of functions in $L^p(\Omega)$ having [*weak derivatives*]{}, up to order $k$, also in $L^p(\Omega)$. The space $W^{k,p}(\Omega)$ is a Banach space endowed with the norm $$\|v\|_{W^{k,p}(\Omega)}:=\sum_{i=0}^k
\|D^i v\|_{L^{p}(\Omega)}.$$
Given $k\in \mathbb{N}$, we let $C^{k}(\overline{\Omega})$ denote the space of functions having continuous derivatives of order $k$ up to the boundary. In addition, for any ${\alpha}\in (0,1]$, we denote $C^{k,{\alpha}}(\overline{\Omega})$ the [*Hölder space*]{}, consisting of functions $v \in C^k(\overline{\Omega})$ for which the [*Hölder norm*]{} $$\|v\|_{C^{k,{\alpha}}(\overline{\Omega})}:= \sum_{i=0}^k \|D^i v\|_{L^{\infty}(\Omega)} + \sup \limits_{x,y \in \overline{\Omega}\,\,x\neq y}\frac{|D^i v(x)- D^i v(y)|}{|x-y|^{{\alpha}}}$$ is finite.
We will make use of the following embeddings for Sobolev functions. $$\label{SobolevEmdebbings}
W^{2,p}(\Omega) \hookrightarrow \left\{
\begin{array}{ccc}
W^{1,\frac{2p}{2-p}}(\Omega)\cap C^{0,2\left(1-\frac{1}{p}\right)}(\overline{\Omega}),& 1<p<2\\
\\
C^{1,1-\frac{2}{p}}(\overline{\Omega}), & p>2.
\end{array}
\right.$$
From the continuity of the [*trace operator*]{} together with Sobolev embeddings in one dimensional manifolds, we find that for any $1<p<2$, $$\label{traceembedding}
W^{1,p}(\Omega)\hookrightarrow L^q({\partial}\Omega), \quad \quad \forall q \in \left(1,
\frac{p}{2-p}\right).$$
Set $$\gamma(x)= \left(\nabla \,\ln a\right)(x), \quad \hbox{for } x\in \overline{\Omega}$$ and notice that $\gamma \in C^{\infty}(\overline{\Omega})$, since $a \in C^{\infty}(\overline{\Omega})$ and $a>0$.
Recall also from that $${\mathcal{L}}\,:= \Delta \,+\, \gamma(x)\cdot \nabla \,-\, 1$$ and that $G=G(x,\zeta)$, the [*Green’s function*]{} associated to ${\mathcal{L}}$ satisfies for every $\zeta\in \overline{\Omega}$ the boundary value problem $$\begin{aligned}
{\mathcal{L}}\,G(\cdot,\zeta) + 8\pi{\delta}_ {{{\boldsymbol \zeta}}}&=&0\quad \quad \hbox{in}\quad \Omega, \label{greensfunctioneqn}\\
\nonumber\\
\frac{{\partial}G(\cdot,\zeta)}{{\partial}\nu}&=&0\quad \quad \hbox{on} \quad {\partial}\Omega.\label{greensfunctionbdrcondition} \end{aligned}$$
For $x\in \Omega$, the regular part of $G(x,\zeta)$ is the function $$\label{regularpartGreen}
H(x,\zeta):=
\left\{
\begin{array}{ccc}
G(x,\zeta) + 4 \ln \left(|x-\zeta|\right),&\quad
\zeta \in \Omega,\\
\\
G(x,\zeta) + 8 \ln\left(|x-\zeta|\right),&\quad \zeta\in {\partial}\Omega
\end{array}
\right.$$ Let us introduce the vector function $R=R(z)$, solving $$\label{ellipticequationR20}
\Delta_z R - R = \frac{z}{|z|^2} \quad \quad \hbox{in } {\mathbb{R}}^2, \quad \quad R\in L^{\infty}_{loc}({\mathbb{R}}^2).$$
We remark that standard regularity theory implies that $R\in W^{2,p}_{loc}({\mathbb{R}}^2)\cap C^{\infty}({\mathbb{R}}^2-\{0\})$, for any $p\in (1,2)$. On the other hand, Sobolev embeddings allow us to conclude that for any ball $B_r(0)$ of radius $r>0$ and centered at the origin $$\label{integrabilityR}
R\in W^{1,p}(B_r(0)) \cap C^{0,\frac{1}{p}}(\overline{B_r(0)})
, \quad \quad \forall \,\,p\in(1,\infty).$$
Our first result uses the function $R(z)$ to describe the regularity of the family of functions $\zeta \in \overline{\Omega} \mapsto H(\cdot,\zeta)$ and concerns the local behavior of $H(x,\zeta)$.
\[RegularPartInner\] Let $R=R(z)$ be the function described in . There exists a function $H_1=H_1(x,\zeta)$, such that
- for every $\zeta,x\in \overline{\Omega}$, $$\label{regularpartqualitativebehavior}
H(x,\zeta)=H_1(x,\zeta)\,+\,\left\{
\begin{array}{cc}
4\gamma(\zeta)\cdot R(x-\zeta),& \zeta \in \Omega \\
\\
8\gamma(\zeta)\cdot R(x-\zeta),& \zeta \in {\partial}\Omega
\end{array}
\right.$$
and
- the mapping $
\zeta \in \overline{\Omega} \mapsto H_1(\cdot,\zeta)$ belongs to $C^1\left(\Omega; C^1(\overline{\Omega})\right)\cap C^1({\partial}\Omega;C^1(\overline{\Omega}) )$.
In particular, for any ${\alpha}\in (0,1)$, $H\in C^{0,{\alpha}}(\overline{\Omega}\times {\Omega})\cap C^{0,{\alpha}}(\overline{\Omega}\times {\partial}{\Omega})$ and the Robin’s function $\zeta \in \overline{\Omega} \mapsto H(\zeta,\zeta)$ belongs to $C^{1}(\Omega)\cap C^1({\partial}\Omega)$.
For $\zeta \in \overline{\Omega}$, let us write $$\label{constantczeta}
c:=\left\{
\begin{array}{ccc}
4, &\hbox{if } \zeta \in \Omega\\
8, &\hbox{if } \zeta \in {\partial}\Omega.
\end{array}
\right.$$
From and , we observe that $${\mathcal{L}}\, H(x,\zeta)= c\gamma(x)\cdot\frac{x-\zeta}{|x-\zeta|^2}\,-\, c\ln(|x-\zeta|), \quad \quad \forall\,\, x\in \Omega\label{EqnregularpartGreen2}$$ with the boundary condition $$\frac{{\partial}H(x,\zeta)}{{\partial}\nu_x}= c\nu(x)\cdot\frac{x-\zeta}{|x-\zeta|^2}\,, \quad \quad \forall\,\, x\in {\partial}\Omega, \quad x\neq \zeta
\label{EqnregularpartGreen2boundary}.$$
The right hand side in can be written as $$\label{righthand29}
c\gamma(\zeta)\cdot \frac{x-\zeta}{|x-\zeta|^2} + E(x,\zeta),$$ where $$E(x,\zeta):=c\left(\gamma(x) -\gamma(\zeta)\right)\cdot \frac{x-\zeta}{|x-\zeta|^2} -c\ln(|x-\zeta|).$$
Using a smooth extension of the function $\gamma$ to a larger compact domain containing $\Omega$, we find a constant $C>0$ depending only on $\gamma$ and $\Omega$ such that $$\left|\left(\gamma(x)-\gamma(\zeta)\right)\cdot \frac{(x-\zeta)}{|x-\zeta|^2}\right|\leq C, \quad \hbox{for } x\in \Omega.$$
On the other hand, given $p\in (1,\infty)$, there exists a constant $C=C(p,\Omega)>0$, such that for every $\zeta \in \overline{\Omega}$, $$\int_{\Omega}\left|\ln(|x-\zeta|)\right|^p \,dx \leq C\int_{0}^{2{\rm diam}(\Omega)}r|\ln(r)|^p\,dr \leq C.$$
We conclude that for any $p\in (1,\infty)$, the mapping $$\zeta \in \overline{\Omega} \mapsto E(\cdot,\zeta)\in L^p(\Omega)$$ is well defined. The Dominated Convergence Theorem yields that $\zeta \mapsto E(\cdot,\zeta)$ belongs to $C({\Omega}; L^p(\Omega))\cap C({\partial}{\Omega}; L^p(\Omega))$.
Next, let $I_{2\times 2}$ be the $2 \times 2$ identity matrix and $$(x-\zeta)\otimes (x-\zeta):=
\left[
\begin{array}{cc}
(x_1-\zeta_1)^2& (x_1-\zeta_1) (x_2-\zeta_2)\\
(x_1-\zeta_1)(x_2-\zeta_2)& (x_2-\zeta_2)^2
\end{array}
\right].$$
We compute for $x,\zeta \in \overline{\Omega}$, $x\neq \zeta$ $$\nabla_ {{{\boldsymbol \zeta}}}E(x,\zeta)=
c\left( -D \gamma(\zeta)\cdot \frac{x-\zeta}{|x-\zeta|^2} - \frac{\gamma(x)-\gamma(\zeta)}{|x-\zeta|^2}\cdot\left(I_{2\times 2} - 2\,\frac{(x-\zeta)\otimes (x-\zeta)}{|x-\zeta|^2} \right)+\frac{x-\zeta}{|x-\zeta|^2}\right).$$
Using the Dominated Convergence Theorem again, we obtain that for any $p\in (1,2)$, $\zeta\in \overline{\Omega} \mapsto E(\cdot,\zeta)$ belongs to $C^1({\Omega};L^p(\Omega))\cap C^1({\partial}{\Omega};L^p(\Omega))$.
Define $$H_1(x,\zeta):= H(x,\zeta) - c\,\gamma(\zeta)\cdot R(x-\zeta), \quad \quad \forall x,\zeta\in \Omega,$$ where $R=R(z)$ is the vector function described in and $c$ is described in .
From , and , we compute the equation for $H_1(\cdot,\zeta)$ to obtain that $${\mathcal{L}}\, H_1(x,\zeta)= -c\,\gamma(x)\cdot \left(\gamma(\zeta)\cdot D_{z} R(x-\zeta)\right)\,+\,E(x,\zeta), \quad \quad \hbox{in }\Omega\label{EqnregularpartGreen1.1}$$ and the boundary condition reads as $$\frac{{\partial}H_1(x,\zeta)}{{\partial}\nu_x}\,=\,c\,\nu(x)\cdot\left(\frac{x-\zeta}{|x-\zeta|^2} - \gamma(\zeta)\cdot D_z R(x-\zeta)\right)\quad \quad \hbox{on }{\partial}\Omega \label{EqnregularpartGreen2.1}.$$
The fact that $\zeta \mapsto E(\cdot,\zeta)$ belongs to $C({\Omega};L^p(\Omega))\cap C({\partial}{\Omega};L^p(\Omega))$ for any $p \in (1,\infty)$, together with , imply that for any $p\in (1,\infty)$ and any $\zeta \in \overline{\Omega}$, the right hand side in equation belongs to $L^p(\Omega)$.
In the case $\zeta \in \Omega$, the right hand side in is smooth. In the case $\zeta\in {\partial}\Omega$, we appeal to Lemma \[boundaryterm\] in the Appendix and embedding to find that for any $\zeta \in {\partial}{\Omega}$ the right hand side in belongs to $L^p({\partial}\Omega)$ for any $p>1$.
Standard elliptic regularity theory implies that for any $p\in (1,\infty)$, $H_1(\cdot,\zeta)\in W^{2,p}(\Omega)$ and the Sobolev embeddings in yield that $H_1(\cdot,\zeta)\in C^{1,{\alpha}}(\overline{\Omega})$, for any ${\alpha}\in (0,1)$.
Finally, we check that $\zeta \mapsto H_1(\cdot,\zeta)$ belongs to $C^1(\Omega;C^1(
\overline{\Omega}))\cap C^1({\partial}\Omega;C^1(\overline{\Omega}))$. We first deal with the inner regularity. Recall that for any $p\in (1,2)$, $R\in W^{2,p}_{loc}({\mathbb{R}}^2)$ and $\zeta \mapsto \nabla_ {{{\boldsymbol \zeta}}} E$ belongs to $C({\Omega};L^p(\Omega))$.
A direct application of the Dominated Convergence Theorem yields that the mapping $$\zeta \in {\Omega} \mapsto \nabla_ {{{\boldsymbol \zeta}}}\left[\,-c\,\gamma(x)\cdot \left(\gamma(\zeta)\cdot D_{z} R(x-\zeta)\right)\,+\,E(x,\zeta)\,\right]$$ belongs to $C({\Omega};L^p(\Omega))$ and consequently, the mapping $\zeta \in \Omega\mapsto\nabla_ {{{\boldsymbol \zeta}}} H_1(\cdot,\zeta)\in W^{2,p}(\Omega)$ is well defined and solves $${\mathcal{L}}\, \left(\nabla_ {{{\boldsymbol \zeta}}} H_1(x,\zeta)\right)= \nabla_ {{{\boldsymbol \zeta}}}\left[\,-c\,\gamma(x)\cdot \left(\gamma(\zeta)\cdot D_{z} R(x-\zeta)\right)\,+\,E(x,\zeta)\,\right]
, \quad \quad \hbox{in }\Omega\label{EqnregularpartGreen1.2}$$ with the boundary condition $$\frac{{\partial}\left(\nabla_ {{{\boldsymbol \zeta}}}H_1(x,\zeta)\right)}{{\partial}\nu_x}\,=\,c\nabla_ {{{\boldsymbol \zeta}}}\left(\nu(x)\cdot\left(\frac{x-\zeta}{|x-\zeta|^2} - \gamma(\zeta)\cdot D_z R(x-\zeta)\right) \right)\quad \quad \hbox{on }{\partial}\Omega \label{EqnregularpartGreen2.2}.$$
Regularity theory and Sobolev embeddings in and imply that the mapping $\zeta \in \Omega \mapsto \nabla_ {{{\boldsymbol \zeta}}}H_1(\cdot,\zeta)$ belongs to $C(\Omega;C^{0,{\alpha}}(\Omega))$ for any ${\alpha}\in (0,1)$.
As for the boundary regularity, we proceed in the same way as we did for the inner regularity, replacing $\nabla_ {{{\boldsymbol \zeta}}}$ by its tangential component with respect to ${\partial}\Omega$. This concludes the proof of the lemma.
Next, we introduce some notation that will be needed for subsequent developments. Fix $\eta>0$ small such that every $\zeta \in \Omega$ with ${{\rm dist}}(\zeta,{\partial}\Omega)< \eta$ has a well defined reflection across ${\partial}\Omega$ along the normal direction, $\zeta^* \in \Omega^c$. Denote $$\Omega_{\eta}:= \{\zeta\in \Omega\,:\, {{\rm dist}}(\zeta,{\partial}\Omega)<\eta\},$$ which is also a smooth domain. Observe that for any $\zeta \in \Omega_{\eta}$, $|\zeta- \zeta^*|=2{{\rm dist}}(\zeta, {\partial}\Omega)$.
Our next result concerns the boundary asymptotic behavior of the Robin’s function, which we recall is given by $\zeta \in \overline{\Omega} \mapsto H(\zeta,\zeta)$.
\[asymptoticsof H\] There exists a mapping $z\in C(\Omega_{\eta};C^{0,{\alpha}}(\overline{\Omega}))\cap L^{\infty}(\Omega_{\eta};C^{0,{\alpha}}(\overline{\Omega}))$ such that $$H(x,\zeta)= -4\ln\left(|x-\zeta^*|\right) + z(x,\zeta), \quad \quad \forall\,\, x\in \overline{\Omega}, \quad \forall\,\, \zeta \in \Omega_{\eta}.$$
Even more, for every $\zeta \in \Omega_{\eta}$ and $x\in \Omega$ $$\label{zeta}
z(x,\zeta)=4\gamma(\zeta)\cdot R(x- \zeta) - 4\gamma(\zeta^*)\cdot R(x-\zeta^*) + \tilde{z}(x,\zeta),$$ where the mapping $\zeta \in \Omega_{\eta} \mapsto\tilde{z}(\cdot,\zeta)$ belongs to $C^1\left(\overline{\Omega_{\eta};} C^1(\overline{\Omega})\right).$
Consider the function $$z(x,\zeta):= H(x,\zeta) +4\ln\left(|x-\zeta^*|\right), \quad \quad \forall\,\, x\in \overline{\Omega}, \quad \forall\,\, \zeta \in \Omega_{\eta} ,\quad x\neq \zeta.$$
We directly compute from and , to find that $${\mathcal{L}}\, z(\cdot,\zeta)= 4\gamma(x)\cdot\left[\frac{x-\zeta}{|x-\zeta|^2}-\frac{x-\zeta^*}{|x-\zeta^*|^2}\right]\,-\, 4\left[\ln(|x-\zeta|)- \ln(|x-\zeta^*|)\right], \quad \quad \forall\,\, x\in \Omega\label{EqnResidueregularpartGreen}.$$ with the boundary condition $$\frac{{\partial}z(x,\zeta)}{{\partial}\nu_x}= 4\nu(x)\cdot\left[\frac{x-\zeta}{|x-\zeta|^2}- \frac{x-\zeta^*}{|x-\zeta^*|^2}\right] \quad \quad \forall\,\, x\in {\partial}\Omega
\label{EqnResidueregularpartGreen2boundary}.$$
The right hand side in equation can be written as $$4\gamma(\zeta)\cdot\frac{x-\zeta}{|x-\zeta|^2} -4\gamma(\zeta^*)\cdot\frac{x-\zeta^*}{|x-\zeta^*|^2} + \tilde{E}(x,\zeta),$$ where $$\begin{gathered}
\tilde{E}(x,\zeta):= 4\left(\gamma(x)-\gamma(\zeta)\right)\cdot \frac{x-\zeta}{|x-\zeta|^2} - 4\left(\gamma(x)-\gamma(\zeta^*)\right) \cdot\frac{x-\zeta^*}{|x-\zeta^*|^2}
\\
\,-\, 4\left[\ln(|x-\zeta|)- \ln(|x-\zeta^*|)\right], \quad \quad \forall\,\zeta \in \Omega_{\eta}, \quad x\in \Omega. \end{gathered}$$
Proceeding in the same fashion as in the proof of Theorem \[RegularPartInner\], we obtain that the mapping $\zeta \in \overline{\Omega_{\eta}} \mapsto \tilde{E}(\cdot,\zeta)$ belongs to $C^1(\overline{\Omega_{\eta}};L^p(\Omega))$ for any $p\in (1,\infty)$.
To justify , we use the function $R=R(z)$, from . We decompose $z(x,\zeta)$ as $$z(x,\zeta)=4\gamma(\zeta)\cdot R(x-\zeta) - 4\gamma(\zeta^*)\cdot R(x-\zeta^*) + \tilde{z}(x,\zeta)$$ to find that if $x\in\Omega$ $${\mathcal{L}}\, \tilde{z}(x,\zeta)=-4\,\gamma(x)\cdot \left(\gamma(\zeta)\cdot D_{z} R(x-\zeta)\,-\,\gamma(\zeta^*)\cdot D_{z} R(x-\zeta^*)\right)+\tilde{E}(x,\zeta)\label{EqnResidueregularpartGreen1}.$$ with the boundary condition if $x\in{\partial}\Omega$ $$\frac{{\partial}\tilde{z}(x,\zeta)}{{\partial}\nu_x}=
4\nu(x)\cdot\left[\frac{x-\zeta}{|x-\zeta|^2}- \frac{x-\zeta^*}{|x-\zeta^*|^2}- \left(\gamma(\zeta)\cdot D_{z} R(x-\zeta)\,-\,\gamma(\zeta^*)\cdot D_{z} R(x-\zeta^*)\right)\right].
\label{EqnResidueregularpartGreen2boundary1}$$
From and , proceeding again as in the proof of Theorem \[RegularPartInner\], we obtain that the mapping $\zeta\in \overline{\Omega_{\eta}} \mapsto \tilde{z}(\cdot,\zeta)$ belongs to $C^1(\overline{\Omega_{\eta}}; C^1(\overline{\Omega}))$. This concludes the proof of the proposition.
For further developments we notice that from Proposition \[asymptoticsof H\] the function $H=H(x,\zeta)$ has continuous partial derivatives in the set $\Omega\times \Omega \setminus \{(x,\zeta)\,:\, x= \zeta\}$.
Also, directly from Proposition \[asymptoticsof H\] we obtain the following corollary.
\[corolario\] Under the assumptions of Proposition \[asymptoticsof H\], the Robin’s function, $$\zeta \in \Omega \mapsto H(\zeta,\zeta)$$ satisfies $${H}(\zeta,\zeta)= -4\ln\left({{\rm dist}}(\zeta,{\partial}\Omega)\right) + {\rm z}(\zeta), \quad \quad \forall \,\zeta \in \Omega_{\eta},\label{asymptoticsrobinfunction}$$ where ${\rm z}\in C^1(\overline{\Omega_{\eta}})$ and $${\rm z}(\zeta):= -4\ln 2 + 4\gamma(\zeta)\cdot R(0) - 4\gamma(\zeta^*)\cdot R(\zeta - \zeta^*)+ \tilde{z}(\zeta,\zeta), \quad \quad \forall \,\,\zeta\in \Omega_{\eta}.$$
The approximation of the solution {#approximation}
=================================
In this part we find an appropriate approximation for a solution of equation which will allow us to carry out a reduction procedure. We also compute the error created by the choice of our approximation.
To set up our approximation, we first identify the formal limit problem associated to equation .
Dividing by $a(x)$, equation becomes $$\label{emdenfowlerneumannbdcond2}
\Delta u + \gamma(x)\cdot\nabla u - u + {\varepsilon}^2 \, e^{u}=0 \quad \hbox{in }\Omega,\qquad
\frac{{\partial}u}{{\partial}\nu} =0 \quad \hbox{on }{\partial}\Omega.$$
Take ${\varepsilon}>0$ and set $\Omega_{{\varepsilon}}:= {\varepsilon}^{-1}\Omega$. If $u$ is a solution of , the function $$\label{rescaling}
v(y)=4\,\ln({\varepsilon}) + u({\varepsilon}y), \quad \quad y\in \Omega_{{\varepsilon}}$$ solves the equation $$\label{emdenfowlerneumannbdcond22}
\Delta v + {\varepsilon}\gamma({\varepsilon}y)\cdot\nabla v - {\varepsilon}^2(v-4\ln({\varepsilon})) + \, e^{v}=0 \quad \hbox{in }\Omega_{{\varepsilon}},\quad \quad \frac{{\partial}v}{{\partial}\nu_{{\varepsilon}}} =0 \quad \hbox{on }{\partial}\Omega_{{\varepsilon}}$$ where $\nu_{{\varepsilon}}$ is the inner unit normal vector to ${\partial}\Omega_{{\varepsilon}}$.
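For the reader's convenience we check : if $u$ solves , then $\nabla v(y)={\varepsilon}\nabla u({\varepsilon}y)$, $\Delta v(y)={\varepsilon}^2\Delta u({\varepsilon}y)$ and $e^{v(y)}={\varepsilon}^4e^{u({\varepsilon}y)}$, so that $$\Delta v + {\varepsilon}\gamma({\varepsilon}y)\cdot\nabla v - {\varepsilon}^2\left(v-4\ln({\varepsilon})\right) + e^{v}={\varepsilon}^2\left[\Delta u + \gamma\cdot\nabla u - u + {\varepsilon}^2 e^{u}\right]({\varepsilon}y)=0.$$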
From , formally as ${\varepsilon}\to 0^+$, we obtain the limit equation $$\label{LiouvilleEqnR2}
\Delta V + e^V=0, \quad \hbox{in }{\mathbb{R}}^2, \quad \quad \nabla V \in L^2({\mathbb{R}}^2).$$
Solutions to are given by $$\label{LiouvilleEqnnoscaling}
V(y):= \ln
\left(\frac{8\,d^2}{\left(d^2 + |y-\zeta'|^2\right)^2}\right), \quad \hbox{for } y \in {\mathbb{R}}^2$$ where $d>0$ and $\zeta'\in {\mathbb{R}}^2$ are arbitrary parameters.
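A direct computation confirms that solves : setting $z=y-\zeta'$, one has $$\nabla V(y)=-\frac{4z}{d^2+|z|^2},\qquad \Delta V(y)=-\frac{8d^2}{\left(d^2+|z|^2\right)^2}=-e^{V(y)}.$$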
Pulling back the rescaling in , for any $d>0$ and any $\zeta\in {\mathbb{R}}^2$, the function $$\label{LiouvilleEqnrescaling}
U_{d,\zeta}(x):= \ln
\left(\frac{8\,d^2}{\left({\varepsilon}^2 d^2 + |x-\zeta|^2\right)^2}\right), \quad \hbox{for } x \in {\mathbb{R}}^2$$ solves the equation $$\label{LiouvilleEqnR2rescaling}
\Delta U_{d,\zeta} + {\varepsilon}^2e^{U_{d,\zeta}}=0, \quad \hbox{in }{\mathbb{R}}^2, \quad \quad \nabla_x U_{d,\zeta} \in L^2({\mathbb{R}}^2).$$
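Notice also that, choosing $\zeta'=\frac{\zeta}{{\varepsilon}}$ in , one simply has $$U_{d,\zeta}(x)=V\left(\frac{x}{{\varepsilon}}\right)-4\ln({\varepsilon}),$$ consistently with the rescaling .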
Let $m\in \mathbb{N}$ be fixed. Consider $m$ real numbers $d_i>0$ and $m$ arbitrary different points $\zeta_i \in \overline{\Omega}$. For every $i=1,\ldots,m$, define $$\label{LiouvilleEqnrescalingwithparameters}
U_i(x):= U_{d_i,\zeta_i}(x)=\ln\left(\frac{8d^2_i}{\left({\varepsilon}^2 \,d_i^2 + |x-\zeta_i|^2\right)^2}\right),\quad \quad x\in {\mathbb{R}}^2.$$
Let $PU_{i}\in H^1(\Omega)$ be the solution of $$\label{EquationPU}
\Delta PU_{i}
+ \gamma(x) \cdot \nabla PU_{i} - PU_{i}+{\varepsilon}^2e^{U_{i}}=0 \quad \hbox{in }\Omega,
\qquad
\frac{{\partial}PU_{i}}{{\partial}\nu}=0 \quad \hbox{on }{\partial}\Omega.$$
By standard regularity theory, $PU_{i}\in C^{\infty}(\overline{\Omega})$, so that $PU_{i}$ is indeed a classical solution of .
Observe that each function $PU_{i}$ depends on ${\varepsilon}>0$, $d_i$ and $\zeta_i$, but for notational simplicity, we unify this dependence using the subindex $i$.
For every $i=1,\ldots,m$, consider the function $H_{i}\in C^{\infty}(\overline{\Omega})$ given by $$H_{i}(x):=PU_{i}(x)- U_{i}(x), \quad \quad \hbox{for}\quad x\in \overline{\Omega}.$$
From , $H_{i}$ solves the equation $$\label{EquationHdzeta}
\Delta H_{i} + \gamma(x)\cdot \nabla H_{i} - H_{i} = U_{i} - \gamma(x)\cdot \nabla U_{i} \quad \hbox{in }\Omega$$ with the boundary condition $$\label{BdConditionHdzeta}
\frac{{\partial}H_i}{{\partial}\nu} = -\frac{{\partial}U_{i}}{{\partial}\nu} \quad \hbox{on }{\partial}\Omega.$$
The following assumptions on the parameters $d_i$ and $\zeta_i$ will play a crucial role in what follows.
We assume that for every $i=1,\ldots,m$, the parameters $d_i>0$ and $\zeta_i\in \overline{\Omega}$ depend on ${\varepsilon}>0$. This dependence is expressed by the conditions: $$\label{assumptionsdi}
\lim_{{\varepsilon}\to 0^+}{\varepsilon}\, d^{{\alpha}}_i =0, \quad \quad \forall\, {\alpha}>0$$ and in the case that $\zeta_i \in \Omega$, for some $c_0>0$ and for some $\kappa\geq 1$ $$\label{assumptionszetai}
{{\rm dist}}(\zeta_i,{\partial}\Omega)\geq c_0|\ln({\varepsilon})|^{-\kappa}.$$
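For instance, parameters with $d_i$ of order $\ln\left(\frac{1}{{\varepsilon}}\right)$ and, for interior points, ${{\rm dist}}(\zeta_i,{\partial}\Omega)$ of order $|\ln({\varepsilon})|^{-1}$ (which is the regime of Theorem \[theo2\]) are admissible, since ${\varepsilon}\,|\ln({\varepsilon})|^{{\alpha}}\to 0$ as ${\varepsilon}\to 0$ for every ${\alpha}>0$.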
The next lemma concerns the asymptotic behavior of the functions $H_{i}$ in terms of $d_i$, $\zeta_i$ and ${\varepsilon}>0$ small enough. For every fixed $i=1,\ldots,m$ and $\zeta_i\in \overline{\Omega}$, we will use the convention that $$c_i:=
\left\{
\begin{array}{ccc}
1, & \zeta_i \in \Omega\\
\frac{1}{2}, &\zeta_i \in {\partial}\Omega.
\end{array}
\right.$$
\[expansionHdzeta\] Assume conditions and . Then, for every $i=1,\ldots,m$ and every ${\varepsilon}>0$ small enough, there exists a function $z_i$ such that
- for every $x\in \Omega$ $$\label{AsymptResiduePU}
H_{i}(x)= -\ln(8d_i^2) + c_i\,H(x,\zeta_i) + z_{i}(x),$$ where $H=H_a$ is defined in and
- $\forall \,p\in(1,2)$, $z_i \in W^{2,p}(\Omega) \cap C(\overline{\Omega})$ and $$\|z_{i}\|_{W^{2,p}(\Omega)}\,+\,\|z_{i}\|_{L^{\infty}(\Omega)}\leq C{\varepsilon}^{\frac{1}{p}}d_i^{\frac{1}{p}},$$ where the constant $C>0$ depends only on $p$.
For notational simplicity, throughout this proof, we omit the subindex $i$. Let ${\varepsilon}>0$ be small, $d>0$ and $\zeta \in \overline{\Omega}$.
For $x\in \Omega$, we have that $H(x):= PU(x) - U(x)$. We set for $x\in \overline{\Omega}$ $$z(x) := H(x) + \ln(8d^2) -
\left\{
\begin{array}{ccc}
H(x,\zeta), & \hbox{if } \quad \zeta \in \Omega,\\
\\
\frac{1}{2}H(x,\zeta), & \hbox{if } \quad \zeta \in {\partial}\Omega.
\end{array}
\right.$$
Recall from that ${\mathcal{L}}= \Delta + \gamma(x)\cdot \nabla -1$. From and , the equation for $z$ reads as $${\mathcal{L}}\, z(x) = 2\ln\left(\frac{|x-\zeta|^2}{{\varepsilon}^2 d^2 + |x-\zeta|^2}\right) + 4\gamma(x)\cdot \frac{(x-\zeta)}{|x-\zeta|^2}\cdot \frac{{\varepsilon}^2 d^2}{{\varepsilon}^2d^2 + |x-\zeta|^2} \quad \hbox{in }\Omega,$$ $$\frac{{\partial}z}{{\partial}\nu} = -4\nu(x)\cdot \frac{(x-\zeta)}{|x-\zeta|^2}\cdot \frac{1}{{\varepsilon}^2d^2 + |x-\zeta|^2} \quad \hbox{on }{\partial}\Omega.$$
For any $p>1$, we estimate $$\begin{aligned}
\int_{\Omega}\left|\ln\left(\frac{|x-\zeta|^2}{{\varepsilon}^2 d^2 + |x-\zeta|^2}\right)\right|^pdx &\leq & C\int_{0}^{2\, {\rm diam}(\Omega)}r\left|\ln\left(\frac{r^2}{{\varepsilon}^2 d^2 +r^2}\right)\right|^p dr\\
&=& C{\varepsilon}^2 d^2 \int_{0}^{\frac{2{\rm diam}(\Omega)}{{\varepsilon}\,d}}r\left|\ln\left(1+\frac{1}{r^2}\right)
\right|^pdr\\
&\leq &C {\varepsilon}^2 d^2\, \left(\int_0^{1}r\left|\ln\left(1+\frac{1}{r^2}\right)
\right|^pdr+\int_{1}^{\infty}r^{1-2p}dr\right)\\
&\leq& C {\varepsilon}^2 d^2.\end{aligned}$$
Hence, we obtain that $$\label{EQNrighthandsideI}
\left\|\ln\left(\frac{|x-\zeta|^2}{{\varepsilon}^2 d^2 + |x-\zeta|^2}\right)\right\|_{L^p(\Omega)} \leq C {\varepsilon}^{\frac{2}{p}} d^{\frac{2}{p}}.$$
As for the second term, let us take $p\in(1,2)$, so that $$\begin{aligned}
\int_{\Omega}\left|\frac{\gamma(x)\cdot (x-\zeta)}{|x-\zeta|^2}\cdot \frac{{\varepsilon}^2 d^2}{{\varepsilon}^2d^2 + |x-\zeta|^2}\right|^p dx&\leq & C{\varepsilon}^{2-p} d^{2-p} \int_{0}^{\frac{2{\rm diam}(\Omega)}{{\varepsilon}d}} \frac{r^{1-p}}{(1+r^2)^p}dr\\
&\leq &C{\varepsilon}^{2-p} d^{2-p} \left(\int_{0}^{1} \frac{r^{1-p}}{(1+r^2)^p}dr+\int_1^{\infty}r^{1-p}dr
\right)
\\
&\leq & C{\varepsilon}^{2-p} d^{2-p}\end{aligned}$$ and therefore $$\label{EQNrighthandsideII}
\left\|\frac{\gamma(x)\cdot (x-\zeta)}{|x-\zeta|^2}\cdot \frac{{\varepsilon}^2 d^2}{{\varepsilon}^2d^2 + |x-\zeta|^2}\right\|_{L^p(\Omega)}\leq C {\varepsilon}^{\frac{2}{p}-1}d^{\frac{2}{p}-1}.$$
As for the boundary term, if $\zeta\in \Omega$ we use conditions and , to find that, provided ${\varepsilon}>0$ and ${\varepsilon}d$ are small enough $$\left|\frac{\nu(x)\cdot (x-\zeta)}{|x-\zeta|^2}\cdot\frac{{\varepsilon}^2 d^2}{{\varepsilon}^2 d^2+ |x-\zeta|^2}\right|\leq \frac{C{\varepsilon}^2 d^2}{|x-\zeta|^3}\leq C{\varepsilon}d, \quad \quad \forall \,x\in {\partial}\Omega,$$
On the other hand, if $\zeta \in {\partial}\Omega$, from Lemma , we estimate $$\left|\frac{\nu(x)\cdot (x-\zeta)}{|x-\zeta|^2}\cdot\frac{{\varepsilon}^2 d^2}{{\varepsilon}^2 d^2+ |x-\zeta|^2}\right|\leq \frac{C{\varepsilon}^2 d^2}{{\varepsilon}^2 d^2+ |x-\zeta|^2}, \quad \quad \forall \,x\in {\partial}\Omega.$$
Let us take ${\delta}>0$ small but fixed, so that $$\left|\frac{\nu(x)\cdot (x-\zeta)}{|x-\zeta|^2}\cdot\frac{{\varepsilon}^2 d^2}{{\varepsilon}^2 d^2+ |x-\zeta|^2}\right|\leq \frac{C{\varepsilon}^2 d^2}{{\delta}^2}, \quad \quad \forall \,x\in {\partial}\Omega\cap B^c_{{\delta}}(\zeta)$$ while for any $p>1$, we estimate $$\begin{aligned}
\int_{{\partial}\Omega\cap B_{{\delta}}(\zeta)} \left|\frac{\nu(x)\cdot (x-\zeta)}{|x-\zeta|^2}\cdot\frac{{\varepsilon}^2 d^2}{{\varepsilon}^2 d^2+ |x-\zeta|^2}\right|^p dx &\leq & C {\varepsilon}d \int_{0}^{\frac{{\delta}}{{\varepsilon}d}}\frac{1}{(1 + s^2)^p}ds\\
&\leq & C {\varepsilon}d.\end{aligned}$$
We conclude that for any $\zeta \in \overline{\Omega}$ $$\label{EQNrighthandsideBdCondition}
\left\|\frac{\nu(x)\cdot (x-\zeta)}{|x-\zeta|^2}\cdot\frac{{\varepsilon}^2 d^2}{{\varepsilon}^2 d^2+ |x-\zeta|^2}\right\|_{L^p({\partial}\Omega)}\leq C{\varepsilon}^{\frac{1}{p}}d^{\frac{1}{p}}.$$
Standard elliptic regularity implies that for any $p\in (1,2)$, $z\in W^{2,p}(\Omega)$. The Sobolev embeddings in together with estimates , and , imply that $$\|z\|_{L^{\infty}(\Omega)}+\|z\|_{W^{2,p}(\Omega)}\leq C{\varepsilon}^{\frac{1}{p}} d^{\frac{1}{p}}$$ and this concludes the proof of the lemma.
Now we are in a position to set our approximation. Consider the parameters $d_1, \ldots,d_m \in {\mathbb{R}}_+$ and $\zeta_1, \ldots,\zeta_m \in \overline{\Omega}$ satisfying and respectively. In addition, assume that $$\label{logconditiondi}
\ln\left(8 d_i^2\right)= c_iH(\zeta_i,\zeta_i) \,+ \, \sum_{j\neq i} c_jG(\zeta_i,\zeta_j), \quad \quad \forall\,i=1,\ldots,m.$$
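Let us anticipate the role of the choice : it makes the constant terms produced by the regular parts $H_j$ and by the mutual interaction of the bubbles cancel. Indeed, as the proof of Lemma \[sizeof the errorSvep\] below shows, with this choice one has, in the expanded variable and close to each concentration point $\zeta_i'$, $$v_{{\varepsilon}}(y)= \ln\left(\frac{8d_i^2}{\left(d_i^2+|y-\zeta_i'|^2\right)^2}\right)+{\mathcal{O}}\left({\varepsilon}^{3+{\alpha}} + {\varepsilon}^{\beta}|y-\zeta_i'|^{\beta}\right),$$ i.e. $v_{{\varepsilon}}$ is, at main order, a single bubble of the limit problem .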
We set as approximation the function $$\label{approx}
u_{{\varepsilon}}(x):=\sum_{i=1}^m PU_i(x)=\sum_{i=1}^m U_i(x)+ H_i(x), \quad \quad \hbox{for }x\in \overline{\Omega}.$$
Using the rescaling in , we also set for every $y\in \Omega_{{\varepsilon}}$ $$\begin{aligned}
v_{{\varepsilon}}(y)&:=& 4\,\ln({\varepsilon}) + u_{{\varepsilon}}({\varepsilon}y)\label{rescalingapprox}\\
&=& 4(1 - m)\ln({\varepsilon}) + \sum_{i=1}^m \ln\left(\frac{8d_i^2}{\left(d_i^2 \,+\, |y -\zeta_i'|^2\right)^2}\right) + H_i({\varepsilon}y),\nonumber\end{aligned}$$ where we have denoted $\zeta_i':= \frac{\zeta_i}{{\varepsilon}}$.
For our subsequent developments, we introduce another condition on the numbers $d_1,\ldots,d_m$ and the points $\zeta_1,\ldots,\zeta_m\in \overline{\Omega}$. Consider the real number $$\label{distancezetaizetaj}
c_{{\varepsilon}}:=\min\{|\zeta_i -\zeta_j|\,:\, i,j=1,\ldots,m, \quad i\neq j\}$$ which is well defined, positive and uniformly bounded above, since $\Omega$ is bounded. We assume in addition that $$\label{nottoclosezetaij}
\lim \limits_{{\varepsilon}\to 0^+} \frac{c_{{\varepsilon}}}{{\varepsilon}\,d_i}=\infty, \quad \quad \forall \,i=1,\ldots,m.$$
Condition means that as ${\varepsilon}\to 0^+$, the numbers $d_i$ might go to infinity, but at a rate that is controlled by the distance between the points $\zeta_i$.
Let us denote $$W:= e^{v_{{\varepsilon}}}, \quad \quad S(v_{{\varepsilon}}):= \Delta v_{{\varepsilon}} + {\varepsilon}\gamma({\varepsilon}y)\cdot\nabla v_{{\varepsilon}} - {\varepsilon}^2(v_{{\varepsilon}}-4\ln({\varepsilon})) + \, e^{v_{{\varepsilon}}}.$$
The next lemma provides the size of the error term $S(v_{{\varepsilon}})$.
\[sizeof the errorSvep\] Assume hypotheses in Lemma \[expansionHdzeta\] and conditions and . Then for any ${\alpha}, \beta \in (0,1)$, there exists ${\varepsilon}_0>0$ small such that for any ${\varepsilon}\in (0,{\varepsilon}_0)$, there exists a function $\theta_{{\varepsilon}}(y)$ such that $$|\theta_{{\varepsilon}}(y)|\leq C {\varepsilon}^{3+{\alpha}} + C {\varepsilon}^{\beta} \sum_{i=1}^m |y-\zeta_i'|^{\beta}, \quad \quad \forall\, y\in \Omega_{{\varepsilon}}$$ and $$\label{sizeW}
W(y)= \sum_{i=1}^m \frac{8\,d_i^2}{\left(d_i^2 \,+\,|y-\zeta_i'|^2\right)^2}\left(1 + \theta_{{\varepsilon}}(y)\right), \quad \quad \forall\, y\in \Omega_{{\varepsilon}}.$$
Even more, $$\label{sizeSvep}
|S(v_{{\varepsilon}})(y)|\leq C {\varepsilon}^{{\alpha}}\sum_{i=1}^m \frac{1}{1+ |y-\zeta_i'|^3}, \quad \quad \forall\, y\in \Omega_{{\varepsilon}}.$$
We proceed as in the proof of expressions (20) and (21) in [@DELPINOWEI]. For the sake of completeness we include the detailed computations.
We first prove . Let ${\varepsilon}>0$ be small. Observe that for any $i=1,\ldots,m$ and any $y\in \Omega_{{\varepsilon}}$ $$\begin{aligned}
v_{{\varepsilon}}(y)&=&4\ln({\varepsilon}) + U_i({\varepsilon}y) + H_i({\varepsilon}y)+ \sum_{j\neq i}\left(U_j({\varepsilon}y) + H_j({\varepsilon}y)\right)\\
&=&\ln\left(\frac{8d^2_i}{(d^2_i + |y-\zeta'_i|^2)^2}\right)+ H_i({\varepsilon}y)+ \sum_{j\neq i}\left(U_j({\varepsilon}y) + H_j({\varepsilon}y)\right).\end{aligned}$$
Using , we find that $$\begin{aligned}
v_{{\varepsilon}}(y)&=& \ln\left(\frac{8d^2_i}{(d^2_i + |y-\zeta'_i|^2)^2}\right)- \ln(8 d_i^2) + c_i H({\varepsilon}y, \zeta_i) + z_i({\varepsilon}y) +\\
&&+ \sum_{j\neq i}\left(\ln\left(\frac{8d^2_j}{({\varepsilon}^2 d^2_j + |{\varepsilon}y-\zeta_j|^2)^2}\right)-\ln(8 d_j^2) + c_jH({\varepsilon}y, \zeta_j) + z_j({\varepsilon}y)\right).\end{aligned}$$
Therefore, for any ${\alpha}\in (0,1)$, $$\begin{gathered}
\label{vepfirstasymptotics}
v_{{\varepsilon}}(y)=\ln\left(\frac{1}{(d^2_i + |y-\zeta'_i|^2)^2}\right)+ c_i H({\varepsilon}y, \zeta_i)\\
+ \sum_{j\neq i}\left(\ln\left(\frac{1}{({\varepsilon}^2 d^2_j + |{\varepsilon}y-\zeta_j|^2)^2}\right) + c_j H({\varepsilon}y, \zeta_j)\right)
+\sum_{j=1}^m{\mathcal{O}}_{L^{\infty}(\Omega_{{\varepsilon}})}({\varepsilon}^{{\alpha}}d_j^{{\alpha}}).\end{gathered}$$
To estimate more accurately , we first notice from Theorem \[RegularPartInner\] that for any fixed $j=1,\ldots,m$ and any $\beta\in (0,1)$, $$\label{HolderregularpartGreensFnct}
H({\varepsilon}y,\zeta_j)=H(\zeta_i,\zeta_j)+ {\mathcal{O}}_{L^{\infty}(\Omega_{{\varepsilon}})}(|{\varepsilon}y - \zeta_i|^{\beta}), \quad \quad \forall\, y\in \Omega_{{\varepsilon}}.$$
Next, we consider two regimes: the first one close to the point $\zeta_i'$ and the second one when we are far from $\zeta_i'$. To be more precise, take any $\tilde{{\alpha}}\in (0,1)$ and ${\delta}_{{\varepsilon}}\in (0,(1-\tilde{{\alpha}}) c_{{\varepsilon}}]$, where $c_{{\varepsilon}}$ is defined in . Notice that for any $j\neq i$ and any $y\in B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta'_i)$ $$\begin{aligned}
|y-\zeta'_j|
\geq \frac{|\zeta_i-\zeta_j|}{{\varepsilon}} - |y-\zeta'_i| \geq \frac{\tilde{{\alpha}}\,|\zeta_i-\zeta_j|}{{\varepsilon}},\end{aligned}$$ so that $|{\varepsilon}y -\zeta_j|\geq \tilde{{\alpha}} c_{{\varepsilon}}$.
Using conditions and , for any $y\in B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')$ and any $j\neq i$, we compute
$$\begin{aligned}
\label{eqnasymptitoclog}
\ln\left(\frac{1}{({\varepsilon}^2 d^2_j + |{\varepsilon}y-\zeta_j|^2)^2}\right)&=&-4\ln\left(|{\varepsilon}y-\zeta_j|\right)+ {\mathcal{O}}_{L^{\infty}\left(B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')\right)}({\varepsilon}^2d_j^2 |y-\zeta'_j|^{-2})\nonumber\\
&=&-4\ln(|{\varepsilon}y -\zeta_j|) + {\mathcal{O}}_{L^{\infty}\left(B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')\right)}({\varepsilon}^4 d_j^4)\nonumber\\
&=&-4\ln(|\zeta_i -\zeta_j|) + {\mathcal{O}}_{L^{\infty}\left(B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')\right)}\left({\varepsilon}^{4-\tilde{{\alpha}}} + {\varepsilon}\,|y-\zeta_i'|\right).\end{aligned}$$
Therefore, using expressions and , we can choose $\tilde{{\alpha}}>1-{\alpha}$ such that for any $y\in B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta'_i)$ expression reads as $$\begin{aligned}
v_{{\varepsilon}}(y)&=&\ln\left(\frac{1}{(d^2_i + |y-\zeta'_i|^2)^2}\right)+ c_iH(\zeta_i, \zeta_i) + \sum_{j\neq i} \left(-4\ln(|\zeta_i -\zeta_j|) + c_j\,H(\zeta_i, \zeta_j)\right)
\nonumber\\
&&+ {\mathcal{O}}_{L^{\infty}\left(B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')\right)}\left({\varepsilon}^{4-\tilde{{\alpha}}} + {\varepsilon}\,d_i|y-\zeta_i'|\right)+ {\mathcal{O}}_{L^{\infty}\left(\Omega_{{\varepsilon}}\right)}(|{\varepsilon}y - \zeta_i|^{\beta})\nonumber\\
&=&\ln\left(\frac{1}{(d^2_i + |y-\zeta'_i|^2)^2}\right)+ c_iH(\zeta_i, \zeta_i)
+ \sum_{j\neq i} c_jG(\zeta_i,\zeta_j)\nonumber\\
&&+ {\mathcal{O}}_{L^{\infty}\left(B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')\right)}\left({\varepsilon}^{3+{\alpha}} + {\varepsilon}^{\beta}|y-\zeta_i'|^{\beta}\right)\nonumber\\
&=& \ln\left(\frac{8d_i^2}{(d^2_i + |y-\zeta'_i|^2)^2}\right)+ {\mathcal{O}}_{L^{\infty}\left(B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')\right)}\left({\varepsilon}^{3+{\alpha}} + {\varepsilon}^{\beta}|y-\zeta_i'|^{\beta}\right).\label{vepsecondasymptotics}\end{aligned}$$
Directly from the identities in , we obtain that in $B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')$ $$\label{Wfirstestimate}
e^{v_{\epsilon}(y)}=\frac{8d_i^2}{(d_i^2 + |y-\zeta'_i|^2)^2}\,e^{{\mathcal{O}}\left({\varepsilon}^{3+{\alpha}} + {\varepsilon}^{\beta}|y-\zeta_i'|^{\beta}\right)}.$$
On the other hand, from and , we can choose ${\delta}_{{\varepsilon}}$ such that for every $i=1,\ldots,m$, [$\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}d_i}$ is bounded below by $c {\varepsilon}^{1-{\alpha}}d_i^{1-{\alpha}}$]{}. Hence, if $|y-\zeta_i'|\geq \frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}$, $\frac{|{\varepsilon}y - \zeta_i|}{{\varepsilon}d_i}\geq \frac{{\delta}_{{\varepsilon}}}{{\varepsilon}d_i}$ and from we find that that $$\begin{aligned}
v_{{\varepsilon}}(y)&=&4\ln({\varepsilon}) +\sum_{i=1}^m \left[-4\ln(|{\varepsilon}y - \zeta_i|) + {\mathcal{O}}_{L^{\infty}}({\varepsilon}^2 d_i^2 |{\varepsilon}y - \zeta_i|^{-2})\right]\\
&=& 4\ln({\varepsilon}) - \sum_{i=1}^m 4\ln(|{\varepsilon}y - \zeta_i|) + {\mathcal{O}}_{L^{\infty}(\Omega)}({\varepsilon}^{{\alpha}}d_i^{{\alpha}})\\
&=& 4\ln({\varepsilon}) + {\mathcal{O}}_{L^{\infty}(\Omega)}\left(\ln({\delta}_{{\varepsilon}})\right).\end{aligned}$$
Hence, for $y\in \Omega_{{\varepsilon}}-\cup_{i=1}^m B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')$, we have that $$e^{v_{{\varepsilon}}(y)}={\mathcal{O}}_{L^{\infty}(\Omega_{{\varepsilon}}-\cup_{i=1}^m B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i'))}({\varepsilon}^{3+{\alpha}}).$$
Denoting by $\chi_{B}$ the characteristic function of a set $B\subset {\mathbb{R}}^2$, we write for $y\in \Omega_{{\varepsilon}}$ $$\begin{aligned}
e^{v_{{\varepsilon}}(y)}&:=&\sum_{i=1}^m \left(e^{v_{{\varepsilon}}(y)}\chi_{B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')}+e^{v_{{\varepsilon}}(y)}\chi_{\Omega_{{\varepsilon}}-\cup_{i=1}^m B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')}\right)\\
&=&\sum_{i=1}^m\frac{8d_i^2}{(d_i^2 + |y-\zeta'_i|^2)^2}\,e^{{\mathcal{O}}({\varepsilon}^{3+{\alpha}}+ |{\varepsilon}y - \zeta_i|^{\beta})} +{\mathcal{O}}({\varepsilon}^{3+{\alpha}})\end{aligned}$$ and asymptotics in follow.
To find estimate , we use the fact that $$\begin{aligned}
S(v_{{\varepsilon}}(y))&=&e^{v_{{\varepsilon}}(y)} -\sum_{i=1}^m e^{4\ln({\varepsilon})+ U_i({\varepsilon}y)}\\
&=&\sum_{i=1}^m \left(e^{v_{{\varepsilon}}(y)}-e^{4\ln({\varepsilon})+U_i({\varepsilon}y)}\right)\chi_{B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')}\\
&&+ \left(e^{v_{{\varepsilon}}(y)}-{\varepsilon}^4\sum_{i=1}^m e^{U_i({\varepsilon}y)}\right)\chi_{\Omega_{{\varepsilon}}-\cup_{i=1}^m B_{\frac{{\delta}_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i')}\\
&=&\sum_{i=1}^m \frac{8d_i^2}{(d_i^2 + |y-\zeta_i'|^2)^2}\theta_{{\varepsilon}}(y) + {\mathcal{O}}({\varepsilon}^{3 + {\alpha}})\end{aligned}$$ from which follows.
The reduction scheme {#Reductionscheme}
====================
In this part we use the approximation described in section \[approximation\] to solve equation using a finite-dimensional reduction procedure.
We use the convention that $m=k+l$ for some $k,l \in \{0,\ldots,m\}$ and that $$\zeta_1,\ldots,\zeta_k \in \Omega \quad \hbox{and}\quad \zeta_{k+1},\ldots,\zeta_{k+l} \in {\partial}\Omega.$$
It will be more convenient to work with the rescaling . Using the function described in , we look for a solution $V_{{\varepsilon}}$ of equation having the form $$V_{{\varepsilon}}(y):=v_{{\varepsilon}}(y) + \phi(y), \quad \quad y\in \Omega_{{\varepsilon}}$$ so that $\phi$ must solve the nonlinear boundary value problem $$\label{nonlineareqnphi1}
\Delta \phi -{\varepsilon}^2 \phi = -S(v_{{\varepsilon}})-e^{v_{{\varepsilon}}}\phi -{\varepsilon}\gamma({\varepsilon}y) \cdot \nabla\phi- N(\phi) \quad \hbox{in} \quad \Omega_{{\varepsilon}}$$ with the boundary condition $$\label{boundaryconditionphi1}
\frac{{\partial}\phi}{{\partial}\nu}=0 \quad \hbox{on} \quad {\partial}\Omega_{{\varepsilon}},$$ where we have denoted $$N(\phi):= e^{v_{{\varepsilon}}}\left[e^{\phi} -1 - \phi\right].$$
To solve -, we follow the developments from section 3 in [@DELPINOWEI].
For $i=1,\ldots,m$ fixed, we write $$J_i:=\left\{
\begin{array}{ccc}
2,& \hbox{for } 1 \leq i \leq k\\
\\
1,& \hbox{for } k+1 \leq i \leq k+l.
\end{array}
\right.$$
Consider the following linear problem: given a function $h \in L^{\infty}(\Omega_{{\varepsilon}})$, find $\phi \in C^1(\overline{\Omega_{{\varepsilon}}})$ and constants $c_{ij}$ for $i=1,\ldots,m$ and for $j=1,J_i$ such that $$\label{lineareqanphi1}
\Delta \phi -{\varepsilon}^2 \phi = -e^{v_{{\varepsilon}}}\phi + h +
\sum_{i=1}^m \sum_{j=1,J_i}c_{ij}\chi_{ij} Z_{ij} \quad \hbox{in }\quad \Omega_{{\varepsilon}}$$ with the boundary and orthogonality conditions $$\label{boundaryorthogonalityconditionphi1}
\frac{{\partial}\phi}{{\partial}\nu}=0 \quad \hbox{on} \quad {\partial}\Omega_{{\varepsilon}}, \quad \quad \int_{\Omega_{{\varepsilon}}}\chi_{ij}Z_{ij}\phi =0, \quad \forall\, i=1,\ldots,m, \quad j=1,J_i,$$ where the functions $Z_{ij},\chi_{ij}$ are defined next.
For $i=1,\ldots,m$, we set $$z_{i0}(y):= \frac{1}{d_i} - \frac{2 d_i}{d_i^2 + |y|^2}, \quad \quad z_{ij}(y):= \frac{y_j}{d_i^2 + |y|^2}, \quad \forall j=1,J_i.$$
For any $i=1,\ldots,m$ fixed, Lemma 2.1 in [@ESPOSITOGROSSIPISTOIA] guarantees that the only solutions to $$\Delta \phi + \frac{8d_i^2}{(d_i^2 + |y|^2)^2}\phi =0 \quad \hbox{in }{\mathbb{R}}^2, \quad \quad |\phi(y)|\leq C(1 +|y|^{\sigma}), \quad \hbox{for some }\sigma>0$$ are the linear combinations of $z_{ij}(y)$ for $j=0,1,2$. Observe that $$\frac{8d_i^2}{(d_i^2 + |y|^2)^2}=e^{V(y)}, \quad y\in {\mathbb{R}}^2,$$ where $V$ is the function in with $d=d_i$ and $\zeta'=0$.
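Let us remark that, up to multiplicative constants, the functions $z_{i0}$ and $z_{ij}$ are nothing but the derivatives of the bubble $V$ with respect to the natural invariances of , dilations and translations: a direct computation gives $$z_{i0}(y)=\frac{1}{2}\,\partial_d V(y)\Big|_{d=d_i,\,\zeta'=0},\qquad z_{ij}(y)=\frac{1}{4}\,\partial_{\zeta'_j} V(y)\Big|_{d=d_i,\,\zeta'=0},\quad j=1,2.$$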
Next, let $r_0>0$ be a large number and $\chi:{\mathbb{R}}\to [0,1]$ be a non-negative smooth cut-off function so that $$\chi(r)=\left\{
\begin{array}{ccc}
1,& \hbox{if} \quad r\leq r_0\\
\\
0,& \hbox{if } \quad r\geq r_0+1.
\end{array}
\right.$$
For $i=1,\ldots,k$, we have $\zeta_i\in \Omega$ and we define $$\chi_i(y):=\chi(|y-\zeta_i'|), \quad \quad Z_{ij}(y):=\chi_i(y)\,z_{ij}(y-\zeta_i') , \quad j=1,2.$$
As for $i=k+1, \ldots,k+l$, we have that $\zeta_i \in {\partial}\Omega$. For notational simplicity, assume for the moment that $\zeta_i=0$ and that the inner unit normal vector to ${\partial}\Omega$ at $\zeta_i$ is the vector ${\rm e}_2=(0,1)$. Hence, there exist $r_1>0$, ${\delta}>0$ small and a function ${\rm p}: (-{\delta},{\delta})\to {\mathbb{R}}$ satisfying $${\rm p}\in C^{\infty}(-{\delta},{\delta}), \quad {\rm p}(0)=0, \quad {\rm p}'(0)=0$$ and such that $$\Omega \cap B_{r_1}(\zeta_i) =\{(x_1,x_2)\,:\, -{\delta}<x_1< {\delta}, \quad {\rm p}(x_1)<x_2\}\cap B_{r_1}(0,0).$$
Consider the flattening change of variables $F_i:\Omega \cap B_{r_1}(\zeta_i)\to {\mathbb{R}}^2$ defined by $$F_i:=(F_{i1},F_{i2}), \quad \quad \hbox{where}\quad F_{i1}:=x_1 + \frac{x_2 - {\rm p}(x_1)}{1 + |{\rm p}'(x_1)|^2}\,{\rm p}'(x_1), \quad \quad F_{i2}:= x_2 - {\rm p}(x_1).$$
At this point we remark that the radius $r_{1}$ must satisfy that $r_{1}\in (0,c_{{\varepsilon}})$, where $c_{{\varepsilon}}$ is defined in .
Throughout our discussions we will assume that $r_{1}\in (\frac{1}{2}c_{{\varepsilon}},c_{{\varepsilon}})$, so that $\frac{r_1}{{\varepsilon}}\to \infty$ as ${\varepsilon}\to 0$.
Recalling that $\zeta_i'=\frac{\zeta_i}{{\varepsilon}}$, we set for $y\in {\partial}\Omega_{{\varepsilon}} \cap B_{\frac{r_1}{{\varepsilon}}}(\zeta_i')$, $$F^{{\varepsilon}}_i(y):= \frac{1}{{\varepsilon}}F_i({\varepsilon}y)$$ and define $$\chi_{i}(y):= \chi(|F_i^{{\varepsilon}}(y)|), \quad \quad Z_{ij}(y):=\chi_i(y)z_{ij}(F_i^{{\varepsilon}}(y)).$$
The following proposition accounts for the solvability of the linear problem -. For this we define the following norm $$\label{normh}
\|h\|_{*}:=\sup \limits_{y \in \Omega_{{\varepsilon}}} \frac{|h(y)|}{{\varepsilon}^2 \,+\,\sum_{i=1}^m \left(1 + |y - \zeta_i'|\right)^{-2-\sigma}}.$$
\[solvabilitylinearproblem\] Assume the conditions of Lemma \[sizeof the errorSvep\] on the parameters $d_i$ and $\zeta_i$. For any ${\varepsilon}>0$ small enough and any given $h\in L^{\infty}(\Omega_{{\varepsilon}})$ there exist $\phi\in C^1(\overline{\Omega_{{\varepsilon}}})$ and constants $c_{ij}\in {\mathbb{R}}$ such that $\phi$ is the unique solution of -. Moreover, there exists a constant $C>0$ independent of ${\varepsilon}>0$ such that $$\|\nabla \phi\|_{L^{\infty}(\Omega_{{\varepsilon}})}\,+\,\|\phi\|_{L^{\infty}(\Omega_{{\varepsilon}})}\leq C\,\ln\left(\frac{1}{{\varepsilon}}\right)\|h\|_{*}.$$
With our choice of the radius $r_{1}$, the proof of Proposition \[solvabilitylinearproblem\] follows the same lines of the proof of Proposition 3.1 in [@DELPINOWEI] with only slight changes. We leave details to the reader.
Denote ${{\boldsymbol \zeta}}:= (\zeta_1,\ldots,\zeta_m)$ and let $T(h)=\phi$ be the linear operator given by Proposition \[solvabilitylinearproblem\]. Clearly $T(h)$ depends on ${{{\boldsymbol \zeta}}}$, so we write $$T(h)=T(h)({{{\boldsymbol \zeta}}}).$$
Proceeding exactly as in section 3 in [@DELPINOKOWALCZYKMUSSO], we obtain that the mapping $ {{{\boldsymbol \zeta}}}\to T(h)( {{{\boldsymbol \zeta}}})$ is differentiable in $ {{{\boldsymbol \zeta}}}$ and $$\|D_{ {{{\boldsymbol \zeta}}}}T(h)\|\leq C\ln\left(\frac{1}{{\varepsilon}}\right)^{2}\|h\|_{*}.$$
As a direct application of Proposition \[solvabilitylinearproblem\] and a fixed point argument, we solve the nonlinear problem
$$\label{nonlinearlineareqanphi1}
\Delta \phi -{\varepsilon}^2 \phi = -S(v_{{\varepsilon}})-e^{v_{{\varepsilon}}}\phi -{\varepsilon}\gamma({\varepsilon}y) \cdot \nabla\phi- N(\phi) +
\sum_{i=1}^m \sum_{j=1,J_i}c_{ij}\chi_{ij} Z_{ij} \quad \hbox{in }\quad \Omega_{{\varepsilon}}$$
with the boundary and orthogonality conditions in .
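Let us point out that, by Lemma \[sizeof the errorSvep\], the error term is small in the $\|\cdot\|_{*}$ norm: provided the exponent $\sigma$ in is chosen in $(0,1)$, one has $\frac{1}{1+|y-\zeta_i'|^{3}}\leq \frac{C}{\left(1+|y-\zeta_i'|\right)^{2+\sigma}}$ and hence $$\|S(v_{{\varepsilon}})\|_{*}\leq C\,{\varepsilon}^{{\alpha}};$$ this estimate is the source of the ${\varepsilon}^{{\alpha}}\ln\left(\frac{1}{{\varepsilon}}\right)$ bound in the next lemma.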
\[solvabilitynonlinearprojectedproblem\] Under the assumptions of Proposition \[solvabilitylinearproblem\], given any ${\alpha}>0$, for every ${\varepsilon}>0$ small there exist a solution $\phi$ and constants $c_{ij}\in {\mathbb{R}}$ satisfying - and such that $$\|\nabla \phi\|_{L^{\infty}(\Omega_{{\varepsilon}})}\,+\,\|\phi\|_{L^{\infty}(\Omega_{{\varepsilon}})}\leq C\,{\varepsilon}^{{\alpha}}\ln\left(\frac{1}{{\varepsilon}}\right).$$
Even more, $\phi=\Phi({ {{{\boldsymbol \zeta}}}})$ is differentiable with respect to ${ {{{\boldsymbol \zeta}}}}$ and $$\|D_{{ {{{\boldsymbol \zeta}}}}}\Phi\|\leq C{\varepsilon}^{{\alpha}}\ln\left(\frac{1}{{\varepsilon}}\right)^{2}.$$
We proceed using a fixed point argument, as in the proof of Lemma 4.1 in [@DELPINOWEI] using the fact that $$\|\nabla \phi\|_*\leq C \|\nabla \phi\|_{L^{\infty}(\Omega_{{\varepsilon}})}.$$
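More precisely (we only sketch the standard scheme), one looks for $\phi$ as a fixed point of the map $$\mathcal{A}(\phi):=T\left(-S(v_{{\varepsilon}})-{\varepsilon}\gamma({\varepsilon}y)\cdot\nabla \phi- N(\phi)\right),$$ where $T$ is the linear operator of Proposition \[solvabilitylinearproblem\]. Since the elementary inequality $|e^{t}-1-t|\leq \frac{t^2}{2}e^{|t|}$ gives $|N(\phi)|\leq \frac{1}{2}e^{v_{{\varepsilon}}}\|\phi\|^2_{L^{\infty}(\Omega_{{\varepsilon}})}e^{\|\phi\|_{L^{\infty}(\Omega_{{\varepsilon}})}}$, the nonlinear term is quadratically small in $\phi$, and together with the bound $\|S(v_{{\varepsilon}})\|_{*}\leq C{\varepsilon}^{{\alpha}}$ the contraction mapping principle applies in a ball of radius of order ${\varepsilon}^{{\alpha}}\ln\left(\frac{1}{{\varepsilon}}\right)$.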
The energy estimate {#energy estimates}
===================
Our next step consists in choosing the points $\zeta_i\in \overline{\Omega}$ so that the real numbers $c_{ij}$ (actually functions of $ {{{\boldsymbol \zeta}}}:=(\zeta_1,\ldots,\zeta_m)$) in equation are identically zero, thus leading to a solution of -.
Notice that the dimension of the approximate kernel of the linear problem - is $3k + 2l$. We get rid of $m=k+l$ elements of this approximate kernel by choosing parameters $d_1,\ldots,d_m$ satisfying . On the other hand, associated to the points $\zeta_i \in \Omega$, we must get rid of the constants $c_{i1}$ and $c_{i2}$, while for points $\zeta_i \in {\partial}\Omega$, we must get rid of only the constants $c_{i1}$.\
Throughout this part, we take $\zeta_1, \ldots,\zeta_m \in \overline{\Omega}$ satisfying conditions and . In addition, we assume that for some $\kappa \geq 1$ $$\label{closebutnotmuch}
c|\ln({\varepsilon})|^{-\kappa}\leq |\zeta_i - \zeta_j| \leq C , \quad \forall\, i,j =1,\ldots,m, \quad i\neq j.$$ Since $$c_{{\varepsilon}}:=\min\{|\zeta_i -\zeta_j|\,:\, i,j=1,\ldots,m, \quad i\neq j\}$$ we notice that $c_{{\varepsilon}}\geq c|\ln({\varepsilon})|^{-\kappa}$.
Since the approximation in depends on the points $\zeta_i$, we write $$u_{{\varepsilon}}( {{{\boldsymbol \zeta}}}):=u_{{\varepsilon}}=\sum_{i=1}^m U_i + H_i, \quad \quad \hbox{in} \quad \overline{\Omega},$$ where $$U_{i}(x):= \ln
\left(\frac{8d_i^2}{\left({\varepsilon}^2d_i^2 + |x-\zeta_i|^2\right)^2}\right),\quad \quad \hbox{for } x \in {\mathbb{R}}^2$$ and the $d_i$’s are real numbers satisfying $$\label{logconditiondi1}
\ln\left(8 d_i^2\right)= c_iH(\zeta_i,\zeta_i) \,+ \, \sum_{j\neq i} c_jG(\zeta_i,\zeta_j), \quad \quad \forall\,i=1,\ldots,m.$$ Denote also $
\phi=\phi( {{{\boldsymbol \zeta}}})$ and $c_{ij}( {{{\boldsymbol \zeta}}})
$ the unique solution of - and finally set $\tilde{\phi}( {{{\boldsymbol \zeta}}}):= \phi( {{{\boldsymbol \zeta}}})(\frac{x}{{\varepsilon}})$.
For any $u\in H^1(\Omega)$, we consider the energy $$\label{energy}
J(u):=\frac{1}{2}\int_{\Omega}a(x)\left(|\nabla u|^2 + u^2\right)dx \,-\,{\varepsilon}^2\int_{\Omega}a(x)e^u$$ which is well defined due to the Moser-Trudinger inequality. It is well known that critical points of $J$ are nothing but solutions of equation . Let us introduce the so-called [*reduced energy*]{} $$\label{red-energy}
\mathcal{F}_{{\varepsilon}}( {{{\boldsymbol \zeta}}}):= J\left(u_{{\varepsilon}}( {{{\boldsymbol \zeta}}}) + \tilde{\phi}( {{{\boldsymbol \zeta}}})\right).$$
The aim of the next result is to understand the role played by $\mathcal{F}_{{\varepsilon}}$ and its asymptotic behavior as ${\varepsilon}\to0.$
\[variational reduction\]
- If $ {{{\boldsymbol \zeta}}}=(\zeta_1,\ldots,\zeta_m),$ with the $\zeta_i$’s satisfying conditions , and , is a critical point of the functional $\mathcal{F}_{{\varepsilon}}$ then $u=u_{{\varepsilon}}( {{{\boldsymbol \zeta}}}) + \tilde{\phi}( {{{\boldsymbol \zeta}}})$ is a critical point of $J$, i.e. a solution of equation .
- There holds true that $$\mathcal{F}_{{\varepsilon}}( {{{\boldsymbol \zeta}}})=\mathcal F_0({{\boldsymbol \zeta}}) + \tilde{\theta}_{{\varepsilon}}( {{{\boldsymbol \zeta}}})$$ $C^1-$uniformly in ${{\boldsymbol \zeta}}=(\zeta_1,\ldots,\zeta_m)$. Here (given $c_0:=-4\pi(2-\ln8)$) $$\label{interior}\begin{aligned}
\mathcal F_0({{\boldsymbol \zeta}}):= &\sum_{i=1}^m a(\zeta_i) \left(c_0+8\pi|\ln {\varepsilon}|\right)
-4\pi \sum_{i=1}^m
a(\zeta_i)\left[H(\zeta_i,\zeta_i) +\sum_{j\neq i}G(\zeta_i,\zeta_j)\right]\\
&\hbox{if $\zeta_1,\dots,\zeta_m\in\Omega$}\\
\end{aligned}$$ and $$\label{boundary}\begin{aligned}
\mathcal F_0({{\boldsymbol \zeta}}):= &\sum_{i=1}^m a(\zeta_i) \left(c_0+8\pi|\ln {\varepsilon}|\right)
-2\pi \sum_{i=1}^m
a(\zeta_i)\left[H(\zeta_i,\zeta_i) +\sum_{j\neq i}G(\zeta_i,\zeta_j)\right]\\
&\hbox{if $\zeta_1,\dots,\zeta_m\in{\partial}\Omega$}\\
\end{aligned}$$ Moreover $\tilde{\theta}_{{\varepsilon}}$ is a $C^1-$ function such that $
|\tilde{\theta}_{{\varepsilon}}| + |\nabla _{ {{{\boldsymbol \zeta}}}} \tilde{\theta}_{{\varepsilon}}|
\to 0
$ as ${\varepsilon}\to 0$.
First of all, we prove $(i)$ and the fact that there exists a $C^1$ differentiable function $\tilde{\theta}_{{\varepsilon}}$ such that $$\mathcal{F}_{{\varepsilon}}( {{{\boldsymbol \zeta}}})=J(u_{{\varepsilon}}( {{{\boldsymbol \zeta}}})) + \tilde{\theta}_{{\varepsilon}}( {{{\boldsymbol \zeta}}})$$ and as ${\varepsilon}\to 0$, $$|\tilde{\theta}_{{\varepsilon}}| + |\nabla _{ {{{\boldsymbol \zeta}}}} \tilde{\theta}_{{\varepsilon}}|
\to 0$$ uniformly in ${{\boldsymbol \zeta}}=(\zeta_1,\ldots,\zeta_m)$. The proof follows the same lines as Lemma 5.1 and Lemma 5.2 in [@DELPINOWEI], with no changes. We leave the details to the reader. We also refer the reader to [@WEIYEZHOU1; @WEIYEZHOU2] for similar computations.\
\
Then we analyze the asymptotic behavior of the energy $J(u_{{\varepsilon}}({{\boldsymbol \zeta}}))$. We only consider the case when the points belong to $\Omega$, because the proof when the points lie on the boundary ${\partial}\Omega$ relies on similar arguments. [Moreover, we only compute the $C^0-$expansion because the $C^1-$estimate can be obtained in a similar way.]{}
First of all, since for every $i=1,\ldots,m$ $${{\rm div}}(a(x)\nabla PU_i)-a(x)PU_i+{\varepsilon}^2a(x)e^{U_i} =0\quad \quad \hbox{in} \quad \Omega$$ $$\frac{{\partial}PU_i}{{\partial}\nu}=0\quad \quad \hbox{on} \quad {\partial}\Omega$$ we know that for any $j,i=1,\ldots,m$, $$\int_{\Omega}a(x)\left(\nabla PU_i\cdot \nabla PU_j + PU_i\cdot PU_j\right)dx = {\varepsilon}^2\int_{\Omega}a(x)e^{U_i}PU_jdx.$$
Hence, we compute $$\begin{aligned}
J(u_{{\varepsilon}}({{\boldsymbol \zeta}}))&=&\frac{1}{2}\int_{\Omega}a(x)\left(|\sum_{i=1}^m \nabla PU_i|^2 + |\sum_{i=1}^m PU_i|^2\right)- {\varepsilon}^2\int_{\Omega}a(x)e^{\sum_{i=1}^m PU_i}\\
&=&\frac{1}{2}\sum_{i=1}^m \int_{\Omega}a(x)\left(\left|\nabla PU_i\right|^2 + \left|PU_i\right|^2\right)\\
&&+ \frac{1}{2}\sum_{i,j=1 i\neq j}^m \int_{\Omega}a(x)\left(\nabla PU_i\cdot \nabla PU_j + PU_i\cdot PU_j\right)-{\varepsilon}^2\int_{\Omega}a(x)e^{\sum_{i=1}^m PU_i}
\\
&=&\underbrace{\sum_{i=1}^m\frac{{\varepsilon}^2}{2}\int_{\Omega}a(x)e^{U_i}PU_i dx}_{I} + \underbrace{\sum_{i,j \,\,i\neq j}\frac{{\varepsilon}^2}{2}\int_{\Omega}a(x)e^{U_i}PU_jdx}_{II} -\underbrace{{\varepsilon}^2\int_{\Omega}a(x)e^{\sum_{i=1}^m PU_i}dx}_{III}.\end{aligned}$$
We first compute $I$. Fix $i=1,\ldots,m$ so that $${\varepsilon}^2\int_{\Omega}a(x)e^{U_i}PU_idx=\int_{\Omega}a(x)\frac{8{\varepsilon}^2d_i^2}{({\varepsilon}^2 d_i^2 + |x-\zeta_i|^2)^2}\left(U_i(x) + H_i(x)\right)dx.$$
From Lemma \[expansionHdzeta\], for any ${\alpha}\in (0,1)$ $$U_i(x)+H_i(x)=\ln\left(\frac{1}{({\varepsilon}^2 d_i^2 + |x-\zeta_i|^2)^2}\right) + H(x,\zeta_i) + {\mathcal{O}}({\varepsilon}^{{\alpha}}), \quad \quad x\in\Omega.$$
Using the change of variables $x=\zeta_i +{\varepsilon}d_i y$, we obtain $${\varepsilon}^2\int_{\Omega}a(x)e^{U_i}PU_idx$$
$$=\int_{\Omega_{{\varepsilon}d_i} -\frac{\zeta_i}{{\varepsilon}d_i}}a(\zeta_i + {\varepsilon}d_i y)\frac{8}{(1+|y|^2)^2}\left(-\ln({\varepsilon}^4d_i^4)+\ln\left(\frac{1}{(1+|y|^2)^2}\right)+ H(\zeta_i + {\varepsilon}d_i y,\zeta_i) + {\mathcal{O}}({\varepsilon}^{{\alpha}})\right)dy$$
$$= \int_{\Omega_{{\varepsilon}d_i} -\frac{\zeta_i}{{\varepsilon}d_i}}a(\zeta_i)\frac{8}{(1+|y|^2)^2}\left(-\ln({\varepsilon}^4d_i^4)\,+\,\ln\left(\frac{1}{(1+|y|^2)^2}\right)+ H(\zeta_i,\zeta_i) + {\mathcal{O}}({\varepsilon}^{{\alpha}}d_i^{{\alpha}}|y|^{{\alpha}})\right)dy$$
$$=\int_{{\mathbb{R}}^{2}}a(\zeta_i)\frac{8}{(1+|y|^2)^2}\left(-\ln({\varepsilon}^4d_i^4)\,+\,\ln\left(\frac{1}{(1+|y|^2)^2}\right)+ H(\zeta_i,\zeta_i) \right)dy + {\mathcal{O}}({\varepsilon}^{{\alpha}}d_i^{{\alpha}}).$$
Since $$\int_{{\mathbb{R}}^2}\frac{8}{(1+|y|^2)^2}dy=8\pi \quad \quad \hbox{and} \quad \quad \int_{{\mathbb{R}}^2}\frac{8}{(1+|y|^2)^2}\ln\left(\frac{1}{(1+|y|^2)^2}\right)dy=-16\pi,$$ we find that $$\label{estimateI}
I=-8\pi\sum_{i=1}^m a(\zeta_i)\left(\ln({\varepsilon}^2 d_i^2)+1\right) + 4\pi\sum_{i=1}^ma(\zeta_i)H(\zeta_i,\zeta_i) + {\mathcal{O}}({\varepsilon}^{{\alpha}}).$$ Here we also used the fact that the parameters $d_i$’s satisfy $
\frac{1}{C}\leq d_i \leq |\ln\left({{\varepsilon}}\right)|^C
$ and $
\lim_{{\varepsilon}\to 0^+} {\varepsilon}d_i =0
$ (as it follows directly from and Proposition and expression and ).
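For the reader's convenience, the two standard integrals used in the computation of $I$ can be verified directly in polar coordinates: $$\int_{{\mathbb{R}}^2}\frac{8}{(1+|y|^2)^2}\,dy=16\pi\int_0^{\infty}\frac{r\,dr}{(1+r^2)^2}=16\pi\left[-\frac{1}{2(1+r^2)}\right]_0^{\infty}=8\pi,$$ while polar coordinates and the substitution $u=1+r^2$ give $$\int_{{\mathbb{R}}^2}\frac{8}{(1+|y|^2)^2}\ln\left(\frac{1}{(1+|y|^2)^2}\right)dy=-16\pi\int_1^{\infty}\frac{\ln u}{u^2}\,du=-16\pi\left[-\frac{1+\ln u}{u}\right]_1^{\infty}=-16\pi.$$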
Next, we compute $II$. First we notice for $i\neq j$ that $${\varepsilon}^2\int_{\Omega}a(x)e^{U_i}PU_jdx$$ $$=\int_{\Omega}a(x)\frac{8 {\varepsilon}^2d_i^2}{({\varepsilon}^2d_i^2+|x-\zeta_i|^2)^2}\left(\ln\left(\frac{1}{({\varepsilon}^2d_j^2+|x-\zeta_j|^2)^2}\right)
+ H(x,\zeta_j) + {\mathcal{O}}({\varepsilon}^{{\alpha}}d_j^{{\alpha}})\right)dy.$$
Recall that $$c_{{\varepsilon}}:=\min\{|\zeta_i -\zeta_j|\,:\, i,j=1,\ldots,m, \quad i\neq j\}.$$
On the other hand, for $|x -\zeta_j|\geq c_{{\varepsilon}}$, we have that $$\begin{aligned}
\ln\left(\frac{1}{({\varepsilon}^2 d^2_j + |x-\zeta_j|^2)^2}\right)&=&-4\ln\left(|x-\zeta_j|\right)+ {\mathcal{O}}_{L^{\infty}(B_{c_{{\varepsilon}}}(\zeta_i))}({\varepsilon}^2d_j^2 |y-\zeta'_j|^{-2})\\
&=&-4\ln(|{\varepsilon}y -\zeta_j|) + {\mathcal{O}}_{L^{\infty}(B_{\frac{c_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i'))}({\varepsilon}^2 d_j^2 |y-\zeta'_j|^{-2})\\
&=&-4\ln(|\zeta_i -\zeta_j|) + {\mathcal{O}}_{L^{\infty}(B_{\frac{c_{{\varepsilon}}}{{\varepsilon}}}(\zeta_i'))}\left({\varepsilon}^{{\alpha}}\right).\end{aligned}$$
Also $$H(x,\zeta_j)=H(\zeta_i,\zeta_j)+ {\mathcal{O}}_{L^{\infty}(\Omega)}(|x - \zeta_i|^{{\alpha}}), \quad \quad \forall\, x\in \Omega,$$ so that $$\begin{aligned}
{\varepsilon}^2\int_{\Omega}a(x)e^{U_i}PU_jdx&=&\int_{\Omega}a(\zeta_i)\frac{8}{(1+|y|^2)^2}\left(-4\ln\left(|\zeta_i-\zeta_j|\right)
+ H(\zeta_i,\zeta_j) + {\mathcal{O}}({\varepsilon}^{{\alpha}})\right)dy\\
&=&8\pi a(\zeta_i)G(\zeta_i,\zeta_j) + {\mathcal{O}}({\varepsilon}^{{\alpha}}).\end{aligned}$$
Therefore, $$\label{estimateII}
II=\sum_{i,j,\,\,i\neq j}^m 4\pi \,a(\zeta_i)G(\zeta_i,\zeta_j) + {\mathcal{O}}({\varepsilon}^{{\alpha}}).$$
Finally, we compute $III$. To do this we appeal to Lemma \[sizeof the errorSvep\] to obtain that for fixed $i=1,\ldots,m$ and for any $x\in B_{c_{{\varepsilon}}}(\zeta_i)$ $$\begin{aligned}
III&=&{\varepsilon}^2\int_{\Omega}a(x)e^{\sum_{j=1}^m U_j(x) +H_j(x)}dx \nonumber\\
&=&{\varepsilon}^2 \sum_{i=1}^m\int_{\Omega}a(x)e^{\sum_{j=1}^m U_j(x) +H_j(x)}\chi_{B_{c_{{\varepsilon}}}(\zeta_i)}dx + {\varepsilon}^2\int_{\Omega}a(x)e^{\sum_{j=1}^m U_j(x) +H_j(x)}\chi_{\Omega-\cup_{i=1}^m B_{c_{{\varepsilon}}}(\zeta_i)}dx\nonumber\\
&=&8\pi \sum_{i=1}^m a(\zeta_i) + {\mathcal{O}}({\varepsilon}^{{\alpha}}).\label{estimateIII}\end{aligned}$$
Finally putting together estimates , and we obtain that $$J(u_{{\varepsilon}}({{\boldsymbol \zeta}})) = -8\pi\sum_{i=1}^m a(\zeta_i)\left(2+\ln({\varepsilon}^2 d_i^2)\right) +4\pi \sum_{i=1}^m
a(\zeta_i)\left[H(\zeta_i,\zeta_i) +\sum_{j\neq i}G(\zeta_i,\zeta_j)\right] + {\mathcal{O}}({\varepsilon}^{{\alpha}})$$ and using condition the desired estimate follows.
Proofs of Theorems {#proofoftheorems}
==================
Let $
{{{\bf s}}}:=( {s}_1,\ldots, {s}_m)\in ({\partial}\Omega)^m $ and $ {{\bf t}}:=(t_1,\ldots,t_m)\in ({\mathbb{R}}_+)^m .$ Let us consider the *configuration space* $$\Lambda_{{\delta}}:=\{
({{\bf s}},{{\bf t}})\in ({\partial}\Omega )^m\times ({\mathbb{R}}_+)^m\,:\, |s_i-s_j|\ge \delta, \ \delta<t_i<1/\delta\},$$ for some $\delta>0$ small, independent of ${\varepsilon}>0.$
For any point in $({{\bf s}},{{\bf t}})\in \Lambda_{{\delta}}$, we set $$\label{zetai}
\zeta_i:=s _{i} + |\ln({\varepsilon})|^{-1}\,t _i\,\nu( { s_{i}}), \quad \quad \forall i=1,\ldots,m.$$
Observe that , , and hold true. By Proposition \[variational reduction\] we have to find a critical point ${{\boldsymbol \zeta}}^{\varepsilon}:=(\zeta_1^{\varepsilon},\dots,\zeta_m^{\varepsilon})\in \Omega\times\dots\times \Omega$ of the reduced energy $\mathcal F_{\varepsilon}$, where each $\zeta_i^{\varepsilon}$ is as in . If we use the parametrization of the points $\zeta_i$ given in , we are led to find a critical point $({{\bf s}}^{\varepsilon},{{\bf t}}^{\varepsilon})$ of the reduced energy $\mathcal F_{\varepsilon}({{\bf s}},{{\bf t}}).$
By and the property of Robin’s function we deduce that $\mathcal F_{\varepsilon}$ reduces to $$\label{cruciale} \mathcal F_{\varepsilon}({{\bf s}},{{\bf t}})= 8\pi|\ln{\varepsilon}|\left[\sum_{i=1}^m a(s_i)+\Upsilon_{\varepsilon}({{\bf s}})\right] + 16\pi\sum_{i=1}^m\left[ a(s_i) \ln t_i+t_i\partial_\nu a(s_i)\right]+\Theta_{\varepsilon}({{\bf s}},{{\bf t}}),$$ where the smooth function $\Upsilon_{\varepsilon}({{\bf s}})$ depends only on ${{\bf s}}$, while $\Theta_{\varepsilon}({{\bf s}},{{\bf t}})$ depends on both ${{\bf s}}$ and ${{\bf t}}$; both are higher order terms, namely $|\Upsilon_{\varepsilon}|,|\nabla \Upsilon_{\varepsilon}|,|\Theta_{\varepsilon}|,|\nabla \Theta_{\varepsilon}|\to0$ as ${\varepsilon}\to0.$ The proof of this claim is postponed to the end.
Once the estimate is proved, the claim easily follows by degree theory. Indeed, using , let us introduce the continuous functions $$\begin{aligned}
\label{cru3}
& {\nabla_{s_i}\mathcal F_{\varepsilon}({{\bf s}},{{\bf t}})\over |\ln{\varepsilon}|}= \underbrace{8\pi \nabla a(s_i )}_{:=\mathcal S_i(s_i)}+o(1),\ i=1,\dots,m\\
\label{cru4} & {\nabla_{t_i}\mathcal F_{\varepsilon}({{\bf s}},{{\bf t}}) }= \underbrace{16\pi \left[{a(s_i )\over t_i}+\partial_\nu a(s_i) \right]}_{:=\mathcal T_i(s_i,t_i)}+o(1),\ i=1,\dots,m .
\end{aligned}$$ Under assumption [*(A1)*]{}, we know that $a$ has $m$ different strict local minima or local maxima points $\zeta^*_1,\dots,\zeta_m^*\in {\partial}\Omega.$ Set $s_i^*:=\zeta_i^*.$ Therefore, for some $\rho>0$ small enough, the *Brouwer degree* $ \mathrm {deg}(\nabla a, B(s^*_i,\rho),0)$ is well defined and (see, for example, Corollary 2 in [@AMANN]) $$\label{cru5}
\mathrm {deg}(\mathcal S_i, B(s^*_i,\rho),0)=\pm1\not=0\ \hbox{for any}\ i=1,\dots,m.$$
On the other hand, since $\partial_\nu a(\zeta^*_i)<0$, we can choose $\rho$ small enough so that for any $s_i\in B(s^*_i,\rho)$ there exists a unique $t_i=t_i(s_i)=-{a(s_i)\over\partial_\nu a(s_i)}$ such that $\mathcal T_i(s_i,t_i)=0.$
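For later use, let us also record the sign of the $t_i$-derivative at this zero: $$\partial_{t_i}\mathcal T_i\big(s_i,t_i(s_i)\big)=-16\pi\,\frac{a(s_i)}{t_i(s_i)^{2}}<0,$$ where the strict inequality uses $a(s_i)>0$, as forced by $t_i(s_i)=-{a(s_i)\over\partial_\nu a(s_i)}>0$ together with $\partial_\nu a(s_i)<0$.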
Set $t_i^*:=t_i(s^*_i)$ and take $\rho$ smaller if necessary, so that the Brouwer degree $$\mathrm {deg}( (\mathcal S_i, \mathcal T_i),
B(s^*_i,\rho)\times B(t_i^*,\rho) ,0)$$ is well defined. Since $\partial_{t_i}\mathcal T_i(s_i,t_i(s_i))<0,$ by (see, for example, Lemma 5.7 in [@mopi]) we immediately get that $$\label{cru6}
\mathrm {deg}( (\mathcal S_i, \mathcal T_i),
B(s_i^*,\rho)\times B(t_i^*,\rho) ,0)\not=0\ \hbox{for any}\ i=1,\dots,m.$$ Finally, by using the properties of Brouwer degree, we get $$\label{cru7}\begin{aligned}
& \mathrm {deg}( (\mathcal S_1, \mathcal T_1),\dots,(\mathcal S_m, \mathcal T_m) ,
\left(B(s_1^*,\rho)\times B(t_1^*,\rho)\right)\times\dots\times \left(B(s_m^*,\rho)\times B(t_m^*,\rho)\right),0)\\
&=\mathrm {deg}( (\mathcal S_1, \mathcal T_1),
B(s_1^*,\rho)\times B(t_1^*,\rho) ,0)\times\dots\times\mathrm {deg}( (\mathcal S_m, \mathcal T_m),
B(s_m^*,\rho)\times B(t_m^*,\rho) ,0)\\
&\not=0.
\end{aligned}$$ Combining with and we deduce that if ${\varepsilon}$ is small enough there exists $({{\bf s}}^{\varepsilon},{{\bf t}}^{\varepsilon})$ such that $\nabla \mathcal F_{\varepsilon}({{\bf s}}^{\varepsilon},{{\bf t}}^{\varepsilon})=0$. In particular, ${{\bf s}}^{\varepsilon}=(s_1^{\varepsilon},\dots,s_m^{\varepsilon})\to (\zeta^*_1,\dots, \zeta^*_m)$ as ${\varepsilon}\to0.$ That concludes the proof.\
Let us prove .
Using a smooth extension of the function $a(x)$ we can perform a Taylor expansion around every point $s_i$ to obtain that $$\label{taylora(x)}
a(\zeta_i)=a(s _i) + |\ln({\varepsilon})|^{-1}\,t _i {\partial}_{\nu}a(s _i) + {\mathcal{O}}([|\ln({\varepsilon})|^{-1}t _i]^2).$$ On the other hand, from Corollary \[corolario\] and the regularity in of the function $R(z)$ described in , we find that for any ${\alpha}\in (0,1)$ and any $\zeta_i$ as in , [$$\label{expasionrobin}
H(\zeta_i,\zeta_i)=4\ln\left(\ln\left(\frac{1}{{\varepsilon}}\right)\right)+4\ln\left(\frac{1}{2\,t_i}\right)
\,+\,{\rm z}(s_i)\,+\,{\mathcal{O}}\left([|\ln({\varepsilon})|^{-1}t_i]^{{\alpha}}\right).$$]{}
On the other hand, since $R\in C^{\infty}({\mathbb{R}}^2-\{0\})$, we can use expression , Proposition \[asymptoticsof H\] and the fact that $|s_i-s_j|\ge c>0$, to obtain the expansion $$\label{boundedgreen}\begin{aligned}
G(\zeta_i,\zeta_j)&= - 4\ln\left(|s_i-s_j|\right)\,+\,\tilde{z}(s_i,s_j)
\\ &
+\frac{1}{\ln\left(\frac{1}{{\varepsilon}}\right)}\left[4 t_j\,R(s_i-s_j)\cdot \left(D\gamma(s_j)\cdot \nu(s_j)\right)+ \nabla \tilde{z}(s_i,s_j)\cdot (t_i\nu(s_i),t_j\nu(s_j))\right]\\
&+ o\left(|\ln({\varepsilon})|^{-1}[|t_i|+ |t_j|]\right).
\end{aligned}$$
Hence, from , putting together expressions , and , we get $$\begin{aligned}
\mathcal F_0({{\bf s}},{{\bf t}})&=
\sum_{i=1}^m\left[{\rm c}_0 + {\rm c}_1\ln\left(\ln\left(\frac{1}{{\varepsilon}}\right)\right) +{\rm c}_2\ln\left(\frac{1}{{\varepsilon}}\right)\right]a(s_i)\\ &
+16\pi\sum_{i=1}^m\left[\ln\left(t_i\right)a(s_i) + \left(1 + \frac{\ln\left(t_i\right)}{\ln\left(\frac{1}{{\varepsilon}}\right)}\right) t_i\,{\partial}_{\nu}a(s_i)\right]\\ &
-4\pi\sum_{i=1}^m \left[ {\rm z}(s_i) - \sum_{j\neq i} 4\ln\left(|s_i -s_j|\right) + \tilde{z}(s_i,s_j)\right]\left(a(s_i)\,+\,\frac{t_i}{\ln\left(\frac{1}{{\varepsilon}}\right)}{\partial}_{\nu}a(s_i)\right)
\\ &+{\mathcal{O}}\left(|\ln({\varepsilon})|^{-{\alpha}}|{{\bf t}}|^{{\alpha}}\right)\end{aligned}$$ for some constants ${\rm c}_0,{\rm c}_1$ and ${\rm c}_2>0$, and estimate follows.\
This concludes the proof of the Theorem.
We next proceed with the proof of Theorem \[theo3\].
By Proposition \[variational reduction\] we have to find a critical point ${{\boldsymbol \zeta}}^{\varepsilon}:=(\zeta_1^{\varepsilon},\dots,\zeta_m^{\varepsilon})\in\partial\Omega\times\dots\times\partial\Omega$ of the reduced energy $\mathcal F_{\varepsilon}.$ By we have that $\mathcal F_{\varepsilon}$ reduces to $$\mathcal F_{\varepsilon}({{\boldsymbol \zeta}})=|\ln{\varepsilon}|\left[8\pi\sum\limits_{i=1}^m a(\zeta_i)+o(1)\right]$$ $C^1-$uniformly in $ \left\{(\zeta_1,\dots,\zeta_m)\in{\partial}\Omega\times\dots\times{\partial}\Omega\ :\ |\zeta_i-\zeta_j|\ge c\ \hbox{for any }\ i\not=j\right\}$. The claim follows from the fact that $a$ has $m$ different strict local maxima or local minima points $\zeta^*_1,\dots,\zeta^*_m$ on the boundary ${\partial}\Omega$ which are stable under $C^1-$perturbation, using a degree argument as in the proof of Theorem \[theo2\]. In particular, there exists a critical point $(\zeta_1^{\varepsilon},\dots,\zeta_m^{\varepsilon})$ of $\mathcal F_{\varepsilon}$ such that each $\zeta_i^{\varepsilon}\to \zeta^*_i$ for $i=1,\dots,m.$
We next proceed with the proof of Theorem \[theo4\].
By Proposition \[variational reduction\] we have to find a critical point ${{\boldsymbol \zeta}}^{\varepsilon}:=(\zeta_1^{\varepsilon},\dots,\zeta_m^{\varepsilon})\in\partial\Omega\times\dots\times\partial\Omega$ of the reduced energy $\mathcal F_{\varepsilon}.$ By we have that $\mathcal F_{\varepsilon}$ reduces to $$\mathcal F_{\varepsilon}({{\boldsymbol \zeta}})= 8\pi|\ln{\varepsilon}| \sum\limits_{i=1}^m a(\zeta_i)-2\pi\sum_{j,i=1\atop j\neq i}^m a(\zeta_i) G(\zeta_i,\zeta_j)+o(1)$$ $C^0-$uniformly in $ \left\{(\zeta_1,\dots,\zeta_m)\in{\partial}\Omega\times\dots\times{\partial}\Omega\ :\ |\zeta_i-\zeta_j|\ge c|\ln{\varepsilon}|^{-\kappa}\ \hbox{for any }\ i\not=j\right\}$.
Arguing exactly as in Lemma 7.1 of [@WEIYEZHOU2] we prove that $\mathcal F_{\varepsilon}$ has a local maximum point $(\zeta_1^{\varepsilon},\dots,\zeta_m^{\varepsilon})$ such that each $\zeta_i^{\varepsilon}\to \zeta_0$ as ${\varepsilon}\to0.$
Appendix
========
The following technical lemma is rather standard, but for the sake of completeness, we include the details of the proof.
\[boundaryterm\] For every $x, \zeta \in {\partial}\Omega$, define $g(x,\zeta)$ as $$g(x,\zeta):=
\left\{
\begin{array}{ccc}
\frac{\nu(x)\cdot(x-\zeta)}{|x -\zeta|^2},& \quad x \in {\partial}\Omega-\{\zeta\},\\
\\
\frac{1}{2}k_{{\partial}\Omega}(\zeta), & \quad x = \zeta,
\end{array}
\right.$$ where $k_{{\partial}\Omega}(y)$ is the signed curvature of ${\partial}\Omega$ at $y$. Then for every $\zeta \in
{\partial}\Omega$, $g(\cdot,\zeta)\in C^{\infty}({\partial}\Omega)$. Even more, the mapping $\zeta \in {\partial}\Omega \mapsto g(\cdot, \zeta)$ belongs to $C^1({\partial}\Omega; C^1({\partial}\Omega))$.
Recall that ${\partial}\Omega$ is smooth. Let $\zeta \in {\partial}\Omega$ be arbitrary. After a translation and a rotation, if necessary, we may assume that $\zeta=(0,0)\in {\mathbb{R}}^2$ and that $\nu(\zeta)=(0,1)$. Hence, there exists $R>0$, ${\delta}>0$ small and a function ${\rm p}: (-{\delta},{\delta})\to {\mathbb{R}}$ satisfying $${\rm p}\in C^{\infty}(-{\delta},{\delta}), \quad {\rm p}(0)=0, \quad {\rm p}'(0)=0$$ and such that $$\Omega \cap B_R(\zeta) =\{(x_1,x_2)\,:\, -{\delta}<x_1< {\delta}, \quad {\rm p}(x_1)<x_2\}\cap B_{R}(0,0).$$
The inner unit normal vector in ${\partial}\Omega \cap B_R(\zeta)$ is computed as $$\nu(x_1,{\rm p}(x_1))=\frac{(-{\rm p}'(x_1),1)}{\sqrt{1 +|{\rm p}'(x_1)|^2 }}, \quad x_1 \in (-{\delta},{\delta}),$$ so that for $x_1\in (-{\delta},{\delta})$ $$\label{smoothnessboundarycondition}
g(x_1,{\rm p}(x_1),\zeta)=\frac{1}{\sqrt{1 +|{\rm p}'(x_1)|^2 }}\cdot
\frac{{\rm p}(x_1) - x_1\,{\rm p}'(x_1)}{x_1^2 + {\rm p}^2(x_1)}.$$
Since ${\rm p}$ is smooth, we use Taylor’s expansion together with the fact that ${\rm p}(0)={\rm p}'(0)=0$ to find that $$\label{redefinitionbdcondregularpart}
{\rm p}(x_1) = x_1^2\int_{0}^1(1-t) {\rm p}''(t\, x_1)dt\,=\,x_1^2 \,{\rm r}(x_1), \quad \quad {\rm p}'(x_1)=x_1\int_0^1 {\rm p}''(t\,x_1)dt\,=\, x_1\,{\rm q}(x_1),$$ where $${\rm r}(x_1):=\int_{0}^1(1-t) {\rm p}''(t\, x_1)dt, \quad \quad {\rm q}(x_1):=\int_0^1 {\rm p}''(t\,x_1)dt, \quad \quad \hbox{for } \quad -{\delta}<x_1<{\delta}.$$
Putting together and we obtain $$\label{eqrmpsmooth}
\frac{{\rm p}(x_1) - x_1\,{\rm p}'(x_1)}{x_1^2 + {\rm p}(x_1)^2}=\frac{{\rm r}(x_1) - {\rm q}(x_1)}{1 + x_1^2\,{\rm r}^2(x_1)}$$ which is a smooth function in $(-{\delta},{\delta})$.
By successive differentiation of the identity in , we conclude that $g(x,\zeta)$ is smooth and also notice that $$\lim_{x_1\to 0}g((x_1,{\rm p}(x_1)),\zeta)=-\frac{1}{2}{\rm p}''(0)=\frac{1}{2}k_{{\partial}\Omega}(0,0).$$ Finally, from the smoothness of the tangent bundle of ${\partial}\Omega$, $T({\partial}\Omega)$, we obtain that $\zeta \mapsto g(\cdot,\zeta)$ belongs to $C^1({\partial}\Omega; C^1({\partial}\Omega))$, and this concludes the proof of the lemma.
[**Acknowledgments:** ]{} The research of the first author was supported by the Grant 13-00863S of the Grant Agency of the Czech Republic. The research of the second author was partially supported by GNAMPA.
[99]{} <span style="font-variant:small-caps;">Amann E.</span> A note on degree theory for gradient mappings. Proceedings of the American Mathematical Society. Volume 85, Number 4, 591-595 1982.
<span style="font-variant:small-caps;">Bellomo M., A. Bellouquid, Y. Tao and M. Winkler</span> Toward a mathematical theory of Keller-Segel models of pattern formation in biological tissues Math. Models Methods Appl. Sci. 25, 1663-1763 (2015).
<span style="font-variant:small-caps;">Biler P.</span> Local and global solvability of some parabolic system modeling chemotaxis. Adv. Math. Sci. Appl. 8 715-43. 1998.
<span style="font-variant:small-caps;">Brenner M P, Constantin P, Kadanoff L P, Schenkel A and Venkataramani S C</span> Diffusion, attraction and collapse. Nonlinearity 12 1071-98. 1999 .
<span style="font-variant:small-caps;">Brezis, H.</span> Functional analysis, Sobolev spaces and partial differential equations. Universitext. Springer, New York, 2011.
<span style="font-variant:small-caps;">Childress S.</span> Chemotactic Collapse in Two Dimensions (Lecture Notes in Biomathematics) vol 55 (Berlin: Springer) pp 61-8. 1984.
<span style="font-variant:small-caps;">Childress S and Percus J K.</span> Nonlinear aspects of chemotaxis Math. Biosci. 56 217-37. 1981.
<span style="font-variant:small-caps;">Corrias L, Perthame B and Zaag H.</span> Global solutions of some chemotaxis and angiogenesis systems in high space dimensions Milan J. Math. 72 1-28. 2004.
<span style="font-variant:small-caps;">Del Pino M, Kowalczyk M and Musso M.</span> Singular limits in Liouville-type equations Calc. Var. Partial Diff. Eqns 24 45-82. 2005.
<span style="font-variant:small-caps;">Del Pino M, Pistoia A and Vaira G.</span> Large mass boundary condensation patterns in the stationary Keller-Segel system arXiv 1403.2511
<span style="font-variant:small-caps;">del Pino M and Wei, J C.</span> Collapsing steady states of the Keller-Segel system. Nonlinearity 19, no. 3, 661-684. 2006.
<span style="font-variant:small-caps;">Di Nezza E, Palatucci G and Valdinoci E.</span> Hitchhiker’s guide to the fractional Sobolev spaces. Bull. Sci. Math. 136, no. 5, 521-573. 2012.
<span style="font-variant:small-caps;">Dolbeault J and Perthame B.</span> Optimal critical mass in the two-dimensional Keller-Segel model in R2 C. R. Math. Acad. Sci. Paris 339 611-6. 2004
<span style="font-variant:small-caps;">Esposito P, Grossi M and Pistoia A.</span> On the existence of blowing-up solutions for a mean field equation. Ann. Inst. H. Poincaré Anal. Non Linéaire 22 227-57. 2005.
<span style="font-variant:small-caps;">Guerra I and Peletier M.</span> Self-similar blow-up for a diffusion-attraction problem Nonlinearity 17 2137-62. 2004.
<span style="font-variant:small-caps;">Gilbarg, D and Trudinger N.S.</span> Elliptic partial differential equations of second order. Reprint of the 1998 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001.
<span style="font-variant:small-caps;">Henry D.</span> Geometric Theory of Semilinear Parabolic Equations (Berlin: Springer) 1981.
<span style="font-variant:small-caps;">Herrero M A and Velazquez J J L.</span> Singularity patterns in a chemotaxis model Math. Ann. 306 583-623. 1996.
<span style="font-variant:small-caps;">Herrero M A and Velazquez J J L.</span> Chemotactic collapse for the Keller-Segel model J. Math. Biol. 35 177-96. 1996.
<span style="font-variant:small-caps;">Herrero M A and Velazquez J J L.</span> A blow-up mechanism for a chemotaxis model Ann. Scuola Norm. Sup.Pisa IV 35 633-83. 1997.
<span style="font-variant:small-caps;">Horstmann D.</span> On the existence of radially symmetric blow-up solutions for the Keller-Segel model J. Math. Biol. 44 463-78. 2002.
<span style="font-variant:small-caps;">Jager W and Luckhaus S.</span> On explosions of solutions to a system of partial differential equations modelling chemotaxis Trans. Am. Math. Soc. 329 819-24. 1992.
<span style="font-variant:small-caps;">Keller E F and Segel L A.</span> Initiation of slime mold aggregation viewed as an instability J. Theor. Biol. 26 399-415. 1970.
<span style="font-variant:small-caps;">Molle, R and Pistoia, A</span> Concentration phenomena in elliptic problems with critical and supercritical growth. Adv. Differential Equations 8 (2003), no. 5, 547–570.
<span style="font-variant:small-caps;">Nagai T, Senba T and Yoshida K.</span> Application of the Trudinger-Moser inequality to a parabolic system of chemotaxis Funckcjal. Ekvac. 40 411-33. 1997.
<span style="font-variant:small-caps;">Pistoia A. and Vaira G.</span> Steady states with unbounded mass of the Keller-Segel system. Proc. Roy. Soc. Edinburgh Sect. A [**145**]{} (2015), no. 1, 203–222.
<span style="font-variant:small-caps;">Schaaf R.</span> Stationary solutions of chemotaxis systems. Trans. Am. Math. Soc. 292 531-56. 1985.
<span style="font-variant:small-caps;">Senba T and Suzuki T.</span> Some structures of the solution set for a stationary system of chemotaxis Adv. Math. Sci. Appl. 10 191-224. 2000.
<span style="font-variant:small-caps;">Temam R.</span> Infinite-Dimensional Dynamical Systems in Mechanics and Physics (New York: Springer). 1988.
<span style="font-variant:small-caps;">Wang G and Wei J.</span> Steady state solutions of a reaction-diffusion system modeling Chemotaxis Math. Nachr. 233-234 221-36. 2002.
<span style="font-variant:small-caps;">Wang, Y and Wei L.</span> Multiple boundary bubbling phenomenon of solutions to a Neumann problem. Adv. Differential Equations 13, no. 9-10, 829-856. 2008.
<span style="font-variant:small-caps;">Wei JC, Ye D and Zhou F.</span> Analysis of boundary bubbling solutions for an anisotropic Emden-Fowler equation. Ann. Inst. H. Poincaré Anal. Non Linéaire 25, no. 3, 425-447, 2008.
<span style="font-variant:small-caps;">Wei JC, Ye D and Zhou F.</span> Bubbling solutions for an anisotropic Emden-Fowler equation. Calc. Var. Partial Differential Equations 28, no. 2, 217-247. 2007.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: |
This paper introduces a new research problem of video domain generalization (**video DG**) where most state-of-the-art action recognition networks degenerate due to the lack of exposure to the target domains of divergent distributions. While recent advances in video understanding focus on capturing the temporal relations of the long-term video context, we observe that the global temporal features are less generalizable in the video DG settings. The reason is that videos from other unseen domains may have unexpected absence, misalignment, or scale transformation of the temporal relations, which is known as the temporal domain shift. Therefore, the video DG is even more challenging than the image DG, which is also under-explored, because of the entanglement of the spatial and temporal domain shifts.
This finding has led us to view the key to video DG as how to effectively learn the local-relation features of different time scales that are more generalizable, and how to exploit them along with the global-relation features to maintain the discriminability. This paper presents the **Adversarial Pyramid Network** (**APN**), which captures the local-relation, global-relation, and multilayer cross-relation features progressively. This pyramid network not only improves the feature transferability from the view of representation learning, but also enhances the diversity and quality of the new data points that can bridge different domains when it is integrated with an improved version of the image DG adversarial data augmentation method [@ADA]. We construct four video DG benchmarks: UCF-HMDB, Something-Something, PKU-MMD, and NTU, in which the source and target domains are divided according to different datasets, different consequences of actions, or different camera views. The APN consistently outperforms previous action recognition models over all benchmarks.
author:
- |
Zhiyu Yao[^1] , Yunbo Wang$^{*}$, Xingqiang Du, [Mingsheng Long]{} (), and Jianmin Wang\
School of Software, BNRist, Tsinghua University, China\
[{yaozy19,wangyb15,dxq18}@mails.tsinghua.edu.cn, {mingsheng,jimwang}@tsinghua.edu.cn]{}
bibliography:
- 'egbib.bib'
title: Adversarial Pyramid Network for Video Domain Generalization
---
Introduction
============
![ We view the key to solving video DG as how to exploit local and global temporal cues to overcome the space-time domain shift. While the global temporal relations appear to be less generalizable, the local temporal relations can increase feature transferability, but may also lead to incorrect generalizations without the help of global temporal relations. In this example, we may recognize a video of *playing basketball* from the unseen target domain via a shared sub-action *dribbling* (). However, through the local temporal feature of *running*, which is less discriminative from a classification perspective, we may misrecognize a video of *playing football* () as *playing basketball*. This work balances the transferability against discriminability between the local and global temporal features. []{data-label="video-dg-examples"}](local.pdf){width="\columnwidth"}
Improving the transferability of video recognition models has not been fully investigated before, but it is significant in real-world applications where datasets consist of a limited number of population sources. Previous deep networks achieve competitive results in the intra-domain setting [@Wang16; @wang2017spatiotemporal; @carreira2017quo; @NonLocal2018; @zhou2018temporal], assuming that training and test videos are independently and identically distributed (i.i.d.). We find that the performance of these models degenerates since the i.i.d. assumption might be violated as the domain shift is on. In this paper, we name this inter-domain experiment setting as the video domain generalization (**video DG**) problem, where models are trained on one source domain and evaluated on **unseen** target domains with the same label set. Unlike the extensively studied video domain adaptation problem [@chen2019temporal; @Jamal2018DeepDA], in video DG, not only labels but also data of the target domains are unavailable during training.
The essential question of video DG is how to mitigate the domain shift when the target distributions are totally unknown. The overall domain shift can be viewed as an entanglement of the spatial and temporal domain shifts. The spatial domain shift is caused by the variations of the appearance in video frames, which can be partly solved by previous image DG methods [@ADA; @Jigsaw; @dou2019domain]. This paper focuses more on the entangled spatial and temporal domain shifts, which are more complex, since we need to consider the misalignment of temporal features across domains further.
We claim that the key to this problem is to effectively capture the local temporal relations between frames and leverage them as generalization bridges. Our motivation is that the sub-actions of videos under the same category might not be identical in different domains, but there must be an intersection of several local temporal features. For example, in Figure \[video-dg-examples\], videos of *playing basketball* from different domains may share multiple sub-actions such as layup and **dribbling** (strong local-relation features). But we notice that different categories may also share intersections of local temporal cues: *playing basketball* and *playing football* may share the same sub-action **running** (weak local-relation features). Although we don’t know what the strong local-relation features are, we know that they are necessarily highly correlated with the discriminative representations of the video, so we can cover the unknown target distributions by augmenting the training dataset via these shared local temporal cues. Otherwise, if we augment the dataset through the less discriminative weak local-relation features, it may lead to false generalization results.
In this paper, we present the Adversarial Pyramid Network (**APN**) to solve video DG, which is empirically established on the observation that the local-relation features improve the transferability while the global-relation features ensure the discriminability. The APN trades off the impact of these features in the processes of representation learning and the cross-domain data augmentation. At different pyramid levels, the APN captures local-relation features at different time scales and then aligns them to the global-relation features, thereby weakening the impact of less discriminative local-relation features that might lead to false generalization results. As a result, the APN learns multilayer cross-relation features with different levels of transferability and discriminability. Further, the multilayer cross-relation features can be used to expand the original Adversarial Data Augmentation (ADA) method [@ADA] specifically to video data, so as to solve the spatial and temporal domain shifts together. We observe that only if the multilayer cross-relation features are combined, can the ADA method achieve a notable improvement on video DG.
To sum up, this paper has the following contributions:
- **A new problem:** We introduce a new research problem of video DG and show that most previous video recognition networks degenerate in such settings as the spatiotemporal domain shift is on.
- **A deep network that learns more generalizable video representation:** We provide a pilot study of video DG and propose the new APN model that learns a pyramid of temporally cross-relation features that transfer well to other unseen domains. It is established on our key findings that the local-relation features are more generalizable while the global-relation features are more discriminative.
- **An extended ADA method for video data:** Tightly depending on the pyramid of cross-relation features, we adapt the image ADA method [@ADA] to space-time, along with a different minimax training procedure. We show that only with the relational feature pyramid, can the ADA be effectively used to improve video DG.
- **New video DG benchmarks:** We design four video DG benchmarks, based on five existing video datasets that are widely used in the standard, early and multi-view action recognition. These benchmarks cover different video DG scenarios with (1) only spatial domain shift; (2) entangled spatial and temporal domain shifts. Our approach achieves the best results consistently.
Related Work
============
This work is closely related to the deep learning methods in video action recognition and image DG, and is also related to the recent advances in video domain adaptation.
#### Video Action Recognition.
Many deep models for video action recognition are based on 2D CNNs [@Simonyan14; @Karpathy14; @feichtenhofer2016spatiotemporal; @Wang16; @zhou2018temporal; @sun2017optical; @STM; @lin2019tsm] and 3D CNNs [@C3D; @Tran15; @carreira2017quo; @qiu2017learning; @xie2017rethinking; @NonLocal2018; @CSN]. Among all these models, our APN model is most related to the Temporal Relation Network (TRN) [@zhou2018temporal], which learns local temporal features from different lengths and different combinations of short video snippets and then ensembles them together to get a global sequence-level feature. However, we find that the performance of TRN deteriorates in the video DG settings. We also find that TRN cannot further benefit from the modern DG method [@ADA].
Our work is distinct from TRN in two respects. First, our approach is fully motivated from the view of domain generalization. Thus we consider the cross-relation features in addition to the local relational features to trade off the discriminability and transferability. Second, we introduce a new approach that effectively adapts the previous ADA method to video DG. Different from TRN, our approach leverages a pyramid architecture based on the Transformer self-attention mechanism [@Transformer] to progressively fuse temporal relations at different time scales. From this perspective, our work is also related to recognition models that incorporate attention blocks [@attention-pooling; @NonLocal2018; @zhang2018attention; @wang2018eidetic]. Our model organizes the attention blocks in a pyramid framework, which is driven by a clear motivation and validated by extensive experiments.
#### Image Domain Generalization.
Previous approaches in DG are mostly designed for image data [@li2018domain; @li2019feature; @shankar2018generalizing; @ADA; @Jigsaw; @dou2019domain], which can be divided into two groups: feature-based methods and data-based methods. Feature-based methods focus on extracting invariant cross-domain representations. Li *et al.* [@li2018domain] introduced a prior distribution on the feature representation via adversarial learning and a Maximum Mean Discrepancy (MMD) regularizer. Li *et al.* [@li2019feature] proposed a meta-learning approach to train a domain-invariant feature extractor. Data-based methods connect the source domain distributions and the unseen target distributions by expanding the training dataset. Shankar *et al.* [@shankar2018generalizing] augmented the source domain with domain-guided perturbations of the input instances. Volpi *et al.* [@ADA] proposed the ADA method, which augments the source domain with adversarial examples. Unlike all the above models, our approach is an early work for video DG, which extends the basic ADA method by particularly considering how to mitigate the temporal domain shift that is entangled with the spatial domain shift.
#### Image and Video Domain Adaptation.
Different from our DG problem, in domain adaptation (DA), the target distributions (without labels) are still available during training. While most previous work mainly focuses on solving the image DA problem [@yosinski2014transferable; @Ghifary2015domain; @tzeng2017adversarial; @long2018conditional], some recent models were proposed to solve the video DA problem [@Jamal2018DeepDA; @chen2019temporal]. They are related to our work due to the domain shift exists in both video DA and DG problems. However, these models close the representation distances across domains given the target distributions, and thus is not applicable to the DG settings. In contrast, the video DG problem is more challenging and has not been well explored.
Preliminary: Adv. Data Augmentation
===================================
In this section, we will show how the Adversarial Data Augmentation (ADA) method [@ADA] solves the image DG problem. The ADA method works in an iterative minimax training procedure: it generates adversarial examples that aim to fool the current discriminative network, and then appends these examples to the original data minibatch for image recognition. The ADA focuses on the following worst-case problem around the source distribution $Q$: $$\label{equal:worst-case}
\underset{\theta \in \Theta}{\operatorname{min}} \sup _{P : D(P, Q) \leq d} \mathbb{E}_{P}\big[\ell_\theta \left(X, Y\right)\big],$$ where $\theta \in \Theta$ is the set of weights of the entire model. $(X, Y) \in \mathcal{X} \times \mathcal{Y}$ indicates a source data point with its label. $\ell: \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}$ is the categorical cross-entropy loss.
The ADA method denotes by $D(P, Q)$ the distance metric around the source distribution $Q$ that characterizes the set of unknown populations we wish to generalize to. The perturbed new data distribution $P$ should be diverse enough but not deviate far from $Q$, $D\left(P, Q\right) \leq d$. $D(P, Q)$ is defined by the Wasserstein distance on the semantic space. Consider the transportation cost from $(z, y)$ to $\left(z^{\prime}, y^{\prime}\right)$: $$c\left((z, y),\left(z^{\prime}, y^{\prime}\right)\right) \triangleq \frac{1}{2}\left\|z-z^{\prime}\right\|_{2}^{2}+\infty \cdot \mathbf{1}\left\{y \neq y^{\prime}\right\},$$ which is denoted by $c\left(z,z^{\prime}\right)$ if $y=y^\prime$. By taking $g(x)$ as the output of the last hidden layer, the distance of two data points from the original space $\mathcal{X} \times \mathcal{Y}$ is defined as $$\label{max-phase}
c_{\theta}\left((x, y),\left(x^{\prime}, y^{\prime}\right)\right)=c\left(\left(g(x), y\right), \left(g(x^{\prime}), y^{\prime}\right)\right).$$
Thus, the worst-case formulation over data distributions can be defined as a surrogate loss [@ADA]: $$\label{max-mization-problem}
\sup _{x \in \mathcal{X}}\big\{\ell_\theta \left(x, y_{0}\right)-\gamma c_{\theta}\left((x, y_{0}),(x_{0}, y_{0})\right)\big\},$$ where $\gamma$ is a hyper-parameter of the transport cost penalty and $(x_0, y_0)$ is a data point from the source distribution $Q$.
The training procedure of ADA has two separate stages: a data augmentation stage and a minimization stage with respect to $\ell_\theta$. The data augmentation stage has two alternated training phases: a maximization phase with respect to [Eq. (\[max-mization-problem\])]{} and an online minimization phase of $\ell_\theta$ on the augmented dataset. In the $k$-th maximization phase, the new data point $(x_k, y_0)$ is generated by maximizing the perturbation over the source data $x_0$ with a factor of $\eta$: $$\Delta {x_k} = \eta \nabla_{x} \big \{\ell_\theta \left(x_k, y_0\right) - \gamma c_{\theta} \left((x_k, y_0), (x_0, y_0)\right) \big \}.
\label{max-mization}$$
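To make this maximization phase concrete, the following sketch implements the update in Eq. (\[max-mization\]) as plain gradient ascent on the input. It is our own illustration rather than released code: `model.features` and `model.classify` are assumed hooks exposing the last hidden layer $g(\cdot)$ and the classifier head, and the batch-summed squared distance stands in for the transport cost.

```python
import torch
import torch.nn.functional as F

def ada_maximize(model, x0, y0, eta=1.0, gamma=1.0, steps=15):
    """Sketch of the ADA maximization phase (hypothetical model interface)."""
    z0 = model.features(x0).detach()                 # g(x_0), kept fixed as the anchor
    x = x0.clone().detach().requires_grad_(True)
    for _ in range(steps):
        z = model.features(x)
        logits = model.classify(z)
        # surrogate loss: classification loss minus the transport-cost penalty
        surrogate = F.cross_entropy(logits, y0) - gamma * 0.5 * (z - z0).pow(2).sum()
        grad, = torch.autograd.grad(surrogate, x)
        x = (x + eta * grad).detach().requires_grad_(True)   # ascent step on the input
    return x.detach()
```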
The original image ADA method is not suitable for video DG in two ways. First, it defines the transportation cost $c(\cdot)$ at the activation of the last hidden layer according to [Eq. (\[max-phase\])]{}, which could be less generalizable on video data. In contrast, we define $c(\cdot)$ at different network levels to trade off between the transferability of the local temporal features and the discriminability of the global temporal features, thereby leading to more diverse new data points. Second, the training procedure of image ADA no longer fits the video DG problem. See [Section \[sec:video\_ada\]]{} for a more detailed discussion.
Adversarial Pyramid Network
===========================
{width="78.00000%"}
Our Adversarial Pyramid Network (APN) consists of three end-to-end trainable components (Figure \[fig:apn\]): a CNN-based frame encoder, an attention-based relational feature pyramid, and a new video ADA training procedure. In this section, we will first introduce the building blocks of these network components, and then describe the details of the relational feature pyramid. At last, we will present the extended ADA method for video data based on the relational feature pyramid, along with its minimax training procedure.
Building Blocks
---------------
#### Frame Encoder.
Given a video sequence, we divide it uniformly into $M$ segments, and then randomly sample one frame from each segment. We use a CNN to extract a feature $f_i$ from each sampled frame. Here, $f_i$ is the activation of the last hidden layer of the ResNet-50 model [@HeRes] (other network backbones are also applicable) followed by a $D$-dimensional fully-connected layer and a Dropout layer with a $0.8$ dropout rate. We then concatenate $\{{f_1}, \ldots ,{f_M}\}$ at the time dimension and obtain $\mathcal{F}_{M} \in \mathbb{R}^{M \times D}$.
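A rough PyTorch sketch of this frame encoder is given below. It is our own illustration, not the released implementation: the module names, the batched tensor layout, and the use of `torchvision`'s ImageNet-pretrained ResNet-50 are assumptions.

```python
import torch.nn as nn
import torchvision

class FrameEncoder(nn.Module):
    """Encode the M sampled frames of a clip into the feature matrix F_M (M x D)."""
    def __init__(self, feature_dim=256, dropout=0.8):
        super().__init__()
        backbone = torchvision.models.resnet50(pretrained=True)
        backbone.fc = nn.Identity()          # keep the 2048-d pooled activation
        self.backbone = backbone
        self.proj = nn.Linear(2048, feature_dim)
        self.drop = nn.Dropout(p=dropout)

    def forward(self, frames):
        # frames: (B, M, 3, H, W), one randomly sampled frame per segment
        b, m = frames.shape[:2]
        x = frames.flatten(0, 1)             # (B*M, 3, H, W)
        f = self.drop(self.proj(self.backbone(x)))
        return f.view(b, m, -1)              # F_M: (B, M, D)
```

With $M=5$ segments and $D=256$, this matches the UCF-HMDB configuration reported in the experiments below.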
#### Multi-Scale Temporal Relations.
The idea of capturing multi-scale temporal relations was initially inspired by TRN [@zhou2018temporal], which has been proven to be effective for video understanding. But different from TRN, we innovatively propose an attention-based feature pyramid to progressively calculate frame relations within and across frame features of different time scales, which will be shown to be essential for generating effective spatiotemporal adversarial examples.
#### Attention Block.
Our pyramid architecture consists of multiple Transformer attention blocks [@Transformer]. Each block has three multi-head attention layers and two fully-connected layers followed by the layer normalization [@LN] (more network details in the supplementary materials). Typically, an attention block takes three inputs: the queries, the keys, and the values. Here, as we share common inputs for keys and values, it is denoted by $\text{Attention-Block}(\textrm{Query}, \textrm{Key})$.
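Since the exact wiring inside each block is deferred to the supplementary materials, the sketch below is only one plausible reading of the description above: each of the three multi-head attention layers is wrapped with a residual connection and layer normalization, and the two fully-connected layers form a feed-forward sub-block. The number of heads and the hidden width are our own choices.

```python
import torch.nn as nn

class AttentionBlock(nn.Module):
    """Attention-Block(Query, Key): the keys also serve as the values (assumed wiring)."""
    def __init__(self, dim=256, num_heads=4, num_attn_layers=3):
        super().__init__()
        self.attns = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True)
             for _ in range(num_attn_layers)])
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(num_attn_layers)])
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.ffn_norm = nn.LayerNorm(dim)

    def forward(self, query, key):
        # query: (B, Lq, D), key: (B, Lk, D); output keeps the query length Lq
        out = query
        for attn, norm in zip(self.attns, self.norms):
            msg, _ = attn(out, key, key)     # keys double as values
            out = norm(out + msg)            # residual connection + LayerNorm
        return self.ffn_norm(out + self.ffn(out))
```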
Relational Feature Pyramid
--------------------------
By stacking the attention blocks, the APN learns the relational feature pyramid. It has three pyramid levels. At the first level, we learn the *within-relation* features, including the global temporal cues to summarize the overall patterns across the entire video, and the local temporal cues to bridge different video domains. At the second level, we learn *cross-relation* features by aligning the more domain-generalizable local relations to the more category-specific global relations, as we want to balance the feature transferability against the discriminability. At the last level, we fuse the cross-relation features and the global features to make the final predictions. We then use the level-II and level-III features to generate spatiotemporal adversarial examples.
#### Pyramid Level I: Within-Relation Attention.
The first pyramid level is applied to the output of the frame encoder to extract the within-relation features, which can be divided into the global one and the local ones. By taking as inputs $\mathcal{F}_M$, the concatenated frame features over $\{{f_1}, \ldots, {f_M}\}$, we have the global feature that reflects the overall temporal cues of the video: $\mathcal{R}_\textrm{Global}^\textrm{I} =\text{Attention-Block}(\mathcal{F}_M, \mathcal{F}_M).$ We then learn $M-2$ local relational features which are assumed to represent the common knowledge that can mitigate the temporal domain shift. Concretely, for each $m \in\{2, \ldots, M-1\}$, we first randomly sample $m$ consecutive items from the feature set $\{{f_1}, \ldots, {f_M}\}$, and concatenate them at the time dimension to obtain $\mathcal{F}_{m}$. Next, we use the attention block to obtain the local relational feature from an $m$-frame video snippet: $\mathcal{R}_{m}^\textrm{I} =\textrm{Attention-Block}(\mathcal{F}_{m}, \mathcal{F}_{m}).$ As a result, the first pyramid level provides a set of local relational features $\{{ \mathcal{R}_{2}^\textrm{I}}, \ldots ,{ \mathcal{R}_{M-1}^\textrm{I}}\}$ in addition to the global relational feature $\mathcal{R}_\textrm{Global}^\textrm{I} \in \mathbb{R}^{M \times D}$.
#### Pyramid Level II: Cross-Relation Attention.
Recalling the showcase of basketball (vs. football) in Figure \[video-dg-examples\], we may realize that local temporal cues can be more transferable but may also lead to false generalization. Thus, before generating the spatiotemporal adversarial examples to augment the training dataset, we need to constrain the learned local relations to more category-specific latent distributions. To this end, we use the second pyramid level to align the local features $\mathcal{R}_{m}^\textrm{I} \in \{{\mathcal{R}_{2}^\textrm{I}}, \ldots ,{\mathcal{R}_{M-1}^\textrm{I}}\}$ to the global feature $\mathcal{R}_\textrm{Global}^\textrm{I}$, and conduct the cross-relation attention such that $$\label{equ:loss3}
\mathcal{R}_{m}^\textrm{II} =\textrm{Attention-Block}(\mathcal{R}_\textrm{Global}^\textrm{I}, \mathcal{R}_{m}^\textrm{I}),$$ where we use $\mathcal{R}_\textrm{Global}^\textrm{I}$ as the queries and the $\mathcal{R}_{m}^\textrm{I}$ as the keys and the values. $\mathcal{R}_{m}^\textrm{I}$ has the dimensions of $\mathbb{R}^{m \times D}$ and $\mathcal{R}_{m}^\textrm{II}$ has the dimensions of $\mathbb{R}^{M \times D}$. In other words, the cross-relation features $\{{\mathcal{R}_{2}^\textrm{II}}, \ldots ,{\mathcal{R}_{M-1}^\textrm{II}}\}$ have the same size as $\mathcal{R}_\textrm{Global}^\textrm{I}$ in the spatiotemporal latent space. Then we ensemble all these cross-relation features of different time scales together into the overall level-II cross-relation feature: $$\label{equ:loss3}
\mathcal{R}^\textrm{II} = {\mathcal{R}_{2}^\textrm{II}} + \ldots + \mathcal{R}_{M-1}^\textrm{II},$$ which not only contains a variety of transferable knowledge but also keeps the category-specific knowledge. We thus use $\mathcal{R}^\textrm{II}$ as the first component to generate new data points.
#### Pyramid Level III: Cross-Relation Sum.
In the last level, we aggregate the cross-relation feature $\mathcal{R}^\textrm{II}$ and the global relational feature $\mathcal{R}_\textrm{Global}^\textrm{I}$ for the final classification and further data augmentation. We explored several aggregation approaches such as attention, concatenation, and so forth. Through experiments, we find that the element-wise sum operation is the most effective: $\mathcal{R}^\textrm{III} = \mathcal{R}^\textrm{II} + \mathcal{R}_\textrm{Global}^\textrm{I}.$
While the cross-relation attention block at the previous pyramid level focuses on finding more discriminative local-relation features by aligning them with the global-relation feature, the cross-relation sum operation here strengthens the influence of the global-relation feature. Thus, these two types of features are complementary to each other, leading the subsequent video ADA method to generate more diverse and representative adversarial examples.
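Putting the three levels together, a minimal sketch of the relational feature pyramid (reusing the `AttentionBlock` sketch above) could look as follows. Sharing one attention block across all local scales and drawing a single random snippet per scale are simplifications on our part.

```python
import random
import torch
import torch.nn as nn

class RelationalPyramid(nn.Module):
    """Pyramid levels I-III on top of the frame features F_M of shape (B, M, D)."""
    def __init__(self, dim=256, num_segments=5):
        super().__init__()
        self.M = num_segments
        self.within = AttentionBlock(dim)    # level I: within-relation attention
        self.cross = AttentionBlock(dim)     # level II: cross-relation attention

    def forward(self, feats):
        r_global = self.within(feats, feats)              # R^I_Global: (B, M, D)
        r2 = torch.zeros_like(r_global)
        for m in range(2, self.M):                        # local scales m = 2, ..., M-1
            start = random.randint(0, self.M - m)         # m consecutive frame features
            f_m = feats[:, start:start + m]
            r_local = self.within(f_m, f_m)               # R^I_m: (B, m, D)
            r2 = r2 + self.cross(r_global, r_local)       # R^II_m, aligned to the global queries
        r3 = r2 + r_global                                # level III: cross-relation sum
        return r2, r3                                     # R^II and R^III, both (B, M, D)
```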
**Algorithm \[alg:Framwork\]** (minimax training procedure of the video ADA method based on the APN)\
*Input:* source video dataset $\mathcal{S} = \left\{({X}_{i}, {Y}_{i})\right\}_{i=1, \ldots, n}$; *Output:* learned APN weights $\theta$.\
1: **Initialize** $\theta \leftarrow \theta_{0}$\
2: **repeat**\
3: $\quad$ Sample $\left\{({X}, {Y})\right\}$ uniformly from video dataset $\mathcal{S}$\
4: $\quad$ Obtain $\mathcal{R}^\textrm{II}_{0}, \mathcal{R}^\textrm{III}_{0}$ from $\text{APN}(X)$\
5: $\quad$ $\theta \leftarrow \theta-\alpha \nabla_{\theta} \ell\left(h(\mathcal{R}^\textrm{III}_{0}), {Y}\right)$\
6: $\quad$ ${X}^\textrm{II} \leftarrow {X}; \ {X}^\textrm{III} \leftarrow {X}$\
7: $\quad$ **for** $t=1,\ldots,T_{\text{max}}$ **do**\
8: $\quad\quad$ Obtain $\mathcal{R}^\textrm{II}$ from $\text{APN}(X^\textrm{II})$\
9: $\quad\quad$ ${X}^\textrm{II} \leftarrow {X}^\textrm{II} + \eta \nabla_{x}\big\{\ell\left(h(\mathcal{R}^\textrm{II}), {Y}\right) - \gamma c \left(\mathcal{R}^\textrm{II}, \mathcal{R}^\textrm{II}_{0} \right) \big\}$\
10: $\quad\quad$ Obtain $\mathcal{R}^\textrm{III}$ from $\text{APN}(X^\textrm{III})$\
11: $\quad\quad$ ${X}^\textrm{III} \leftarrow {X}^\textrm{III} + \eta \nabla_{x}\big \{\ell\left(h(\mathcal{R}^\textrm{III}), {Y}\right) - \gamma c \left(\mathcal{R}^\textrm{III}, \mathcal{R}^\textrm{III}_{0} \right) \big \}$\
12: $\quad$ **end for**\
13: $\quad$ Obtain $\mathcal{R}^\textrm{II}$ from $\text{APN}(X^\textrm{II})$, $\quad \theta \leftarrow \theta-\alpha \nabla_{\theta} \ell\left(h(\mathcal{R}^\textrm{II}), {Y}\right)$\
14: $\quad$ Obtain $\mathcal{R}^\textrm{III}$ from $\text{APN}(X^\textrm{III})$, $\quad \theta \leftarrow \theta-\alpha \nabla_{\theta} \ell \left(h(\mathcal{R}^\textrm{III}), {Y}\right)$\
15: **until** convergence
Video ADA with Relational Feature Pyramid {#sec:video_ada}
-----------------------------------------
Video ADA has its own challenge in spatiotemporal modeling compared with the original image ADA method. Besides using the within-relation and cross-relation feature pyramid to learn more generalizable video representations, we also use the multilayer cross-relation features to generate new perturbed data points that help bridge distributions across domains. First, we use the cross-relation feature of pyramid level-II to generate adversarial examples by maximizing the following loss with respect to the input data: $$\label{pyramid-II-loss}
\sup_{x \in \mathcal{X}} \big\{\ell\left(h(\mathcal{R}^\textrm{II}), y_{0}\right) -\gamma c \left(\mathcal{R}^\textrm{II}, \mathcal{R}^\textrm{II}_{0} \right)\big\},$$ where ${\mathcal{R}}_0^\textrm{II}$ is generated from the source data point $x_0$, and $h$ is the classifier, which can be a multilayer perceptron (MLP) or a couple of convolutional layers. Here, we simply use a fully-connected layer. As in the original ADA method, $\ell$ is the categorical cross-entropy loss, and $c$ is the transportation cost measured by the mean squared loss. We obtain the *level-II adversarial examples* by Eq. .
As we have mentioned, ${\mathcal{R}}^\textrm{II}$ reflects more local temporal relations, being computed from cross-relation attention. We suppose that the local temporal relations are more generalizable but may lead to overly diverse data augmentation results. To compensate, we use ${\mathcal{R}}^\textrm{III}$, which reflects more global temporal relations by a residual connection from ${\mathcal{R}}_\textrm{Global}^\textrm{I}$, to generate the *level-III adversarial examples*. Similarly, we maximize the following loss function: $$\label{pyramid-III-loss}
\sup_{x \in \mathcal{X}} \big \{\ell\left(h(\mathcal{R}^\textrm{III}), y_{0}\right) - \gamma c \left(\mathcal{R}^\textrm{III}, \mathcal{R}^\textrm{III}_{0} \right) \big\}.$$
Notably, unlike in the original image ADA method, these spatiotemporal adversarial examples have a time dimension of $M$, which is equal to the number of video segments.
#### Training Procedure.
We show the minimax training procedure of our video ADA method that is based on the APN model in Algorithm \[alg:Framwork\]. It has two major differences from the image ADA method. Generally, they have different frameworks. The image ADA method has two separate training stages: it first generates new data in $K$ minimax training phases to augment the dataset $K$ times, and subsequently learns the model by minimizing the classification loss over the augmented dataset (see [@ADA]). Such a two-stage training strategy is not suitable for video data. Deep networks for video recognition commonly use frame sampling approaches to obtain different combinations of frames over the whole video, thus enabling ensemble learning and effectively avoiding over-fitting. However, as the number of maximization phases grows, the original ADA method becomes more and more likely to add perturbations to examples that were themselves augmented earlier, whose anchor frames are fixed during the entire training period. To solve this problem, we combine the two separate training stages of image ADA, generating adversarial examples and optimizing the classification loss over the augmented data **on-the-fly** (**lines 13-14 in Alg. \[alg:Framwork\]**).
The second characteristic of our video ADA method is that we use the whole relational feature pyramid rather than only the features of the last hidden layer to generate new data points (**lines 7-12 in Alg. \[alg:Framwork\]**). Empirically, we find that the level-II adversarial examples reflect more local relations and are thus diverse enough to cover other population sources. On the other hand, the level-III adversarial examples are generated from features that are closer to the global relations due to the element-wise sum operation, and thus more akin to the category-specific distributions. That is why we use the pyramid of cross-relation features for video ADA.
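For concreteness, one on-the-fly training iteration in the spirit of Algorithm \[alg:Framwork\] can be sketched as follows. This is only an illustration under our own assumptions: `apn` is assumed to return the pair $(\mathcal{R}^\textrm{II}, \mathcal{R}^\textrm{III})$ for a batch of clips, and the classifier $h$ is realized as temporal mean pooling followed by a fully-connected layer (the pooling step is our choice, not a detail from the paper).

```python
import torch
import torch.nn.functional as F

def video_ada_step(apn, classifier, optimizer, clips, labels,
                   eta=1.0, gamma=0.1, t_max=5):
    """One on-the-fly iteration of the video ADA procedure (illustrative only)."""
    def logits(r):
        # h: temporal mean pooling + a linear layer (pooling is an assumption)
        return classifier(r.mean(dim=1))

    # minimization step on the source minibatch (line 5)
    r2_0, r3_0 = apn(clips)
    loss = F.cross_entropy(logits(r3_0), labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    r2_0, r3_0 = r2_0.detach(), r3_0.detach()

    # maximization phases: perturb the clips in input space (lines 7-12)
    x2 = clips.clone().detach().requires_grad_(True)
    x3 = clips.clone().detach().requires_grad_(True)
    for _ in range(t_max):
        for level, x, anchor in ((2, x2, r2_0), (3, x3, r3_0)):
            r2, r3 = apn(x)
            r = r2 if level == 2 else r3
            surrogate = F.cross_entropy(logits(r), labels) \
                        - gamma * 0.5 * (r - anchor).pow(2).sum()
            grad, = torch.autograd.grad(surrogate, x)
            x.data.add_(eta * grad)          # gradient ascent on the input clip

    # minimization steps on the two families of adversarial clips (lines 13-14)
    r2_adv, _ = apn(x2.detach())
    loss = F.cross_entropy(logits(r2_adv), labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

    _, r3_adv = apn(x3.detach())
    loss = F.cross_entropy(logits(r3_adv), labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```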
Experiments
===========
We evaluate the APN on four video DG benchmarks. The first two benchmarks represent entangled spatial and temporal domain shifts. In UCF-HMDB, the source and target domains are divided according to different datasets that are collected from diverse sources. In Something-Something, domains are divided according to different consequences of the actions, such as “Doing something” in the source and “Pretending to do something” in the target. The other two benchmarks, PKU-MMD and NTU, represent a remarkable spatial domain shift since the source and target domains are naturally divided according to different camera views. In this section, we will show the overall implementation details and then analyze the experiment results on each benchmark.
Implementation Details
----------------------
We divide the source domain videos into a training set ($70\%$) and a validation set ($30\%$), following the training and validation protocol from image DG [@Ghifary2015domain]. For the network hyper-parameters, the number of video segments is set to $M=5$ for UCF-HMDB and Something-Something, and $M=3$ for PKU-MMD and NTU. The dimension of the frame feature is set to $D=256$. As for the hyper-parameters of the data augmentation process, we set the adversarial perturbation factor to $\eta=1$, the number of maximization phases to ${T}_\text{max}=5$, and train $4$ models with $\gamma \in \{0.001, 0.01, 0.1, 1\}$ to make the final predictions at test time using the ensemble strategy of the original ADA method [@ADA]. We use the SGD optimizer with a $0.001$ base learning rate and decay it by $0.1$ every $30$ epochs. We use the random cropping and horizontal flipping for the input frames at training time, and use the center cropping at test time. Our models converge in less than $16$ hours on $8$ GPUs for $150$ epochs with a minibatch size of $40$ video clips.
Compared Models
---------------
We compare our models with $5$ well-known or state-of-the-art video action recognition models: TSN [@Wang16], TRN [@zhou2018temporal], TSM [@lin2019tsm], I3D [@carreira2017quo], and Non-local [@NonLocal2018]. For a fair comparison, TSN, TSM, TRN, and our models all use the ResNet-50 [@HeRes] backbone that is pretrained on ImageNet. I3D and Non-local use the 3D ResNet-50 backbone with $32$ input frames and are pretrained on the Kinetics dataset. We also include baseline models by combining the above networks with the original image ADA method [@ADA], whose details can be found in the supplementary materials.
UCF-HMDB
--------
In this benchmark, we select $12$ overlapping categories shared by UCF101 [@UCF101] and HMDB51 [@HMDB51], consisting of $3{,}809$ videos. We take one dataset as the training/validation set and the other as the test set. A detailed list of the selected categories can be found in the supplementary material.
#### Results.
Table \[UCF-HMDB\] compares the video DG results of all compared models on UCF-HMDB. Our final APN (the one trained with space-time adversarial examples) achieves the best performance $65.74\%$, exceeding the second place (Non-local without ADA) by $\textbf{3.92\%}$.
Model U $\rightarrow$ H H $\rightarrow$ U Model U $\rightarrow$ H H $\rightarrow$ U Model U $\rightarrow$ H H $\rightarrow$ U
------------- ------------------- ------------------- ------------------ ------------------- ------------------- ------------- ------------------- -------------------
TRN 52.83 70.13 TRN-ATTN 53.33 70.93 APN 54.21 71.43
TRN + ADA 53.28 70.36 TRN-ATTN + ADA 53.72 71.77 APN + ADA 55.61 72.47
TRN + ADA\* 52.56 69.68 TRN-ATTN + ADA\* 52.17 70.28 APN + ADA\* **57.90** **73.57**
#### Are global temporal features generalizable?
Although the TSN and TSM models have been proven competitive for the standard video classification, they underperform in video DG (see Table \[UCF-HMDB\]). Further, we implement a baseline model over APN by using the global-relation feature $\mathcal{R}_\textrm{Global}^\textrm{I}$ instead of the cross-relation features $\mathcal{R}^\textrm{II}$ or $\mathcal{R}^\textrm{III}$ for generating adversarial examples, which obtains a $\textbf{53.37\%}$ accuracy from UCF to HMDB, and $\textbf{71.83\%}$ in turn. We notice that it only has a slight improvement over the vanilla APN without ADA, indicating that the global temporal features are not generalizable enough and cannot work well with adversarial video data augmentation.
#### Are cross-relation features generalizable?
Even without ADA training, the APN model alone still outperforms all compared models but Non-local by $\textbf{0.16\%}$. Non-local works better than I3D due to its self-attention block, which adaptively learns local and global temporal relations across a video clip of $32$ frames. This result partly verifies our finding that the cross-relation features are more generalizable. However, Non-local cannot achieve further improvement from ADA training. Only APN and TRN enable effective video ADA, improving by $\textbf{2.92\%}$ and $0.34\%$, respectively. We may conclude that while video DG benefits from the cross-relation features from the representation learning perspective, it benefits even more from the data perspective when the multilayer cross-relation features are used for video ADA.
#### Comparing with TRN.
TRN is our most important baseline model since it also captures local temporal relations at multiple time scales, which tend to be generalizable. To investigate the necessity of our *cross-relation* operations, we train a new baseline model named TRN-ATTN by applying the *within-relation* attention block to the $g_\theta$ and $h_\phi$ features of TRN (see [@zhou2018temporal] for their definitions). As shown in Table \[UCF-HMDB-ablation\], our APN models consistently and remarkably outperform the TRN and TRN-ATTN models, validating the design of the relational feature pyramid in both representation learning and the generation of adversarial examples.
#### Why using relational feature pyramid for video ADA?
We also observe from Table \[UCF-HMDB-ablation\] that both the TRN and TRN-ATTN models degenerate when multilayer features are used for ADA. On the contrary, the multilayer cross-relation features of APN enable more effective ADA training and thus further improve video DG. We argue that it is because the level-II feature $\mathcal{R}^\textrm{II}$ represents more local relations through cross-relation attention, thus increasing the diversity of the new data points; while the level-III feature $\mathcal{R}^\textrm{III}$ reflects global relations more directly through the cross-relation sum operation, thus resulting in more representative spatiotemporal adversarial examples. As a comparison, by using $\mathcal{R}^\textrm{II}$ or $\mathcal{R}^\textrm{III}$ separately for generating the spatiotemporal adversarial examples, we obtain $\textbf{56.25\%}/\textbf{55.61\%}$ from UCF to HMDB, and $\textbf{72.53\%}/\textbf{72.47\%}$ in turn. From these results, we can see that $\mathcal{R}^\textrm{II}$ and $\mathcal{R}^\textrm{III}$ are complementary to each other and both necessary for video ADA.
Something-Something
-------------------
We construct this video DG benchmark by selecting $20$ basic categories from the Something-Something dataset [@something-something]. Under each basic category such as “Tearing”, there are two sub-categories with different consequences of the actions, such as “Tearing something” and “Pretending to be tearing something that is not tearable”. We assign the two sub-categories to different domains. The domain shift in this context entangles the appearance and motion variations. The full category list can be found in the supplementary material. As a result, the source domain has $9{,}530$ videos and the target domain has around $4{,}000$ videos.
Model Source $\rightarrow$ Target Target $\rightarrow$ Source
----------- ------------------------------ ------------------------------
TSN 35.47 / 35.39 $\downarrow$ 22.96 / 22.56 $\downarrow$
TRN 37.15 / $\uparrow$ 23.82 / $\uparrow$
TSM 36.74 / 36.38 $\downarrow$ 23.22 / 23.10 $\downarrow$
I3D 31.20 / 30.54 $\downarrow$ 22.98 / 21.73 $\downarrow$
Non-local 33.60 / 33.21 $\downarrow$ 23.68 / 23.16 $\downarrow$
APN 38.24 / **40.60** $\uparrow$ 24.87 / **27.37** $\uparrow$
: Video DG results ($\%$) on Something-Something. In each column, the two values are obtained with/without ADA. []{data-label="something-something"}
#### Results.
Due to the extremely complicated variations of object appearance and motion cues across domains, none of the compared models shows a great generalization ability on this challenging benchmark (see Table \[something-something\]). Still, the APN outperforms the other models significantly, exceeding the second place (TRN with ADA) by $\textbf{3.32\%}$ on average. Note that both I3D and Non-local perform worse on this benchmark than on UCF-HMDB. The reason is that the Kinetics-pretrained models are less effective for this dataset than for sports videos. In Figure \[fig:exapmles\_sth\], we show a showcase of the classification results on the target set.
![ A showcase of the video DG results on Something-Something. The first row shows training data from the source domain. The second row shows the test data from the target domains. The green bars indicate making correct predictions and the orange bars incorrect ones. The length of the bar denotes the confidence. []{data-label="fig:exapmles_sth"}](examples_something.pdf){width="1\columnwidth"}
PKU-MMD and NTU
---------------
The PKU-MMD dataset [@liu2017pku] and NTU dataset [@Shahroudy_2016_CVPR] are both large-scale benchmarks for multi-view human action understanding, where we can easily exploit the spatial domain shift across different camera views to build the video DG task. The PKU-MMD contains about $7{,}000$ trimmed short video clips of $41$ action categories such as “Drinking water” and “Sitting down”, which are recorded from three camera views (Left, Center, Right). The NTU dataset contains around $57\textrm{K}$ videos of $60$ action categories such as “Kicking something” and “Standing up”. It has $40$ subjects in $80$ viewpoints, which can be grouped into $3$ domains according to the camera angle. From Table \[PKU-MMD\] and Table \[NTU\], the APN consistently achieves the best results, on average exceeding the second place (TRN with ADA) by $\textbf{4.52\%}$ and $\textbf{3.69\%}$ respectively on these benchmarks. Figure \[fig:exapmle\_ntu\] gives a showcase of the classification result on the NTU target set.
![ Two showcases of the video DG results on NTU. The first row shows training data from the source domain. The second row shows the test data from the target domains. The green bars indicate making correct predictions and the orange bars incorrect ones. The length of the bar denotes the classification confidence. []{data-label="fig:exapmle_ntu"}](examples_ntu.pdf){width="1\columnwidth"}
Conclusion
==========
In this paper, we introduced a new problem of video DG, where models are trained on one source domain and evaluated on different unseen domains. We found that most action recognition networks underperform in such settings with the entangled spatial and temporal domain shifts. To solve this problem, we proposed the Adversarial Pyramid Network, which progressively learns generalizable and discriminative video representations at different pyramid levels. We then used the feature pyramid to generate adversarial examples in space-time, and thus derived a new adversarial video data augmentation method. We constructed four video DG benchmarks with different kinds of spatial and temporal domain shifts. Our approach was shown to consistently achieve the best results over these benchmarks.
[^1]: Equal contribution
---
abstract: 'In view of its fundamental importance and many promising potential applications, the non-Abelian statistics of topologically protected states has attracted much attention recently. However, due to operational difficulties in solid-state materials, non-Abelian statistics has not been experimentally realized yet. The superconducting quantum circuit system is scalable and controllable, and thus is a promising platform for quantum simulation. Here, we propose a scheme to demonstrate the non-Abelian statistics of topologically protected zero-energy edge modes on a chain of superconducting circuits. Specifically, we can realize a topological phase transition by varying the hopping strength and magnetic field in the chain, and the realized non-Abelian operation can be used in topological quantum computation. Considering the advantages of superconducting quantum circuits, our protocol may shed light on quantum computation via topologically protected states.'
author:
- 'Jun-Yi Cao'
- Jia Liu
- 'L. B. Shao'
- 'Zheng-Yuan Xue'
title: 'Detecting non-Abelian statistics of topological states on a chain of superconducting circuits'
---
Introduction
============
Following Feynman’s suggestion in the 1980s about the possibility of a quantum computer, Shor proposed a quantum algorithm that could efficiently solve the prime-factorization problem [@QC; @petershor]. Since then, research on quantum computation has attracted great attention. Recently, topological quantum computation has become one of the most promising constructions for building a quantum computer. The corresponding protocols are built neither from bosons nor from fermions, but from so-called non-Abelian anyons, which obey non-Abelian statistics. Therefore, realizing particles that obey non-Abelian statistics in different physical systems has taken center stage for a long time. Physical systems exhibiting the fractional quantum Hall effect have been developed extensively as candidates for topological quantum computation, and for the same reason Majorana fermions have also received great attention in related research [@natphys; @Kitaev_anyon; @FQH_FuKane; @FQH_Nagosa; @FQH_DasSarma; @FQH_Alicea; @FQH_Fujimoto; @FQH_SCZhang; @wenxg_prl; @kitaev_prl; @read_prb; @adystern_nature; @DasSarma_RMP; @Kitaev_majorana; @DarSarma_majorana; @liu_fop; @science_majorana]. However, up to now, experimental non-Abelian operations remain only partway toward real quantum computation, and thus the relevant research still has great significance.
Recently, the superconducting quantum circuit system [@cqed1; @cqed2; @cqed3; @Nori-rew-Simu2-JC], a scalable and controllable platform suitable for quantum computation and simulation [@s1; @s2; @s3; @s4; @s5; @s6; @s7; @s8; @s9; @TS1D_prl], has attracted great attention and has been applied in many studies. For example, the JC model [@JCModel], describing the interaction of a single two-level atom with a quantized single-mode photon, can be implemented by a superconducting transmission line resonator (TLR) coupled to a transmon. Meanwhile, JC units can be coupled in series by superconducting quantum interference devices (SQUIDs), forming a chain [@Gu] or a 2D lattice [@xue; @liu_QSH], providing a promising platform for quantum simulation and computation. Compared with cold-atom and optical-lattice simulations [@SOC1; @SOC2; @SOC3], superconducting circuits possess good individual controllability and easy scalability.
Here, we propose a scheme to demonstrate non-Abelian statistics of topologically protected zero-energy edge modes on a chain of superconducting circuits. Each site of the chain consists of a JC coupled system, the single-excitation manifold of which mimics the spin-1/2 states. Neighbouring sites are connected by SQUIDs. In this setup, the on-site potential, the tunable spin-state transitions, and the synthetic spin-orbit coupling can all be induced and adjusted independently by the driving detuning, amplitudes, and phases of the ac magnetic field threading the connecting SQUIDs. With appropriate parameters, topological states and the corresponding non-Abelian statistics can be explored and detected.
The model
=========
The proposed model
------------------
We propose to implement non-Abelian quantum operations on a one-dimensional (1D) lattice with the Hamiltonian $$\begin{aligned}
H&=\sum_l t_0 (c_{l,\uparrow}^\dag c_{l+1,\uparrow}-c_{l,\downarrow}^\dag c_{l+1,\downarrow})+\text{h.c.}\\
&+\sum_l h_z(c_{l,\uparrow}^\dag c_{l,\uparrow}-c_{l,\downarrow}^\dag c_{l,\downarrow})\\
&-\sum_l i\Delta_0 e^{-i\varphi}(c_{l,\uparrow}^\dag c_{l+1,\downarrow}-c_{l+1,\uparrow}^\dag c_{l,\downarrow})+\text{h.c.},
\label{eq4}
\end{aligned}$$ where $c^{\dag}_{l,\uparrow}=|\bar{\uparrow}\rangle_l\langle G|$ and $c^{\dag}_{l,\downarrow}=|\bar{\downarrow}\rangle_l \langle G|$ are the creation operators of the polariton with “spin” up and down in the $l$th unit cell. First, setting $\varphi = 0$, the Hamiltonian in Eq. (\[eq4\]) can be transformed to momentum space as $H=\sum_{{\bf{k}}}\Psi^{\dag}_{\bf{k}}\hat{h}({\bf{k}})\Psi_{\bf{k}}$, where $\Psi_{\bf{k}}=\left( c_{{\bf{k}},\uparrow}, c_{{\bf{k}},\downarrow} \right)^T$, $$\hat{h}({\bf{k}})=\left(h_z+2t_0\cos({\bf{k}})\right)\sigma_z+ 2\Delta_0\sin({\bf{k}})\sigma_x,
\label{eq5}$$ and we have set the lattice spacing $a = 1$; $\sigma_x$ and $\sigma_z$ are Pauli matrices. The energy bands of this system are given as $$E({\bf{k}}) = \pm\sqrt{ \left(h_z+2t_0\cos({\bf{k}})\right)^2+ \left(2\Delta_0\sin({\bf{k}})\right)^2},
\label{energy_band}$$ which indicates that the energy gap closes only when $h_z = \pm2t_0$. It is well known that a topological phase transition occurs when the gap closes and reopens. In order to identify the topological zero mode states $\psi_{0}$, we start from a half-infinite chain. There is a chiral symmetry $\sigma_y\hat{h}({\bf{k}})\sigma_y = -\hat{h}({\bf{k}})$. If there is a $\psi_{0}$ state inside the gap, $\sigma_y\psi_{0}$ is identical to $\psi_{0}$ up to a phase factor, since under the chiral symmetry $E({\bf{k}}) \rightarrow -E({\bf{k}})$. As a result, $\psi_{0}$ must be an eigenstate of $\sigma_y$, namely $\phi_{\pm}=\frac{1}{\sqrt{2}} (1, \pm i)^T$, and $$h_z+2t_0\cos({\bf{k}})=\mp 2i\Delta_0\sin({\bf{k}}),
\label{eq6}$$ these two equations, obtained by substituting $\phi_{\pm}$ into Eq. (\[eq5\]), are necessary for the above conditions to hold. Notice that $\sigma_x\phi_{\pm} = \pm i\phi_{\mp}$, $\sigma_y\phi_{\pm} = \pm \phi_{\pm}$, and $\sigma_z\phi_{\pm} = \phi_{\mp}$. According to Eq. (\[eq5\]), $\Delta_0 \rightleftharpoons -\Delta_0$ is equivalent to $\hat{h}({\bf{k}}) \rightarrow \sigma_z \hat{h}({\bf{k}}) \sigma_z$; and, for $h_z = 0$, $t_0 \rightleftharpoons -t_0$ is equivalent to $\hat{h}({\bf{k}}) \rightarrow \sigma_x \hat{h}({\bf{k}}) \sigma_x$.
Since there is a gap in the bulk, these equations only have complex solutions which provide localized states at the edges. In order to satisfy the boundary conditions $\psi_{0}|_{x=0} = 0$ and $\psi_{0}|_{x=\infty} = 0$, the solutions of Eq. (\[eq6\]) for the same eigenstate $\phi_{\pm}$ must satisfy $\mathbf{Im}({\bf{k}}) > 0$, as there is no superposition of orthogonal states to satisfy this vanishing boundary condition. Careful analysis shows that there is an edge-state $\phi_+$ localized at $x = 0$ when $|h_z| < 2t_0$, $t_0 > 0$ and $\Delta_0 > 0$.
![Numerical calculations for a 1D chain consisting of 16 unit cells with $t_0 = 1$, $\Delta_0/t_0 = 0.99$ and $h_z/t_0 = 0.3$. (a) The eigen-energies of the finite system. An energy gap is opened in the bulk and there are two zero modes in the middle of the gap. It is obvious that these two zero modes are localized at different edges. (b) The phase diagram of the chain. The yellow and dark blue regions are of topological invariant $\nu = 1$ and $\nu = 0$, respectively. Four red dots A, B, C and D represent the parameters for demonstrating the non-Abelian quantum transformations. The red arrows indicate that $\hat{O}_1$ is executed first and then $\hat{O}_2$ follows, while the blue arrows indicate that $\hat{O}_2$ is executed first and then $\hat{O}_1$. (c) and (d) The numerically calculated left and right zero-energy edge states of the chain; the red and dashed blue lines plot the probability amplitudes of the numerically obtained wave functions.[]{data-label="figure1"}](FIGURE1.pdf){width="8cm"}
Non-Abelian statistics
----------------------
In order to set up a scheme that can be realized in experiments, we consider a chain with a finite number of cells. Fortunately, topologically protected zero modes are stable until the energy gap closes and can thus survive local perturbations, so that a robust quantum computation can be realized using these modes. For a finite system, the same argument can be applied to the edge states. After numerical calculations, we find that a chain of 16 unit cells is sufficient to realize a non-Abelian operation with the corresponding parameters. In all the following numerical calculations, we set $\varphi=0$ and take $t_0$ as the energy unit. First, we fix the energy levels of the Hamiltonian in Eq. (\[eq4\]) with the parameters $t_0/2\pi = 4$ MHz, $\Delta_0/t_0 = 0.99$, and $h_z/t_0 = 0.3$.
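The finite-chain spectrum and edge modes discussed next (Fig. \[figure1\]) can be reproduced by directly diagonalizing the single-excitation matrix of Eq. (\[eq4\]). The NumPy sketch below is only illustrative; the basis ordering and function names are our own choices and not part of the original calculation.

```python
import numpy as np

def chain_hamiltonian(n_cells=16, t0=1.0, delta0=0.99, hz=0.3, phi=0.0):
    """Single-excitation matrix of Eq. (eq4) on an open chain.
    Basis ordering: (cell 1 up, cell 1 down, cell 2 up, cell 2 down, ...)."""
    dim = 2 * n_cells
    h = np.zeros((dim, dim), dtype=complex)
    for l in range(n_cells):
        up, dn = 2 * l, 2 * l + 1
        h[up, up] = hz                                     # on-site terms
        h[dn, dn] = -hz
        if l + 1 < n_cells:
            up2, dn2 = up + 2, dn + 2
            h[up, up2] = t0                                # spin-conserving hoppings
            h[dn, dn2] = -t0
            h[up, dn2] = -1j * delta0 * np.exp(-1j * phi)  # spin-flip hoppings
            h[dn, up2] = -1j * delta0 * np.exp(1j * phi)
    return h + np.triu(h, 1).conj().T                      # add h.c. of the hopping part

energies, states = np.linalg.eigh(chain_hamiltonian())
idx = np.argsort(np.abs(energies))[:2]                     # the two mid-gap modes of Fig. 1(a)
print(np.sort(energies))                                   # gapped bulk bands plus two ~zero modes
zero_modes = states[:, idx]                                # (anti)symmetric combinations of the
                                                           # left/right edge states of Eq. (7)
```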
As shown in Fig. \[figure1\](a), we can find zero energy modes that can be used to demonstrate non-Abelian statistics. We choose four such parameter points to realize the non-Abelian operations in our scheme; these are the four red dots in Fig. \[figure1\](b). In addition, we also calculate the topological invariants [@Chern; @numbers; @invariants; @Xiao], $$\nu =\pm \frac{1}{2}\left[\sgn(\pm 2t_0+h_z)+\sgn(\pm 2t_0-h_z) \right],$$ and according to $\nu$ we divide the $t_0-h_z$ plane into topologically nontrivial and trivial phases, as shown in Fig. \[figure1\](b). We define two quantum operations $\hat{O}_1$ and $\hat{O}_2$, where $\hat{O}_1$ is implemented by first changing the signs of $t_0$ and $\Delta_0$, and then varying $h_z$ from $0.3t_0$ to 0; and $\hat{O}_2$ is obtained by changing the signs of $t_0$ and $\Delta_0$ while keeping $h_z$ constant. We choose the initial state as $|\Psi_{i}(x)\rangle = |\Psi_{L,0}(x)\rangle$, and calculate the edge states at the four red-dot parameter points related to the non-Abelian quantum operations in Fig. \[figure1\](b), which are listed in the following.
Dot **A**, $t_0/2\pi = 4$ MHz, $\Delta_0/t_0 = 0.99$, $h_z/t_0 = 0.3$, the two zero-mode edge states of the system are $$\label{eq7}
\begin{aligned}
|\Psi_{L,0}(x)\rangle&=N_0\frac{\left[ \left(\frac{-b_0+\sqrt{c_0}}{2}\right) ^x - \left(\frac{-b_0-\sqrt{c_0}}{2}\right) ^x \right]}{\sqrt{c_0}}\phi_+, \\
|\Psi_{R,0}(x)\rangle&=N_0\frac{\left[ \left(\frac{-b_0+\sqrt{c_0}}{2}\right) ^{N-x+1}- \left(\frac{-b_0-\sqrt{c_0}}{2}\right) ^{N-x+1} \right]}{\sqrt{c_0}}\phi_-,
\end{aligned}$$ as shown in Fig. \[figure1\](c) and \[figure1\](d), where $a_0 = (t_0-\Delta_0)/(t_0+\Delta_0)$, $b_0 = h_z/(t_0+\Delta_0)$, $c_0 = b_0^2-4a_0$, $N$ is the number of cells, and $N_0$ is a normalization constant that can only be determined numerically.
Dot **B**, $t_0 \rightarrow -t_0$, $\Delta_0 \rightarrow -\Delta_0$, $h_z/t_0 = 0.3$, the two edge states can be obtained as $$\begin{aligned}
|\Psi_{L,1}(x)\rangle&=N_1\frac{\left[ \left(\frac{b_1+\sqrt{c_1}}{2}\right) ^x - \left(\frac{b_1-\sqrt{c_1}}{2}\right) ^x \right]}{\sqrt{c_1}}\phi_+, \\
|\Psi_{R,1}(x)\rangle&=N_1\frac{\left[ \left(\frac{b_1+\sqrt{c_1}}{2}\right) ^{N-x+1}- \left(\frac{b_1-\sqrt{c_1}}{2}\right) ^{N-x+1} \right]}{\sqrt{c_1}}\phi_-.
\end{aligned}
\label{eq9}$$ where $a_1 = 1/a_0$, $b_1 = h_z/(t_0-\Delta_0)$, $c_1 = b_1^2-4a_1$, and $N_1$ is a normalization constant that can only be determined numerically.
Dot **C**, when $t_0 \rightarrow -t_0$, $\Delta_0 \rightarrow -\Delta_0$, $h_z/t_0 = 0.3 \rightarrow h_z = 0$, the two edge states can be obtained as $$\label{eq8}
\begin{aligned}
|\Psi_{L,2}(x)\rangle&=N_2 \sin\left(\frac{\pi}{2} x \right) e^{-\frac{a_2}{2} x } \phi_+, \\
|\Psi_{R,2}(x)\rangle&=N_2 \sin\left(\frac{\pi}{2} (N-x+1) \right) e^{-\frac{a_2}{2} (N-x+1) } \phi_-.
\end{aligned}$$ where $N_2 = \sqrt{2\sinh{a_2}}$, $a_2 = \ln{1/a_0}$.
Dot **D**, when $-t_0 \rightarrow t_0$, $-\Delta_0 \rightarrow \Delta_0$, $h_z = 0$, the two edge states can be obtained as $$\begin{aligned}
|\Psi_{L,3}(x)\rangle&=|\Psi_{L,2}(x)\rangle, \\
|\Psi_{R,3}(x)\rangle&=|\Psi_{R,2}(x)\rangle.
\end{aligned}
\label{eqP3}$$
We now proceed to detail our non-Abelian statistics demonstration for the zero modes, i.e., changing the order of the two operations $\hat{O}_1$ and $\hat{O}_2$ applied to an initial state $|\Psi_{i}(x)\rangle$ leads to different final states. We first consider the case in which the $\hat{O}_1$ operation is implemented first, which is equivalent to $\phi_{+} \rightarrow \phi_{-}$, so that $\hat{O}_1|\Psi_{i}(x)\rangle = |\Psi_{R,2}(x)\rangle$. When $\hat{O}_2$ is then applied to $|\Psi_{R,2}(x)\rangle$, the Hamiltonian experiences two unitary transformations, $\sigma_x$ and $\sigma_z$, which is equivalent to $\phi_{+} \rightarrow \phi_{-} \rightarrow \phi_{+}$, so we get $\hat{O}_2|\Psi_{R,2}(x)\rangle = |\Psi_{R,3}(x)\rangle = |\Psi_{f}(x)\rangle$. As a result, the initial state $|\Psi_{i}(x)\rangle$ passes through $\hat{O}_1$ and then through $\hat{O}_2$, eventually transforming into the final state $|\Psi_{f}(x)\rangle$, which corresponds to the direction of the two red arrows in Fig. \[figure1\](b). Alternatively, when the quantum operation $\hat{O}_2$, i.e., $\phi_{+} \rightarrow \phi_{-}$, is applied to the initial state $|\Psi_{i}(x)\rangle$ first, we get $\hat{O}_2|\Psi_{i}(x)\rangle = |\Psi_{R,1}(x)\rangle$. Then $\hat{O}_1$ is applied to $|\Psi_{R,1}(x)\rangle$, i.e., $\phi_{-} \rightarrow \phi_{+}$, and we get $\hat{O}_1|\Psi_{R,1}(x)\rangle = |\Psi_{L,3}(x)\rangle = |\Psi^{'}_{f}(x)\rangle$. As a result, the initial state $|\Psi_{i}(x)\rangle$ passes through $\hat{O}_2$ and then through $\hat{O}_1$, eventually transforming into $|\Psi^{'}_{f}(x)\rangle$, which corresponds to the direction of the two blue arrows in Fig. \[figure1\](b). The above operations can be written as $$\begin{aligned}
&\hat{O}_2\hat{O}_1|\Psi_{i}(x)\rangle=|\Psi_{f}(x)\rangle,\\
&\hat{O}_1\hat{O}_2|\Psi_{i}(x)\rangle=|\Psi^{'}_{f}(x)\rangle.
\label{operation}
\end{aligned}$$ It can be seen that $|\Psi_{f}(x)\rangle$ and $ |\Psi^{'}_{f}(x)\rangle$ have different position distributions, so the two final states can be distinguished experimentally by measuring the position; we will show this in Fig. \[figure3\](a) and \[figure3\](b) in the following sections.
![The proposed superconducting circuit setup to mimic a spin-1/2 lattice model. (a) The “spin-1/2” polariton lattice with two types of unit cells, R-type (colored red) and B-type (colored blue), arranged alternately; the two types of unit cells have different qubit and photon eigenfrequencies and JC coupling strengths. Each unit cell has two pseudo-spin-1/2 states simulated by the two single-excitation eigenstates of the JC model. Neighboring unit cells are coupled through a SQUID connected in series, to induce the tunable inter-cell photon hopping. (b) The detuned couplings of inter-cell spin states. In order to achieve the simulated Hamiltonian, two sets of driving strengths are assigned: the coupling strength between $|\bar{\uparrow}\rangle_{l}\leftrightarrow |\bar{\uparrow}\rangle_{l+1}$ and $|\bar{\downarrow}\rangle_{l}\leftrightarrow |\bar{\downarrow}\rangle_{l+1}$ is $t_0$ (red), and the coupling strength between $|\bar{\uparrow}\rangle_{l}\leftrightarrow |\bar{\downarrow}\rangle_{l+1}$ and $|\bar{\downarrow}\rangle_{l}\leftrightarrow |\bar{\uparrow}\rangle_{l+1}$ is $\Delta_0$ (black). (c) The polariton lattice in a rotating frame, where all unit cells can be considered identical, so that the proposed circuit simulates a 1D spin-1/2 tight-binding lattice model.[]{data-label="figure2"}](FIGURE2.pdf){width="8cm"}
Implementation
==============
Building on the previous discussion, we now show how to realize our proposal in a superconducting circuit system. The method of realizing the 1D JC lattice in the superconducting circuit is shown in Fig. \[figure2\](a). The red and blue unit cells are connected alternately in series on one chain. Each unit cell contains a JC coupling, where a TLR and a transmon are employed with resonant interaction [@SC3; @Nori-rew-Simu2-JC], and adjacent unit cells are connected by a grounded SQUID. As a result, setting $\hbar=1$ hereafter, the Hamiltonian of this JC lattice is $$H_{\text{JC}}=\sum_{l=1}^N h_l+\sum_{l=1}^{N-1} J_l(t)(a_l^\dag a_{l+1}+\text{h.c.}),
\label{eq1}$$ where $N$ is the number of unit cells; $h_l=\omega_l \sigma_l^\dag \sigma_l^{-}+\omega_l a_l^\dag a_l+g_l(\sigma_l^\dag a_l+\text{h.c.})$ is the JC-type interaction Hamiltonian in the $l$th unit cell, with $\sigma_l^\dag=|\text{e}\rangle_l\langle \text{g}|$ and $\sigma_l^{-}=|\text{g} \rangle_l\langle \text{e}|$ being the raising and lowering operators of the $l$th transmon qubit, and $a_l$ being the annihilation operator of the photon in the $l$th TLR. The condition $g_l \ll \omega_l$ has to be met to justify the JC coupling. Its three lowest-energy dressed states are $|\uparrow\rangle_l=\frac{1}{\sqrt{2}}(|\text{0e}\rangle_l+|\text{1g}\rangle_l), |\downarrow\rangle_l =\frac{1}{\sqrt{2}}(|\text{0e}\rangle_l-|\text{1g}\rangle_l)$, and $|\text{0g}\rangle_l$, with the corresponding energies $E_{l,\uparrow}=\omega_l+g_l, E_{l\downarrow}=\omega_l-g_l,$ and $0$. Finally, $J_l(t)$ is the inter-cell hopping strength between the $l$th and $(l+1)$th unit cells. Here, we exploit the two single-excitation eigenstates $|\uparrow \rangle_l$ and $|\downarrow \rangle_l$ to simulate the effective electronic spin-up and spin-down states; they are regarded as a whole and termed a “polariton”.
We now show how the coupling strength $J_l(t)$ is tuned by the magnetic flux threading the SQUIDs between adjacent TLRs. Because the two single-excitation dressed states act as two pseudo-spin states in each cell, there are four hoppings between two adjacent cells. To control the coupling strength and phase of each hopping, we introduce four driving field frequencies in each $J_l(t)$. For this purpose, we adopt two sets of unit cells, R-type and B-type, which are alternately linked on one chain, see Fig. \[figure2\](a). Assuming the chain starts with an R-type cell, when $l$ is odd (even), $\omega_l=\omega_{\text{R}}(\omega_{_{\text{B}}})$ and $g_l=g_{\text{R}}(g_{\text{B}})$. Then, we set $\omega_{\text{R}}/2\pi = 6$ GHz, $\omega_{\text{B}}/2\pi = 5.84$ GHz, $g_{\text{R}}/2\pi = 200$ MHz, and $g_{\text{B}}/2\pi =120 $ MHz. In this way, the energy intervals of the four hoppings are $\{|E_{l,\alpha}-E_{l+1,\alpha'}|/2\pi\}_{\alpha, \alpha'=\uparrow /\downarrow}=\{ 80, 160, 240, 480\}$ MHz. The frequency distance between any two of them is no less than 20 times the effective hopping strengths ($t_0/2\pi = 4$ MHz, $\Delta_0/t_0 = 0.99$), thus they can be selectively addressed in frequency. The driving $J_l(t)$ then has to contain four tones, written as $$J_l(t)=\sum_{\alpha,\alpha'}4t_{l,\alpha,\alpha'} \cos\left(\omega_{l,\alpha,\alpha'}^dt+\varphi_{l,\alpha,\alpha'} \right),$$ where $l=1, 2, \cdots, N$ and $\alpha,\alpha'\in \{\uparrow,\downarrow\}$. We will show that the time-dependent coupling strength $J_l(t)$ can induce a designable spin transition under a certain rotating-wave approximation. First, we calculate the form of the Hamiltonian in Eq. (\[eq1\]) in the single-excitation subspace spanned by the direct product states $\{|0\text{g}, \cdots , 0\text{g}, \underset{l\text{th}}{\alpha}, 0\text{g}, \cdots, 0\text{g} \rangle \}$. Hereafter, we use $|\bar{\alpha} \rangle_l$ to denote $|0\text{g}, \cdots, 0\text{g}, \underset{l\text{th}}{\alpha}, 0\text{g}, \cdots, 0\text{g}\rangle$, and $|G\rangle$ to denote $|0\text{g}, \cdots, 0\text{g}\rangle$. Then, we define a rotating frame by $U=\exp\{-i\sum_l[h_l-h_z(|\bar{\uparrow}\rangle_l\langle\bar{\uparrow}| -|\bar{\downarrow}\rangle_l\langle\bar{\downarrow}|)]t\}$, and map the Hamiltonian in Eq. (\[eq1\]) into the single-excitation subspace span$\{|\bar{\alpha}\rangle_l\}$ to get $$\begin{aligned}
\label{eq16}
H'_{\text{JC}}&=&U^{\dag}H_{\text{JC}}U+i\dot{U}^{\dag}U\notag\\
&=&\sum_{l=1}^{N} \left[ \sum_{\alpha}p_{l,\alpha}|\bar{\alpha}\rangle_l \langle \bar{\alpha}|\right]+U^{\dag} \left(\sum_{l=1}^{N-1}h_{int}^l \right)U.\end{aligned}$$ Selecting $\omega_{l,\alpha,\alpha'}^d=(E_{l,\alpha}-p_{l,\alpha})-(E_{l+1,\alpha'}-p_{l+1,\alpha'})$, under the rotating-wave approximation, i.e., $|\omega_{l,\alpha,\alpha'}^d|\gg t_{l,\alpha,\alpha'}$, and $|\omega_{l,\alpha,\alpha'}^d\pm\omega_{l,\alpha'',\alpha'''}^d|\gg t_{l,\alpha'',\alpha'''}$, Eq. (\[eq16\]) is simplified to $$\begin{aligned}
\label{eq17}
&&H'_{\text{JC}}=\sum_{l=1}^{N} \sum_{\alpha}p_{l,\alpha}|\bar{\alpha}\rangle_l \langle\bar{\alpha}|\\
&&+\sum_l^{N-1} \sum_{\alpha,\alpha'}\{ t_{l,\alpha,\alpha'} (2\delta_{\alpha,\alpha'}-1)\left|\bar{\alpha} \right>_{l,l+1}\left<\bar{\alpha'} \right| e^{-i\varphi_{l,\alpha,\alpha'}}+\text{h.c.} \}.\notag\end{aligned}$$ Thus, we can adjust $p_{l,\alpha}$, $t_{l,\alpha,\alpha'}$, $\varphi_{l,\alpha,\alpha'}$, and $\omega_{l,\alpha,\alpha'}$ to implement different forms of spin-orbit coupling. The on-site potential and the hopping patterns of the Hamiltonian before and after the unitary transformation are shown in Fig. \[figure2\](b) and \[figure2\](c), respectively.
According to Eq. (\[eq17\]), we choose $p_{l,\uparrow}=h_z$, $p_{l,\downarrow}=-h_z$, $t_{l,\uparrow,\uparrow}=-t_{l,\downarrow,\downarrow}=t_0$, $t_{l,\uparrow,\downarrow}=t_{l,\downarrow,\uparrow}=\Delta_0$, $\varphi_{l,\uparrow,\uparrow}=\varphi_{l,\downarrow,\downarrow}=0$, $\varphi_{l,\uparrow,\downarrow}=-\pi/2+\varphi$, $\varphi_{l,\downarrow,\uparrow}=-\pi/2-\varphi$, and the Hamiltonian reduces to the Hamiltonian in Eq. (\[eq4\]) that we want to simulate. In this case, the four drive tones applied via the external magnetic flux are $$\begin{aligned}
J_l(t)&=&4t_0\cos(\omega_{l,\uparrow,\uparrow}t)-4t_0\cos(\omega_{l,\downarrow,\downarrow}t)\notag\\
&&+4\Delta_0\cos(\omega_{l,\uparrow,\downarrow}t-\frac{\pi}{2}+\varphi)\notag\\
&&+4\Delta_0\cos(\omega_{l,\downarrow,\uparrow}t-\frac{\pi}{2}-\varphi),
\label{eq2}\end{aligned}$$ where $$\begin{aligned}
\omega_{l,\uparrow,\uparrow}&=&E_{l,\uparrow}-E_{l+1,\uparrow},\notag\\
\omega_{l,\uparrow,\downarrow}&=&E_{l,\uparrow}-E_{l+1,\downarrow}-2h_z,\notag\\
\omega_{l,\downarrow,\uparrow}&=&E_{l,\downarrow}-E_{l+1,\uparrow}+2h_z,\notag\\
\omega_{l,\downarrow,\downarrow}&=&E_{l,\downarrow}-E_{l+1,\downarrow},
\label{eq3}\end{aligned}$$ Here $4t_0$, $4\Delta_0$, $\varphi$, and $2h_z$ are the amplitudes, phase, and detuning, respectively. This time-dependent coupling strength $J_l(t)$ can be realized by adding external magnetic fluxes with dc and ac components threading the SQUIDs [@Gu; @Lei-induc3-prl]. Both the hopping strengths and the hopping phases can be controlled by the amplitudes and phases of the ac flux. We set $h_z/t_0=0.3$; then the smallest frequency distance between any two of the drive frequencies is nearly 20 times the effective hopping strengths $t_0$ and $\Delta_0$, so these four drive tones can address the corresponding four hoppings, as shown in Fig. \[figure2\](b).
![Dynamical detection of polaritonic topological edge states. Time evolution of polaritonic density distribution $\langle \sigma^+\sigma^- + \hat{a}^\dagger \hat{a}\rangle$ when the JC lattice is in (a) $|\Psi_{f}(x)\rangle$ which obtained by the $\hat{O}_1$ and $\hat{O}_2$ quantum operations, and (b) $|\Psi^{'}_{f}(x)\rangle$ obtained by the $\hat{O}_2$ and $\hat{O}_1$ quantum operations. The edge-site population $P_1(t)$ and $P_2(t)$ at 1.5 $\mu$s and the oscillation center $\nu/2$ of (c) edge states $|\Psi_{f}(x)\rangle$, and (d) $|\Psi^{'}_{f}(x)\rangle$ for different decay rates $\gamma$.[]{data-label="figure3"}](FIGURE3.pdf){width="\columnwidth"}
Detection of topological properties
===================================
According to Eq. (\[eq7\]) and (\[eq8\]), or as shown in Fig. \[figure1\](c) and \[figure1\](d), the polariton in the left (right) edge state is maximally distributed at the leftmost (rightmost) JC lattice site. Their internal spins are in the superposition states $\left(|\uparrow\rangle_l+i|\downarrow\rangle_l \right)/\sqrt{2}$ and $\left(|\uparrow\rangle_l-i|\downarrow\rangle_l \right)/\sqrt{2}$, respectively. In our demonstration of the non-Abelian statistics, the two final states $|\Psi_{f}(x)\rangle$ and $|\Psi^{'}_{f}(x)\rangle$ correspond to the two edge states, which remain mostly localized at their corresponding edge sites for a long time. Therefore, by detecting the population of the edge sites, we can successfully verify the final states.
The detection of the state $|\Psi_{f}(x)\rangle$, which is obtained by first applying $\hat{O}_1$ and then $\hat{O}_2$, is shown in Fig. \[figure3\](a), and the detection of the state $|\Psi^{'}_{f}(x)\rangle$, obtained by applying $\hat{O}_2$ and then $\hat{O}_1$, is shown in Fig. \[figure3\](b). The initial states of the two detections are taken as $$\begin{aligned}
|\Psi_f(t=0)\rangle&=|0\text{g}\rangle_1\cdots|0\text{g}\rangle_{N-1} \left(|\uparrow\rangle_N-i|\downarrow\rangle_N \right)/\sqrt{2},\\
|\Psi^{'}_f(t=0)\rangle&=\left(|\uparrow\rangle_1+i|\downarrow\rangle_1 \right)|0\text{g}\rangle_2\cdots|0\text{g}\rangle_N/\sqrt{2}.\\
\end{aligned}$$ It can be seen that, after an evolution of 3 $\mu$s, because of the topological protection, the density distribution of the polaritons in the JC lattice remains mostly localized at the corresponding ends. Therefore, the two quantum states $|\Psi_{f}(x)\rangle$ and $|\Psi^{'}_{f}(x)\rangle$ are experimentally distinguishable.
The polaritonic topological winding number can be related to the time-averaged dynamical chiral center associated with the single-polariton dynamics [@Gu; @Mei], i.e., $$\nu ={\lim_{T\rightarrow \infty }}\frac{2}{T}\int_{0}^{T}\text{d}t\, \langle \psi_{\text{c}}(t)| \hat{P}_{\text{d}} |\psi_{\text{c}}(t)\rangle,
\label{eq10}$$ where $T$ is the evolution time, $\hat{P}_{\text{d}}=\sum_{l=1}^{N}l \bm{\sigma_y}^l$, and $|\psi_{\text{c}}(t)\rangle =\exp(-iHt)|\psi_{\text{c}}(0)\rangle$ is the time evolution of the initial single-polariton state $|\psi_{\text{c}}(0)\rangle=|0\text{g}\rangle_1\cdots |\uparrow\rangle_{\lceil N/2 \rceil} \cdots|0\text{g}\rangle_N$, in which a single polariton has been placed at one of the middle JC lattice sites, with its spin prepared in the state $|\uparrow\rangle$.
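The time average in Eq. (\[eq10\]) can be approximated numerically by discretizing the integral. The sketch below is our own illustration, reusing the single-excitation matrix from the previous code block; the evolution time and step number are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

def chiral_center(h, n_cells, t_final=400.0, n_steps=4000):
    """Discretized version of Eq. (eq10), with time measured in units of 1/t0."""
    sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    p_d = np.kron(np.diag(np.arange(1, n_cells + 1)), sigma_y)   # sum_l l * sigma_y^(l)
    psi = np.zeros(2 * n_cells, dtype=complex)
    psi[2 * (n_cells // 2)] = 1.0                  # spin-up polariton on a middle cell
    u_dt = expm(-1j * h * (t_final / n_steps))     # single-step propagator
    avg = 0.0
    for _ in range(n_steps):
        psi = u_dt @ psi
        avg += np.real(np.vdot(psi, p_d @ psi)) / n_steps
    return 2.0 * avg                               # approximates (2/T) * int_0^T <P_d> dt

# For the topological parameters above, chiral_center(chain_hamiltonian(), 16)
# should approach the winding number nu = 1, up to finite-size and finite-time effects.
```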
In Fig. \[figure3\](c) and \[figure3\](d), we plot the edge-site population $$\begin{aligned}
&P_1(t)=\text{Tr}\left[\rho(t)\left(a_1^\dag a_1+\sigma_1^+\sigma_1^-\right)\right],\\
&P_2(t)=\text{Tr}\left[\rho(t)\left(a_N^\dag a_N+\sigma_N^+\sigma_N^-\right)\right],
\end{aligned}$$ after 1.5 $\mu$s, and the oscillation center $\nu/2$, of the states $|\Psi_{f}(x)\rangle$ and $|\Psi^{'}_{f}(x)\rangle$ for different decay rates. The figure shows that the edge-state population and the chiral center decrease smoothly as the decay rate increases. As the decay rate $\gamma$ continues to increase, the excitation leaks into the bulk of the system, the edge state is destroyed by the noise, and the detection fails.
Finally, the influence of photon loss and qubit decoherence is evaluated by numerically integrating the Lindblad master equation, which can be written as $$\dot {\rho }=-{i}[H_{\text{JC}},\rho ]+\sum_{l=1}^{N}\sum _{i=1}^{3}\gamma\left(\Gamma_{l,i}\,\rho \Gamma_{l,i}^{\dagger }-{\frac {1}{2}}\left\{\Gamma_{l,i}^{\dagger }\Gamma_{l,i},\rho \right\}\right),
\label{eq11}$$ where $\rho$ is the density operator of the whole system, $\gamma$ is the decay rate or noise strength, which is set to be the same for all channels here, and $\Gamma_{l,1}=a_{l},\;\Gamma_{l,2}=\sigma^-_{l}$ and $\Gamma_{l,3}=\sigma^z_{l}$ are the photon-loss, transmon-loss and transmon-dephasing operators in the $l$th lattice site, respectively. The typical decay rate is $\gamma=2\pi\times5$ kHz; at this decay rate, the detection of the edge state $|\Psi_{f}(x)\rangle$ results in $P_1(\tau) = 0$ and $P_2(\tau)= 0.974$ when $\tau=1.5$ $\mu$s, which corresponds to a chiral center $\nu/2\simeq 0.451$. For the edge state $|\Psi^{'}_{f}(x)\rangle$, we have $P_1(\tau) = 0.971$ and $P_2(\tau)= 0$, which corresponds to a chiral center $\nu/2\simeq 0.453$. Because of the topological protection, the system is only weakly affected by decoherence, and these data are sufficient to distinguish the edge states $|\Psi_{f}(x)\rangle$ and $|\Psi^{'}_{f}(x)\rangle$.
Conclusion
==========
In conclusion, we propose to establish a 1D chain of superconducting circuits and show that non-Abelian statistics can be demonstrated experimentally on it. The advantages of the superconducting circuit system make our scenario feasible and stable, which will shed light on efforts to realize a quantum computer. In addition, we also discuss the effect of decoherence on the edge states of the system, and the results show that our protocol remains reliable under decoherence, which is very important for realizing quantum computation in experiments.
This work was supported by the National Natural Science Foundation of China (Grant No. 11874156 and No. 11904111), the Key R&D Program of Guangdong Province (Grant No. 2018B030326001), the National Key R&D Program of China (Grant No. 2016YFA0301803), and the Project funded by China Postdoctoral Science Foundation (Grant No. 2019M652684).
[usrt]{}
M. A. Nielsen and I.L. Chuan, *Quantum Computaion and Quantum Information* (Cambridge University Press, Cambridge, 2000).
P. W. Shor, SIAM J. Comput. **26**, 1484 (1997).
X. G. Wen, Phys. Rev. Lett. [**66**]{}, 802 (1991).
A. Y. Kitaev, Ann. Phys. **303**, 2 (2003).
P. Bonderson, A. Kitaev, and K. Shtengel, Phys. Rev. Lett. [**96**]{}, 016803 (2006).
C. Nayak, Steven H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Rev. Mod. Phys. **80**, 1083 (2008).
L. Fu and C. L. Kane, Phys. Rev. Lett. **100**, 096407 (2008).
N. Read, Phys. Rev. B [**79**]{}, 045308 (2009).
M. Sato, and S. Fujimoto, Phys. Rev. B **79**, 094504 (2009).
J. Linder, Y. Tanaka, T. Yokoyama, A. Sudbø, and N. Nagaosa, Phys. Rev. Lett. **104**, 067001 (2010).
J. D. Sau, R. M. Lutchyn, S. Tewari, and S. Das Sarma, Phys. Rev. Lett. **104**, 040502 (2010).
J. Alicea, Phys. Rev. B **81**, 125318 (2010).
X-L. Qi, T. L. Hughes, and S-C. Zhang, Phys. Rev. B **82**, 184516 (2010).
A. Stern, Nature [**464**]{}, 187 (2010).
A. Y. Kitaev, Phys. Uspekhi **44**, 131 (2001).
J. Alicea, Y. Oreg, G. Refael, F. von Oppen, and M. P. A. Fisher, Nat. Phys. **7**, 412 (2011).
R. M. Lutchyn, T. D. Stanescu, and S. Das Sarma, Phys. Rev. Lett. **106**, 127001 (2011).
J. Liu, C. F. Chan, and M. Gong, Front. Phys. **14**, 13609 (2019).
B. Jäack, Y.-L. Xie, J. Li, S.-J. Jeon, B. A. Bernevig, and A. Yazdani, Science **364**, 1255 (2019).
J. Q. You and F. Nori, Nature (London) [**474**]{}, 589 (2011).
M. H. Deveret and R. J. Schoelkopf, Science [**339**]{}, 1169 (2013).
Z.-L. Xiang, S. Ashhab, J. Q. You, and F. Nori, Rev. Mod. Phys. [**85**]{}, 623 (2013).
X. Gu, A. F. Kockum, A. Miranowicz, Y.-X. Liu, and F. Nori, Phys. Rep. **718-719**, 1 (2017).
A. A. Houck, H. E. Tureci, and J. Koch, Nat. Phys. [**8**]{}, 292 (2012).
Y. Salathé, M. Mondal, M. Oppliger, J. Heinsoo, P. Kurpiers, A. Potočnik, A. Mezzacapo, U. Las Heras, L. Lamata, E. Solano, S. Filipp, and A. Wallraff, Phys. Rev. X [**5**]{}, 021027 (2015).
R. Barends, L. Lamata, J. Kelly, L. García-Álvarez, A. G. Fowler, A. Megrant, E. Jeffrey, T. C. White, D. Sank, J. Y. Mutus, *et al*., Nat. Commun. [**6**]{}, 7654 (2015).
P. Roushan, C. Neill, J. Tangpanitanon, V. M. Bastidas, A. Megrant, R. Barends, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, *et al*., Science [**358**]{}, 1175 (2017).
E. Flurin, V. V. Ramasesh, S. Hacohen-Gourgy, L. S. Martin, N. Y. Yao, and I. Siddiqi, Phys. Rev. X [**7**]{}, 031023 (2017).
K. Xu, J. J. Chen, Y. Zeng, Y. R. Zhang, C. Song, W. X. Liu, Q. J. Guo, P. F. Zhang, D. Xu, H. Deng, K. Q. Huang, H. Wang, X. B. Zhu, D. N. Zheng, and H. Fan, Phys. Rev. Lett. [**120**]{}, 050507 (2018).
X.-Y. Guo, C. Yang, Y. Zeng, Y. Peng, H.-K. Li, H. Deng, Y.-R. Jin, S. Chen, D. Zheng, and H. Fan, Phys. Rev. Appl. [**11**]{}, 044080 (2019).
Z. Yan, Y. R. Zhang, M. Gong, Y. Wu, Y. Zheng, S. Li, C. Wang, F. Liang, J. Lin, Y. Xu, *et al*., Science [**364**]{}, 753 (2019).
Y. Ye, Z.-Y. Ge, Y. Wu, S. Wang, M. Gong, Y.-R. Zhang, Q. Zhu, R. Yang, S. Li, F. Liang, *et al*., Phys. Rev. Lett. [**123**]{}, 050502 (2019).
W. Cai, J. Han, F. Mei, Y. Xu, Y. Ma, X. Li, H. Wang, Y.-P. Song, Z.-Y. Xue, Z.-Q. Yin, S. Jia, and L. Sun, Phys. Rev. Lett. **123**, 080501 (2019).
E. T. Jaynes and F. W. Cummings, Proc. IEEE **51**, 89 (1963).
F.-L. Gu, J. Liu, F. Mei, S.-T. Jia, D.-W. Zhang, and Z.-Y. Xue, npj Quantum Inf. **5**, 36 (2019).
Z.-Y. Xue, F.-L. Gu, Z.-P. Hong, Z.-H. Yang, D.-W. Zhang, Y. Hu, and J. Q. You, Phys. Rev. Appl. [**7**]{}, 054022 (2017).
J. Liu, J.-Y. Cao, G. Chen, and Z.-Y. Xue, arXiv:1909.03674 (2019).
J. Dalibard, F. Gerbier, G. Juzeliūnas, and P. Öhberg, Rev. Mod. Phys. **83**, 1523 (2011).
V. Galitski and I. B. Spielman, Nature (London) **494**, 49 (2013).
N. Goldman, G. Juzeliūnas, P. Öhberg, and I. B. Spielman, Rep. Prog. Phys. **77**, 126401 (2014).
M. H. Devoret and R. J. Schoelkopf, Science **339**, 1169 (2013).
D. N. Sheng, Z. Y. Weng, L. Sheng, and F. D. M. Haldane, Phys. Rev. Lett. **97**, 036808 (2006).
J. E. Moore and L. Balents, Phys. Rev. B. **75**, 121306(R) (2007).
D. Xiao, M.-C. Chang, and Q. Niu, Rev. Mod. phys. [**82**]{}, 1959 (2010).
T. Fukui, Y. Hatsugai, and H. Suzuki, J. Phys. Soc. Jpn. **74**, 1674 (2005).
S. Felicetti, M. Sanz, L. Lamata, G. Romero, G. Johansson, P. Delsing, and E. Solano, Phys. Rev. Lett. **113**, 093602 (2014).
F. Mei, G. Chen, L. Tian, S. L. Zhu, S. Jia, Phys. Rev. A **98**, 032323 (2018).
---
abstract: 'We address the problem of analyticity up to the boundary of solutions to the Euler equations in the half space. We characterize the rate of decay of the real-analyticity radius of the solution $u(t)$ in terms of $\exp{\int_{0}^{t} \Vert \nabla u(s) \Vert_{L^\infty} ds}$, improving the previously known results. We also prove the persistence of the sub-analytic Gevrey-class regularity for the Euler equations in a half space, and obtain an explicit rate of decay of the radius of Gevrey-class regularity.'
address:
- 'Department of Mathematics, University of Southern California, Los Angeles, CA 90089'
- 'Department of Mathematics, University of Southern California, Los Angeles, CA 90089'
author:
- Igor Kukavica
- Vlad Vicol
title: 'The Domain of Analyticity of Solutions to the Three-Dimensional Euler Equations in a Half Space'
---
Introduction {#sec:intro}
============
The Euler equations on a half space for the velocity vector field $u(x,t)$ and the scalar pressure field $p(x,t)$, where $x\in \hhh = \{x\in {\mathbb R}^3:x_3 >0\}$ and $t \geq 0$, are given by $$\begin{aligned}
&\partial_t u + (u\cdot \nabla) u + \nabla p = 0, \ \mbox{in}\ \hhh \times (0,\infty), \tag{E.1}\label{eq:E1}\\
&\nabla \cdot u = 0, \ \mbox{in}\ \hhh \times (0,\infty), \tag{E.2}\label{eq:E2}\\
& u \cdot n = 0, \ \mbox{on}\ \partial\hhh \times (0,\infty)
\tag{E.3}\label{eq:E3},\end{aligned}$$ where $n = (0,0,-1)$ is the outward unit normal to $\partial\hhh =
\{ x\in {\mathbb R}^3:x_3 =0\}$. We consider the initial value problem associated to (E.1)–(E.3) with a divergence free initial datum $$\begin{aligned}
u(0) = u_0,\tag{E.4}\ \mbox{in}\ \hhh. \label{eq:E4}\end{aligned}$$ The local existence and uniqueness of $H^r$-solutions, with $r > 3/2 + 1$, on a maximal time interval $[0,T_*)$ holds (cf. [@BoB; @EM; @Ka; @MB; @T]), and $\lim_{T\nearrow T_*} \int_{0}^{T} \Vert \curl u(t) \Vert_{L^\infty} dt = \infty$, if $T_*<\infty$ (cf. [@BKM]); additionally the persistence of $C^\infty$ smoothness was proven by Foias, Frisch, and Temam [@FFT]. In this paper we address the solutions of the Euler initial value problem evolving from real-analytic and Gevrey-class initial datum (up to the boundary), and characterize the domain of analyticity. We emphasize that the radius of real-analyticity gives an estimate on the minimal scale in the flow [@HKR; @K2], and it also gives the explicit rate of exponential decay of its Fourier coefficients [@FT].
In a three dimensional bounded domain, the persistence of analyticity was proven by Bardos and Benachour [@BB] by an implicit argument (see also Alinhac and Métivier [@AM]). In [@B; @Be] the authors give an explicit estimate on the radius of analyticity, but which vanishes in finite time (independent of $T_*$). However, the proof of persistency [@BB] can be modified to show that the radius of analyticity decays at a rate proportional to the exponential of a high Sobolev norm of the solution (see also [@AM]). On the three dimensional periodic domain (or equivalently on ${\mathbb R}^3$) this is the same rate obtained by Levermore and Oliver in [@LO], using the method of Gevrey-class regularity. This Fourier based method was introduced by Foias and Temam [@FT] to study the analyticity of the Navier-Stokes equations. For further results on analyticity see [@AM; @CTV; @FTi; @GK1; @GK2; @K1; @K2; @L1; @L2; @Lb; @OT; @SC]. Explicit and even algebraic lower bounds for the radius of analyticity for dispersive equations were obtained by Bona, Grujić, and Kalisch in [@BGK1; @BGK2] (see also [@BG; @BL]).
In [@KV] we have proven that in the periodic setting, or on ${\mathbb R}^3$, the analyticity radius decays algebraically in the Sobolev norm $\Vert \curl u(t) \Vert_{H^r}$, with $r>7/2$, and exponentially in $\int_{0}^{t} \Vert \nabla u(s) \Vert_{L^\infty}$, for all $t<T_*$. In the present paper we show that the algebraic dependence on the Sobolev norm holds in the case when the domain has boundaries (cf. Theorem \[thm:main\]), thereby improving the previously known results. The interior analyticity in the case of the half-space, for short time (independent of $T_*$), was treated in [@SC]. We note that the shear flow example of Bardos and Titi [@BT] (cf. [@DM; @Y2]) may be used to construct explicit solutions to the three-dimensional Euler equations whose radius of analyticity is decaying for all time. Additionally we prove the persistence of sub-analytic Gevrey-class regularity up to the boundary (cf. [@FT; @LM]) for the Euler equations on the half space. To the best of our knowledge this was only known for the periodic domain cf. [@KV; @LO], but not for a domain with boundary. The methods of [@AM; @BB; @Lb; @SC] rely essentially on the special structure of the complex holomorphic functions, and do not apply to the non-analytic Gevrey-class setting.
The presence of the boundary creates several difficulties that do not arise in the periodic setting. In particular we cannot use Fourier-based methods, nor can we use the vorticity formulation of the equations. Instead we need to estimate the pressure, which satisfies (cf. [@T]) the elliptic Neumann problem $$\begin{aligned}
&- \Delta p = \partial_j u_i \partial_i u_j,\ \mbox{in}\ \hhh \times (0,\infty), \tag{P.1}
\label{eq:P1}\\
&\frac{\partial p}{\partial n} = (u\cdot \nabla) u \cdot n = 0, \ \mbox{on}\ \partial\hhh \times (0,\infty), \tag{P.2} \label{eq:P20}\end{aligned}$$ since $n=(0,0,-1)$, where the summation convention on repeated indices was used in . In order to close our argument we need to show that the pressure has the same analyticity radius as the velocity, and so we cannot appeal to the inductive argument of Lions and Magenes [@LM]. Moreover, the nature of the elliptic/hyperbolic boundary value problem imposes certain restrictions on the weights of the Sobolev norms that comprise the analytic norm. The analytic norm we define (cf. Section \[sec:proof\]) respects the symmetries of the problem and is adequate to account for the transfer of derivatives arising in the higher regularity estimates for the pressure.
The paper is organized as follows. In Section \[sec:thm\] we state our main result, Theorem \[thm:main\]. In Section \[sec:proof\] we prove the main theorem assuming two key estimates on the convection term and the pressure term, Lemma \[lemma:commutator\] and Lemma \[lemma:pressure\]. Section \[sec:commutator\] contains the proof of the commutator estimate Lemma \[lemma:commutator\], and lastly, the higher regularity estimates for the pressure and the proof of Lemma \[lemma:pressure\] are given in Section \[sec:pressure\].
Main Theorems {#sec:thm}
=============
The following statement is our main theorem addressing the analyticity of the solution. Theorem \[thm:gevrey\] below concerns the Gevrey class persistence.
\[thm:main\] Fix $r>9/2$. Let $u_0 \in H^r(\Omega)$ be divergence-free and uniformly real-analytic in $\Omega$. Then the unique solution $u(t) \in C(0,T_*;H^r(\Omega))$ of the initial value problem associated to the Euler equations (E.1)–(E.4) is real-analytic for all time $t<T_*$, where $T_* \in (0,\infty]$ denotes the maximal time of existence of the $H^r$-solution. Moreover, the uniform radius of space analyticity $\tau(t)$ of $u(t)$ satisfies $$\begin{aligned}
\tau(t) \geq \frac{1}{C_0(1+t)} \exp\left({-C\int_{0}^{t}
\Vert{\nabla u(s)}\Vert_{L^\infty} ds}\right),\label{eq:thm}
\end{aligned}$$ where $C>0$ is a constant that depends only on $r$, while $C_0$ has additional dependence on $u_0$ as described below.
The lower bound improves the rate of decay from Bardos and Benachour [@BB] on a bounded domain (which can be inferred to be proportional to $\exp{\int_{0}^{t} \Vert u(s) \Vert_{H^r} ds}$), and it matches the rate of decay we obtained in [@KV] on the periodic domain.
The proof of Theorem \[thm:main\] also works in the case of the half-plane (recall that in two dimensions $T_*$ may be taken arbitrarily large, cf. [@MB; @Y1]) with the same lower bound on the radius of analyticity of the solution. Since in two dimensions $\Vert \nabla u(t) \Vert_{L^\infty}$ grows at a rate of $C \exp(Ct)$, for some positive constant $C$, the estimate (\[eq:thm\]) shows that the analyticity radius is bounded from below by $C \exp(- C \exp(Ct))$, for some $C>0$. This recovers the two-dimensional rate of decay obtained by Bardos, Benachour and Zerner [@BBZ] on a bounded domain and by the authors of this paper on the torus [@KV]. It would be interesting if one could prove a similar lower bound to (\[eq:thm\]) but where the quantity $\int_{0}^{t} \Vert \nabla u(s) \Vert_{L^\infty}\, ds$ is replaced by $\int_{0}^{t} \Vert \curl u(s) \Vert_{L^\infty}\, ds$. In particular, such an estimate would imply in two dimensions that the radius of analyticity decays as a single exponential in time.
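To make the two-dimensional remark concrete, here is the substitution (a sketch only, with generic constants that may change from one inequality to the next): if $\Vert \nabla u(t) \Vert_{L^\infty} \leq C_1 \exp(C_1 t)$ for all $t\geq 0$, then $$\int_{0}^{t} \Vert \nabla u(s) \Vert_{L^\infty}\, ds \leq e^{C_1 t} - 1, \qquad\mbox{and hence}\qquad \tau(t) \geq \frac{1}{C_0(1+t)}\, e^{-C(e^{C_1 t}-1)} \geq C_2\, e^{- C_3 e^{C_1 t}},$$ where in the last step the algebraic factor $(1+t)^{-1}$ was absorbed into the double exponential.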
Recall (cf. [@LM]) that a smooth function $v$ is uniformly of Gevrey-class $s$, with $s\geq 1$, if there exist $M,\tau >0$ such that $$\begin{aligned}
\label{eq:gevrey}
|\partial^\alpha v(x)| \leq M
\frac{|\alpha|!^s}{\tau^{|\alpha|}},\end{aligned}$$ for all $x\in \hhh$ and all multi-indices $\alpha \in {\mathbb N}_{0}^3$. When $s=1$ we recover the class of real-analytic functions, and for $s\in(1,\infty)$ these functions are $C^\infty$ smooth but might not be analytic. We call the constant $\tau$ in (\[eq:gevrey\]) the radius of Gevrey-class regularity. The following theorem shows the persistence of the Gevrey-class regularity for the Euler equations in a half-space.
\[thm:gevrey\] Fix $r>9/2$. Let $u_0$ be uniformly of Gevrey-class $s$ on $\hhh$, with $s>1$, and divergence-free. Then the unique $H^r$-solution $u(t)$ of the initial value problem (E.1)–(E.4) on $[0,T_*)$ is of Gevrey-class $s$, for all $t<T_*$, and the radius $\tau(t)$ of Gevrey-class regularity of the solution satisfies the lower bound (\[eq:thm\]).
Proofs of Theorem \[thm:main\] and Theorem \[thm:gevrey\] {#sec:proof}
=========================================================
For a multi-index $\alpha = (\alpha_1,\alpha_2,\alpha_3)$ in ${\mathbb
N}_{0}^{3}$, we denote $\alpha' = (\alpha_1,\alpha_2)$. Define the Sobolev and Lipshitz semi-norms $|\cdot|_m$ and $|\cdot|_{m,\infty}$ by $$\begin{aligned}
|v|_m =\sum\limits_{|\alpha| = m}
M_\alpha
\Vert{\partial^\alpha v}\Vert_{L^2}, \label{eq:m}\end{aligned}$$ and $$\begin{aligned}
|v|_{m,\infty} = \sum\limits_{|\alpha|=m} M_\alpha
\Vert{\partial^\alpha v}\Vert_{L^\infty},\end{aligned}$$ where $$\begin{aligned}
M_\alpha = \frac{|\alpha'|!}{\alpha'!} = {\alpha_1 + \alpha_2
\choose \alpha_1}. \label{eq:Mdef}\end{aligned}$$ The need for the binomial weights $M_\alpha$ in shall be evident in Section \[sec:pressure\] where we study the higher regularity estimates associated with the Neumann problem –. For $s
\geq 1$ and $\tau>0$, define the space $$\begin{aligned}
X_\tau = \{ v \in
C^\infty(\hhh):\Vert{v}\Vert_{X_\tau} < \infty\},\end{aligned}$$ where $$\begin{aligned}
\Vert{v}\Vert_{X_\tau} = \sum\limits_{m=3}^{\infty}
|v|_m \frac{\tau^{m-3}}{(m-3)!^{s}}.\end{aligned}$$ Similarly let $Y_\tau = \{ v \in C^\infty(\hhh):\Vert
v\Vert_{Y_\tau} < \infty\}$, where $$\begin{aligned}
\Vert{v}\Vert_{Y_\tau} = \sum\limits_{m=4}^{\infty}
|v|_m \frac{(m-3)\tau^{m-4}}{(m-3)!^{s}} .\end{aligned}$$
The above defined spaces $X_\tau$ and $Y_\tau$ can be identified with the classical Gevrey-$s$ classes as defined in [@LM]. On the full space or on the torus, the Gevrey-$s$ classes can also be identified with ${\mathcal D}((-\Delta)^{r/2} \exp{(\tau (-\Delta)^{1/2s})})$ (cf. [@FT; @KV; @LO]).
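As a quick consistency check, which is only a sketch and not part of the argument below, note that $v\in X_\tau$ implies a pointwise bound of the form (\[eq:gevrey\]) on any smaller radius: by the Sobolev embedding $H^2(\hhh)\subset L^\infty(\hhh)$ and since $M_\beta \geq 1$, $$\Vert \partial^\alpha v \Vert_{L^\infty} \leq C \sum_{j=0}^{2} |v|_{|\alpha|+j} \leq C \Vert v \Vert_{X_\tau} \sum_{j=0}^{2} \frac{(|\alpha|+j-3)!^{s}}{\tau^{|\alpha|+j-3}} \leq M \frac{|\alpha|!^{s}}{\bar\tau^{|\alpha|}},$$ for every $\bar\tau < \tau$ and all $|\alpha|\geq 3$, where $M$ depends on $s$, $\tau$, $\bar\tau$, and $\Vert v\Vert_{X_\tau}$; the finitely many lower order derivatives do not affect the Gevrey classification.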
We shall prove Theorems \[thm:main\] and \[thm:gevrey\] simultaneously by looking at the evolution equation in Gevrey-$s$ classes with $s\geq1$. If $u_0$ is of Gevrey-class $s$ in $\Omega$, with $s\geq1$, then there exists $\tau(0)>0$ such that $u_0 \in X_{\tau(0)}$, and moreover $\tau(0)$ can be chosen arbitrarily close to the uniform real-analyticity radius of $u_0$, respectively to the radius of Gevrey-class regularity. Let $u(t)$ be the classical $H^r$-solution of the initial value problem (E.1)–(E.4).
With the notations of Section \[sec:thm\] we have an [*a priori*]{} estimate $$\begin{aligned}
\frac{d}{dt} \Vert{u(t)}\Vert_{\Xtau} =
\dot{\tau}(t) \Vert{u(t)}\Vert_{\Ytau} + \sum\limits_{m=3}^{\infty}
\left(\frac{d}{dt} |u(t)|_m\right)
\frac{\tau(t)^{m-3}}{(m-3)!^{s}}. \label{eq:ODE1}\end{aligned}$$ Fix $m\geq 3$. In order to estimate $(d/dt) |u(t)|_m$, for each $|\alpha|=m$ we apply $\partial^\alpha$ on and take the $L^2$-inner product with $\partial^\alpha u$. We obtain $$\begin{aligned}
\frac12 \frac{d}{dt} \Vert{\partial^\alpha
u}\Vert_{L^2}^2 + < \partial^\alpha (u \cdot \nabla u),
\partial^\alpha u> + < \nabla \partial^\alpha p, \partial^\alpha
u> = 0. \label{eq:inner1}\end{aligned}$$ On the second term on the left, we apply the Leibniz rule and recall that $< u \cdot \nabla \partial^\alpha u,
\partial^\alpha u> = 0$. For the third term on the left of we note that since $n = (0,0,-1)$ and $u \cdot n = 0$ on $\partial\hhh$, we have that $\partial^\alpha u \cdot n =0$ for all $\alpha$ such that $\alpha_3 = 0$. Together with $\nabla \cdot u =
0$ in $\hhh$ this implies that $< \nabla
\partial^\alpha p,
\partial^\alpha u> = 0$ whenever $\alpha_3 = 0$. Using the Cauchy-Schwarz inequality and summing over $|\alpha|=m$ we then obtain $$\begin{aligned}
\frac{d}{dt} |u|_m \leq \sum\limits_{|\alpha| =
m}\sum\limits_{\beta \leq \alpha, \beta\neq 0} M_\alpha {\alpha
\choose \beta} \Vert{\partial^\beta u \cdot \nabla
\partial^{\alpha-\beta} u}\Vert_{L^2} + \sum\limits_{|\alpha|
= m, \alpha_3\neq 0}M_\alpha \Vert{\nabla \partial^\alpha
p}\Vert_{L^2}.\end{aligned}$$ Combined with , the above estimate shows that $$\begin{aligned}
\frac{d}{dt} \Vert{u(t)}\Vert_{\Xtau} \leq
\dot{\tau}(t) \Vert{u(t)}\Vert_{\Ytau} + \CCC + \PPP
\label{eq:ODE2},\end{aligned}$$ where the upper bound on the commutator term is given by $$\begin{aligned}
\CCC = \sum\limits_{m=3}^{\infty} \sum\limits_{|\alpha| =
m}\sum\limits_{\beta \leq \alpha, \beta\neq 0} M_\alpha {\alpha
\choose \beta} \Vert{\partial^\beta u \cdot \nabla
\partial^{\alpha-\beta} u}\Vert_{L^2}
\frac{\tau^{m-3}}{(m-3)!^{s}},\end{aligned}$$ and the upper bound on the pressure term is $$\begin{aligned}
\PPP = \sum\limits_{m=3}^{\infty} \sum\limits_{|\alpha| = m,
\alpha_3\neq 0}M_\alpha \Vert{\nabla \partial^\alpha p}\Vert_{L^2}
\frac{\tau^{m-3}}{(m-3)!^{s}}.\end{aligned}$$ In order to estimate $\CCC$ we use the following lemma, the proof of which is given in Section \[sec:commutator\] below.
\[lemma:commutator\] There exists a sufficiently large constant $C>0$ such that $$\begin{aligned}
\CCC \leq C \left(\mathcal{C}_1 +
\mathcal{C}_2 \Vert{u}\Vert_{Y_\tau}\right),\end{aligned}$$ where $$\begin{aligned}
\mathcal{C}_1 = |u|_{1,\infty} |u|_3 + |u|_{2,\infty} |u|_2 + \tau |u|_{2,\infty} |u|_3,\end{aligned}$$ and $$\begin{aligned}
\mathcal{C}_2 = \tau |u|_{1,\infty} + \tau^{2} |u|_{2,\infty} + \tau^3 |u|_{3,\infty} + \tau^{3/2} \Vert{u}\Vert_{X_\tau}.\end{aligned}$$
The following lemma shall be used to estimate $\PPP$. The proof is given in Section \[sec:pressure\] below.
\[lemma:pressure\] There exists a sufficiently large constant $C>0$ such that $$\begin{aligned}
\PPP \leq C \left(\PPP_1 + \PPP_2
\Vert{u}\Vert_{Y_\tau}\right),\end{aligned}$$ where $$\begin{aligned}
\PPP_1 = |u|_{1,\infty} |u|_3 + |u|_{2,\infty} |u|_2 + \tau |u|_{2,\infty} |u|_3 + \tau^2 |u|_{3,\infty} |u|_3,\end{aligned}$$ and $$\begin{aligned}
\PPP_2 = \tau |u|_{1,\infty} + \tau^2 |u|_{2,\infty} + \tau^3 |u|_{3,\infty} + \tau^{3/2} \Vert{u}\Vert_{X_\tau}.\end{aligned}$$
Let $r>9/2$ be fixed. The Sobolev embedding theorem, the two lemmas above, and imply $$\begin{aligned}
& \frac{d}{dt} \Vert{u(t)}\Vert_{X_{\tau(t)}} \leq
\dot{\tau}(t) \Vert{u(t)}\Vert_{Y_{\tau(t)}} + C
\Vert{u(t)}\Vert_{H^r}^2 ( 1+ \tau(t)^2) \notag\\
&\ + C
\Vert{u(t)}\Vert_{Y_{\tau(t)}} \left( \tau(t)
\Vert{\nabla u(t)}\Vert_{L^\infty} + (\tau(t)^2 + \tau(t)^3)
\Vert{u(t)}\Vert_{H^r} + \tau(t)^{3/2}
\Vert{u(t)}\Vert_{X_{\tau(t)}}\right). \label{eq:ODE3}\end{aligned}$$ If $\tau(t)$ decreases fast enough so that for all $0 \leq t < T_*$ we have $$\begin{aligned}
\label{eq:condition1}
\dot{\tau}(t) + C \tau(t)
\Vert{\nabla u(t)}\Vert_{L^\infty} + C (\tau(t)^2 + \tau(t)^3)
\Vert{u(t)}\Vert_{H^r} + C \tau(t)^{3/2}
\Vert{u(t)}\Vert_{X_{\tau(t)}} \leq 0,\end{aligned}$$ then implies that $$\begin{aligned}
\frac{d}{dt} \Vert{u(t)}\Vert_{X_{\tau(t)}} \leq C
\Vert{u(t)}\Vert_{H^r}^2 ( 1+ \tau(0)^2),\end{aligned}$$ and therefore $$\begin{aligned}
\Vert{u(t)}\Vert_{X_{\tau(t)}} \leq \Vert{u_0}\Vert_{X_{\tau(0)}} +
C_{\tau(0)} \int_{0}^{t} \Vert{u(s)}\Vert_{H^r}^2 ds = M(t),\end{aligned}$$ for all $0\leq t < T_*$, where $C_{\tau(0)} =1+
\tau(0)^2$. Since $\tau$ must be chosen to be a decreasing function, a sufficient condition for to hold is that $$\begin{aligned}
\dot{\tau}(t) + C \tau(t)
\Vert{\nabla u(t)}\Vert_{L^\infty} + C \tau(t)^{3/2}
\left(
C_{\tau(0)}' \Vert{u(t)}\Vert_{H^r} +
M(t)\right) \leq 0, \label{eq:condition2}\end{aligned}$$ where $C_{\tau(0)}' = \tau(0)^{1/2} + \tau(0)^{3/2}$. For simplicity of the exposition we denote $$\begin{aligned}
G(t) = \exp\left( C \int_{0}^{t} \Vert{\nabla
u(s)}\Vert_{L^\infty} ds\right),\end{aligned}$$ where the constant $C>0$ is taken sufficiently large so that $\Vert{u(t)}\Vert_{H^r}^2 \leq \Vert{u_0}\Vert_{H^r}^2 G(t)$. It then follows that is satisfied if we let $$\begin{aligned}
\tau(t) = G(t)^{-1/2} \left( \tau(0)^{-1/2} + C \int_{0}^{t} \left(
C_{\tau(0)}' \Vert{u(s)}\Vert_{H^r} + M(s)\right) G(s)^{-1}
ds\right)^{-1/2}.\end{aligned}$$ The lower bound on the radius of analyticity stated in Theorem \[thm:main\] is then obtained by noting that $$\begin{aligned}
&\tau(0)^{-1/2} + C \int_{0}^{t} \left( C_{\tau(0)}'
\Vert{u(s)}\Vert_{H^r} + M(s)\right) G(s)^{-1} ds \notag
\\
& \qquad \qquad \leq \tau(0)^{-1/2} + C \int_{0}^{t} \left(
C_{\tau(0)}' \Vert{u_0}\Vert_{H^r} + \Vert{u_0}\Vert_{X_{\tau(0)}} +
s C_{\tau(0)}
\Vert{u_0}\Vert_{H^r} ^2\right) ds\notag\\
& \qquad \qquad \leq C_0 (1+t)^2, \label{eq:C0}\end{aligned}$$ and therefore $$\begin{aligned}
\tau(t) \geq G(t)^{-1/2} \frac{C_0}{1+t}.\end{aligned}$$ The last inequality in above gives the explicit dependence of $C_0$ on $u_0$. This concludes the [*a priori*]{} estimates that are used to prove Theorem \[thm:main\]. The proof can be made formal by considering an approximating solution $u^{(n)}$, $n\in {\mathbb N}$, proving the above estimates for $u^{(n)}$, and then taking the limit as $n\rightarrow \infty$. We omit these details.
The commutator estimate {#sec:commutator}
=======================
Before we prove Lemma \[lemma:commutator\] we state and prove two useful lemmas about multi-indexes, that will be used throughout in Sections \[sec:commutator\] and \[sec:pressure\] below.
\[lemma:choose\] We have $$\begin{aligned}
{\alpha \choose \beta} M_{\alpha} M_{\beta}^{-1}
M_{\alpha-\beta}^{-1} \leq {|\alpha| \choose |\beta|}
\label{eq:choose}\end{aligned}$$ for all $\alpha,\beta \in \mathbb{N}_0^3$ with $\beta \leq \alpha$.
Using we have that $$\begin{aligned}
{\alpha' \choose \beta'} M_{\alpha} M_{\beta}^{-1}
M_{\alpha-\beta}^{-1} = {|\alpha'| \choose |\beta'|},\end{aligned}$$ and hence the left side of is bounded by $$\begin{aligned}
{|\alpha'| \choose |\beta'|} {\alpha_3 \choose \beta_3}.\end{aligned}$$The lemma then follows from $$\begin{aligned}
{n \choose i} {m \choose j} \leq {n+m \choose i+j},\end{aligned}$$ for any $n,m\geq 0$ such that $n\geq i$ and $m\geq j$, which in turn we obtain by computing the coefficient in front of $x^{i+j}$ in the binomial expansions of $(1+x)^n (1+x)^m$ and $(1+x)^{m+n}$.
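The last inequality can also be confirmed by an exhaustive check over small parameters; the following throw-away script (not a substitute for the combinatorial argument) does so.

```python
# Brute-force check of  C(n,i) C(m,j) <= C(n+m, i+j)  for 0 <= i <= n and 0 <= j <= m,
# the elementary inequality used at the end of the proof above.
from math import comb

assert all(comb(n, i) * comb(m, j) <= comb(n + m, i + j)
           for n in range(13) for m in range(13)
           for i in range(n + 1) for j in range(m + 1))
print("inequality verified for all n, m <= 12")
```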
The second lemma allows us to re-write certain double sums involving multi-indices.
\[lemma:product\] Let $\{x_\lambda\}_{\lambda\in{\mathbb N}_{0}^{3}}$ and $\{y_\lambda\}_{\lambda \in {\mathbb N}_{0}^{3}}$ be real numbers. Then we have $$\begin{aligned}
\label{eq:product}
\sum\limits_{|\alpha|=m} \sum\limits_{|\beta|=j, \beta\leq \alpha} x_\beta y_{\alpha
- \beta} = \left( \sum\limits_{|\beta| =j} x_\beta\right) \left(
\sum\limits_{|\gamma| = m-j} y_\gamma\right).\end{aligned}$$
The proof of the above lemma is omitted: it consists of a re-labeling of the terms on the left side of .
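For completeness, the identity can also be checked numerically on random data; a minimal sketch, with an ad hoc enumeration of the multi-indices of a given length, is given below.

```python
# Numerical check of the re-indexing identity of the lemma above:
#   sum_{|alpha|=m} sum_{|beta|=j, beta<=alpha} x_beta y_{alpha-beta}
#       = (sum_{|beta|=j} x_beta) (sum_{|gamma|=m-j} y_gamma).
import random

def multi_indices(total):
    """All indices in N_0^3 with |alpha| = total."""
    return [(a, b, total - a - b)
            for a in range(total + 1) for b in range(total + 1 - a)]

random.seed(0)
m, j = 7, 3
x = {beta: random.random() for beta in multi_indices(j)}
y = {gamma: random.random() for gamma in multi_indices(m - j)}

lhs = sum(x[beta] * y[tuple(a - b for a, b in zip(alpha, beta))]
          for alpha in multi_indices(m)
          for beta in multi_indices(j)
          if all(b <= a for a, b in zip(alpha, beta)))
rhs = sum(x.values()) * sum(y.values())
print(abs(lhs - rhs) < 1e-12)   # True
```

We now proceed with the proof of the commutator estimate.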
We have $$\begin{aligned}
\CCC = \sum\limits_{m=3}^{\infty} \sum\limits_{j=1}^{m} \CCC_{m,j},\end{aligned}$$ where we denoted $$\begin{aligned}
\label{eq:Cmj}
\CCC_{m,j} =
\frac{\tau^{m-3}}{(m-3)!^{s}} \sum\limits_{|\alpha|=m}
\sum\limits_{|\beta|=j, \beta\leq \alpha} M_\alpha {\alpha \choose
\beta} \Vert{\partial^\beta u \cdot \nabla \partial^{\alpha-\beta}
u}\Vert_{L^2}.\end{aligned}$$ We now split the right side of the above equality into seven terms according to the values of $m$ and $j$, and prove the following estimates. For low $j$, we claim $$\begin{aligned}
&\sum\limits_{m=3}^{\infty} \CCC_{m,1}\leq C |u|_{1,\infty} |u|_3 +
C \tau |u|_{1,\infty} \Vert{u}\Vert_{Y_\tau}\label{eq:low1},\\
&\sum\limits_{m=3}^{\infty} \CCC_{m,2}\leq C |u|_{2,\infty} |u|_2 +
C \tau |u|_{2,\infty} |u|_3+C \tau^2 |u|_{2,\infty}
\Vert{u}\Vert_{Y_\tau} \label{eq:low2}, \end{aligned}$$ for intermediate $j$, we have $$\begin{aligned}
&\sum\limits_{m=6}^{\infty} \sum\limits_{j=3}^{[m/2]} \CCC_{m,j}
\leq C \tau^{3/2} \Vert{u}\Vert_{X_\tau} \Vert{u}\Vert_{Y_\tau}\label{eq:lowmed},\\
&\sum\limits_{m=7}^{\infty} \sum\limits_{j=[m/2]+1}^{m-3} \CCC_{m,j}
\leq C \tau^{3/2} \Vert{u}\Vert_{X_\tau}
\Vert{u}\Vert_{Y_\tau}\label{eq:highmed}, \end{aligned}$$ and for high $j$, $$\begin{aligned}
&\sum\limits_{m=5}^{\infty} \CCC_{m,m-2} \leq C \tau^3 |u|_{3,\infty} \Vert{u}\Vert_{Y_\tau}\label{eq:highm-2},\\
&\sum\limits_{m=4}^{\infty} \CCC_{m,m-1} \leq C \tau |u|_{2,\infty}
|u|_3+C \tau^2 |u|_{2,\infty} \Vert{u}\Vert_{Y_\tau}\label{eq:highm-1},\\
&\sum\limits_{m=3}^{\infty} \CCC_{m,m} \leq C |u|_{1,\infty} |u|_3
+ C \tau |u|_{1,\infty} \Vert{u}\Vert_{Y_\tau} \label{eq:highm}.\end{aligned}$$ Due to symmetry we shall only prove – and indicate the necessary modifications for –.
Proof of : The Hölder inequality, , and Lemma \[lemma:choose\] imply that $$\begin{aligned}
\label{eq:Cmjlow1-1}
\sum\limits_{m=3}^{\infty} \CCC_{m,1} &=
\sum\limits_{|\alpha|=3}\sum\limits_{|\beta|=1, \beta\leq \alpha}
\left(M_\beta \Vert{\partial^\beta u}\Vert_{L^\infty} \right)
\left( M_{\alpha - \beta} \Vert{\partial^{\alpha-\beta} \nabla u}\Vert_{L^2}\right) M_\alpha M_{\beta}^{-1} M_{\alpha-\beta}^{-1} {\alpha \choose \beta} \notag\\
& +\sum\limits_{m=4}^{\infty} \sum\limits_{|\alpha|=m}
\sum\limits_{|\beta|=1, \beta\leq \alpha} \left( M_\beta \Vert{\partial^\beta
u}\Vert_{L^\infty}\right) \left( M_{\alpha-\beta} \Vert{\partial^{\alpha-\beta} \nabla u}\Vert_{L^2} \frac{(m-3) \tau^{m-4}}{(m-3)!^s}\right) \notag \\
& \qquad \qquad \times M_\alpha M_{\beta}^{-1} M_{\alpha-\beta}^{-1}
{\alpha \choose \beta} \frac{1}{m-3}\ \tau \notag \\
& \leq C \sum\limits_{|\alpha|=3}\sum\limits_{|\beta|=1, \beta\leq
\alpha} \left(M_\beta \Vert{\partial^\beta u}\Vert_{L^\infty}
\right)
\left( M_{\alpha - \beta} \Vert{\partial^{\alpha-\beta} \nabla u}\Vert_{L^2}\right) \notag\\
& + C \tau \sum\limits_{m=4}^{\infty} \sum\limits_{|\alpha|=m}
\sum\limits_{|\beta|=1, \beta\leq \alpha} \left( M_\beta \Vert{\partial^\beta
u}\Vert_{L^\infty}\right) \notag\\
& \qquad \qquad \times \left( M_{\alpha-\beta} \Vert{\partial^{\alpha-\beta} \nabla u}\Vert_{L^2} \frac{(m-3)
\tau^{m-4}}{(m-3)!^s}\right) \frac{m}{m-3}.\end{aligned}$$ The first sum on the far right side of can be estimated by $$\begin{aligned}
C |u|_{1,\infty} |\nabla u|_2
\leq C |u|_{1,\infty} |u|_3.\end{aligned}$$ Since $m \geq 4$, Lemma \[lemma:product\] implies that the second term on the far right side of is bounded by $$\begin{aligned}
C \tau |u|_{1,\infty} \sum\limits_{m=4}^{\infty} |\nabla u|_{m-1} \frac{(m-3)
\tau^{m-4}}{(m-3)!^s} \leq C \tau |u|_{1,\infty}
\Vert{u}\Vert_{Y_\tau},\end{aligned}$$ concluding the proof of .
Proof of : As in the proof of above, we have $$\begin{aligned}
\sum\limits_{m=3}^{\infty} \CCC_{m,2} & \leq
C \sum\limits_{|\alpha|=3,4}
\sum\limits_{|\beta|=2,\beta\leq\alpha} \tau^{m-3}
\left( M_\beta \Vert{\partial^\beta u}\Vert_{L^\infty}\right) \left(
M_{\alpha-\beta} \Vert{\partial^{\alpha-\beta} \nabla
u}\Vert_{L^2}\right)\notag\\
& \qquad \qquad \times M_\alpha M_{\beta}^{-1} M_{\alpha-\beta}^{-1} {\alpha \choose \beta} \notag\\
& + C\sum\limits_{m=5}^{\infty} \sum\limits_{|\alpha|=m}
\sum\limits_{|\beta|=2, \beta\leq \alpha}
\left( M_\beta \Vert{\partial^\beta u}\Vert_{L^\infty} \right) \left(M_{\alpha-\beta} \Vert{\partial^{\alpha-\beta} \nabla u}\Vert_{L^2} \frac{(m-4) \tau^{m-5}}{(m-4)!^s}
\right) \notag\\
& \qquad \qquad \times M_\alpha M_{\beta}^{-1} M_{\alpha-\beta}^{-1} {\alpha \choose \beta} \frac{1}{(m-4)
(m-3)^s} \tau^2. \label{eq:low2-1}\end{aligned}$$ Using Lemma \[lemma:product\], the first sum on the right of can be estimated from above by $$\begin{aligned}
C |u|_{2,\infty} |\nabla u|_1 + C \tau |u|_{2,\infty} |\nabla u|_2
\leq C |u|_{2,\infty} |u|_2 + C \tau |u|_{2,\infty} |u|_3.\end{aligned}$$ On the other hand, since $s\geq 1$, $|\beta|=2$, and $|\alpha|=m
\geq 5$, we have by Lemma \[lemma:choose\] that $$\begin{aligned}
M_\alpha M_{\beta}^{-1} M_{\alpha-\beta}^{-1} {\alpha \choose \beta}
\frac{1}{(m-4)
(m-3)^s} \leq {m\choose 2} \frac{1}{(m-4)(m-3)} \leq C.\end{aligned}$$ By Lemma \[lemma:product\], the second sum on the right of is thus bounded by $$\begin{aligned}
C \tau^2 \sum\limits_{m=5}^{\infty} |u|_{2,\infty} |\nabla u|_{m-2}
\frac{(m-4) \tau^{m-5}}{(m-4)!^s} \leq C \tau^2 |u|_{2,\infty}
\Vert{u}\Vert_{Y_\tau}.\end{aligned}$$ This proves the desired estimate.
Proof of : We first observe that the Hölder inequality and the Sobolev inequality give $$\begin{aligned}
\Vert{\partial^\beta u \cdot \nabla \partial^{\alpha-\beta}
u}\Vert_{L^2} \leq C \Vert{\partial^\beta
u}\Vert_{L^2}^{1/4} \Vert{\Delta \partial^\beta u}\Vert_{L^2}^{3/4} \Vert{\nabla \partial^{\alpha-\beta}
u}\Vert_{L^2}.\end{aligned}$$ Therefore we can bound the right hand side of as follows $$\begin{aligned}
& \sum\limits_{m=6}^{\infty} \sum\limits_{j=3}^{[m/2]} \CCC_{m,j}
\leq \sum\limits_{m=6}^{\infty} \sum\limits_{j=3}^{[m/2]}
\sum\limits_{|\alpha|=m}
\sum\limits_{|\beta|=j, \beta\leq \alpha} \left( M_\beta \Vert{\partial^\beta u}\Vert_{L^2} \frac{\tau^{j-3}}{(j-3)!^s}\right)^{1/4} \tau^{3/2} {\mathcal
A}_{\alpha,\beta,s} \notag\\
&\qquad \qquad \times \left( M_\beta \Vert{\partial^\beta \Delta u}\Vert_{L^2} \frac{\tau^{j-1}}{(j-1)!^s}\right)^{3/4} \left( M_{\alpha-\beta}
\Vert{\partial^{\alpha-\beta} \nabla u}\Vert_{L^2} \frac{(m-j-2)
\tau^{m-j-3}}{(m-j-2)!^s}\right) ,\end{aligned}$$ where $$\begin{aligned}
{\mathcal A}_{\alpha,\beta,s} = M_\alpha M_{\beta}^{-1}
M_{\alpha-\beta}^{-1} {\alpha \choose \beta} \frac{(j-3)!^{s/4}
(j-1)!^{3s/4} (m-j-2)!^s}{(m-3)!^s
(m-j-2)}.\end{aligned}$$ By Lemma \[lemma:choose\], we have that for $ m \geq 6$ and $3\leq j \leq [ m/2]$ $$\begin{aligned}
{\mathcal
A}_{\alpha,\beta,s} & \leq C {m \choose j} {m-3 \choose j-1}^{-s}
\frac{1}{(m-j-2)(j-1)^{s/4}
(j-2)^{s/4}} \notag\\
& \leq C {m-3 \choose j-1}^{-s+1} \frac{1}{j^{1 + s/2}}.\end{aligned}$$ Since $s\geq 1$ the above chain of inequalities gives that ${\mathcal A}_{\alpha,\beta,s}
\leq C$. Together with Lemma \[lemma:product\] and the discrete Hölder inequality this shows that $$\begin{aligned}
\sum\limits_{m=6}^{\infty} \sum\limits_{j=3}^{[m/2]} \CCC_{m,j} &
\leq C \tau^{3/2} \sum\limits_{m=6}^{\infty}
\sum\limits_{j=3}^{[m/2]} \left( |u|_{j} \frac{\tau^{j-3}}{(j-3)!^s}
\right)^{1/4} \left( |\Delta u|_{j} \frac{\tau^{j-1}}{(j-1)!^s}
\right)^{3/4} \notag \\
& \qquad \times \left( |\nabla u|_{m-j} \frac{(m-j-2)
\tau^{m-j-3}}{(m-j-2)!^s}\right).\end{aligned}$$ The discrete Young and Hölder inequalities then give $$\begin{aligned}
\sum\limits_{m=6}^{\infty} \sum\limits_{j=3}^{[m/2]} \CCC_{m,j} &
\leq C \tau^{3/2} \Vert{u}\Vert_{X_\tau} \Vert{u}\Vert_{Y_\tau},\end{aligned}$$ concluding the proof of .
To prove – we proceed as in the proofs of – above, with the roles of $j$ and $m-j$ reversed. Instead of estimating $\Vert{\partial^\beta u \cdot \nabla
\partial^{\alpha-\beta} u}\Vert_{L^2}$ with $\Vert{\partial^\beta u}\Vert_{L^\infty}
\Vert{\partial^{\alpha-\beta} \nabla u}\Vert_{L^2}$ we bound $$\begin{aligned}
\Vert{\partial^\beta u \cdot \nabla
\partial^{\alpha-\beta} u}\Vert_{L^2} \leq \Vert{\partial^\beta u}\Vert_{L^2}
\Vert{\partial^{\alpha-\beta} \nabla u}\Vert_{L^\infty}.\end{aligned}$$ We omit further details. This concludes the proof of Lemma \[lemma:commutator\].
The pressure estimate {#sec:pressure}
=====================
In the proof of Lemma \[lemma:pressure\] we need the following higher regularity estimate on the solution of the Neumann problem associated to the Poisson equation in the half-space.
\[lemma:neumann\] Assume that $p$ is a smooth solution of the Neumann problem $$\begin{aligned}
- \Delta p &= v\ \mbox{in}\ \Omega,\\
\frac{\partial p}{\partial n} &= 0\ \mbox{on}\ \partial \Omega ,\end{aligned}$$ with $v \in C^\infty(\Omega)$. Then there is a universal constant $C>0$ such that $$\begin{aligned}
\Vert{\partial_3 \partial^\alpha p}\Vert_{L^2} &\leq C
\sum\limits_{\substack{s,t \in {\mathbb N}_0,|\beta| = m-1\\ \beta' - \alpha' = (2s,2t)}} {s+t \choose s} \Vert{\partial^\beta
v}\Vert_{L^2},
\label{eq:neumannlemma3}\end{aligned}$$ for any $m\geq 1$ and any multi-index $\alpha\in{\mathbb N}_{0}^{3}$ with $|\alpha|=m$ and $\alpha_3 \neq 0$. Additionally, if $\alpha_3
\geq 2$ then $$\begin{aligned}
\Vert{\partial_1 \partial^\alpha p}\Vert_{L^2} &\leq C
\sum\limits_{\substack{ s,t \in {\mathbb N}_0,|\beta| = m-1\\ \beta' - \alpha' = (2s+1,2t)}} {s+t \choose s} \Vert{\partial^\beta
v}\Vert_{L^2},
\label{eq:neumannlemma1}\\
\Vert{\partial_2 \partial^\alpha p}\Vert_{L^2} &\leq C
\sum\limits_{\substack{ s,t \in {\mathbb N}_0, |\beta| = m-1\\ \beta' - \alpha' = (2s,2t+1)}} {s+t \choose s} \Vert{\partial^\beta v}\Vert_{L^2},
\label{eq:neumannlemma2}\end{aligned}$$ where $C>0$ is a universal constant.
We emphasize that the constant $C$ in the above lemma is independent of $\alpha$ and $m$. In we are summing over the set $$\begin{aligned}
\{ \beta \in {\mathbb N}_{0}^3 : |\beta|=m-1,\ \exists s,t \in
{\mathbb N}_{0}\ \mbox{such that}\ \beta' - \alpha' = (2s,2t)\}\end{aligned}$$ and similar conventions are used in , , and throughout this section.
In order to avoid repetition, we only prove and indicate the necessary changes for and . Let $\Delta' =
\partial_{11} +
\partial_{22}$ be the tangential Laplacian. Using induction on $k
\in {\mathbb N}_{0}$ we obtain the identity $$\begin{aligned}
\partial_{3}^{2k+2} p = (-\Delta')^{k+1}p - \sum\limits_{j=0}^{k} \partial_{3}^{2j}
(-\Delta')^{k-j} v,\end{aligned}$$ and upon applying $\partial_3$ to the above equation $$\begin{aligned}
\partial_{3}^{2k+3} p = \partial_3(-\Delta')^{k+1}p - \sum\limits_{j=0}^{k} \partial_{3}^{2j+1}
(-\Delta')^{k-j} v.\end{aligned}$$ Therefore given $|\alpha|=m$, with $\alpha_3 =2k+1\geq 1$, we have $$\begin{aligned}
\partial_{3} \partial^\alpha p = \partial_{3}^{2k+2} \partial^{\alpha'} p = (-\Delta')^{k+1}\partial^{\alpha'} p + \sum\limits_{j=0}^{k} (-1)^{k-j+1} \partial_{3}^{2j}
(\partial_{11} + \partial_{22})^{k-j} \partial^{\alpha'} v,
\label{eq:p1}\end{aligned}$$ and if $\alpha_3 = 2k+2 \geq 2$, we have $$\begin{aligned}
\partial_{3} \partial^\alpha p =\partial_{3}^{2k+3} \partial^{\alpha'} p = \partial_3(-\Delta')^{k+1}\partial^{\alpha'} p + \sum\limits_{j=0}^{k} (-1)^{k-j+1} \partial_{3}^{2j+1}
(\partial_{11} + \partial_{22})^{k-j} \partial^{\alpha'} v.
\label{eq:p2}\end{aligned}$$ Since $n=(0,0,-1)$, the function $g = (-\Delta')^k
\partial^{\alpha'}p$ satisfies the Neumann problem $$\begin{aligned}
-\Delta g = (-\Delta')^k \partial^{\alpha'} v\qquad \mbox{in}\ \Omega,\\
\frac{\partial g}{\partial n} = 0\qquad \mbox{on} \ \partial \Omega.\end{aligned}$$ Using the classical $H^2$-regularity argument for the Neumann problem we then have $$\begin{aligned}
\Vert{\Delta' g}\Vert_{L^2} \leq C
\Vert{(-\Delta')^k \partial^{\alpha'} v}\Vert_{L^2},\end{aligned}$$ and $$\begin{aligned}
\Vert{\partial_3 \Delta' g}\Vert_{L^2} \leq C
\Vert{\partial_3 (-\Delta')^k \partial^{\alpha'}
v}\Vert_{L^2},\end{aligned}$$ for a positive universal constant $C$. Combining the above estimates with , , and the identity $$\begin{aligned}
(\partial_{11} + \partial_{22})^m w = \sum\limits_{s=0}^{m} {m
\choose s} \partial_{1}^{2s} \partial_{2}^{2m-2s} w,\end{aligned}$$ we obtain $$\begin{aligned}
\Vert{\partial_3 \partial^\alpha p}\Vert_{L^2} & \leq C
\sum_{j=0}^{k} \Vert{\partial_{3}^{2j} (\partial_{11} + \partial_{22})^{k-j} \partial^{\alpha'}
v}\Vert_{L^2} \notag\\
&\leq C \sum\limits_{j=0}^{k} \sum\limits_{s=0}^{k-j} {k-j \choose s}
\Vert{\partial_{1}^{2s + \alpha_1} \partial_{2}^{2k-2j-2s+\alpha_2} \partial_{3}^{2j}
v}\Vert_{L^2} \label{eq:p3}\end{aligned}$$ if $\alpha_3 = 2k+1\geq 1$, and $$\begin{aligned}
\Vert{\partial_3 \partial^\alpha p}\Vert_{L^2} & \leq C
\sum_{j=0}^{k} \Vert{\partial_{3}^{2j+1} (\partial_{11} + \partial_{22})^{k-j} \partial^{\alpha'}
v}\Vert_{L^2} \notag\\
&\leq C
\sum\limits_{j=0}^{k} \sum\limits_{s=0}^{k-j} {k-j \choose s}
\Vert{\partial_{1}^{2s + \alpha_1} \partial_{2}^{2k-2j-2s+\alpha_2} \partial_{3}^{2j+1}
v}\Vert_{L^2} \label{eq:p44}\end{aligned}$$ if $\alpha_3 = 2k+2\geq 2$. To simplify above, let $t
= k-j-s \geq 0$ and $\beta = (2s + \alpha_1,2k-2j-2s+\alpha_2,2j) =
(2s + \alpha_1, 2t + \alpha_2, \alpha_3 - 1 - 2s -2t) \in {\mathbb
N}_{0}^{3}$. Since $|\alpha|=m$ and $\alpha_3 = 2k+1$, we have $|\beta|=m-1$, and by re-indexing the sums, can be re-written as $$\begin{aligned}
\Vert{\partial_3 \partial^\alpha p}\Vert_{L^2} \leq C
\sum\limits_{\substack{s,t\in{\mathbb N}_0, |\beta|=m-1\\ \beta'-\alpha'=(2s,2t)}} \ {s+t \choose s}
\Vert{\partial^\beta v}\Vert_{L^2}.\end{aligned}$$ The above estimate also holds for $\alpha_3 = 2k+2$ with the substitution $\beta = (2s + \alpha_1,2k-2j-2s+\alpha_2,2j+1)$, thereby simplifying the upper bound , and concluding the proof of .
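Since the induction identities above follow purely algebraically from the decomposition $\Delta=\Delta'+\partial_{33}$, they can be tested symbolically on an arbitrary smooth function; the short sympy sketch below (an independent sanity check, not part of the argument) verifies them on a polynomial for $k\leq 3$.

```python
# Symbolic check of the identity
#   d3^(2k+2) p = (-Lap')^(k+1) p - sum_{j=0}^{k} d3^(2j) (-Lap')^(k-j) v,
# with v = -Lap p and Lap' = d11 + d22.  The identity is purely algebraic, so any smooth p will do.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
p = x1**3 * x3**2 + x2**4 * x3**3 + x1 * x2 * x3**5 + x1**2 * x2**2 * x3**4

lap  = lambda w: sum(sp.diff(w, xi, 2) for xi in (x1, x2, x3))
lapT = lambda w: sp.diff(w, x1, 2) + sp.diff(w, x2, 2)      # tangential Laplacian
v = -lap(p)

def power(op, w, n):                                        # apply op to w, n times
    for _ in range(n):
        w = op(w)
    return w

for k in range(4):
    lhs = sp.diff(p, x3, 2 * k + 2)
    rhs = power(lambda w: -lapT(w), p, k + 1) \
        - sum(sp.diff(power(lambda w: -lapT(w), v, k - j), x3, 2 * j)
              for j in range(k + 1))
    assert sp.expand(lhs - rhs) == 0
print("identity verified for k = 0, 1, 2, 3")
```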
To prove we proceed as above and obtain $$\begin{aligned}
\Vert{\partial_1 \partial^\alpha p}\Vert_{L^2} & =
\Vert{\partial_{1}^{\alpha_1 +1}
\partial_{2}^{\alpha_2} \partial_{3}^{2k+2} p }\Vert_{L^2} \notag\\
& \qquad \qquad \leq C
\sum\limits_{j=0}^{k} \sum\limits_{s=0}^{k-j} {k-j \choose s}
\Vert{\partial_{1}^{\alpha_1 + 2s +1} \partial_{2}^{\alpha_2 + 2k -
2j - 2s} \partial_{3}^{2j} v}\Vert_{L^2} \label{eq:D1even}\end{aligned}$$ if $\alpha_3 = 2k+2 \geq 2$, and $$\begin{aligned}
\Vert{\partial_1 \partial^\alpha p}\Vert_{L^2} &=
\Vert{\partial_{1}^{\alpha_1 +1}
\partial_{2}^{\alpha_2} \partial_{3}^{2k+3} p }\Vert_{L^2} \notag\\
& \qquad \qquad \leq C
\sum\limits_{j=0}^{k} \sum\limits_{s=0}^{k-j} {k-j \choose s}
\Vert{\partial_{1}^{\alpha_1 + 2s +1} \partial_{2}^{\alpha_2 + 2k -
2j - 2s} \partial_{3}^{2j+1} v}\Vert_{L^2} \label{eq:D1odd}\end{aligned}$$ if $\alpha_3 = 2k+3 \geq 3$. In we let $t = k-j-s
\geq 0$ and $\beta=(\alpha_1 + 2s + 1, \alpha_2 + 2t,2j) = (\alpha_1
+ 2s + 1, \alpha_2 + 2t,\alpha_3 - 2 - 2s - 2t)$, since $\alpha_3 =
2k +2$ and $|\alpha|=m$. Similarly in we let $\beta
= (\alpha_1 + 2s + 1, \alpha_2 + 2t,2j+1) = (\alpha_1 + 2s + 1,
\alpha_2 + 2t,\alpha_3 - 2 - 2s - 2t)$, since $\alpha_3 = 2k +3$ and $|\alpha|=m$. The above substitutions and re-indexing prove . Upon permuting the first and second coordinates, this also proves .
\[rem:pres\] We note that Lemma \[lemma:neumann\] does not give an estimate for $\Vert{\partial_1 \partial^\alpha p}\Vert_{L^2}$ and $\Vert{\partial_2 \partial^\alpha p}\Vert_{L^2}$ if $\alpha_3=1$. In this case we note that the function $g = \partial^{\alpha'} p$ satisfies the Neumann problem $$\begin{aligned}
-\Delta g = \partial^{\alpha'} v\qquad \mbox{in}\ \Omega,\\
\frac{\partial g}{\partial n} = 0\qquad \mbox{on} \ \partial \Omega.\end{aligned}$$ The classical $H^2$-regularity argument then gives $$\begin{aligned}
\Vert{\partial_1 \partial^\alpha p}\Vert_{L^2} = \Vert{\partial_1
\partial_3
\partial^{\alpha'} p}\Vert_{L^2}\leq C \Vert{\partial^{\alpha'}
v}\Vert_{L^2},\end{aligned}$$ and $$\begin{aligned}
\Vert{\partial_2 \partial^\alpha p}\Vert_{L^2} = \Vert{\partial_2
\partial_3
\partial^{\alpha'} p}\Vert_{L^2}\leq C \Vert{\partial^{\alpha'}
v}\Vert_{L^2},\end{aligned}$$ for a positive universal constant $C>0$.
We note that Lemma \[lemma:neumann\] differs from the classical higher regularity estimates (cf. [@GT; @LM; @T]) for the Neumann problem in that the constant $C$ in – does not increase with $m$. The dependence on $m$ is encoded in the sums with binomial weights on the right side of –.
The following lemma shows that only a factor of $m$ is lost in the above higher regularity estimates if each $\Vert{\partial^\beta
v}\Vert_{L^2}$ term is paired with a proper binomial weight. This explains the definition of the homogeneous Sobolev norms $|\cdot|_m$ in .
\[lemma:\*\] There exists a positive universal constant $C$ such that $$\begin{aligned}
\sum\limits_{s=0}^{[\beta_1/2]} \sum\limits_{t=0}^{[\beta_2/2]} {
\beta_1 + \beta_2 - 2s - 2t \choose \beta_1 - 2s} {s+t \choose
s} &\leq C m {\beta_1 + \beta_2 \choose \beta_1} \label{*}
\end{aligned}$$ for any $m\geq 3$ and any multi-index $\beta =
(\beta_1,\beta_2,m-1-\beta_1-\beta_2) \in
{\mathbb N}_{0}^{3}$. Additionally, if $\beta_1 \geq 1$ we have $$\begin{aligned}
\sum\limits_{s=0}^{[(\beta_1-1)/2]} \sum\limits_{t=0}^{[\beta_2/2]} {
\beta_1 + \beta_2 - 2s -1 - 2t \choose \beta_1 - 2s -1} {s+t \choose
s} &\leq C m {\beta_1 + \beta_2 \choose \beta_1}, \label{*1}
\end{aligned}$$ while if $\beta_2 \geq 1$ we have $$\begin{aligned}
\sum\limits_{s=0}^{[\beta_1/2]} \sum\limits_{t=0}^{[(\beta_2-1)/2]} {
\beta_1 + \beta_2 - 2s - 2t-1 \choose \beta_1 - 2s} {s+t \choose
s} &\leq C m {\beta_1 + \beta_2 \choose \beta_1}, \label{*2}
\end{aligned}$$ where $C$ is a universal constant.
We note that in particular the constant $C$ is independent of $m$ and $\beta$.
Due to symmetry we only give the proof of . Estimates and are proven [*mutatis mutandis*]{}. First we recall that given $\alpha,\gamma \in {\mathbb N}_{0}^{3}$, with $\gamma \leq \alpha$, we have $$\begin{aligned}
{\alpha \choose \gamma } \leq {|\alpha| \choose |\gamma|}.\end{aligned}$$ Using the above inequality we get $$\begin{aligned}
&\sum\limits_{s=0}^{[\beta_1/2]} \sum\limits_{t=0}^{[\beta_2/2]} {
\beta_1 + \beta_2 - 2s - 2t \choose \beta_1 - 2s} {s+t \choose
s} {\beta_1 + \beta_2 \choose \beta_1}^{-1} \notag\\
&\qquad \qquad \qquad \leq \sum\limits_{s=0}^{[\beta_1/2]} \sum\limits_{t=0}^{[\beta_2/2]} { \beta_1 + \beta_2 - s - t \choose \beta_1 - s} {\beta_1 + \beta_2 \choose \beta_1}^{-1} \leq \sum\limits_{s=0}^{[\beta_1/2]} \sum\limits_{t=0}^{[\beta_2/2]} {s+t \choose s}^{-1}.\end{aligned}$$ The lemma is then proven if we find a constant $C$ such that $$\begin{aligned}
\sum\limits_{s=0}^{[\beta_1/2]} \sum\limits_{t=0}^{[\beta_2/2]}
{s+t \choose s}^{-1} \leq C (\beta_1 + \beta_2).\end{aligned}$$ Without loss of generality we may assume that $\beta_1,\beta_2 \geq
4$. We split the above sum into $$\begin{aligned}
\sum\limits_{s=0}^{[\beta_1/2]} \sum\limits_{t=0}^{[\beta_2/2]} {s+t \choose s}^{-1} &\leq \sum\limits_{t=0}^{[\beta_2/2]} {t \choose 0}^{-1}+{t+1\choose 1}^{-1} + \sum\limits_{s=0}^{[\beta_1/2]} {s \choose s}^{-1} + {s+1 \choose s}^{-1} \notag \\
& \qquad + \sum\limits_{s=2}^{[\beta_1/2]} \sum\limits_{t=2}^{[\beta_2/2]} {s+t \choose s}^{-1} \notag \\
& = T_1 + T_2 + T_3. \label{3}\end{aligned}$$ It is clear that $$\begin{aligned}
\label{eq:testtest}
T_1 + T_2 \leq C (\beta_1 + \beta_2).\end{aligned}$$ We estimate $T_3$ by appealing to the Stirling estimate (cf. [@R p. 200]) $$\begin{aligned}
e^{7/8} \sqrt{n} \left(\frac ne\right)^n < n! <e \sqrt{n} \left(\frac
ne\right)^n.\end{aligned}$$ This implies $$\begin{aligned}
\frac{s! t!}{(s+t)!} \leq e^{9/8} \sqrt{\frac{st}{s+t}} \frac{1}{(1 + s/t)^t} \frac{1}{(1+t/s)^s}.\end{aligned}$$ Thus we obtain $$\begin{aligned}
T_3 &\leq C \sum\limits_{s=2}^{[\beta_1/2]} \sum\limits_{t=2}^{[\beta_2/2]} \sqrt{t} \frac{1}{(1+t/s)^s}. \label{9}\end{aligned}$$ Since $s\geq 2$, the Binomial Theorem implies $$\begin{aligned}
\left(1+\frac ts\right)^s \geq 1 + {s \choose 2}
\left(\frac{t}{s}\right)^2,\end{aligned}$$ and by we have $$\begin{aligned}
T_3 &\leq C \sum\limits_{s=2}^{[\beta_1/2]} \sum\limits_{t=2}^{[\beta_2/2]} \sqrt{t} \frac{1}{t^2} \leq C \left(\sum\limits_{s=2}^{[\beta_1/2]} 1\right) \left( \sum\limits_{t=2}^{\infty} \frac{1}{t^{3/2}} \right) \leq C \beta_1.\end{aligned}$$ Since $\beta_1
+\beta_2 \leq m-1$, the above inequality, , and complete the proof of the lemma.
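A direct numerical evaluation is also reassuring: with the smallest admissible $m$, the ratio of the double sum in the lemma to $m{\beta_1+\beta_2\choose\beta_1}$ stays below $1$ for all $\beta_1,\beta_2\leq 60$. The following sketch, which of course does not replace the proof, performs this check.

```python
# Numerical check of Lemma (*): the double sum is at most C m binom(b1+b2, b1) with C
# independent of b1, b2.  We take the smallest admissible m, namely m = max(3, b1+b2+1).
from math import comb

worst = 0.0
for b1 in range(61):
    for b2 in range(61):
        lhs = sum(comb(b1 + b2 - 2 * s - 2 * t, b1 - 2 * s) * comb(s + t, s)
                  for s in range(b1 // 2 + 1) for t in range(b2 // 2 + 1))
        m = max(3, b1 + b2 + 1)
        worst = max(worst, lhs / (m * comb(b1 + b2, b1)))
print(f"largest observed ratio: {worst:.3f}")   # stays below 1 in this range
```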
First, note that since $p$ satisfies the elliptic Neumann problem – we may use Lemma \[lemma:neumann\] to estimate higher derivatives of $\partial_3 p$ as $$\begin{aligned}
\sum\limits_{|\alpha|=m, \alpha_3\neq
0} M_\alpha \Vert{\partial_3 \partial^\alpha
p}\Vert_{L^2} \leq C
\sum\limits_{|\alpha|=m, \alpha_3\neq 0} \sum\limits_{\substack{s,t\in {\mathbb N}_0,|\beta|=m-1\\ \beta'-\alpha' = (2s,2t)}} M_\alpha
{s+t \choose s} \Vert{\partial^\beta
(\partial_i u_k \partial_k u_i)}\Vert_{L^2}.\end{aligned}$$ By re-indexing the terms in the parenthesis, the right side of the above inequality may be re-written as $$\begin{aligned}
\sum\limits_{|\beta|=m-1} \sum\limits_{s=0}^{[\beta_1/2]}
\sum\limits_{t=0}^{[\beta_2/2]}
{\beta_1 + \beta_2 -2s -2t \choose \beta_1 -2s}
{s+t \choose s} \Vert{\partial^\beta
(\partial_i u_k \partial_k u_i)}\Vert_{L^2}.\end{aligned}$$ Using the estimate of Lemma \[lemma:\*\] we bound the above expression by $$\begin{aligned}
C m \sum\limits_{|\beta|=m-1} M_\beta
\Vert{\partial^\beta (\partial_i u_k \partial_k u_i)}\Vert_{L^2}\end{aligned}$$ and therefore $$\begin{aligned}
\label{eq:p3est}
& \sum\limits_{m=3}^{\infty} \left(
\sum\limits_{|\alpha|=m,\alpha_3\neq 0} M_\alpha \Vert{\partial_3 \partial^\alpha
p}\Vert_{L^2}\right) \frac{\tau^{m-3}}{(m-3)!^s} \notag\\
& \qquad \qquad \qquad \leq C
\sum\limits_{m=3}^{\infty} \sum\limits_{|\beta|=m-1} M_\beta
\Vert{\partial^\beta (\partial_i u_k \partial_k u_i)}\Vert_{L^2} \frac{m
\tau^{m-3}}{(m-3)!^s}.\end{aligned}$$ On the other hand, higher derivatives of $\partial_1 p$ are estimated using the decomposition $$\begin{aligned}
\sum\limits_{|\alpha|=m,
\alpha_3\neq 0} M_\alpha \Vert{\partial_1
\partial^\alpha p}\Vert_{L^2} & = \sum\limits_{|\alpha|=m,
\alpha_3=1} M_\alpha \Vert{\partial_1
\partial^\alpha p}\Vert_{L^2} + \sum\limits_{|\alpha|=m,
\alpha_3\geq 2} M_\alpha \Vert{\partial_1
\partial^\alpha p}\Vert_{L^2}. \label{eq:ppp1}\end{aligned}$$ By Remark \[rem:pres\], the first term on the right of is bounded by $$\begin{aligned}
C \sum\limits_{|\alpha|=m,
\alpha_3=1} M_\alpha \Vert{
\partial^{\alpha'} \left(\partial_i u_k \partial_k u_i\right) }\Vert_{L^2}
= C \sum\limits_{|\beta|=m-1, \beta_3 = 0} M_\beta \Vert{
\partial^\beta \left(\partial_i u_k \partial_k u_i\right)}\Vert_{L^2}.
\label{eq:ppp2}\end{aligned}$$ Using estimate , the second term on the right side of is estimated by $$\begin{aligned}
C \sum\limits_{|\alpha|=m,\alpha_3 \geq 2} \left( \sum\limits_{\substack{s,t\in {\mathbb N}_{0},|\beta|=m-1\\ \beta' - \alpha' = (2s+1,2t)}}
M_\alpha {s+t \choose s} \Vert{\partial^\beta \left( \partial_i u_k \partial_k u_i\right)}\Vert_{L^2}\right).\end{aligned}$$ By re-indexing the above expression equals $$\begin{aligned}
C \sum\limits_{|\beta|=m-1, \beta_1 \geq 1}
\sum\limits_{s=0}^{[(\beta_1-1)/2]} \sum\limits_{t=0}^{[\beta_2 /2]}
{\beta_1 -1 + \beta_2 - 2s -2t \choose \beta_1 - 2s - 1} {s+t
\choose s} \Vert{\partial^\beta \left(\partial_i u_k \partial_k u_i\right)}\Vert_{L^2},\end{aligned}$$ and using it is bounded from above by $$\begin{aligned}
C m \sum\limits_{|\beta|=m-1, \beta_1 \geq 1} M_\beta \Vert{\partial^\beta \left(\partial_i u_k \partial_k u_i\right)}\Vert_{L^2}. \label{eq:ppp3}\end{aligned}$$ Therefore, by , , and , we have $$\begin{aligned}
\label{eq:p1est}
&\sum\limits_{m=3}^{\infty} \left(
\sum\limits_{|\alpha|=m,\alpha_3\neq 0} M_\alpha \Vert{\partial_1 \partial^\alpha
p}\Vert_{L^2}\right) \frac{\tau^{m-3}}{(m-3)!^s} \notag\\
& \qquad \qquad \qquad \leq C
\sum\limits_{m=3}^{\infty} \sum\limits_{|\beta|=m-1} M_\beta
\Vert{\partial^\beta (\partial_i u_k \partial_k u_i)}\Vert_{L^2} \frac{m
\tau^{m-3}}{(m-3)!^s}.\end{aligned}$$ By symmetry, we also get $$\begin{aligned}
\label{eq:p2est}
& \sum\limits_{m=3}^{\infty} \left( \sum\limits_{|\alpha|=m, \alpha_3\neq
0} M_\alpha \Vert{\partial_2 \partial^\alpha
p}\Vert_{L^2}\right) \frac{\tau^{m-3}}{(m-3)!^s} \notag\\
& \qquad \qquad \qquad \leq C \sum\limits_{m=3}^{\infty} \sum\limits_{|\beta|=m-1} M_\beta
\Vert{\partial^\beta (\partial_i u_k \partial_k u_i)}\Vert_{L^2} \frac{m
\tau^{m-3}}{(m-3)!^s}.\end{aligned}$$ Combining , , , and the Leibniz rule we obtain $$\begin{aligned}
\PPP \leq C \sum\limits_{m=3}^{\infty} \sum\limits_{|\beta|=m-1} M_\beta
\Vert{\partial^\beta (\partial_i u_k \partial_k u_i)}\Vert_{L^2} \frac{m
\tau^{m-3}}{(m-3)!^s}\leq C \sum\limits_{m=3}^{\infty} \sum\limits_{j=0}^{m-1}
\PPP_{m,j}, \label{eq:Pest}\end{aligned}$$ where $$\begin{aligned}
\PPP_{m,j} = \frac{m \tau^{m-3}}{(m-3)!^s} \sum\limits_{|\beta|=m-1}
\sum\limits_{|\gamma| = j, \gamma \leq \beta}
M_\beta {\beta \choose \gamma}
\Vert{ \partial^\gamma \partial_i u_k \cdot \partial^{\beta-\gamma}
\partial_k u_i}\Vert_{L^2}.\end{aligned}$$We split the right side of into seven terms according to the values of $m$ and $j$. For low $j$, we claim $$\begin{aligned}
& \sum\limits_{m=3}^{\infty} \PPP_{m,0} \leq C |u|_{1,\infty} |u|_3 + C\tau |u|_{1,\infty} \Vert{u}\Vert_{Y_\tau}\label{eq:Plow0}\\
& \sum\limits_{m=3}^{\infty} \PPP_{m,1} \leq C |u|_{2,\infty} |u|_2 + C\tau |u|_{2,\infty} |u|_3 + C \tau^2 |u|_{2,\infty} \Vert{u}\Vert_{Y_\tau}\label{eq:Plow1}\\
& \sum\limits_{m=5}^{\infty} \PPP_{m,2} \leq C\tau^2 |u|_{3,\infty} |u|_3 + C \tau^3 |u|_{3,\infty} \Vert{u}\Vert_{Y_\tau}\label{eq:Plow2}
\end{aligned}$$ for intermediate $j$, we have $$\begin{aligned}
& \sum\limits_{m=8}^{\infty} \sum\limits_{j=3}^{[m/2]-1} \PPP_{m,j} \leq C \tau^{3/2} \Vert{u}\Vert_{X_\tau} \Vert{u}\Vert_{Y_\tau}\label{eq:Plowmed}\\
& \sum\limits_{m=6}^{\infty} \sum\limits_{j=[m/2]}^{m-3} \PPP_{m,j} \leq C \tau^{3/2} \Vert{u}\Vert_{X_\tau} \Vert{u}\Vert_{Y_\tau} \label{eq:Phighmed}
\end{aligned}$$ and for high $j$, we claim $$\begin{aligned}
& \sum\limits_{m=4}^{\infty} \PPP_{m,m-2} \leq C\tau |u|_{2,\infty} |u|_3 + C \tau^2 |u|_{2,\infty} \Vert{u}\Vert_{Y_\tau}\label{eq:Phighm-2}\\
& \sum\limits_{m=3}^{\infty} \PPP_{m,m-1} \leq C |u|_{1,\infty} |u|_3 + C \tau |u|_{1,\infty} \Vert{u}\Vert_{Y_\tau}\label{eq:Phighm-1}.\end{aligned}$$ The above estimates are proven similarly to – in the proof of Lemma \[lemma:commutator\]. Due to symmetry we have presented there the proofs of the estimates where $j\leq m-j$. For completeness of the exposition we provide the proofs of –, where we have $m-j
< j$.
Proof of : We proceed as in the proof of in Section \[sec:commutator\]. First, the Hölder and Sobolev inequalities imply that $$\begin{aligned}
\Vert{\partial^\gamma \partial_i u_k \cdot \partial^{\beta-\gamma}
\partial_k u_i}\Vert_{L^2} \leq C \Vert{\partial^\gamma \partial_i
u_k}\Vert_{L^2} \Vert{ \partial^{\beta-\gamma}
\partial_k u_i}\Vert_{L^2}^{1/4} \Vert{\Delta \partial^{\beta-\gamma}
\partial_k u_i}\Vert_{L^2}^{3/4}.\end{aligned}$$ Therefore, $$\begin{aligned}
\sum\limits_{m=6}^{\infty} \sum\limits_{j = [m/2]}^{m-3}
\PPP_{m,j} & \leq C \sum\limits_{m=6}^{\infty} \sum\limits_{j =
[m/2]}^{m-3} \sum\limits_{|\beta|=m-1} \sum\limits_{|\gamma|=j, \gamma \leq \beta} \left( M_\gamma \Vert{ \partial^\gamma \partial_i u_k}\Vert_{L^2} \frac{(j-2) \tau^{j-3}}{(j-2)!^s}\right) \notag \\
&\qquad \times \left( M_{\beta-\gamma} \Vert{ \partial^{\beta-\gamma} \partial_k u_i}\Vert_{L^2} \frac{\tau^{m-j-3}}{(m-j-3)!^s}\right)^{1/4} \notag\\
& \qquad \times \left( M_{\beta-\gamma} \Vert{\Delta \partial^{\beta-\gamma} \partial_k u_i}\Vert_{L^2} \frac{\tau^{m-j-1}}{(m-j-1)!^s}\right)^{3/4} \tau^{3/2} {\mathcal
B}_{\beta,\gamma,s},\end{aligned}$$ where $$\begin{aligned}
{\mathcal B}_{\beta,\gamma,s} = M_\beta M_{\gamma}^{-1}
M_{\beta-\gamma}^{-1} {\beta \choose \gamma} \frac{m (j-2)!^s (m-j-3)!^{s/4}
(m-j-1)!^{3s/4}}{(j-2)(m-3)!^s}.\end{aligned}$$ By Lemma \[lemma:choose\] we have that for $m\geq 6$ and $[m/2]\leq j \leq m-3$ $$\begin{aligned}
{\mathcal B}_{\beta,\gamma,s} &\leq C {m-1 \choose j} {m-3 \choose
j-2}^{-s} \frac{m}{(j-2) (m-j-1)^{s/4} (m-j-2)^{s/4}}\\
& \leq C {m-3 \choose j-2}^{1-s} (m-j)^{-s/2},\end{aligned}$$since ${m-1 \choose j} \leq C {m-3 \choose j-2}$, when $j \geq m/2$. Therefore, ${\mathcal B}_{\beta,\gamma,s} \leq C$; hence, by Lemma \[lemma:product\] and the discrete Hölder inequality, we have $$\begin{aligned}
\sum\limits_{m=6}^{\infty} \sum\limits_{j = [m/2]}^{m-3} \PPP_{m,j}
&\leq C \tau^{3/2} \sum\limits_{m=6}^{\infty}
\sum\limits_{j=[m/2]}^{m-3} \left( |\partial_k
u_i|_{m-j-1}
\frac{\tau^{m-j-3}}{(m-j-3)!^s}\right)^{1/4}\\
& \qquad \times \left( |\Delta \partial_k u_i|_{m-j-1}
\frac{\tau^{m-j-1}}{(m-j-1)!^s} \right)^{3/4} \left( |\partial_i u_k|_{j}
\frac{(j-2)\tau^{j-3}}{(j-2)!^s}\right) .\end{aligned}$$ The discrete Young and Hölder inequalities then give $$\begin{aligned}
\sum\limits_{m=6}^{\infty} \sum\limits_{j = [m/2]}^{m-3}
\PPP_{m,j}\leq C \tau^{3/2}\Vert{u}\Vert_{X_\tau}
\Vert{u}\Vert_{Y_\tau},\end{aligned}$$ concluding the proof of .
Proof of : As above we use the Hölder inequality and obtain $$\begin{aligned}
\sum\limits_{m=4}^{\infty} \PPP_{m,m-2} & \leq
\sum\limits_{m=4}^{\infty} \sum\limits_{|\beta|=m-1}
\sum\limits_{|\gamma|=m-2,\gamma\leq \beta} M_\beta {\beta \choose
\gamma} \Vert{\partial^\gamma \partial_i u_k}\Vert_{L^2}
\Vert{\partial^{\beta-\gamma} \partial_k u_i}\Vert_{L^\infty}
\frac{m \tau^{m-3}}{(m-3)!^s} \\
&\leq C\tau \sum\limits_{|\beta|=3} \sum\limits_{|\gamma|=2,
\gamma\leq\beta} \Vert{\partial^\gamma
\partial_i u_k}\Vert_{L^2}
\Vert{\partial^{\beta-\gamma}
\partial_k u_i}\Vert_{L^\infty} \\
& + C \tau^2 \sum\limits_{m=5}^{\infty} \sum\limits_{|\beta|=m-1}
\sum\limits_{|\gamma|=m-2,\gamma\leq\beta} \left(M_\gamma
\Vert{\partial^\gamma \partial_i u_k}\Vert_{L^2} \frac{m
\tau^{m-5}}{(m-4)!^s}\right) \\
& \qquad \times \left(M_{\beta-\gamma}
\Vert{\partial^{\beta-\gamma}
\partial_k u_i}\Vert_{L^\infty}\right) M_\beta M_{\gamma}^{-1}
M_{\beta-\gamma}^{-1} {\beta \choose \gamma} \frac{1}{(m-3)^s}.\end{aligned}$$ Using Lemma \[lemma:choose\], Lemma \[lemma:product\], and $s\geq 1$, this shows that the far right side of the above chain of inequalities is bounded by $$\begin{aligned}
C\tau |\partial_i u_k|_2 |\partial_k u_i|_{1,\infty} &+ C \tau^2 |\partial_k u_i|_{1,\infty} \sum\limits_{m=5}^{\infty} |
\partial_i u_k|_{m-2} \frac{m \tau^{m-5}}{(m-4)!^s} \notag\\
& \qquad \qquad \qquad \leq C\tau |u|_{2,\infty} |u|_3 + C \tau^2
|u|_{2,\infty} \Vert{u}\Vert_{Y_\tau},\end{aligned}$$ thereby proving .
Proof of : By the Hölder inequality we have $$\begin{aligned}
\sum\limits_{m=3}^{\infty} \PPP_{m,m-1} &\leq
\sum\limits_{m=3}^{\infty} \sum\limits_{|\beta| = m-1} M_\beta
\Vert{ \partial^\beta \partial_i u_k}\Vert_{L^2}
\Vert{\partial_k u_i}\Vert_{L^\infty} \frac{m
\tau^{m-3}}{(m-3)!^s}\\
&\leq C |\partial_i u_k |_{2} \left \Vert\partial_k u_i\right\Vert_{L^\infty} + C \tau \left \Vert\partial_k u_i\right\Vert_{L^\infty}
\sum\limits_{m=4}^{\infty} |\partial_i u_k|_{m-1} \frac{m
\tau^{m-4}}{(m-3)!^s}\\
& \leq C |u|_{1,\infty} |u|_3 + C \tau |u|_{1,\infty}
\Vert{u}\Vert_{Y_\tau},\end{aligned}$$ which gives the desired estimate. By symmetry, we may similarly prove –, but in these cases we apply the Hölder inequality as $$\begin{aligned}
\Vert{
\partial^\gamma \partial_i u_k \cdot \partial^{\beta-\gamma}
\partial_k u_i}\Vert_{L^2} \leq \Vert{
\partial^\gamma \partial_i u_k}\Vert_{L^\infty} \Vert{ \partial^{\beta-\gamma}
\partial_k u_i}\Vert_{L^2},\end{aligned}$$ that is we reverse the roles of $j$ and $m-j$. We omit further details. This concludes the proof of Lemma \[lemma:pressure\].
Acknowledgments {#acknowledgments .unnumbered}
===============
Both authors were supported in part by the NSF grant DMS-0604886.
[99]{}
S. Alinhac and G. Métivier, *Propagation de l’analyticité locale pour les solutions de l’équation d’[E]{}uler*, Arch. Rational Mech. Anal. **92** (1986), 287–296.
C. Bardos, *Analyticité de la solution de l’équation d’[E]{}uler dans un ouvert de [$R\sp{n}$]{}*, C. R. Acad. Sci. Paris Sér. A-B **283** (1976), A255–A258.
C. Bardos and S. Benachour, *Domaine d’analycité des solutions de l’équation d’[E]{}uler dans un ouvert de [$R\sp{n}$]{}*, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) **4** (1977), 647–687.
C. Bardos, S. Benachour and M. Zerner, *Analycité des solutions périodiques de l’équation d’Euler en deux dimensions*, C. R. Acad. Sci. Paris Sér. A-B **282** (1976), A995–A998.
C. Bardos, and E.S. Titi, *Loss of smoothness and energy conserving rough weak solutions for the 3d Euler equations*, Discrete Contin. Dyn. Syst. [**3**]{} (2010), no. 2, 185–197.
J.T. Beale, T. Kato and A. Majda, *Remarks on the breakdown of smooth solutions for the $3$-D Euler equations*, Comm. Math. Phys. [**94**]{} (1984), 61–66.
S. Benachour, *Analyticité des solutions périodiques de l’équation d’[E]{}uler en trois dimensions*, C. R. Acad. Sci. Paris Sér. A-B[**283**]{} (1976), A107–A110.
J.L. Bona, Z. Grujić, *Spatial analyticity for nonlinear waves*, Math. Models Methods Appl. Sci. **13** (2003), 1–15.
J.L. Bona, Z. Grujić and H. Kalisch, *Algebraic lower bounds for the uniform radius of spatial analyticity for the generalized KdV equation*, Ann. Inst. H. Poincaré Anal. Non Linéaire **22** (2005), 783–797.
J.L. Bona, Z. Grujić and H. Kalisch, *Global solutions of the derivative Schrödinger equation in a class of functions analytic in a strip*, J. Differential Equations **229** (2006), 186–203.
J.L. Bona and Yi A. Li, *Decay and analyticity of solitary waves*, J. Math. Pures Appl. (9) **76** (1997), 377–430.
J.P. Bourguignon and H. Brezis, *Remarks on the Euler equation*, J. Functional Analysis **15** (1974), 341–363.
P. Constantin, E.S. Titi and J. Vukadinović, *Dissipativity and Gevrey regularity of a Smoluchowski equation*, Indiana Univ. Math. J. **54** (2005), 949–969.
R. DiPerna and A. Majda, *Oscillations and concentrations in weak solutions of the incompressible fluid equations*, Comm. Math. Phys. **108** (1987), 667–689.

D.G. Ebin and J.E. Marsden, Bull. Amer. Math. Soc. **75** (1969), 962–967.
A.B. Ferrari and E.S. Titi, *Gevrey regularity for nonlinear analytic parabolic equations*, Comm. Partial Differential Equations **23** (1998), 1–16.
C. Foias, U. Frisch and R. Temam, *Existence de solutions [$C\sp{\infty }$]{} des équations d’Euler*, C. R. Acad. Sci. Paris Sér. A-B **280** (1975), A505–A508.
C. Foias and R. Temam, *Gevrey class regularity for the solutions of the [N]{}avier-[S]{}tokes equations*, J. Funct. Anal. **87** (1989), 359–369.
D. Gilbarg and N.S. Trudinger, “Elliptic Partial Differential Equations of Second Order," Reprint of the 1998 edition, Springer-Verlag, Berlin, 2001.
Z. Grujić and I. Kukavica, *Space analyticity for the [N]{}avier-[S]{}tokes and related equations with initial data in [$L\sp p$]{}*, J. Funct. Anal. **152** (1998), 447–466.
Z. Grujić and I. Kukavica, *Space analyticity for the nonlinear heat equation in a bounded domain*, J. Differential Equations **154** (1999), 42–54.
W.D. Henshaw, H.-O. Kreiss and L.G. Reyna, *Smallest scale estimates for the Navier-Stokes equations for incompressible fluids*, Arch. Rational Mech. Anal. **112** (1990), 21–44.
T. Kato, *Nonstationary flows of viscous and ideal fluids in [${\bf R}\sp{3}$]{}*, J. Functional Analysis **9** (1972), 296–305.
I. Kukavica, *Hausdorff length of level sets for solutions of the Ginzburg-Landau equation*, Nonlinearity **8** (1995), 113–129.
I. Kukavica, *On the dissipative scale for the Navier-Stokes equation*, Indiana Univ. Math. J. **48** (1999), 1057–1081.
I. Kukavica and V. Vicol, *On the radius of analyticity of solutions to the three-dimensional Euler equations*, Proc. Amer. Math. Soc. **137** (2009), 669-677.
D. Le Bail, *Analyticité locale pour les solutions de l’équation d’[E]{}uler*, Arch. Rational Mech. Anal. **95** (1986), 117–136.
P.G. Lemarié–Rieusset, *Une remarque sur l’analyticité des solutions milds des équations de [N]{}avier-[S]{}tokes dans [${\bf R}\sp 3$]{}*, C. R. Acad. Sci. Paris Sér. I Math. **330** (2000), 183–186.
P.G. Lemarié–Rieusset, *Nouvelles remarques sur l’analyticité des solutions milds des équations de [N]{}avier-[S]{}tokes dans [$\Bbb R\sp 3$]{}*, C. R. Math. Acad. Sci. Paris **338** (2004),443–446.
C.D. Levermore and M. Oliver, *Analyticity of solutions for a generalized Euler equation*, J. Differential Equations **133** (1997), 321–339.
J.-L. Lions and E. Magenes, “Problèmes aux limites non homogènes et applications," Vol. 3, Dunod, Paris, 1970.
A.J. Majda and A.L. Bertozzi, “Vorticity and incompressible flow," Cambridge Texts in Applied Mathematics, Vol. 27, Cambridge University Press, Cambridge, 2002.
M. Oliver and E.S. Titi, *On the domain of analyticity of solutions of second order analytic nonlinear differential equations*, J. Differential Equations **174** (2001), 55–74.
W. Rudin, “Principles of mathematical analysis," Third edition, McGraw-Hill Book Co., New York 1976.
M. Sammartino and R.E. Caflisch, *Zero Viscosity Limit for Analytic Solutions of the Navier-Stokes Equation on a Half-Space. I. Existence for Euler and Prandtl equations*, Commun. Math. Phys. **192** (1998), 433–461.
R. Temam, *On the Euler equations of incompressible perfect fluids*, J. Functional Analysis **20** (1975), 32–43.
V.I. Yudovich, *Non stationary flow of an ideal incompressible liquid*, Zh. Vych. Mat. **3** (1963), 1032–1066.
V.I. Yudovich, *On the loss of smoothness of the solutions of the Euler equations and the inherent instability of flows of an ideal fluid*, Chaos **10** (2000), 705–719.
---
abstract: 'The effect of hyperfine interaction on the room-temperature defect-enabled spin filtering effect in GaNAs alloys is investigated both experimentally and theoretically through a master equation approach based on the hyperfine and Zeeman interactions between the electron and nuclear spins of the Ga$^{2+}_i$ interstitial spin filtering defect. We show that the nuclear spin polarization of the gallium defect can be tuned through the optically induced spin polarization of conduction band electrons.'
author:
- 'C. Sandoval-Santana$^{1}$ A. Balocchi$^{2}$, T. Amand$^{2}$, J. C. Harmand$^{3}$, A. Kunold$^{4}$, X. Marie$^{2}$'
bibliography:
- 'kunold.bib'
title: Room temperature optical manipulation of nuclear spin polarization in GaAsN
---
Introduction
============
Many applications such as quantum registers [@hanson:161203; @robledo:574], quantum memories and nanoscale magnetic imaging setups [@grinolds:1745] rely on individually addressable spin systems that can be initialized and read out. Diamond NV paramagnetic centers [@davies:1653] have provided an ideal system to build such applications, given that their spin has a long coherence time that persists even at room temperature [@kennedy:4190]. It has been shown that the spin states of NV centers can be coherently controlled by optical and radio-frequency means. Similarly, interstitial Ga$^{2+}_i$ defects in dilute nitride GaAsN give rise to paramagnetic centers [@wang:198] whose well-isolated and stable spin states can be addressed collectively and moreover detected both by optical and electrical means [@zhao:241104; @kunold:165202]. These defects, whose density can be controlled, are in particular responsible for the very high spin polarization of conduction band electrons in GaAsN compounds at room temperature under circularly polarized light excitation [@egorov:013539; @kalevich:455; @lombez:252115; @kalevich:174], thanks to a very efficient spin filtering mechanism. Recent measurements of an increased efficiency of the spin-filtering mechanism under a weak magnetic field in Faraday configuration in GaAsN [@kalevich:035205; @puttisong:2013] and in focused ion beam implanted InGaAs layers [@nguyen:2013] have stimulated renewed interest in the subject after the early observations of D. Paget in GaAs [@paget:931].\
Mainly three mechanisms have been proposed to understand the magnetic field induced amplification of the spin-filtering mechanism, all considering it as a specific signature of the interplay between the Ga defect nucleus and its localized electron coupled through the hyperfine interaction (HFI).\
Paget [@paget:931] suggested that the electron spin polarization should increase in an external magnetic field as a result of the decoupling between the mixed electron and nuclear spins, as calculated initially by Dyakonov and Perel [@dyakonov:3059; @lee:265]. Kalevich *et al.* [@kalevich:035205] have interpreted the Faraday field enhancement of the spin filtering effect in GaAsN as due to a suppression of the chaotic magnetic field produced by the nuclear spin fluctuations surrounding each paramagnetic center, as typically observed in quantum dots [@Braun_PRL; @Petta_science]. These authors also observed a shift of the photoluminescence (PL) polarization as a function of the applied magnetic field with respect to the zero field case, with opposite values for opposite excitation light helicity. This effect, explained in terms of the dynamical polarization of the lattice nuclei (Overhauser effect), has been phenomenologically introduced in the two-charge model [@ivchenko:465804; @weisbuch:141; @kalevich:455; @kalevich:208; @kalevich:174] through a magnetic field dependence of the localized electron spin relaxation time. The model correctly reproduces the measured band-edge PL enhancement; however, it does not predict the observed shifts of the PL intensity versus magnetic field.\
Puttisong *et al.* [@puttisong:2013] also analyzed the electron spin state mixing at the Ga$_i^{2+}$, considering the HFI and Zeeman contributions. Through low temperature (T=3 K) optically detected magnetic resonance experiments, they investigated the influence of HFI on the observed amplification of spin dependent recombination measured at room temperature. The model, centered on the HFI Hamiltonian, focuses on the coupling between the localized electron and the Ga$^{2+}_i$ interstitial defect and compares the observed spin dependent recombination ratio (SDR$_r$) dependence on the magnetic field to the percentage of localized electron pure spin states in the paramagnetic centers. No zero magnetic field shift of the PL polarization curves is experimentally observed, nor can it be predicted by the model.\
Despite the success in describing the main features of the spin-filtering enhancement in a magnetic field, the different models proposed do not take into consideration the dynamics of the dilute nitride system as a whole, composed of the conduction band electrons and their spin dependent recombination into the coupled nucleus-localized electron complex \[inset of Fig. 1 (b)\].\
The aim of this paper is to present a comprehensive theoretical work on the SDR related phenomena in GaAsN taking into consideration this whole dynamical system. In order to gain insight into the interplay of the different mechanisms involved in the conduction band (CB) electron spin polarization in GaAsN alloys we develop a model based on the open quantum system approach that reduces to the well known two charge model[@ivchenko:465804; @weisbuch:141; @kalevich:455; @kalevich:208; @kalevich:174] in the absence of HFI. This master equation model, taking into account the strong coupling of the center-localized electron and the gallium defect nuclei by hyperfine interaction [@dyakonov:995], is able to reproduce accurately all the observed features of the experimental results including the polarization dependent PL polarization shift in Faraday configuration. We show that on one hand, the HFI of the Ga$^{2+}_i$ centers causes strong mixing of the localized electron and nuclear spin states thus polarizing the nuclear spin and partially canceling the localized electron spin polarization. Under a magnetic field aligned parallel to the incident light, the coupling between electrons and nuclei is destroyed and electrons recover their spin polarization. On the other hand, the dynamical equilibrium of the HFI-coupled electron-nucleus eigenstates’ populations under circularly polarized light leads to the appearance of an excitation power and polarization dependent shift of the electrons (conduction and localized) and paramagnetic center nuclei polarizations versus magnetic field in Faraday geometry with respect to the $B$=0 case. From a macroscopic photoluminescence or photoconductivity polarization measurement it is then possible to deduce the average Ga$^{2+}_i$ interstitial electronic and nuclear spin polarizations and their evolution in a magnetic field. Though optical pumping of nuclear spins in semiconductors usually require cryogenic temperatures of the sample [@meier:1984; @dyakonov_spin_2008; @urbaszek_nuclear_2013], we show here that the nuclear spin states of an ensemble of Ga centers can be controlled and measured at room temperature through the spin polarization of conduction band electrons.\
This paper is organized as follows. In Sec. \[samples\] we describe the sample preparation, the experimental setup and present the experimental results. The master equation model taking into account the hyperfine interaction between the centers localized electrons and nuclei is described in Sec. III. The calculations and comparison with the experiment are presented in Sec. \[results\]. Here, through the master equation approach, we demonstrate how the hyperfine interaction significantly alters the spin polarization of conduction band electrons, localized electrons and nuclei. We describe the mechanism behind the spin transfer from conduction band electrons to center’s nuclei and the origin of the polarization shift under a Faraday magnetic field. A summary of the results and the conclusions are drawn in Sec. \[conclusions\].
Samples and experimental results {#samples}
================================
![(color on line) (a) Photoluminescence SDR$_r$ as a function of laser excitation power $P$. The symbols show the experimental results; the solid lines indicate the theoretical results for two different values of the magnetic field. (b) Photoluminescence SDR$_r$ as a function of Faraday configuration magnetic field for a fixed laser irradiance $P$=9 mW. The circles indicate the experimental results while the solid line traces the theoretical results under the same conditions. Inset: schematic representation of the SDR system showing the Ga$^{2+}_i$ atom with its localized electron coupled by the hyperfine interaction (HFI) and the photogenerated conduction band electron.[]{data-label="figure1"}](power_2.pdf "fig:"){width="50.00000%"} ![(color on line) (a) Photoluminescence SDR$_r$ as a function of laser excitation power $P$. The symbols show the experimental results; the solid lines indicate the theoretical results for two different values of the magnetic field. (b) Photoluminescence SDR$_r$ as a function of Faraday configuration magnetic field for a fixed laser irradiance $P$=9 mW. The circles indicate the experimental results while the solid line traces the theoretical results under the same conditions. Inset: schematic representation of the SDR system showing the Ga$^{2+}_i$ atom with its localized electron coupled by the hyperfine interaction (HFI) and the photogenerated conduction band electron.[]{data-label="figure1"}](magnetic_2.pdf "fig:"){width="50.00000%"}
The sample under study consists of a 100 nm thick GaAs$_{1-x}$N$_x$ layer (x=0.0079) grown by molecular beam epitaxy on a (001) semi-insulating GaAs substrate and capped with 10 nm GaAs. The conduction band electron spin polarization properties in the structure have been investigated at room temperature by optical orientation experiments which rely on the transfer of the angular momentum of the exciting photons, using circularly polarized light, to the photogenerated electronic excitations. The excitation source is a continuous wave (CW) Ti:Sapphire laser emitting at 850 nm and focused onto the sample to a 50 $\mu$m diameter spot. The excitation laser is either circularly ($\sigma^+$) or linearly ($\sigma^X$) polarized propagating along the $z$ growth axis and the resulting PL circular polarization ($P_c$) and SDR$_r$ are calculated respectively as $P_c = (I^{++} - I^{+-})/(I^{++} + I^{+-})$ and SDR$_r$=$I^+/I^X$. For calculating the PL circular polarization $P_c$, $I^{++}$ and $I^{+-}$ represent the PL intensity components co- and counter-polarized to the $\sigma^+$ excitation light. In the case of the SDR$_r$, $I^+$ and $I^X$ denote respectively the total PL intensities detected under a circular or linear excitation of same intensity. The photoluminescence intensity is detected using a silicon photodiode coupled to a long-pass filter in order to suppress the contribution due to the laser scattered light and GaAs substrate/buffer layers luminescence. In order to improve the signal to noise ratio, the excitation light is mechanically chopped and the photodiode signal synchronously detected with a lock-in amplifier.\
In Fig. \[figure1\] (a), squares, we present the photoluminescence SDR$_r$ as a function of the laser excitation power $P$. We observe the main characteristic of the SDR effect, namely a marked excitation power dependence showing a peak value around $P$=25 mW [@kunold:165202]. Figure \[figure1\] (a), circles, reports the same experiment under a longitudinal magnetic field (Faraday configuration) $B_z$=185 mT. As previously reported by Kalevich [@kalevich:035205], we observed a sizable increase of the SDR ratio, which is more substantial at low excitation power and gradually disappears at higher excitations. Figure \[figure1\] (b) reports the SDR ratio dependence on the Faraday magnetic field measured at $P$=9 mW. Although a monotonic increase of the SDR$_r$ is observed [@kalevich:035205], the minimum SDR is found at $B$=0; no shift is detected here, in contrast to ref.\[\], probably due to the low excitation power and the limited signal to noise ratio.\
In order to account for our experimental observations and the additional evidence reported by Kalevich *et al.* [@kalevich:035205], namely the shift from $B$=0 of the CB polarization dip under a magnetic field in Faraday geometry, we develop in the next section a density matrix model comprising the full dynamics of the spin dependent recombination of CB electrons and the hyperfine interaction between the localized electrons and the paramagnetic centers' nuclei.
![(color on line) The calculated coupled Ga$^{2+}_i$ nuclei and localized electron spin state structure as a result of hyperfine and Zeeman interactions. As the magnetic field increases the nuclear and localized electron states decouple. The decoupled spin states are presented at the right of the diagram. []{data-label="figure2"}](endiag.pdf){width="8.00"}
Model
=====
Two charge model {#model}
----------------
Our starting point is the two charge model based on the following set of rate equations: [@ivchenko:465804; @weisbuch:141; @kalevich:455; @kalevich:208; @kalevich:174] $$\begin{aligned}
\dot{n}
&+\frac{\gamma_e}{2}\left(nN_1-4\boldsymbol{S}\cdot\boldsymbol{S}_c\right)=G,
\label{mag:eq3}\\
\dot{p}&+\gamma_hN_2p=G,\\
\dot{N_1}
&+\frac{\gamma_e}{2}\left(nN_1-4\boldsymbol{S}\cdot\boldsymbol{S}_c\right)
-\gamma_hN_2p=0,\\
\dot{N_2}
&-\frac{\gamma_e}{2}\left(nN_1-4\boldsymbol{S}\cdot\boldsymbol{S}_c\right)
+\gamma_hN_2p=0.\label{mag:eq6}\\
\dot{\boldsymbol{S}}
&+\frac{\gamma_e}{2}\left(\boldsymbol{S}N_1-\boldsymbol{S}_cn\right)
+\frac{1}{\tau_s}\boldsymbol{S}+\boldsymbol{S}\times\boldsymbol{\omega}
=\boldsymbol{\Delta G},\label{mag:eq1}\\
\dot{\boldsymbol{S}}_c
&+\frac{\gamma_e}{2}\left(\boldsymbol{S_c}n-\boldsymbol{S}N_1\right)
+\frac{1}{\tau_{sc}}\boldsymbol{S_c}
+\boldsymbol{S_c}\times\boldsymbol{\Omega}
=\boldsymbol{0}.\label{mag:eq2}\end{aligned}$$ Here $n$ is the density of conduction band (CB) electrons, the total number of unpaired paramagnetic traps is given by $N_1$ and $N_2$ is the total number of electron singlets hosted by the paramagnetic traps. $\boldsymbol{S},\boldsymbol{S_c}$ represent the average total spin of the free electrons and of the localized unpaired electrons, respectively. Holes ($p$) are considered unpolarized[@hilton:146601] as their spin relaxes with a characteristic time of the order of 1 ps [@malinowski]. Eqs. (\[mag:eq3\])-(\[mag:eq2\]) ensure conservation of charge neutrality and number of centers: $$\begin{aligned}
n-p+N_2=0\label{chgcons:eq1},\\
N_1+N_2=N\label{cencons:eq1}.\end{aligned}$$ The terms of the form $-\gamma_e\left(nN_1-4\boldsymbol{S}\cdot\boldsymbol{S}_c\right)/2$ and $\gamma_e\left(\boldsymbol{S}_cn-\boldsymbol{S}N_1\right)/2$ are responsible for the spin dependent free electron capture in paramagnetic centers with recombination rate $\gamma_e$. The recombination rate of conduction electrons to paramagnetic centers is increased when the free and localized electrons total spins $\boldsymbol{S}$ and $\boldsymbol{S}_c$ are antiparallel whereas it vanishes when they are parallel as expected from the Pauli exclusion principle needed to form a singlet state \[inset of Fig. 1(b)\]. The terms $-\gamma_hpN_2$ model the spin independent recombination of one electron of the paramagnetic center singlet with a hole. The photo generation of spin-up and spin-down electrons is accounted for by the terms $G_{+}$ and $G_{-}$ and of holes by $G=G_++G_-$ using the same method as in Ref.\[ \]. In CW conditions the total photoluminescence intensity under linear ($X$) or circular ($+$) excitation is calculated as $I^{X(+)}=\gamma_r n\left(t\right)p\left(t\right)$ where $t$ is a sufficiently long time to ensure steady state conditions. In the absence of the SDR mechanism, $\mathrm{SDR}_r=1$ whereas in its presence $\mathrm{SDR}_r>1$. Magnetic field effects such as the Hanle effect, are included into the model via the spin precession terms that arise from the Zeeman interaction $\boldsymbol{\omega}\times\boldsymbol{S}$ for free electrons and $\boldsymbol{\Omega}\times\boldsymbol{S}_c$ for localized electrons where $\boldsymbol{\omega}= g \mu_B \hbar\boldsymbol{B}$, $\boldsymbol{\Omega}=g_c\mu_B \hbar\boldsymbol{B}$ and $\mu_B$ is the Bohr magneton. The gyromagnetic factors for free electrons and localized electrons were set to $g$=1 and $g_c$=2 respectively [@kalevich:174; @kalevich:208; @pettinari:245202; @zhao:041911]. Nevertheless, the inclusion of the localized electron-nucleus hyperfine interaction terms in Eqs. (1)-(6) giving rise to the amplification of the SDR in longitudinal magnetic field is not straightforward as we shall see in the next section.
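As an illustration of how Eqs. (\[mag:eq3\])-(\[mag:eq2\]) can be propagated to steady state, a minimal numerical sketch is given below. All parameter values are dimensionless placeholders chosen only for illustration (only the ratio $\gamma_e/\gamma_h$ is loosely inspired by the values quoted later); they are not the fitted quantities used in this work.

```python
"""Sketch: integrate the two-charge rate equations (1)-(6) with SciPy.
Dimensionless placeholder parameters, for illustration only."""
import numpy as np
from scipy.integrate import solve_ivp

ge, gh = 30.0, 1.0                      # electron / hole capture coefficients
tau_s, tau_sc = 5.0, 80.0               # CB / localized electron spin relaxation times
N, G = 1.0, 2.0                         # center density and generation rate
dG = np.array([0.0, 0.0, 0.5])          # spin generation under sigma+ pumping
w = np.zeros(3)                         # CB electron Larmor vector (B = 0 here)
W = np.zeros(3)                         # localized electron Larmor vector

def rhs(t, y):
    n, p, N1, N2 = y[:4]
    S, Sc = y[4:7], y[7:10]
    Q = n * N1 - 4.0 * S @ Sc
    dn  = G - 0.5 * ge * Q
    dp  = G - gh * N2 * p
    dN1 = -0.5 * ge * Q + gh * N2 * p
    dN2 = +0.5 * ge * Q - gh * N2 * p
    dS  = dG - 0.5 * ge * (S * N1 - Sc * n) - S / tau_s - np.cross(S, w)
    dSc = -0.5 * ge * (Sc * n - S * N1) - Sc / tau_sc - np.cross(Sc, W)
    return np.concatenate(([dn, dp, dN1, dN2], dS, dSc))

y0 = np.concatenate(([0.0, 0.0, N, 0.0], np.zeros(3), np.zeros(3)))
sol = solve_ivp(rhs, (0.0, 200.0), y0, method="LSODA", rtol=1e-8)
n_ss, p_ss, Sz_ss = sol.y[0, -1], sol.y[1, -1], sol.y[6, -1]
print("steady state:  n =", n_ss, " p =", p_ss, " S_z =", Sz_ss)
```

The steady-state photoluminescence intensity $I\propto\gamma_r np$ would then be compared between circular ($\boldsymbol{\Delta G}\neq 0$) and linear ($\boldsymbol{\Delta G}=0$) excitation to form the SDR$_r$.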
Master equation
---------------
When an electron is bounded to a deep Ga$^{2+}_i$ defect, its wavefunction is strongly localized [@lagarde:208; @wang:198; @puttisong:2013] and one can consider that its spin interacts mainly with the corresponding unique Ga nucleus yielding coupled electron-nucleus quantum states. Note that this is a very different situation compared to the usual treatment of hyperfine interaction of weakly localized (for instance electrons bound to donor states) or confined electrons (in quantum dots) in which the electron wavefunction interacts with 10$^{5}$ - 10$^{6}$ nuclei allowing a mean field description [@meier:1984; @dyakonov_spin_2008; @urbaszek_nuclear_2013]. The hyperfine interaction $A \boldsymbol{\hat{I}}\cdot \boldsymbol{\hat{S}}_c$ (where $A$, $\boldsymbol{\hat{I}}$ and $\boldsymbol{\hat{S}}_c$ are respectively the hyperfine interaction constant, the nucleus and localized electron spin operators) between the localized electron and the interstitial Ga$^{2+}_i$ in the Hamiltonian gives rise to the following eigenstates in zero magnetic field $$\begin{aligned}
\left\vert 1,1\right\rangle &=& -\frac{\sqrt{3}}{2}\left\vert \frac{3}{2},-\frac{1}{2}\right\rangle
+\frac{1}{2}\left\vert +\frac{1}{2},+\frac{1}{2}\right\rangle,\\
\left\vert 1,0\right\rangle &=& -\frac{1}{\sqrt{2}}\left\vert +\frac{1}{2},-\frac{1}{2}\right\rangle
+\frac{1}{\sqrt{2}}\left\vert -\frac{1}{2},+\frac{1}{2}\right\rangle, \\
\left\vert 1,-1 \right\rangle &=& -\frac{1}{2}\left\vert -\frac{1}{2},-\frac{1}{2}\right\rangle
+\frac{\sqrt{3}}{2}\left\vert -\frac{3}{2},+\frac{1}{2}\right\rangle,\\
\left\vert 2,2 \right\rangle &=& \left\vert + \frac{3}{2},+\frac{1}{2}\right\rangle,\\
\left\vert 2,1 \right\rangle &=& \frac{1}{2}\left\vert +\frac{3}{2},-\frac{1}{2}\right\rangle
+ \frac{\sqrt{3}}{2}\left\vert +\frac{1}{2},+\frac{1}{2}\right\rangle ,\\
\left\vert 2,0 \right\rangle &=& \frac{1}{\sqrt{2}}\left\vert +\frac{1}{2},-\frac{1}{2}\right\rangle
+ \frac{1}{\sqrt{2}}\left\vert -\frac{1}{2},+\frac{1}{2}\right\rangle,\\
\left\vert 2,-1 \right\rangle &=& \frac{\sqrt{3}}{2}\left\vert -\frac{1}{2},-\frac{1}{2}\right\rangle
+ \frac{1}{2}\left\vert -\frac{3}{2},+\frac{1}{2}\right\rangle,\\
\left\vert 2,-2 \right\rangle &=& \left\vert -\frac{3}{2},-\frac{1}{2}\right\rangle .\end{aligned}$$ where $\left\vert j,j_z\right\rangle$ on the left hand side represent the eigenstate of total angular momentum $j$ and component $j_z$, whereas on the right hand side $\left\vert m,s\right\rangle$ is a state of the uncoupled nuclear spin $m$ and localized electron spin $s$ projections along the chosen quantization axis. The total Hamiltonian (taking into account the conduction electron and the coupled localized electron-Ga nucleus system) takes the following form: $$\label{ham:eq1}
\hat{H}=\boldsymbol{\omega}\cdot\hat{\boldsymbol{S}}
+\boldsymbol{\Omega}\cdot\hat{\boldsymbol{S}}_c
+\boldsymbol{\Theta}\cdot\hat{\boldsymbol{I}}
+A\hat{\boldsymbol{I}}\cdot\hat{\boldsymbol{S}}_c,$$ where the first three terms correspond to the Zeeman interaction between an external magnetic field $\boldsymbol{B}$ and the magnetic moments of CB electrons, localized electrons and nuclei. The spin precession terms in the rate equations arise from these contributions. In the last term, accounting for the hyperfine interaction between the center’s nuclei and the localized electrons, $A=687\times 10^{-4}$cm$^{-1}$ was set to the average hyperfine parameter[@wang:198] of the two gallium isotopes: $^{69}$Ga$^{2+}_i$ (60%) with $A=620\times 10^{-4}$cm$^{-1}$ and $^{71}$Ga$^{2+}_i$ (40%) with $A=788\times 10^{-4}$cm$^{-1}$. Fig. 2 reports the calculated energies of the coupled localized electron- Ga$^{2+}_i$ nucleus states as a function of a Faraday magnetic field. For zero magnetic field the hyperfine interaction mixes the nucleus and electron spin states giving rise to two degenerate states corresponding to the two possible values of the total angular momentum $J=I+S$=2,1. As an external magnetic field is applied in Faraday geometry, the mixing due to the hyperfine interaction progressively decreases as the electron Zeeman term becomes predominant. For sufficiently high magnetic field values the electron and nucleus are effectively decoupled and pure electron and nuclear spin states are now the eigenstates of the system.\
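The level structure of Fig. \[figure2\] can be reproduced by diagonalizing the localized electron-nucleus block of the Hamiltonian alone. The sketch below does so for $g_c\mu_B B\hat{S}_{cz}+A\hat{\boldsymbol{I}}\cdot\hat{\boldsymbol{S}}_c$, neglecting the much smaller nuclear Zeeman term; the numerical constants are approximate conversions of the values quoted above and are meant only to be illustrative.

```python
"""Sketch: energies of the coupled Ga nucleus (I = 3/2) / localized electron
(S = 1/2) system versus Faraday magnetic field (nuclear Zeeman neglected).
Constants are approximate and for illustration only."""
import numpy as np

def spin_matrices(j):
    """Return (Jx, Jy, Jz) for spin j in the basis m = j, j-1, ..., -j."""
    m = np.arange(j, -j - 1.0, -1.0)
    jz = np.diag(m)
    jp = np.zeros((len(m), len(m)))
    for k in range(1, len(m)):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    jx = 0.5 * (jp + jp.T)
    jy = -0.5j * (jp - jp.T)
    return jx, jy, jz

Sx, Sy, Sz = spin_matrices(0.5)
Ix, Iy, Iz = spin_matrices(1.5)
A   = 8.5      # hyperfine constant in micro-eV (~687e-4 cm^-1), approximate
muB = 57.9     # Bohr magneton in micro-eV / T
gc  = 2.0

def hamiltonian(B):
    hfi = A * (np.kron(Ix, Sx) + np.kron(Iy, Sy) + np.kron(Iz, Sz))
    zee = gc * muB * B * np.kron(np.eye(4), Sz)
    return hfi + zee

for B in (0.0, 0.05, 0.20):
    E = np.linalg.eigvalsh(hamiltonian(B))
    print(f"B = {B:4.2f} T :", np.round(E, 2), "micro-eV")
```

At $B=0$ the spectrum splits into the degenerate $J=2$ (five-fold) and $J=1$ (three-fold) multiplets, while at a few hundred mT the levels regroup into nearly pure electron spin branches, as in Fig. \[figure2\].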
The form of the hyperfine interaction terms reveals the difficulties of introducing its effects directly into the rate Eqs. (1)-(6). The Zeeman interactions are comprised only of CB and localized electrons angular momentum linear terms. Their corresponding angular momentum operators form a closed algebra, characterized by $\left[S_i,S_j\right]=i\hbar \sum_{k=x,y,z}\epsilon_{ijk}S_k$ ($\epsilon_{ijk}$ is the Levi-Civita symbol), thus yielding one time dependent differential equation for each component of the angular momentum arising from the commutator in the von Neumann equation. Therefore Eqs. (\[mag:eq1\]) and (\[mag:eq2\]) contain only linear terms in the angular momentum. Unlike the Zeeman terms, the hyperfine interaction in the Hamiltonian (\[ham:eq1\]) is the product $\hat{\boldsymbol{I}}\cdot\hat{\boldsymbol{S}}_c$ that does not give rise to a closed algebra. An attempt to workout the rate equations starting from the Hamiltonian (\[ham:eq1\]) would yield an increasingly large number of differential equations. Therefore, in this case the master equation formulation seems to be a better candidate to model the SDR than the rate equation approach.\
The master equation for the density matrix $\hat\rho$ for the given system is thereby expressed as $$\label{mastereq}
\dot{\hat{\rho}}
=\frac{i}{\hbar}\left[\hat{\rho},\hat{H}\right]
+\mathcal{D}\left[\hat{\rho}\right]+\hat G,$$ where the Hamiltonian $\hat{H}$ is given by (\[ham:eq1\]) and $\mathcal{D}\left[\hat{\rho}\right]$ is the dissipator. Accordingly, the chosen basis is comprised of a non interacting ensemble of $1/2$ spin CB electrons, spin unpolarized valence band (VB) holes, spin polarized localized electrons, $3/2$ spin nuclei and paired (singlet) localized electrons. The dissipator $\mathcal{D}\left[\hat{\rho}\right]$ describes the coupling or decay channels resulting from an interaction with the photon environment and spin relaxation as explained later on. Similarly $\hat{G}$ corresponds to the laser generating term. We can identify a total of 12 states: i) one hole state, ii) one paired localized state, iii) one spin down and iv) one spin up CB electron state, v) a total of eight states, Eqs. (9)-(16), corresponding to the localized electron-nucleus states, *i.e.* two states for the spin up and spin down localized electron times four states for the $3/2$ Ga$^{2+}_i$ nucleus spin. This basis is displayed explicitly in the Appendix.
In order to connect the master equation and the rate equation formulations we must build the operators corresponding to the ensemble averages $n$, $p$, $N_1$, $N_2$, $\boldsymbol{S}$, $\boldsymbol{S}_c$ in the rate Eqs. (\[mag:eq3\])-(\[mag:eq2\]) and the nuclei angular momentum operator $\hat{\boldsymbol{I}}$ as detailed in the Appendix.
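A minimal illustration of how these ensemble averages are evaluated is given below: a few of the $12\times 12$ operators of the Appendix are assembled and traced against a toy density matrix (normalized to unit trace here only for simplicity; in the actual model the trace of $\hat\rho$ carries the particle densities).

```python
"""Sketch: observables as 12x12 matrices (basis ordering of the Appendix) and
ensemble averages taken as traces against the density matrix."""
import numpy as np

DIM = 12

def proj(i, j=None):
    """|i><j| with 1-based indices matching the basis listed in the Appendix."""
    j = i if j is None else j
    m = np.zeros((DIM, DIM))
    m[i - 1, j - 1] = 1.0
    return m

n_op  = proj(3) + proj(4)            # CB electron number
p_op  = proj(1)                      # VB hole number
Sz_op = 0.5 * (proj(4) - proj(3))    # CB electron spin, z component

rho = np.eye(DIM) / DIM              # toy density matrix (maximally mixed)
for name, op in (("n", n_op), ("p", p_op), ("S_z", Sz_op)):
    print(name, "=", np.trace(op @ rho))
```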
Now we turn our attention to the explicit form of the dissipator $\mathcal{D}\left[\hat\rho\right]$. It contains the interactions that lead to spin dependent recombination between CB electrons and localized electrons; the successive spin-independent recombination of localized electrons to the VB; bimolecular recombination between CB and VB electrons and finally CB, localized electrons and nuclei spin decoherence and relaxation. In the absence of hyperfine interaction its structure should permit to retrieve the two charge model rate Eqs. (\[mag:eq3\])-(\[mag:eq2\]). It is given by $$\begin{gathered}
\mathcal{D}\left[\hat\rho\right]=
-\left(\gamma_r p n+\gamma_h p N_2\right)\hat p
-\frac{1}{2}\left(\frac{\gamma_e}{2} Q+\gamma_r p n\right)\hat n\\
+\left(\frac{\gamma_e}{2} Q-\gamma_h pN_2\right)\hat N_2
-\frac{1}{8}\left(\frac{\gamma_e}{2}Q-\gamma_h p N_2\right)\hat N_1\\
-2\left[\frac{\gamma_e}{2}\left(N_1\boldsymbol{S}-n\boldsymbol{S}_c\right)
+\frac{1}{\tau_s}\boldsymbol{S}+\gamma_r p\boldsymbol{S}\right]
\cdot \hat{\boldsymbol{S}}-
\frac{\boldsymbol{I}\cdot\hat{\boldsymbol{I}}}{10\tau_n}\\
-2\sum_{i=-3/2}^{3/2}
\left[\frac{\gamma_e}{2}
\left(n {\boldsymbol{\sigma}}_i- \boldsymbol{S}{\iota}_i\right)
+\frac{{\boldsymbol{\sigma}}_i}{\tau_{sc}}
\right]\cdot{\hat{\boldsymbol{\sigma}}}_i .\label{dissipator}\end{gathered}$$ Here $n$, $p$, $N_1$, $N_2$, $\boldsymbol{S}$ and $\boldsymbol{S}_c$ are the variables introduced in Sec. \[model\] and $\hat n$, $\hat p$, $\hat{N}_1$, $\hat{N}_2$, $\hat{\boldsymbol{S}}$ are the corresponding operators whose explicit form is given in App. \[operators\]. We consider them to be connected through the ensemble averages $n={\rm Tr}\left[\hat n\hat \rho\right]$, $p={\rm Tr}\left[\hat p\hat \rho\right]$, $N_1={\rm Tr}\left[\hat{N}_1\hat \rho\right]$, $N_2={\rm Tr}\left[\hat{N}_2\hat \rho\right]$, $\boldsymbol{S}={\rm Tr}\left[\hat{\boldsymbol{S}}\hat \rho\right]$ and $\boldsymbol{S}_c={\rm Tr}\left[\hat{\boldsymbol{S}}_c\hat \rho\right]$. For the sake of brevity we have defined $$\label{qdef:eq1}
Q=n N_1-4\boldsymbol{S}\cdot \boldsymbol{S}_c.$$ The terms proportional to $\gamma_e$ are the spin dependent recombination rates of CB electrons recombining to the paramagnetic traps. Localized electrons recombine to the VB at a rate given by $\gamma_h$. Those terms proportional to $\gamma_r$ are related to bimolecular recombination. Spin relaxation for CB and localized electrons is modeled by the terms proportional to $1/\tau_s$ and $1/\tau_{sc}$ respectively. We introduce a phenomenological nuclear spin decay term $\frac{\mathbf{I}\cdot \hat{\mathbf{I}}}{10\tau_n}$ to take into account possible mechanisms such as the fluctuating dipole-dipole interaction between the Ga interstitial with its neighbors, the fluctuating hyperfine interaction with conduction electrons and also electron exchange between the center and the free conduction electrons\cite{} , these mechanisms arising when the center is occupied by an electron singlet.\
We introduce the nuclear angular momentum operator $\hat{\boldsymbol{I}}$ and its corresponding ensemble average $\boldsymbol{I}={\rm Tr }\left[\hat{\boldsymbol{I}}\rho\right]$. The auxiliary operators ${\hat{\boldsymbol{\sigma}}}_i$ and ${\hat{\iota}}_i$ with $i=-3/2$, $-1/2$, $1/2$, $3/2$ are also presented in the Appendix. Their ensemble averages are given by ${\boldsymbol{\sigma}}_i={\rm Tr}\left[{\hat{\boldsymbol{\sigma}}}_i\rho\right]$ and ${\iota}_i={\rm Tr}\left[{\hat{\iota}}_i\rho\right]$ respectively. These operators are related to the localized electron number and their angular momentum and therefore have the following properties $\hat{\boldsymbol{S}}_c=\sum_{m=-3/2}^{3/2}{\hat{\boldsymbol{\sigma}}}_m$ and $\hat N_1=\sum_{m=-3/2}^{3/2}{\hat{\iota}}_m$.
The dissipator $\mathcal{D}\left[\hat\rho\right]$ in Eq. (\[dissipator\]) is constructed as a linear combination of the elements of the orthogonal inner product space spanned by the set of operators $\mathcal{V}=
\{\hat n$, $\hat p$, $ \hat{N}_1$, $ \hat{N}_2$, $
\hat{S}_x$, $\hat{S}_y$, $\hat{S}_z$, $
\hat{S}_{cx}$, $ \hat{S}_{cy}$, $\hat{S}_{cz}$, $
\hat{I}_x$, $\hat{I}_y$, $\hat{I}_z$, $
{\hat{\sigma}}_{xm}$, ${\hat{\sigma}}_{ym}$, ${\hat{\sigma}}_{zm}\}$. The vector space $\mathcal{V}$ inner product is conveniently set to be the trace of the product of any two matrices belonging to $\mathcal{V}$. Therefore if $\hat V_i$ and $\hat V_j$ are elements of $\mathcal{V}$ then ${\rm Tr}\left[\hat V_i\hat V_j\right]=
{\rm Tr}\left[\hat V_i^2\right]\delta_{ij}$. Thus, for example, to obtain the dynamical equation for CB electrons we first calculate $\dot n$ using the master equation $$\begin{gathered}
\dot n={\rm Tr}\left[ \hat n\dot {\hat \rho}\right]=
{\rm Tr}\left\{
\frac{i}{\hbar}\hat n\left[\hat\rho,\hat H\right]+
\hat n \mathcal{D}\left[\hat\rho\right]+
\hat n\hat G
\right\}\\
={\rm Tr}\left\{
\frac{i}{\hbar}\left[\hat n,\hat H\right]\hat\rho+
\hat n \mathcal{D}\left[\hat\rho\right]+
\hat n\hat G
\right\},\end{gathered}$$ where the commutator in the last line of the previous equation vanishes. Second, we calculate the dissipator term by using the orthogonality of the inner product space $\mathcal{V}$ and the explicit form of the dissipator (\[dissipator\]) $$\begin{gathered}
{\rm Tr}\left\{
\hat n \mathcal{D}\left[\hat\rho\right]
\right\}=
-\frac{1}{2}\left(\frac{\gamma_e}{2} Q+\gamma_r p n\right)
{\rm Tr}\left[\hat n^2\right]\\
=-\left(\frac{\gamma_e}{2} Q+\gamma_r p n\right).\end{gathered}$$ Finally we calculate the generating term part ${\rm Tr}\left[\hat n\hat G\right]=G$. Collecting these results we retrieve the CB electron density Eq. (\[mag:eq3\]). This procedure can be repeated for Eqs. (\[mag:eq3\])-(\[mag:eq2\]). It is important to stress that the obtained $\boldsymbol{S}_c$ rate equations contain additional terms compared to the two charge model ones arising from the hyperfine interaction. Even though $\left(1/2\right)\left[-\left(\gamma_e/2\right)\left(\boldsymbol{S}_c n
-\boldsymbol{S}N_1\right)-\boldsymbol{S}_c/\tau_{sc}\right]
\cdot\hat{\boldsymbol{S}}_c$ would at first glance seem a simpler option for the last term in Eq. (\[dissipator\]), it does not guarantee equal recombination rates from the CB electron spin states to the four nuclear spin states that might lead to negative density matrix probabilities in the high power regime. Instead, $-2\sum_{m=-3/2}^{3/2}
\left[(\gamma_e/2)
\left(n {\boldsymbol{\sigma}}_m- \boldsymbol{S}{\iota}_m\right)
+{\boldsymbol{\sigma}}_m/\tau_{sc}
\right]\cdot{\hat{\boldsymbol{\sigma}}}_m$ not only yields uniform recombination rates for all nuclear spin states but also reproduces the localized electrons polarization rate equations as can be readily verified by applying the orthogonality properties of the auxiliary operators. To understand the amplification of the spin filtering effect observed in Fig. 1 under longitudinal magnetic field, it is important to take into account the transfer of angular momentum between CB electrons, localized electrons and traps. Using the master equation (\[mastereq\]) and the explicit form of the dissipator (\[dissipator\]) we work out the total change in angular momentum as $$\begin{gathered}
\frac{d}{dt}\left(\boldsymbol{S}+\boldsymbol{S}_c+\boldsymbol{I}\right)
={\rm Tr}\left[\left(\hat{\boldsymbol{S}}+\hat{\boldsymbol{S}}_c+\hat{\boldsymbol{I}}\right)
\dot{\hat{\rho}}\right]\\
=-\frac{1}{\tau_s}\boldsymbol{S}-\frac{1}{\tau_{sc}}\boldsymbol{S}_c
-\frac{1}{\tau_n}\boldsymbol{I}+\boldsymbol{\Delta}\boldsymbol{G}\\
+\boldsymbol{\omega}\times\boldsymbol{S}+\boldsymbol{\Omega}\times\boldsymbol{S}_{c}
+\boldsymbol{\Theta}\times\boldsymbol{I}.\end{gathered}$$ Here it should be noted that no terms arising from the hyperfine coupling contribute to the total angular momentum losses. Under a magnetic field in Faraday configuration the last three terms in the previous equations vanish, and under steady state conditions $$\frac{1}{\tau_s}\boldsymbol{S}+\frac{1}{\tau_{sc}}\boldsymbol{S}_c
+\frac{1}{\tau_n}\boldsymbol{I}=\boldsymbol{\Delta}\boldsymbol{G}.\\$$ Moreover, if we separate the angular momentum change in the CB and localized electron part and the nuclear part we obtain $$\begin{aligned}
\frac{d}{dt}\left(\boldsymbol{S}+\boldsymbol{S}_c\right)
&=&-\frac{1}{\tau_s}\boldsymbol{S}-\frac{1}{\tau_{sc}}\boldsymbol{S}_c
+\boldsymbol{\omega}\times\boldsymbol{S}
+\boldsymbol{\Omega}\times\boldsymbol{S}_{c}\nonumber\\
&&+A{\rm Tr}\left[\hat{\boldsymbol{S}}_c\times\hat{\boldsymbol{I}}\hat\rho\right]
+\boldsymbol{\Delta}\boldsymbol{G},\label{momcons:eq1}\\
\frac{d}{dt}\boldsymbol{I}&=&-\frac{1}{\tau_n}\boldsymbol{I}
+\boldsymbol{\Theta}\times\boldsymbol{I}\nonumber\\
&&-A{\rm Tr}\left[\hat{\boldsymbol{S}}_c\times\hat{\boldsymbol{I}}\hat\rho\right]
\label{momcons:eq2},\end{aligned}$$ where we observe that the hyperfine coupling term $A{\rm Tr}\left[\hat{\boldsymbol{S}}_c\times\hat{\boldsymbol{I}}\hat\rho\right]$ transfers angular momentum from the CB-center system to the nucleus until steady state conditions are reached. This effect can be in principle integrated together with the localized electron spin losses by replacing it by a time dependent relaxation time $\tau_{sc}\left(B\right)$ as phenomenologically introduced in Ref. \[ \].
Results {#results}
=======
![(color on line) The calculated spin polarization degree of (a) CB electrons, (b) localized electron and (c) Ga$^{2+}_i$ nuclei as a function of the Faraday configuration magnetic field for excitation powers from 10 to 70 mW and for right ($\sigma^+$) and left ($\sigma^-$) circularly polarized light. The positive (negative) field extrema correspond to a $\sigma^+$($\sigma^-$) excitation. []{data-label="figure4"}](polCBe.pdf "fig:"){width="8.00"} ![(color on line) The calculated spin polarization degree of (a) CB electrons, (b) localized electron and (c) Ga$^{2+}_i$ nuclei as a function of the Faraday configuration magnetic field for excitation powers from 10 to 70 mW and for right ($\sigma^+$) and left ($\sigma^-$) circularly polarized light. The positive (negative) field extrema correspond to a $\sigma^+$($\sigma^-$) excitation. []{data-label="figure4"}](poltrap.pdf "fig:"){width="8.00"} ![(color on line) The calculated spin polarization degree of (a) CB electrons, (b) localized electron and (c) Ga$^{2+}_i$ nuclei as a function of the Faraday configuration magnetic field for excitation powers from 10 to 70 mW and for right ($\sigma^+$) and left ($\sigma^-$) circularly polarized light. The positive (negative) field extrema correspond to a $\sigma^+$($\sigma^-$) excitation. []{data-label="figure4"}](polnuc.pdf "fig:"){width="8.00"}
![(color on line) The calculated probability of the eight HFI-coupled states as a function of the magnetic field at $20$ mW pump power and right circularly polarized light. For $B$=0 the states corresponds to the ones listed in Eqs. (9) to (16). []{data-label="probs"}](probs.pdf){width="8.00"}
![(color on line) The calculated nuclear and localized electron spin states probabilities as a function of the magnetic field at $20$ mW pump power and right circularly polarized light. []{data-label="probs_pure"}](probs_pure.pdf){width="8.00"}
![Shift of the minimum of the conduction band polarization from the $B$=0 position as a function of the pump power $P$. The solid line reproduces the theoretical calculation according to our model whereas the dots indicate the experimental points obtained by Kalevich, *et al.* [@kalevich:567] (see text).[]{data-label="beff"}](beff.pdf){width="50.00000%"}
The 144 differential equations arising from the master equation (\[mastereq\]) were solved by the fourth-order Runge-Kutta method. Initially (before optical excitation) the localized electron-nuclear spin states are equally populated to $N/8$ in order to guarantee zero localized electron and nuclear spin polarization. The rest of the variables and density matrix elements were set to zero, *i.e.* we considered unpopulated CB electron, VB hole and paired trap singlet states.\
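A minimal sketch of the propagation scheme is shown below: a fixed-step fourth-order Runge-Kutta integrator acting on the $12\times12$ density matrix, i.e. on 144 coupled equations. For brevity only the coherent part and a toy generation term are retained, and the Hamiltonian is a random Hermitian placeholder; in the actual calculation Eq. (\[ham:eq1\]) and the full dissipator of Eq. (\[dissipator\]) enter the right-hand side.

```python
"""Sketch: fixed-step RK4 propagation of the 12x12 density matrix (hbar = 1).
Placeholder Hamiltonian and generation term; the dissipator is omitted here."""
import numpy as np

DIM = 12
rng = np.random.default_rng(0)
H = rng.normal(size=(DIM, DIM)) + 1j * rng.normal(size=(DIM, DIM))
H = 0.5 * (H + H.conj().T)                     # placeholder Hermitian Hamiltonian
Ggen = np.zeros((DIM, DIM))
Ggen[0, 0] = 1.0                               # hole generation
Ggen[2, 2] = Ggen[3, 3] = 0.5                  # CB electron generation (linear pump)

def rhs(rho):
    # + dissipator(rho) would be added here in the full model
    return -1j * (H @ rho - rho @ H) + Ggen

def rk4_step(rho, dt):
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    return rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rho = np.zeros((DIM, DIM), dtype=complex)
rho[4:12, 4:12] = np.eye(8) / 8.0              # centers equally populated (N = 1)
for _ in range(2000):
    rho = rk4_step(rho, 1e-3)
# the trace grows linearly because carriers are generated but not recombined here
print("Tr[rho] after propagation:", np.trace(rho).real)
```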
In Fig. 1 (a) the calculated SDR ratio under circularly polarized light is compared to the measured one. Values of the spin relaxation time of free and unpaired electrons on the centers $\tau_{s}$=180 ps and $\tau_{sc}$=2200 ps respectively and the effective hole life time $\tau_{h}$=13 ps as well as the typical ratio of the electron to the hole recombination coefficients $\gamma_{e}/\gamma_{h}=6$ ( where $\gamma_{h}=1/\tau_{h}N$) are estimated from previous time resolved PL experiments [@lagarde:208; @kalevich:455]. The ratio of bimolecular and hole recombination coefficients is set to $\gamma_{r}/\gamma_{h}=0.008$. The calculated curve reproduces well the main features of the SDR power dependence: in low pumping regime the SDR$_r$ grows monotonically until it reaches its maximum and finally, in the strong pumping regime, it decreases monotonically. In the low pumping regime this behavior has been attributed to the growing number of traps that dynamically spin polarize in the same direction as the spin of the majority photo generated CB electrons therefore augmenting their spin filtering effect. On the contrary, in the high pumping regime there is a large number of photo-generated CB electrons compared to the total number of centers. The CB electrons that are spin polarized in a direction antiparallel to the traps dynamically depolarize the latter thus reducing the spin filtering effect. In addition, non-spin dependent recombination channels, such as bimolecular recombination itself, might be present. The model also describes very satisfactorily the SDR magnetic field dependence. Whereas for zero magnetic field the maximum SDR$_r$ reaches approximately 225%, it increases up to SDR$_r$=260% for $B$=185 mT. Here the magnetic field seems to stabilize the localized electron spin polarization. For magnetic fields above 200 mT the SDR$_r$ saturates and remains constant. Whereas the photoluminescence intensity for linearly polarized light remains constant for all values of the magnetic field, it is enhanced for larger magnetic fields under circularly polarized light (not shown in the figure).\
To gain further insight into the mechanism behind the amplification of the spin filtering effect under magnetic field, we theoretically calculate the spin polarization degree for CB electrons, localized electron and coupled nucleus for different pump power values using the parameters’ values reported in Ref. \[ \]: $\tau_s=140 $ps, $\tau_{sc}=2200$ps, $\tau_h=30$ps, $\gamma_e/\gamma_h=30$ and $\gamma_r=0$. In Fig. \[figure4\] (a) we observe the CB electrons spin polarization degree $\rho^{CB}=2 S_z/n$ as a function of the magnetic field in Faraday configuration for different laser irradiances. Two main features can be evidenced: first, the amplification of the spin filtering effect as $B$ increases. Second, the shift of the CB spin polarization dip from $B$=0 T. Concerning the first feature, we observe the same trend as Kalevich *et al.* [@kalevich:035205]: the spin polarization degree increases from its minimum $\rho^{\rm min}_s$ up to a saturation value $\rho^{\rm max}_s$ as the Faraday magnetic field absolute value grows. The difference between these to extrema $\Delta P_s=P^{\rm max}_s-P^{\rm min}_s$ reaches a maximum at a pump power $P=20$ mW. As expected, the spin polarization degree of the localized electrons $\rho^{loc}=2S_{cz}/N_1$ in Fig. \[figure4\] (b) follows a similar trend as they are dynamically spin polarized by the CB electrons. However, in Fig. \[figure4\] (c) we observe a maximum for the nuclear spin polarization degree $\rho^N=2J_z/3N_1$ aligned on the same magnetic field value as the minimum of the spin polarization degree of CB and localized electrons. As the magnetic field in Faraday configuration is increased, the polarization of the nuclear spin decreases until it vanishes at approximately $250$ mT. The inflection point of the CB electrons, localized electrons and nuclei spin degree of polarization occurs close to $B=A\hbar/g_c\mu_B\sim 80$ mT where the Zeeman energy and the HFI are comparable in magnitude. From the previous analysis we can describe the effect of a Faraday magnetic field on the spin filtering mechanism as follows: Incident circularly polarized light spin pumps CB electrons. Under strong magnetic field (such that the electron Zeeman interaction dominates over the hyperfine one), the localized electron is decoupled form the Ga$^{2+}_i$ nucleus and the eigenstates of the Hamiltonian are pure electron spin states. During the recombination process of CB electrons into the traps, the resident electrons are dynamically spin polarized in the same direction as the incoming CB electrons to the maximum degree possible under the actual excitation conditions. The interaction with the Ga$^{2+}_i$ nuclei is negligible (due to the strong Zeeman effect), and the nuclei retain their zero average angular momentum. However, for zero or weak magnetic fields (such that now the hyperfine interaction dominates over the electron Zeeman one), the hyperfine interaction mixes the localized electron and Ga$^{2+}_i$ spin states. On one hand the efficiency of the spin filtering mechanism is weakened compared to a pure electron spin situation due to the partial lifting of the Pauli spin blockade. On the other hand, the same hyperfine interaction is responsible for the transfer of angular momentum to the nucleus leading to an increase of the nuclear spin polarization.\
The second feature evidenced in Fig. 3 (a) is the asymmetric dependence of the average electron and nuclear polarization on the applied magnetic field direction for an excitation of a given helicity [@kalevich:567]. This feature is observable for nuclei with angular momentum larger than $1/2$, as is the case here for the Ga$^{2+}_i$ interstitial. This shift from $B$=0 of the minimum of the conduction band electron spin polarization reflects an equal shift of the spin polarization of the electrons localized in the paramagnetic centers. These shifts arise, under optical pumping with circular polarization, from the imbalance of the populations of the electron-nuclear eigenstates in the dynamical equilibrium (see Figs. \[probs\],\[probs\_pure\]) compared to a uniform population condition. The increase of this shift with an increase of the excitation power reflects the modification of this dynamical equilibrium, the shift eventually saturating at a given value. We emphasize here that this power dependent asymmetry is only obtained under dynamical equilibrium conditions. Although this behavior closely resembles an Overhauser effect, we clearly see that the concept of an effective nuclear magnetic field is not applicable in this context of a localized electron on a Ga nucleus in a strong coupling regime.\
The calculated value of the conduction band polarization dip shift $\delta_{dip}$ from $B$=0 is plotted as a function of the pump power in Fig. \[beff\]. The dots indicate the experimental results obtained by Kalevich *et al.* [@kalevich:567] and the solid line are the theoretical results calculated from the displacements simulated in Fig. (\[figure4\]) (a) with the parameters corresponding to our own experimental results. We notice that $\delta_{dip}$ increases until it saturates in the high power regime as the nuclei acquire their maximum spin polarization and saturate. This estimation is consistent with the experimental results. The nuclear spin polarization maximum is forced to displace exactly to the same value of $\delta_{dip}$ as the minimum of the CB and bounded electrons degree of spin polarization in order to ensure spin transfer conservation, an essential characteristic of hyperfine interaction \[see Eqs. (\[momcons:eq1\]) and (\[momcons:eq2\])\]. Thanks to this mechanism, it is possible to access the nuclear and localized electron spin polarizations from a measurement of the PL (or photoconductivity) polarization degree. As previously stated, under the influence of a magnetic field in Faraday configuration, strong enough to make the Zeeman and HFI energies comparable, the coupling between bounded electrons and nuclei becomes less efficient and the spin mixing is lifted inhibiting the transfer between bounded electrons and nuclei. The spin mixing occurring close to $B$=0 T is very large between the nuclear-bounded electron states $\left\vert 1/2,-1/2\right\rangle$ and $\left\vert -1/2,1/2\right\rangle$ and between states $\left\vert 3/2,-1/2\right\rangle$ and $\left\vert -3/2,1/2\right\rangle$ as can be seen in Fig. \[probs\_pure\]. As the magnetic field in Faraday configuration is increased this mixing is lifted obtaining higher probabilities for those states having positive bounded electron angular momentum for a $\sigma^+$ excitation. The remaining spin states do not vary considerably giving thus an overall increase in the bounded electron spin polarization and an overall decrease in the nuclear spin polarization.
![(color on line) The calculated nuclear spin polarization degree as a function of CB electron spin polarization degree for $40$ mW pump power and various values of the external magnetic field. []{data-label="figure7"}](pol.pdf){width="8.00"}
Fig. \[figure7\] presents the nuclear spin polarization degree $\rho^N=2J_z/3N_1$ as a function of the CB electron spin polarization degree $\rho^{CB}=2S_z/n$ for various values of the magnetic field; the dependence is linear. As expected, larger values of the magnetic field impede the polarization of the nuclear spins, as the spin state mixing with the bound electrons is lifted.
Conclusions {#conclusions}
===========
In summary we have demonstrated the possibility of accessing the nuclear spin states of an ensemble of Ga$^{2+}_i$ centers through a measurement of the optical polarization of CB electrons in GaAsN. Optically spin pumped CB electrons dynamically polarize the localized electrons in Ga$^{2+}_i$ centers by spin dependently recombining with them. The spin polarized localized electrons lose angular momentum to their corresponding nuclei through the hyperfine interaction. A control of the degree of the spin polarization of the Ga$^{2+}_i$ nuclei via the dynamical polarization of CB electrons is thus possible. Our calculations show that the nuclear spin polarization might be tuned with different excitation parameters such as pump power, degree of circular polarization of the incident light, and the intensity of a magnetic field in Faraday configuration.
The model developed here describes all the essential features of the experimental results, such as the dependence of the SDR$_r$ and of the spin polarization degree on the magnetic field and laser power. It is capable of reproducing the shift of the CB electron spin polarization degree curves as a function of the magnetic field in Faraday configuration [@kalevich:035205; @kalevich:567]. This feature is shown to be caused by a dynamical equilibrium of the populations of the electron-nuclear states in the traps, strongly coupled via the hyperfine interaction and driven by the spin dependent recombination.
Part of this work has been done in the framework of the EU Cost action N$^\circ$ MP0805. Alejandro Kunold acknowledges financial support from UAM-A CB department and thanks INSA-Toulouse for a two-months professorship position.
Matrix representation of the operators {#operators .unnumbered}
======================================
Here we present the matrix representation of the number and spin operators needed to build the master equation. They are all $12\times 12$ matrices written in the basis: $$\begin{split}
\mathcal{B} =\{
\left\vert 1 \right\rangle=\left\vert h \right\rangle, \left\vert 2 \right\rangle=\left\vert \mathrm{singlet} \right\rangle,
\left\vert 3 \right\rangle=\left\vert c\downarrow \right\rangle,
\left\vert 4 \right\rangle=\left\vert c \uparrow \right\rangle, \\
\left\vert 5 \right\rangle=\left\vert -\frac{3}{2} \downarrow \right\rangle,
\left\vert 6 \right\rangle=\left\vert -\frac{1}{2} \downarrow \right\rangle,
\left\vert 7 \right\rangle=\left\vert \frac{1}{2} \downarrow \right\rangle,
\left\vert 8 \right\rangle=\left\vert \frac{3}{2} \downarrow \right\rangle, \\
\left\vert 9 \right\rangle=\left\vert -\frac{3}{2} \uparrow \right\rangle,
\left\vert 10 \right\rangle=\left\vert -\frac{1}{2} \uparrow \right\rangle,
\left\vert 11 \right\rangle=\left\vert \frac{1}{2} \uparrow \right\rangle,
\left\vert 12 \right\rangle=\left\vert \frac{3}{2} \uparrow \right\rangle\}
\end{split}$$ where states from 1 to 4 represent respectively the valence band hole, the paired localized electron singlet state and the conduction band electron with their spin represented by the arrows. States from 5 to 12 each represents a state of a given projection of the nuclear and localized electron spins.
The VB hole and CB electron number operators are given by $$\begin{aligned}
\left(\hat p\right)_{ij}
&=&\delta_{i,1}\delta_{j,1},\\
\left(\hat n\right)_{ij}
&=& \delta_{i,3}\delta_{j,3}+\delta_{i,4}\delta_{j,4},
$$ The unpaired and paired trap number operators can be expressed as $$\begin{aligned}
\left(\hat N_1\right)_{ij}
&=&\sum_{k=1,8}\delta_{i,k+4}\delta_{j,k+4},\\
\left(\hat N_2\right)_{ij}
&=&\delta_{i,2}\delta_{j,2}.
$$ The CB electron spin operators are given by $$\begin{aligned}
\left(\hat{S}_x\right)_{ij}
&=& \frac{1}{2}\left(\delta_{i,3}\delta_{j,4}
+\delta_{i,4}\delta_{j,3}\right),\\
\left(\hat{S}_y\right)_{ij}
&=&\frac{i}{2}\left(\delta_{i,3}\delta_{j,4}
-\delta_{i,4}\delta_{j,3}\right),\\
\left(\hat{S}_z\right)_{ij}
&=& \frac{1}{2}\left(-\delta_{i,3}\delta_{j,3}
+\delta_{i,4}\delta_{j,4}\right).
$$ The nuclear spin operators are given by $$\begin{aligned}
\left(I_x\right)_{ij}
&=&
\frac{\sqrt{3}}{2}\left(
\delta_{i,5}\delta_{j,6}+\delta_{i,6}\delta_{j,5}
+\delta_{i,7}\delta_{j,8}+\delta_{i,8}\delta_{j,7}\right.\nonumber\\
&&\left.+\delta_{i,9}\delta_{j,10}+\delta_{i,10}\delta_{j,9}
+\delta_{i,11}\delta_{j,12}+\delta_{i,12}\delta_{j,11}
\right)\nonumber\\
&&+\left(\delta_{i,6}\delta_{j,7}+\delta_{i,7}\delta_{j,6}
+\delta_{i,10}\delta_{j,11}+\delta_{i,11}\delta_{j,10}\right),\nonumber\\
\\
\left(I_y\right)_{ij}
&=&
i\frac{\sqrt{3}}{2}\left(
\delta_{i,5}\delta_{j,6}-\delta_{i,6}\delta_{j,5}
+\delta_{i,7}\delta_{j,8}-\delta_{i,8}\delta_{j,7}\right.\nonumber\\
&&\left.+\delta_{i,9}\delta_{j,10}-\delta_{i,10}\delta_{j,9}
+\delta_{i,11}\delta_{j,12}-\delta_{i,12}\delta_{j,11}
\right)\nonumber\\
&&+i\left(\delta_{i,6}\delta_{j,7}-\delta_{i,7}\delta_{j,6}
+\delta_{i,10}\delta_{j,11}-\delta_{i,11}\delta_{j,10}\right),\nonumber\\
\\
\left(I_z\right)_{ij}
&=&\frac{3}{2}\left(
-\delta_{i,5}\delta_{j,5}+\delta_{i,8}\delta_{j,8}
-\delta_{i,9}\delta_{j,9}+\delta_{i,12}\delta_{j,12}
\right)\nonumber\\
&&+\frac{1}{2}\left(
-\delta_{i,6}\delta_{j,6}+\delta_{i,7}\delta_{j,7}
-\delta_{i,10}\delta_{j,10}+\delta_{i,11}\delta_{j,11}
\right)\nonumber\\\end{aligned}$$ The auxiliary operators are useful in expressing the dissipator and some of the other operators. They can be expressed as $$\hat{\vec{\sigma}}_m=\left(\hat{\sigma_{x,m}}, \hat{\sigma_{y,m}},\hat{\sigma_{z,m}}\right)$$ with the definitions $$\begin{aligned}
\left({\hat{\sigma}}_{x,-\frac{3}{2}}\right)_{ij}
&=&\frac{1}{2}\left(\delta_{i,5}\delta_{j,9}
+\delta_{i,9}\delta_{j,5}\right), \\
\left({\hat{\sigma}}_{x,-\frac{1}{2}}\right)_{ij}
&=& \frac{1}{2}\left(\delta_{i,6}\delta_{j,10}
+\delta_{i,10}\delta_{j,6}\right),\\
\left({\hat{\sigma}}_{x,\frac{1}{2}}\right)_{ij}
&=& \frac{1}{2}\left(\delta_{i,7}\delta_{j,11}
+\delta_{i,11}\delta_{j,7}\right), \\
\left({\hat{\sigma}}_{x,\frac{3}{2}}\right)_{ij}
&=& \frac{1}{2}\left(\delta_{i,8}\delta_{j,12}
+\delta_{i,12}\delta_{j,8}\right).
$$ $$\begin{aligned}
\left({\hat{\sigma}}_{y,-\frac{3}{2}}\right)_{ij}
&=&\frac{i}{2}\left(\delta_{i,5}\delta_{j,9}
-\delta_{i,9}\delta_{j,5}\right), \\
\left({\hat{\sigma}}_{y,-\frac{1}{2}}\right)_{ij}
&=& \frac{i}{2}\left(\delta_{i,6}\delta_{j,10}
-\delta_{i,10}\delta_{j,6}\right),\\
\left({\hat{\sigma}}_{y,\frac{1}{2}}\right)_{ij}
&=& \frac{i}{2}\left(\delta_{i,7}\delta_{j,11}
-\delta_{i,11}\delta_{j,7}\right), \\
\left({\hat{\sigma}}_{y,\frac{3}{2}}\right)_{ij}
&=& \frac{i}{2}\left(\delta_{i,8}\delta_{j,12}
-\delta_{i,12}\delta_{j,8}\right),
$$ $$\begin{aligned}
\left({\hat{\sigma}}_{z,-\frac{3}{2}}\right)_{ij}
&=&\frac{1}{2}\left(-\delta_{i,5}\delta_{j,5}
+\delta_{i,9}\delta_{j,9}\right), \\
\left({\hat{\sigma}}_{z,-\frac{1}{2}}\right)_{ij}
&=& \frac{1}{2}\left(-\delta_{i,6}\delta_{j,6}
+\delta_{i,10}\delta_{j,10}\right),\\
\left({\hat{\sigma}}_{z,\frac{1}{2}}\right)_{ij}
&=& \frac{1}{2}\left(-\delta_{i,7}\delta_{j,7}
+\delta_{i,11}\delta_{j,11}\right), \\
\left({\hat{\sigma}}_{z,\frac{3}{2}}\right)_{ij}
&=& \frac{1}{2}\left(-\delta_{i,8}\delta_{j,8}
+\delta_{i,12}\delta_{j,12}\right),\end{aligned}$$ The number operators of nuclear state $m=\left\{-3/2, -1/2,1/2,3/2\right\}$ are: $$\begin{aligned}
\left({\hat{\iota}}_{-\frac{3}{2}}\right)_{ij}
&=&\delta_{i,5}\delta_{j,5}
+\delta_{i,9}\delta_{j,9}, \\
\left({\hat{\iota}}_{-\frac{1}{2}}\right)_{ij}
&=& \delta_{i,6}\delta_{j,6}
+\delta_{i,10}\delta_{j,10},\\
\left({\hat{\iota}}_{\frac{1}{2}}\right)_{ij}
&=& \delta_{i,7}\delta_{j,7}
+\delta_{i,11}\delta_{j,11}, \\
\left({\hat{\iota}}_{\frac{3}{2}}\right)_{ij}
&=& \delta_{i,8}\delta_{j,8}
+\delta_{i,12}\delta_{j,12},\end{aligned}$$ The remaining operators can be expressed in terms of the auxiliary operators. The localized electron spin operators are then given by $$\hat{\boldsymbol{S}}_c=\sum_{m=-3/2}^{3/2}{\hat{\boldsymbol{\sigma}}}_{m},$$ and the nuclear angular momentum operator: $$\hat{I}_z=\sum_{m=-3/2}^{3/2}m{\hat{\iota}}_{m}.$$ The generation term is given by $$\hat{G} =
G\hat p+\frac{G}{2}\hat n+(G_+-G_-)\hat S_z,$$ where the first two terms account for the photogeneration of VB holes and CB electrons and the last for the generation of spin polarization in the sample.
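For convenience, the sketch below assembles the operators listed above numerically and verifies a few of their defining properties: Hermiticity, the sum rules $\hat{\boldsymbol{S}}_c=\sum_m\hat{\boldsymbol{\sigma}}_m$, $\hat N_1=\sum_m\hat{\iota}_m$ and $\hat I_z=\sum_m m\,\hat{\iota}_m$, and the orthogonality used in the main text. It is only an illustrative transcription of the definitions above.

```python
"""Sketch: the 12x12 operators of this Appendix in NumPy, with consistency checks."""
import numpy as np

DIM = 12

def ket_bra(i, j):
    m = np.zeros((DIM, DIM), dtype=complex)
    m[i - 1, j - 1] = 1.0
    return m

# number operators
p_op  = ket_bra(1, 1)
N2_op = ket_bra(2, 2)
n_op  = ket_bra(3, 3) + ket_bra(4, 4)
N1_op = sum(ket_bra(k, k) for k in range(5, 13))

# CB electron spin
Sx = 0.5  * (ket_bra(3, 4) + ket_bra(4, 3))
Sy = 0.5j * (ket_bra(3, 4) - ket_bra(4, 3))
Sz = 0.5  * (ket_bra(4, 4) - ket_bra(3, 3))

# auxiliary operators, one per nuclear projection m = -3/2 ... 3/2
pairs = [(5, 9), (6, 10), (7, 11), (8, 12)]          # (spin-down, spin-up) states
sig_x = [0.5  * (ket_bra(a, b) + ket_bra(b, a)) for a, b in pairs]
sig_y = [0.5j * (ket_bra(a, b) - ket_bra(b, a)) for a, b in pairs]
sig_z = [0.5  * (ket_bra(b, b) - ket_bra(a, a)) for a, b in pairs]
iota  = [ket_bra(a, a) + ket_bra(b, b) for a, b in pairs]

m_vals = (-1.5, -0.5, 0.5, 1.5)
Iz  = sum(m * op for m, op in zip(m_vals, iota))     # I_z = sum_m m * iota_m
Scz = sum(sig_z)                                     # S_cz = sum_m sigma_{z,m}

assert all(np.allclose(o, o.conj().T) for o in sig_x + sig_y + sig_z)  # Hermiticity
assert np.allclose(sum(iota), N1_op)                 # N_1 sum rule
assert np.allclose(Iz, Iz.conj().T)
assert np.isclose(np.trace(Scz @ Sz), 0.0)           # distinct operators are orthogonal
print("operator construction consistent")
```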
---
author:
- 'Takuya <span style="font-variant:small-caps;">Iwashita</span>$^1$, Yasuya <span style="font-variant:small-caps;">Nakayama</span>$^2$, and Ryoichi <span style="font-variant:small-caps;">Yamamoto</span>$^{1,3}$'
subtitle: Toward direct numerical simulation of particle dispersions
title: 'Velocity autocorrelation function of fluctuating particles in incompressible fluids.'
---
Introduction
============
The motions of small fluctuating particles in viscous fluids have been studied for a long time. Although a theoretical or numerical analysis based on the coupled motions of the particles and the host fluid is very complicated, it becomes rather simple if one considers only the particles’ motions, assuming that the host fluid degrees of freedom can be safely projected out from the entire set of degrees of freedom of the dispersion. One such model is the well-known generalized Langevin equation (GLE) for Brownian particles, [*i.e.*]{}, $$\begin{aligned}
M_i\frac{d{\bm V}_i}{dt} &=& -\int_{-\infty}^{t} ds\sum_j{ \Gamma}_{ij}(t-s){\bm
V}_j(s) + {\bm G}_i(t),\\
\langle {\bm G}_i(t) \cdot {\bm G}_j(0)\rangle &=& 3k_BT \Gamma_{ij} (t),%\delta_{ij}\end{aligned}$$ where $M_i$ and ${\bm V}_i$ denotes the mass and the translational velocity of the [*i*]{}-th particle, respectively. $ \Gamma_{ij}(t)$ is a friction tensor, which represents the effect of hydrodynamic interactions(HI) between [*i*]{}-th and [*j*]{}-th particles. ${\bm G}_i$ is the random force acting on the [*i*]{}-th particle induced by thermal fluctuations of the solvent, $k_B$ is Boltzmann constant, and $T$ is the temperature of Brownian particles. For a single spherical particle ($i=1$) immersed in a infinitely large host fluid, the analytic form of the time-dependent friction[@Landau] is known as $$\int dt \Gamma_{11}(t)\exp(-i\omega t) = \hat{\Gamma}_{11} (-i\omega) = 6\pi\eta a (1 +a\sqrt{-i\omega/\nu} - i\omega
a^2/9\nu)
\label{gamma}$$ where $\hat{\Gamma_{11}}(-i\omega)$ is the Fourier transform of $\Gamma_{11}(t)$ and $\omega$ is the angular frequency. The first term corresponds to the normal Stokes friction for a spherical particle of radius $a$ in a Newtonian fluid whose viscosity is $\eta$. The second term represents the memory effect, which is related to the momentum diffusion in a viscous medium. Here the kinematic viscosity is defined as $\nu=\eta/\rho_f$ with $\rho_f$ being the density of the fluid. The third term corresponds to the effect of the acceleration of the host fluid surrounding the tagged particle when the particle is accelerated through the host fluid. Using Eq.(\[gamma\]), the hydrodynamic GLE can be solved analytically. The translational velocity autocorrelation function (VACF) $\langle\bm V_i(t)\cdot\bm V_i(0)\rangle/3$ then obtained is known to exhibit the characteristic power-law relaxation for long-time region, which is widely known as the “hydrodynamic long-time tail” [@ANA1; @ANA0; @ANA2; @ANA]. For dispersions composed of many particles interacting via HI, the situation is still not straightforward because we do not know the true analytic expression for the hydrodynamic friction tensor $\Gamma_{ij}(t)$. Some approximated expressions, such as Oseen or Rotne-Prager-Yamakawa(RPY) tensor, can be obtained by introducing the Stokes approximation, however, those expressions completely neglect the memory effect that corresponds to the second term of Eq.(\[gamma\]). This means that the hydrodynamic long-time tail can not be reproduced correctly with Oseen or RPY tensor. In the present study, we developed a numerical method to take into account the effects of hydrodynamics directly by simultaneously solving the Navier-Stokes equation for the host fluid with the Brownian motions of the particles. We first examined the VACF for a single Brownian particle and compared it with the analytical form mentioned above. Secondly, we examined the rotational motions of a single Brownian particle. We furthermore examined the motions of Brownian particles in harmonic potentials to check the validity of our method.
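For reference, the VACF implied by Eq. (\[gamma\]) can also be evaluated numerically, without performing the analytical inversion, through the standard admittance route $\langle{\bm V}(t)\cdot{\bm V}(0)\rangle/3=(2k_BT/\pi)\int_0^\infty \mathrm{Re}[\mu(\omega)]\cos(\omega t)\,d\omega$ with $\mu(\omega)=[-i\omega M+\hat\Gamma_{11}(-i\omega)]^{-1}$. The sketch below does this with the illustrative units used later in the text ($\eta=\rho_f=1$, $a=5$, $\rho_p=\rho_f$); it is only a cross-check of the hydrodynamic long-time tail, not the simulation method of this work.

```python
"""Sketch: translational VACF from the frequency-dependent friction of Eq. (3)
via C(t) = (2 kT / pi) Int_0^inf Re[mu(w)] cos(w t) dw, mu = 1/(-i w M + Gamma).
Illustrative units: eta = rho_f = 1, a = 5, neutrally buoyant particle."""
import numpy as np
from scipy.integrate import quad

eta, rho_f, a, kT = 1.0, 1.0, 5.0, 1.0
nu = eta / rho_f
M = 4.0 / 3.0 * np.pi * a**3 * rho_f            # particle mass (rho_p = rho_f)

def gamma_hat(w):                               # Eq. (3) evaluated at -i*omega
    return 6.0 * np.pi * eta * a * (1.0 + a * np.sqrt(-1j * w / nu)
                                    - 1j * w * a**2 / (9.0 * nu))

def re_mobility(w):
    return (1.0 / (-1j * w * M + gamma_hat(w))).real

def vacf(t):
    val, _ = quad(re_mobility, 0.0, np.inf, weight="cos", wvar=t, limit=200)
    return 2.0 * kT / np.pi * val

for t in (1.0, 10.0, 100.0, 1000.0):
    tail = kT / (12.0 * rho_f * (np.pi * nu * t) ** 1.5)   # hydrodynamic tail
    print(f"t = {t:7.1f}   VACF = {vacf(t):.3e}   tail = {tail:.3e}")
```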
Simulation method
=================
Here we briefly explain the basic equations of our numerical model since they are explained in detail elsewhere[@Iwa]. A smooth profile function $0\le\phi(x,t)\le 1$ is introduced to define fluid ($\phi=0$) and particle ($\phi=1$) domains on a regular Cartesian grid. Those two domains are separated by thin interface regions whose thickness is $\xi$. The position of the [*i*]{}-th particle is ${\bm R_i}$, the translational velocity is ${\bm V_i}$, and the rotational velocity is ${\bm \Omega_i}$. The motion of the [*i*]{}-th particle with mass $M_i$ and moment of inertia $\bm I_i$ is governed by the following Langevin-type equations, $$\begin{aligned}
M_i \frac{d{\bm V}_i}{dt}&=& {\bm F}^H_i
+ {\bm F}^C_i + {\bm F}^{ex}_i + {\bm G}^V_i,\ \ \
\frac{d{\bm R}_i}{dt} = {\bm V}_i,\\
\bm I_i \cdot\frac{d{\bm \Omega}_i}{dt}&=& {\bm N}^H_i + {\bm G}^\Omega_i,\end{aligned}$$ where ${\bm F}_i^H$ and $\bm N_i^H$ are the hydrodynamic forces and torques acting on the [*i*]{}-th particle due to HI, respectively. $\bm F^C_i$ and $\bm F^{ex}_i$ denote the direct particle-particle interaction and external force. ${\bm G}_i^V$ and ${\bm G}_i^\Omega$ are the random force and torque due to thermal fluctuations defined stochastically as $$\begin{aligned}
\langle {\bm G}_i^V\rangle&=\langle {\bm G}_i^\Omega\rangle={\bm 0},\\
\langle {\bm G}^V_i(t)\cdot {\bm G}^V_j(0)\rangle&=3k_BT\alpha^V\delta(t)\delta_{ij},\\
\langle {\bm G}^\Omega_i(t)\cdot {\bm G}^\Omega_j(0)\rangle&=3k_BT\alpha^\Omega\delta(t)\delta_{ij},\end{aligned}$$ where $\alpha^V$ and $\alpha^\Omega$ are parameters to control the temperature $T$. The motions of the host fluid are governed by the Navier-Stokes equation $$\begin{aligned}
\rho_f(\partial_t {\bm v} + {\bm v}\cdot \nabla {\bm v}) &=&-\nabla p +\eta\nabla^2 {\bm v}+\rho_f\phi{\bm f_p}\end{aligned}$$ with the incompressible condition $\nabla\cdot{\bm v}= 0$, where $\bm v$ and $p$ are the velocity and the pressure fields of the host fluid, respectively, and $\phi \bm f_p$ is the body force defined so that the rigidity of the particles is automatically satisfied. Note that ${\bm F}_i^H$ and $\bm N_i^H$ are determined from the body force $\phi \bm f_p$ [@Naka; @Naka1].
Results and discussion
======================
A single spherical particle fluctuating in a Newtonian fluid was simulated in the absence of external forces $\bm F^{ex}_i=0$ as depicted in Fig.1. We take the mesh size $\Delta$ and $\tau=\Delta^2\rho_f/\eta$ as the units of space and time. Simulations have been performed with $\eta=1$, $a=5$, and $\xi=2$ in a three-dimensional cubic box composed of $64 \times 64 \times 64$ grid points. The particle and fluid densities are identically set to be unity, $\rho_p=\rho_f=1$.
![A snapshot of a single Brownian particle immersed in a Newtonian fluid. The one eighth of the entire system is graphically displayed. The color map on the horizontal plane shows the value of the local fluid velocity in the $x$ direction.[]{data-label="SNAP"}](particle.eps){width="68mm"}
![The translational velocity autocorrelation function $\langle{\bm V}_i(t)\cdot{\bm V}_i(0)\rangle/3$ (triangle) and the rotational velocity autocorrelation function $\langle{\bm \Omega}_i(t)\cdot{\bm \Omega}_i(0)\rangle/3$ (circle) for a single Brownian particle fluctuating in a Newtonian fluid. The simulation data was taken at $k_BT = 0.83$. The solid lines indicate the analytic results for the translational [@ANA] and the rotational motions. The dotted lines show power-laws, $Bt^{-3/2}$ with $B=k_BT/12\rho_f(\pi\nu)^{3/2}$ for the translational motions and $Ct^{-5/2}$ with $C=\pi k_BT/32\rho_f(\pi\nu)^{5/2}$ for the rotational motions. The dashed lines indicate the Markovian VACF and RVACF, which decay exponentially as $\exp(-t/\tau_B)$ and $\exp(-t/\tau_r)$, respectively. ](trans_rot.eps)
\[VACF\]
Figure 2 shows our simulation results ($\triangle$) of VACF for a single Brownian particle fluctuating in a Newtonian host fluid at $k_BT=0.83$. The temperature $T$ was determined by comparing the long-time diffusion coefficient $D_{sim}$ obtained from simulations with $D^V=k_BT/6\pi \eta a K(\Phi)$, where $K(\Phi)$ takes into account the effects of finite volume fraction [@Zick] and $\Phi$ denotes the volume fraction. The volume fraction of a single particle is $\Phi=0.002$. One finds that the VACF approaches asymptotically to the power-law line with the exponent $-3/2$, and the long-time behavior of our simulation agrees well with the analytical solution[@ANA] of the hydrodynamic GLE rather than the Markovian VACF which neglects memory effects. This behavior indicates that the memory effects are accurately taken into account. Similar to the translational motions, we have studied the rotational motions of the Brownian particle in the host fluid. The GLE of the rotational motions for a single spherical particle can be written as $$\begin{aligned}
I_i\dot{\bm \Omega}_i&=-\int^t_{-\infty}ds\mu(t-s){\bm \Omega}_i(s) + {\bm G}_i(t),\\
\langle {\bm G}_i\rangle &=0, \ \ \ \langle {\bm G}_i(t) \cdot {\bm G}_i(0)\rangle = 3k_BT \mu (t),\end{aligned}$$ where the time-dependent friction $\mu(t)$ has the form $\hat{\mu}(-i\omega)=8\pi\eta a^3[1 - i\omega a^2/3\nu(1 + a\sqrt{-i\omega/\nu})]$[@RAMB] in Fourier space. The first term in $\hat{\mu}$ is the Stokes friction and the second term represents the memory effect due to the kinematic viscosity of the fluid. The GLE can be solved analytically, and the analytical solution of the rotational velocity autocorrelation function (RVACF) is obtained in the following form $$\begin{aligned}
\langle {\bm \Omega}_i(t) &\cdot& {\bm \Omega}_i(0)\rangle \nonumber\\
=&& -\frac{3k_BT\nu}{8\pi\eta a^5}\int_0^{\infty}\frac{dy}{3\pi} \exp(-yt/\tau_\nu)\Biggl[ \frac{y^{3/2}}{[1-(\frac{\tau_r}{\tau_\nu} + \frac{1}{3}) y]^2 + y (1 - \frac{\tau_r}{\tau_\nu}y)^2 }\Biggl],\label{ROT_ANA}\end{aligned}$$ where $\tau_\nu=a^2/\nu$ and $\tau_r=I_i/8\pi\eta a^3$. In Fig.2, simulation results ($\bigcirc $) of RVACF are also plotted. The RVACF clearly shows the asymptotic approach to the hydrodynamic long-time tail with the exponent $-5/2$ which agrees well with the analytical solution (\[ROT\_ANA\]) rather than a simple Markovian RVACF. By comparing the present simulation results with the corresponding analytical solutions more in detail, one may notice that some discrepancies become notable for $t<\tau_B$ or $t<\tau_r$, where $\tau_B =M_i/6\pi\eta a=2 a^2\rho_p/9\eta\simeq5$ is the Brownian relaxation time and $\tau_r =I_i/8\pi\eta a^3=3\tau_B/10\simeq 1.5$ is the Brownian rotational relaxation time. For opposite cases $t>\tau_B$ or $t>\tau_r$, however, the agreements between the numerical results and the analytical solutions are excellent. This is because we neglected memory effects in thermal noises ${\bm G}^V_i$ and ${\bm G}^{\Omega}_i$. We however believe that the long-time behavior of our numerical model is valid for $t>\tau_B$ since $\tau_B$ is much longer than the memory times of the thermal noises.
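The RVACF can be cross-checked in the same manner as the translational case, by a Fourier inversion of the rotational admittance built from $\hat\mu(-i\omega)$, instead of evaluating the branch-cut integral (\[ROT\_ANA\]) directly. The sketch below uses the same illustrative units as above and prints, for comparison, the $t^{-5/2}$ tail quoted in the caption of Fig. 2.

```python
"""Sketch: rotational VACF from the frequency-dependent rotational friction,
C_R(t) = (2 kT / pi) Int_0^inf Re[1/(-i w I + mu(-i w))] cos(w t) dw.
Illustrative units: eta = rho_f = 1, a = 5, rho_p = rho_f."""
import numpy as np
from scipy.integrate import quad

eta, rho, a, kT = 1.0, 1.0, 5.0, 1.0
nu = eta / rho
I_mom = 8.0 / 15.0 * np.pi * a**5 * rho         # moment of inertia of the sphere

def mu_hat(w):
    x = a * np.sqrt(-1j * w / nu)
    return 8.0 * np.pi * eta * a**3 * (1.0 - 1j * w * a**2 / (3.0 * nu * (1.0 + x)))

def re_admittance(w):
    return (1.0 / (-1j * w * I_mom + mu_hat(w))).real

def rvacf(t):
    val, _ = quad(re_admittance, 0.0, np.inf, weight="cos", wvar=t, limit=200)
    return 2.0 * kT / np.pi * val

for t in (2.0, 10.0, 50.0):
    tail = np.pi * kT / (32.0 * rho * (np.pi * nu) ** 2.5) / t**2.5
    print(f"t = {t:6.1f}   RVACF = {rvacf(t):.3e}   tail = {tail:.3e}")
```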
There exist many other characteristic time-scales in particle dispersions. Important ones are the kinematic time-scale $\tau_\nu=a^2 \rho_f/\eta = 25$ which measures the momentum diffusion over the particle size and the diffusion time-scale $\tau_D=a^2/D \simeq 3\times 10^3$ which measures the particle diffusion over the particle size. As one can see in Fig.2, the present model works quite well for the time-scales comparable to $\tau_\nu$ and $\tau_D$, while it becomes inaccurate for $t<\tau_B$. In order to test the validity of our method for the long-time behavior of Brownian particles, we next applied the present model to simulate Brownian particles fluctuating in external harmonic potentials. The potentials are introduced with the form $$\begin{aligned}
{\bm F_i}^{ex}=-k({\bm R}_i - {\bm R}_i^{eq})=-k\Delta {\bm R_i},\end{aligned}$$ where ${\bm R}_i^{eq}$ is the $i$th particle’s equilibrium position and $k$ is the spring constant. Figure 3 shows the positional autocorrelation function $\langle\Delta{\bm R}_i(t)\cdot\Delta{\bm R}_i(0)\rangle/3$ of two Brownian particles in harmonic potentials whose minimum positions are separated by a fixed distance of $5a$. The pair of particles are interacting only hydrodynamically, and there exists no direct interactions between them. The spring constant is set to $k=10$, and the temperature is $k_BT\simeq 0.0066$, which was determined by the average potential energy $k_BT=k\langle\Delta{\bm R}_i^2\rangle/3$. The simulation results ($\bigcirc $) agree well with the hydrodynamic analytical solution[@ANA] in harmonic potentials which account for the effects of finite volume fraction. The analytical solution was derived by solving the GLE of a single Brownian particle in a harmonic potential which includes the modified Stokes friction $\zeta=6\pi\eta a K(\Phi)$. The correlation functions decay much slower than the Markovian relaxation functions. We also confirmed that the validity of our method is excellent for $t\ge\tau_\nu$.
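The Markovian reference curve of Fig. 3 (the dotted line) can be generated independently by integrating the memoryless Langevin equation $M_i\dot{\bm V}_i=-6\pi\eta a{\bm V}_i-k\Delta{\bm R}_i+{\bm G}_i$ with white noise. A one-dimensional Euler-Maruyama sketch with illustrative parameters is given below; it only shows how such a reference autocorrelation is estimated and contains no hydrodynamic memory.

```python
"""Sketch: Markovian Langevin dynamics in a harmonic potential (Euler-Maruyama)
and the positional autocorrelation estimated from the trajectory.
Illustrative parameters; one Cartesian component only."""
import numpy as np

eta, a, k, kT = 1.0, 5.0, 10.0, 0.0066
M = 4.0 / 3.0 * np.pi * a**3                  # particle mass (rho_p = 1)
zeta = 6.0 * np.pi * eta * a                  # Stokes friction
dt, nsteps = 2e-2, 200_000
rng = np.random.default_rng(1)

R = np.empty(nsteps)
r = v = 0.0
for i in range(nsteps):
    noise = rng.normal() * np.sqrt(2.0 * kT * zeta * dt)
    v += (-(zeta * v + k * r) * dt + noise) / M
    r += v * dt
    R[i] = r

R -= R.mean()
print("<dR^2> =", R.var(), " (equipartition kT/k =", kT / k, ")")
for lag in (0, 200, 400, 800):
    acf = np.mean(R[: nsteps - lag] * R[lag:]) if lag else R.var()
    print(f"t = {lag * dt:5.1f}   <dR(t) dR(0)> = {acf:.2e}")
```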
![The positional autocorrelation function $\langle\Delta{\bm
R}_i(t)\cdot\Delta{\bm R}_i(0)\rangle/3$ (circle) for a system composed of two Brownian particles in harmonic potentials $-k({\bm R}_i - {\bm
R}_i^{eq})$. The distance between their equilibrium positions is $|{\bm
R}_1^{eq}-{\bm R}_2^{eq}|=5a$. The solid line shows the analytical solution[@ANA]. The dotted line shows the Markovian functions, [*i.e.*]{} the solution of $M_i\dot{\bm V}_i = -6\pi\eta a\bm V_i - k\bm R_i + \bm G_i.$](supplement_auto.eps)
Conclusion
==========
We proposed a numerical model to simulate Brownian particles fluctuating in Newtonian host fluids. To test the validity of the model, the translational velocity autocorrelation function (VACF), the rotational velocity autocorrelation function (RVACF), and the positional autocorrelation function of fluctuating Brownian particles were calculated in some simple situations for which analytical solutions were obtained. We compared our numerical results with the analytical solutions and found excellent agreement between them, especially in the long-time region $t>\tau_B$, while some discrepancies were found in the short-time region $t<\tau_B$. This is because our model is designed to simulate the correct long-time behavior of Brownian particles in host fluids. Applications of the present method to more complicated situations are in progress.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was partly supported by a grant from the Hosokawa Powder Technology Foundation.
[99]{} E. M. Lifshitz and L. D. Landau: [*Fluid Mechanics*]{} (Addison-Wesley, Reading, 1959). B. J. Alder and T. E. Wainwright: Phys. Rev. A [**1**]{} (1970), 18. A. Widom: Phys. Rev. A [**3**]{} (1971), 1394. E. H. Hauge and A. Martin-L$\ddot{\text{o}}$f: J. Stat. Phys. [**7**]{} (1973), 259. H. J. H. Clercx and P. P. J. Schram: Phys. Rev. A [**46**]{} (1992), 1942. T. Iwashita, Y. Nakayama, and R. Yamamoto: J. Phys. Soc. Jpn. [**77**]{} (2008), 074007. Y. Nakayama and R. Yamamoto: Phys. Rev. E [**71**]{} (2005), 036707. Y. Nakayama, K. Kim, and R. Yamamoto: Eur. Phys. J. E (2008), 361. A. A. Zick and G. M. Homsy: J. Fluid Mech. [**115**]{} (1982), 13. S. H. Lamb: [*Hydrodynamics*]{} (Dover Publications, New York, 1932).
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Using the unfolding method given in [@HL], we prove the conjectures on sign-coherence and on a recurrence formula for ${\bf g}$-vectors of acyclic sign-skew-symmetric cluster algebras. As a consequence, the conjecture stating that the ${\bf g}$-vectors of any cluster form a basis of $\mathbb Z^n$ is affirmed in the same case. Also, an additive categorification of an acyclic sign-skew-symmetric cluster algebra $\mathcal A(\Sigma)$ is given, realized as $(\mathcal C^{\widetilde Q},\Gamma)$ for a Frobenius $2$-Calabi-Yau category $\mathcal C^{\widetilde Q}$ constructed from an unfolding $(Q,\Gamma)$ of the acyclic exchange matrix $B$ of $\mathcal A(\Sigma)$.'
address:
- 'Peigen Cao Department of Mathematics, Zhejiang University (Yuquan Campus), Hangzhou, Zhejiang 310027, P.R.China'
- 'Min Huang Department of Mathematics, Zhejiang University (Yuquan Campus), Hangzhou, Zhejiang 310027, P.R.China'
- 'Fang Li Department of Mathematics, Zhejiang University (Yuquan Campus), Hangzhou, Zhejiang 310027, P.R.China'
author:
- 'Peigen Cao $\;\;\;\;\;\;$ Min Huang $\;\;\;\;\;\;$ Fang Li $\;\;\;\;\;\;$'
date: version of
title: 'Categorification of sign-skew-symmetric cluster algebras and some conjectures on $\bf{g}$-vectors'
---
Introduction
=============
The cluster category was introduced in [@BMRRT] for an acyclic quiver. More generally, we view a Hom-finite $2$-Calabi-Yau triangulated category $\mathcal C$ which has a cluster structure as a cluster category, see [@BIRTS]. In fact, the mutation of a cluster-tilting object $T$ in $\mathcal C$ categorifies the mutation of a quiver $Q$, where $Q$ is the Gabriel quiver of the algebra $End_{\mathcal C}(T)$. The cluster character gives an explicit correspondence between certain cluster objects of $\mathcal C$ and the clusters of $\mathcal A(\Sigma(Q))$, where $\Sigma(Q)$ means the seed associated with $Q$. For details, see [@FK], [@PP] and [@PP1]. Thus, cluster categories and cluster characters are useful tools for studying cluster algebras.
Let $\mathcal A(\Sigma)$ be a cluster algebra with principal coefficients at $\Sigma=(X,Y,B)$, where $B$ is an $n\times n$ sign-skew-symmetric integer matrix and $Y=(y_1,\cdots,y_n)$. The celebrated Laurent phenomenon says that $\mathcal A$ is a subalgebra of $\mathbb Z[y_{1},\cdots,y_{n}][X^{\pm 1}]$. Setting $deg(x_i)=e_i$ and $deg(y_j)=-b_j$, the ring $\mathbb Z[y_{1},\cdots,y_{n}][X^{\pm 1}]$ becomes a graded algebra, where $\{e_{i}\;|\;i=1,\cdots,n\}$ is the standard basis of $\mathbb Z^n$ and $b_{j}$ is the $j$-th column of $B$. Under this $\mathbb Z^n$-grading, the cluster algebra $\mathcal A$ is a graded subalgebra; the degree of a homogeneous element is called its [**${\bf g}$-vector**]{}. Furthermore, each cluster variable $x$ in $\mathcal A(\Sigma)$ is homogeneous, and its ${\bf g}$-vector is denoted by ${\bf g}(x)$. For details, see [@fz4].
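As a small illustration (a standard rank-two computation, included here only for concreteness and not taken from the original text): let $n=2$ and $B=\left(\begin{array}{cc}0&1\\-1&0\end{array}\right)$. The exchange relation at the first index gives the cluster variable $x_1'=(y_1+x_2)/x_1$; both Laurent monomials $y_1x_1^{-1}$ and $x_2x_1^{-1}$ have degree $-e_1+e_2$ under the above grading, so $x_1'$ is homogeneous and ${\bf g}(x_1')=(-1,1)$.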
It was conjectured that
\[$g$-vec\]([@fz4], Conjecture 6.13) For any cluster $X'$ of $\mathcal A(\Sigma)$ and all $x\in X'$, the vectors ${\bf g}(x)$ are sign-coherent, which means that, for each $i$, the $i$-th coordinates of all these vectors are either all non-negative or all non-positive.
This conjecture has been proved in the skew-symmetrizable case in ([@GHKK], Theorem 5.11).
\[basis\]([@fz4], Conjecture 7.10(2)) For any cluster $X'$ of $\mathcal A(\Sigma)$, the vectors ${\bf g}(x),x\in X'$ form a $\mathbb Z$-basis of the lattice $\mathbb Z^n$.
In terms of cluster patterns, fix a regular tree $\mathbb T_n$; for any vertices $t$ and $t_0$ of $\mathbb T_n$, let ${\bf g}^{B^0; t_0}_{1; t},\cdots, {\bf g}^{B^0; t_0}_{n; t}$ denote the ${\bf g}$-vectors of the cluster variables in the seed $\Sigma_t$ with respect to the principal coefficients seed $\Sigma_{t_0}=(X_0,Y,B_0)$. When different vertices are chosen as principal coefficients seeds, it is conjectured that the ${\bf g}$-vectors with respect to a fixed vertex $t$ of $\mathbb T_n$ satisfy the following relation.
\[basechange\]([@fz4], Conjecture 7.12) Let $t_1\overset{k}{--} t_2\in \mathbb T_n$ and let $B^2=\mu_k(B^1)$. For $a\in [1,n]$ and $t\in \mathbb T_n$, assume ${\bf g}^{B^1; t_1}_{a; t}=(g_{1}^{t_1},\cdots,g_{n}^{t_1})$ and ${\bf g}^{B^2; t_2}_{a; t}=(g_{1}^{t_2},\cdots,g_{n}^{t_2})$, then $$\label{eqn3.1}
g_i^{t_2}=\begin{cases}-g_k^{t_1}& \text{if }i=k;\\g_i^{t_1}+[b_{ik}^{t_1}]_+g_k^{t_1}-b_{ik}^{t_1}min(g_k^{t_1},0)&\text{if }i\neq k.\end{cases}$$
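The coordinate transformation above is elementary to apply in practice. The following Python sketch is a direct transcription of the conjectured rule only; it is not code from the paper and it does not compute ${\bf g}$-vectors of cluster variables themselves.

``` python
# Sketch of the conjectured transformation of g-vectors under a change of the
# principal-coefficient seed from t1 to t2 = mu_k(t1).  Illustrative only.
def transform_g(g, B1, k):
    """g: g-vector w.r.t. (B^1; t1); B1: exchange matrix at t1 (list of rows);
    k: 0-based mutation index.  Returns the predicted g-vector w.r.t. (B^2; t2)."""
    g2 = list(g)
    g2[k] = -g[k]
    for i in range(len(g)):
        if i != k:
            b = B1[i][k]
            g2[i] = g[i] + max(b, 0) * g[k] - b * min(g[k], 0)
    return g2

print(transform_g([-1, 1], [[0, 1], [-1, 0]], 0))   # -> [1, 0]
```

In the rank-two example given earlier, the cluster variable $x_1'=(y_1+x_2)/x_1$ has ${\bf g}$-vector $(-1,1)$ with respect to $t_1$ and is an initial cluster variable of the seed at $t_2=\mu_1(t_1)$, so its ${\bf g}$-vector there is $e_1=(1,0)$, in agreement with the output above.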
\[remark\] (1) As is said in Remark 7.14 of [@fz4], it is easy to see that Conjectures \[$g$-vec\] and \[basechange\] imply Conjecture \[basis\].
(2) In the skew-symmetrizable case, the sign-coherence of the ${\bf c}$-vectors implies Conjectures \[$g$-vec\] and \[basechange\], see [@NZ], where the method given there depends strongly on skew-symmetrizability. So far, a similar conclusion has not been obtained in the sign-skew-symmetric case. Further, the sign-coherence of the ${\bf c}$-vectors has been proved in [@HL] for the acyclic sign-skew-symmetric case, since it is equivalent to the statement that every $F$-polynomial has constant term $1$, see [@fz4]. Therefore, for the acyclic sign-skew-symmetric case, it is interesting to study Conjectures \[$g$-vec\] and \[basechange\] directly.
The unfolding of skew-symmetrizable matrices was introduced by Zelevinsky, with the aim of characterizing skew-symmetrizable cluster algebras via the skew-symmetric case. The second and third authors of this paper extended this method in [@HL] to arbitrary sign-skew-symmetric matrices. According to this previous work, any acyclic $m\times n$ matrix $\widetilde B$ has an unfolding $(\widetilde Q, F, \Gamma)$, and a $2$-Calabi-Yau Frobenius category $\mathcal C^{\widetilde Q}$ with a $\Gamma$-action is constructed from it.
Our motivation and the main results of this paper are two-fold.
(1) Give the $\Gamma$-equivariant cluster character for $\underline {\mathcal C^Q}$, which can be regarded as the additive categorification of the cluster algebra $\mathcal A(\Sigma(Q))$. See Theorem \[char\] and Theorem \[reach\].
(2) Solve Conjectures \[$g$-vec\] and \[basechange\] in the acyclic case. See Theorem \[sign-c\] and Theorem \[base\]. As a consequence, in the same case, Conjecture \[basis\] follows to be affirmed.
An overview of unfolding method {#2}
===============================
In this section, we give a brief introduction to the concept of unfolding for totally sign-skew-symmetric cluster algebras and recall some necessary results from [@HL].
For any sign-skew-symmetric matrix $B\in Mat_{n\times n}(\mathbb Z)$, one defines a quiver $\Delta(B)$ as follows: the vertices are $1,\cdots, n$, and there is an arrow from $i$ to $j$ if and only if $b_{ij}>0$. $B$ is called [**acyclic**]{} if $\Delta(B)$ is acyclic, and a cluster algebra is called [**acyclic**]{} if it has an acyclic exchange matrix, see [@fz3].
A locally finite [**ice quiver**]{} is a pair $(Q,F)$ where $Q$ is a locally finite quiver without 2-cycles or loops and $F\subseteq Q_0$ is a subset of vertices called [**frozen vertices**]{} such that there are no arrows among vertices of $F$. To a locally finite ice quiver $(Q,F)$, we can associate an (infinite) skew-symmetric, row and column finite (i.e. having at most finitely many nonzero entries in each row and column) matrix $(b_{ij})_{i\in Q_0,j\in Q_0\setminus F}$, where $b_{ij}$ equals the number of arrows from $i$ to $j$ minus the number of arrows from $j$ to $i$. In case of no confusion, for convenience, we also denote the ice quiver $(Q, F)$ by $(b_{ij})_{i\in Q_0, j\in Q_0\setminus F}$.
We say that [*an ice quiver $(Q,F)$ admits the action of a group $\Gamma$*]{} if $\Gamma$ acts on $Q$ such that $F$ is stable under the action. Let $(Q,F)$ be a locally finite ice quiver with an action of a group $\Gamma$ (maybe infinite). For a vertex $i\in Q_0\setminus F$, a [**$\Gamma$-loop**]{} at $i$ is an arrow from $i$ to $h\cdot i$ for some $h\in \Gamma$, a [**$\Gamma$-$2$-cycle**]{} at $i$ is a pair of arrows $i\rightarrow j$ and $j\rightarrow h\cdot i$ for some $j\notin \{h'\cdot i\;|\;h'\in \Gamma\}$ and $h\in \Gamma$. Denote by $[i]$ the orbit set of $i$ under the action of $\Gamma$. Say that [**$(Q, F)$ has no $\Gamma$-loops**]{} (respectively, [**$\Gamma$-$2$-cycles**]{}) [**at**]{} $[i]$ if $(Q,F)$ has no $\Gamma$-loops ($\Gamma$-$2$-cycles, respectively) at any $i'\in [i]$.
\[orbitmu\](Definition 2.1, [@HL]) Let $(Q,F)=(b_{ij})$ be a locally finite ice quiver with a group $\Gamma$ action. Denote by $[i]=\{h\cdot i\;|\;h\in \Gamma\}$ the orbit of a vertex $i\in Q_0\setminus F$. Assume that $(Q, F)$ admits no $\Gamma$-loops and no $\Gamma$-$2$-cycles at $[i]$. We define the [**adjacent ice quiver**]{} $(Q', F)=(b'_{i'j'})_{i'\in Q_0, j'\in Q_0\setminus F}$ obtained from $(Q,F)$ as follows:
(1) The vertices are the same as $Q$,
(2) The arrows are defined as $$\begin{array}{ccl} b'_{jk} &=&
\left\{\begin{array}{ll}
-b_{jk}, &\mbox{if $j\in [i]$ or $k\in [i]$}, \\
b_{jk}+\sum\limits_{i'\in[i]}\frac{|b_{ji'}|b_{i'k}+b_{ji'}|b_{i'k}|}{2}, &\mbox{otherwise.}
\end{array}\right.
\end{array}$$ Denote $(Q',F)$ by $\widetilde\mu_{[i]}((Q,F))$ and call $\widetilde\mu_{[i]}$ the [**orbit mutation**]{} in direction $[i]$, or at $i$, under the action of $\Gamma$. In this case, we say that $(Q,F)$ can [**do orbit mutation at**]{} $[i]$.
Note that if $\Gamma$ is the trivial group $\{e\}$, then the definition of orbit mutation of a quiver is the same as that of quiver mutation (see [@fz1][@fz2]).
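To make Definition \[orbitmu\] concrete, the following Python sketch applies the orbit mutation rule to the matrix $(b_{ij})$ of a small finite ice quiver with a group action. It is purely illustrative and is not part of the constructions of [@HL].

``` python
# Sketch: the orbit mutation rule of Definition (orbitmu), applied to the matrix
# (b_ij) of a finite ice quiver; `orbit` is the list of vertices in the orbit [i].
# Assumes no Gamma-loops and no Gamma-2-cycles at that orbit.  Illustrative only.
def orbit_mutate(b, orbit):
    n, orb = len(b), set(orbit)
    new = [[0] * n for _ in range(n)]
    for j in range(n):
        for k in range(n):
            if j in orb or k in orb:
                new[j][k] = -b[j][k]
            else:
                new[j][k] = b[j][k] + sum(
                    (abs(b[j][i]) * b[i][k] + b[j][i] * abs(b[i][k])) // 2
                    for i in orbit)
    return new

# Toy example: vertices 0, 1, 2; Gamma = Z/2 swaps 1 and 2; arrows 1 -> 0 and 2 -> 0.
b = [[0, -1, -1],
     [1,  0,  0],
     [1,  0,  0]]
print(orbit_mutate(b, [1, 2]))   # all arrows at the orbit {1, 2} are reversed
```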
\[unfolding\](Definition 2.4, [@HL]) (i) For a locally finite ice quiver $(Q,F)=(b_{ij})_{i\in Q_0,j\in Q_0\setminus F}$ with a group $\Gamma$ (maybe infinite) action, let $\overline {Q_0}$ (respectively, $\overline F$) be the orbit sets of the vertex set $Q_0$ (respectively, the frozen vertex set $F$) under the $\Gamma$-action. Assume that $m=|\overline {Q_0}|<+\infty$, $m-n=|\overline{F}|$ and $Q$ has no $\Gamma$-loops and no $\Gamma$-$2$-cycles.
Define a sign-skew-symmetric matrix $B(Q,F)=(b_{[i][j]})_{[i]\in \overline{Q_0},[j]\in \overline{Q_0}\setminus \overline{F}}$ associated to $(Q,F)$ satisfying (1) the size of the matrix $B(Q,F)$ is $m\times n$; (2) $b_{[i][j]}=\sum\limits_{i'\in [i]}b_{i'j}$ for $[i]\in \overline {Q_0}$, $[j]\in \overline{Q_0}\setminus\overline{F}$.
(ii) For an $m\times n$ sign-skew-symmetric matrix $B$, if there is a locally finite ice quiver $(Q,F)$ with a group $\Gamma$ such that $B=B(Q,F)$ as constructed in (i), then we call $(Q,F,\Gamma)$ a [**covering**]{} of $B$.
(iii) For an $m\times n$ sign-skew-symmetric matrix $B$, if there is a locally finite quiver $(Q,F)$ with an action of group $\Gamma$ such that $(Q,F,\Gamma)$ is a covering of $B$ and $(Q,F)$ can do arbitrary steps of orbit mutations, then $(Q,F,\Gamma)$ is called an [**unfolding**]{} of $B$; or equivalently, $B$ is called the [**folding**]{} of $(Q,F,\Gamma)$.
The definition of unfolding here is slightly different from that in [@HL], where the definition was only applied to square matrices.
By Lemma 2.5 of [@HL], we have the following consequence.
\[mut\] If $(Q, F, \Gamma)$ is an unfolding of $B$, then for any sequence $[i_1],\cdots,[i_s]$ of orbits of $Q_0\setminus F$ under the action of $\Gamma$, the pair $(\widetilde \mu_{[i_s]}\cdots \widetilde \mu_{[i_1]}(Q, F), \Gamma)$ is a covering of $\mu_{[i_s]}\cdots\mu_{[i_1]}B$.
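Continuing the toy example, the folding of Definition \[unfolding\](i) and the compatibility asserted in Lemma \[mut\] can be checked directly. The sketch below is again only illustrative: a finite $\Gamma$-quiver necessarily folds to a skew-symmetrizable matrix, so genuinely sign-skew-symmetric examples require the infinite quivers constructed in [@HL].

``` python
# Sketch: the folding B(Q,F) of Definition (unfolding)(i) and a check of Lemma (mut)
# on the toy Gamma-quiver from the previous sketch (F is empty, so m = n = 2 orbits).
def fold(b, orbits):
    """b_[i][j] = sum over i' in the orbit [i] of b_{i'j}, with j a fixed
    representative of [j] (well defined by Gamma-invariance)."""
    reps = [orbit[0] for orbit in orbits]
    return [[sum(b[i][reps[c]] for i in orbits[r]) for c in range(len(orbits))]
            for r in range(len(orbits))]

def mutate(B, k):
    """Ordinary Fomin-Zelevinsky matrix mutation at index k."""
    n = len(B)
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
             for j in range(n)] for i in range(n)]

b      = [[0, -1, -1], [1, 0, 0], [1, 0, 0]]      # toy quiver: arrows 1 -> 0 and 2 -> 0
b_mut  = [[0,  1,  1], [-1, 0, 0], [-1, 0, 0]]    # = orbit_mutate(b, [1, 2]) from above
orbits = [[0], [1, 2]]                            # Gamma = Z/2 swaps vertices 1 and 2

B = fold(b, orbits)                               # -> [[0, -1], [2, 0]]
print(B)
print(fold(b_mut, orbits) == mutate(B, 1))        # Lemma (mut): folding commutes with mutation -> True
```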
By Theorem 2.16 of [@HL], we have
\[mainlemma\] If $\widetilde B\in Mat_{m\times n}(\mathbb Z)$ ($m\geq n$) is an acyclic sign-skew-symmetric matrix, then $\widetilde B$ has an unfolding $(\widetilde Q,F,\Gamma)$, where $\widetilde Q$ is given by Construction 2.6 in [@HL].
Assume $\widetilde B=\left(\begin{array}{c}
B \\
B'
\end{array}\right)$ with $B\in Mat_{n\times n}(\mathbb Z)$. Denote $\widetilde B'=\left(\begin{array}{cc}
B & -{B'}^{T} \\
B' & 0
\end{array}\right)$. Since $\widetilde B$ is acyclic, $\widetilde B'\in Mat_{m\times m}(\mathbb Z)$ is acyclic. According to Construction 2.6 and Theorem 2.16 of [@HL], $\widetilde B'$ has an unfolding $(\widetilde Q,\Gamma)$. Let $F\subset \widetilde Q_0$ be the vertices of $\widetilde Q_0$ corresponding to $B'$. Thus, it is clear that $(\widetilde Q,F,\Gamma)$ is an unfolding of $\widetilde B$.
In [@HL], we proved that $\widetilde Q$ is [**strongly almost finite**]{}, that is, $\widetilde Q$ is locally finite and has no path of infinite length.
This result means that an acyclic sign-skew-symmetric matrix is always totally sign-skew-symmetric. Thus, we can define a cluster algebra from an acyclic sign-skew-symmetric matrix.
For an acyclic matrix $\widetilde B\in Mat_{m\times n}(\mathbb Z)$ ($m\geq n$), assume $(\widetilde Q,F, \Gamma)$ is an unfolding of $\widetilde B$. Denote by $\overline{\widetilde Q_0}$ and $\overline F$ the orbit sets of vertices in $\widetilde Q_0$ and $F$, respectively. Let $\widetilde\Sigma=\Sigma(\widetilde Q)=(\widetilde X, \widetilde Y, \widetilde Q)$ be the seed associated with $(\widetilde Q, F)$, where $\widetilde X=\{x_{u}\;|\;u\in \widetilde Q_0\setminus F\}$, $\widetilde Y=\{y_v\;|\;v\in F\}$. Let $\Sigma=\Sigma(\widetilde B)=(X, Y,\widetilde B)$ be the seed associated with $\widetilde B$, where $X=\{x_{[i]}\;|\;[i]\in \overline{\widetilde Q_0}\setminus \overline F\}$, $Y=\{y_{[j]}\;|\;[j]\in \overline{F}\}$. It is clear that there is a surjective algebra homomorphism $$\label{pi}
\pi:\;\; \mathbb Q[x^{\pm 1}_i, y_j\;|\;i\in \widetilde Q_0\setminus F, j\in F]\rightarrow \mathbb Q[x^{\pm 1}_{[i]},y_{[j]}\;|\;[i]\in \overline{\widetilde Q_0}\setminus \overline F, [j]\in \overline{F}]$$ such that $\pi(x_{i})= x_{[i]}$ and $\pi(y_j)= y_{[j]}$.
For any cluster variable $x_u\in \widetilde X$, define $\widetilde\mu_{[i]}(x_u)=\mu_u(x_u)$ if $u\in [i]$; otherwise, $\widetilde\mu_{[i]}(x_u)=x_u$ if $u\not\in [i]$. Formally, write $\widetilde\mu_{[i]}(\widetilde X)=\{\widetilde\mu_{[i]}(x)\;|\;x\in \widetilde X\}$ and $\widetilde\mu_{[i]}({\widetilde X}^{\pm 1})=\{\widetilde\mu_{[i]}(x)^{\pm 1}\;|\;x\in \widetilde X\}$.
\[mutationorbit\](Lemma 7.1, [@HL]) Keep the notations as above. Assume that $B$ is acyclic. If $[i]$ is an orbit of vertices with $i\in \widetilde Q_0\setminus F$, then
(1) $\widetilde\mu_{[i]}(x_j)$ is a cluster variable of $\mathcal A(\widetilde Q)$ for any $j\in \widetilde Q_0\setminus F$,
(2) $\widetilde\mu_{[i]}(\widetilde X)$ is algebraically independent over $\mathbb Q[y_j\;|\;j\in F]$.
By Lemma \[mutationorbit\], $\widetilde\mu_{[i]}(\widetilde \Sigma):=(\widetilde\mu_{[i]}(\widetilde X), \widetilde Y, \widetilde\mu_{[i]}(\widetilde Q))$ is a seed. Thus, we can define $\widetilde\mu_{[i_s]}\widetilde\mu_{[i_{s-1}]}\cdots\widetilde\mu_{[i_1]}(x)$ and $\widetilde\mu_{[i_s]}\widetilde\mu_{[i_{s-1}]}\cdots\widetilde\mu_{[i_1]}(\widetilde X)$ and $\widetilde\mu_{[i_s]}\widetilde\mu_{[i_{s-1}]}\cdots\widetilde\mu_{[i_1]}(\widetilde \Sigma)$ for any sequence $([i_1],[i_2],\cdots,[i_s])$ of orbits in $Q_0$.
\[sur\](Theorem 7.5, [@HL]) Keep the notations as above with an acyclic sign-skew-symmetric matrix $B$ and $\pi$ as defined in (\[pi\]). Restricting $\pi$ to $\mathcal A(\widetilde\Sigma)$, we obtain a surjective algebra morphism $\pi:\mathcal A(\widetilde\Sigma)\rightarrow \mathcal A(\Sigma)$ satisfying $\pi(\widetilde\mu_{[j_k]}\cdots\widetilde\mu_{[j_1]}(x_{a} ))=\mu_{[j_k]}\cdots\mu_{[j_1]}(x_{[i]})\in \mathcal A(\Sigma)$ and $\pi(\widetilde\mu_{[j_k]}\cdots\widetilde\mu_{[j_1]}(\widetilde X))=\mu_{[j_k]}\cdots\mu_{[j_1]}(X)$ for any sequence of orbits $[j_1],\cdots,[j_k]$ and any $a\in [i]$.
In case $\mathcal A(\Sigma)$ has principal coefficients, by Lemma \[mut\] we may assume that $\mathcal A(\widetilde\Sigma)$ also has principal coefficients. Let $\lambda:\bigoplus\limits_{i\in \widetilde Q_0\setminus F}\mathbb Z e_i\rightarrow \bigoplus\limits_{[i]\in \overline{\widetilde Q_0}\setminus \overline F}\mathbb Z e_{[i]}$, $e_i\mapsto e_{[i]}$, be the group homomorphism, where $\bigoplus\limits_{i\in \widetilde Q_0\setminus F}\mathbb Z e_i$ (resp. $\bigoplus\limits_{[i]\in \overline {\widetilde Q_0}\setminus \overline F}\mathbb Z e_{[i]}$) is the free abelian group generated by $\{e_i\;|\;i\in \widetilde Q_0\setminus F\}$ (resp. $\{e_{[i]}\;|\;[i]\in \overline{\widetilde Q_0}\setminus \overline F\}$). Under this group homomorphism, $\mathcal A(\widetilde\Sigma)$ becomes a $\mathbb Z^n$-graded algebra such that any cluster variable $x$ is homogeneous of degree $\lambda({\bf g}(x))$.
\[ho\] Keep the notations as in Theorem \[sur\]. If $\mathcal A(\Sigma)$ has principal coefficients, then the restriction of $\pi$ to $\mathcal A(\widetilde\Sigma)$ is a $\mathbb Z^n$-graded surjective homomorphism.
We have $\lambda({\bf g}(x_i))=e_{[i]}={\bf g}(x_{[i]})$ and $\lambda({\bf g}(y_j))=-b_{[j]}={\bf g}(y_{[j]})$, where $b_{[j]}$ is the $[j]$-th column of $B$. Further, because $\{x^{\pm 1}_i, y_j\;|\;i\in \widetilde Q_0\setminus F, j\in F\}$ is a generating set of $\mathbb Q[x^{\pm 1}_i, y_j\;|\;i\in \widetilde Q_0\setminus F, j\in F]$, it follows that $\pi$ is homogeneous. The result then follows from Theorem \[sur\].
Cluster character in sign-skew-symmetric case {#3}
=============================================
Let $\widetilde B\in Mat_{m\times n}(\mathbb Z)$ be an acyclic sign-skew-symmetric matrix and let $(\widetilde Q, F, \Gamma)$ be an unfolding of $\widetilde B$ given in Theorem \[mainlemma\]. Let $\widetilde \Sigma=(\widetilde X,\widetilde Y,\widetilde Q)$ and $\Sigma=(X,Y,\widetilde B)$ be the seeds corresponding to $(\widetilde Q, F)$ and $\widetilde B$, respectively.
From $(\widetilde Q, F,\Gamma)$, we constructed a $2$-Calabi-Yau Frobenius category $\mathcal C^{\widetilde Q}$ in [@HL] such that $\Gamma$ acts on it exactly, i.e. each $h\in \Gamma$ acts on $\mathcal C^{\widetilde Q}$ as an exact functor. Furthermore, there exists a cluster tilting subcategory $\mathcal T_0$ of $\mathcal C^{\widetilde Q}$ such that the Gabriel quiver of $\underline {\mathcal T_0}$ is isomorphic to $\widetilde Q$, where $\underline {\mathcal T_0}$ is the subcategory of the stable category $\underline{\mathcal C^{\widetilde Q}}$ corresponding to $\mathcal T_0$. For details, see Lemma 4.15 of [@HL]. Since $\mathcal C^{\widetilde Q}$ is a $Hom$-finite $2$-Calabi-Yau Frobenius category, it follows that $\underline{\mathcal C^{\widetilde Q}}$ is a $Hom$-finite $2$-Calabi-Yau triangulated category. Write $[1]$ for the shift functor in $\underline{\mathcal C^{\widetilde Q}}$. For any object $X$ and subcategory $\mathcal X$ of $\mathcal C^{\widetilde Q}$, we denote by $\underline{X}$ and $\underline{\mathcal X}$ the corresponding object and subcategory of $\underline{\mathcal C^{\widetilde Q}}$, respectively. Since the action of the group $\Gamma$ on $\mathcal C^{\widetilde Q}$ is exact, $\underline{\mathcal C^{\widetilde Q}}$ also admits an exact $\Gamma$-action.
Since $\mathcal C^{\widetilde Q}$ is a Frobenius category, by the standard result, see [@BIRTS], we have that
\[extpre\] $Ext^1_{\mathcal C^{\widetilde Q}}(Z_1,Z_2)\cong Ext^1_{\underline{\mathcal C^{\widetilde Q}}}(\underline{Z_1},\underline{Z_2})$ for all $Z_1,Z_2\in \mathcal C^{\widetilde Q}$.
The category $\underline{\mathcal C^{\widetilde Q}}$ can be viewed as an additive categorification of the cluster algebra $\mathcal A(\widetilde\Sigma)$ arising from $\widetilde Q$. For details, refer to [@FK] and [@PP1]. Although the authors of [@FK] and [@PP1] deal with cluster algebras of finite rank, it is easy to see that these results still hold in $\underline{\mathcal C^{\widetilde Q}}$ since $Q''$, as well as $\widetilde Q$, is a strongly almost finite quiver.
Denote $\underline{\mathcal T_0}=add(\underline{\mathcal T'}\cup \underline{\mathcal T''})$, where $\underline{\mathcal T'}$ and $\underline{\mathcal T''}$ respectively consist of the indecomposable objects corresponding to the variables in $\widetilde X$ and in $\widetilde Y$ of $\widetilde \Sigma$.
Let $\mathcal U$ be the subcategory of $\underline{\mathcal C^{\widetilde Q}}$ generated by $\{\underline X\in \underline{\mathcal C^{\widetilde Q}}\;|\;Hom_{\underline{\mathcal C^{\widetilde Q}}}(\underline {T}[-1],\underline X)=0, \;\forall\; \underline T\in \underline{\mathcal T''}\}.$
For any $\underline{X}\in \mathcal U $, let $\underline{T}_1\rightarrow \underline{T}_0\overset{f}{\rightarrow}\underline{X}\rightarrow \underline{T}_1[1]$ be the triangle with $f$ the minimal right $\underline{\mathcal T_0}$-approximation. By Lemma \[extpre\], $\underline{\mathcal T_0}$ is a cluster tilting subcategory of $\underline{\mathcal C^{\widetilde Q}}$. Applying $Hom_{\underline{\mathcal C^{\widetilde Q}}}(\underline{T},-)$ to the triangle for all $\underline{T}\in \underline{\mathcal T_0}$, we have $\underline{T}_1\in \underline{\mathcal T_0}$. The index of $\underline X$ is defined as $$ind_{\underline{\mathcal T_0}}(\underline{X})=[\underline{T}_0]-[\underline{T}_1]\in K_0(\underline{\mathcal T_0})\cong \mathbb Z^{|\widetilde Q_0|}.$$
Recall that the Gabriel quiver of $\underline{\mathcal T_0}$ is isomorphic to $\widetilde Q$. We may assume that $\{X_i\;|\;i\in \widetilde Q_0\}$ is the complete set of the indecomposable objects of $\underline{\mathcal T_0}$. For any $L\in mod\underline{\mathcal T_0}$, we denote by $(dim_k (L(X_i)))_{i\in \widetilde Q_0}\in \bigoplus\limits_{i\in \widetilde Q_0}\mathbb Z e_i$ its dimension vector.
After the preparations, we give the definition of cluster character $CC(\;\;)$ on $\underline{\mathcal C^{\widetilde Q}}$. For any $i\in \widetilde Q_0\setminus F$, since $\widetilde Q$ is strongly almost finite, we can set $$\widehat{y}_i=\prod\limits_{j\in \widetilde Q_0\setminus F} x_j^{b_{ji}}\prod\limits_{j'\in F}y_{j'}^{b_{j'i}}\in \mathbb Q[x_i^{\pm 1}, y_j\;|\;i\in \widetilde Q_0\setminus F,j\in F].$$ For each rigid object $\underline X\in \mathcal U$, we define
$$CC(\underline X)=\widetilde{\mathbf{x}}^{ind_{\underline{\mathcal T_0}}(\underline X)}\sum\limits_{\mathbf{a}\in\bigoplus\limits_{i\in \widetilde Q_0}\mathbb Ze_i}\chi(Gr_{\mathbf{a}}(Hom_{\underline{\mathcal C^{\widetilde Q}}}(-,\underline{X}[1])))\prod\limits_{j\in \widetilde Q_0\setminus F}\widehat{y_j}^{a_j},$$ where $\widetilde{\mathbf{x}}^{\mathbf{a}}=\Pi x^{a_i}_{i}$ for $\mathbf{a}=(a_i)_{i\in \widetilde Q_0}\in \bigoplus\limits_{i\in \widetilde Q_0}\mathbb Ze_i$, $Gr_{\mathbf{a}}(Hom_{\underline{\mathcal C^{\widetilde Q}}}(-,\underline{X}[1]))$ is the quiver Grassmannian whose points correspond to the sub-$\underline{\mathcal T_0}$-representations of $Hom_{\underline{\mathcal C^{\widetilde Q}}}(-,\underline{X}[1])$ with dimension vector $\mathbf{a}$, and $\chi$ is the Euler characteristic with respect to étale cohomology with proper support. It is easy to see that $CC(\underline X)\in \mathbb Q[x_i^{\pm 1}, y_j\;|\;i\in \widetilde Q_0\setminus F,j\in F]$.
\[cluster ch\] Keep the notations as above. Then
\(1) $CC(\underline {T_i})=x_i$ for all $\underline {T_i}\in \underline{\mathcal T'}$.
\(2) $CC(\underline X\oplus \underline X')=CC(\underline X)CC(\underline X')$ for any objects $\underline X,\underline X'\in \mathcal U$.
\(3) $CC(\underline X)CC(\underline Y)=CC(\underline Z)+CC(\underline Z')$ for $\underline X,\underline Y\in \mathcal U$ with $dimExt^1_{\underline{\mathcal C^{\widetilde Q}}}(\underline X,\underline Y)=1$ and the two non-splitting triangles: $ \underline Y\rightarrow \underline Z \rightarrow \underline X \rightarrow \underline Y[1]$ and $\underline X\rightarrow \underline Z' \rightarrow \underline Y \rightarrow \underline X[1].$
This theorem can be proved in the same way as ([@FK], Theorem 3.3), using the local finiteness of $\widetilde Q$, which holds since $\widetilde Q$ is strongly almost finite.
Like [@D], for any $\underline X\in \mathcal U$, we define $ind'_{\underline{\mathcal T_0}}(\underline{X})=\lambda(ind_{\underline{\mathcal T_0}}(\underline{X}))\in \bigoplus\limits_{[i]\in \overline{\widetilde Q_0}}\mathbb Ze_{[i]}.$
For each rigid object $\underline X\in \mathcal U$, using the definition of $\pi$ in (\[pi\]), we define the [**$\Gamma$-equivariant cluster character**]{} as follows: $$CC'(\underline X)=\pi(CC(\underline X))=\mathbf{x}^{ind'_{\underline{\mathcal T_0}}\underline X}\sum\limits_{\mathbf{a}\in\bigoplus\limits_{i\in \widetilde Q_0}\mathbb Ze_i}\chi(Gr_{\mathbf{a}}Hom_{\underline{\mathcal C^{\widetilde Q}}}(-,\underline{X}[1]))\prod\limits_{[j]\in \overline{\widetilde Q_0}\setminus \overline F}\widehat{y}_{[j]}^{\lambda(\mathbf a)_{[j]}},$$ where $\mathbf{x}^{\mathbf{a}}=\Pi x^{a_{[i]}}_{[i]}$ for $\mathbf{a}=(a_{[i]})_{[i]\in \overline{\widetilde Q_0}\setminus \overline{F}}\in \bigoplus\limits_{[i]\in \overline{\widetilde Q_0}\setminus \overline{F}}\mathbb Ze_{[i]}$ and $\widehat y_{[i]}=\pi(\widehat y_i)=\prod\limits_{[j]\in \overline{\widetilde Q_0}\setminus \overline{F}} x_{[j]}^{b_{[j][i]}}\prod\limits_{[j']\in \overline{F}}y_{[j']}^{b_{[j'][i]}}.$
Inspired by Definition 3.34 of [@D], we give the following definition,
Two objects $\underline X, \underline X'\in \underline{\mathcal C^{\widetilde Q}}$ are said to be [**equivalent modulo**]{} $\Gamma$ if $\underline X\cong \bigoplus\limits_{k=1}^m\underline X_k,\; \underline X'\cong \bigoplus\limits_{k=1}^m\underline X'_k$ with $add\{h\cdot \underline X_k\;|\;h\in\Gamma\}= add\{h\cdot \underline X'_k\;|\;h\in\Gamma\}$ for every $k$ and indecomposables $\underline X_k,\underline X'_k$.
Similar to Lemma 3.49 of [@D], we have the following lemma,
The Laurent polynomial $CC'(\underline X)$ depends only on the class of $\underline X$ under equivalence modulo $\Gamma$, for any $\underline X\in \underline{\mathcal C^{\widetilde Q}}$.
We need only prove that $CC'(\underline X)=CC'(h\cdot \underline X)$ for $h\in \Gamma$, by Theorem \[cluster ch\](2). This follows immediately from the facts that $ind'_{\underline{\mathcal T_0}}(\underline X)=ind'_{\underline{\mathcal T_0}}(h\cdot \underline X)$ and $\chi(Gr_{\mathbf{a}}Hom_{\underline{\mathcal C^{\widetilde Q}}}(-,\underline{X}[1]))=\chi(Gr_{h\cdot \mathbf{a}}Hom_{\underline{\mathcal C^{\widetilde Q}}}(-,h\cdot\underline{X}[1]))$.
Using the algebra homomorphism $\pi$ and Theorem \[cluster ch\], we have the following theorem at once,
\[char\] Keep the notations as above. Then
\(1) $CC'(\underline {T_i})=x_{[i]}$ for all $\underline T_i\in \underline{\mathcal T'}$.
\(2) $CC'(\underline X\oplus \underline X')=CC'(\underline X)CC'(\underline X')$ for any two objects $\underline X,\underline X'\in \mathcal U$.
\(3) $CC'(\underline X)CC'(\underline Y)=CC'(\underline Z)+CC'(\underline Z')$ for $\underline X,\underline Y\in \mathcal U$ with $dimExt^1_{\underline{\mathcal C^{\widetilde Q}}}(\underline X,\underline Y)=1$ satisfying two non-splitting triangles: $ \underline Y\rightarrow \underline Z \rightarrow \underline X \rightarrow \underline Y[1]$ and $\underline X\rightarrow \underline Z' \rightarrow \underline Y \rightarrow \underline X[1].$
Following [@PP], we say that $\underline X\in \underline{\mathcal C^{\widetilde Q}}$ is [**reachable**]{} if it belongs to a cluster-tilting subcategory which can be obtained from $\underline {\mathcal T_0}$ by a sequence of mutations that do not take place at objects of $\underline{\mathcal T''}$. It is clear that any reachable object belongs to $\mathcal U$.
Following Theorem 4.1 of [@PP], we have the following result:
\[reach\] Keep the notations as above. Then the cluster character $CC'(\;\;)$ gives a surjection from the set of equivalence classes, under equivalence modulo $\Gamma$, of indecomposable reachable objects of $\underline{\mathcal C^{\widetilde Q}}$ to the set of cluster variables of the cluster algebra $\mathcal A(\Sigma)$.
By Theorem \[char\], the proof is similar to that of Theorem 4.1 in [@PP].
Sign-coherence of ${\bf g}$-vectors {#sign-coherence}
====================================
Keep the notations in Section \[3\]. In this section, we will prove the sign-coherence of ${\bf g}$-vectors for acyclic sign-skew-symmetric cluster algebras. For convenience, suppose that $\mathcal A(\Sigma)$ is an acyclic sign-skew-symmetric cluster algebra with principal coefficients at $\Sigma$.
Since $B$ is acyclic, $\left(\begin{array}{c}
B \\
I_n
\end{array}\right)$ is acyclic, too. We can construct an unfolding $(\widetilde Q, F, \Gamma)$ according to Theorem \[mainlemma\]. It is easy to check that the corresponding seed $\widetilde \Sigma$ of $(\widetilde Q, F, \Gamma)$ has principal coefficients, where $\widetilde \Sigma$ is the seed associated to $(\widetilde Q,F)$.
Assume that $\underline X$ is an object of $\mathcal U$ given in Section 3.
\(i) \[gindex\] $CC(\underline X)$ admits a ${\bf g}$-vector $(g_i)_{i\in \widetilde Q_0\setminus F}$ which is given by $g_i=[ind_{\underline{\mathcal T_0}}(\underline X): \underline {T_i}]$ for each $i$.
(ii) \[g-vec\] $CC'(\underline X)$ admits a ${\bf g}$-vector $(g_{[i]})_{[i]\in \overline{\widetilde Q_0}\setminus \overline{F}}$ which is given by $g_{[i]}=\sum\limits_{i'\in [i]}[ind_{\underline{\mathcal T_0}}(\underline X): \underline {T_{i'}}]$ for each $[i]$.
(i) Since the quiver $\widetilde Q$ is strongly almost finite, the proof is the same as that of ([@PP1], Proposition 3.6), using the local finiteness of $\widetilde Q$.
\(ii) is obtained immediately from (i).
Let $h\in \Gamma$ be either of finite order or without fixed points. Define $\underline{\mathcal C^{\widetilde Q}}_h$ to be the $K$-linear category whose objects are the same as those of $\underline{\mathcal C^{\widetilde Q}}$, and whose morphisms are given by $Hom_{\underline{\mathcal C^{\widetilde Q}}_h}(\underline{X},\underline{Y})=\bigoplus\limits_{h'\in \Gamma'}Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline{X},\underline{Y})$ for all objects $\underline{X},\underline{Y}$. We view this category as a dual construction of the category $\mathcal C^Q_h$ in [@HL].
Denote by $\underline{\mathcal C^{\widetilde Q}}_h(\underline{\mathcal T_0})$ the subcategory of $\underline{\mathcal C^{\widetilde Q}}_h$ consisting of all objects $\underline{T}\in \underline{\mathcal T_0}$.
\[Hom-finite\] The category $\underline{\mathcal C^{\widetilde Q}}_h$ is $Hom$-finite if either (i) the order of $h$ is finite, or (ii) $Q$ has no fixed points under the action of $h\in \Gamma$.
The proof is similar to that of Lemma 6.1 in [@HL].
In the sequel, assume that $\underline{\mathcal C^{\widetilde Q}}_h(\underline{\mathcal T_0})$ is $Hom$-finite. Let $F:\underline{\mathcal C^{\widetilde Q}}\rightarrow mod\underline{\mathcal C^{\widetilde Q}}_h(\underline{\mathcal T_0})^{op}$ be the functor mapping an object $X$ to the restriction of $\underline{\mathcal C^{\widetilde Q}}_h(-,X)$ to $\underline{\mathcal C^{\widetilde Q}}_h(\underline{\mathcal T_0})$. For each indecomposable object $\underline{T}$ of $\underline{\mathcal T_0}$, denote by $S_{\underline T}$ the [**simple quotient of $F(\underline T)$**]{} given by $S_{\underline T}(\underline T')=End_{\underline{\mathcal C^{\widetilde Q}}_h}(\underline T)/J$ if $\underline T'\cong \underline T$ and $S_{\underline T}(\underline T')=0$ if $\underline T'\not\cong \underline T$, for any object $\underline T'$ of $\underline{\mathcal C^{\widetilde Q}}_h$, where $J$ is the Jacobson radical of $End_{\underline{\mathcal C^{\widetilde Q}}_h}(\underline T)$.
\[lift\] Keep the notations as above. For any morphism $\widetilde f:F(\underline M)\rightarrow F(\underline N)$ in $mod\underline{\mathcal C^{\widetilde Q}}_h(\underline{\mathcal T_0})^{op}$ with $M,N\in \mathcal C^{\widetilde Q}$, there exists $f: \underline M\rightarrow \underline N$ in $\underline{\mathcal C^{\widetilde Q}}_h$ such that $F(f)=\widetilde f$.
Let $\underline{T}_1\overset{e}{\rightarrow} \underline{T}_0\overset{d}{\rightarrow}\underline{M}\rightarrow \underline{T}_1[1]$ be the triangle such that $d$ is a minimal right $\underline{\mathcal T_0}$-approximation. Since $\underline{\mathcal T_0}$ is a cluster tilting subcategory of $\underline{\mathcal C^{\widetilde Q}}$, we have $\underline{T}_1\in \underline{\mathcal T_0}$. Applying $F$ to the above triangle, we have $F(\underline{T}_1)\rightarrow F(\underline{T}_0)\rightarrow F(\underline{M})\rightarrow 0$. Similarly, there is a triangle $\underline{T'}_1\overset{e'}{\rightarrow} \underline{T'}_0\overset{d'}{\rightarrow}\underline{N}\rightarrow \underline{T'}_1[1]$, and $F(\underline{T'}_1)\rightarrow F(\underline{T'}_0)\rightarrow F(\underline{N})\rightarrow 0$. Since $\underline{T}_0,\underline{T'}_0,\underline{T}_1,\underline{T'}_1\in \underline{\mathcal T_0}$, it follows that $F(\underline{T}_0),F(\underline{T'}_0),F(\underline{T}_1),F(\underline{T'}_1)$ are projective in $mod\underline{\mathcal C^{\widetilde Q}}_h(\underline{\mathcal T_0})^{op}$. Thus, $\widetilde f$ can be lifted to the following commutative diagram: $$\xymatrix{
F(\underline{T}_1)\ar[r]\ar[d]^{\widetilde f_1} & F(\underline{T}_0)\ar[r] \ar[d]^{\widetilde f_0} & F(\underline{M})\ar[r]\ar[d]_{\widetilde f}& 0 \\
F(\underline{T'}_1)\ar[r] & F(\underline{T'}_0)\ar[r] & F(\underline{N}) \ar[r] &0,}$$
By the Yoneda Lemma, there exist $f_1:\underline{T}_1\rightarrow \underline{T'}_1$ and $f_0:\underline{T}_0\rightarrow \underline{T'}_0$ in $\underline{\mathcal C^{\widetilde Q}}_h$ such that $F(f_1)=\widetilde f_1$ and $F(f_0)=\widetilde f_0$. Thus, $f_1\in\bigoplus\limits_{h'\in \Gamma'}Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline {T}_1,\underline {T'}_1)$ and $f_0\in\bigoplus\limits_{h'\in \Gamma'}Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline {T}_0,\underline {T'}_0)$ such that $$\xymatrix{
\bigoplus\limits_{h'\in \Gamma'}h'\cdot \underline{T}_1\ar[r]\ar[d]^{f_1} & \bigoplus\limits_{h'\in \Gamma'}h'\cdot \underline{T}_0\ar[r] \ar[d]^{ f_0} & \bigoplus\limits_{h'\in \Gamma'}h'\cdot \underline{M}\ar[r]\ar[d]& 0 \\
\underline{T'}_1\ar[r] & \underline{T'}_0\ar[r] & \underline{N} \ar[r] &0,}$$ commutes. Since $\underline{\mathcal C^{\widetilde Q}}_h(\underline{\mathcal T_0})$ is $Hom$-finite, we may assume that only finitely many $h'\in \Gamma'$ appear in the upper row, which means that there exists a finite subset $I$ of $\Gamma'$ such that $$\xymatrix{
\bigoplus\limits_{h'\in I}h'\cdot \underline{T}_1\ar[r]\ar[d]^{f_1} & \bigoplus\limits_{h'\in I}h'\cdot \underline{T}_0\ar[r] \ar[d]^{ f_0} & \bigoplus\limits_{h'\in I}h'\cdot \underline{M}\ar[r]\ar[d]& 0 \\
\underline{T'}_1\ar[r] & \underline{T'}_0\ar[r] & \underline{N} \ar[r] &0,}$$ commutes. Thus, there exists $f\in\bigoplus\limits_{h'\in I}Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline {M},\underline {N})\subseteq \bigoplus\limits_{h'\in \Gamma'}Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline {M},\underline {N})$ such that the above diagram commutes. This commutative diagram induces $$\xymatrix{
F(\underline{T}_1)\ar[r]\ar[d]^{\widetilde f_1} & F(\underline{T}_0)\ar[r] \ar[d]^{\widetilde f_0} & F(\underline{M})\ar[r]\ar[d]_{F(f)}& 0 \\
F(\underline{T'}_1)\ar[r] & F(\underline{T'}_0)\ar[r] & F(\underline{N}) \ar[r] &0.}$$ Therefore, we have $F(f)=\widetilde f$.
\[minimalpro\] Assume that, for $\underline X\in \mathcal U$, the category $add(\{h\cdot \underline{X}\;|\;h\in \Gamma\})$ is rigid. Let $\underline{T}_1\overset{f'}{\rightarrow} \underline{T}_0\overset{f}{\rightarrow}\underline{X}\rightarrow \underline{T}_1[1]$ be a triangle in $\underline{\mathcal C^{\widetilde Q}}$ with $f$ a minimal right $\underline{\mathcal T_0}$-approximation. If $\underline X$ has no direct summand in $\underline{\mathcal T_0}[1]$, then $$F(\underline{T}_1)\overset{F(f')}{\longrightarrow} F(\underline{T}_0)\overset{F(f)}{\longrightarrow}F(\underline{X})\rightarrow 0$$ is a minimal projective resolution of $F(\underline X)$.
Since $\underline X$ has no direct summand in $\underline{\mathcal T_0}[1]$, $f'$ is right minimal. Indeed, if $f'$ were not right minimal, then $f'$ would have a direct summand of the form $\underline{T'}\rightarrow 0$, and thus $\underline{T'}[1]$ would be a direct summand of $\underline X$, a contradiction.
First, we prove that $F(f)$ is a projective cover of $F(\underline X)$. Let $F(\underline T)e$ be a projective representation of $\underline{\mathcal C^{\widetilde Q}}_h$ together with a surjective morphism $u:F(\underline T)e\rightarrow F(\underline X)$, where $\underline T\in \underline{\mathcal C^{\widetilde Q}}_h$ and $e\in End_{\underline{\mathcal C^{\widetilde Q}}_h}(T)$ is an idempotent. By the Yoneda Lemma, there exists $g\in Hom_{\underline{\mathcal C^{\widetilde Q}}_h}(\underline T,\underline X)$ such that $F(g)=u$. Since $F(\underline T)e$ and $F(\underline{T}_0)$ are projective, there exist $v: F(\underline{T}_0)\rightarrow F(\underline T)e$ and $w: F(\underline{T})\rightarrow F(\underline{T}_0)$ such that $F(f)=F(g)v$ and $F(g)=F(f)w$. Thus, $F(f)=F(f)wv$. Similarly, by the Yoneda Lemma, there exist $g'\in Hom_{\underline{\mathcal C^{\widetilde Q}}_h}(\underline{T}_0,\underline T)$ and $g''\in Hom_{\underline{\mathcal C^{\widetilde Q}}_h}(\underline{T},\underline{T}_0)$ such that $F(g')=v$ and $F(g'')=w$. Thus, $F(f)=F(f)wv$ is equivalent to $f=f\circ g''\circ g'$. We may write $g''\circ g'=(g_{h'})_{h'\in \Gamma'}$; then $f=f\circ(g''\circ g')=(fg_{h'})_{h'\in \Gamma'}$. Thus $f=fg_{e}$ and $fg_{h'}=0$ for any $h'\neq e$, where $e$ is the identity of $\Gamma'$. Further, since $f$ is a right minimal $\underline{\mathcal T_0}$-approximation and $h'\underline{T}_0\in \underline{\mathcal T_0}$, for any $e\neq h'\in \Gamma'$ we have $g_{h'}\in J(h'\underline{T}_0,\underline{T}_0)$, and $g_e$ is an isomorphism. Using Lemma 5.10 of [@HL], $g''\circ g'=(g_{h'})_{h'\in \Gamma'}$ is an isomorphism. Thus, $F(\underline{T}_0)$ is a direct summand of $F(\underline T)e$.
Similarly, because $f'$ is right minimal, $F(f')$ induces a projective cover of $ker(F(f))$. Our result follows.
Inspired by (Proposition 2.1, [@DK]), (Lemma 3.58, [@D]) and (Lemma 3.5, [@PP1]), we have:
\[distinct\] Assume $\underline X\in \mathcal U$ is such that $add(\{h\cdot \underline{X}\;|\;h\in \Gamma\})$ is rigid, and let $\underline{T}_1\overset{f'}{\rightarrow} \underline{T}_0\overset{f}{\rightarrow}\underline{X}\rightarrow \underline{T}_1[1]$ be a triangle in $\underline{\mathcal C^{\widetilde Q}}$ with $f$ a minimal right $\underline{\mathcal T_0}$-approximation. If $\underline{T}$ is a direct summand of $\underline{T}_0$ for an indecomposable object $\underline T\in \underline{\mathcal C^{\widetilde Q}}$, then $h\cdot \underline{T}$ is not a direct summand of $\underline{T}_1$ for any $h\in \Gamma$.
By Lemma 2.3 and Lemma 2.4 of [@HL], we may assume that $h$ has no fixed points or has finite order. By Lemma \[Hom-finite\], $\underline{\mathcal C^{\widetilde Q}}_h$ is Hom-finite. For any $h'\in \Gamma'$ and $\underline T'\in \underline{\mathcal T_0}$, applying $Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline T',-)$ to the triangle, we get $$Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline T',\underline{T}_1)\rightarrow Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline T',\underline{T}_0)\overset{Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline T',f)}{\longrightarrow}Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline T',\underline{X})\rightarrow 0.$$ Since $f$ is minimal, we get a minimal projective resolution of $F(\underline X)$, $$F(\underline{T}_1)\rightarrow F(\underline{T}_0)\rightarrow F(\underline X)\rightarrow 0.$$ To prove that $h\cdot \underline T$ is not a direct summand of $\underline{T}_1$, it suffices to prove that $F(\underline T)$ is not a direct summand of $F(\underline{T}_1)$, or equivalently that $Ext^1(F(\underline{X}),S_{\underline T})=0$, where $S_{\underline T}$ is the simple quotient of $F(\underline T)$.
As $F(\underline{T}_0)\rightarrow F(\underline X)$ is the projective cover of $F(\underline X)$ and $F(\underline{T})$ is a direct summand of $F(\underline{T}_0)$, then there is a non-zero morphism $\widetilde p:F(\underline X)\rightarrow S_{\underline T}$. For any $\widetilde g:F(\underline T_1)\rightarrow S_{\underline T}$, since $F(\underline T_1)$ is projective, there exists $\widetilde q: F(\underline T_1)\rightarrow F(\underline X)$ such that $\widetilde g=\widetilde p \widetilde q$.
Let $T$ be a lifting of $\underline T$; by Lemma \[extpre\], we have $T\in \mathcal T_0$ and $T$ is non-projective. Moreover, since $\underline T$ is indecomposable, we can choose $T$ to be indecomposable. By Lemma 5.3 of [@HL], there is an admissible short exact sequence $0\rightarrow Y\rightarrow T'\rightarrow T\rightarrow 0$. Since $\mathcal T_0$ has no $\Gamma$-loops, by the dual version of Lemma 5.11 (2) of [@HL] and Lemma \[extpre\], we have $S_{\underline T}\cong F(\underline Y[1])$.
Thus, according to Lemma \[lift\], lifting $\widetilde q$, $\widetilde g$ and $\widetilde p$ as $q,g,p$ in $\underline{\mathcal C^{\widetilde Q}}_h$, where $q\in\bigoplus\limits_{h'\in \Gamma'}Hom_{\underline{\mathcal C^{Q}}}(h'\cdot \underline T_1,\underline X)$, $g\in\bigoplus\limits_{h'\in \Gamma'}Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline T_1,\underline Y[1])$ and $p\in\bigoplus\limits_{h'\in \Gamma'}Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline X,\underline Y[1])$. Since $\widetilde g=\widetilde p \widetilde q$ and $Hom(F(\underline T),F(\underline Y[1]))\cong Hom_{\underline{\mathcal C^{\widetilde Q}}_h}(\underline T,\underline Y[1])$, we obtain $g=p\circ q$.
Since $\underline{\mathcal C^{\widetilde Q}}_h$ is $Hom$-finite, we may assume that $g\in\bigoplus\limits_{h'\in I}Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline T_1,\underline Y[1])$ and $q\in\bigoplus\limits_{h'\in I}Hom_{\underline{\mathcal C^{\widetilde Q}}}(h'\cdot \underline T_1,\underline X)$ for a finite subset $I\subseteq \Gamma'$. According to the composition of morphisms in $\underline{\mathcal C^{\widetilde Q}}_h$, $g=p\circ q$ means $g=p(\sum\limits_{h'\in I}h'\cdot q)$, equivalently, we have the following commutative diagram: $$\xymatrix{
\bigoplus\limits_{h'\in I}h'\cdot \underline{X}[-1] \ar[r] &\bigoplus\limits_{h'\in I}h'\cdot \underline{T}_1\ar[r]^{\sum h'\cdot f'}\ar[ld]_{\sum h'\cdot q}\ar[d]^{g} & \bigoplus\limits_{h'\in I}h'\cdot \underline{T}_0\ar[r]^{\sum h'\cdot f} &\bigoplus\limits_{h'\in I}h'\cdot \underline{X} \\
\bigoplus\limits_{h'\in I}h'\cdot \underline{X} \ar[r]^{p} &\underline{Y}[1], & & &}$$ Since $add(\{h\cdot \underline X\;|\;h\in \Gamma\})$ is rigid, the composition of $\sum h'\cdot q$ with the connecting morphism $\bigoplus\limits_{h'\in I}h'\cdot \underline{X}[-1]\rightarrow \bigoplus\limits_{h'\in I}h'\cdot \underline{T}_1$ vanishes, and hence so does the composition of $g$ with it. Therefore, $g$ factors through $\sum h'\cdot f'$. Thus, in $\underline{\mathcal C^{\widetilde Q}}_h$, $g$ factors through $f'$. Since $g$ was arbitrary, we get a surjective map $Hom_{\underline{\mathcal C^{\widetilde Q}}_h}(\underline T_0,\underline Y[1])\twoheadrightarrow Hom_{\underline{\mathcal C^{\widetilde Q}}_h}(\underline T_1,\underline Y[1])$. Since $Hom_{\underline{\mathcal C^{\widetilde Q}}_h}(\underline T_i,\underline Y[1])\cong Hom(F(\underline T_i), F(\underline Y[1]))=Hom(F(\underline T_i), S_{\underline T})$ for $i=0,1$, we get that $Hom(F(\underline T_0), S_{\underline T})\rightarrow Hom(F(\underline T_1), S_{\underline T})$ is surjective. Therefore, we obtain $Ext^1(F(\underline X), S_{\underline T})=0$. Our result follows.
Let $\underline{X}$ be an object of $\underline{\mathcal C^{\widetilde Q}}$ such that $add(\{h\cdot \underline{X}\;|\;h\in \Gamma\})$ is rigid. If $ind'_{\underline{\mathcal T_0}}(\underline{X})$ has no negative coordinates, then $\underline{X}\in \underline{\mathcal T_0}$.
Assume $\underline{T}_1\overset{f'}{\rightarrow} \underline{T}_0\overset{f}{\rightarrow}\underline{X}\rightarrow \underline{T}_1[1]$ is a triangle with $f$ a minimal right $\underline{\mathcal T_0}$-approximation. According to Lemma \[distinct\], $\underline{T}_0$ and $\underline{T}_1$ have no direct summands lying in the same $\Gamma$-orbit. Further, $ind'_{\underline{\mathcal T_0}}(\underline{X})$ has no negative components. Therefore, by the definition of $ind'_{\underline{\mathcal T_0}}$, we obtain $\underline{T}_1=0$. Thus, $\underline{X}\cong\underline{T}_0\in \underline{\mathcal T_0}$.
Using the above preparation, we now can prove Conjecture \[$g$-vec\] for all acyclic sign-skew-symmetric cluster algebras. The method of the proof follows from that of Theorem 3.7 (i) in [@PP1].
\[sign-c\] The conjecture \[$g$-vec\] on sign-coherence holds for all acyclic sign-skew-symmetric cluster algebras.
For any cluster $Z=\{z_1,\cdots,z_n\}$ of $\mathcal A(\Sigma)$, by Theorem \[reach\], we associate a cluster tilting subcategory $\underline{\mathcal T}$ of $\underline{\mathcal C^{\widetilde Q}}$ which is obtained from $\underline{\mathcal T_0}$ by a series of mutations that do not take place at objects of $\underline{\mathcal T''}$. Precisely, there are $n$ indecomposable objects $\{\underline{X_j}\;|\;j=1,\cdots,n\}$ such that $\underline{\mathcal T}=add(\{h\cdot \underline{X_j}\;|\;h\in\Gamma, j=1,\cdots,n\}\cup \underline{\mathcal T''})$ and $CC'(\underline{X_i})=z_i$ for $i=1,\cdots,n$. By Proposition \[g-vec\], the ${\bf g}$-vector $(g^j_{[1]},\cdots,g^j_{[n]})$ of $z_j$ is given by $g^j_{[i]}=\sum\limits_{i'\in[i]}[ind_{\underline{\mathcal T_0}}(\underline{X_j}):\underline{T_{i'}}]$.
Suppose that there exist $s$ and $s'$ such that $g^s_{[i]}>0$ and $g^{s'}_{[i]}<0$. Let $\underline{T^j}_1\rightarrow \underline{T^j}_0\overset{f^j}{\rightarrow}\underline{X_j}\rightarrow \underline{T^j}_1[1]$ be the triangle with $f^j$ a minimal right $\underline{\mathcal T_0}$-approximation, for $j=s, s'$. Thus, there exist $h,h'\in \Gamma$ such that $h\cdot\underline{T_i}$ (respectively, $h'\cdot\underline{T_i}$) is a direct summand of $\underline{T^s}_0$ (respectively, $\underline{T^{s'}}_1$). Furthermore, in the triangle $$\bigoplus\limits_{j=s,s'}\underline{T^j}_1\rightarrow \bigoplus\limits_{j=s,s'}\underline{T^j}_0\overset{\bigoplus\limits_{j=s,s'}f^j}{\rightarrow}\bigoplus\limits_{j=s,s'}\underline{X_j}\rightarrow \bigoplus\limits_{j=s,s'}\underline{T^j}_1[1],$$ the morphism $\bigoplus\limits_{j=s,s'}f^j$ is a minimal right $\underline{\mathcal T_0}$-approximation. According to Lemma \[distinct\], $h'\cdot\underline{T_i}$ is not a direct summand of $\underline{T^{s'}}_1$ since $h\cdot\underline{T_i}$ is a direct summand of $\underline{T^{s}}_0$. This is a contradiction. Our result follows.
The recurrence of ${\bf g}$-vectors {#basechange1}
===================================
([@DWZ]) Conjecture \[basechange\] holds true for all finite rank skew-symmetric cluster algebras.
It is easy to see that the above theorem can be extended to the situation of infinite rank skew-symmetric cluster algebras, that is, we have:
\[skew\] Conjecture \[basechange\] holds true for all infinite rank skew-symmetric cluster algebras.
We first give the following easy lemma.
\[finite\] Let $(Q,F,\Gamma)$ be the unfolding of a matrix $B$ and $\mathcal A=\mathcal A(\Sigma(Q,F))$. For any sequence of orbits $([i_1],\cdots,[i_s])$ and $a\in Q_0$, there exist finite subsets $S_j\subseteq [i_j], j=1,\cdots,s$ such that $\prod\limits_{k\in V_s}\mu_{k}\cdots\prod\limits_{k\in V_1}\mu_k(x_a)=\widetilde\mu_{[i_s]}\cdots\widetilde\mu_{[i_1]}(x_a)$ for all finite subsets $V_j, j=1,\cdots,s$, satisfying $S_j\subseteq V_j\subseteq [i_j], j=1,\cdots,s$.
Since $\widetilde\mu_{[i_s]}\cdots\widetilde\mu_{[i_1]}(x_a)$ is determined by finite vertices of $Q_0$, there exist finite subsets $S_j\subseteq [i_j], j=1,\cdots,s$ such that $\prod\limits_{k\in S_s}\mu_{k}\cdots\prod\limits_{k\in S_1}\mu_k(x_a)=\widetilde\mu_{[i_s]}\cdots\widetilde\mu_{[i_1]}(x_a)$. Then for all finite subsets $V_j, j=1,\cdots,s$ satisfying $S_j\subseteq V_j\subseteq [i_j], j=1,\cdots,s$, we have $\prod\limits_{k\in V_s}\mu_{k}\cdots\prod\limits_{k\in V_1}\mu_k(x_a)=\widetilde\mu_{[i_s]}\cdots\widetilde\mu_{[i_1]}(x_a)$.
Let $\mathcal A_1$ (respectively, $\mathcal A_2$) be the cluster algebra with principal coefficients at $\Sigma_1=(\widetilde X,\widetilde Y,Q)$ (respectively, $\Sigma_2=(\widetilde X', \widetilde Y', \widetilde \mu_{[k]}(Q))$). For any sequence $([i_1],\cdots,[i_s])$ of orbits of $Q_0$ and $a\in Q_0=\widetilde \mu_{[k]}(Q_0)$, denote by $g^{Q,a}=(g^Q_i)_{i\in Q_0}$ (respectively, $g^{\widetilde \mu_{[k]}(Q),a}=(g^{\widetilde \mu_{[k]}(Q)}_i)_{i\in Q_0}$) the ${\bf g}$-vector of the cluster variable $\widetilde\mu_{[i_s]}\cdots\widetilde\mu_{[i_1]}(x_a)$ (respectively, $\widetilde\mu_{[i_s]}\cdots\widetilde\mu_{[i_1]}\widetilde \mu_{[k]}(x'_a)$).
As a consequence of Theorem \[skew\], we have the following property.
\[rec\] Keep the notations as above. The following recurrence holds: $$\label{eqn3.1}
g_i^{\widetilde \mu_{[k]}(Q)}=\begin{cases}-g_{i}^{Q}& \text{if }i\in [k];\\g_i^{Q}+\sum\limits_{k'\in [k]}[b_{ik'}]_+g_{k'}^{Q}-\sum\limits_{k'\in [k]}b_{ik'}min(g_{k'}^{Q},0)&\text{if }i\not\in [k].\end{cases}$$
By Lemma \[finite\], for $j=1,\cdots,s$, there exist finite subsets $S^{1}_j\subseteq [i_j]$ (respectively, $S^{2}_j\subseteq [i_j]$) such that $g^{Q,a}$ (respectively, $g^{\widetilde \mu_{[k]}(Q),a}$) is the ${\bf g}$-vector of the cluster variable $\prod\limits_{t\in V^1_s}\mu_{t}\cdots\prod\limits_{t\in V^1_1}\mu_{t}(x_a)$ (respectively, $\prod\limits_{t\in V^2_s}\mu_{t}\cdots\prod\limits_{t\in V^2_1}\mu_{t}\widetilde\mu_{[k]}(x'_a)$) for all finite subsets $V^1_j$ (respectively, $V^2_j$) satisfying $S^1_j\subseteq V^1_j\subseteq [i_j]$ (respectively, $S^2_j\subseteq V^2_j\subseteq [i_j]$). Choosing $S_j=S^1_j\cup S^2_j$, we have $g^{Q,a}$ and $g^{\widetilde \mu_{[k]}(Q),a}$ as the ${\bf g}$-vectors of the cluster variables $\prod\limits_{t\in S_s}\mu_{t}\cdots\prod\limits_{t\in S_1}\mu_{t}(x_a)$ and $\prod\limits_{t\in S_s}\mu_{t}\cdots\prod\limits_{t\in S_1}\mu_{t}\widetilde\mu_{[k]}(x'_a)$, respectively.
For any finite subset $T\subseteq [k]$, denote by $\mathcal A^T$ the cluster algebra with principal coefficients at $\Sigma^T=(\widetilde X^T, \widetilde Y^T, \prod\limits_{k'\in T}\mu_{k'}(Q))$. Denote by $g^{\prod\limits_{k'\in T}\mu_{k'}(Q),a}=(g_{i}^{\prod\limits_{k'\in T}\mu_{k'}(Q)})_{i\in Q_0}$ the ${\bf g}$-vector of the cluster variable $\prod\limits_{t\in S_s}\mu_{t}\cdots\prod\limits_{t\in S_1}\mu_{t}\prod\limits_{k'\in T}\mu_{k'}(x_a^T)$.
Since the cluster variable $\prod\limits_{t\in S_s}\mu_{t}\cdots\prod\limits_{t\in S_1}\mu_{t}\widetilde\mu_{[k]}(x'_a)$ is only determined by finite vertices of $Q_0$, there exists a finite subset $S\subseteq [k]$ such that $g_i^{\widetilde \mu_{[k]}(Q)}=g_i^{\prod\limits_{k'\in T}\mu_{k'}(Q)}$ for any finite set $T$ satisfying $S\subset T\subseteq [k]$.
Furthermore, by Theorem \[skew\], for any finite subset $T\subseteq [k]$, we have $$\label{eqn3.1}
g_i^{\prod\limits_{k'\in T}\mu_{k'}(Q)}=\begin{cases}-g_{i}^{Q}& \text{if }i\in T;\\g_i^{Q}+\sum\limits_{k'\in T}[b_{ik'}]_+g_{k'}^{Q}-\sum\limits_{k'\in T}b_{ik'}min(g_{k'}^{Q},0)&\text{if }i\not\in T.\end{cases}$$
Therefore, the result holds.
\[base\] Conjecture \[basechange\] holds true for all acyclic sign-skew-symmetric cluster algebras. That is, let $B=(b_{[i][j]})\in Mat_{n\times n}(\mathbb Z)$ be a sign-skew-symmetric matrix which is mutation equivalent to an acyclic matrix and let $t_1\overset{[k]}{--} t_2\in \mathbb T_n$ and $B^2=\mu_{[k]}(B^1)$. For $[a]\in \{[1],\cdots,[n]\}$ and $t\in \mathbb T_n$, assume ${\bf g}^{B^1; t_1}_{[a]; t}=(g_{[1]}^{t_1},\cdots,g_{[n]}^{t_1})$ and ${\bf g}^{B^2; t_2}_{[a]; t}=(g_{[1]}^{t_2},\cdots,g_{[n]}^{t_2})$, then $$\label{eqn3.1}
g_{[i]}^{t_2}=\begin{cases}-g_{[k]}^{t_1}& \text{if }[i]=[k];\\g_{[i]}^{t_1}+[b_{[i][k]}^{t_1}]_+g_{[k]}^{t_1}-b_{[i][k]}^{t_1}min(g_{[k]}^{t_1},0)&\text{if }[i]\neq [k].\end{cases}$$
By Lemma \[mut\] and Theorem \[mainlemma\], let $(Q,\Gamma)$ be an unfolding of $B$. Assume $t_1 \overset{i_1}{--}t_2'\cdots t'_s\overset{i_s}{--} t$. By Theorem \[ho\], we have $g_{[i]}^{t_2}=\sum\limits_{i'\in [i]}g_{i'}^{\widetilde \mu_{[k]}(Q)}$ and $g_{[i]}^{t_1}=\sum\limits_{i'\in [i]}g_{i'}^{Q}$. By Proposition \[gindex\] and Lemma \[distinct\], for $k',k''\in [k]$, the values $g_{k'}^Q$ and $g_{k''}^Q$ are either both non-negative or both non-positive. Thus, $\sum\limits_{k'\in [k]}min(g^Q_{k'},0)=min(\sum\limits_{k'\in [k]}g^Q_{k'},0)$. Using Proposition \[rec\], if $[i]=[k]$, then $$g_{[i]}^{t_2}=\sum\limits_{i'\in [i]}g_{i'}^{\widetilde \mu_{[k]}(Q)}=-\sum\limits_{i'\in [i]}g_{i'}^{Q}=-g_{[i]}^{t_1};$$ if $[i]\neq [k]$, since $\sum\limits_{k'\in [k]}min(g^Q_{k'},0)=min(\sum\limits_{k'\in [k]}g^Q_{k'},0)$ and $b_{[i][k]}^{t_1}=\sum\limits_{i'\in [i]}b_{i'k}$, then $$g_{[i]}^{t_2}=\sum\limits_{i'\in [i]}(g_{i'}^{Q}+\sum\limits_{k'\in [k]}[b_{i'k'}^{t_1}]_+g_{k'}^{Q}-\sum\limits_{k'\in [k]}b_{i'k'}^{t_1}min(g_{k'}^{Q},0))=g_{[i]}^{t_1}+[b_{[i][k]}^{t_1}]_+g_{[k]}^{t_1}-b_{[i][k]}^{t_1}min(g_{[k]}^{t_1},0).$$ The result holds.
[**Acknowledgements:**]{} This project is supported by the National Natural Science Foundation of China (No.11671350 and No.11571173).
[10]{}
A. B. Buan, O. Iyama, I. Reiten and J. Scott, Cluster structures for 2-Calabi-Yau categories and unipotent groups. Compositio Math. 145 (2009), 1035-1079.
A. B. Buan, R. Marsh, M. Reineke, I. Reiten and G. Todorov, Tilting theory and cluster combinatorics, Adv. Math. 204 (2006), no.2, 572-618.
A. Berenstein, S. Fomin and A. Zelevinsky, Cluster algebras III: Upper bound and Bruhat cells. Duke Mathematical Journal. Vol.126, No.1(2005).
R. Dehy, B. Keller, On the combinatorics of rigid objects in 2-Calabi-Yau categories. Int. Math. Res. Not.(2008), doi:10.1093/imrn/rnn029.
L. Demonet, Categorification of skew-symmetrizable cluster algebras. Algebr. Represent. Theory 14 (2011), 1087-1162.
H. Derksen, J. Weyman, and A. Zelevinsky, Quivers with potentials and their representations II: applications to cluster algebras. J. Amer. Math. Soc., 23(3):749-790, 2010.
S. Fomin, A. Zelevinsky, Cluster algebras I: Foundations. J. Amer. Math. Soc. 15, 497-529(2002).
S. Fomin, A. Zelevinsky, Cluster algebras II: Finite type classification. Inven. Math. 154, 63-121(2003).
S. Fomin, A. Zelevinsky, Cluster algebras IV: Coefficients, Comp. Math. 143, 112-164, 2007.
C. J. Fu, B. Keller, On cluster algebras with coefficients and 2-Calabi-Yau categories. Trans. Am. Math. Soc. 362(2), 859-895 (2010).
M. Gross, P. Hacking, S. Keel, M. Kontsevich, Canonical bases for cluster algebras, arXiv:1411.1394v1 \[math.AG\], 2014.
M. Huang, F. Li, Unfolding of sign-skew-symmetric cluster algebras and applications to positivity and F-polynomials, arXiv:1609.05981.
T. Nakanishi, A. Zelevinsky, On tropical dualities in cluster algebras, Contemp. Math. 565 (2012) 217-226.
P.-G. Plamondon, Cluster characters for cluster categories with infinite-dimensional morphism spaces, Adv. Math. 227 (2011), 1-39.
P.-G. Plamondon, Cluster algebras via cluster categories with infinite-dimensional morphism spaces, Compositio Math. 147, 1921-1954, 2011.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'This is an expository account of the proof of Kontsevich’s combinatorial formula for intersections on moduli spaces of curves following the paper [@OP]. It is based on the lectures I gave on the subject in St. Petersburg in July of 2001.'
author:
- 'Andrei Okounkov[^1]'
title: Random trees and moduli of curves
---
These are notes from the lectures I gave in St. Petersburg in July of 2001. Our goal here is to give an informal introduction to intersection theory on the moduli spaces of curves and its relation to random matrices and combinatorics. More specifically, we want to explain the proof of Kontsevich’s formula given in [@OP] and how it is connected to other topics discussed at this summer school such as, for example, the combinatorics of increasing subsequences in a random permutation.
These lectures were intended for an audience of mostly analysts and combinatorialists interested in asymptotic representation theory and random matrices. This is very much reflected in both the selection of the material and its presentation. Since absolutely no background in geometry was assumed, there is a long and very basic discussion of what moduli spaces of curves and intersection theory on them are about. We hope that a reader trained in analysis or combinatorics will get some feeling for moduli of curves (without worrying too much about the finer points of the theory, all but a few of which were swept under the rug).
Conversely, in the second, asymptotic, part of the text, I allowed myself to operate more freely because the majority of the audience was experienced in asymptotic analysis. Also, since many fundamental ideas such as e.g. the KdV equations for the double scaling limit of the Hermitian $1$-matrix model were at length discussed in other lectures of the school, their discussion here is much briefer than it would have been in a completely self-contained course. A much more detailed treatment of both geometry and asymptotics can be found in the paper [@OP], on which my lectures were based.
It is needless to say that, since this is an expository text based on my joint work with Rahul Pandharipande, all the credit should be divided while any blame is solely my responsibility. Many people contributed to the success of the St. Petersburg summer school, but I want to especially thank A. Vershik for organizing the school and for the invitation to participate in it. I am grateful to NSF (grant DMS-0096246), Sloan foundation, and Packard foundation for partial financial support.
Introduction to moduli of curves
================================
Let me begin with an analogy. In the ideal world, the moduli spaces of curves would be quite similar to the Grassmann varieties $$Gr_{k,n}=\{L\subset {\mathbb{C}}^n,\dim L=k \}$$ of $k$-dimensional linear subspaces $L$ of an $n$-dimensional space. While any such subspace $L$ is geometrically just a $k$-dimensional vector space ${\mathbb{C}}^k$, nontrivial things can happen if $L$ is allowed to vary in families, and this nontriviality is captured by the geometry of $Gr_{k,n}$.
A convenient formalization of the notion of a family of linear spaces parameterized by points of some base space $B$ is a (locally trivial) vector bundle over $B$. There is a natural *tautological* vector bundle over the Grassmannian $Gr_{k,n}$ itself, namely the space $$\mathcal{L}=\{(L,v), v\in L \subset {\mathbb{C}}^n\}$$ formed by pairs $(L,v)$, where $L$ is a $k$-dimensional subspace of ${\mathbb{C}}^n$ and $v$ is a vector in $L$. Forgetting the vector $v$ gives a map $\mathcal{L}\to Gr_{k,n}$ whose fiber over $L\in Gr_{k,n}$ is naturally identified with $L$ itself.
Given any space $B$ and a map $$\phi: B \to Gr_{k,n}$$ we can form the pull-back of $\mathcal{L}$ $$\phi^* \mathcal{L} =\{(b,v), b \in B, v\in {\mathbb{C}}^n, v\in \phi(b)\}$$ which is a rank $k$ vector bundle over $B$. For a compact base $B$, in the $n\to\infty$ limit this becomes a bijection between (homotopy classes of) maps $B\to Gr_{k,\infty}$ and (isomorphism classes of) rank $k$ vector bundles on $B$.
In particular, one can associate to a vector bundle $\phi^* \mathcal{L}$ its characteristic cohomology classes obtained by pulling back the elements of $H^*(Gr_{k,n})$ via the map $\phi$. Intersections of these classes describe the enumerative geometry of the bundle $\phi^* \mathcal{L}$. It is thus especially important to understand the intersection theory on the space $Gr_{k,n}$ itself — and this leads to a very beautiful classical combinatorics, in particular, Schur functions play a central role (see for example [@Fu], Chapter 14).
One would like to have a similar theory with families of linear spaces replaced by families of curves of some genus $g$. That is, given a family $F$ of, say, smooth genus $g$ algebraic curves parameterized by some base $B$ we want to have a natural map $\phi: B\to {\mathcal{M}}_g$ that captures the essential information about the family $F$. Here ${\mathcal{M}}_g$ is the *moduli space* of smooth curves of genus $g$, that is, the space of isomorphism classes of smooth genus $g$ curves. At this point it may be useful to be a little naive about what we mean by a family of curves etc., in this way we should be able to understand the basic issues without too many technicalities getting in our way. Basically, a family $F$ of curves with base $B$ is a “nice” morphism $$\pi : F \to B$$ of algebraic varieties whose fibers $\pi^{-1}(b)$, $b\in B$, are smooth complete genus $g$ curves. We want the moduli space ${\mathcal{M}}_g$ and the induced map $\phi: B \to {\mathcal{M}}_g$ to be also algebraic.
We will see that the first difficulty with the above program is that, in general, the family $F$ will not be a pull-back of any universal family over ${\mathcal{M}}_g$. To get a sense of why this is the case we can cheat a little and consider the (normally forbidden) case $g=0$. Up to isomorphism, there is only one curve of genus $0$, namely the projective line ${\mathbb{P}}^1$. Hence the map $\phi$ in this case can only be the trivial map to a point. There exist, however, highly nontrivial families with fibers isomorphic to ${\mathbb{P}}^1$ or even ${\mathbb{C}}^1$ as we, in fact, already saw above in the example of the tautological rank $1$ bundle over $Gr_{1,n} \cong {\mathbb{P}}^{n-1}$.
The reason why there exist locally trivial yet globally nontrivial families with fiber ${\mathbb{P}}^1$ is that ${\mathbb{P}}^1$ has a large automorphism group which one can use to glue trivial pieces in a nontrivial way. Basically, the automorphisms are the principal issue behind the nonexistence of a universal family of curves over ${\mathcal{M}}_g$. The situation becomes manageable, if not entirely perfect, once one can get the automorphism group to be finite (which is automatic for smooth curves of genus $g>1$). A standard way to achieve this is to consider curves with sufficiently many marked points on them ($\ge 3$ marked points for $g=0$ and $\ge 1$ for $g=1$). Since curves with marked points arise very naturally in many other geometric situations, the moduli spaces ${\mathcal{M}}_{g,n}$ of smooth genus $g$ curves with $n$ distinct marked points should be considered on equal footing with the moduli spaces ${\mathcal{M}}_g$ of plain curves.
{#sM11}
As the first example, let us consider the space ${\mathcal{M}}_{1,1}$ of genus $g=1$ curves $C$ with one marked point $p\in C$. By Riemann-Roch the space $H^0(C,\mathcal{O}(2p))$ of meromorphic functions on $C$ with at most double pole at $p$ has dimension $2$. Hence, in addition to constants, there exists a (unique up to linear combinations with constants) nonconstant meromorphic function $$f: C \to {\mathbb{P}}^1\,,$$ with a double pole at $p$ and no other poles (this is essentially the Weierstraß function $\wp{}$). Thus, $f$ defines a $2$-fold branched covering of ${\mathbb{P}}^1$ doubly ramified over $\infty \in{\mathbb{P}}^1$. For topological reasons, it has three additional ramification points which, after normalization, we can take to be $0,1$ and some $\lambda\in {\mathbb{P}}^1\setminus\{0,1,\infty\}$.
Now it is easy to show that such a curve $C$ must be isomorphic to the curve $$C\cong\{y^2= x(x-1)(x-\lambda)\}\subset {\mathbb{P}}^2\label{Cyx}$$ in such a way that the point $p$ becomes the unique point at infinity and the function $f$ becomes the coordinate function $x$. It follows that every smooth pointed $g=1$ curve occurs in the following family of curves $$F=\{(x,y,\lambda), y^2= x(x-1)(x-\lambda)\} \label{F3}$$ with the base $$B=\{\lambda\}= {\mathbb{P}}^1\setminus\{0,1,\infty\}\,,$$ where the marked point is the point $p$ at infinity. For example, the curve corresponding to $\lambda=\frac32$ is plotted in Figure \[fig1\].
However, a given curve $C$ occurs in the family more than once. Indeed, we made arbitrary choices when we normalized the 3 critical values of $f$ to be $0$, $1$, and $\lambda$, respectively. At this stage, we can choose any of $6=3!$ possible assignments which makes the symmetric group $S(3)$ act on the base $B$ preserving the isomorphism class of the fiber. Concretely, this group is generated by involutions $$\lambda \mapsto 1-\lambda\,,$$ which interchanges the roles of $0$ and $1$, and by $$\lambda \mapsto 1/\lambda\,,$$ exchanging the roles of $1$ and $\lambda$. It can be shown that two members of the family $F$ are isomorphic if and only if they belong to the same $S(3)$ orbit. Thus, the structure map $\phi: B \to {\mathcal{M}}_{1,1}$ should be just the quotient map $$\phi: B \to B/S(3)= \operatorname{Spec}{\mathbb{C}}[B]^{S(3)} \,.$$ Here ${\mathbb{C}}[B]^{S(3)}$ is the algebra of $S(3)$-invariant regular functions on $B$. This algebra is a polynomial algebra with one generator, the traditional choice for which is the following $$j(\lambda) = 256\, \frac{(\lambda^2 - \lambda +1)^3}
{\lambda^2 (\lambda-1)^2} \,.$$ Thus, ${\mathcal{M}}_{1,1}$ is simply a line $${\mathcal{M}}_{1,1} = \operatorname{Spec}{\mathbb{C}}[j] \cong {\mathbb{C}}\,.$$
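For readers who like to check such formulas by machine, here is a quick symbolic verification (an editorial sketch in Python/SymPy, not part of the original lectures) of the $S(3)$-invariance of $j$, together with the special value $j(\tfrac12)=1728$:

\begin{verbatim}
# Check that j(lambda) is invariant under lambda -> 1 - lambda and
# lambda -> 1/lambda, the generators of the S(3)-action on the base B.
from sympy import symbols, simplify, Rational

lam = symbols('lam')
j = 256 * (lam**2 - lam + 1)**3 / (lam**2 * (lam - 1)**2)

assert simplify(j - j.subs(lam, 1 - lam)) == 0
assert simplify(j - j.subs(lam, 1/lam)) == 0
print(j.subs(lam, Rational(1, 2)))   # 1728, the fiber with 4 automorphisms
\end{verbatim}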
It is now time to point out that the family $F$ is not a pull-back $\phi^*$ of some universal family over ${\mathcal{M}}_{1,1}$. The simplest way to see this is to observe that the group $S(3)$ fails to act on $F$. Indeed, let us try to lift the involution $\lambda\to 1-\lambda$ from $B$ to $F$. There are two ways to do this, namely $$(x,y,\lambda) \mapsto (1-x,\pm i y,1-\lambda)\,,$$ neither of which is satisfactory because the square of either map $$(x,y,\lambda) \mapsto (x,-y,\lambda)$$ yields, instead of identity, a nontrivial automorphism of every curve in the family $F$. One should also observe that both choices act by a nontrivial automorphism on the fiber over the fixed point $\lambda=\frac 12 \in B$. In fact, the fibers of $F$ over fixed points of a transposition and a 3-cycle in $S(3)$, respectively, (with $j(\lambda)=1728$ and $j(\lambda)=0$, resp.) are precisely the curves with extra large automorphism groups (of order $4$ and $6$, resp.).
The existence of a nontrivial automorphism of every pointed genus $1$ curve leads to the somewhat unpleasant necessity to consider every point of ${\mathcal{M}}_{1,1}$ as a “half-point” in some suitable sense in order to get correct enumerative predictions. Again, automorphisms make the real world not quite ideal.
While it is important to be aware of these automorphism issues (for example, to understand how intersection numbers on moduli spaces can be rational numbers), there is no need to be pessimistic about them. In fact, by allowing spaces more general than algebraic varieties (called *stacks*) one can live a life in which ${\mathcal{M}}_{g,n}$ is smooth and with a universal family over it. This is, however, quite technical and will remain completely outside the scope of these lectures.
{#Fr}
Clearly, the space ${\mathcal{M}}_{1,1}\cong {\mathbb{C}}$ is not compact. The $j$-invariant of the curve goes to $\infty$ as the parameter $\lambda$ approaches the three excluded points $\{0,1,\infty\}$. As $\lambda$ approaches $0$ or $1$, the curve $C$ acquires a nodal singularity; for example, for $\lambda=1$ we get the curve $$y^2 = x(x-1)^2$$ plotted in Figure \[fig2\].
It is natural to complete the family by adding the corresponding nodal cubics for $\lambda \in \{0,1\}$. All plane cubics with a node being isomorphic, the function $j$ extends to a map $$j: {\mathbb{C}}\to {\mathbb{P}}^1/S(3) = {\overline{\mathcal{M}}}_{1,1} \cong {\mathbb{P}}^1$$ to the moduli space ${\overline{\mathcal{M}}}_{1,1}$ of curves of arithmetic genus $1$ with at most one node and a smooth marked point.[^2]
In general, it is very desirable to have a nice compactification for the noncompact spaces ${\mathcal{M}}_{g,n}$. First of all, interesting families of curves over a complete base $B$ are typically forced to have singular fibers over some points in the base (as in the example above). Fortunately, as we will see below, it often happens that precisely these special fibers contain key information about the geometry of the family. Also, since eventually we will be interested in intersection theory on the moduli spaces of curves, having a complete space can be a significant advantage.
A particularly remarkable compactification ${{\overline{\mathcal{M}}}_{g,n}}$ of ${{\mathcal{M}}_{g,n}}$ was constructed by Deligne and Mumford. The space ${{\overline{\mathcal{M}}}_{g,n}}$ is the moduli space of *stable* curves $C$ of arithmetic genus $g$ with $n$ distinct marked points. Stability, by definition, means that the curve $C$ is complete and connected with at worst nodal singularities, all marked points are smooth, and that $C$, together with marked points, admits only finitely many automorphisms. In practice, the last condition means that every rational component of the normalization of $C$ should have at least 3 special (that is, lying over marked or singular points of $C$) points. Observe that, in particular, the curve $C$ is allowed to be reducible. A typical stable curve can be seen in Figure \[fig3\].
{#scomp}
Those who have not seen this before are probably left wondering how it is possible for ${{\overline{\mathcal{M}}}_{g,n}}$ to be compact. What if, for example, one marked point $p_1$ on some fixed curve $C$ approaches another marked point $p_2\in C$ ? We should be able to assign some meaningful stable limit to such a $1$-parametric family of curves, but it is somewhat nontrivial to guess what it should be.
A family of curves with a $1$-dimensional base $B$ is a surface $F$ together with a map $\pi: F\to B$ whose fibers are the curves of the family. Marking $n$ points on the curves means giving $n$ sections of the map $\pi$, that is, $n$ maps $$p_1,\dots,p_n: B \to F\,,$$ such that $$\pi(p_k(b))=b\,,\quad b\in B\,,\quad k=1,\dots,n\,.$$ We will denote by $$S_i=p_i(B)$$ the trajectories of the marked points on $F$; they are curves on the surface $F$.
Now suppose we have a $1$-dimensional family of $2$-pointed curves such that at some bad point $b_0$ of the base $B$ we have $p_1(b_0)=p_2(b_0)$, that is, over this point two marked points hit each other, see Figure \[fig4\], and therefore the fiber $\pi^{-1}(b_0)$ is not a stable $2$-pointed curve.
It is quite easy to repair this family: just blow up the offending (but smooth) point $P=p_1(b_0)=p_2(b_0)$ on the surface $F$. Let $$\sigma : \widetilde{F} \to F$$ be the blow-up at $P$. Then $$\widetilde{\pi} = \pi \circ \sigma: \widetilde{F} \to B$$ is new family of curves with base $B$. Outside $b_0$ this the same family as before, whereas the fiber $\widetilde{\pi}^{-1}(b_0)$ is the old fiber $\pi^{-1}(b_0)$ plus the exceptional divisor $E=\sigma^{-1}(P)\cong {\mathbb{P}}^1$ of the blow-up, see Figure \[fig5\].
Assuming the sections $S_1$ and $S_2$ met each other and the fiber $\pi^{-1}(b_0)$ transversally at $P$, as in Figure \[fig4\], the marked points on $\widetilde{\pi}^{-1}(b_0)$ are two distinct point on the exceptional divisor $E$. Therefore, $\widetilde{\pi}^{-1}(b_0)$ is a stable $2$-pointed curve which is the stable limit of the curves $\pi^{-1}(b)$ as $b\to b_0$.
To summarize, if one marked point on a curve $C$ approaches another then $C$ bubbles off a projective line ${\mathbb{P}}^1$ with these two points on it as in Figure \[fig6\].
More generally, if $F$ is any family of curves with a smooth 1-dimensional base $B$ that are stable except over one offending point $b_0\in B$ then after a sequence of blow-ups and blow-downs and, possibly, after passing to a branched covering of $B$, one can always arrive at a new family with all fibers stable. Moreover, the fiber over $b_0$ in this family is determined uniquely. This process is called the *stable reduction* and how it works is explained, for example, in [@HM], Chapter 3C.
In particular, there exists a stable reduction of the family which, as we saw in Section \[Fr\], fails to have a stable fiber over the point $\lambda=\infty$ in the base. This is an example where only blow-ups and blow-downs will not suffice, that is, a base change is necessary.
{#section-2}
The topic of this lectures is intersection theory on the Deligne-Mumford spaces ${{\overline{\mathcal{M}}}_{g,n}}$ and, specifically, intersections of certain divisors $\psi_i$ which will be defined presently. It was conjectured by Witten [@W] that a suitable generating function for these intersections is a $\tau$-function for the Korteweg–de Vries hierarchy of differential equations. This conjecture was motivated by an analogy with matrix models of quantum gravity, where the same KdV hierarchy appears (this was already discussed in other lectures at this school). The KdV equations were deduced by Kontsevich in [@K] from an explicit combinatorial formula for the intersections of the $\psi$-classes (see also, for example, [@D] for more information about the connection to the KdV equations). The main goal of these lectures is to explain a proof of this combinatorial formula of Kontsevich following the paper [@OP].
The definition of the divisors $\psi_i$ is the following. A point in ${{\overline{\mathcal{M}}}_{g,n}}$ is a stable curve $C$ with marked points $p_1,\dots,p_n$. By definition, all points $p_i$ are smooth points of $C$, hence the tangent space $T_{p_i} C$ to $C$ at $p_i$ is a line. Similarly, we have the cotangent lines $T^*_{p_i} C$, $i=1,\dots,n$. As the point $(C,p_1,\dots,p_n)\in {{\overline{\mathcal{M}}}_{g,n}}$ varies, these cotangent lines $T^*_{p_i} C$ form $n$ line bundles over ${{\overline{\mathcal{M}}}_{g,n}}$. By definition, $\psi_i$ is the first Chern class of the line bundle $T^*_{p_i} C$. In other words, it is the divisor of any nonzero section of the line bundle $T^*_{p_i} C$.
{#ssi}
To get a better feeling for these classes let us intersect $\psi_i$ with a curve in ${{\overline{\mathcal{M}}}_{g,n}}$. The answer to this question should be a number. Let $B$ be a curve. A map $B\to{{\overline{\mathcal{M}}}_{g,n}}$ is morally equivalent to a $1$-dimensional family of curves with base $B$ (in reality we may have to pass to a suitable branched covering of $B$ to get an honest family. [^3]) So, let us consider a family $\pi: F\to B$ of stable pointed curves with base $B$ and the induced map $\phi:B\to {{\overline{\mathcal{M}}}_{g,n}}$. As usual, the marked points $p_1,\dots,p_n$ are sections of $\pi$ and $$S_i=p_i(B)\,, \quad i=1,\dots,n\,,$$ are disjoint curves on the surface $F$.
A section $s$ of $\phi^*(T_{p_i})$ is a vector field on the curve $S_i$ which is tangent to fibers of $\pi$ and, hence, $s$ is a section of the normal bundle to $S_i\subset F$, see Figure \[fig7\]. The degree of this normal bundle is the self-intersection of the curve $S_i$ on the surface $F$, that is, $$\deg\,(s)=(S_i,S_i)_F \,,$$ where $(s)$ is the divisor of $s$. In other words, $$\int_{B} c_1\left(\phi^*(T_{p_i}) \right) = (S_i,S_i)_F \,,$$ where $c_1$ denotes the $1$st Chern class. Dually, we have $$\int_{\phi(B)} \psi_i = - (S_i,S_i)_F\,.$$
We will now use this formula to compute the intersections of $\psi_i$ with ${{\overline{\mathcal{M}}}_{g,n}}$ in the cases when the space ${{\overline{\mathcal{M}}}_{g,n}}$ is itself $1$-dimensional. Since $$\dim{{\overline{\mathcal{M}}}_{g,n}}= 3g -3 + n \label{dimMgnb}\,,$$ this happens for ${\overline{\mathcal{M}}}_{0,4}$ and ${\overline{\mathcal{M}}}_{1,1}$.
{#sM04}
The space ${\overline{\mathcal{M}}}_{0,4}$ is easy to understand. After all, there is only one smooth curve of genus $0$, namely ${\mathbb{P}}^1$. Moreover, any 3 distinct points of ${\mathbb{P}}^1$ can be taken to the points $\{0,1,\infty\}$ by an automorphism of ${\mathbb{P}}^1$ (in particular, this means that ${\overline{\mathcal{M}}}_{0,3}$ is a point). After we identified the first three marked points with $\{0,1,\infty\}$, we can take any point $x\in {\mathbb{P}}^1 \setminus \{0,1,\infty\}$ as the fourth marked point. Thus the locus of smooth curves in ${\overline{\mathcal{M}}}_{0,4}$ is isomorphic to ${\mathbb{P}}^1 \setminus \{0,1,\infty\}$. Singular curves are obtained as we let $x$ approach the 3 excluded points $\{0,1,\infty\}$, which, by the process described in Section \[scomp\], bubbles off a new ${\mathbb{P}}^1$ with two marked points on it. This completes the description of ${\overline{\mathcal{M}}}_{0,4}\cong {\mathbb{P}}^1$.
In addition, this gives a description of the universal family over ${\overline{\mathcal{M}}}_{0,4}$ (oh yes, in genus $0$ it does exist !). Take ${\mathbb{P}}^1\times {\mathbb{P}}^1$ with coordinates $(x,y)$. The map $(x,y)\to x$ with 4 sections $$p_1(x)=(x,0)\,, \quad p_2(x)=(x,1)\,, \quad
p_3(x)=(x,\infty)\,, \quad p_4(x)=(x,x)\,,$$ defines a family of 4-pointed smooth genus $0$ curves for $x \in {\mathbb{P}}^1 \setminus \{0,1,\infty\}$.
The section $p_4$ collides with the other three at the points $(0,0)$, $(1,1)$, and $(\infty,\infty)$, see Figure \[fig8\]. To extend this family over all of ${\mathbb{P}}^1$, we blow up these collision points as in Section \[scomp\] and get the surface $F$ shown in Figure \[fig9\].
The closures of the curves $p_1(x),\dots,p_4(x)$ give $4$ sections which are now disjoint everywhere.
Incidentally, this surface $F$ can be naturally identified with ${\overline{\mathcal{M}}}_{0,5}$ and, more generally, for any $n$ there exists a natural map ${\overline{\mathcal{M}}}_{0,n+1}\to {\overline{\mathcal{M}}}_{0,n}$ giving the universal family over ${\overline{\mathcal{M}}}_{0,n}$. This map forgets the $(n+1)$st marked point and, if the curve becomes unstable after that, blows down all unstable components.
Now let us compute $\int_{{\overline{\mathcal{M}}}_{0,4}} \psi_1$ using the recipe given in Section \[ssi\]. Recall that $S_1\subset F$ denotes the closure of the curve $\{(x,0)\}$, $x\ne 0$ in $F$ (a.k.a. the proper transform of the corresponding curve in ${\mathbb{P}}^1\times {\mathbb{P}}^1$). Let $E$ denote the preimage of $(0,0)$ under the blow-up, that is, let $E$ be the exceptional divisor. The self-intersection of $S_1$ with any curve $\{y=c\}$, $c\ne 0$, on ${\mathbb{P}}^1\times {\mathbb{P}}^1$ is clearly zero. Letting $c\to 0$ we get $$(S_1,E+S_1)=0 \,.$$ Since, obviously, $(S_1,E)=1$ we conclude that $$\int_{{\overline{\mathcal{M}}}_{0,4}} \psi_1 = - (S_1,S_1)=-(-1)=1 \,.$$
{#section-3}
Now let us analyze the integral $\int_{{\overline{\mathcal{M}}}_{1,1}} \psi_1$. In the absence of a universal family, we have to look for another suitable family to compute this integral. A particularly convenient family can be obtained in the following way. Consider the projective plane ${\mathbb{P}}^2$ with affine coordinates $(x,y)$. Pick two generic cubic polynomials $f(x,y)$ and $g(x,y)$ and consider the family of cubic curves $$F=\{(x,y,t), f(x,y)- t \, g(x,y) = 0\} \subset {\mathbb{P}}^2\times {\mathbb{P}}^1\,,
\label{pencilF}$$ with base $B={\mathbb{P}}^1$ parameterized by $t$. The cubic curves $f(x,y)=0$ and $g(x,y)=0$ intersect in 9 points $p_1,\dots,p_9$ and we can choose any of those points as the marked point in our family. An example of such family of plane cubics is plotted in Figure \[fig10\].
Our first observation is that the surface $F$ is the blow-up of ${\mathbb{P}}^2$ at the points $p_1,\dots,p_9$ (which are distinct for generic $f$ and $g$). Indeed, we have a rational map $${\mathbb{P}}^2 \owns (x,y)\to \left(x,y,\frac{f}{g}\right) \in F \,,$$ which is regular away from $p_1,\dots,p_9$. Each of the points $p_i$ is a transverse intersection of $f=0$ and $g=0$, which is another way of saying that at those points the differentials $df$ and $dg$ are linearly independent. Thus, this map identifies $F$ with the blow-up of ${\mathbb{P}}^2$ at $p_1,\dots,p_9$. The graph of the function $\frac{f(x,y)}{g(x,y)}$ is shown in Figure \[fsur\]. This graph goes vertically over the points $p_1,\dots,p_9$ that are blown up.
![A fragment of the surface[]{data-label="fsur"}](surface.eps)
Since the section $S_1$ is exactly one of the exceptional divisors of this blow-up, arguing as in Section \[sM04\] above we find that $$(S_1,S_1) = -1 \,.$$ It does not mean, however, that we are done with the computation of the integral, because the induced map $\phi:B \to {\overline{\mathcal{M}}}_{1,1}$ is very far from being one-to-one. In fact, set-theoretically, the degree of the map $\phi$ is $12$ as we shall now see.
To compute the degree of $\phi$ we need to know how many times a fixed generic elliptic curve appears in the family $F$. This is a classical computation. First, one can show that the singular cubic is generic enough. Then we claim that, as $t$ varies, there will be precisely $12$ values of $t$ that produce a singular curve. There are various ways to see this. For example, the singularity of the curve is detected by vanishing of the discriminant. The discriminant of a cubic polynomial is a polynomial of degree 12 in its coefficients, hence a polynomial of degree 12 in $t$.
An alternative way to obtain this number $12$ is to compute the Euler characteristic of the surface $F$ in two different ways. On the one hand, viewing $F$ as a blow-up, we get $$\chi(F) = \chi({\mathbb{P}}^2) + 9(\chi({\mathbb{P}}^1)-1) = 12 \,.$$ On the other hand, $F$ is fibered over $B$ and the generic fiber is a smooth elliptic curve whose Euler characteristic is 0. The special fibers are the nodal elliptic curves with Euler characteristic equal to 1. Hence, there are 12 special fibers.
However, as remarked in Section \[sM11\], each point of ${\overline{\mathcal{M}}}_{1,1}$ is really a half-point because of the automorphism of order 2 of any pointed genus $1$ curve. Therefore, $24=2\cdot 12$ is the true degree of the map $\phi$. By the push-pull formula we thus obtain $$\int_{{\overline{\mathcal{M}}}_{1,1}} \psi_1 = \frac{1}{\deg \phi}
\int_B \phi^* \, \psi_1 =
\frac{1}{\deg \phi} \, (- (S_1,S_1)) = \frac{1}{24} \,.$$ An interesting corollary of this computation is that if $F\to B$ is a smooth family of $1$-pointed genus 1 stable curves over a smooth complete curve $B$ then the set-theoretic degree of the induced map $B\to{\overline{\mathcal{M}}}_{1,1}$ has to be divisible by $12$.
{#section-4}
It is difficult to imagine being able to compute many intersections of the $\psi$-classes in the above manner. To begin with, it is essentially impossible to write down a sufficiently explicit family of general high genus curves, see the discussion in Chapter 6F of [@HM]. It is therefore amazing that there exist several complete and beautiful descriptions of all possible intersection numbers of the form $${\left\langle}\tau_{k_1} \dots \tau_{k_n}
{\right\rangle}\overset{\textup{def}}=
\int_{{\overline{\mathcal{M}}}_{g,n}} \psi_1^{k_1} \cdots \psi_n^{k_n} \,, \quad
k_1+\cdots+k_n = 3g-3+n \,.
\label{tau}$$ The most striking description was conjectured by Witten [@W] and says that the exponential of the following generating function for the numbers (\[tau\]) $$\label{free}
F(t_1,t_2,\dots) = \sum_n \frac{1}{n!} \sum_{k_1,\dots,k_n}
{\left\langle}\tau_{k_1} \dots \tau_{k_n}
{\right\rangle}\, t_{k_1} \cdots t_{k_n}$$ is a $\tau$-function for the KdV hierarchy of differential equations. This conjecture was motivated by the (physical) analogy with the random matrix models of quantum gravity and, in fact, the $\tau$-function thus obtained is the same as the one that arises in the double scaling of the 1-matrix model (and discussed in other lectures of this school). The KdV equation and the string equation satisfied by the $\tau$-function uniquely determine all numbers . Alternatively, the numbers are uniquely determined by the associated Virasoro constraints. Further discussion can be found, for example, in [@D].
{#section-5}
Kontsevich in [@K] obtained the KdV equations for (\[free\]) from a combinatorial formula for the following (somewhat nonstandard) generating function $$K_{g,n}(z_1,\dots,z_n) = \sum_{k_1+\cdots+k_n = 3g-3+n}
{\left\langle}\tau_{k_1} \dots \tau_{k_n} {\right\rangle}\, \prod \frac{(2k_i-1)!!}{z_i^{2k_i+1}}
\,,\label{KK}$$ for the numbers (\[tau\]) with fixed $g$ and $n$.
The main ingredient in Kontsevich’s combinatorial formula is a 3-valent graph $G$ embedded in a topological surface $\Sigma_g$. A further condition on this graph $G$ is that the complement $\Sigma_g \setminus G$ is a union of $n$ topological disks (in particular, this forces $G$ to be connected). These disks, called *cells*, have to be (bijectively) numbered by $1,\dots,n$. Two such graphs $G$ and $G'$ are identified if there exists an orientation-preserving homeomorphism of $\Sigma_g$ that takes $G$ to $G'$ and preserves the labels of the cells. In particular, every graph $G$ has an automorphism group $\operatorname{Aut}G$, which is finite and only seldom nontrivial. Let ${\mathsf{G}}^3_{g,n}$ denote the set of distinct such graphs $G$ with given values of $g$ and $n$; this is a finite set. An example of an element of ${\mathsf{G}}^3_{2,3}$ is shown in Figure \[fig11\].
Another name for a graph $G\subset \Sigma_g$ such that $\Sigma_g \setminus G$ is a union of cells is a *map* on $\Sigma_g$. One can imagine that the cells are the countries in which the graph $G$ divides the surface $\Sigma_g$.
Kontsevich’s combinatorial formula for the function (\[KK\]) is the following: $$\begin{gathered}
\label{KF}
K_{g,n}(z_1,\dots,z_n) = \\
2^{2g-2+n} \sum_{G\in {\mathsf{G}}^3_{g,n}} \frac1{|\operatorname{Aut}G|}
\, \prod_{\textup{edges $e$ of $G$}} \frac1{z_{\textup{one side of $e$}}+
z_{\textup{other side of $e$}}} \,, \end{gathered}$$ where the meaning of the term $$z_{\textup{one side of $e$}}+
z_{\textup{other side of $e$}}$$ is the following. Each edge $e$ of $G$ separates two cells (which may be identical). These cells carry some labels, say, $i$ and $j$. Then $(z_i+z_j)^{-1}$ is the factor in (\[KF\]) corresponding to the edge $e$.
To get a better feeling for how this works let us look at the cases $(g,n)=(0,3),(1,1)$ that we understand well. The space ${\overline{\mathcal{M}}}_{0,3}$ is a point and the only nontrivial integral over it is $${\left\langle}\tau_0 \, \tau_0\, \tau_0 {\right\rangle}= \int_{{\overline{\mathcal{M}}}_{0,3}} 1 = 1 \,.$$ Thus, $$K_{0,3} = \frac1{z_1 z_2 z_3} \,.$$ The combinatorial side of Kontsevich’s formula, however, is not quite trivial. The set ${\mathsf{G}}^3_{0,3}$ consists of 4 elements. Two of them are shown in Figure \[fig12\];
the other two are obtained by permuting the cell labels of the graph of the left. All these graphs have only trivial automorphisms. Hence, we get $$\begin{gathered}
K_{0,3} = 2 \left( \frac1{2z_1(z_1+z_2)(z_1+z_3)} +
\textup{permutations} \right. \\
+ \left. \frac1{(z_1+z_2)(z_1+z_3)(z_2+z_3)} \right)\,,\end{gathered}$$ and, indeed, this simplifies to $(z_1 z_2 z_3)^{-1}$. What is apparent in this example is that it is rather mysterious how (\[KF\]), which a priori is only a rational function of the $z_i$’s, turns out to be a polynomial in the variables $z_i^{-1}$.
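This simplification is easy to confirm symbolically. The following sketch (an editorial addition in Python/SymPy) adds the three cell-labelings of the graph on the left of Figure \[fig12\] to the contribution of the graph on the right and checks that twice the sum equals $(z_1z_2z_3)^{-1}$:

\begin{verbatim}
# Verify the (g,n)=(0,3) case of Kontsevich's formula:
# the prefactor is 2^(2g-2+n) = 2, and there are four labeled graphs.
from sympy import symbols, simplify

z1, z2, z3 = symbols('z1 z2 z3')
terms = [
    1 / (2*z1*(z1 + z2)*(z1 + z3)),       # three labelings of the
    1 / (2*z2*(z1 + z2)*(z2 + z3)),       # first graph in Fig. [fig12]
    1 / (2*z3*(z1 + z3)*(z2 + z3)),
    1 / ((z1 + z2)*(z1 + z3)*(z2 + z3)),  # the second graph in Fig. [fig12]
]
assert simplify(2*sum(terms) - 1/(z1*z2*z3)) == 0
\end{verbatim}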
Perhaps this example created a somewhat wrong impression because in this case (\[KF\]) was much more complicated than (\[KK\]). So, let us consider the case $(g,n)=(1,1)$, where the computation of the unique integral $${\left\langle}\tau_1 {\right\rangle}= \frac{1}{24}$$ already does require some work. The unique element of ${\mathsf{G}}^3_{1,1}$ is shown in Figure \[fig13\].
This graph can be obtained by gluing the opposite sides of a hexagon, which also explains why the automorphism group of this graph is the cyclic group of order 6 (acting by rotations of the hexagon). Thus, (\[KF\]) specializes in this case to $$2 \, \frac{1}{6} \, \frac1{(z_1+z_1)^3} = \frac{1}{24} \,
\frac1{z_1^3} \,,$$ as it should.
{#section-6}
Kontsevich was led to the formula (\[KF\]) by considering a cellular decomposition of ${\mathcal{M}}_{g,n}$ coming from Strebel differentials. In these lectures we shall explain, following [@OP], a different approach to the formula via the asymptotics in the Hurwitz problem of enumerating branched coverings of ${\mathbb{P}}^1$. This approach is based on the relation between the Hurwitz problem and intersection theory on ${\overline{\mathcal{M}}}_{g,n}$ discovered in [@ELSV; @FP] and on the asymptotic analysis developed in [@O1]. It has several advantages over the approach based on Strebel differentials.
Hurwitz problem
===============
{#section-7}
Intersection theory on ${\overline{\mathcal{M}}}_{g,n}$ is about enumerative geometry of families of stable $n$-pointed curves of genus $g$. The significance of the space ${\overline{\mathcal{M}}}_{g,n}$ is that its geometry captures some essential information about all possible families of curves. Through the space ${\overline{\mathcal{M}}}_{g,n}$, one can learn something about curves in general from any specific enumerative problem. If the specific enumerative problem is sufficiently rich, one can gather a lot of information about intersection theory on ${\overline{\mathcal{M}}}_{g,n}$ from it. Potentially, one can get a complete understanding of the whole intersection theory, which then can be applied to any other enumerative problem.
Our strategy will be to study such a particular yet representative enumerative problem. This specific problem will be the Hurwitz problem about branched covering of ${\mathbb{P}}^1$. That there exists a direct connection between Hurwitz problem and the intersection theory on ${\overline{\mathcal{M}}}_{g,n}$ was first realized in [@ELSV; @FP]. The beautiful formula of [@ELSV] for the Hurwitz numbers will be the basis for our computations.
In fact, we will see that the (exact) knowledge of the numbers is equivalent to the *asymptotics* in the Hurwitz problem. This is, in some sense, very fortunate because asymptotic enumeration problems often tend to be more structured and accessible than exact enumeration.
{#section-8}
It is a century-old theme in combinatorics to enumerate branched coverings of a Riemann surface by another Riemann surface (an example of which is shown schematically in Figure \[fig15\]). Given degree $d$, positions of ramification points downstairs, and their types (that is, given the conjugacy class in $S(d)$ of the monodromy around each one of them), there exist only finitely many possible coverings and the natural question is: how many ? This very basic enumerative problem arises all over mathematics, from complex analysis to ergodic theory. These numbers of branched coverings are directly connected to other fundamental objects in combinatorics, namely to the class algebra of the symmetric group and — via the representation theory of finite groups — to the characters of symmetric groups.
We also mention that there is a general, and explicit, correspondence between enumeration of branched coverings of a curve and the Gromov-Witten theory of the same curve, see [@OP2]. From this point of view, the computation of the numbers (\[tau\]), that is, the Gromov-Witten theory of a point, arises as a limit in the Gromov-Witten theory of ${\mathbb{P}}^1$ as the degree goes to infinity. This is parallel to how the free energy equation arises as the limit in the $1$-matrix model.
{#section-9}
The particular branched covering enumeration problem that we will be concerned with can be stated as follows. The data in the problem are a partition $\mu$ and genus $g$. Let $$f: C \to {\mathbb{P}}^1$$ be a map of degree $$d=|\mu|=\sum \mu_i\,,$$ where $C$ is smooth connected complex curve of genus $g$. We require that $\infty\in{\mathbb{P}}^1$ is a critical value of the map $f$ and the corresponding monodromy has cycle type $\mu$. Equivalently, this can be phrased as the requirement that divisor $f^{-1}(\infty)$ has the form $$f^{-1}(\infty) = \sum_{i=1}^n \mu_i \, [p_i]\,,$$ where $n=\ell(\mu)$ is the length of the partition $\mu$ and $p_1,\dots,p_n\in C$ are the points lying over $\infty \in {\mathbb{P}}^1$. We further require that all other critical values of $f$ are distinct and nondegenerate. In other words, the map $f^{-1}$ has only square-root branch points in ${\mathbb{P}}^1\setminus\{\infty\}$. The number $r$ of such square-root branch points is given by the Riemann–Hurwitz formula $$r= 2g-2 + |\mu| + \ell(\mu) \,.\label{r=}$$ An example of such a covering can be seen in Figure \[fig14\] where $\mu=(3)$ and $r=2$, hence $d=3$, $n=1$, and $g=0$.
![A Hurwitz covering with $\mu=(3)$[]{data-label="fig14"}](branching2.EPS)
We will call a covering satisfying the above conditions a *Hurwitz covering*. Once the positions of the $r$ simple branchings are fixed, there are only finitely many Hurwitz coverings provided we identify two coverings $$f:C\to {\mathbb{P}}^1\,, \quad f':C'\to{\mathbb{P}}^1$$ for which there exists an isomorphism $h: C\to C'$ such that $f=f'\circ h$. Similarly, we define automorphisms of $f$ as automorphisms $h: C \to C$ such that $f=f\circ h$. We will see that, with a very rare exception, Hurwitz coverings have only trivial automorphisms.
By definition, the *Hurwitz number* $\operatorname{Hur}_g(\mu)$ is the number of isomorphism classes of Hurwitz coverings with given positions of branch points. In the special case when such a covering has a nontrivial automorphism, it should be counted with multiplicity $\frac12$.
{#section-10}
The Hurwitz problem can be restated as a problem about factoring permutations into transpositions. This goes as follows.
Let us pick a point $x\in{\mathbb{P}}^1$ which is not a ramification point. Then, by basic topology, all information about the covering is encoded in the homomorphism $$\pi_1({\mathbb{P}}^1\setminus\{\textup{ramification points}\},x) \to
\operatorname{Aut}f^{-1}(x)\cong S(d) \,.$$ The identification of $\operatorname{Aut}f^{-1}(x)$ with $S(d)$ here is not canonical, but it is convenient to pick any one of the $d!$ possible identifications. Then, by construction, the loop around $\infty$ goes to a permutation $s\in S(d)$ with cycle type $\mu$ and loops around finite ramification points correspond to some transpositions $t_1,\dots,t_r$ in $S(d)$.
The unique relation between those loops in $\pi_1$ becomes the equation $$t_1 \cdots t_r = s\,.\label{tpr}$$ This establishes the equivalence of the Hurwitz problem with the problem of factoring general permutations into transpositions (up to conjugation, since we picked an arbitrary identification of $\operatorname{Aut}f^{-1}(x)$ with $S(d)$). More precisely, the Hurwitz number $\operatorname{Hur}_g(\mu)$ is the number (up to conjugacy, and possibly with an automorphism factor) of factorizations of the form (\[tpr\]) that correspond to a connected branched covering. A branched covering is connected when we can get from any point of $f^{-1}(x)$ to any other point by the action of the monodromy group. Thus, the transpositions $t_1,\dots,t_r$ have to generate a transitive subgroup of $S(d)$, which is then automatically forced to be the whole of $S(d)$.
The fact that $t_1,\dots,t_r$ generate $S(d)$ greatly constrains the possible automorphisms of $f$. Indeed, the action of any nontrivial automorphism on $f^{-1}(x)$ has to commute with $t_1,\dots,t_r$, and hence with $S(d)$, which is only possible if $d=2$.
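For small degree this count can be carried out by brute force. The sketch below (an editorial addition in plain Python; the helper names are purely illustrative) enumerates tuples of transpositions in $S(d)$ whose product has cycle type $\mu$ and whose monodromy is transitive, and divides by $d!$ to account for the arbitrary identification of $f^{-1}(x)$ with $\{1,\dots,d\}$; this weighted count reproduces $\operatorname{Hur}_g(\mu)$ as defined above, with the automorphism factor of the degree $2$ case coming out automatically. For instance, it returns $\operatorname{Hur}_0((3))=1$, $\operatorname{Hur}_0((2,1))=4$ and $\operatorname{Hur}_1((2))=\tfrac12$.

\begin{verbatim}
from itertools import combinations, product
from math import factorial

def compose(p, q):                 # (p*q)(i) = p(q(i)): apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

def cycle_type(p):
    seen, ct = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; c += 1
            ct.append(c)
    return tuple(sorted(ct, reverse=True))

def transitive(ts, d):             # do the transpositions connect {0,...,d-1}?
    comp = list(range(d))
    def find(x):
        while comp[x] != x:
            x = comp[x]
        return x
    for t in ts:
        a, b = [i for i in range(d) if t[i] != i]
        comp[find(a)] = find(b)
    return len({find(i) for i in range(d)}) == 1

def hurwitz(g, mu):                # illustrative name, not from the lectures
    d, n = sum(mu), len(mu)
    r = 2*g - 2 + d + n            # simple branch points, by Riemann-Hurwitz
    transpositions = []
    for i, j in combinations(range(d), 2):
        t = list(range(d)); t[i], t[j] = j, i
        transpositions.append(tuple(t))
    target = tuple(sorted(mu, reverse=True))
    count = 0
    for ts in product(transpositions, repeat=r):
        p = tuple(range(d))
        for t in ts:
            p = compose(p, t)      # the product t_1 t_2 ... t_r
        if cycle_type(p) == target and transitive(ts, d):
            count += 1
    return count / factorial(d)

print(hurwitz(0, (3,)))    # 1.0
print(hurwitz(0, (2, 1)))  # 4.0
print(hurwitz(1, (2,)))    # 0.5
\end{verbatim}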
By the usual inclusion–exclusion principle, it is clear that one can go back and forth between enumeration of connected and possibly disconnected coverings. Thus, the Hurwitz problem is essentially equivalent to decomposing the powers of one single element of the class algebra of the symmetric group, namely of the conjugacy class of a transposition $$\sum_{1\le i<j\le d} (ij)
\label{2cycl}$$ in the standard conjugacy class basis. There is a classical formula, going back to Frobenius, for all such expansion coefficients in terms of irreducible characters. The character sums that one thus obtains can be viewed as finite analogs of Hermitian matrix integrals, with the dimension of a representation $\lambda$ playing the role of the Vandermonde determinant and the central character of (\[2cycl\]) in the representation $\lambda$ playing the role of the Gaussian density, see, for example [@O2; @O4] for a further discussion of properties of such sums.
{#section-11}
For us, the crucial property of the Hurwitz problem is its connection with the intersection theory on the Deligne-Mumford spaces ${{\overline{\mathcal{M}}}_{g,n}}$. This connection was discovered, independently, in [@FP] and [@ELSV], the latter paper containing the following general formula $$\operatorname{Hur}_g(\mu) = \frac{r!}{|\operatorname{Aut}\mu|}
\prod_{i=1}^n \frac{\mu_i^{\mu_i}}{\mu_i !} \,
\int_{{{\overline{\mathcal{M}}}_{g,n}}} \frac{1-\lambda_1+\dots\pm \lambda_g}{\prod (1-\mu_i \psi_i)}\,,
\label{ELSV}$$ where $r$ is the number of branch points given by (\[r=\]), $n=\ell(\mu)$ is the length of the partition $\mu$, $\operatorname{Aut}\mu$ is the stabilizer of the vector $\mu$ in $S(n)$, $$\lambda_i \in H^{2i} \left({{\overline{\mathcal{M}}}_{g,n}}\right)\,,\quad i=1,\dots,g\,,$$ are the Chern classes of the Hodge bundle over ${{\overline{\mathcal{M}}}_{g,n}}$ (it is not important for what follows to know what this is), and finally, the denominators are supposed to be expanded into a geometric series $$\frac{1}{1-\mu_i \psi_i} = 1+ \mu_i \psi_i + \mu_i^2 \psi_i^2 + \dots\,,
\label{gc}$$ which terminates because $\psi_i \in H^{2}\left({{\overline{\mathcal{M}}}_{g,n}}\right)$ is nilpotent.
In particular, the integral in the ELSV formula is a polynomial in the $\mu_i$’s. The monomials in this polynomial are obtained by picking a term in the expansion (\[gc\]) for each $i=1,\dots,n$ and then adding a suitable $\lambda$-class to bring the total degree to the dimension of ${{\overline{\mathcal{M}}}_{g,n}}$. It is, therefore, clear that the top degree term of this polynomial involves only intersections of the $\psi$-classes and no $\lambda$-classes. That is, $$\int_{{{\overline{\mathcal{M}}}_{g,n}}} \frac{1-\lambda_1+\dots\pm \lambda_g}{\prod (1-\mu_i \psi_i)} = \sum_{k_1+\cdots+k_n=3g-3+n} \prod \mu_i^{k_i} \,
{\left\langle}\tau_{k_1} \dots \tau_{k_n} {\right\rangle}+ \textup{lower degree} \,.\label{topE}$$ These top degree terms are precisely the numbers (\[tau\]) that we want to understand.
{#sLH}
A natural way to infer something about the top degree part of a polynomial is to let its arguments go to infinity. The behavior of the prefactors in (\[ELSV\]) is given by the Stirling formula $$\frac{m^m}{m!} \sim \frac{e^m}{\sqrt{2\pi m}}\,, \quad
m\to \infty \,.$$ Let $N$ be a large parameter and let $\mu_i$ depend on $N$ in such a way that $$\frac{\mu_i}{N} \to x_i\,, \quad i=1,\dots,n\,, \quad N\to\infty\,,$$ where $x_1,\dots,x_n$ are finite. We will also additionally assume that all $\mu_i$’s are distinct and hence $|\operatorname{Aut}\mu|=1$. Then by (\[ELSV\]) and the Stirling formula, we have the following asymptotics of the Hurwitz numbers: $$\begin{gathered}
\frac{1}{N^{3g-3+n/2}} \, \frac{\operatorname{Hur}_g(\mu)}{e^{|\mu|} r !} \to \\
\frac1{(2\pi)^{n/2}} \sum_{k_1+\cdots+k_n=3g-3+n} \prod
x_i^{k_i-\frac12} \, {\left\langle}\tau_{k_1} \dots \tau_{k_n} {\right\rangle}=:
H_g(x)\,. \label{Hg}\end{gathered}$$ It is convenient to Laplace transform the asymptotics $H_g(x)$. Since $$\int_0^\infty e^{-sx} \, x^{k-1/2} \, dx = \frac{\Gamma(k+1/2)}{s^{k+1/2}} =
\sqrt{\pi} \, \frac{(2k-1)!!}{2^k\, s^{k+1/2}} \,,$$ we get $$\int_{{\mathbb{R}}^n_{>0}} e^{-s\cdot x} \, H_g(x) \, dx =
\sum_{k_1+\cdots+k_n=3g-3+n}
{\left\langle}\tau_{k_1} \dots \tau_{k_n} {\right\rangle}\, \prod
\frac{(2k_i-1)!!}{(2s_i)^{k_i+1/2}} \,,\label{LHg}$$ which up to the following change of variables $$z_i = \sqrt{2 s_i} \,, \quad i=1,\dots,n \,,$$ is precisely the Kontsevich generating function (\[KK\]) for the numbers (\[tau\]).
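The half-integer moments used in this computation are easy to confirm symbolically; the following small check (an editorial sketch, assuming SymPy) verifies $\int_0^\infty e^{-sx}x^{k-1/2}\,dx=\sqrt{\pi}\,(2k-1)!!/(2^k s^{k+1/2})$ for the first few values of $k$:

\begin{verbatim}
from sympy import (symbols, integrate, exp, sqrt, pi, factorial2,
                   Rational, simplify, oo)

s, x = symbols('s x', positive=True)
for k in range(4):
    lhs = integrate(exp(-s*x) * x**(k - Rational(1, 2)), (x, 0, oo))
    dblfact = factorial2(2*k - 1) if k > 0 else 1    # (-1)!! = 1
    rhs = sqrt(pi) * dblfact / (2**k * s**(k + Rational(1, 2)))
    assert simplify(lhs - rhs) == 0
\end{verbatim}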
Thus, we find ourselves in situation which looks rather comfortable: the generating function that we seek to compute is not only related to a specific enumerative problem but, in fact, it is the Laplace transform of the asymptotics in that enumerative problem. People who do enumeration know that asymptotics tends to be simpler than exact enumeration and, usually, the Laplace (or Fourier) transform of the asymptotics is the most natural thing to compute.
This general philosophy is, of course, only good if we can find a handle on the Hurwitz problem. In the following subsection, we will discuss a restatement of the Hurwitz problem in terms of enumeration of certain graphs on genus $g$ surfaces that we call branching graphs. This description will turn out to be particularly suitable for our purposes (which may not be a huge surprise because, after all, Kontsevich’s formula is stated in terms of graphs on surfaces).
{#Hbr}
A very classical way to study branched coverings is to cut the base into simply-connected pieces. Over each of the resulting regions the covering becomes trivial, that is, consisting of $d$ disjoint copies of the region downstairs, where $d$ is the degree of the covering. The structure of the covering is then encoded in the information on how those pieces are patched together upstairs. Typically, this gluing data is presented in the form of a graph, usually with some additional labels etc.
There is, obviously, a considerable flexibility in this approach and some choices may lead to much more convenient graph enumeration problems than the others. For the Hurwitz problem, we will follow the strategy from [@A], which goes as follows.
Let $$f: C \to {\mathbb{P}}^1$$ be a Hurwitz covering with partition $\mu$ and genus $g$. In particular, the number $r$ of finite ramification points of $f$ is given by the formula . Without loss of generality, we can assume these ramification points to be $r$th roots of unity in ${\mathbb{C}}$. Let us cut the base ${\mathbb{P}}^1$ along the unit circle $S=\{|z|=1\}$, that is, let us write $${\mathbb{P}}^1 = D_- \sqcup S \sqcup D_+$$ where $$D_\pm = \{|z|\lessgtr 1\}$$ are the Southern and Northern hemisphere in Figure \[fig15\], respectively.
Since the map $f$ is unramified over $D_-$, its preimage $f^{-1}(D_-)$ consists of $d$ disjoint disks. Their closures, however, are not disjoint: they come together precisely at the critical points of $f$. By construction, critical points of $f$ are in bijection with its critical values, that is, with the $r$th roots of unity in ${\mathbb{P}}^1$. Thus, the the set $f^{-1}(\overline{D_-}) \subset C$ looks like the structure in Figure \[fig16\]. This structure is, in fact, a graph $\Gamma$ embedded in a genus $g$ surface. Its vertices are the components of $f^{-1}(D_-)$ and its edges are the critical points of $f$ that join those components together. In addition, the edges of $\Gamma$ (there are $r$ of them) are labeled by the roots of unity.
This edge-labeled graph $\Gamma\subset\Sigma_g$ is subject to some additional constraints. First, the cyclic order of labels around any vertex should be in agreement with the cyclic order of roots of unity. Next, the complement of $\Gamma$ consists of $n$ topological disks, where $n$ is the length of the partition $\mu$. Indeed, the complement of $\Gamma$ corresponds to $f^{-1}(D_+)$ and $z=\infty$ is the only ramification point in $D_+$. The connected components of $f^{-1}(D_+)$ thus correspond to parts of $\mu$.
The partition $\mu$ can be reconstructed from the edge labels of $\Gamma$ as follows. Pick a cell $U_i$ in $f^{-1}(D_+)$. The length of the corresponding part $\mu_i$ of $\mu$ is precisely the number of times the map $f$ wraps the boundary $\partial U_i$ around the circle $S$. As we follow the boundary $\partial U_i$, we see the edge labels appear in a certain sequence. As we complete a full circle around $\partial U_i$, the edge labels will make exactly $\mu_i$ turns around $S$. It is natural to call this number $\mu_i$ the *perimeter* of the cell $U_i$. This perimeter is $(2\pi)^{-1}$ times the sum of *angles* between pairs of the adjacent edges on $\partial U_i$, where the angle is the usual angle in $(0,2\pi)$ between the corresponding roots of unity, see Figure \[fig16\].
We call a edge-labeled embedded graph $\Gamma$ as above a *branching graph*. By the above correspondence, the number $\operatorname{Hur}_g(\mu)$ is the number of genus $g$ branching graphs with $n$ cells of perimeter $\mu_1,\dots,\mu_n$. As usual, in the trivial $d=2$ case, those graphs have to be counted with automorphism factors.
It is this definition of Hurwitz numbers that we will use for the asymptotic analysis in the next lecture.
{#section-12}
It may be instructive to consider an example of how this correspondence between coverings and graphs works. Consider the covering corresponding to the factorization $$(12)\, (13)\, (24)\, (14)\, (13)=(1243)$$ of the form (\[tpr\]). The degree of this covering is $d=4$, it has $r=5$ ramification points, and the monodromy $\mu=(4)$ around infinity. It follows that its genus is $g=1$. Let us denote the five finite ramification points by $$\{a,b,c,d,e\}=\{1,e^{2\pi i/5},\dots,e^{8\pi i/5}\} \,.$$ The preimage of $D_-$ on the torus $\Sigma_1$ consists of $4$ disks and the monodromies tell us which disk is connected to which at which critical point: for example, at the critical point lying over $a$, the 1st disk is connected to the 2nd disk. This is illustrated in Figure \[fig17\]
where, among the 3 preimages of any critical value, the one which is a critical point is typeset in boldface. Clearly, any disk in $f^{-1}(D_-)$ has the alphabet $\{a,b,c,d,e\}$ going counterclockwise around its boundary and, in particular, the cyclic order of the critical values on its boundary is in agreement with the orientation on $\Sigma_1$.
Observe that the preimage $f^{-1}(D_+)$ is one cell whose boundary is a 4-fold covering of the equator. In particular, the alphabet $\{a,b,c,d,e\}$ is repeated $4$ times around the boundary of $f^{-1}(D_+)$. Finally, Figure \[fig18\] shows the branching graph translation of Figure \[fig17\].
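One can verify the factorization above mechanically. In the following sketch (an editorial addition) permutations of $\{1,2,3,4\}$ are composed right to left, so that the rightmost transposition is applied first:

\begin{verbatim}
# Check (12)(13)(24)(14)(13) = (1243) with right-to-left composition.
def perm_from_cycle(cycle, n=4):
    p = {i: i for i in range(1, n + 1)}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        p[a] = b
    return p

def compose(p, q):                  # (p*q)(i) = p(q(i))
    return {i: p[q[i]] for i in q}

factors = [(1, 2), (1, 3), (2, 4), (1, 4), (1, 3)]
prod = {i: i for i in range(1, 5)}
for t in factors:
    prod = compose(prod, perm_from_cycle(t))
print(prod)    # {1: 2, 2: 4, 3: 1, 4: 3}, i.e. the 4-cycle (1 2 4 3)
\end{verbatim}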
{#section-13}
Finally, a few remarks about how one can prove a formula like (\[ELSV\]). This will necessarily be a very sketchy account; the actual details of the proof can be found in [@GV; @OP], as well as in the original paper [@ELSV].
As mentioned before, the numbers like $\operatorname{Hur}_g(\mu)$ are a special case of the integrals in the Gromov-Witten theory of ${\mathbb{P}}^1$, that is, certain intersections on the Kontsevich moduli space ${\overline{\mathcal{M}}}_{g,d,n}({\mathbb{P}}^1)$ of stable degree $d$ maps $$f: C \to {\mathbb{P}}^1$$ from a varying $n$-pointed genus $g$ domain curve $C$ to the fixed target curve ${\mathbb{P}}^1$.
Since such a map can be composed with any automorphism of ${\mathbb{P}}^1$, we have a ${\mathbb{C}}^\times$-action on ${\overline{\mathcal{M}}}_{g,d,n}({\mathbb{P}}^1)$. A theory due to Graber and Pandharipande [@GP] explains how to localize the integrals in Gromov-Witten theory to the fixed points of the action of the torus ${\mathbb{C}}^\times$. These fixed point loci in ${\overline{\mathcal{M}}}_{g,d,n}({\mathbb{P}}^1)$ are, essentially, products of Deligne-Mumford spaces ${\overline{\mathcal{M}}}_{g_i,n_i}$ for some $g_i$’s and $n_i$’s. Indeed, only very few maps are fixed by the action of the torus. Namely, for the standard ${\mathbb{C}}^\times$-action on ${\mathbb{P}}^1$ and an irreducible domain curve $C$ the only choices are the degree $0$ constant maps to $\{0,\infty\}=\left({\mathbb{P}}^1\right)^{{\mathbb{C}}^\times}$ or the degree $d$ map $${\mathbb{P}}^1 \owns z \mapsto z^d \in {\mathbb{P}}^1 \,.$$ In general, the domain curve is allowed to be reducible, but still any torus-invariant map has to be of the above form on each component $C_i$ of $C$. Once all discrete invariants of the curve $C$ are fixed (that is, the combinatorics of its irreducible components, their genera and numbers of marked points on them) the remaining moduli parameters are only a choice of a bunch of curves to collapse plus a choice of where to attach the non-collapsed ${\mathbb{P}}^1$’s to them. That is, the torus-fixed loci are products of Deligne-Mumford spaces, modulo possible automorphisms of the combinatorial structure.
In this way integrals in the Gromov-Witten theory of ${\mathbb{P}}^1$ can be reduced, at least in principle, to computing intersections on ${{\overline{\mathcal{M}}}_{g,n}}$. An elegant localization analysis leading to the ELSV formula is presented in [@GV], see also [@OP].
Asymptotics in Hurwitz problem
==============================
{#section-14}
Our goal now is to see how the Laplace transform of the asymptotics in the Hurwitz problem turns into Kontsevich’s combinatorial formula (\[KF\]). The formulation of the Hurwitz problem in terms of branching graphs, see Section \[Hbr\], looks promising. Indeed, a branching graph $\Gamma$ is by definition embedded in a topological genus $g$ surface $\Sigma_g$ and it cuts $\Sigma_g$ into $n$ cells. Here the numbers $g$ and $n$ are the same as the indices in ${{\overline{\mathcal{M}}}_{g,n}}$, whose intersection theory we are trying to understand. Similarly, in Kontsevich’s formula (\[KF\]) we have a graph $G$ embedded into $\Sigma_g$ and cutting it into $n$ cells. This graph $G$, however, is a more modest object: it does not have any edge labels and it is allowed to have only $3$-valent vertices.
Recall that we denote by ${\mathsf{G}}^3_{g,n}$ the set of all possible $3$-valent graphs as in Kontsevich’s formula . Let us introduce two larger sets $${\mathsf{G}}^3_{g,n} \subset {\mathsf{G}}^{\ge 3}_{g,n} \subset {\mathsf{G}}_{g,n} \,,$$ on which, by definition, the $3$-valence condition is weakened to allow vertices of valence $3$ or more, and dropped altogether, respectively. The elements of ${{\mathsf{G}}^{\ge 3}_{g,n}}$ can be obtained from elements of ${\mathsf{G}}^3_{g,n}$ by contracting some edges. In particular, the set ${{\mathsf{G}}^{\ge 3}_{g,n}}$ is still a finite set. Similarly, denote by ${\mathsf{H}}_{g,\mu}$ the set of all branching graphs with given genus $g$ and perimeter partition $\mu$. Our first order of business is to construct a map $${\mathsf{H}}_{g,\mu} \to {{\mathsf{G}}^{\ge 3}_{g,n}}\,,$$ which we call the *homotopy type* map. This map is the composition of the map $${\mathsf{H}}_{g,\mu} \to {\mathsf{G}}_{g,n}$$ which simply forgets the edge labels with the map $${\mathsf{G}}_{g,n} \to {{\mathsf{G}}^{\ge 3}_{g,n}}\,,$$ which does the following. First, we remove all univalent vertices together with the incident edge. After that, we remove the all remaining $2$-valent vertices joining their two incident edges. What is left, by construction, has only vertices of valence $3$ and higher and still cuts $\Sigma_g$ into $n$ cells.
We remark that in the two exceptional cases $(g,n)=(0,1),(0,2)$, which correspond to unstable moduli spaces, what we get in the end (a point and a circle, respectively) is not really an element of ${{\mathsf{G}}^{\ge 3}_{g,n}}$. In all other cases, however, we do get an honest element of ${{\mathsf{G}}^{\ge 3}_{g,n}}$. Figure \[fig19\] illustrates this procedure applied to the branching graph from Figure \[fig16\].
{#section-15}
Now let us make the following simple but important observation. Since the set ${{\mathsf{G}}^{\ge 3}_{g,n}}$ is *finite* and we are interested in the asymptotics of $\operatorname{Hur}_{g}(\mu)$ as $\mu\to\infty$ while keeping $g$ and $n$ fixed, we can just do the asymptotics separately for each homotopy type and then sum over all possible homotopy types. The Laplace transform will then be also expressed as a sum over all corresponding homotopy types $G$ in ${{\mathsf{G}}^{\ge 3}_{g,n}}$.
We now claim that not only is Kontsevich’s combinatorial formula (\[KF\]) the Laplace transformed asymptotics but, in fact, the summation over $G\in {\mathsf{G}}^3_{g,n}$ in Kontsevich’s formula corresponds precisely to summation over possible homotopy types. Since there are non-trivalent homotopy types, implicit in this claim is the statement that *non-trivalent homotopy types do not contribute to asymptotics*.
{#section-16}
What do we need to do to get the asymptotics of the number of branching graphs of a given homotopy type $G$ ? What would suffice is to have a simple way to enumerate all such branching graphs. To enumerate all branching graphs with given homotopy type $G$, we need to retrace the steps of the homotopy type map. Imagine that the homotopy type graph $G$ is a fossil from which we want to reconstruct some prehistoric branching graph $\Gamma$. What are the all possible ways to do it ?
The answer to all these rhetoric questions is quite simple. It is easy to see that the preimage of any edge in $G$ is some subtree in the original branching graph $\Gamma$. In addition, all these trees carry edge-labels which were erased by the homotopy type map. Thus, for any edge $e$ of $G$, we need to take a tree $T_e$ whose edges are labeled by roots of unity. In particular, there is a canonical way to make this tree planar, that is, embed it in the plane in such a way that the cyclic order of edges around each vertex agrees with the order of their labels. In particular, each such tree is a *branching tree*, that is, it satisfies the $(g,n)=(0,1)$ case of our definition of a branching graph[^4].
Next, these trees are to be glued into the graph $\Gamma$ by identifying some of their vertices, as in Figure \[fig20\]. This means that each of these branching trees carries two special vertices, which we call its *root* and *top*. These special vertices of $T_e$ mark the places where $T_e$ is attached to the other trees in $\Gamma$. We will call such a branching tree with two marked vertices an *edge tree*.
{#section-17}
Now we have a procedure which from a homotopy type $G$ and a collection of edge trees $\{T_e\}$ with distinct labels assembles a branching graph $\Gamma$. This procedure, which we will call *assembly*, does have some imperfections. Those imperfections will be discussed momentarily, but first we want to make the following important observation.
Since the homotopy type graph $G$ is something fixed and finite, *the whole asymptotics of the branching graphs lies in the edge trees*. For a large random branching graph $\Gamma$, those edge trees will be large random trees. This is how the theory of random trees enters the scene. Fortunately, a large random tree is a very well studied and a very nicely behaved object, see for example [@Pit] for a particularly enjoyable introduction. It turns out that all the information we need about random trees is either already classical or can be easily deduced from known results.
In fact, all required knowledge about random trees can be quite easily deduced (as was done in [@OP]) from first principles, which in this case means the following formula going back to Cayley [@Sta]. Consider all possible trees $T$ with the vertex set $\{1,\dots,m\}$. For any such tree $T$, we have a function $\operatorname{val}_T(i)$ which takes the vertex $i=1,\dots,m$ to its valence in $T$. The information about all vertex valences in all possible trees $T$ is encoded in the following generating function $$\sum_T z_1^{\operatorname{val}_T(1)} \cdots z_m^{\operatorname{val}_T(m)} =
z_1 \cdots z_m (z_1 + \cdots + z_m)^{m-2} \,.
\label{Cay}$$ A probabilistic restatement of this result is the following. The valence $\operatorname{val}_T(i)$ is the number of edges of $T$ incident to the vertex $i$. Let us cut all edges in half; since there were $m-1$ edges of $T$, we get $2m-2$ half-edges. The formula says that the same distribution of half-edges can be obtained as follows: give every vertex one half-edge and throw the remaining $m-2$ half-edges at the vertices randomly, like darts.
What, then, is the valence of a given vertex in a random tree $T$? It is 1 for the half-edge allowance that it always gets, plus its share in the random distribution of $m-2$ darts among $m$ targets. As $m\to\infty$, this share goes to a Poisson random variable with mean $1$. In other words, as $m\to\infty$ we have $$\operatorname{Prob}\{\operatorname{val}_T(i)=v\} \to \frac{e^{-1}}{(v-1)!} \,,
\quad v=1,2,\dots \,.\label{valP}$$ For different vertices, their valences become independent in the $m\to\infty$ limit.
Also, setting all variables in to $1$ we find that the total number of trees with vertex set $\{1,\dots,m\}$ is $m^{m-2}$.
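For readers who like to see such statements checked by machine, both facts follow from (and can be tested through) the Prüfer bijection: a tree with vertex set $\{1,\dots,m\}$ corresponds to a sequence in $\{1,\dots,m\}^{m-2}$, and the valence of vertex $i$ equals one plus the number of occurrences of $i$ in the sequence. The short Python sketch below verifies the generating function above for $m=5$ and the Poisson limit of the valence distribution; the particular values of $m$, the sample size and the random seed are arbitrary choices made only for illustration.

``` python
import math
from itertools import product
import numpy as np

# Pruefer bijection: trees on {1,...,m}  <->  sequences in {1,...,m}^(m-2),
# with  val_T(i) = 1 + (number of occurrences of i in the sequence).

# 1) Check the generating function for m = 5 with random numerical z_i.
m = 5
rng = np.random.default_rng(0)
z = rng.random(m)
lhs = sum(np.prod([z[i]**(1 + seq.count(i)) for i in range(m)])
          for seq in product(range(m), repeat=m - 2))
rhs = np.prod(z) * z.sum()**(m - 2)
print(lhs, rhs)          # the two numbers coincide; z_i = 1 gives m^(m-2) trees

# 2) Valence of a fixed vertex: 1 + Binomial(m-2, 1/m) -> 1 + Poisson(1).
m, trials = 2000, 200000
hits = rng.binomial(m - 2, 1.0/m, size=trials)   # occurrences of one fixed label
for v in range(1, 6):
    print(v, np.mean(hits == v - 1), math.exp(-1)/math.factorial(v - 1))
```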
{#assc}
Now it is time to talk about how the assembly map differs from being one-to-one (it is clear that it is onto).
First, it may happen that the cyclic order of edge labels is violated at one of the vertices of $G$ where we patch together different edge trees. If this is the case, we simply declare the assembly to be a failure and do nothing. The probability of such an assembly failure in the large graph limit can be computed as follows. Suppose that we need to glue together three vertices with valences $v_1$, $v_2$, and $v_3$. From , the chance of seeing these particular valences is $$\frac{e^{-3}}{(v_1-1)! (v_2-1)! (v_3-1)!} \,.$$ On the other hand, the conditional probability that the edge labels in the resulting graph are cyclically ordered, given that they were cyclically ordered before gluing is easily seen to be $$\frac{(v_1-1)! (v_2-1)! (v_3-1)!}{(v_1+v_2+v_3-1)!} \,.$$ Hence the success rate of the assembly at a particular trivalent vertex is $$e^{-3}\sum_{v_1,v_2,v_3\ge 1} \frac1{(v_1+v_2+v_3-1)!} =
\frac{e^{-2}}{2} \,.$$ Assembly failures at distinct vertices being asymptotically independent events, this goes into an overall factor and, eventually, into the prefactor in .
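As a quick sanity check, the last sum is elementary to evaluate numerically (truncating it changes nothing at machine precision), for instance in Python:

``` python
import math

# triple sum over v1, v2, v3 >= 1 of 1/(v1+v2+v3-1)!; terms beyond the cutoff
# are utterly negligible
S = sum(1.0/math.factorial(v1 + v2 + v3 - 1)
        for v1 in range(1, 26) for v2 in range(1, 26) for v3 in range(1, 26))
print(S, math.e/2)                          # the sum equals e/2
print(math.exp(-3)*S, math.exp(-2)/2)       # success rate per trivalent vertex
```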
At this point it should be clear that there is no need to consider nontrivalent vertices. Indeed, a homotopy type graph with a vertex of valence $\ge 4$ can be obtained from a trivalent graph by contracting some edges, hence corresponds to the case when some of the edge trees are trivial. It is obvious that the chance that a large random tree comes out empty is negligible. Hence, nontrivalent graphs indeed make no contribution to the asymptotics and can safely be ignored.
{#section-18}
The second (minor) issue with the assembly map is that we can get the same, that is, isomorphic branching graphs starting from different collections of the edge trees. This happens if the homotopy type graph $G$ has nontrivial automorphisms. It is clear that the group $\operatorname{Aut}(G)$ acts on the edges of $G$ and, hence, acts by permutations on collections of edge trees, preserving the isomorphism class of the assembly output. It is also clear that the chance for a large edge tree to be isomorphic to another edge tree (or to itself with root and top permuted) is, asymptotically, zero. Hence, almost surely, this $\operatorname{Aut}(G)$ action is free, and so branching graphs are overcounted by exactly a factor of $|\operatorname{Aut}(G)|$. This explains the division by $|\operatorname{Aut}(G)|$ in .
{#section-19}
Now, after explaining the summation over trivalent graphs and the automorphism factor in , we get to the heart of Kontsevich’s formula — the product over the edges.
It is at this point that the convenience (promised in Section \[sLH\]) of working with the Laplace transform rather than the asymptotics itself can be appreciated. We will see shortly that, asymptotically, the cell perimeters of a branching graph $\Gamma$ assembled from a 3-valent graph $G$ and a bunch of random edge trees $\{T_e\}$ are sums of independent contributions from the edges of $G$. This makes the Laplace transform factor over the edges of $G$ as in . To justify the above claim, we need to take a closer look at a large typical edge tree.
{#section-20}
Let $T$ be an edge tree. It has two marked vertices, root and top; let us call the path joining them the *trunk* of $T$. The tree $T$ naturally splits into 3 parts: the root component, the top component, and the trunk component, according to their closest trunk point. This is illustrated in Figure \[fig21\].
Figure \[fig21\] may give a wrong idea of the relative size of these components for a typical large edge tree. Let $M$ be the size (e.g. the number of vertices) of $T$; we are interested in the limit $M\to\infty$. It is known, and can be deduced without difficulty from (see for example [@OP]), that the size $M$ distributes itself among the three components of $T$ in the $M\to\infty$ limit as follows.
First, the sizes of the root and top components stay finite in the $M\to\infty$ limit. In fact, each goes to the *Borel distribution*, given by the following formula $$\operatorname{Prob}(k) = \frac{k^{k-1}\, e^{-k}}{k!}\,,
\quad k=1,2,\dots \,.$$ Second, the typical size of the trunk is of order $\sqrt{M}$. More precisely, scaled by $\sqrt{M}$, the trunk size distribution goes to the Rayleigh distribution with density $$x\, e^{-x^2/2} \, dx \,, \quad x\in (0,\infty) \,.$$ For our purposes, however, it only matters that the size of all these parts is $o(M)$ as $M\to\infty$.
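The $\sqrt{M}$ statement about the trunk is the classical fact that the distance between two random vertices of a uniform random labeled tree is of order $\sqrt{M}$ with a Rayleigh limit law. A small Monte Carlo check in Python is sketched below; it samples trees through the Prüfer bijection, takes the root and top to be two fixed vertex labels (which, by exchangeability, behave like two uniformly random vertices), and compares only the sample mean with the Rayleigh mean $\sqrt{\pi/2}\approx 1.2533$. The value of $M$ and the number of trials are arbitrary.

``` python
import heapq, math, random
from collections import deque

def random_tree_edges(m, rng):
    """Uniform random labeled tree on {0,...,m-1} via a random Pruefer sequence."""
    seq = [rng.randrange(m) for _ in range(m - 2)]
    deg = [1]*m
    for v in seq:
        deg[v] += 1
    leaves = [v for v in range(m) if deg[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        u = heapq.heappop(leaves)       # smallest current leaf
        edges.append((u, v))
        deg[v] -= 1
        if deg[v] == 1:
            heapq.heappush(leaves, v)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def distance(edges, m, a, b):
    """Graph distance between vertices a and b (BFS on the tree)."""
    adj = [[] for _ in range(m)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = {a: 0}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        if u == b:
            return dist[u]
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)

rng = random.Random(1)
M, trials = 4000, 2000
mean = sum(distance(random_tree_edges(M, rng), M, 0, 1)
           for _ in range(trials)) / (trials*math.sqrt(M))
print(mean, math.sqrt(math.pi/2))      # rescaled trunk length vs Rayleigh mean
```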
The overwhelming majority of vertices lie, therefore, somewhere in the branches of the trunk component of $T$. What is very important is that, after assembly, any such vertex will find itself completely surrounded by a unique cell. As a result, it will contribute exactly $1$ to that cell’s perimeter. What this analysis shows is that, asymptotically, the cell perimeters are determined simply by the number of such interior trunk vertices ending up in a given cell, all other contributions to perimeters being $o(M)$. It should be clear that such contributions of distinct edges of $G$ are indeed independent, leading to the factorization in .
{#section-21}
What remains is to determine what the edge factors are, that is, the actual contribution of an edge tree $T$ to the perimeters of the adjacent cells.
All we need to know for this is to know how vertices in the trunk component distribute themselves between the two sides of the trunk, as in Figure \[fig21\]. One shows, see [@OP] and below, that the fraction of the vertices that land on a given side of the trunk is, asymptotically, *uniformly distributed* on $[0,1]$. This reduces the computation of the edge factor to computing one single integral. That computation will be presented in a moment, after we review the knowledge that we have accumulated so far.
{#section-22}
Let $G$ be a 3-valent map with $n$ cells. It has $$|E(G)|= 6g-6+3n$$ edges and $$|V(G)|= 4g-4+2n$$ vertices, which follows from the Euler characteristic equation $$|V(G)| - |E(G)| + n = 2- 2g$$ combined with the 3-valence condition $3|V(G)|=2|E(G)|$.
Let $e\in E(G)$ be an edge of $G$ and let $T_e$ be the corresponding edge tree. Let $d_e$ be the number of vertices of $T_e$. Ignoring the few vertices on the trunk itself, the vertices of $T_e$ distribute themselves between the two sides of the trunk of $T_e$. Let’s say that $p_e$ vertices are on the one side and define the number $q_e$ by $$p_e + q_e = d_e \,.\label{split}$$ It is clear that $q_e$ is the approximate number of vertices on the other side of the trunk. We call the numbers $p_e$ and $q_e$ the *semiperimeters* of the tree $T_e$.
The basic question, which we now can answer in the large graph limit, is how many branching graphs $\Gamma$ have given semiperimeters $\{(p_e,q_e)\}_{e\in E(G)}$. This distribution can be computed asymptotically as follows.
{#section-23}
First, there are some overall factors that come from automorphisms of $G$ and the assembly success rate. Recall that in Section \[assc\] we saw that the assembly success rate is $e^{-2|V(G)|} 2^{-|V(G)|}$.
Second, for every edge $e\in E(G)$ we need to pick an edge tree $T_e$ with $d_e$ vertices. As we already learned from , the number of vertex-labeled trees with $d_e$ vertices is $d_e^{d_e-2}$. Vertex labels can be traded for edge labels at the expense of the factor $d_e!/(d_e-1)!=d_e$, hence there are $d_e^{d_e-3}$ edge-labeled trees with $d_e$ vertices. The choice of the root vertex brings in an additional factor of $d_e$ choices. Once the root is fixed, the condition dictates the position of the top, so there is no additional freedom in choosing it. To summarize, there are $\sim d_e^{d_e-2}$ edge trees with given semiperimeters $p_e$ and $q_e$.
Third, the edge labels of $\Gamma$ are a shuffle of the edge labels of the trees $T_e$. Let $$r=\sum_{e}(d_e-1)$$ be the total number of edges in $\Gamma$ (and, hence, also the total number of simple branch points in the Hurwitz covering corresponding to $\Gamma$). Obviously, there are $$\frac{r!}{\prod_{e\in E(G)} (d_e-1)!} \label{de!}$$ ways to shuffle the edge labels of $\{T_e\}$ into the edge labels of $\Gamma$.
Putting it all together, we obtain the following approximate expression for the number of branching graphs with given semiperimeters $\{(p_e,q_e)\}_{e\in E(G)}$ $$\label{r!}
\frac{r! \, e^{-2|V(G)|} \, 2^{-|V(G)|}}{|\operatorname{Aut}(G)|} \,
\prod_{e\in E(G)} \frac{d_e^{d_e-2}}{(d_e-1) !} \sim
\frac{r! \, e^d \, 2^{-|V(G)|}}{|\operatorname{Aut}(G)|} \,
\prod_{e\in E(G)} \frac{1}{\sqrt{2\pi}\, d_e^{3/2}} \,,$$ where $$d = |V(\Gamma)|= \sum_{e} d_e - 2 |V(G)|$$ is the degree of the corresponding Hurwitz covering and the RHS of is obtained from the LHS by the Stirling formula.
Note that the factor $r! \, e^d$ precisely cancels against the prefactor in .
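The Stirling step used here, $d^{d-2}/(d-1)!\sim e^{d}/(\sqrt{2\pi}\,d^{3/2})$, is also easy to confirm numerically (working with logarithms to avoid overflow):

``` python
import math

for d in (10, 100, 1000, 10000):
    lhs = (d - 2)*math.log(d) - math.lgamma(d)           # log of d^(d-2)/(d-1)!
    rhs = d - 0.5*math.log(2*math.pi) - 1.5*math.log(d)  # log of e^d/(sqrt(2 pi) d^{3/2})
    print(d, math.exp(lhs - rhs))                        # ratio tends to 1
```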
{#section-24}
Since the cell perimeters of $\Gamma$ are the sums of edge tree semiperimeters along the boundaries of the cells, the computation of the Laplace transform indeed boils down to the computation of a single edge factor $$\begin{aligned}
\frac{1}{\sqrt{2\pi}} \iint_{p,q>0} \, \frac{e^{-p s_1 - q s_2
}}{(p+q)^{3/2}} \, dp \, dq & = \frac{1}{\sqrt{2\pi}} \,
\frac1{s_1-s_2}\int_{x>0}
\left(e^{-s_2 x} - e^{-s_1 x}\right) \, \frac{dx}{x^{3/2}} \\
&= \frac{1}{\sqrt{2\pi}}\,
\Gamma\left(-\tfrac12\right) \, \frac{\sqrt{s_2} - \sqrt{s_1}}{s_1-s_2} \\
&=\frac{\sqrt{2}}{\sqrt{s_1} +
\sqrt{s_2}} \,,\end{aligned}$$ where we set $x=p+q$.
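This value of the edge factor is also easy to confirm numerically; the small sketch below removes the integrable singularity at $x=0$ by the substitution $x=u^2$ before calling a standard quadrature routine (the values of $s_1$, $s_2$ are arbitrary positive numbers).

``` python
import numpy as np
from scipy.integrate import quad

s1, s2 = 1.3, 0.4                     # arbitrary positive Laplace variables

def integrand(u):
    # (e^{-s2 x} - e^{-s1 x}) x^{-3/2} dx  with  x = u^2
    if u < 1e-12:
        return 2.0*(s1 - s2)          # finite limit as u -> 0
    return 2.0*(np.exp(-s2*u*u) - np.exp(-s1*u*u))/u**2

edge = quad(integrand, 0.0, np.inf)[0] / (np.sqrt(2.0*np.pi)*(s1 - s2))
print(edge, np.sqrt(2.0)/(np.sqrt(s1) + np.sqrt(s2)))   # the two values agree
```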
Recall that the relation between the Laplace transform variables $s_i$ in and the variables $z_i$ in Kontsevich’s generating function \[KK\] is $$z_i = \sqrt{2 s_i} \,, \quad i=1,\dots,n \,.$$ Thus we indeed get the LHS of , including the correct exponent of $2$, which is $$|E(G)|- |V(G)| = 2g-2 + n \,.$$ This completes the proof of Kontsevich’s formula .
Remarks
=======
{#section-25}
Since random matrices are the common thread of many talks at this school, let us point out various connections between moduli of curves and random matrices. As we already discussed, the original KdV conjecture of Witten was based on the physical parallelism between intersection theory on ${{\overline{\mathcal{M}}}_{g,n}}$ and the double scaling limit of the Hermitian $1$-matrix model. Despite many spectacular achievements by physicists as well as mathematicians, this double scaling seems to remain a source of serious mathematical challenges; in particular, it appears that no direct mathematical connection between it and moduli of curves is known. On the other hand, there is a very direct connection between what we did and another, much simpler, matrix model, namely, the edge scaling of the standard GUE model. This connection goes as follows.
Recall that by the Wick formula the coefficients of the $1/N$ expansion of the following $N\times N$ Hermitian matrix integral $$\int e^{-\operatorname{tr}H^2} \, \prod_{1}^{m} \operatorname{tr}H^{k_i} \, dH \, \label{dH}$$ are the numbers of ways to glue a surface of given topology from $m$ polygons with perimeters $k_1,\dots, k_m$. The double scaling limit of the $1$-matrix model is concerned with gluing a given surface out of a very large number of small pieces. An opposite asymptotic regime is when the number $m$ of pieces stays fixed while their sizes $k_i$ go to infinity. Since for large $k$ the trace $\operatorname{tr}H^k$ picks out the maximal eigenvalues of $H$, this asymptotic regime is about the largest eigenvalues of a Hermitian random matrix. In the large $N$ limit, the distribution of the largest eigenvalues of $H$ is well known to be the Airy ensemble. This edge scaling random matrix ensemble is very rich, yet susceptible to a very detailed mathematical analysis. In particular, the individual distributions of eigenvalues were found by Tracy and Widom in [@TW]. They are given in terms of certain solutions of the Painlevé II equation.
The connection between GUE edge scaling and what we were doing is the following. If one takes a branching graph as in Figure \[fig16\] and strips it of its edge labels, one gets a map on the genus $g$ surface $\Sigma_g$ with a few cells of very large perimeter, that is, an object of precisely the kind enumerated by in the edge scaling regime. We argued that almost all vertices of a large branching graph $\Gamma$ are completely surrounded by a unique cell, hence contribute exactly $1$ to that cell’s perimeter regardless of the edge labels. This shows that edge labels play no essential role in the asymptotics, thus establishing a direct connection between Hurwitz number asymptotics and GUE edge scaling. A similar direct connection can be established in other situations, for example, between GUE edge scaling and the distribution of long increasing subsequences in a random permutation, see [@O1]. Since a great deal is known about GUE edge scaling, one can profit very easily from having a direct connection to it. In particular, one can give closed error-function-type formulas for natural generating functions (known as $n$-point functions) for the numbers , see [@O3].
There exists another matrix model, namely Kontsevich’s matrix model [@K], specifically designed to reproduce the graph summation in as its diagrammatic expansion. Once the combinatorial formula is established, Kontsevich’s model can be used to analyze it, in particular, to prove the KdV equations, see [@K] and also [@D].
Alternatively, the KdV equations can be pulled back from the GUE edge scaling (where they have been studied in depth by Adler, Shiota, and van Moerbeke) via the above described connection, see the exposition in [@O3].
{#section-26}
In our approach, the intersections , the combinatorial formula , the KdV equations etc. appear through the asymptotic analysis of the Hurwitz problem. The ELSV formula , which is the bridge between enumeration of branched coverings and the intersection theory of ${{\overline{\mathcal{M}}}_{g,n}}$, is, on the other hand, an exact formula. It is, therefore, natural to ask for more exact bridges between intersection theory, combinatorics, and integrable systems.
After the moduli space ${{\overline{\mathcal{M}}}_{g,n}}$ of stable curves, a natural next step is the Gromov-Witten theory of ${\mathbb{P}}^1$, that is, the intersection theory on the moduli space ${{\overline{\mathcal{M}}}_{g,n}}({\mathbb{P}}^1,d)$ of stable degree $d$ maps $$C \to {\mathbb{P}}^1$$ from an $n$-pointed genus $g$ curve $C$ to the projective line ${\mathbb{P}}^1$. More generally, one can replace ${\mathbb{P}}^1$ by some higher genus target curve $X$. It turns out, see [@OP2], that there is a simple dictionary, which we call the Gromov-Witten-Hurwitz correspondence, between enumeration of branched coverings of ${\mathbb{P}}^1$ and Gromov-Witten theory of ${\mathbb{P}}^1$. This correspondence naturally connects with some very beautiful combinatorics and integrable systems, the role of random matrices now being played by random partitions. The connection with the integrable systems is seen best in the equivariant Gromov-Witten theory of ${\mathbb{P}}^1$, where the 2-Toda lattice hierarchy of Ueno and Takasaki plays the role that KdV played for ${{\overline{\mathcal{M}}}_{g,n}}$.
[99]{}
V. I. Arnold, [*Topological classification of complex trigonometric polynomials and the combinatorics of graphs with an identical number of vertices and edges*]{}, Funct. Anal. Appl. **30** (1996), no. 1, 1–14.
P. Di Francesco, *2-D quantum and topological gravities, matrix models, and integrable differential systems*, The Painlevé Property, Springer–Verlag, 1999, 229–285.
T. Ekedahl, S. Lando, M. Shapiro, and A. Vainshtein, *Hurwitz numbers and intersections on moduli spaces of curves*, Invent. Math. **146** (2001), no. 2, 297–327.
B. Fantechi and R. Pandharipande, [*Stable maps and branch divisors*]{}, math.AG/9905104.
W. Fulton, [*Intersection theory*]{}, Springer–Verlag, 1998.
T. Graber and R. Pandharipande, [*Localization of virtual classes*]{}, Invent. Math. [**135**]{} (1999), 487–518.
T. Graber and R. Vakil, [*Hodge integrals and Hurwitz numbers via virtual localization*]{}, math.AG/0003028.
J. Harris and I. Morrison, *Moduli of Curves*, Springer–Verlag, 1998.
M. Kontsevich, *Intersection theory on the moduli space of curves and the matrix Airy function*, Commun. Math. Phys., **147**, 1992, 1–23.
A. Okounkov, *Random matrices and random permutations*, IMRN (2000) no. 20, 1043–1095.
A. Okounkov, *Infinite wedge and random partitions*, Selecta Math., New Ser., **7** (2001), 1–25.
A. Okounkov, *Toda equations for Hurwitz numbers*, Math. Res. Lett. **7** (2000), no. 4, 447–453.
A. Okounkov, *Generating functions for intersection numbers on moduli spaces of curves*, IMRN (2002) no. 18, 933-957
A. Okounkov and R. Pandharipande, *Gromov-Witten theory, Hurwitz numbers, and matrix models, I*, math.AG/0101147.
A. Okounkov and R. Pandharipande, in preparation.
J. Pitman, *Enumeration of trees and forests related to branching processes and random walks*, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. **41**, 1998.
R. Stanley, *Enumerative combinatorics*, Vol. 2, Cambridge Studies in Advanced Mathematics, 62. Cambridge University Press, Cambridge, 1999.
C. A. Tracy and H. Widom, *Level-spacing distributions and the Airy kernel*, Commun. Math. Phys., **159**, 1994, 151–174.
E. Witten, *Two-dimensional gravity and intersection theory on moduli space*, Surveys in Diff. Geom. **1** (1991), 243–310.
[^1]: Department of Mathematics, University of California at Berkeley, Evans Hall \#3840, Berkeley, CA 94720-3840. E-mail: [email protected]
[^2]: For future reference we point out that something rather different happens in the family as $\lambda\to\infty$. Indeed, the equation $$\lambda^{-1} \, y^2 = x(x-1)(\lambda^{-1} x - 1)$$ becomes $x(x-1)=0$ in the $\lambda\to\infty$ limit, which means that we get three lines (one of which is the line at infinity), all three of them intersecting in the marked point at infinity. In other words, the fiber of the family at $\lambda=\infty$ is very much not the kind of curve by which we want to compactify ${\mathcal{M}}_{1,1}$. This problem can be cured, but in a not completely trivial way, see below.
[^3]: We already saw an example of this in Section \[sM11\]. Indeed, ${\overline{\mathcal{M}}}_{1,1}$ is itself a line ${\mathbb{P}}^1$. However, in order to get an actual family over it we have to go to a branched covering. As always, the automorphisms are to blame.
[^4]: A small and inessential detail is that the labels of $T_e$ are taken from a larger set of roots of unity.
---
abstract: 'We analyze the deviations of the mixing induced CP asymmetry in $B\to \phi K_s$ from $\sin{2\beta}$, as well as the deviations of the asymmetries in $B_s\to K^{*0}\bar{K}^{*0}$, $B_s\to \phi K^{*0}$ and $B_s\to \phi\phi$ from $\sin{2\beta_s}$, that arise in SM due to penguin pollution. We use a theoretical input which is short-distance dominated in QCD-factorization and thus free of IR-divergencies. We also provide alternative ways to extract angles of the unitarity triangle from penguin-mediated decays, and give predictions for $B_s\to K^{*0}\bar{K}^{*0}$ observables.'
author:
- 'Javier Virto\'
title: |
**Evading $1/m_b$-suppressed IR divergencies in QCDF:\
$B_s\to KK$ Decays and $B_{d,s}$ mixing[^1]**
---
Introduction
============
The phenomenology of hadronic $B_d$-decays has been a matter of intensive research in the past 15 years, in part due to the large amount of data collected at the B-factories Babar and Belle, and at CDF and D0. This research has led to the consensus that the CKM mechanism for CP and flavor violation is accurate. However, some puzzles have survived up to this day [@BpiKex; @BpiKth; @BVKex; @BVKth; @Stewart].
A new era in B-physics has been triggered by the experimental precision and by the emergent exploration of the $B_s$ system. The future is marked by the starting of LHCb and the possibility of a super-B factory. So far, measurements on the $B_s$ system are limited to the mass difference $\Delta M_s$ [@DMs], other mixing parameters [@mix], and some $B_s\to \pi K$ and $B_s\to KK$ modes [@Bsmodes], but the extension of this list is an important element in the B-physics program.
On the theoretical side, the study of non-leptonic $B$-decays is difficult because of the presence of important long distance strong interaction effects. The correct computation of these contributions is crucial to be able to resolve small NP contributions. The amplitude of a B meson decaying into two light mesons can be written as [$$A(B\to M_1M_2)=\lambda_u^{(D)*}\,T_{M_1M_2}+\lambda_c^{(D)*}\,P_{M_1M_2}$$]{} where $\lambda_q^{(D)}\equiv V_{qb}V_{qD}^*$, and $T$ and $P$ are called “tree” and “penguin”. These hadronic parameters can be extracted from data, or can be predicted using symmetries (such as flavor). The direct computations from QCD are much more involved and are based on factorization and the $1/m_b$ expansion. They appear in the context of QCDF, pQCD or SCET [@fact]. While the methods based on flavor symmetries include naturally all kinds of long distance physics, they suffer from big uncertainties due to bad data and poorly estimated SU(3) breaking. On the other hand, methods based on factorization suffer from uncertainties due to non-factorizable chirally enhanced $1/m_b$ corrections, and long distance charm loops (charming penguins).
These proceedings review a recent proposal to improve (at a phenomenological level) on some of the weak points of the approaches mentioned above [@DMV; @BVV]. We also include a straightforward application to $B\to \phi K_s$ and comment on the limitations and the applicability of the method.
Express review of $B_q-\bar{B}_q$ mixing
========================================
The time evolution of a $B_q$ meson ($q=d,s$) can be easily described by changing to the mass eigenbasis. The relationship between the flavor basis and the physical mesons is specified by a mixing parameter usually denoted by $q/p$, [$$\begin{aligned}
|B_L\rangle&=&\frac{1}{\sqrt{1+|q/p|^2}}\Big( |B^0\rangle + \frac{q}{p}\,|\bar{B}^0\rangle\Big)\nonumber\\
|B_H\rangle&=&\frac{1}{\sqrt{1+|q/p|^2}}\Big( |B^0\rangle - \frac{q}{p}\,|\bar{B}^0\rangle\Big).
\end{aligned}$$]{} The time evolution of the mass eigenstates is straightforward and depends only on the masses and the widths of the physical B-mesons. Therefore, the evolution of the flavor eigenstates –which describes the oscillations– is specified by the masses and widths of the physical mesons and on the mixing parameter $q/p$.
Then one can define the mixing angle $\phi_M$ as [$$\phi_M\equiv -\arg{(q/p)}$$]{} In terms of the entries of the effective hamiltonian, [$$\frac{q}{p}=\sqrt{\frac{M_{12}^*-\frac{i}{2}\Gamma_{12}^*}{{M_{12}-\frac{i}{2}\Gamma_{12}}}}\simeq \sqrt{\frac{M_{12}^*}{M_{12}}}$$]{} where we have used that $|\Gamma_{12}|\ll |M_{12}|$. This means that the above definition of the mixing angle is equivalent to [$$\phi_M=\arg{(M_{12})}$$]{} One should be aware, though, that these quantities are not convention independent and are sensitive to unphysical phase redefinitions. However, once a convention for the weak phases is chosen everywhere, this definition is meaningful. In the Wolfenstein parametrization $V_{ub}$ and $V_{td}$ have phases of $\mathcal{O}(1)$ and $V_{ts}$ of $\mathcal{O}(\lambda^2)$. This means that in SM, in the case of $B_d-\bar{B}_d$ mixing $\ M_{12}^d\propto (V_{td}^*V_{tb})^2$ and $\phi_d^{SM}=2\beta+\mathcal{O}(\lambda^4)$, and in the case of $B_s-\bar{B}_s$ mixing $\ M_{12}^s\propto (V_{ts}^*V_{tb})^2$ and $\phi_s^{SM}=2\beta_s$.
In order to measure the mixing angle $\phi_M$ one usually looks at CP asymmetries. The time-dependent CP asymmetry of a $B(t)\to f$ decay is defined as [$$\mathcal{A}_{\rm CP}(t)\equiv\frac{\Gamma(B(t)\to f)-\Gamma(\bar{B}(t)\to \bar{f})}{\Gamma(B(t)\to f)+\Gamma(\bar{B}(t)\to \bar{f})}$$]{} where $B(t)\ (\bar{B}(t))$ was a $B\ (\bar{B})$ at $t=0$. For $f=f_{\rm CP}$ a final CP-eigenstate (with CP eigenvalue $\eta_f$), and neglecting the small CP violation in mixing one finds [$$\mathcal{A}_{\rm CP}(t)=\frac{\Adir\cos(\Delta M t)+\Amix\sin(\Delta M t)}
{\cosh(\Delta \Gamma t/2)-\mathcal{A}_{\Delta\Gamma}\sinh(\Delta\Gamma t/2)},$$]{} where the quantities $\Adir$ and $\Amix$ are the so-called direct and mixing induced CP asymmetries, and are measured from the time oscillations of $\mathcal{A}_{\rm CP}(t)$. The mixing induced CP asymmetry is given by [$$\Amix=-\frac{2{\rm Im}\lambda_f}{1+|\lambda_f|^2}\ ;\quad \lambda_f\equiv \frac{q}{p}\frac{\bar{A}_f}{A_f}\simeq e^{-i\phi_M}\frac{\bar{A}_f}{A_f}$$]{} where $\lambda_f$ is a convention-independent (physical) quantity. Here $\bar{A}_f\equiv A(\bar{B}\to f)$ and $A_f\equiv A(B\to f)$.
In some particular cases, one can extract the mixing angle in a very clean way from a mixing induced CP asymmetry. The prominent example is the case of $B_d\to J/\psi K_s$ [@jpsi0]. Since this decay is dominated by a single amplitude, $\bar{A}_{J/\psi K_s}/A_{J/\psi K_s}\simeq \eta_{J/\psi K_s}=-1$, and therefore, [$$-\Amix(B_d\to J/\psi K_s)\simeq \sin{2\beta}$$]{} The neglected amplitude is both CKM and $\alpha_s(m_b)$ suppressed with respect to the dominant amplitude, so the corrections to this equation are below the percent (or even the per mil) level [@jpsi1; @jpsi2; @jpsi3].
The same argument is valid in principle for penguin-mediated $b\to s$ decays, where the amplitude can be written as [$$A=\lambda_u^{(s)*}\,T+\lambda_c^{(s)*}\,P\,\stackrel{!}{\simeq}\, \lambda_c^{(s)*}\,P.$$]{} The tree part is strongly CKM suppressed since $|\lambda_u^{(s)}/\lambda_c^{(s)}|\sim 2\%$. However, as opposed to the previous case, $T$ is not suppressed with respect to $P$. In fact, an enhancement of $T$ over $P$ would spoil the extraction of $\phi_M$ from these modes.
The classical example is the case of $B_d\to \phi K_s$, where [$$-\Amix(B_d\to \phi K_s)=\sin{2\beta}+\Delta S_{\phi K_s}$$]{} with [$$\Delta S_{\phi K_s}=
2\left| \frac{\lambda_u^{(s)}}{\lambda_c^{(s)}} \right|{\rm Re}\left( \frac{T_{\phi K_s}}{P_{\phi K_s}} \right)\sin{\gamma}\cos{2\beta}+\cdots$$]{} In order to keep the correction $\Delta S_{\phi K_s}$ under control, one should be able to bound the size of ${\rm Re}(T_{\phi K_s}/P_{\phi K_s})$. In fact, the latest data gives [@HFAG] [$$\Delta S_{\phi K_s}^{\rm exp}=-0.39\pm 0.20$$]{} so if the uncertainties are reduced around this central value, the claim of an NP signal will have to rely on a solid bound for the SM penguin pollution.
A first approach to bound this tree-to-penguin ratio was taken in the context of flavour SU(3) symmetry [@phiKs1; @phiKs2]. The argument is that if there were a large hierarchy between $T$ and $P$ in a $b\to s$ mode, the hierarchy would persist when one moves to the SU(3)-related $b\to d$ modes. But in these modes the tree is not suppressed with respect to the penguin, because $|\lambda_u^{(d)}|\sim |\lambda_c^{(d)}|$, so this hierarchy would have a clear impact on the observables of the $b\to d$ modes. In this way one can write the following bound [$$|\Delta S_{\phi K_s}|<\sqrt{2}\lambda
\Bigg( \sqrt{\frac{BR_{\phi \pi^+}}{BR_{\phi K_s}}}+ \sqrt{\frac{BR_{K^{*0}K^+}}{BR_{\phi K_s}}}\Bigg)+\mathcal{O}(\lambda^2)$$]{} valid in the SU(3) limit and under a non-cancelation assumption between $B_d\to \phi K_s$ and $B^+\to \phi K^+$. With the present data the bound is [$$|\Delta S_{\phi K_s}^{SU(3)}|<0.4$$]{} The same kind of analysis can be applied to other penguin-dominated decays [@phiKs2; @othersin2beta].
A second approach has been followed in the framework of QCD-factorization [@beneke], which gives a much more competitive bound, [$$0.01<\Delta S_{\phi K_s}^{\rm QCDF}<0.05$$]{} Related analyses have been carried out in SCET [@zupan] and pQCD [@mishima]. For a recent review see also [@ZupanTalk].
Concerning the $B_s-\bar{B_s}$ mixing angle, the clean tree-level determination comes from $B_s\to J/\psi \phi$. The related penguin-mediated decays are, for example, $B_s\to K^{*0}\bar{K}^{*0}$, $B_s\to \phi K^{*0}$ or $B_s\to \phi\phi$. Again, one can write [$$\eta_f\Amix(B_s\to f)=\sin{2\beta_s}+\Delta S_{f}.$$]{} A study of the amounts by which their mixing induced CP asymmetries deviate from $\sin{2\beta_s}$ in the SM can be found in [@BsSilvestrini; @BVV].
In the next pages we follow the approach in [@BVV; @DMV], based on a theoretical input that we call $\Delta$.
Theoretical input
=================
Consider the quantity $\Delta\equiv T-P$. This quantity is a hadronic, process-dependent, intrinsically non-perturbative object, and thus difficult to compute theoretically. Such hadronic quantities are usually either extracted from data or computed using some factorization-based approach. In the latter case, $\Delta$ could suffer from the usual problems related to the factorization ansatz and in particular long-distance effects.
However, for a certain class of decays, $T$ and $P$ share the same long-distance dynamics: the difference comes from the ($u$ or $c$) quark running in the loop, which is dominated by short-distance physics [@DMV]. Indeed, in such decays, $\Delta=T-P$ is not affected by the breakdown of factorization that affects annihilation and hard-spectator contributions, and it can be computed in a well-controlled way leading to safer predictions and smaller uncertainties.
$\Delta_{\phi K_s}^d$ $\quad$ $(2.29\pm 0.67)\times 10^{-7}{\rm GeV}$
----------------------- --------- -----------------------------------------
$\Delta_{K^*K^*}^d$ $(1.85\pm 0.79)\times 10^{-7}{\rm GeV}$
$\Delta_{K^*K^*}^s$ $(1.62\pm 0.69)\times 10^{-7}{\rm GeV}$
$\Delta_{\phi K^*}^s$ $(1.16\pm 1.05)\times 10^{-7}{\rm GeV}$
$\Delta_{\phi\phi}^s$ $(2.06\pm 2.24)\times 10^{-7}{\rm GeV}$
: Values of $\Delta$ for the various decays of interest. In the case of two vector mesons these numbers correspond to longitudinal polarizations.[]{data-label="table1"}
Table \[table1\] shows the values of $\Delta$ for our cases of interest. This quantity was used to predict branching ratios and asymmetries in $B_s\to KK$ modes, and the outcome was promising [@DMV; @BLMV]. In [@DILM] this quantity was used to extract the angle $\alpha$ of the unitarity triangle from $B_d\to K^{0}\bar{K}^{0}$. In the following section we review the formulae that allow one to extract $T$ and $P$ from data and the theoretical input $\Delta$.
Tree and Penguin Contributions
==============================
We begin by writing the two CP-conjugate amplitudes in terms of tree and penguin contributions, [$$A=\lambda_u^{(D)*} T +\lambda_c^{(D)*} P\ ,\quad \bar{A}=\lambda_u^{(D)} T +\lambda_c^{(D)} P$$]{} Now we put $T=P+\Delta$ and we square the amplitudes, [$$\begin{array}{rcl}
|A|^2 & = & |\lambda_c^{(D)*}+\lambda_u^{(D)*}|^2\left| P + \frac{\lambda_u^{(D)*}}{\lambda_c^{(D)*}+\lambda_u^{(D)*}} \Delta \right|^2\\
&&\\
|\bar{A}|^2 & = & |\lambda_c^{(D)}+\lambda_u^{(D)}|^2\left| P + \frac{\lambda_u^{(D)}}{\lambda_c^{(D)}+\lambda_u^{(D)}} \Delta \right|^2
\end{array}$$]{} But the squared amplitudes are directly related to observables, [$$\begin{aligned}
|A|^2&=&BR(1+\Adir)/g_{PS}\nonumber\\
|\bar{A}|^2&=&BR(1-\Adir)/g_{PS} \end{aligned}$$]{} where $g_{PS}$ is the usual phase-space factor. Neglecting the masses of the light mesons with respect to the $B$ mesons, [$$\begin{aligned}
g_{PS}(B_d)&=&8.8\times 10^9\,{\rm GeV}^{-2}\nonumber\\
g_{PS}(B_s)&=&8.2\times 10^9\,{\rm GeV}^{-2}
\end{aligned}$$]{} for two non-identical particles in the final state. The resulting expressions are [$$\begin{aligned}
\frac{BR(1+\Adir)/g_{PS}}{|\lambda_c^{(D)*}+\lambda_u^{(D)*}|^2}=
\left| P + \frac{\lambda_u^{(D)*}}{\lambda_c^{(D)*}+\lambda_u^{(D)*}} \Delta \right|^2\nonumber\\
\frac{BR(1-\Adir)/g_{PS}}{|\lambda_c^{(D)}+\lambda_u^{(D)}|^2}=\left| P + \frac{\lambda_u^{(D)}}{\lambda_c^{(D)}+\lambda_u^{(D)}} \Delta \right|^2
\end{aligned}$$]{} These are the equations for two circles in the complex $P$ plane, whose solutions are the two points of intersection. This will result in a two-fold ambiguity in the determination of $P$ (and $T$). Before writing down the analytical solutions, notice that in order for solutions to exist, the separation between the centers of these circles must be smaller than the sum of the radii but bigger than the difference. This translates into a consistency condition between $BR$, $\Adir$ and $\Delta$: [$$|\Adir| \le \sqrt{\frac{\mathcal{R}_D^2\Delta^2}
{2\widetilde{BR}}\Big(
2-\frac{\mathcal{R}_D^2\Delta^2}{2\widetilde{BR}} \Big)}\label{cons}$$]{} where $\widetilde{BR}\equiv BR/g_{PS}$ and $\mathcal{R}_D$ is a specific combination of CKM elements (see Table \[coeffs\]). This condition turns out to be highly nontrivial. For example, Fig.\[fig1\] shows the allowed values for the longitudinal direct CP asymmetry of $B_d\to K^{*0}\bar{K}^{*0}$ in terms of its longitudinal branching ratio. It can be seen that for $BR\gtrsim 3\times 10^{-6}$ the direct CP asymmetry is quite constrained.
![Allowed values for the longitudinal direct CP asymmetry of $B_d\to K^{*0}\bar{K}^{*0}$ in terms of its longitudinal branching ratio, according to the value of $\Delta_{K^*K^*}$.[]{data-label="fig1"}](BRvsAdirKK.eps){width="8cm"}
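The content of this figure can be reproduced directly from eq.(\[cons\]) alone; for instance, the short Python snippet below evaluates the bound on $|\Adir|$ for a few illustrative values of the longitudinal branching ratio, using $\mathcal{R}_d$ from Table \[coeffs\], the central value of $\Delta_{K^*K^*}^d$ from Table \[table1\], and $g_{PS}(B_d)$.

``` python
import numpy as np

R_d, Delta, g_PS = 7.58e-3, 1.85e-7, 8.8e9   # [-], [GeV], [GeV^-2] from the text

for BR in (0.3e-6, 1e-6, 3e-6, 10e-6):       # illustrative longitudinal BRs
    BRt = BR / g_PS                          # \tilde{BR}  [GeV^2]
    x = (R_d*Delta)**2 / (2.0*BRt)
    Adir_max = np.sqrt(max(x*(2.0 - x), 0.0))    # bound of Eq. (cons)
    print(f"BR = {BR:.1e}:  |A_dir| <= {Adir_max:.3f}")
```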
The hadronic quantities $P$ and $T$ are then given by [$$\begin{aligned}
Im[P] & = & \frac{\widetilde{BR}\,\Adir}{2 c_0^{(D)}\Delta}\nonumber\\
&&\nonumber\\
Re[P] & = & -c_1^{(D)}\,\Delta \pm
\sqrt{-Im[P]^2-\left(\frac{c_0^{(D)}\Delta}{c_2^{(D)}}\right)^2+\frac{\widetilde{BR}}{c_2^{(D)}}}\nonumber\\
&&\nonumber\\
T&=&P+\Delta
\label{eqTP} \end{aligned}$$]{} where the coefficients $c_i^{(D)}$ are again some specific combinations of CKM elements (see Table \[coeffs\]).
$c_0^{(d)}$ $c_1^{(d)}$ $c_2^{(d)}$ $\mathcal{R}_d$
-------------------------- -------------- ------------------------- ---------------------
$\ -3.15\cdot 10^{-5}\ $ $\ -0.034\ $ $\ 6.93\cdot 10^{-5}\ $ $7.58\cdot 10^{-3}$
$c_0^{(s)}$ $c_1^{(s)}$ $c_2^{(s)}$ $\mathcal{R}_s$
$\ 3.11\cdot 10^{-5}\ $ $\ 0.011\ $ $\ 1.63\cdot 10^{-3}\ $ $1.54\cdot 10^{-3}$
: Numerical values for the coefficients $c_i^{(D)}$ and $\mathcal{R}_D$ for $\gamma=62^\circ$.[]{data-label="coeffs"}
Equations (\[eqTP\]) allow one to extract the hadronic parameters $T$ and $P$ from experimental data on $BR$ and $\Adir$, information on the sides of the unitarity triangle and the weak phase $\gamma$, and the theoretical value for $\Delta$. This method is also powerful because, if no experimental information is available for $\Adir$, one can just vary $\Adir$ over its allowed range in eq.(\[cons\]). So in fact $T$ and $P$ can be extracted from $BR$, $\Delta$ and CKM elements.
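To illustrate how eq.(\[cons\]) and eqs.(\[eqTP\]) are used in practice, the short Python sketch below extracts the two solutions for $P$ (and $T=P+\Delta$) in the $b\to d$ case, taking the coefficients of Table \[coeffs\], the central value of $\Delta_{K^*K^*}^d$ from Table \[table1\], and purely hypothetical values of $BR$ and $\Adir$ as input; it is meant only as a transcription of the formulae above, not as a phenomenological analysis.

``` python
import numpy as np

# b -> d coefficients (Table 2, gamma = 62 deg), Delta for B_d -> K*0 K*0bar
# (Table 1, central value), and the phase-space factor for a B_d meson.
c0, c1, c2, Rd = -3.15e-5, -0.034, 6.93e-5, 7.58e-3
Delta, g_PS = 1.85e-7, 8.8e9                 # [GeV], [GeV^-2]

BR, Adir = 0.5e-6, 0.05                      # hypothetical BR and direct asymmetry
BRt = BR / g_PS                              # \tilde{BR}  [GeV^2]

# consistency condition, Eq. (cons): otherwise the two circles do not intersect
x = (Rd*Delta)**2 / (2.0*BRt)
assert abs(Adir) <= np.sqrt(x*(2.0 - x)), "inputs violate Eq. (cons)"

# two-fold solution for P, Eq. (eqTP), and T = P + Delta
ImP = BRt*Adir / (2.0*c0*Delta)
ReP = -c1*Delta + np.array([1.0, -1.0])*np.sqrt(-ImP**2 - (c0*Delta/c2)**2 + BRt/c2)
for sign, rp in zip("+-", ReP):
    P = rp + 1j*ImP
    print(f"branch {sign}:  P = {P} GeV,   T = {P + Delta} GeV")
```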
$\sin{2\beta}$ from $B\to \phi K_s$
===================================
Following the discussion in the previous section, the bounds for ${\rm Re}(T_{\phi K_s}/P_{\phi K_s})$ are given by [$$\begin{aligned}
{\rm Re}\left( \frac{T_{\phi K_s}}{P_{\phi K_s}} \right)&\le&\,1+\left(-c_1^{(s)}+C(BR_{\phi K_s},\Delta_{\phi K_s}^d)\right)^{-1}\nonumber\\
{\rm Re}\left( \frac{T_{\phi K_s}}{P_{\phi K_s}} \right)&\ge&\,1+\left(-c_1^{(s)}-C(BR_{\phi K_s},\Delta_{\phi K_s}^d)\right)^{-1}\nonumber\\
C(BR,\Delta)&\equiv&\sqrt{-(c_0^{(s)}/c_2^{(s)})^2+(1/c_2^{(s)})\,\widetilde{BR}/\Delta^2}\qquad
\label{bounds} \end{aligned}$$]{} Introducing the numbers for the coefficients from Table \[coeffs\], the value of $\Delta_{\phi K_s}^d$ in Table \[table1\] and the latest experimental value for the branching ratio [@HFAG] [$$BR(B_d\to \phi K_s)_{\rm exp}=8.3^{+1.2}_{-1.0}\times 10^{-6}$$]{} we get the following bounds for $\Delta S_{\phi K_s}$, [$$0.03<\Delta S_{\phi K_s}<0.06$$]{}
$\sin{2\beta_s}$ from $B_s\to VV$
=================================
Equations (\[bounds\]) apply as well to $B_s\to VV$. Here we focus on longitudinal polarizations for which the numerical values of $\Delta$ are under control. As mentioned above, penguin mediated $B_s\to VV$ decays measure $\sin{2\beta_s}$, but no experimental information is available yet for CP asymmetries in $B_s$ decays. There is, though, an experimental number for the $B_s\to \phi\phi$ branching ratio [@HFAG], [$$BR(B_s\to\phi\phi)_{\rm exp}=14^{+8}_{-7}\times 10^{-6}$$]{} If we suppose that the longitudinal polarization fraction is $f_L^{\phi\phi}\sim 50\%$ as QCDF suggests [@beneke], then we find [$$0.006\le \Delta S_{\phi\phi}\le 0.072$$]{}
The decay $B_s\to K^{*0}\bar{K}^{*0}$ is more appropriate because $\Delta_{K^*K^*}$ is under a much better numerical control, but there is no experimental value for the branching ratio. For the sake of illustration, we just mention that for $BR\lg(B_s\to K^{*0}\bar{K}^{*0})\sim (30-40)\times 10^{-6}$, one gets [$$0.037\le \Delta S_{K^*K^*}\le 0.051$$]{}
Other angles from Data and $\Delta$
===================================
There are also expressions that relate branching ratios and asymmetries directly to other angles of the unitarity triangle through the quantity $\Delta$ [@BVV]. These expressions do not require any CKM angle as an input, just sides of the unitarity triangle. The experimental input is minimized by measuring $\mathcal{A}_{\Delta\Gamma}$, which can be extracted from the time-dependent untagged rate [@Dighe; @fleischerMatias]. Then, in the case of a $B_d$ meson decaying through a $b\to D$ process ($D=d,s$), [$$\sin^2{\alpha}=\frac{\widetilde{BR}(1-\mathcal{A}_{\Delta\Gamma})}{2|\lambda_u^{(D)}|^2|\Delta|^2}\ ;\
\sin^2{\beta}=\frac{\widetilde{BR}(1-\mathcal{A}_{\Delta\Gamma})}{2|\lambda_c^{(D)}|^2|\Delta|^2}$$]{} and in the case of a $B_s$ meson decaying through a $b\to D$ process ($D=d,s$), [$$\sin^2{\beta_s}=\frac{\widetilde{BR}(1-\mathcal{A}_{\Delta\Gamma})}{2|\lambda_c^{(D)}|^2|\Delta|^2}$$]{}
Predictions for $B_s\to K^*K^*$
===============================
Assuming no new physics contributing to $B_d\to K^{*0}\bar{K}^{*0}$, we can also give SM predictions for the branching ratios and CP asymmetries of its U-spin partner $B_s\to K^{*0}\bar{K}^{*0}$ [@BVV]. The inputs are $\Delta_{K^*K^*}^d$, SU(3) breaking from QCDF, and the experimental value for $BR(B_d\to K^{*0}\bar{K}^{*0})$. While there is still no experimental information on this branching ratio, it is remarkable that for $BR(B_d\to K^{*0}\bar{K}^{*0})\gtrsim 5 \times 10^{-7}$ the results are almost insensitive to it. Taking $\gamma=(62\pm 6)^\circ$ and $2\beta_s=-2^\circ$, we find [$$\begin{aligned}
&&\left(\frac{BR\lg(B_s\to K^{*0}\bar{K}^{*0})}{BR\lg(B_d\to K^{*0}\bar{K}^{*0})}\right)_{{\scriptscriptstyle}SM}=17\pm 6\label{br}\\
&&\nonumber\\
&&\Adir\lg(B_s\to K^{*0}\bar{K}^{*0})_{{\scriptscriptstyle}SM}=0.000\pm 0.014\label{as}\\
&&\nonumber\\
&&\Amix\lg(B_s\to K^{*0}\bar{K}^{*0})_{{\scriptscriptstyle}SM}=0.004\pm 0.018.
\end{aligned}$$]{}
Conclusions
===========
To conclude, we would like to comment on the relevance of the proposal. First, the applicability of the approach has to be checked individually for each mode: it can only be applied to those decays for which $\Delta$ receives no contributions from annihilation or hard spectator scattering graphs, such as $B\to K^{(*)}K^{(*)}$, $B_d\to\phi K^{(*)}$, $B^+\to \pi^+\phi$, etc. The predictions derived in this way include most of the long distance physics, which is contained inside the experimental input. The theoretical input used is minimal, and it is the most reliable input that QCDF can offer, free from the troublesome IR-divergencies. Moreover, the theoretical error is under control and is likely to be reduced in the near future due to, for example, the fast progress that is taking place in lattice simulations.
There is, however, an honest criticism: some long distance effects that are controversial in QCDF are also absent here. The prominent one is the contribution from the charming penguins, which could easily account for a significant discrepancy between theory and experiment that would not be due to NP. A recent example of this is given in [@Stewart].
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank Joaquim Matias for a careful reading of the manuscript, and Sebastian Descotes-Genon for a nice and dynamical collaboration. I am also indebted to Gudrun Hiller for helpful criticism and discussions on the part concerning $\sin 2\beta$. This work was supported in part by the EU Contract No. MRTN-CT-2006-035482, “FLAVIAnet”.
[99]{}
W. M. Yao [*et al.*]{} \[Particle Data Group\], J. Phys. G [**33**]{} (2006) 1. B. Aubert [*et al.*]{} \[BABAR Collaboration\], Phys. Rev. Lett. [**97**]{} (2006) 171805; arXiv:hep-ex/0607106; Phys. Rev. D [**75**]{}, 012008 (2007); arXiv:hep-ex/0607096. K. Abe [*et al.*]{} \[Belle Collaboration\], arXiv:hep-ex/0608049; arXiv:hep-ex/0609006. S. Chen [*et al.*]{} \[CLEO Collaboration\], Phys. Rev. Lett. [**85**]{} (2000) 525. A. Bornheim [*et al.*]{} \[CLEO Collaboration\], Phys. Rev. D [**68**]{} (2003) 052002. A. J. Buras, R. Fleischer, S. Recksiegel and F. Schwab, Phys. Rev. Lett. [**92**]{}, 101804 (2004). H. J. Lipkin, Phys. Lett. B [**445**]{}, 403 (1999). M. Gronau and J. L. Rosner, Phys. Rev. D [**59**]{}, 113002 (1999) J. Matias, Phys. Lett. B [**520**]{}, 131 (2001). K. F. Chen [*et al.*]{} \[Belle Collaboration\], Phys. Rev. D [**72**]{} (2005) 012004; Phys. Rev. Lett. [**98**]{} (2007) 031802. B. Aubert [*et al.*]{} \[BABAR Collaboration\], Phys. Rev. D [**71**]{} (2005) 091102; Phys. Rev. Lett. [**94**]{} (2005) 191802; Phys. Rev. Lett. [**98**]{} (2007) 051803; Phys. Rev. Lett. [**98**]{} (2007) 031801; arXiv:hep-ex/0607101. Y. Nir, Nucl. Phys. Proc. Suppl. [**117**]{}, 111 (2003) \[arXiv:hep-ph/0208080\]; G. Hiller, Phys. Rev. D [**66**]{}, 071502 (2002); M. Ciuchini and L. Silvestrini, Phys. Rev. Lett. [**89**]{}, 231802 (2002).
A. Jain, I. Z. Rothstein and I. W. Stewart, arXiv:0706.3399 \[hep-ph\].
A. Abulencia [*et al.*]{} \[CDF - Run II Collaboration\], Phys. Rev. Lett. [**97**]{} (2006) 062003 \[AIP Conf. Proc. [**870**]{} (2006) 116\]; Phys. Rev. Lett. [**97**]{} (2006) 242003. V. M. Abazov [*et al.*]{} \[D0 Collaboration\], Phys. Rev. Lett. [**97**]{} (2006) 021802. V. M. Abazov [*et al.*]{} \[D0 Collaboration\], Phys. Rev. Lett. [**98**]{}, 121801 (2007); V. M. Abazov [*et al.*]{} \[D0 Collaboration\], arXiv:hep-ex/0702030.
A. Abulencia [*et al.*]{} \[CDF Collaboration\], Phys. Rev. Lett. [**97**]{} (2006) 211802; M. Morello \[CDF Collaboration\], arXiv:hep-ex/0612018; D. Tonelli, Fermilab-Thesis-2006-23; M. Paulini, arXiv:hep-ex/0702047; G. Punzi \[CDF - Run II Collaboration\], arXiv:hep-ex/0703029. M. Beneke, G. Buchalla, M. Neubert and C. T. Sachrajda, Phys. Rev. Lett. [**83**]{}, 1914 (1999); Y. Y. Keum, H. n. Li and A. I. Sanda, Phys. Lett. B [**504**]{}, 6 (2001); C. W. Bauer, S. Fleming, D. Pirjol and I. W. Stewart, Phys. Rev. D [**63**]{}, 114020 (2001).
S. Descotes-Genon, J. Matias and J. Virto, Phys. Rev. Lett. [**97**]{}, 061801 (2006).
S.Descotes-Genon, J.Matias, J.Virto, arXiv:0705.0477 \[hep-ph\].
I. I. Y. Bigi and A. I. Sanda, Nucl. Phys. B [**193**]{}, 85 (1981).
H. Boos, T. Mannel and J. Reuter, Phys. Rev. D [**70**]{}, 036006 (2004).
M. Ciuchini, M. Pierini and L. Silvestrini, Phys. Rev. Lett. [**95**]{} (2005) 221804.
H. n. Li and S. Mishima, JHEP [**0703**]{}, 009 (2007).
E. Barberio et al. \[HFAG\], arXiv:0704.3575 \[hep-ex\]\
and online update at\
http://www.slac.stanford.edu/xorg/hfag
Y. Grossman, G. Isidori and M. P. Worah, Phys. Rev. D [**58**]{}, 057504 (1998).
Y. Grossman, Z. Ligeti, Y. Nir and H. Quinn, Phys. Rev. D [**68**]{}, 015004 (2003).
M. Gronau, Y. Grossman and J. L. Rosner, Phys. Lett. B [**579**]{}, 331 (2004); M. Gronau, J. L. Rosner and J. Zupan, Phys. Lett. B [**596**]{}, 107 (2004); M. Gronau and J. L. Rosner, Phys. Rev. D [**71**]{}, 074019 (2005); G. Raz, arXiv:hep-ph/0509125; M. Gronau, J. L. Rosner and J. Zupan, Phys. Rev. D [**74**]{}, 093003 (2006).
M. Beneke, Phys. Lett. B [**620**]{}, 143 (2005).
A. R. Williamson and J. Zupan, Phys. Rev. D [**74**]{}, 014003 (2006) \[Erratum-ibid. D [**74**]{}, 03901 (2006)\]
H. n. Li and S. Mishima, Phys. Rev. D [**74**]{}, 094020 (2006)
J. Zupan, arXiv:0707.1323 \[hep-ph\].
M. Ciuchini, M. Pierini and L. Silvestrini, arXiv:hep-ph/0703137.
S. Baek, D. London, J. Matias and J. Virto, JHEP [**0612**]{}, 019 (2006).
A. Datta, M. Imbeault, D. London and J. Matias, Phys. Rev. D [**75**]{}, 093004 (2007).
M. Beneke, J. Rohrer and D. Yang, Nucl. Phys. B [**774**]{}, 64 (2007).
A. S. Dighe, I. Dunietz and R. Fleischer, Eur. Phys. J. C [**6**]{}, 647 (1999)
R. Fleischer and J. Matias, Phys. Rev. D [**66**]{}, 054009 (2002)
[^1]: Talk given at the International Workshop on Quantum Chromodynamics: QCD@Work 2007, Martina Franca, Italy, June 2007.
---
abstract: 'We develop a fast numerical procedure for analysis of nonlinear and nonlocal electrodynamics of type-II superconducting films in transverse magnetic fields coupled with heat diffusion. Using this procedure we explore stability of such films with respect to dendritic flux avalanches. The calculated flux patterns are very close to experimental magneto-optical images of MgB$_2$ and other superconductors, where the avalanche sizes and their morphology change dramatically with temperature. Moreover, we find the values of a threshold magnetic field which agrees with both experiments and linear stability analysis. The simulations predict the temperature rise during an avalanche, where for a short time $T \approx 1.5 T_c$, and a precursor stage with large thermal fluctuations.'
author:
- 'J. I. Vestg[å]{}rden'
- 'D. V. Shantsev'
- 'Y. M. Galperin'
- 'T. H. Johansen'
title: Dynamics and morphology of dendritic flux avalanches in superconducting films
---
Introduction
============
The gradual penetration of magnetic flux in type-II superconductors subjected to an increasing applied field or electrical current can be interrupted by dramatic avalanches in the vortex matter.[@altshuler04] The mechanism responsible for the avalanches is that an initial fluctuation reduces locally the pinning of some vortices, which start to move, thus creating dissipation followed by depinning of even more vortices. A positive feedback loop is formed where a small perturbation can escalate into a macroscopic thermomagnetic breakdown.[@mints81]
In thin film superconductors, the dynamics and morphology of these avalanches is tantalizing, when at very high speeds they develop into complex dendritic structures, which once formed remain robust against changes in external conditions. When repeating identical experiments one finds that the patterns are never the same although qualitative features of the morphology, such as the degree of branching and overall size of the structure, show systematic dependences on, e.g., temperature. Using magneto-optical imaging flux avalanches with these characteristics have been observed in films of Nb, [@duran95; @*welling04] YBa$_2$Cu$_3$O$_{7-x}$, [@brull92; @*leiderer93; @*bolz03] MgB$_2$, [@johansen02; @albrecht05; @*olsen07] Nb$_3$Sn, [@rudnev03] YNi$_2$B$_2$C, [@wimbush04] and NbN. [@rudnev05; @*yurchenko07] Investigations of onset conditions for the avalanche activity have identified material dependent threshold values in temperature, [@johansen02] applied magnetic field, [@barkov03; @yurchenko07] and transport current, [@bobyl02] as well as in sample size.[@denisov06] Analytical modeling of the nucleation stage has explained many of these thresholds using linear stability analysis.[@rakhmanov04; @aranson05; @denisov05; @denisov06]
Far from being understood is the development of the instability from its nucleation stage to the fully developed dendritic pattern. Aranson et al.[@aranson05] explored the dynamics of the flux avalanches as a numerical solution of Maxwell’s equations with temperature dependent critical current density. The dynamical process was governed by the interplay between an extremely nonlinear current-voltage relation, heat diffusion, and the nonlocal electrodynamics characteristic for thin superconducting films. To treat the nonlocal electrodynamics the authors used periodic continuation of the sample taken as an infinite strip. This scheme should be a good approximation inside the sample, although not necessarily close to the edges. In fact, in thin films the magnetic field near the edges is significantly enhanced [@brandt93; @*zeldov94] due to the flux expulsion. Moreover, all experiments show that the instability is always nucleated at an edge. Therefore, a careful account of the electrodynamics close to the edges, including the regions outside the film, is expected to be crucially important.[@brandt95]
In this work we study the formation and characteristics of dendritic flux avalanches using a numerical scheme that takes into account the nonlocal electrodynamics both inside and outside a finite-sized superconducting film. It is shown that our simulations largely reproduce experimental results obtained by magneto-optical imaging of dendritic avalanches in films of MgB$_2$, and furthermore give detailed insight into not yet observed quantities such as the local temperature rise and electric field.
The paper is organized as follows. Section \[sec:model\] presents the model and the equations describing the process. The numerical scheme including the implementation of boundary conditions and thermomagnetic feedback is described in Sec. \[num\]. The results for the time-dependent distributions of magnetic flux and temperature are presented and discussed in Sec. \[sec:results\], while Sec. \[sec:conclusion\] gives the conclusions.
Model {#sec:model}
=====
Consider a rectangular superconducting film zero-field cooled below the critical temperature, $T_c$, followed by a gradual increase in a perpendicular applied magnetic field. The film is deposited on a substrate, which in the process will be regarded as a sink for the dissipated heat. Shown in Fig. \[fig1\] is a sketch of the overall configuration, including the relevant fields and currents.
![(Color online) Schematic of the sample configuration.\[fig1\]](fig1-sample.pdf){width="0.9\columnwidth"}
The macroscopic behavior of type-II superconductor films in a transverse applied magnetic field, $H_a$, is well described by quasi-static classical electrodynamics. [@brandt95-prl; @brandt95] Here the sharp depinning of vortices under flowing current is represented by a highly nonlinear current-voltage relation $$\begin{aligned}
\label{power-law-EJ}
\mathbf E &=&\rho (J)\mathbf J/d \, , \nonumber \\[2mm]
\rho (J) & \equiv& \left\{
\begin{array}{lll}
\rho_0 \left( J/J_c\right)^{n-1}, & J \le J_c, & T \le T_c \, , \\
\rho_0 \, , & J > J_c, &T \le T_c \, , \\
\rho_n\, , & & T > T_c\, .
\end{array}
\right.\end{aligned}$$ Here $\mathbf E$ is the electric field, $\mathbf J$ is the sheet current ($J \equiv |\mathbf{J}|$), $J_c$ the critical sheet current, $n$ is the creep exponent, $\rho_0$ is a resistivity constant, $\rho_n$ is the normal resitivity, and $T$ is temperature. It is assumed that the sample thickness, $d$, is so small that variations in all relevant quantities across the thickness can be ignored. For $T\leq T_c$ the temperature dependence of the critical current and flux creep exponent [@denisov06] are taken as $$J_c = J_{c0}(1-T/T_c) \ \ {\rm and} \ \quad n-1 = n_0 T_c/T \, ,$$ where $J_{c0}$ and $n_0$ are constants.
The distribution of temperature is described by the heat diffusion equation $$\label{dynamics-T}
dc \, \dot T = d\nabla\cdot (\kappa \nabla T) - h(T-T_0)+
\mathbf J\cdot \mathbf E \, ,$$ where $\kappa$ is the thermal conductivity of the superconductor, $c$ is its specific heat, $T_0$ is the substrate temperature, taken to be constant, and $h$ is the coefficient of heat transfer between the film and the substrate. The parameters $\kappa$, $c$ and $h$ are all assumed to be proportional to $T^3$, whereas the relatively weak temperature dependences of $\rho_0$ and $\rho_n$ are neglected. [@schneider01; @denisov06]
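For orientation, one explicit time step of Eq. (\[dynamics-T\]) on a uniform grid can be sketched in a few lines. The five-point Laplacian below treats $\kappa$ as spatially constant inside the divergence, the boundaries are taken periodic, and all parameter values, as well as the localized $\mathbf J\cdot \mathbf E$ source, are illustrative placeholders rather than material data.

``` python
import numpy as np

N, dx, dt = 128, 1e-6, 1e-11         # grid points, spacing [m], time step [s]
d, T0, Tc = 0.3e-6, 2.5, 10.0        # film thickness [m], substrate and critical T [K]
T = np.full((N, N), T0)
JE = np.zeros((N, N)); JE[N//2, N//2] = 1e6   # localized dissipation J.E [W/m^2]

def step(T):
    c     = 3.0e4 * (T/Tc)**3        # specific heat        [J/(K m^3)]  (illustrative)
    kappa = 1.7e2 * (T/Tc)**3        # thermal conductivity [W/(K m)]    (illustrative)
    h     = 1.0e4 * (T/Tc)**3        # film-substrate heat transfer [W/(K m^2)]
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0*T) / dx**2
    Tdot = (d*kappa*lap - h*(T - T0) + JE) / (d*c)
    return T + dt*Tdot

for _ in range(100):
    T = step(T)
print("peak temperature after 100 steps:", T.max(), "K")
```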
Following Ref. we define the local magnetization, $g=g(\mathbf r)$, as $$\label{cur1}
\nabla g\times \mathbf{z} =\nabla \times (g\mathbf{z}) = \mathbf{J} \ ,$$ where $\mathbf r\equiv (x,y)$ is a 2D vector in the film plane, and $\mathbf{z}$ is the unit vector in the perpendicular direction. Outside the sample there are no currents, and we set $g=0$ by definition. The Biot-Savart law can then be written as $$\label{biot-savart}
\frac{B_z(\mathbf r)}{\mu_0} -H_a=\hat{Q}g \equiv \int d^2r'\,
Q(\mathbf{r}-\mathbf{r}',z) g(\mathbf{r}') \, ,$$ where the integral is calculated over the whole plane. The kernel $Q(\mathbf{r})$ should be calculated as a limit at $z\to 0$ of the expression $$\label{biot-savart1}
Q(\mathbf{r},z)=\frac{1}{4\pi}\frac{2z^2-r^2}{(z^2+r^2)^{5/2}}\, , \ r
\equiv |\mathbf{r}|\, .$$ Here regularization is needed to avoid formal divergence of the r.h.s. of Eq. at $z=0$, $\mathbf{r}=\mathbf{r}'$. The Fourier transform of $\lim_{z\to 0}Q(\mathbf{r},z)$ is equal to $k/2$.[@roth89] Therefore, from the convolution theorem it follows that the inverse operator $\hat{Q}^{-1}$ acting on some function $\varphi (\mathbf{r})$ can be expressed as $$\label{hatQ}
\hat{Q}^{-1} \varphi(\mathbf{r}) =2\mathcal{F}^{-1} \left(k^{-1}
\mathcal{F}[\varphi(\mathbf{r})] \right)\, .$$ Here ${\mathcal F} [\varphi (\mathbf{r})]$ and $ {\mathcal
F}^{-1}[\varphi (\mathbf{k})]$ are Fourier and inverse Fourier transform, respectively, and $k \equiv |\mathbf k|$.
Inverting Eq. (\[biot-savart\]) one arrives at the equation for the time evolution of the local magnetization, $$\label{se1}
\dot{g}(\mathbf{r},t)=2\mathcal{F}^{-1}\left( k^{-1} \mathcal{F}
\left[\mu_0^{-1} \dot{B}_z (\mathbf{r},t)-\dot{H}_a(t)\right]\right) \, .$$ Equations , and therefore determine the dynamics of $g(\mathbf{r},t)$, $T(\mathbf{r},t)$, etc. To solve these equations numerically we proceed from the continuous to a discrete formulation.
Numerical approach {#num}
==================
To allow use of the fast Fourier transform (FFT) we consider a rectangular area of size $2L_ x \times 2L_y$ containing the sample plus a substantial part of its surrounding area. A key point is to select proper values for $L_x$ and $L_y$ relative to the sample size, $2a \times 2b$. By including too little area outside the sample one clips away the slowly decaying tail of the stray fields, leading to decreased accuracy at large scales, and major deviations from the correct physical behavior. [@brandt95] On the other hand, including too much of the outside area keeping the same number of the grid points tends to decrease the accuracy at small scales, where actually the most interesting features of the dendritic avalanches appear. This blurring can be compensated by using a finer spatial grid, at the cost of a rapidly increasing computation time.
A careful test of our numerical scheme was done by comparing the calculations with the exact solution for the Bean critical state in an infinitely long strip. [@brandt93; @*zeldov94] It is found that already with $L_x/a \gtrsim 1.3$ the calculated results are correct within a few percent, and are essentially indistinguishable from the exact solution in graphic comparisons.
In the FFT-based calculations the rectangle $2L_ x \times 2L_y$ is discretized as a $N_x\times N_y$ equidistant grid, and used as unit cell in an infinite superlattice. The Fourier wave vectors $k_{x,y}$ are then discrete, $k_{x,y}= \pi q_{x,y}/L_{x,y}$, where $q_{x,y}$ are integers. The Brillouin zone is chosen as $|q_{x,y}| \le N_{x,y}/2$, which ensures $g(\mathbf{r},t)$, $T(\mathbf{r},t)$, etc. to be real valued.
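For concreteness, this discrete setup and the inversion of Eq. (\[hatQ\]) can be summarized by the following minimal NumPy sketch. It is only an illustration of the scheme, not the code used for the simulations; in particular, the $k=0$ (mean) mode is simply set to zero here, whereas in the full scheme the mean of $\dot B_z$ is fixed by the flux-conservation constant $C^{(i)}$ introduced below.

```python
import numpy as np

Nx, Ny = 512, 512            # grid points of the unit cell
Lx, Ly = 1.3, 1.3            # half-widths of the unit cell, in units of the sample half-width a

# Discrete wave vectors k_{x,y} = pi*q_{x,y}/L_{x,y}
kx = 2.0 * np.pi * np.fft.fftfreq(Nx, d=2.0 * Lx / Nx)
ky = 2.0 * np.pi * np.fft.fftfreq(Ny, d=2.0 * Ly / Ny)
k = np.hypot(*np.meshgrid(kx, ky, indexing="ij"))

def Qinv(phi, k=k):
    """Inverse Biot-Savart kernel: 2 F^{-1}[F(phi)/k]; the k = 0 mode is dropped."""
    phi_k = np.fft.fft2(phi)
    out_k = np.zeros_like(phi_k)
    np.divide(phi_k, k, out=out_k, where=k > 0)
    return 2.0 * np.real(np.fft.ifft2(out_k))

def Qhat(phi, k=k):
    """Forward Biot-Savart kernel Q, whose Fourier symbol is k/2."""
    return np.real(np.fft.ifft2(0.5 * k * np.fft.fft2(phi)))
```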
The calculation of the temporal evolution is based on a discrete integration forward in time [^1] of the local magnetization $$g(\mathbf r,t+\Delta t) \approx g(\mathbf r,t)+\Delta t~\dot g(\mathbf r,t) \, ,
\label{dynamics-g}$$ starting from $g(\mathbf r,0)=0$. Once $g(\mathbf r,t)$ is known at time $t$, we proceed one time step by determining $\dot g(\mathbf r,t)$. The $\dot{g}(\mathbf{r},t)$ can be calculated from Eq. , provided $\dot{B}_z$ is known *everywhere* within the unit cell. For this, we have to find self-consistent solutions for $\dot{g}$ and $\dot{B}_z$ given the function $g$.
For the area inside the superconductor the material law, Eq. , applies and together with the Faraday law, $\dot{B_z}=- (\nabla \times \mathbf{E})_z$, it follows that $$\label{se2}
\dot B_z = \nabla \cdot (\rho\nabla g)/d\, .$$ The gradient $\nabla g(\mathbf r,t)$ is readily calculated, and since the result allows finding $\mathbf J(\mathbf r,t)$, from Eq. , also $\rho(\mathbf r,t)$ is determined from Eq. . The difficult point is that $\dot{g}$ depends on the distribution of $\dot{B}_z$ in the whole unit cell. The task is to find the $\dot{B}_z$ outside the sample which leads to $\dot g=0$ outside. This cannot be calculated directly since there is a nonlocal relation between $\dot{B}_z$ and $\dot{g}$. Instead we use an iterative procedure.
Let us label the iterations by a superscript $(i)$. At the first step, $i =1$, we calculate $\dot{B}_z$ *inside* the superconductor from Eq. . Then an initial guess is made for the time derivative, $\dot{B}_z^{(1)}$, *outside* the sample. From Eq. we now compute the time derivative $\dot{g}^{(1)}$. In general, this $\dot{g}^{(1)}$ does not vanish outside the superconductor. To correct for this, a new and improved $\dot{B}_z$ is chosen as $$\dot{B}_z^{(i+1)} = \dot{B}_z^{(i)} -\mu_0\hat{Q}\hat{O} \dot{g}^{(i)} +C^{(i)}
,$$ where the projection operator $\hat O$ vanishes inside the superconductor and equals 1 outside it. The constant $C^{(i)}$ is determined by flux conservation, $$\int d^2r\, [\dot B_z^{(i+1)}(\mathbf r,t)-\mu_0 \dot H_a]=0
.$$ The procedure is stopped after $s$ iterations, when the values of $\dot{g}$ outside the superconductor become sufficiently small. The final distribution, $\dot{g}^{(s)}(\mathbf{r})$, is taken as the “true” $\dot{g}(\mathbf{r},t)$, and substituted into Eq. in order to advance in time.
A good choice for the initial state of the iteration at time $t$ is $\dot{B}_z^{(1)}(t)=\dot {B}_z^{(s)}(t-\Delta t)$, i.e., the iteration at each time step starts from the final distribution obtained at the previous time step. Normally, $s=5$ iterations are sufficient to give good results.
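One time step of this procedure may be sketched as follows (again an illustration rather than the actual implementation). It uses the helpers `Qinv` and `Qhat` from the sketch above, and assumes a boolean mask `inside` marking the film and a routine `bdot_inside(g)` that evaluates $\nabla\cdot(\rho\nabla g)/d$ by finite differences; these names are ours, and the update of $g$ is written in the simple Euler form.

```python
def time_step(g, Bdot_prev, Hdot_a, mu0, inside, bdot_inside, dt, s=5):
    """Advance g(r,t) by one (Euler) time step using the iteration sketched above."""
    # i = 1: material law inside the film, previous final Bdot_z as the guess outside
    Bdot = np.where(inside, bdot_inside(g), Bdot_prev)
    for _ in range(s):
        gdot = Qinv(Bdot / mu0 - Hdot_a)                     # inversion of the Biot-Savart law
        # Bdot^(i+1) = Bdot^(i) - mu0*Qhat(O*gdot) + C, with O*gdot = gdot outside the film
        Bdot = Bdot - mu0 * Qhat(np.where(inside, 0.0, gdot))
        Bdot += mu0 * Hdot_a - Bdot.mean()                   # constant C fixed by flux conservation
    gdot = Qinv(Bdot / mu0 - Hdot_a)                         # the "true" gdot at this time step
    return g + dt * gdot, Bdot
```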
![ (Color online) Calculated distribution of $B_z$ at an applied field $H_a=0.18 J_{c0}$, and substrate temperature $T_0=T_c/4$. The image brightness represents the magnitude of $B_z$. The sample contour appears as a bright rim of enhanced field, and the black central area is the flux-free Meissner state region. []{data-label="fig:b"}](fig2-H.pdf){width="\columnwidth"}
Results and discussion {#sec:results}
======================
Numerical simulations were performed for samples shaped as a square of side $2a$ and with an outside area corresponding to $L_x = L_y =1.3a$. The total area is discretized on a 512$\times$512 equidistant grid. Quenched disorder is included in the model by a 10% reduction of $J_{c0}$ at randomly selected 5% of the grid points. The simulated flux penetration process starts at zero applied field with no flux trapped in the sample, which has a uniform temperature $T_0$.
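For illustration, one realization of this quenched disorder can be generated along the following lines (a sketch with our variable names, reusing `Jc0`, `Nx`, `Ny` and the film mask `inside` from the sketches above; restricting the weak points to the film area is our reading of the setup):

```python
rng = np.random.default_rng(seed=1)              # any seed; one realization of the disorder
Jc0_map = Jc0 * np.ones((Nx, Ny))
weak = inside & (rng.random((Nx, Ny)) < 0.05)    # randomly selected 5% of the grid points
Jc0_map[weak] *= 0.90                            # local 10% reduction of Jc0
```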
![ (Color online) Map of the sheet current, $J$, corresponding the image in Fig. \[fig:b\]. The brightness represents $J$, where black means $J=0$. []{data-label="fig:j"}](fig3-j.pdf){width="\columnwidth"}
![ (Color online) Color-coded overlay of two separate runs with the same quenched disorder but with different microscopic fluctuations. The pixels in gray-scale represent overlapping results. The parameters are the same as in the caption of Fig. \[fig:b\]. []{data-label="fig:z"}](fig4-rgb.pdf){width="\columnwidth"}
Calculations were performed at $T_0=T_c/4$ using material parameters corresponding to a typical MgB$_2$ film,[@schneider01; @denisov06] $\rho_n$=7 $\mu\Omega$cm, $\kappa = 0.17$ kW/Km$\times (T/T_c)^3$ and $c = 35 $ kJ/Km$^3\times (T/T_c) ^3$, where $\rho_n$ is the normal resistivity at $T_c=39~$K, $J_{c0}=50$ kA/m, $\rho_0=\rho_n$, $d=0.5~\mu$m, $a=2.2~$mm, and $h=220$ kW/Km$^2\times
(T/T_c)^3$. We choose $n_0=19$ and limit the creep exponent to $n(T)\leq n_\text{max}=59$. The field was ramped from $H_a=0$ at a constant rate, $\dot
H_a=10^{-5}J_{c0}\rho_n/ad\mu_0$.
Figure \[fig:b\] shows the $B_z$-distribution at $\mu_0H_a=0.18 \mu_0J_{c0}=11$ mT, where three large dendritic structures have already been formed. The numerical labels indicate the order in which they appeared during the field ramp. The first event took place at the threshold applied field, $\mu_0H_{\rm th} = 0.145 \mu_0J_{c0}=9.1$ mT, which is in excellent agreement with measurements on MgB$_2$ films just below 10 K$\approx T_c/4$. At lower fields, the flux penetration was gradual and smooth, just as seen on the left edge of the sample, where the characteristic “pillow effect” for films in the critical state is very well reproduced. [^2]
The dendritic avalanches all nucleate at the edges, and one by one they quickly develop into a branching structure that extends far beyond the critical-state front and deep into the Meissner state area. The trees are seen to have a morphology that strongly resembles the flux structures observed experimentally in many superconducting films. [@leiderer93; @bolz03; @duran95; @welling04; @johansen02; @albrecht05; @olsen07; @rudnev03; @wimbush04; @yurchenko07] The simulations also reproduce the experimental finding that once a flux tree is formed, the entire dendritic structure remains unchanged as $H_a$ continues to increase. The supplementary material [^3] includes a VIDEO clip of the dynamical process, and shows striking resemblance with magneto-optical observations of the phenomenon.
Figure \[fig:j\] shows the sheet current magnitude, $J$, corresponding to the flux distribution in Fig. \[fig:b\]. From this map it is clear that the dendrites completely interrupt the current flow in the critical state, and redirect it around the perimeter of the branching structure. This vast perturbation of the current has been demonstrated experimentally earlier using inversion of magneto-optical images. [@laviano04; @*olsen06] Note that the critical state region contains dark pixels which are the randomly distributed sites of reduced $J_{c0}$.
To investigate reproducibility in the pattern formation, microscopic fluctuations were introduced by randomly alternating between right- and left-derivatives in the discrete differentiation. Due to the nonlinear form of Eq. this procedure gives large local variations in the electric field. Figure \[fig:z\] shows an overlay of two simulation runs with different realizations of the microscopic fluctuations while keeping the same quenched disorder in $J_{c0}$. The two resulting images were colored so that adding them gives shades of gray where both coincide in pixel values. Clearly, the two runs gave different results as far as the dendritic pattern is concerned. Both produced three branching structures, where two are rooted at the same place and the third is at a different location. [^4] Even for those with overlap, there are parts of the structure that differ considerably, especially in the finer branches. In contrast, both the critical state and the Meissner state regions are essentially identical in the two runs. Note the color at the edge on the right-hand side near the root of the green dendrite, which reflects that the growth of the flux structure drains the external field near the root. Moreover, the roots of all the trees lie not far from the middles of the sides. Both features are in full accordance with experiments.
Each dendritic avalanche is accompanied by a large local increase in temperature. Shown in Fig. \[fig:temp\]a is a plot of the maximum temperature in the film during a field ramp with substrate kept at $T_0 = T_c/4$. The spikes in the temperature rise as high as $ 1.5 T_c$. The maximum temperature is found in the root region of the avalanche. The heating above $T_c$ is an interesting prediction; to our knowledge, the temperature of propagating avalanches has not been observed experimentally. At the same time, the result is consistent with the measured heating of uniform flux jumps in Nb foils [@prozorov06] and the magnetic field-induced damage in a YBa$_2$Cu$_3$O$_{7-x}$ film during dendritic growth.[@brull92]
![ Maximum temperature in the superconductor during an ascending field ramp at $T_0=T_c/4$. The panels (a)-(c) are successive magnifications of the first avalanche event. \[fig:temp\] ](fig5-T.pdf){width="\columnwidth"}
![ (Color online) Magnetic moment in units of $m_0=a^3J_{c0}$ as function of increasing field obtained by simulations at three different temperatures, $T_0$. Each jump in the curves represents a flux avalanche. \[fig:moment\] ](fig6-moment.pdf){width="\columnwidth"}
{width="\textwidth"}
The first avalanche in Fig. \[fig:temp\]a appears at $H_{\rm th} = 0.145 J_{c0}$. Since the chosen disorder is rather weak and the ramp rate is high, heat transfer to the substrate is expected to be a more important stabilizing factor than lateral heat diffusion. In this case the theoretically predicted threshold field is [@denisov06] $$\label{Hth}
H_\text{th} = \frac{J_{c}'}{\pi}\tanh^{-1}\left(\frac{T_ch}{naJ_{c0}\mu_0\dot H_a}\right)\, .$$ At $T=T_c/4$ and with $n=59$ this gives $H_\text{th}=0.15J_{c0}$, in excellent agreement with the present simulation. Here, $J_{c}'=0.6J_{c0}$ is the effective critical current, which is lower than $J_c$ due to flux creep. At the same time, the adiabatic threshold field [@denisov05] is much smaller than $H_\text{th}$, which means that the heat diffusion and heat transfer to the substrate prevent avalanches. However, during short time intervals cooling is not always effective, and the temperature experiences large fluctuations. The fluctuations are particularly large as $H_a$ approaches triggering of an avalanche, see Fig. \[fig:temp\]b. In these intervals both heat absorption and lateral heat diffusion play important roles in stabilizing the superconductor. A close-up view of the maximum temperature during the first avalanche at $T_0 = T_c/4$ is shown in Fig. \[fig:temp\]c. First, the temperature rapidly increases, and then decays much slower. The duration of the avalanche is $0.18~\mu$s. Since the length is $2.5~$mm, the average propagation velocity is of order $14$ km/s. This numerical value is reasonable compared to previous measurements, where the flux dendrites were triggered by a laser pulse in YBaCuO films.[@leiderer93; @bolz03] The maximum electric field in the superconductor during the avalanche is also high, found from the simulations to be approximately $5$ kV/m.
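This estimate can be reproduced directly from Eq. (\[Hth\]) with the material parameters listed above; the following small, self-contained numerical check (ours, for illustration) indeed gives $H_\text{th}\approx0.15J_{c0}$:

```python
import numpy as np

# Parameters quoted in the text, in SI units
Tc, a, d = 39.0, 2.2e-3, 0.5e-6             # K, m, m
Jc0, rho_n, n = 50e3, 7e-8, 59              # A/m, Ohm*m, creep exponent at Tc/4
h = 220e3 * 0.25**3                         # W/(K m^2), evaluated at T0 = Tc/4
Jc_eff = 0.6 * Jc0                          # effective critical current J_c'
mu0_Hdot_a = 1e-5 * Jc0 * rho_n / (a * d)   # mu0 * dH_a/dt, in T/s

x = Tc * h / (n * a * Jc0 * mu0_Hdot_a)
Hth = Jc_eff / np.pi * np.arctanh(x)
print(Hth / Jc0)                            # ~0.15
```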
The abrupt redirection of the current implies that the magnetic moment of the sample makes a jump and becomes smaller. Figure \[fig:moment\] shows the moment as function of the increasing applied field. Each vertical step corresponds to a flux avalanche. The lower curve, obtained for $T_0 = T_c/4$, shows jumps with typical size of $0.1m_0$ with a slight dispersion, which is due to variations both in shape and location of the avalanches. More pronounced is the variation in jump size with temperature. As $T_0$ gets lower the jump size becomes smaller, and the events more frequent. In the graphs for $T_0/T_c = 0.20$ and 0.17, the jump size reduces to $0.03
m_0$ and $0.01 m_0$, and jumps appear on average with field intervals of $\Delta H_a/J_{c0} = 0.01$ and $0.002$, respectively. In real samples a similar temperature variation of jumps in the $m$-$H$ curves was observed by magnetometry. [@zhao02; @rudnev03; @prozorov06; @colauto08]
It has been reported [@johansen02] that the morphology of flux avalanches is strongly temperature dependent. This is illustrated in the bottom panel of Fig. \[fig:temp3\] showing three magneto-optical images of a 0.4 $\mu$m thick MgB$_2$ square film at $T_0=$4 K, 6.3 K and 7.9 K. The images show a crossover from many long fingers at 4 K to medium sized dendrites at 6.3 K, to a single highly branched structure at 7.9 K. The simulation results shown in the top panels reproduce this result and show exactly the same trend as the experiments. At the lowest temperature, $0.17T_c$, there are many finger-like avalanches. At the middle temperature $0.2T_c$ there are fewer avalanches, with typically three to four branches each. At the highest temperature $0.25T_c$ there is just one big avalanche, with seven main branches.
Conclusion {#sec:conclusion}
==========
In conclusion, we have developed and demonstrated the use of a fast numerical scheme for simulation of nonlinear and nonlocal transverse magnetic dynamics of type-II superconducting films under realistic boundary conditions. Our simulations of thermomagnetic flux avalanches qualitatively and quantitatively reproduce numerous experimentally observed features: the fast flux dynamics, the morphology of the flux patterns, enhanced branching at higher temperatures, irreproducibility of the exact flux patterns, preferred locations for nucleation, and the existence of a threshold field. The scheme allows determination of key characteristics of the process, such as the maximal values of the temperature and electric field as well as the typical propagation velocity.
The work was supported financially by the Norwegian Research Council. We are thankful to M. Baziljevich for helpful discussions.
[^1]: The discrete time integration is explained using Euler’s method, but the actual implementation uses the Runge-Kutta method.
[^2]: Note a slight corrugation in this smooth pattern, which originates from the slightly nonuniform $J_{c0}$, a detail commonly seen in magneto-optical images of real samples.
[^3]: See Supplemental Material at URL for VIDEO clips showing the development of $B_z$ with time.
[^4]: The two roots overlap because clustering of the quenched disorder facilitates nucleation of the thermomagnetic instability.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'The aim of this paper is to provide cohomologies of $n$-ary Hom-Nambu-Lie algebras governing central extensions and one-parameter formal deformations. We generalize to $n$-ary algebras the notions of derivation and representation introduced by Sheng for Hom-Lie algebras. We also show that a cohomology of $n$-ary Hom-Nambu-Lie algebras can be derived from the cohomology of Hom-Leibniz algebras.'
address:
- 'Abdenacer Makhlouf, Université de Haute Alsace, Laboratoire de Mathématiques, Informatique et Applications, 4, rue des Frères Lumière F-68093 Mulhouse, France'
- 'Faouzi Ammar and Sami Mabrouk, Université de Sfax, Faculté des Sciences, Sfax Tunisia'
author:
- 'F. AMMAR, S. MABROUK and A. MAKHLOUF'
title: 'Representations and Cohomology of n-ary multiplicative Hom-Nambu-Lie algebras'
---
[^1]
Introduction {#introduction .unnumbered}
============
The first instances of $n$-ary algebras in Physics appeared with a generalization of the Hamiltonian mechanics proposed in 1973 by Nambu [@Nam]. More recent motivation comes from string theory and M-branes, which naturally involve an algebra with a ternary operation, the Bagger-Lambert algebra, which gave impulse to a significant development. It was used in [@bag] as one of the main ingredients in the construction of a new type of supersymmetric gauge theory that is consistent with all the symmetries expected of a multiple M2-brane theory: 16 supersymmetries, conformal invariance, and an SO(8) R-symmetry that acts on the eight transverse scalars. On the other hand, in the study of supergravity solutions describing M2-branes ending on M5-branes, the Lie algebra appearing in the original Nahm equations has to be replaced with a generalization involving a ternary bracket in the lifted Nahm equations, see [@basu]. For other applications in Physics see [@ker1], [@ker2], [@ker3].
The algebraic formulation of Nambu mechanics is due to Takhtajan [@Takhtajan0; @Takhtajan1], while the abstract definition of $n$-ary Nambu algebras or $n$-ary Nambu-Lie algebras (when the bracket is skew-symmetric) was given by Filippov in 1985, see [@Fil]. The Leibniz $n$-ary algebras were introduced and studied in [@CassasLodayPirashvili]. For deformation theory and cohomologies of $n$-ary algebras of Lie type, we refer to [@makhloufAtaguema1; @makhloufAtaguema2; @Gautheron1; @Izquirdo; @Takhtajan1].
The general Hom-algebra structures arose first in connection to quasi-deformation and discretizations of Lie algebras of vector fields. These quasi-deformations lead to quasi-Lie algebras, a generalized Lie algebra structure in which the skew-symmetry and Jacobi conditions are twisted. For Hom-Lie algebras, Hom-associative algebras, Hom-Lie superalgebras, Hom-bialgebras ... see [@AmmarMakhloufJA2010; @LS1; @HomNonAss; @MS; @HomHopf; @HomAlgHomCoalg]. Generalizations of $n$-ary algebras of Lie type and associative type by twisting the identities using linear maps have been introduced in [@makh]. These generalizations include $n$-ary Hom-algebra structures generalizing the $n$-ary algebras of Lie type such as $n$-ary Nambu algebras, $n$-ary Nambu-Lie algebras and $n$-ary Lie algebras, and $n$-ary algebras of associative type such as $n$-ary totally associative and $n$-ary partially associative algebras. See also [@yau1; @yau2; @yau3].
In the first Section of this paper we summarize the definitions of $n$-ary Hom-Nambu (resp. Hom-Nambu-Lie) algebras and the multiplicative $n$-ary Hom-Nambu (resp. Hom-Nambu-Lie) algebras. In Section $2$, we extend to $n$-ary algebras the notions of derivations and representation introduced for Hom-Lie algebras in [@Sheng]. In Section $3$, we show that for an $n$-ary Hom-Nambu-Lie algebra $\mathcal{N}$, the space $\wedge^{n-1}\mathcal{N}$ carries a structure of Hom-Leibniz algebra. Section $4$ is dedicated to central extensions. We provide a cohomology adapted to central extensions of $n$-ary multiplicative Hom-Nambu-Lie algebras. In Section $5$, we provide a cohomology which is suitable for the study of one-parameter formal deformations of $n$-ary Hom-Nambu-Lie algebras. In the last Section we show that the cohomology of $n$-ary Hom-Nambu-Lie algebras may be derived from the cohomology of Hom-Leibniz algebras. To this end we generalize to the twisted situation the process used by Daletskii and Takhtajan [@Takhtajan0] for the classical case.
The n-ary Hom-Nambu algebras
============================
Throughout this paper, we will for simplicity of exposition assume that $\mathbb{K}$ is an algebraically closed field of characteristic zero, even though for most of the general definitions and results in the paper this assumption is not essential.
Definitions
-----------
In this section, we recall the definition of $n$-ary Hom-Nambu algebras and $n$-ary Hom-Nambu-Lie algebras, introduced in [@makh] by Ataguema, Makhlouf and Silvestrov. They correspond to a generalized version by twisting of $n$-ary Nambu algebras and Nambu-Lie algebras which are called Filippov algebras. We deal in this paper with a subclass of $n$-ary Hom-Nambu algebras called multiplicative $n$-ary Hom-Nambu algebras.
An *$n$-ary Hom-Nambu* algebra is a triple $(\mathcal{N}, [\cdot ,..., \cdot], \widetilde{\alpha} )$ consisting of a vector space $\mathcal{N}$, an $n$-linear map $[\cdot ,..., \cdot ] : \mathcal{N}^{ n}\longrightarrow \mathcal{N}$ and a family $\widetilde{\alpha}=(\alpha_i)_{1\leq i\leq n-1}$ of linear maps $ \alpha_i:\ \ \mathcal{N}\longrightarrow \mathcal{N}$, satisfying\
$$\begin{aligned}
\label{NambuIdentity}
&& \big[\alpha_1(x_1),....,\alpha_{n-1}(x_{n-1}),[y_1,....,y_{n}]\big]= \\ \nonumber
&& \sum_{i=1}^{n}\big[\alpha_1(y_1),....,\alpha_{i-1}(y_{i-1}),[x_1,....,x_{n-1},y_i]
,\alpha_i(y_{i+1}),...,\alpha_{n-1}(y_n)\big],
\end{aligned}$$ for all $(x_1,..., x_{n-1})\in \mathcal{N}^{ n-1}$, $(y_1,..., y_n)\in \mathcal{N}^{ n}.$\
The identity $(1.1)$ is called *Hom-Nambu identity*.
Let $x=(x_1,\ldots,x_{n-1})\in \mathcal{N}^{n-1}$, $\widetilde{\alpha}
(x)=(\alpha_1(x_1),\ldots,\alpha_{n-1}(x_{n-1}))\in \mathcal{N}^{n-1}$ and $y\in \mathcal{N}$. We define an adjoint map $ad(x)$ as a linear map on $\mathcal{N}$, such that $$\label{adjointMapNaire}
ad(x)(y)=[x_{1},\cdots,x_{n-1},y].$$
Then the Hom-Nambu identity may be written in terms of adjoint map as $$ad(\widetilde{\alpha} (x))( [x_{n},...,x_{2n-1}])=
\sum_{i=n}^{2n-1}{[\alpha_1(x_{n}),...,\alpha_{i-n}(x_{i-1}),
ad(x)(x_{i}), \alpha_{i-n+1}(x_{i+1}) ...,\alpha_{n-1}(x_{2n-1})].}$$
When the maps $(\alpha_i)_{1\leq i\leq n-1}$ are all identity maps, one recovers the classical $n$-ary Nambu algebras. The Hom-Nambu Identity , for $n=2$, corresponds to Hom-Jacobi identity (see [@MS]), which reduces to Jacobi identity when $\alpha_1=id$.
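For instance, for $n=3$ the Hom-Nambu identity reads explicitly $$\big[\alpha_1(x_1),\alpha_2(x_2),[y_1,y_2,y_3]\big]=\big[[x_1,x_2,y_1],\alpha_1(y_2),\alpha_2(y_3)\big]+\big[\alpha_1(y_1),[x_1,x_2,y_2],\alpha_2(y_3)\big]+\big[\alpha_1(y_1),\alpha_2(y_2),[x_1,x_2,y_3]\big],$$ which for $\alpha_1=\alpha_2=id$ is the usual identity defining ternary Nambu algebras.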
Let $(\mathcal{N},[\cdot,\dots,\cdot],\widetilde{\alpha})$ and $(\mathcal{N}',[\cdot,\dots,\cdot]',\widetilde{\alpha}')$ be two $n$-ary Hom-Nambu algebras where $\widetilde{\alpha}=(\alpha_{i})_{i=1,\cdots,n-1}$ and $\widetilde{\alpha}'=(\alpha'_{i})_{i=1,\cdots,n-1}$. A linear map $f:
\mathcal{N}\rightarrow \mathcal{N}'$ is an $n$-ary Hom-Nambu algebras *morphism* if it satisfies $$\begin{aligned}
f ([x_{1},\cdots,x_{n}])&=&
[f (x_{1}),\cdots,f (x_{n})]'\\
f \circ \alpha_i&=&\alpha'_i\circ f \quad \forall i=1,n-1.\end{aligned}$$
An $n$-ary Hom-Nambu algebra $(\mathcal{N}, [\cdot ,..., \cdot], \widetilde{ \alpha} )$ where $\widetilde{\alpha}=(\alpha_i)_{1\leq i\leq n-1}$ is called *$n$-ary Hom-Nambu-Lie* algebra if the bracket is skew-symmetric that is $$[x_{\sigma(1)},..,x_{\sigma(n)}]=Sgn(\sigma)[x_1,..,x_n],\ \ \forall \sigma\in \mathcal{S}_n
\ \ \textrm{and}\ \ \forall x_1,...,x_n\in \mathcal{N}.$$ where $\mathcal{S}_n$ stands for the permutation group of $n$ elements.
In the sequel we deal with a particular class of $n$-ary Hom-Nambu-Lie algebras which we call $n$-ary multiplicative Hom-Nambu-Lie algebras.
An *$n$-ary multiplicative Hom-Nambu algebra* (resp. *$n$-ary multiplicative Hom-Nambu-Lie algebra*) is an $n$-ary Hom-Nambu algebra (resp. $n$-ary Hom-Nambu-Lie algebra) $(\mathcal{N}, [\cdot ,..., \cdot], \widetilde{ \alpha})$ with $\widetilde{\alpha}=(\alpha_i)_{1\leq i\leq n-1}$ where $\alpha_1=...=\alpha_{n-1}=\alpha$ and satisfying $$\alpha([x_1,..,x_n])=[\alpha(x_1),..,\alpha(x_n)],\ \ \forall x_1,...,x_n\in \mathcal{N}.$$ For simplicity, we will denote the $n$-ary multiplicative Hom-Nambu algebra as $(\mathcal{N}, [\cdot ,..., \cdot ], \alpha)$ where $\alpha :\mathcal{N}\rightarrow \mathcal{N}$ is a linear map. Also, by misuse of language, an element $x\in \mathcal{N}^n$ refers to $x=(x_1,..,x_{n})$ and $\alpha(x)$ denotes $(\alpha (x_1),...,\alpha (x_n))$.
The following theorem gives a way to construct $n$-ary multiplicative Hom-Nambu algebras (resp. Hom-Nambu-Lie algebras) starting from classical $n$-ary Nambu algebras (resp. Nambu-Lie algebras) and algebra endomorphisms.
[@makh] Let $(\mathcal{N},[\cdot,...,\cdot])$ be an $n$-ary Nambu algebra (resp. $n$-ary Nambu-Lie algebra) and let $\rho:\mathcal{N}\rightarrow \mathcal{N}$ be an n-ary Nambu (resp. Nambu-Lie) algebra endomorphism. Then $(\mathcal{N},\rho\circ[\cdot ,...,\cdot ],\rho)$ is a $n$-ary multiplicative Hom-Nambu algebra (resp. $n$-ary multiplicative Hom-Nambu-Lie algebra).
Representations of Hom-Nambu-Lie algebras
=========================================
In this section we extend the representation theory of Hom-Lie algebras introduced in [@Sheng] and [@BenayadiMakhlouf] to the $n$-ary case. We denote by $End(\mathcal{N})$ the linear group of operators on the ${\mathbb K}$-vector space $\mathcal{N}$. Sometimes it is considered as a Lie algebra with the commutator brackets.
Derivations of $n$-ary Hom-Nambu-Lie algebras
---------------------------------------------
Let $(\mathcal{N}, [\cdot ,..., \cdot ], \alpha )$ be an $n$-ary multiplicative Hom-Nambu-Lie algebra. We denote by $\alpha^k$ the $k$-times composition of $\alpha$ (i.e. $\alpha^k=\alpha\circ...\circ\alpha$ $k$-times). In particular $\alpha^{-1}=0$ and $\alpha^0=id$.
For any $k\geq1$, we call $D\in End(\mathcal{N})$ an $\alpha^k$-*derivation* of the $n$-ary multiplicative Hom-Nambu-Lie algebra $(\mathcal{N}, [\cdot ,...,\cdot], \alpha )$ if $$\label{alphaKderiv1}[D,\alpha]=0\ \ (\textrm{i.e.}\ \ D\circ\alpha=\alpha\circ D),$$ and $$\label{alphaKderiv2}
D[x_1,...,x_n]=\sum_{i=1}^n[\alpha^k(x_1),...,\alpha^k(x_{i-1}),D(x_i),\alpha^k(x_{i+1}),...,\alpha^k(x_n)],$$ We denote by $Der_{\alpha^k}(\mathcal{N})$ the set of $\alpha^k$-derivations of the $n$-ary multiplicative Hom-Nambu-Lie algebra $\mathcal{N}$.
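For example, for $n=3$ the condition above reads $$D[x_1,x_2,x_3]=[D(x_1),\alpha^k(x_2),\alpha^k(x_3)]+[\alpha^k(x_1),D(x_2),\alpha^k(x_3)]+[\alpha^k(x_1),\alpha^k(x_2),D(x_3)],$$ so that for $k=0$ one recovers the usual Leibniz rule for a derivation of a ternary bracket, while $D$ is still required to commute with $\alpha$.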
For $x=(x_1,...,x_{n-1})\in \mathcal{N}^{ n-1}$ satisfying $\alpha(x)=x$ and $k\geq 1$, we define the map $ad_k(x)\in End(\mathcal{N})$ by $$\label{ad_k(u)}
ad_k(x)(y)=[x_1,...,x_{n-1},\alpha^k(y)]\ \ \forall y\in \mathcal{N}.$$ Then
The map $ad_k(x)$ is an $\alpha^{k+1}$-derivation, that we call inner $\alpha^{k+1}$-derivation.
We denote by $Inn_{\alpha^k}(\mathcal{N})$ the ${\mathbb K}$-vector space generated by all inner $\alpha^{k+1}$-derivations. For any $D\in Der_{\alpha^k}(\mathcal{N})$ and $D'\in Der_{\alpha^k}(\mathcal{N})$ we define their commutator $[D,D']$ as usual: $$\label{DerivationsCommutator}[D,D']=D\circ D'-D'\circ D.$$ Set $Der(\mathcal{N})={\displaystyle}\bigoplus_{k\geq -1}Der_{\alpha^k}(\mathcal{N})$ and $Inn(\mathcal{N})={\displaystyle}\bigoplus_{k\geq -1}Inn_{\alpha^k}(\mathcal{N})$.
\[2.3\] For any $D\in Der_{\alpha^k}(\mathcal{N})$ and $D'\in Der_{\alpha^{k'}}(\mathcal{N})$, where $k+k'\geq-1$, we have $$[D,D']\in Der_{\alpha^{k+k'}}(\mathcal{N}).$$
Let $x_i\in \mathcal{N},\ 1\leq i\leq n$, $D\in Der_{\alpha^k}(\mathcal{N})$ and $D'\in Der_{\alpha^{k'}}(\mathcal{N})$, then $$\begin{aligned}
D\circ D'([x_1,...,x_n]) &=& \sum_{i=1}^n D([\alpha^{k'}(x_1),...,D'(x_i),...,\alpha^{k'}(x_n)])\\
&=& \sum_{i=1}^n [\alpha^{k+k'}(x_1),...,D\circ D'(x_i),...,\alpha^{k+k'}(x_n)] \\
&+& \sum_{i<j}^n [\alpha^{k+k'}(x_1),...,\alpha^k( D'(x_i)),...\alpha^{k'}(D(x_j)),...,\alpha^{k+k'}(x_n)]\\
&+& \sum_{i>j}^n [\alpha^{k+k'}(x_1),...,\alpha^{k'}( D(x_j)),...\alpha^k(D'(x_i)),...,\alpha^{k+k'}(x_n)].\end{aligned}$$ The sum of the second and the third terms is symmetric under the exchange of $(D,\alpha^k)$ and $(D',\alpha^{k'})$, hence it cancels in the commutator $[D,D']$, and $$\begin{aligned}
[D, D']([x_1,...,x_n]) &=& (D\circ D'-D'\circ D)([x_1,...,x_n])\\
&=& \sum_{i=1}^n [\alpha^{k+k'}(x_1),...,(D\circ D'-D'\circ D)(x_i),...,\alpha^{k+k'}(x_n)] \\
&= &\sum_{i=1}^n [\alpha^{k+k'}(x_1),...,[D, D'](x_i),...,\alpha^{k+k'}(x_n)],\end{aligned}$$ which yields that $[D,D']\in Der_{\alpha^{k+k'}}(\mathcal{N})$.
Moreover we have:
The pair $(Der(\mathcal{N}),[\cdot ,\cdot ])$, where the bracket is the usual commutator, defines a Lie algebra and $Inn(\mathcal{N})$ constitutes an ideal of it.
$(Der(\mathcal{N}),[\cdot ,\cdot ])$ is a Lie algebra by Lemma \[2.3\]. We show that $Inn(\mathcal{N})$ is an ideal. Let $ad_k(x)=[x_1,...,x_{n-1},\alpha^{k-1}(\cdot)]$ be an inner $\alpha^k$-derivation on $\mathcal{N}$ and $D\in Der_{\alpha^{k'}}(\mathcal{N})$, for $k\geq -1$ and $k'\geq -1$ with $k+k'\geq -1$. Then $$[D,ad_k(x)]\in Der_{\alpha^{k+k'}}(\mathcal{N})$$ and for any $y\in \mathcal{N}$ $$\begin{aligned}
[D,ad_k(x)](y) &=& D([x_1,...,x_{n-1},\alpha^{k-1}(y)])-[x_1,...,x_{n-1},\alpha^{k-1}(D(y))],\\
&=& D([\alpha^{k}(x_1),...,\alpha^{k}(x_{n-1}),\alpha^{k-1}(y)])-[\alpha^{k+k'}(x_1),...,\alpha^{k+k'}(x_{n-1}),\alpha^{k-1}(D(y))], \\
&=& \sum_{i\leq n-1}[\alpha^{k+k'}(x_1),...,D(\alpha^k(x_i)),...,\alpha^{k+k'}(x_{n-1}),\alpha^{k+k'-1}(y)], \\
&=& \sum_{i\leq n-1}[x_1,...,D(x_i),...,x_{n-1},\alpha^{k+k'-1}(y)], \\
&=&\sum_{i\leq n-1}ad_{k+k'}(x_1\wedge...\wedge D(x_i)\wedge...\wedge x_{n-1})(y).\end{aligned}$$ Therefore $[D,ad_k(x)]\in Inn_{\alpha^{k+k'}}(\mathcal{N})$.
Representations of $n$-ary Hom-Nambu-Lie algebras
-------------------------------------------------
In this section we introduce and study the representations of $n$-ary multiplicative Hom-Nambu-Lie algebras.
A representation of an $n$-ary multiplicative Hom-Nambu-Lie algebra $(\mathcal{N},[\cdot ,...,\cdot ],\alpha)$ on a vector space $\mathcal{N}$ is a skew-symmetric multilinear map $\rho:\mathcal{N}^{ n-1}\longrightarrow End(\mathcal{N})$, satisfying for $x,y\in \mathcal{N}^{n-1}$ the identity $$\label{RepIdentity1}
\rho(\alpha(x))\circ\rho(y)-\rho(\alpha(y))\circ\rho(x)=\sum_{i=1}^{n-1}\rho(\alpha (x_1),...,ad(y)(x_i),...,\alpha (x_{n-1}))\circ \nu$$
where $\nu$ is a linear map.
Two representations $\rho$ and $\rho'$ on $\mathcal{N}$ are *equivalent* if there exists an isomorphism of vector spaces $f:\mathcal{N} \rightarrow \mathcal{N}$ such that $f(x\cdot y)=x\cdot ' f(y)$, where $x\cdot y=\rho(x)(y)$ and $x\cdot' y=\rho'(x)(y)$ for $x\in \mathcal{N}^{n-1}$ and $y\in \mathcal{N}.$
Let $(\mathcal{N}, [\cdot ,..., \cdot ], \alpha )$ be an $n$-ary multiplicative Hom-Nambu-Lie algebra. The map $\rm ad$ defined in is a representation, where the operator $\nu$ is the twist map $\alpha$. The identity is equivalent to the Hom-Nambu identity. It is called the adjoint representation.
From $n$-ary Hom-Nambu-Lie algebra to Hom-Leibniz algebra
=========================================================
In the context of Hom-Lie algebras one gets the class of Hom-Leibniz algebras (see [@MS]). Following the standard Loday conventions for Leibniz algebras, a Hom-Leibniz algebra is a triple $(V, [\cdot, \cdot], \alpha)$ consisting of a vector space $V$, a bilinear map $[\cdot, \cdot]: V\times V \rightarrow V$ and a linear map $\alpha: V \rightarrow V$, satisfying $$\label{Leibnizalgident}
[\alpha(x),[y,z]]=[[x,y],\alpha(z)]+[\alpha (y),[x,z]]$$
Let $(\mathcal{N},[\cdot ,...,\cdot ],\alpha)$ be an $n$-ary multiplicative Hom-Nambu-Lie algebra. We define\
$\bullet$ a linear map $L:\wedge^{n-1}\mathcal{N}\longrightarrow End(\mathcal{N})$ by $$\label{adj}L(x)\cdot z=[x_1,...,x_{n-1},z],$$ for all $x=x_1\wedge...\wedge x_{ n-1}\in\wedge^{n-1}\mathcal{N}, \ z\in \mathcal{N}$ and extending it linearly to all $\wedge^{n-1}\mathcal{N}$. Notice that $L(x)\cdot z=ad(x)(z)$.\
$\bullet$ a linear map $\tilde{\alpha}:\wedge^{n-1}\mathcal{N}\longrightarrow\wedge^{n-1}\mathcal{N}$ by $$\tilde{\alpha}(x)=\alpha(x_1)\wedge...\wedge\alpha(x_{n-1})\,$$
for all $x=x_1\wedge...\wedge x_{ n-1}\in\wedge^{n-1}\mathcal{N}$,\
$\bullet$ a bilinear map $[\ ,\ ]_{\alpha}:\wedge^{n-1}\mathcal{N}\times\wedge^{n-1}\mathcal{N}\longrightarrow\wedge^{n-1}\mathcal{N}$ by $$\label{brackLei}[x ,y]_{\alpha}=L(x)\bullet_{\alpha}y=\sum_{i=1}^{n-1}\big(\alpha(y_1),...,L(x)\cdot y_i,...,\alpha(y_{n-1})\big),$$ for all $x=x_1\wedge...\wedge x_{ n-1}\in\wedge^{n-1}\mathcal{N},\ y=y_1\wedge...\wedge y_{ n-1}\in\wedge^{n-1}\mathcal{N}$\
We denote by $\mathcal{L}(\mathcal{N})$ the space $\wedge^{n-1}\mathcal{N}$ and we call it the fundamental set.
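For instance, for $n=3$, where $\mathcal{L}(\mathcal{N})=\mathcal{N}\wedge\mathcal{N}$, the bracket takes the form $$[x_1\wedge x_2\, ,\, y_1\wedge y_2]_{\alpha}=[x_1,x_2,y_1]\wedge\alpha(y_2)+\alpha(y_1)\wedge[x_1,x_2,y_2],$$ that is, $L(x)$ acts on one slot of $y$ at a time while the remaining slot is twisted by $\alpha$.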
\[3.1\] The map $L$ satisfies $$L([x ,y ]_{\alpha})\cdot \alpha(z)=L(\alpha(x))\cdot \big(L(y)\cdot z\big)-L(\alpha(y))\cdot \big(L(x)\cdot z\big)$$ for all $x,\ y\in \mathcal{L}(\mathcal{N}),\ z\in \mathcal{N}$
\[HomLeibOfHomNambu\]The triple $\big(\mathcal{L}(\mathcal{N}),\ [\ ,\ ]_{\alpha},\ \alpha\big)$ is a Hom-Leibniz algebra.
Let $x=x_1\wedge...\wedge x_{ n-1},\ y=y_1\wedge...\wedge y_{n-1} $ and $z=z_1\wedge...\wedge z_{n-1}\in \mathcal{L}(\mathcal{N})$, the Leibniz identity can be written $$\label{brackLei2}\big[[x ,y ]_{\alpha} ,\alpha(z)\big]_{\alpha}=[\alpha(x) ,[y ,z ]_{\alpha} \big]_{\alpha}-[\alpha(y) ,[x ,z ]_{\alpha}\big]_{\alpha}$$ and equivalently $$\label{brackLei3}
\Big(L\big(L(x)\bullet_{\alpha}y\big)\bullet_{\alpha}\tilde{\alpha}(z)\Big)\cdot (v)
=\Big(L(\alpha(x))\bullet_{\alpha}\big(L(y)\bullet_{\alpha}z\big)\Big)\cdot (v)-
\Big(L(\alpha(y))\bullet_{\alpha}\big(L(x)\bullet_{\alpha}z\big)\Big)\cdot (v).$$ Let us compute first $\Big(L(\tilde{\alpha}(x))\bullet_{\alpha}\big(L(y)\bullet_{\alpha}z\big)\Big)$. This is given by $$\begin{aligned}
\Big(L(\alpha(x))\bullet_{\alpha}\big(L(y)\bullet_{\alpha}z\big)\Big) &=& \sum_{i=0}^{n-1}L(\alpha(x))\bullet_{\alpha}\big(\alpha(z_1),...,L(y)\cdot z_i,...,\alpha(z_{n-1})\big) \\
&=& \sum_{i=0}^{n-1}\sum_{i\neq j,j=0}^{n-1}\big(\alpha^2(z_1),...,\alpha(L(x)\cdot z_j),...,\alpha(L(y)\cdot z_i)...,\alpha^2(z_{n-1})\big) \\
&+& \sum_{i=0}^{n-1}\big(\alpha^2(z_1),...,L(\tilde{\alpha}(x))\cdot (L(y)\cdot z_i),...,\alpha^2(z_{n-1})\big).
\end{aligned}$$ The double sum on the right-hand side is symmetric in $x$ and $y$; hence, $$\Big(L(\alpha(x))\bullet_{\alpha}\big(L(y)\bullet_{\alpha}z\big)\Big)-
\Big(L(\alpha(y))\bullet_{\alpha}\big(L(x)\bullet_{\alpha}z\big)\Big)=$$$$\sum_{i=0}^{n-1} (\alpha^2(z_1),...,\{L(\alpha(x))\cdot (L(y)\cdot z_i)
- L(\alpha(y))\cdot (L(x)\cdot z_i)\},...,\alpha^2(z_{n-1})\big).$$ On the other hand, using the definition of $[\ ,\ ]_{\alpha}$, we find $$\Big(L\big(L(x)\bullet_{\alpha}y\big)\bullet_{\alpha}\tilde{\alpha}(z)\Big)
=$$$$\sum_{i=0}^{n-1}\sum_{j=0}^{n-1}\big(\alpha^2(z_1),...,\alpha^2(z_{i-1}),[\alpha(y_1),...,
L(x)\cdot y_j,...,\alpha(y_{n-1}),\alpha(z_i)],\alpha^2(z_{i+1}),...,\alpha^2(z_{n-1})\big)$$ $$=\sum_{i=0}^{n-1}\big(\alpha^2(z_1),...,\alpha^2(z_{i-1}),[x ,y ]_{\alpha}\cdot \alpha(z_i),\alpha^2(z_{i+1}),...,\alpha^2(z_{n-1})\big).$$ Using Lemma \[3.1\], the proof is completed.
We obtain a similar result if we consider the space $T\mathcal{N}=\otimes^n \mathcal{N}$ instead of $\mathcal{L}(\mathcal{N})$.
For $n=2$ the map $L:\mathcal{L}(\mathcal{N})\longrightarrow End(\mathcal{N})$ defines a representation of $\mathcal{L}(\mathcal{N})$ on $\mathcal{N}$.
One should set $\nu =\alpha$ and check $$\begin{aligned}
\label{I}
L(\alpha(x))\cdot \alpha(z)&=\alpha(L(x)\cdot z)\\\label{II} L([x,y]_\alpha)\cdot \alpha(z)&=L(\alpha(x))(y)\cdot z-L(\alpha(y))(x)\cdot z\end{aligned}$$ Indeed and are equivalent to $$\begin{aligned}
[\alpha(x),\alpha (y)]&=\alpha([x,y]),\\
[[x,y],\alpha(z)]&=[[\alpha(x),y],z]-[[\alpha(y),x],z].\end{aligned}$$ According to [@Sheng] and [@BenayadiMakhlouf] it corresponds to the adjoint representation of a Hom-Lie algebra.
Central Extensions and Cohomology of $n$-ary Hom-Nambu-Lie algebras
===================================================================
Central extensions of $n$-ary multiplicative Hom-Nambu-Lie algebras
-------------------------------------------------------------------
Let $(\mathcal{N},[\cdot ,...,\cdot ],\alpha)$ be an $n$-ary multiplicative Hom-Nambu-Lie algebra.
We define a central extension $\tilde{\mathcal{N}}$ of $\mathcal{N}$ by adding a new central generator $e$ and modifying the bracket as follows: for all $\tilde{x}_i=x_i+a_i e$, $ a_i\in{\mathbb K}$ and $ 1\leq i\leq n$ we have $$\label{7.1}
[\tilde{x}_1,...,\tilde{x}_n]_{\widetilde{\mathcal{N}}}=[x_1,...,x_n]+\varphi(x_1,...,x_n)e,$$ $$\label{7.11}\beta(\widetilde{x}_i)=\alpha(x_i)+\lambda(x_i)e,$$ $$\label{7.12} [\tilde{x}_1,...,\tilde{x}_{n-1},e]_{\widetilde{\mathcal{N}}}=0,$$ where $\lambda:\mathcal{N}\rightarrow\mathbb{K}$ a linear map.
One may think of adding more than one central generator, but this will not be needed here for the discussion.
- Clearly, $\varphi$ has to be an $n$-linear and skew-symmetric map, $\varphi\in \wedge^{n-1}\mathcal{N}^*\wedge \mathcal{N}^*$, where $\mathcal{N}^*$ is the dual of $\mathcal{N}$. It will be identified with a $1$-cochain.
- The new bracket for the $\tilde{x}_i\in\widetilde{\mathcal{N}}$ has to satisfy the Hom-Nambu identity. This leads to a condition on $\varphi$ when one of the vector involved is $e$.
- Since $e$ is central, the Hom-Nambu identity imposes no restriction on $\lambda$.\
For $\tilde{x}_i=x_i+a_i e\in \tilde{\mathcal{N}}\ $, $\tilde{y}_i=y_i+b_i e\in \tilde{\mathcal{N}},\ \ 1\leq i\leq n$, we have $$\begin{aligned}
&& \big[\beta(\tilde{x}_1),....,\beta(\tilde{x}_{n-1}),[\tilde{y}_1,....,\tilde{y}_{n}]_{\widetilde{\mathcal{N}}}\big]_{\widetilde{\mathcal{N}}}= \\ && \sum_{i=1}^{n}\big[\beta(\tilde{y}_1),....,\beta(\tilde{y}_{i-1}),[\tilde{x}_1,....,\tilde{x}_{n-1},\tilde{y}_i]_{\widetilde{\mathcal{N}}},
\beta(\tilde{y}_{i+1}),...,\beta(\tilde{y}_n)\big]_{\widetilde{\mathcal{N}}},
\end{aligned}$$
Using and the Hom-Nambu identity for the original Hom-Nambu-Lie algebra, one gets $$\begin{aligned}
\label{7.2}&& \varphi\big(\alpha(x_1),....,\alpha(x_{n-1}),[y_1,....,y_{n}]\big)- \nonumber\\ && \sum_{i=1}^{n}\varphi\big(\alpha(y_1),....,\alpha(y_{i-1}),[x_1,....,x_{n-1},y_i],\alpha(y_{i+1}),...,\alpha(y_n)\big)=0,
\end{aligned}$$
- The previous equation may be written as $$\delta^2\varphi(x,y,z)=0$$ where $x=x_1\otimes...\otimes x_{n-1}\in \mathcal{N}^{\otimes n-1},\ y=y_1\otimes...\otimes y_{ n-1}\in \mathcal{N}^{\otimes n-1},\ z=y_n\in \mathcal{N}$.\
We provide below the condition that characterizes $\varphi\in \wedge^{n-1}\mathcal{N}^*\wedge \mathcal{N}^*$, $\varphi:x\wedge z\rightarrow\varphi(x,z)$ as a $1$-cocycle. It is now seen why it becomes natural to call $\varphi$ a $1$-cocycle (rather than a $2$-cochain, as it is in the Hom-Lie cohomology case in [@HomDeform]).\
The number of elements of $\mathcal{L}(\mathcal{N})$ in the argument of a cochain determines its order. As we shall see shortly, an arbitrary $p$-cochain takes $p(n-1)+1$ arguments in $\mathcal{N}$. A $0$-cochain is an element of $\mathcal{N}^*$.
Cohomology adapted to central extensions of multiplicative Hom-Nambu-Lie algebras
---------------------------------------------------------------------------------
Let us now construct the cohomology complex relevant for central extensions of multiplicative Hom-Nambu-Lie algebras. Since $\mathcal{N}$ does not act on $\varphi(x,z)$, it will be the cohomology of multiplicative Hom-Nambu-Lie algebras for the trivial action.
We define an arbitrary $p$-cochain as an element $\varphi\in \wedge^{n-1}\mathcal{N}^*\otimes...\otimes\wedge^{n-1}\mathcal{N}^*\wedge \mathcal{N}^*$, $$\begin{aligned}
\varphi:\mathcal{L}(\mathcal{N})\otimes...\otimes \mathcal{L}(\mathcal{N})\wedge \mathcal{N} &\longrightarrow& \mathbb{K} \\
(x_1,..,x_p,z)\ \ \ \ &\longmapsto& \varphi(x_1,..,x_p,z)\end{aligned}$$
We denote the set of $p$-cochains with values in ${\mathbb K}$ by $C^p(\mathcal{N},\mathbb{K})$.
Condition guarantees the consistency of $\varphi$ according to with the Hom-Nambu identity . Then $$\label{7.3}\delta^2\varphi(x,y,z)=\varphi\big(\alpha(x),L(y)\cdot z\big)-\varphi\big(\alpha(y),L(x)\cdot z\big)-
\varphi\big([x,y]_{\alpha},\alpha(z)\big)=0,$$ where $L(x)\cdot z$ and $[x,y]_\alpha$ are defined in and . It is now straightforward to extend to a whole cohomology complex; $\delta^p\varphi$ will be a $(p+1)$-cochain taking one more argument of $\mathcal{L}(\mathcal{N})$ than $\varphi$. This is done by means of the following
Let $\varphi\in C^p(\mathcal{N},\mathbb{K})$ be a $p$-cochain on a multiplicative $n$-ary Hom-Nambu-Lie algebra $\mathcal{N}$. A coboundary operator $\delta^p$ on arbitrary $p$-cochain is given by
$$\begin{aligned}
\delta^p\varphi(x_1,...,x_{p+1},z) &=& \sum_{1\leq i<j}^{p+1}(-1)^i\varphi\big(\alpha(x_1),...,\hat{x_i},...,[x_i,x_j]_{\alpha},...,\alpha(x_{p+1}),\alpha(z)\big) \\
&+& \sum_{i=1}^{p+1}(-1)^i\varphi\big(\alpha(x_1),...,\hat{x_i},...,\alpha(x_{p+1}),L(x_i)\cdot z\big)\nonumber
\end{aligned}$$
where $x_1,...,x_{p+1}\in \mathcal{L}(\mathcal{N}),\ z\in \mathcal{N}$ and $\hat{x}_i$ indicates that $x_i$ is omitted.
\[propo4.4\] If $\varphi\in C^p(\mathcal{N},\mathbb{K})$ be a $p$-cochain, then $$\delta^{p+1}\circ\delta^p(\varphi)=0$$
Let $\varphi$ be a $p$-cochain, $(x_i)_{1\leq i\leq p}\in \mathcal{L}(\mathcal{N})$ and $z\in \mathcal{N}$. We can write $\delta^p$ and $\delta^{p+1}\circ\delta^p$ as $$\begin{aligned}
& & \delta^p=\delta_1^p+\delta_2^p \\
\textrm{and }& & \delta^{p+1}\circ\delta^p=\eta_{11}+\eta_{12}+\eta_{21}+\eta_{22}\end{aligned}$$ where $\eta_{ij}=\delta_i^{p+1}\circ\delta_j^p$, $1\leq i,j\leq 2$, and
$$\begin{aligned}
\delta_1^p\varphi(x_1,...,x_{p+1},z )&=& \sum_{1\leq i<j}^{p+1}(-1)^i\varphi\big(\alpha(x_1),...,\hat{x_i},...,[x_i,x_j]_{\alpha},...,\alpha(x_{p+1}),\alpha(z)\big) \\
\delta_2^p\varphi(x_1,...,x_{p+1},z) &=& \sum_{i=1}^{p+1}(-1)^i\varphi\big(\alpha(x_1),...,\hat{x_i},...,\alpha(x_{p+1}),L(x_i)\cdot z\big)\end{aligned}$$
$\bullet$ Let us compute first $\eta_{11}\varphi(x_1,...,x_{p+1},z)$. This is given by $$\begin{aligned}
& & \eta_{11}(\varphi)(x_1,...,x_{p+1},z)\\
&=& \sum_{1\leq i<k< j}^{p+1}(-1)^{i+k}\varphi\big(\alpha^2(x_1),...,\widehat{x_i},...,\widehat{\alpha(x_k)},...,[\alpha(x_k),[x_i,x_j]_\alpha]_\alpha,....,
\alpha^2(x_{p+1}),\alpha^2(z)\big) \\
&+& \sum_{1\leq i<k< j}^{p+1}(-1)^{i+k-1}\varphi\big(\alpha^2(x_1),...,\widehat{\alpha(x_i)},...,\widehat{x_k},...,[\alpha(x_i),[x_k,x_j]_\alpha]_\alpha,....,
\alpha^2(x_{p+1}),\alpha^2(z)\big) \\
&+& \sum_{1\leq i<k< j}^{p+1}(-1)^{i+k-1}\varphi\big(\alpha^2(x_1),...,\widehat{x_i},...,\widehat{[x_i,x_k]_\alpha},...,[[x_i,x_k]_\alpha,\alpha(x_j)]_\alpha,....,
\alpha^2(x_{p+1}),\alpha^2(z)\big).\end{aligned}$$ Whence applying the Hom-Leibniz identity to $x_i,\ x_j,\ x_k\in \mathcal{L}(\mathcal{N})$, we find $\eta_{11}=0$.\
$\bullet$ $$\begin{aligned}
& & \eta_{21}(\varphi)(x_1,...,x_{p+1},z)+\eta_{12}(\varphi)(x_1,...,x_{p+1},z)= \\
& & \sum_{1\leq i< j}^{p+1}(-1)^{i-1}\varphi\big(\alpha^2(x_1),...,\widehat{x}_i,...,\widehat{[x_i,x_j]_\alpha},...,
\alpha^2(x_{p+1}),L([x_i,x_j]_\alpha)\cdot\alpha(z)\big)\end{aligned}$$ and $$\begin{aligned}
& & \eta_{22}(\varphi)(x_1,...,x_{p+1},z) \\
&=& \sum_{1\leq i< j}^{p+1}(-1)^i\varphi\big(\alpha^2(x_1),...,\widehat{x}_i,...,\widehat{\alpha(x_j)},...,
\alpha^2(x_{p+1}),\big(L(\alpha(x_i))\cdot(L(x_j)\cdot z)\big)\big) \\
&+& \sum_{1\leq i< j}^{p+1}(-1)^{i-1}\varphi\big(\alpha^2(x_1),...,\widehat{\alpha(x_i)},...,\widehat{x}_j,...,
\alpha^2(x_{p+1}),\big(L(\alpha(x_j))\cdot(L(x_i)\cdot z)\big)\big).\end{aligned}$$ Then applying the Lemma \[3.1\] to $x_i,\ x_j\in \mathcal{L}(\mathcal{N})$ and $z\in \mathcal{N}$, $\eta_{12}+\eta_{21}+\eta_{22}=0$.\
This ends the proof.
The space of $p$-cocycles is defined by $$Z^p(\mathcal{N},\mathbb{K})=\{\varphi\in C^p(\mathcal{N},\mathbb{K}):\delta^p\varphi=0\}$$ and the space of $p$-coboundaries is defined by $$B^p(\mathcal{N},\mathbb{K})=\{\psi=\delta^{p-1}\varphi:\varphi\in C^{p-1}(\mathcal{N},\mathbb{K})\}$$
$B^p(\mathcal{N},\mathbb{K})\subset Z^p(\mathcal{N},\mathbb{K})$
We call $p^{\textrm{th}}$-cohomology group the quotient $$H^p(\mathcal{N},\mathbb{K})=\frac{Z^p(\mathcal{N},\mathbb{K})}{B^p(\mathcal{N},\mathbb{K})}$$
Let $(\mathcal{N},[\cdot ,...,\cdot ])$ be a Nambu-Lie algebra $($see [@Fil] [@W.X]$)$ and $\{e_i\}_{i= 1}^{ n+1}$ be a basis such that $$\label{filip}[e_1,...,\hat{e}_i,...,e_{n+1}]=(-1)^{i+1}\varepsilon_ie_i\ \ \ \textrm{or}\ \ [e_{i_1},....,e_{i_n}]=(-1)^n\sum_{i=1}^{n+1}\varepsilon_i\epsilon_{i_1,...,i_n}^ie_i$$ where $\varepsilon_i=\pm1$ $($no sum over the $i$ of the $\varepsilon_i$ factors$)$ just introduce signs that affect the different terms of the sum in $i$ and we have used Filippov’s notation.\
Note that we might equally well have the $\epsilon_{i_1,...,i_n}^i$ without signs $\varepsilon_i$ in $\ref{filip}$ by taking $\epsilon_{i_1,...,i_n}^i=\eta^{ij}\epsilon_{i_1,...,i_n,j}$, where $\epsilon_{1,...,n,(n+1)}=1$ and $\eta$ is a $(n+1)\times(n+1)$ diagonal matrix with $+1$ and $-1$ in places indicated by the $\varepsilon_i$’s. We shall keep nevertheless the customary $\varepsilon_i$ factors above as in $e.g.$ [@W.X].\
Let $\alpha :\mathcal{N}\rightarrow \mathcal{N}$ be a morphism of Nambu-Lie algebras. Then using Theorem 1.5, $\mathcal{N}_\alpha=(\mathcal{N},[\cdot ,...,\cdot ]_\alpha,\tilde{\alpha}=(\alpha,...,\alpha))$ is a Hom-Nambu-Lie algebra where the bracket $[\cdot ,...,\cdot ]_\alpha$ is given by $$[e_1,...,\hat{e}_i,...,e_{n+1}]_\alpha=(-1)^{i+1}\varepsilon_i\alpha(e_i)\ \ \ \textrm{or}\ \ [e_{i_1},....,e_{i_n}]_\alpha=(-1)^n\sum_{i=1}^{n+1}\varepsilon_i\epsilon_{i_1,...,i_n}^i\alpha(e_i).$$
We establish the following result.
Any $1$-cochain of the Hom-Nambu-Lie algebra $\mathcal{N}_\alpha$ is a $1$-coboundary (and thus a trivial $1$-cocycle).
Let $\varphi\in C^1(\mathcal{N},\mathbb{K})$ be a $1$-cochain on $\mathcal{N}_\alpha$; then $\varphi$ is determined by its coordinates $\varphi_{i_1,...,i_n}=\varphi(e_{i_1},...,e_{i_n})$. We now show that, in fact, a $1$-cochain on $\mathcal{N}_\alpha$ is a $1$-coboundary, that is, there exists a $0$-cochain $\phi$ such that $$\label{7.4}
\varphi_{i_1,...,i_n}=-\phi([e_{i_1},...,e_{i_n}])=-\sum_{k=1}^{n+1}\varepsilon_k\epsilon_{i_1,...,i_n}^k\phi_k,$$ where $\phi_k=\phi\circ\alpha(e_k)$. Indeed, given $\varphi$, the $0$-cochain $\phi$ defined by $$\phi_k=-\frac{\varepsilon_k}{n!}\sum_{i_1...i_n}^{n+1}\epsilon_k^{i_1,...,i_n}\varphi_{i_1,...,i_n}$$ has the desired property: $$\begin{aligned}
-\phi([e_{i_1},...,e_{i_n}])&=&-\sum_{k=1}^{n+1}\varepsilon_k\epsilon_{i_1,...,i_n}^k\phi_k\nonumber \\
&=& \sum_{k=1}^{n+1}\epsilon_{i_1,...,i_n}^k \frac{\varepsilon_k^2}{n!}\sum_{j_1...j_n}^{n+1}\epsilon_k^{j_1,...,j_n}\varphi_{j_1,...,j_n}\nonumber\\
&=& \frac{1}{n!}\sum_{j_1...j_n}^{n+1}\epsilon_{i_1,...,i_n}^{j_1,...,j_n}\varphi_{j_1,...,j_n}=\varphi_{i_1,...,i_n}
\end{aligned}$$ which proves the lemma.
Deformation of $n$-ary Hom-Nambu-Lie algebras
==============================================
Let $\mathbb{K}[[t]]$ be the power series ring in one variable $t$ and coefficients in $\mathbb{K}$ and $\mathcal{N}[[t]]$ be the set of formal series whose coefficients are elements of the vector space $\mathcal{N}$, ($\mathcal{N}[[t]]$ is obtained by extending the coefficients domain of $\mathcal{N}$ from $\mathbb{K}$ to $\mathbb{K}[[t]]$). Given a $\mathbb{K}$-$n$-linear map $\varphi:\mathcal{N}\times...\times \mathcal{N}\rightarrow \mathcal{N}$, it admits naturally an extension to a $\mathbb{K}[[t]]$-$n$-linear map $\varphi:\mathcal{N}[[t]]\times...\times \mathcal{N}[[t]]\rightarrow \mathcal{N}[[t]]$, that is, if $x_i={\displaystyle}\sum_{j\geq0}a_i^jt^j$, $1\leq i\leq n$ then $\varphi(x_1,...,x_n)={\displaystyle}\sum_{j_1,...,j_n\geq0}t^{j_1+...+j_n}\varphi(a_1^{j_1},...,a_n^{j_n})$. The same holds for linear map.
Let $(\mathcal{N},[\cdot ,...,\cdot ],\widetilde{\alpha}),\ \widetilde{\alpha}=(\alpha_i)_{1\leq i\leq n-1}$ be a Hom-Nambu-Lie algebra. A formal deformation of the Hom-Nambu-Lie algebra $\mathcal{N}$ is given by a $\mathbb{K}[[t]]$-$n$-linear map $$[\cdot ,...,\cdot ]_t:\mathcal{N}[[t]]\times...\times \mathcal{N}[[t]]\rightarrow \mathcal{N}[[t]]$$of the form $[\cdot ,...,\cdot ]_t={\displaystyle}\sum_{i\geq0}t^i[\cdot ,...,\cdot ]_i$ where each $[\cdot ,...,\cdot ]_i$ is a $\mathbb{K}[[t]]$-$n$-linear map $[\cdot ,...,\cdot ]_i:\mathcal{N}\times...\times \mathcal{N}\rightarrow \mathcal{N}$ (extending to be $\mathbb{K}[[t]]$-$n$-linear), and $[\cdot ,...,\cdot ]_0=[\cdot ,...,\cdot ]$ such that for $(x_i)_{1\leq i\leq n-1},\ (y_i)_{1\leq i\leq n}\in \mathcal{N}$ $$\big[\alpha_1(x_1),....,\alpha_{n-1}(x_{n-1}),[y_1,....,y_n]_t\big]_t=$$ $$\sum_{i=1}^{n-1}\big[\alpha_1(y_1),....,\alpha_{i-1}(y_{i-1}),[x_1,....,x_{n-1},y_i]_t
,\alpha_i(y_{i+1}),...,\alpha_{n-1}(y_n)\big]_t.$$ The deformation is said to be of order $k$ if $[\cdot ,...,\cdot ]_t={\displaystyle}\sum_{i=0}^kt^i[\cdot ,...,\cdot ]_i$ and infinitesimal if $t^2=0$.\
In terms of elements $x=(x_i)_{1\leq i\leq n-1},\ y=(y_i)_{1\leq i\leq n-1}\in \mathcal{L}(\mathcal{N})$ and setting $z=y_n$ the above condition reads $$\label{8.1} L_t([x ,y ]_{\alpha})\cdot \alpha_n(z)=L_t(\tilde{\alpha}(x))\cdot \big(L_t(y)\cdot z\big)-L_t(\tilde{\alpha}(y))\cdot \big(L_t(x).z\big)$$ where $L_t(x)\cdot z=[x_1,...,x_{n-1},z]_t$ and $\tilde{\alpha}(x)=(\alpha_i(x_i))_{1\leq i\leq n-1}$.\
Now let $(\mathcal{N},[\cdot ,...,\cdot ],\alpha)$ be a multiplicative Hom-Nambu-Lie algebra (i.e. $\alpha_1=...=\alpha_{n-1}=\alpha$).\
Eq. implies, keeping only the terms linear in $t$ and writing $\psi=[\cdot ,...,\cdot ]_1$ for the first-order term, $$\begin{aligned}
& &\big[\alpha(x_1),....,\alpha(x_{n-1}),\psi(y_1,....,y_n)\big] + \psi\big(\alpha(x_1),....,\alpha(x_{n-1}),[y_1,....,y_n]\big) \\
&=& \sum_{i=1}^n \big[\alpha(y_1),....,\alpha(y_{i-1}),\psi(x_1,....,x_{n-1},y_i)
,\alpha(y_{i+1}),...,\alpha(y_n)\big] \\
&+&\sum_{i=1}^n\psi\big(\alpha(y_1),....,\alpha(y_{i-1}),[x_1,....,x_{n-1},y_i]
,\alpha(y_{i+1}),...,\alpha(y_n)\big).
\end{aligned}$$ This expression may be read as the $1$-cocycle condition $\delta^1\psi=0$ for the $\mathcal{N}$-valued cochain $\psi$. In terms of $x,\ y\in \mathcal{L}(\mathcal{N})$ it may be written, (setting again $y_n=z$ ), as $$\begin{aligned}
\delta^1\psi(x,y,z) &=& \psi(\alpha(x),L(y)\cdot z)-
\psi(\alpha(y),L(x)\cdot z)-\psi([x ,y]_{\alpha},\alpha(z)) \\
&+&L(\alpha(x))\cdot \psi(y,z)- L(\alpha(y))\cdot \psi(x,z)+\big(\psi(x,\ \ )\cdot y\big)\bullet_{\alpha}\alpha(z)\nonumber
\end{aligned}$$ where $$\big(\psi(x,\ \ )\cdot y\big)\bullet_{\alpha}\alpha(z)={\displaystyle}\sum_{i=1}^{n-1}[\alpha(y_1),...,\psi(x,y_i),...,\alpha(y_{n-1}),\alpha(z)].$$
A $p$-cochain is a $(p+1)$-linear map $\varphi:\mathcal{L}(\mathcal{N})\otimes...\otimes \mathcal{L}(\mathcal{N})\wedge \mathcal{N} \longrightarrow \mathcal{N} $, such that $$\alpha\circ\varphi(x_1,...,x_p,z)=\varphi(\alpha(x_1),...,\alpha(x_p),\alpha(z)).$$ We denote the set of $p$-cochains by $C^p(\mathcal{N},\mathcal{N})$
We call, for $p\geq 1$, the $p$-coboundary operator of the multiplicative Hom-Nambu-Lie algebra $(\mathcal{N},[\cdot ,...,\cdot ],\alpha)$ the linear map $\delta^p:C^p(\mathcal{N},\mathcal{N})\rightarrow C^{p+1}(\mathcal{N},\mathcal{N})$ defined by
$$\begin{aligned}
\delta^p\psi(x_1,...,x_p,x_{p+1},z) &=&\sum_{1\leq i<j}^{p+1}(-1)^i\psi\big(\alpha(x_1),...,\widehat{\alpha(x_i)},...,
\alpha(x_{j-1}),[x_i,x_j]_{\alpha},...,\alpha(x_{p+1}),\alpha(z)\big)\nonumber\\
&+& \sum_{i=1}^{p+1}(-1)^i\psi\big(\alpha(x_1),...,\widehat{\alpha(x_i)},...,
\alpha(x_{p+1}),L(x_i)\cdot z\big) \nonumber\\
&+& \sum_{i=1}^{p+1}(-1)^{i+1}L(\alpha^p(x_i))\cdot \psi\big(x_1,...,\widehat{x_i},...,
x_{p+1},z\big)\nonumber\\
&+& (-1)^p\big(\psi(x_1,...,x_p,\ )\cdot x_{p+1}\big)\bullet_{\alpha}\alpha^p(z)
\end{aligned}$$
where $$\big(\psi(x_1,...,x_p,\ )\cdot x_{p+1}\big)\bullet_{\alpha}\alpha^p(z)={\displaystyle}\sum_{i=1}^{n-1}[\alpha^p(x_{p+1}^1),...,\psi(x_1,...,x_p,x_{p+1}^i ),...,\alpha^p(x_{p+1}^{n-1}),\alpha^p(z)],$$\
for all $x_i=(x_i^j)_{1\leq j\leq n-1}\in \mathcal{L}(\mathcal{N}),\ 1\leq i\leq p+1$, $z\in \mathcal{N}$ and $\widehat{x}_i$ indicates that $x_i$ is omitted.
\[opercob\]Let $\psi\in C^p(\mathcal{N},\mathcal{N})$ be a $p$-cochain. Then $$\delta^{p+1}\circ\delta^p(\psi)=0.$$
Let $\psi$ be a $p$-cochain, $x_i=(x_i^j)_{1\leq j\leq n-1}\in \mathcal{L}(\mathcal{N}),\ 1\leq i\leq p+2$, and $z\in \mathcal{N}$. We can write $\delta^p$ and $\delta^{p+1}\circ\delta^p$ as $$\begin{aligned}
& & \delta^p=\delta_1^p+\delta_2^p+\delta_3^p+\delta_4^p, \\
\textrm{and} & & \delta^{p+1}\circ\delta^p= \sum_{i,j=1}^4\eta_{ij},
\end{aligned}$$ where $\eta_{ij}=\delta_i^{p+1}\circ\delta_j^p$ and $$\begin{aligned}
\delta_1^p\psi(x_1,...,x_{p+1},z )&=& \sum_{1\leq i<j}^{p+1}(-1)^i\psi\big(\alpha(x_1),...,\widehat{x_i},...,[x_i,x_j]_{\alpha},...,\alpha(x_{p+1}),\alpha(z)\big) \\
\delta_2^p\psi(x_1,...,x_{p+1},z) &=& \sum_{i=1}^{p+1}(-1)^i\psi\big(\alpha(x_1),...,\widehat{x_i},...,\alpha(x_{p+1}),L(x_i).z\big) \\
\delta_3^p\psi(x_1,...,x_{p+1},z) &=& \sum_{i=1}^{p+1}(-1)^{i+1}L(\alpha^p(x_i))\cdot \psi\big(x_1,...,\widehat{x_i},...,
x_{p+1},z\big) \\
\delta_4^p\psi(x_1,...,x_{p+1},z) &=& (-1)^p\big(\psi(x_1,...,x_p,\ )\cdot x_{p+1}\big)\bullet_{\alpha}\alpha^p(z)
\end{aligned}$$ To simplify the notations we replace $L(x)\cdot z$ by $x\cdot z$.\
The proof that $\eta_{11}+\eta_{12}+\eta_{21}+\eta_{22}=0$ is similar to the proof in Proposition \[propo4.4\].\
On the other hand, we have $$\begin{aligned}
\star \ \eta_{13}\psi(x_1,...,x_{p+2},z )
&=&\sum_{1\leq i<j<k}^{p+2} \big\{(-1)^{k+i}\alpha^{p+1}(x_k)\cdot \psi(\alpha(x_1),...,\widehat{x}_i,...,[x_i,x_j]_\alpha,...,\widehat{x}_k,...,\alpha(z)) \\
&+& (-1)^{j+i}\alpha^{p+1}(x_j)\cdot \psi(\alpha(x_1),...,\widehat{x}_i,...,\widehat{x}_j,...,[x_i,x_k]_\alpha,...,\alpha(z)) \\
&+& (-1)^{j+i-1}\alpha^{p+1}(x_i)\cdot \psi(\alpha(x_1),...,\widehat{x}_i,...,\widehat{x}_j,...,[x_j,x_k]_\alpha,...,\alpha(z))\big\}\\
& & \\
\star\ \eta_{31}\psi(x_1,...,x_{p+2},z ) &=& -\eta_{13}\psi(x_1,...,x_{p+2},z ) \\
& +& \sum_{1\leq i<j}^{p+2} (-1)^{i+j}\alpha^p([x_i,x_j]_\alpha)\cdot \alpha\big(\psi(x_1,...,\widehat{x}_i,...,\widehat{x}_j,...,z)\big)\\
& & \\
\star\ \eta_{33}\psi(x_1,...,x_{p+2},z )
&=&\sum_{1\leq i<j}^{p+2} \Big\{ (-1)^{i+j}\alpha^{p+1}(x_i)\cdot \big(\alpha^p(x_j).\big(\psi(x_1,...,\widehat{x}_i,...,\widehat{x}_j,...,z)\big)\big) \\
&+& (-1)^{i+j-1}\alpha^{p+1}(x_j)\cdot \big(\alpha^p(x_i)\cdot \big(\psi(x_1,...,\widehat{x}_i,...,\widehat{x}_j,...,z)\big)\big)\Big\}
\end{aligned}$$ Then, applying Lemma \[3.1\] to $\alpha^p(x_i)\in \mathcal{L}(\mathcal{N})$, $\alpha^p(x_j)\in \mathcal{L}(\mathcal{N})$ and $\psi(x_1,...,\widehat{x}_i,...,\widehat{x}_j,...,z)\in \mathcal{N}$, we have $$\eta_{13}+\eta_{33}+\eta_{31}=0.$$ By the same calculation, we can prove that $$\eta_{23}+\eta_{32}=0.$$ $$\begin{aligned}
& &\star\ \eta_{14}\psi(x_1,...,x_{p+2},z) \\&=& (-1)^p\sum_{1\leq i<j}^{p+1}(-1)^i\sum_{k=1}^{n-1}\big[\alpha^{p+1}(x_{p+2}^1),..., \psi\big(\alpha(x_1),...,\widehat{x}_i,...,[x_i,x_j]_\alpha,...,\alpha(x_p),\alpha(x_{p+1}^k)\big),...,\alpha^{p+1}(z)\big] \\
&+& (-1)^p\sum_{i=1}^{p+1}(-1)^i\sum_{k,l=1;k\neq l}^{n-1}\big[\alpha^{p+1}(x_{p+2}^1),...,\alpha^p(x_i\cdot x_{p+2}^l), ...,\psi\big(\alpha(x_1),...,\widehat{x}_i,...,\alpha(x_{p+1}),\alpha(x_{p+2}^k)\big),...,\alpha^{p+1}(z)\big] \\
&+& (-1)^p\sum_{i=1}^{p+1}(-1)^i\sum_{k=1}^{n-1}\big[\alpha^{p+1}(x_{p+2}^1),..., \psi\big(\alpha(x_1),...,\widehat{x}_i,...,\alpha(x_{p+1}),x_i\cdot x_{p+2}^k\big),...,\alpha^{p+1}(z)\big].
\end{aligned}$$ The first term in $\eta_{14}$ is equal to $-\eta_{41}$, hence $$\begin{aligned}
& & \star\ (\eta_{14}+\eta_{41})\psi(x_1,...,x_{p+2},z)\\
&=& \sum_{i=1}^{p+1}(-1)^{p+i}\sum_{k,l=1;k\neq l}^{n-1}\big[\alpha^{p+1}(x_{p+2}^1),...,\alpha^p(x_i\cdot x_{p+2}^l),..., \psi\big(\alpha(x_1),...,\widehat{x}_i,...,\alpha(x_{p+1}),\alpha(x_{p+2}^k)\big),...,\alpha^{p+1}(z)\big] \\
&+& \sum_{i=1}^{p+1}(-1)^{p+i}\sum_{k=1}^{n-1}\big[\alpha^{p+1}(x_{p+2}^1),..., \psi\big(\alpha(x_1),...,\widehat{x}_i,...,\alpha(x_{p+1}),x_i\cdot x_{p+2}^k\big),...,\alpha^{p+1}(z)\big]\\
\textrm{and}& & \\
& & \star\ \eta_{24}\psi(x_1,...,x_{p+2},z)\\ &=& \sum_{i=1}^{p+1}\sum_{k=1}^{n-1}(-1)^{p+i}\big[\alpha^{p+1}(x_{p+2}^1),..., \psi\big(\alpha(x_1),...,\widehat{x}_i,...,\alpha(x_{p+1}),\alpha(x_{p+2}^k)\big),...,\alpha^p(x_i\cdot z)\big]\\
&+& \sum_{k=1}^{n-1}\big[\alpha^{p+1}(x_{p+1}^1),..., \psi\big(\alpha(x_1),...,\widehat{x}_i,...,\alpha(x_p),\alpha(x_{p+1}^k)\big),...,\alpha^p(x_{p+2}\cdot z)\big]\\
& & \\
& &\star\ \eta_{42}\psi(x_1,...,x_{p+2},z)\\
&=& (-1)^{p+1}\sum_{i=1}^{p+1}(-1)^i\sum_{k=1}^{n-1}\big[\alpha^{p+1}(x_{p+2}^1),..., \psi\big(\alpha(x_1),...,\widehat{x}_i,...,\alpha(x_{p+1}),x_i\cdot x_{p+2}^k\big),...,\alpha^{p+1}(z)\big].
\end{aligned}$$ Hence, $-\eta_{42}$ and the second term of $ (\eta_{14}+\eta_{41})$ are equal.\
Using the Hom-Nambu identity, for any integers $1\leq i\leq p+1$ and $1\leq k\leq n-1$ we have $$\begin{aligned}
& & \alpha^{p+1}(x_i)\cdot \big[\alpha^p(x_{p+2}^1),..., \psi\big(x_1,...,\widehat{x}_i,...,x_{p+1},x_{p+2}^k\big),...,\alpha^p(z)\big] \\
&= & \sum_{l=1;l\neq k}^{n-1}\Big\{\big[\alpha^{p+1}(x_{p+2}^1),...,\alpha^p(x_i\cdot x_{p+2}^l),...,
\psi\big(\alpha(x_1),...,\widehat{x}_i,...,\alpha(x_{p+1}),\alpha(x_{p+2}^k)\big),...,\alpha^{p+1}(z)\big]\Big\} \\
& &+\big[\alpha^{p+1}(x_{p+2}^1),...,
\psi\big(\alpha(x_1),...,\widehat{x}_i,...,\alpha(x_{p+1}),\alpha(x_{p+2}^k)\big),...,\alpha^p(x_i\cdot z)\big] \\
& & +\big[\alpha^{p+1}(x_{p+1}^1),...,\alpha^p(x_i)\cdot \psi\big(x_1,...,\widehat{x}_i,...,x_{p+1},x_{p+2}^k\big),...,\alpha^{p+1}(z)\big]\end{aligned}$$ When we add the four terms $\eta_{14}$, $\eta_{41}$, $\eta_{24}$ and $\eta_{42}$, we have the following expression $$\begin{aligned}
& & (\eta_{14}+\eta_{41}+\eta_{24}+\eta_{42})\psi(x_1,...,x_{p+2},z)\\
&=& \sum_{i=1}^{p+1}(-1)^{i+p}\sum_{l=1}^{n-1}\big[\alpha^{p+1}(x_{p+2}^1),..., \alpha^p(x_i)\cdot \psi\big(\alpha(x_1),...,\widehat{x}_i,...,\alpha(x_{p+1}),\alpha(x_{p+2}^k)\big),...,\alpha^{p+1}(z)\big] \\
&+& (-1)^{p-1}\sum_{i=1}^{p+1}(-1)^i\sum_{k=1}^{n-1}\alpha^{p+1}(x_i)\cdot \big[\alpha^p(x_{p+2}^1),..., \psi\big(x_1,...,\widehat{x}_i,...,x_{p+1},x_{p+2}^k\big),...,\alpha^p(z)\big]\\
&+& \sum_{k=1}^{n-1}\big[\alpha^{p+1}(x_{p+1}^1),..., \psi\big(\alpha(x_1),...,\widehat{x}_i,...,\alpha(x_p),\alpha(x_{p+1}^k)\big),...,\alpha^p(x_{p+2}\cdot z)\big]\\
\textrm{and}& & \\
& &\star\ \eta_{43}\psi(x_1,...,x_{p+2},z)\\
&=& \sum_{i=1}^{p+1}(-1)^{p+i}\sum_{k=1}^{n-1}\alpha^{p+1}(x_i)\cdot \big[\alpha^{p}(x_{p+2}^1),..., \psi\big(x_1,...,\widehat{x}_i,...,x_{p+1},x_{p+2}^k\big),...,\alpha^{p}(z)\big] \\
&-& \sum_{k=1}^{n-1}\alpha^{p+1}(x_{p+2})\cdot \big[\alpha^p(x_{p+1}^1),..., \psi\big(x_1,...,x_p,x_{p+1}^k\big),...,\alpha^p(z)\big],
\end{aligned}$$ $$\begin{aligned}
& & \eta_{34}\psi(x_1,...,x_{p+2},z)\\
&=& \sum_{i=1}^{p+1}(-1)^{i+p+1}\sum_{l=1}^{n-1}\big[\alpha^{p+1}(x_{p+2}^1),..., \alpha^p(x_i)\cdot \psi\big(\alpha(x_1),...,\widehat{x}_i,...,\alpha(x_{p+1}),\alpha(x_{p+2}^k)\big),...,\alpha^{p+1}(z)\big].
\end{aligned}$$ Hence $$\begin{aligned}
& & (\eta_{14}+\eta_{41}+\eta_{24}+\eta_{42}+\eta_{34}+\eta_{43})\psi(x_1,...,x_{p+2},z)=-\eta_{44}\psi(x_1,...,x_{p+2},z)\\
&=& -\sum_{i=1}^{n-1}\sum_{k=1}^{n-1}\big[\alpha^{p+1}(x_{p+2}^1),..., [\alpha^p(x_{p+1}^1),..., \psi\big(x_1,...,x_p,x_{p+1}^k\big),...,\alpha^p(x_{p+2}^i)],...,\alpha^{p+1}(x_{p+2}^{n-1}),\alpha^{p+1}(z)\big]\\
&=&-\sum_{k=1}^{n-1}\alpha^{p+1}(x_{p+2})\cdot [\alpha^p(x_{p+1}^1),..., \psi\big(x_1,...,x_p,x_{p+1}^k\big),...,\alpha^p(z)]\\
&+&\sum_{k=1}^{n-1}[\alpha^{p+1}(x_{p+1}^1),..., \psi\big(\alpha(x_1),...,\alpha(x_p),\alpha(x_{p+1}^k)\big),...,\alpha^p(x_{p+2}\cdot z)].\end{aligned}$$ Then, we have $$\eta_{14}+\eta_{41}+\eta_{24}+\eta_{42}+\eta_{34}+\eta_{43}+\eta_{44}=0,$$ which ends the proof.
The space of $p$-cocycles is defined by $$Z^p(\mathcal{N},\mathcal{N})=\{\varphi\in C^p(\mathcal{N},\mathcal{N}):\delta^p\varphi=0\},$$ and the space of $p$-coboundaries is defined by $$B^p(\mathcal{N},\mathcal{N})=\{\psi=\delta^{p-1}\varphi:\varphi\in C^{p-1}(\mathcal{N},\mathcal{N})\}.$$
We have $B^p(\mathcal{N},\mathcal{N})\subset Z^p(\mathcal{N},\mathcal{N})$.
We call the $p^{\textrm{th}}$-cohomology group the quotient $$H^p(\mathcal{N},\mathcal{N})=\frac{Z^p(\mathcal{N},\mathcal{N})}{B^p(\mathcal{N},\mathcal{N})}.$$
Cohomology of $n$-ary Hom-algebras induced by cohomology of Hom-Leibniz algebras
================================================================================
Cohomology of ternary Hom-Nambu algebras induced by cohomology of Hom-Leibniz algebras
--------------------------------------------------------------------------------------
In this section we extend to ternary multiplicative Hom-Nambu-Lie algebras Takhtajan’s construction of a cohomology of ternary Nambu-Lie algebras starting from the Chevalley-Eilenberg cohomology of binary Lie algebras (see [@Takhtajan0; @Takhtajan1; @Takhtajan2]). The cohomology of multiplicative Hom-Lie algebras was introduced in [@AEM] and independently in [@Sheng].
The cohomology complex for Leibniz algebras was defined by Loday-Pirashvili in [@lodayPirashvili93]. We extend it to Hom-Leibniz algebras as follows.
Let $(A, [\cdot , \cdot ], \alpha )$ be a Hom-Leibniz algebra and $\mathcal{C}_\mathcal{L}(A,A)$ be the set of cochains $\mathcal{C}^p_\mathcal{L}(A,A)=Hom(\otimes^p A,A)$ for $p\geq 1$. We set $\mathcal{C}^0_\mathcal{L}(A,A)=A$. We define a coboundary operator $d$ by $d\varphi(a)=-[\varphi, a]$ when $\varphi\in \mathcal{C}^0_\mathcal{L}(A,A)$ and for $p\geq 1$, $\varphi\in \mathcal{C}^p_\mathcal{L}(A,A)$, $a_1,\cdots ,a_{p+1}\in A$
$$\begin{aligned}
\label{Leibnizcohomo}
d^p\varphi(a_1,\cdots ,a_{p+1})& =\sum_{k=1}^p(-1)^{k-1} \big[ \alpha^{p-1}(a_k),\varphi(a_1,\cdots,\widehat{a_k},\cdots,a_{p+1})\big]\\
\ & +(-1)^{p+1} \big[ \varphi(a_1\otimes\cdots\otimes a_p),\alpha^{p-1}(a_{p+1}) \big] \nonumber\\
\ & +\sum_{1\leq k<j}^{p+1}{(-1)^k \varphi(\alpha(a_1)\otimes \cdots \otimes\widehat{a_k}\otimes \cdots\otimes\alpha(a_{j-1})\otimes [a_k,a_j]\otimes\alpha(a_{j+1})\otimes \cdots\otimes\alpha(a_{p+1}))}\nonumber\end{aligned}$$
Notice that we recover the classical case when $\alpha =id.$\
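For instance, for $p=1$ (so that $\alpha^{p-1}=id$) the formula reduces to $$d^1\varphi(a_1,a_2)=[a_1,\varphi(a_2)]+[\varphi(a_1),a_2]-\varphi([a_1,a_2]),$$ which, for $\alpha=id$, is the usual differential of a $1$-cochain in the Leibniz cohomology of [@lodayPirashvili93].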
We now aim to derive the cohomology of a ternary Hom-Nambu algebra from the cohomology of Hom-Leibniz algebras, following the procedure described for ternary Nambu algebras in [@Takhtajan0].\
Let $(\mathcal{N},[\cdot ,\cdot,\cdot ],\alpha)$ be a multiplicative ternary Hom-Nambu-Lie algebra. Using Proposition \[HomLeibOfHomNambu\] the triple $(\mathcal{L}(\mathcal{N})=\mathcal{N}\otimes\mathcal{N},[\cdot,\cdot]_\alpha,\alpha)$ where the bracket is defined for $x=x_1\otimes x_2$ and $y=y_1\otimes y_2$ by $$[x,y]_\alpha=[x_1,x_2,y_1]\otimes \alpha (y_2)+\alpha (y_1)\otimes [x_1,x_2,y_2],$$ is a Hom-Leibniz algebra.
Let $(\mathcal{N},[\cdot ,\cdot,\cdot ],\alpha)$ be a multiplicative ternary Hom-Nambu-Lie algebra and $\mathcal{C}^p_\mathcal{N}(\mathcal{N},\mathcal{N})=Hom(\otimes^{2p+1}\mathcal{N},\mathcal{N})$ for $p\geq 1$ be the cochains. Let $\Delta :\mathcal{C}^p_\mathcal{N}(\mathcal{N},\mathcal{N})\rightarrow \mathcal{C}^{p+1}_\mathcal{L}(\mathcal{L},\mathcal{L})$ be the linear map defined for $p=0$ by $$\Delta\varphi(x_1\otimes x_2)=x_1\otimes\varphi( x_2)+\varphi(x_1)\otimes x_2$$ and for $p>0$ $$\begin{aligned}
(\Delta\varphi )(a_1,\cdots ,a_{p+1})=\alpha^{p-1}(x_{2p+1})\otimes\varphi(a_1,\cdots ,a_{p}\otimes x_{2p+2})+\varphi(a_1,\cdots ,a_{p}\otimes x_{2p+1})\otimes \alpha^{p-1}(x_{2p+2}),\end{aligned}$$ where we set $a_j=x_{2j-1}\otimes x_{2j}.$
Then there exists a cohomology complex $(\mathcal{C}^\bullet_\mathcal{N}(\mathcal{N},\mathcal{N}),\delta )$ for ternary Hom-Nambu-Lie algebras such that $$d\circ \Delta =\Delta\circ \delta.$$
The coboundary map $\delta: \mathcal{C}^p_\mathcal{N}(\mathcal{N},\mathcal{N})\rightarrow \mathcal{C}^{p+1}_\mathcal{N}(\mathcal{N},\mathcal{N})$ is defined for $\varphi\in \mathcal{C}^p_\mathcal{N}(\mathcal{N},\mathcal{N})$ by $$\begin{aligned}
\delta\varphi (x_1\otimes\cdots \otimes x_{2p+1})& =\sum_{j=1}^p{\sum_{k=2j+1}^{2p+1}{(-1)^j\varphi(\alpha(x_1)\otimes\cdots \otimes [x_{2j-1},x_{2j},x_k]\otimes \cdots \otimes \alpha(x_{2p+1}))}}+\\
\ & \sum_{k=1}^{p}{(-1)^{k-1}[\alpha^{p-1}(x_{2k-1}),\alpha^{p-1}(x_{2k}),\varphi(x_1\otimes\cdots \otimes\widehat{x_{2k-1}}\otimes \widehat{x_{2k}}\otimes \cdots \otimes x_{2p+1})]}+
\nonumber\\
\ &(-1)^{p+1}[\alpha^{p-1}(x_{2p-1}),\varphi(x_1\otimes \cdots \otimes x_{2p-2}\otimes x_{2p}),\alpha^{p-1}(x_{2p+1})]+
\nonumber\\
\nonumber \ &(-1)^{p+1}[\varphi(x_1\otimes \cdots \otimes x_{2p-1} ),\alpha^{p-1}(x_{2p}),\alpha^{p-1}(x_{2p+1})]\end{aligned}$$
The proof is a particular case of the proof of Theorem \[constakh\].
The theorem shows that one may derive the cohomology complex of ternary Hom-Nambu-Lie algebras from the cohomology complex of Hom-Leibniz algebras.
Cohomology of n-ary Hom-Nambu-Lie algebras induced by cohomology of Hom-Leibniz algebras
----------------------------------------------------------------------------------------
We generalize in this section the result of the previous section to $n$-ary Hom-Nambu-Lie algebras.\
Let $(\mathcal{N},[\cdot ,...,\cdot ],\alpha)$ be a multiplicative $n$-ary Hom-Nambu-Lie algebra and the triple $(\mathcal{L}(\mathcal{N})=\mathcal{N}^{\otimes n-1},[\cdot,\cdot]_\alpha,\alpha)$ be the Hom-Leibniz algebra associated to $\mathcal{N}$, where the bracket is defined in .
\[constakh\]Let $(\mathcal{N},[\cdot ,...,\cdot ],\alpha)$ be a multiplicative $n$-ary Hom-Nambu-Lie algebra and $\mathcal{C}^p_\mathcal{N}(\mathcal{N},\mathcal{N})=Hom(\otimes^{p}\mathcal{L}(\mathcal{N})\otimes\mathcal{N},\mathcal{N})$ for $p\geq 1$ be the cochains. Let $\Delta :\mathcal{C}^p_\mathcal{N}(\mathcal{N},\mathcal{N})\rightarrow \mathcal{C}^{p+1}_\mathcal{L}(\mathcal{L},\mathcal{L})$ be the linear map defined for $p=0$ by $$\begin{aligned}
\Delta\varphi(x_1\otimes\cdots\otimes x_{n-1})={\displaystyle}\sum_{i=1}^{n-1}x_1\otimes\cdots\otimes\varphi(x_i)\otimes\cdots\otimes x_{n-1}\end{aligned}$$ and for $p>0$ by $$\begin{aligned}
\label{defdelta}(\Delta\varphi )(a_1,\cdots ,a_{p+1})=\sum_{i=1}^{n-1}\alpha^{p-1}(x_{p+1}^1)\otimes\cdots\otimes\varphi(a_1,\cdots ,a_{p}\otimes x_{p+1}^i)\otimes\cdots\otimes \alpha^{p-1}(x_{p+1}^{n-1}),\end{aligned}$$ where we set $a_j=x_{j}^1\otimes\cdots\otimes x_j^{n-1}.$
Then there exists a cohomology complex $(\mathcal{C}^\bullet_\mathcal{N}(\mathcal{N},\mathcal{N}),\delta )$ for $n$-ary Hom-Nambu-Lie algebras such that $$d\circ \Delta =\Delta\circ \delta.$$
The coboundary map $\delta: \mathcal{C}^p_\mathcal{N}(\mathcal{N},\mathcal{N})\rightarrow \mathcal{C}^{p+1}_\mathcal{N}(\mathcal{N},\mathcal{N})$ is defined for $\varphi\in \mathcal{C}^p_\mathcal{N}(\mathcal{N},\mathcal{N})$ by $$\begin{aligned}
\delta\varphi(a_1,...,a_p,a_{p+1},x) &=&\sum_{1\leq i<j}^{p+1}(-1)^i\varphi\big(\alpha(a_1),...,\widehat{\alpha(a_i)},...,
\alpha(a_{j-1}),[a_i,a_j]_{\alpha},...,\alpha(a_{p+1}),\alpha(x)\big)\\
&+& \sum_{i=1}^{p+1}(-1)^i\varphi\big(\alpha(a_1),...,\widehat{\alpha(a_i)},...,
\alpha(a_{p+1}),L(a_i).x\big) \\
&+& \sum_{i=1}^{p+1}(-1)^{i+1}L(\alpha^p(a_i))\cdot \varphi\big(a_1,...,\widehat{a_i},...,
a_{p+1},x\big)\\
&+& (-1)^p\big(\varphi(a_1,...,a_p,\ )\cdot a_{p+1}\big)\bullet_{\alpha}\alpha^p(x),
\end{aligned}$$ where $$\big(\varphi(a_1,...,a_p,\ )\cdot a_{p+1}\big)\bullet_{\alpha}\alpha^p(x)={\displaystyle}\sum_{i=1}^{n-1}[\alpha^p(x_{p+1}^1),...,\varphi(a_1,...,a_p,x_{p+1}^i ),...,\alpha^p(x_{p+1}^{n-1}),\alpha^p(x)].$$\
for all $a_i\in \mathcal{L}(\mathcal{N})$, $x\in \mathcal{N}$.
Let $\varphi\in\mathcal{C}^p_\mathcal{N}(\mathcal{N},\mathcal{N})$ and $a_1,\cdots, a_{p+1}\in\mathcal{L}(\mathcal{N})$, where $a_j=x_{j}^1\otimes\cdots\otimes x_j^{n-1}.$\
Then $\Delta\varphi\in \mathcal{C}^{p+1}_\mathcal{L}(\mathcal{L},\mathcal{L})$ and using we can write $d=d_1+d_2+d_3$, where
$$\begin{aligned}
d_1\varphi(a_1,\cdots ,a_{p+1})& =\sum_{k=1}^p(-1)^{k-1} \big[ \alpha^{p-1}(a_k),\varphi(a_1,\cdots,\widehat{a_k},\cdots,a_{p+1})\big]\nonumber\\
d_2\varphi(a_1,\cdots ,a_{p+1})\ & =(-1)^{p+1} \big[ \varphi(a_1\otimes\cdots\otimes a_p),\alpha^{p-1}(a_{p+1}) \big] \nonumber\\
d_3\varphi(a_1,\cdots ,a_{p+1})\ & =\sum_{1\leq k<j}^{p+1}{(-1)^k \varphi(\alpha(a_1)\otimes \cdots \otimes\widehat{a_k}\otimes \cdots\otimes\alpha(a_{j-1})\otimes [a_k,a_j]\otimes\alpha(a_{j+1})\otimes \cdots\otimes\alpha(a_{p+1}))}\nonumber\end{aligned}$$
By we have $$\begin{aligned}
& d_1\circ\Delta\varphi(a_1,\cdots ,a_{p+1})\nonumber\\
\ & =\sum_{k=1}^p(-1)^{k-1} \big[ \alpha^{p-1}(a_k),\Delta\varphi(a_1,\cdots,\widehat{a_k},\cdots,a_{p+1})\big]\nonumber\\
\ & =\sum_{k=1}^p(-1)^{k-1}\sum_{i=1}^{n-1} \big[ \alpha^{p-1}(a_k),\alpha^{p-1}(x_{p+1}^1)\otimes\cdots\otimes\varphi(a_1,\cdots,\widehat{a_k},\cdots,x_{p+1}^i)
\otimes\cdots\otimes\alpha^{p-1}(x_{p+1}^{n-1})\big]\nonumber\\
\ & =\sum_{k=1}^p(-1)^{k-1}\sum_{i>j}^{n-1} \alpha^{p}(x_{p+1}^1)\otimes\cdots\otimes L(\alpha^{p-1}(x_k)).\alpha^{p-1}(x_{p+1}^j)\otimes\cdots\otimes\varphi(a_1,\cdots,\widehat{a_k},\cdots,x_{p+1}^i)
\otimes\cdots\otimes\alpha^{p}(x_{p+1}^{n-1})\nonumber\\
\ & +\sum_{k=1}^p(-1)^{k-1}\sum_{j>i}^{n-1} \alpha^{p}(x_{p+1}^1)\otimes\cdots\otimes \varphi(a_1,\cdots,\widehat{a_k},\cdots,x_{p+1}^i)
\otimes\cdots \otimes L(\alpha^{p-1}(x_k)).\alpha^{p-1}(x_{p+1}^j)\otimes\cdots\otimes\alpha^{p}(x_{p+1}^{n-1})\nonumber\\\ & +\sum_{k=1}^p(-1)^{k-1}\sum_{i=1}^{n-1} \alpha^{p}(x_{p+1}^1)\otimes\cdots\otimes L(\alpha^{p-1}(x_k)).\varphi(a_1,\cdots,\widehat{a_k},\cdots,x_{p+1}^i)
\otimes\cdots\otimes\alpha^{p}(x_{p+1}^{n-1})\nonumber\\
\ & =\sum_{k=1}^p(-1)^{k-1}\sum_{i>j}^{n-1} \alpha^{p}(x_{p+1}^1)\otimes\cdots\otimes L(\alpha^{p-1}(x_k)).\alpha^{p-1}(x_{p+1}^j)\otimes\cdots\otimes\varphi(a_1,\cdots,\widehat{a_k},\cdots,x_{p+1}^i)
\otimes\cdots\otimes\alpha^{p}(x_{p+1}^{n-1})\nonumber\\
\ & +\sum_{k=1}^p(-1)^{k-1}\sum_{j>i}^{n-1} \alpha^{p}(x_{p+1}^1)\otimes\cdots\otimes \varphi(a_1,\cdots,\widehat{a_k},\cdots,x_{p+1}^i)
\otimes\cdots \otimes L(\alpha^{p-1}(x_k)).\alpha^{p-1}(x_{p+1}^j)\otimes\cdots\otimes\alpha^{p}(x_{p+1}^{n-1})\nonumber\\
\ & +\Delta\circ\delta_3\circ\varphi(a_1,\cdots ,a_{p+1})\nonumber\\
\ & =\Lambda_1+\Lambda_2+\Delta\circ\delta_3\circ\varphi(a_1,\cdots ,a_{p+1})\nonumber\end{aligned}$$ where $$\begin{aligned}
& \Lambda_1=\sum_{k=1}^p(-1)^{k-1}\sum_{i>j}^{n-1} \alpha^{p}(x_{p+1}^1)\otimes\cdots\otimes L(\alpha^{p-1}(x_k)).\alpha^{p-1}(x_{p+1}^j)\otimes\cdots\otimes\varphi(a_1,\cdots,\widehat{a_k},\cdots,x_{p+1}^i)
\otimes\cdots\otimes\alpha^{p}(x_{p+1}^{n-1})\nonumber\\
\ & \Lambda_2=\sum_{k=1}^p(-1)^{k-1}\sum_{j>i}^{n-1} \alpha^{p}(x_{p+1}^1)\otimes\cdots\otimes \varphi(a_1,\cdots,\widehat{a_k},\cdots,x_{p+1}^i)
\otimes\cdots \otimes L(\alpha^{p-1}(x_k)).\alpha^{p-1}(x_{p+1}^j)\otimes\cdots\otimes\alpha^{p}(x_{p+1}^{n-1})\nonumber\end{aligned}$$ Similarly we can prove that $$\begin{aligned}
d_2\circ\Delta\varphi(a_1,\cdots ,a_{p+1})
\ & =\Delta\circ\delta_4\varphi(a_1,\cdots ,a_{p+1})\nonumber\end{aligned}$$ and $$\begin{aligned}
& d_3\Delta\circ\varphi(a_1,\cdots ,a_{p+1})\nonumber\\
\ & =\sum_{1\leq k<j}^{p}{(-1)^k \Delta\circ\varphi(\alpha(a_1)\otimes \cdots \otimes\widehat{a_k}\otimes \cdots\otimes\alpha(a_{j-1})\otimes [a_k,a_j]\otimes\alpha(a_{j+1})\otimes \cdots\otimes\alpha(a_{p+1}))}\nonumber\\
\ & +\sum_{ k=1}^{p+1}{(-1)^k \varphi(\alpha(a_1)\otimes \cdots \otimes\widehat{a_k}\otimes \cdots\otimes\alpha(a_{p})\otimes [a_k,a_{p+1}])}\nonumber\\
\ & =\Delta\circ\delta_1\varphi(a_1,\cdots ,a_{p+1})+\Delta\circ\delta_2\varphi(a_1,\cdots ,a_{p+1})\nonumber\\
\ & +\Lambda'_1+\Lambda'_2\nonumber\end{aligned}$$ where $\Lambda'_1=-\Lambda_1$ and $\Lambda'_2=-\Lambda_2$.\
Finally we have $$d\circ\Delta=d_1\circ\Delta+d_2\circ\Delta+d_3\circ\Delta=\Delta\circ\delta_3+\Delta\circ\delta_4+
\Delta\circ\delta_1+\Delta\circ\delta_2=\Delta\circ\delta$$ where $\delta=\delta_1+\delta_2+\delta_3+\delta_4$ as defined in the proof of \[opercob\].
If $d^2=0$, then $\delta^2=0$.\
In fact, we have $d\circ\Delta=\Delta\circ\delta$, hence $$\Delta\circ\delta^2 =\Delta\circ\delta\circ\delta =d\circ\Delta\circ\delta=d\circ d\circ\Delta=d^2\circ\Delta=0.$$
[10]{}
Ammar F. and Makhlouf A., *Hom-Lie algebras and Hom-Lie admissible superalgebras*, Journal of Algebra, Vol. 324 (7), (2010) 1513–1528. Ammar F., Ejbehi Z. and Makhlouf A., *Cohomology and Deformations of Hom-algebras,* arXiv:1005.0456 (2010). Ammar F., Makhlouf A. and Silvestrov S., *Ternary q-Virasoro-Witt Hom-Nambu-Lie algebras,* Journal of Physics A: Mathematical and Theoretical, **43** 265204 (2010). Ataguema H., Makhlouf A. *Deformations of ternary algebras*, Journal of Generalized Lie Theory and Applications, vol. **1,** (2007), 41–45. [to3em]{} *Notes on cohomologies of ternary algebras of associative type*, Journal of Generalized Lie Theory and Applications **3** no. 3, (2009), 157–174 Ataguema H., Makhlouf A. and Silvestrov S., *Generalization of $n$-ary Nambu algebras and beyond*, Journal of Mathematical Physics **50**, 1 (2009). Arnlind J., Makhlouf A. and Silvestrov S., *Ternary Hom-Nambu-Lie algebras induced by Hom-Lie algebras*, Journal of Mathematical Physics, **51**, 043515 (2010). Bagger J. and Lambert N., *Gauge Symmetry and Supersymmetry of Multiple M2-Branes,* arXiv:0711.0955v2 \[hep-th\], (2007). Phys. Rev. D 77, 065008 (2008). Basu A. and Harvey J.A., *The M2-M5 brane system and a generalzed nahm equation*, Nucl. Phys. B **713** (2005). Benayadi S. and Makhlouf A., *Hom-Lie algebras with symmetric invariant nondegenerate bilinear forms*, e-print arXiv 0113.111 (2010). Cassas J.M., Loday J.-L. and Pirashvili *Leibniz $n$-algebras*, Forum Math. **14** (2002), 189–207. Daletskii Y.L. and Takhtajan L.A., *Leibniz and Lie Structures for Nambu algebra*, Letters in Mathematical Physics **39** (1997) 127–141. De Azcarraga J. A. and Izquierdo J.M. *n-ary algebras: a review with applications*, e-print arXiv 1005.1028 (2010). Filippov V., *n-Lie algebras*, Sibirsk. Mat. Zh. 26, 126-140 (1985) (English transl.: Siberian Math. J. 26, 879-891 (1985)). Gautheron P., Some Remarks Concerning Nambu Mechanics, Letters in Mathematical Physics **37** (1996) 103–116. Larsson D., Silvestrov S. D.: *Quasi-Hom-Lie algebras, Central Extensions and 2-cocycle-like identities,* J. of Algebra **288**, 321–344 (2005). Ling W. X. , *On the structure of n-Lie algebras*, PhD thesis, Siegen, 1993. Loday J.L. and Pirashvili T., *Universal enveloping algebras of Leibniz algebras and (co)homology*, Math. Ann. **296** 139–158 (1993). Makhlouf A., *Paradigm of Nonassociative Hom-algebras and Hom-superalgebras*, *Proceedings of Jordan Structures in Algebra and Analysis Meeting*, Eds: J. Carmona Tapia, A. Morales Campoy, A. M. Peralta Pereira, M. I. Ramírez Álvarez, Publishing house: Circulo Rojo, ( 145–177). Makhlouf A. and Silvestrov S. D., *Hom-algebra structures*, J. Gen. Lie Theory Appl. **2** (2) , 51–64 (2008).
[to3em]{}*Hom-Lie admissible Hom-coalgebras and Hom-Hopf algebras*, Published as Chapter 17, pp 189-206, [S. Silvestrov, E. Paal, V. Abramov, A. Stolin, (Eds.), Generalized Lie theory in Mathematics, Physics and Beyond, Springer-Verlag, Berlin, Heidelberg, (2008).]{}
[to3em]{}*Notes on Formal deformations of Hom-Associative and Hom-Lie algebras*, Forum Mathematicum, vol. **22** (4) (2010), 715–739.
[to3em]{}*Hom-Algebras and Hom-Coalgebras*, Journal of Algebra and Its Applications Vol. **9**, DOI: 10.1142/S0219498810004117, (2010). Markl M. and Remm E. *(Non-)Koszulity of operads for n-ary algebras, cohomology and deformations*, e-print arXiv:0907.1505v1 (2009). Nambu Y., *Generalized Hamiltonian mechanics*, Phys. Rev. D7, 2405-2412 (1973) Okubo S. *Triple products and Yang-Baxter equation (I): Octonionic and quaternionic triple systems*, J. Math.Phys. 34, 3273-3291 (1993). Kerner R., *Ternary algebraic structures and their applications in physics*, in the “Proc. BTLP 23rd International Colloquium on Group Theoretical Methods in Physics”, ArXiv math-ph/0011023, (2000). [to3em]{}*Z3-graded algebras and non-commutative gauge theories*, dans le livre “Spinors, Twistors, Clifford Algebras and Quantum Deformations”, Eds. Z. Oziewicz, B. Jancewicz, A. Borowiec, pp. 349-357, Kluwer Academic Publishers (1993). [to3em]{}*The cubic chessboard : Geometry and physics, Classical Quantum Gravity* 14, A203-A225 (1997). Sheng Y., *Representations of hom-Lie algebras*, arXiv:1005.0140v1 \[math-ph\] (2010). Takhtajan L., *On foundation of the generalized Nambu mechanics*, Comm. Math. Phys. **160** (1994), 295-315.
[to3em]{}*A higher order analog of Chevally-Eilenberg complex and deformation theory of n-algebras*, St. Petersburg Math. J. **6** (1995), 429-438. [to3em]{}*Leibniz and Lie algebra structures for Nambu algebra*, Lettres in Mathematical Physics **39:** 127-141, (1997). Yau D., *On n-ary Hom-Nambu and Hom-Nambu-Lie algebras*, arXiv:1004.2080v1 (2010). [to3em]{}*on $n$-ary Hom-Nambu and Hom-Maltsev algebras*, arXiv:1004.4795v1 (2010). [to3em]{}*A Hom-associatve analogue of $n$-ary Hom-Nambu algebras*, arXiv:1005.2373v1 (2010).
---
abstract: 'In this paper we introduce a new hierarchical model for the simultaneous detection of brain activation and estimation of the shape of the hemodynamic response in multi-subject fMRI studies. The proposed approach circumvents a major stumbling block in standard multi-subject fMRI data analysis, in that it both allows the shape of the hemodynamic response function to vary across region and subjects, while still providing a straightforward way to estimate population-level activation. An efficient estimation algorithm is presented, as is an inferential framework that not only allows for tests of activation, but also for tests for deviations from some canonical shape. The model is validated through simulations and application to a multi-subject fMRI study of thermal pain.'
author:
- |
David Degras$^{\rm a}$ and Martin A. Lindquist$^{\rm b} \footnote{Corresponding author: 615 N. Wolfe Street, E3634; Baltimore, MD 21205, e-mail: [email protected]. This research was supported by NIH grant R01EB016061.}$\
\
$^{\rm a}$[*[Department of Mathematical Sciences, DePaul University, USA.]{}*]{}\
$^{\rm b}$[*[Department of Biostatistics, Johns Hopkins University, USA.]{}*]{}\
bibliography:
- 'fmri.bib'
title: 'A Hierarchical Model for Simultaneous Detection and Estimation in Multi-subject fMRI Studies'
---
INTRODUCTION
============
Depending on their scientific goals, researchers in functional magnetic resonance imaging (fMRI) often choose modeling strategies with the intent to either [*detect*]{} the magnitude of activation in a certain brain region, or [*estimate*]{} the shape of the hemodynamic response associated with the task being performed [@poldrack2011handbook]. While most of the focus in neuroimaging to date has been on detection [@lindquist2008statistical], the magnitude of evoked activation cannot be accurately measured without either assuming or measuring timing and shape information as well. In practice, many statistical models of fMRI data attempt to simultaneously incorporate information about the shape, timing, and magnitude of task-evoked hemodynamic responses.
As an example, consider the general linear model (GLM) approach [@worsley], which is arguably the dominant approach towards analyzing fMRI data. It models the fMRI time series as a linear combination of several different signal components and tests whether activity in a brain region is related to any of them. Typically the shape of the hemodynamic response is assumed [*a priori*]{}, using a canonical hemodynamic response function (HRF) [@friston1998event; @glover1999deconvolution], and the focus of the analysis is on obtaining the magnitude of the response across the brain. However, it is well-known that the shape of the HRF varies both across space and subjects [@aguirre; @Schacter; @Handwerker2004; @Badillo2013]; thus assuming a constant shape across all voxels and subjects may give rise to significant bias in large parts of the brain [@lindquist6; @lindquist8]. The constant HRF assumption can be relaxed by expressing it as a linear combination of several known basis functions. This can be done within the GLM framework by convolving the same stimulus function with multiple canonical waveforms and including them as multiple columns of the design matrix for each condition. The coefficients for an event type constructed using different basis functions can then be combined to fit the evoked HRF in that particular area of the brain.
The ability to use basis sets to capture variations in hemodynamic responses depends both on the number and shape of the reference waveforms that are used in the model. For example, the finite impulse response (FIR) basis set, consists of one free parameter for every time-point following stimulation in every cognitive event-type that is modeled [@glover1999deconvolution; @goutte]. Therefore, it can be used to estimate HRFs of arbitrary shape for each event type in every voxel of the brain. Another, perhaps more common approach, is to use the canonical HRF together with its temporal and dispersion derivatives to allow for small shifts in both the onset and width of the HRF. Other choices of basis sets include those composed of principal components [@aguirre; @woolrich], cosine functions [@zarahn2002using], radial basis functions [@riera], spectral basis sets [@Liao02] and inverse logit functions [@lindquist6]. For a critical evaluation of a number of commonly used basis sets, see [@lindquist6] and [@lindquist8].
Though basis sets allow the constant HRF assumption to be relaxed in the GLM framework, they are still not without problems, particularly when performing multi-subject analysis. Most analyses of multi-subject fMRI data involve two separate models. A first-level GLM is performed separately on each subject’s data, providing subject-specific contrasts of parameter estimates. A second-level model is thereafter used to provide population inference on whether the contrasts are significantly different from zero and assess the effects of any group-level predictors, such as group status or behavioral performance. However, it is problematic to define appropriate first-level contrasts that truly capture the behavior we are interested in detecting, when multiple basis sets are included in the model. For example, when each condition consists of multiple basis functions it is not self-evident how to properly define a relevant contrast comparing the difference in activation between the two conditions.
There have been a number of suggestions about how to deal with this issue, most notably using only the “main” basis set [@kiehl2001event]. In the case of the canonical HRF and its derivatives, this entails only using the coefficient corresponding to the canonical HRF, and treating the coefficients corresponding to the derivatives as nuisance parameters. While this works relatively well for small deviations from the HRF’s canonical form, it quickly falls apart as the shape begins to differ. To circumvent this problem, [@Calhoun04] suggested using the norm of the coefficients for the canonical HRF and its derivatives. Another suggestion [@lindquist8] is to re-create the HRF after estimation and use the resulting amplitude as the contrast of interest.
In this work we introduce a new approach towards multi-subject analysis of fMRI data that enables us to simultaneously estimate the specific shape of the HRF for a subject in a given voxel, and to obtain a population-level estimate of the magnitude of activation. This approach offers the flexibility of basis sets while retaining the simplicity of multi-subject inference with a single canonical HRF. We also provide an inferential framework that allows us to test both for activations as well as for any differences in HRF shape from some canonical form.
The idea of performing joint estimation and detection is not new in the neuroimaging literature. For example, Makni and colleagues [@makni2005joint; @makni2008fully] have suggested a Bayesian approach towards the detection of brain activity that uses a mixture of two Gaussian distributions as a prior on a latent neural response, whereas the hemodynamic impulse response is constrained to be smooth using a Gaussian prior. In this model all parameters of interest are estimated from the posterior distribution using Gibbs sampling. Later work has provided a number of interesting extensions of this model, including to spatial mixture models [@vincent2010spatially] and reformulating it as a missing data problem that allows for a simplified estimation procedure [@chaari2012fast].
Our suggested model takes a different approach. Here the HRF is modeled as a linear combination of B-spline functions (see e.g., [@Genovese2000] for an early use of spline functions in model fitting). We assume that subject-level HRFs are random draws from a population-level distribution and that for any given voxel the population average response across stimuli will vary only in scale. We provide an efficient algorithm for estimating the model parameters as well as inferential methods. The latter includes both tests of activation and for deviations in the HRF from some canonical form.
This paper is organized as follows. In Section 2 we introduce our hierarchical model. In Sections 3 and 4 we outline an efficient algorithm for estimating the model parameters and performing inference on them. In Sections 5 and 6 we evaluate the performance of the model on a series of simulated data sets and data from an experiment studying the effects of thermal pain. Both the simulated and real data were previously used in a large study of flexible HRF modeling procedures [@lindquist8]. The proposed method is shown to outperform each of the previously tested approaches, which include the canonical GLM plus its derivatives, the smooth FIR model, and the inverse logit model. We conclude with a discussion of the suggested approach.
METHODS
=======
The Hierarchical Model
----------------------
In this section we outline the proposed hierarchical model for simultaneous estimation and detection. For simplicity, we assume only one session per subject and a single experimental group. To study multiple groups, it suffices to apply the model below and the estimation procedure of section \[sec: estimation\] separately to each group. Similarly, the inference method of section \[sec: inference\] easily extends to multiple samples. We assume that all scans have been acquired at the same repetition time $\Delta$ and registered to a standard stereotactic space. For the $j^{th}$ subject ($1\le j \le n$), we model the BOLD response at the $v^{th}$ voxel ($1 \le v \le V$) and $t^{th}$ scan ($1\le t \le T_j$) as $$\label{model}
\underbrace{y_{j}(v,t)}_{\textrm{BOLD}} =\sum_{l=1}^L \int_0^{t\Delta} \underbrace{s_{jl} (t\Delta -\tau)}_{\textrm{stimulus}}\, \underbrace{ h_{j l }(v,\tau) }_{\textrm{HRF}} d\tau+
\sum_{\nu=1}^q d_{j\nu}(v) \underbrace{ \varphi_{j\nu}(t) }_{\textrm{nuisance} } +
\underbrace{\varepsilon_{j}(v,t)}_{\textrm{noise}} .$$ Here the response consists of a linear combination of stimulus-induced signal of interest (represented by the first sum on the right-hand side of the equation), nuisance signal (the second sum) and noise. The stimulus function $s_{j l }$ is a stick-function that has baseline at zero and takes the value one whenever stimuli of the $ l^{th} $ type are presented. The nuisance signals $\varphi_{j\nu}, \, 1\le \nu \le q,$ typically include scanner drift, represented by polynomial and/or cosine basis sets, and physiological noise such as head motion, heart beat, and respiration. The corresponding nuisance parameters $d_{j\nu}$ must be estimated from the data. The subject-level HRF $h_{j l }$ decomposes into
$$\label{hrf model decomp}
h_{j l }(v, \tau) = h_{ l }(v, \tau) + \eta_{j l }(v,\tau) ,$$
where $h_l$ is a population-level HRF (fixed effect) and $\eta_{j l }$ is a subject-specific (random) effect. Hence, each subject-level HRF is assumed to be a random draw from a population with mean $h_{l}$. Thus, similar to other recent methods (e.g. [@sanyal2012bayesian] and [@Zhang2013]) this allows us to borrow strength across subjects to improve subject-specific estimation. The functions $h_l(v, \cdot)$ and $\eta_{j l }(v, \cdot)$ are represented using a set of basis functions $(B_k )_{1\le k \le K}$: $$\label{hrf model function basis}
h_{ l }(v,\tau) = \sum_{k=1}^K \gamma_{ l k} (v) B_{k}(\tau), \quad
\eta_{j l }(v,\tau) = \sum_{k=1}^K \xi_{j l k} (v) B_{k}(\tau),$$ so estimation reduces to solving for the coefficients $\gamma_{ l k} (v)$ and $\xi_{j l k} (v)$. In practice we suggest specifying the $B_k$ as B-splines with regularly spaced knots over a time interval where the HRF is believed to be non-zero, say in the range between $0$ and $30$ seconds. B-spline basis sets have several desirable features. First, the coefficients of a function in a B-spline basis are very close to the function itself, i.e., the function values at the knots (possibly up to a scaling factor). For this reason, B-spline coefficients are immediately interpretable and inference of local features of the HRF is greatly facilitated. In addition, the compact support of B-splines typically induces sparsity in the design matrices and thus reduces the computational load.
Noting that the shape of the HRF at a given brain location is mostly determined by physiological factors that are independent of the nature of the stimulus, we further assume that the population-level HRFs $h_l(v,\cdot), \, 1\le l \le L,$ have the same shape for all conditions and differ only in scale: $$\label{simplification hrf}
\gamma_{ l k}(v) = \beta_{ l } (v)\, \gamma_{k}(v) .$$ Here the terms $\gamma_k (v), \, 1\le k \le K,$ determine the shape of the HRF, while $\beta_l (v)$ determines the amplitude of the response to stimulus of the $l^{th}$ type. To make these parameters identifiable, we impose the scale constraint $\sum_k \gamma_{k}(v)^2=1$ and the orientation constraint $\gamma_{k_0}(v)>0$, with $k_0 = \mathrm{arg\,max}_k |\gamma_{k}(v)|$. Assumption considerably reduces the number of HRF parameters from $KL$ to $K+L$ (the same modeling assumption is used in e.g., [@makni2005joint]), which allows one to use a reasonably large number of basis functions while maintaining a good estimation accuracy. On the other hand, the product form of makes model nonlinear with respect to the parameters $\beta_l(v)$ and $\gamma_k(v)$. Note that the shape coefficients $\gamma_{k}(v)$ are well defined only if at least one of the HRFs $h_{l}(v,\cdot)$ is not identically zero, that is, if at least one experimental condition induces an activation at voxel $v$.
The random effects $ \xi_{j l k} (v)$ in represent subject-specific deviations from the group-level HRF coefficients $ \gamma_{ l k} (v)$. They are assumed to be independent across subjects and conditions. In addition they are assumed to be Gaussian random vectors with mean zero and correlation structure that is stationary in time and constant in space: $$\label{cov paradigm-related random effects}
\begin{array}{l}
\mathrm{Cov}( \xi_{j l k}(v), \xi_{j' l ' k'}(v)) = ( \delta_{jj'} \delta_{ l l '}) \, \sigma_{\xi l}^2(v) \, \rho_{\xi l }(|k-k'|) .
\end{array}$$ In the previous equation, $\sigma_{\xi l}^2(v)$ denotes the common variance of the $ \xi_{j l k} (v)$ ($1\le j \le n,\, 1\le k \le K$), $\rho_{\xi l}$ is an autocorrelation function, and $\delta$ is the Kronecker delta ($\delta_{xy}=1$ if $x=y$, $\delta_{xy}=0$ if $x\ne y$). Note that $\rho_{\xi l }(0) = 1$.
To specify the dependence structure of the noise component $\varepsilon$, we consider a partition of the spatial domain $\mathcal{D}$ into neuro-anatomic parcels $\mathcal{D}_1,\ldots,\mathcal{D}_M$ (e.g., Brodmann areas or any suitable brain atlas). We assume that for each voxel $v$ of a parcel $\mathcal{D}_m $, $\varepsilon_{j}(v,\cdot)$ is a stationary Gaussian AR($p$) process whose variance $\sigma_{\varepsilon m}^2$ and structural parameters $\boldsymbol{\theta}_{\varepsilon m}=(\theta_{\varepsilon 1m} ,\ldots, \theta_{\varepsilon pm})$ only depend on $m$. In other words, $\varepsilon_{j}(v,t) = \sum_{k=1}^p \theta_{\varepsilon km} \,\varepsilon_j(v,t-k) + e_j(v,t)$, where the $e_j(v,t), \, 1\le t \le T_j ,$ are i.i.d. normal innovations. The specification of the noise dependence at the parcel level reflects the belief that this noise is spatially smooth. An alternative way to characterize the spatial smoothness of $\varepsilon$ would be to model the AR parameters $\sigma_{\varepsilon }^2$ and $\boldsymbol{\theta}_{\varepsilon}$ as smooth functions of $v$.
Model Summary {#model-summary .unnumbered}
-------------
For the $j^{th}$ subject and $v^{th}$ voxel, the BOLD time course $\mathbf{y}_{j} (v) = \left( y_{j}(v,1),\ldots, y_{j}(v,T_j)\right)' $ can be expressed in matrix form as $$\label{time course model}
{\mathbf{y}_j}(v) = {\mathbf{X}_j}({\boldsymbol{\beta}}(v) \otimes {\boldsymbol{\gamma}}(v)) +{\mathbf{X}_j}{\boldsymbol{\xi}_j}(v) + {\boldsymbol{\Phi}_j}{\mathbf{d}_j}(v)+ {\boldsymbol{\varepsilon}_j}(v) .$$ As in the standard GLM, the design matrix ${\mathbf{X}_j}$ is the convolution of the stimulus functions $s_{jl}$ with the basis functions $B_k$. To be precise, ${\mathbf{X}_j}= ( \mathbf{X}_{j1},\ldots,\mathbf{X}_{jL} ) $ with ${\mathbf{X}_{jl}}=
( \mathbf{x}_{j1l },\ldots,\mathbf{x}_{jKl} ) $ and $\mathbf{x}_{jkl } = \big( \int_0^{\Delta} B_k(\tau) s_{jl}(\Delta-\tau)d\tau,\ldots, \int_0^{T_j \Delta} B_k(\tau) s_{jl}(T_j \Delta -\tau)d\tau\big)'$. We have also written ${\boldsymbol{\beta}}(v) = ( \beta_{1}(v), \ldots , \beta_{L}(v) )' $ and ${\boldsymbol{\gamma}}(v) = ( \gamma_{1}(v), \ldots , \gamma_{K}(v) )'$ for the amplitude and scale parameters of the population-level HRF; $ {\boldsymbol{\xi}_j}(v) = ( {\boldsymbol{\xi}}_{j1}(v)',\ldots,{\boldsymbol{\xi}}_{jL}(v)' )'$ and ${\boldsymbol{\xi}_{jl}}(v) = ( \xi_{j1l}(v),\ldots,\xi_{jKl}(v))' $ for the subject-specific effects; $ {\boldsymbol{\Phi}_j}= \left(\varphi_{j\nu}(t) \right)_{1\le \nu\le q,\, 1\le t \le T_j} $ and $ {\mathbf{d}}_j (v) = \left(d_{j1}(v),\ldots,d_{jq}(v) \right)' $ for the nuisance signals; and $ {\boldsymbol{\varepsilon}_j}(v) = \left( \varepsilon_j(v,1),\ldots, \varepsilon_j(v,T_j)\right)'$ for the noise. The symbol $\otimes$ denotes the Kronecker product.
The vector ${\mathbf{y}_j}(v)$ has a multivariate normal distribution with mean and covariance $$\label{mean and covariance of y_ij}
\begin{array}{l}
\displaystyle \boldsymbol{\mu}_j (v)= \mathbf{X}_j\left( \boldsymbol{\beta}(v) \otimes \boldsymbol{\gamma}(v) \right)
+ \boldsymbol{\Phi}_j \, \mathbf{d}_j(v), \medskip \\
{\mathbf{V}_j}(v) = {\mathbf{X}_j}\big({\mathbf{D}}_{\xi}(v) \otimes {\mathbf{I}}_K\big){\mathbf{T}}_\xi {\mathbf{X}_j}'
+ {\mathbf{V}}_{\varepsilon jm} ,
\end{array}$$ where ${\mathbf{D}}_\xi(v) = \mathrm{diag}(\sigma^2_{\xi 1}(v),\ldots,\sigma^2_{\xi L}(v))$, ${\mathbf{T}}_{\xi} = \mathrm{diag}({\mathbf{T}}_{\xi 1},\ldots,{\mathbf{T}}_{\xi L}) $ is a block diagonal matrix with ${\mathbf{T}}_{\xi l} = (\rho_{\xi l } (|k-k'| ) )_{1\le k,k' \le K} $, ${\mathbf{I}}_K$ is the $K\times K$ identity matrix, and $ {\mathbf{V}}_{\varepsilon jm}$ is the covariance matrix of ${\boldsymbol{\varepsilon}_j}(v)$ for $v\in \mathcal{D}_m$. Here the first term of ${\mathbf{V}_j}(v)$ corresponds to between-subject variation, while the second term is the within-subject variation.
Table 1 summarizes all model notations.
------------------------------------- ------------------------------------------------------------------------------
$j$ Subject index
$k$ Basis function index
$l$ Condition index
$v$ Voxel index
$n$ Sample size
$K$ Number of basis functions
$L$ Number of experimental conditions
$V$ Number of voxels
$B_k$ Basis function
$T_j$ Number of brain scans for subject $j$
${\mathbf{y}_j}(v) $ fMRI time course for subject $j$ at voxel $v$
${\mathbf{X}_j}$ Design matrix for subject $j$
$\mathbf{X}_{jl} $ Design matrix for subject $j$, condition $l$
${\boldsymbol{\beta}}(v)$ Amplitude parameters for the population HRF at voxel $v$
${\boldsymbol{\gamma}}(v) $ Shape parameters for the population HRF at voxel $v$
$ {\boldsymbol{\xi}_j}(v) $ Deviation of subject $j$ from population HRF at voxel $v$
${\boldsymbol{\xi}_{jl}}(v)$ Deviation of subject $j$ from population HRF for condition $l$ and voxel $v$
$ {\boldsymbol{\Phi}_j}$ Matrix of nuisance signals for subject $j$
$ {\mathbf{d}}_j (v)$ nuisance coefficients for subject $j$ and voxel $v$
$ {\boldsymbol{\varepsilon}_j}(v) $ Noise vector for subject $j$ at voxel $v$
${\mathbf{D}}_\xi(v)$ Variance coefficients of subject effects at voxel $v$
${\mathbf{T}}_{\xi} $ Correlation matrix for subject effects at voxel $v$
${\mathbf{T}}_{\xi l} $ Correlation matrix for subject effects for condition $l$ and voxel $v$
$ {\mathbf{V}}_{\varepsilon jm}$ Covariance matrix of the noise for subject $j$ and parcel $m$
${\mathbf{V}_j}(v)$ Covariance matrix of ${\mathbf{y}_j}(v) $
------------------------------------- ------------------------------------------------------------------------------
: Notations
Estimation {#sec: estimation}
----------
In this section we outline our procedure for estimating the parameters of our model. Our main objective is to estimate the population HRF at each voxel while taking into account the covariance structure of the data. We formulate this objective mathematically as a generalized, penalized, and constrained least squares problem. Since the corresponding objective function is non-convex, our procedure is only guaranteed to yield a local minimum. It is therefore important to select good starting values for the procedure, which we do by constructing a consistent pilot estimator of the HRF. We then provide consistent estimators of the data covariance parameters, after which we optimize the objective function to obtain the final HRF estimates. The entire procedure is performed in the following five steps, each described in detail in a subsequent subsection:
1. For each voxel $v$, estimate the HRF parameters $ \gamma_{kl}(v) $ by Penalized Least Squares (PLS). This pilot estimation does not exploit the HRF shape assumption and does not account for the covariance structure of the data.
2. For each parcel $\mathcal{D}_m$, estimate the parameters $\sigma_{\varepsilon m}^2$ and $\boldsymbol{\theta}_{\varepsilon m}$ of the AR noise process $\varepsilon$ by solving the Yule-Walker equations associated with the predicted errors ${\hat{\boldsymbol{\varepsilon}}_j}(v)$. The ${\hat{\boldsymbol{\varepsilon}}_j}(v)$ are obtained from a least squares fit on the residuals of step 1.
3. Estimate the temporal correlation parameters $\rho_{\xi l }(k)$ of the subject random effects by Maximum Likelihood (ML). The ML estimates are obtained separately on a small sample of voxels and pooled with a suitable statistic (e.g., trimmed mean or median).
4. For each voxel $v$, estimate the between-subject variance $\sigma_{\xi}^2(v)$ by Variance Least Squares (VLS).
5. For each voxel $v$, estimate $ {\boldsymbol{\beta}}(v) $ and $ {\boldsymbol{\gamma}}(v)$ again using a generalized, penalized, and constrained least squares approach.
### Step 1: Pilot estimation of the HRF {#sub: hrf and nuisance estimation .unnumbered}
For each voxel $v$, the HRF scale and shape coefficients ${\boldsymbol{\beta}}(v)$ and ${\boldsymbol{\gamma}}(v)$ are first estimated by penalized least squares (PLS): $$\label{pilot alpha_l}
\min_{\mathbf{h}, \mathbf{d}} \Bigg\{ \sum_{j=1}^n \big\| {\mathbf{y}_j}(v) - {\mathbf{X}_j}\mathbf{h} - {\boldsymbol{\Phi}_j}{\mathbf{d}_j}\big\|^2 + n\lambda_0 \, \mathbf{h}' ({\mathbf{I}}_L\otimes {\mathbf{P}}) \mathbf{h} \Bigg\} \, ,$$ where ${\mathbf{h}}$ is a vector of length $KL$ that estimates the HRF coefficients $\gamma_{lk}(v)$ and ${\mathbf{d}}= ({\mathbf{d}}_1' ,\ldots ,{\mathbf{d}}_n')'$ is a vector of length $nq$ that estimates the nuisance signals. The matrix ${\mathbf{P}}$ penalizes departures of ${\mathbf{h}}$ from a linear space of “reasonable" HRFs (e.g., the canonical HRF and its temporal derivative). More precisely, let $\boldsymbol{\Psi} $ be a matrix whose columns contain the coefficients of a few realistic HRFs in the function basis $(B_k)_{1\le k \le K}$. Then ${\mathbf{P}}= {\mathbf{I}}_K - \boldsymbol{\Psi}(\boldsymbol{\Psi}'\boldsymbol{\Psi})^{-1} \boldsymbol{\Psi}'$ is the projection on the orthogonal space of $\boldsymbol{\Psi}$. The smoothing parameter $\lambda_0>0$ determines the tradeoff between fitting the data and closeness to $\boldsymbol{\Psi}$. It can be selected manually or, for example, by $k$-fold cross-validation with the subjects randomly partitioned in $k$ subsamples.
Note that the minimization problem is unconstrained and does not rely on . Its solutions are $\hat{{\mathbf{h}}}(v) = ( \sum_{j=1}^n {\mathbf{X}_j}' {\mathbf{R}_j}{\mathbf{X}_j}+ n \lambda_0 ({\mathbf{I}}_L\otimes {\mathbf{P}}) )^{-1} \sum_{j=1}^n {\mathbf{X}_j}' {\mathbf{R}_j}{\mathbf{y}_j}(v) $ and $\hat{{\mathbf{d}}}_j (v) =
({\boldsymbol{\Phi}_j}'{\boldsymbol{\Phi}_j})^{-1} {\boldsymbol{\Phi}_j}'
({\mathbf{y}_j}(v) - {\mathbf{X}_j}\hat{{\mathbf{h}}}(v) ), \, 1\le j \le n$, where ${\mathbf{R}_j}= \mathbf{I}_T- {\boldsymbol{\Phi}_j}({\boldsymbol{\Phi}_j}'{\boldsymbol{\Phi}_j})^{-1}{\boldsymbol{\Phi}_j}' $ is the projection matrix on the orthogonal space of $\boldsymbol{\Phi}_j$. The estimator $\hat{{\mathbf{h}}}(v)$ is consistent and asymptotically unbiased for the $\gamma_{lk}(v)$ as the sample size $n\to\infty$ and the smoothing parameter $\lambda_0\to 0$. In view of , it follows that $\hat{{\boldsymbol{\beta}}}_{0}(v) = (\| \hat{{\mathbf{h}}}_1(v) \|, \ldots, \| \hat{{\mathbf{h}}}_L(v) \| )'$ is a consistent estimator of ${\boldsymbol{\beta}}(v)$, where $\hat{{\mathbf{h}}}(v) = (\hat{{\mathbf{h}}}_1(v)',\ldots,\hat{{\mathbf{h}}}_{L}(v)')'$ and the vectors $\hat{{\mathbf{h}}}_l(v) , 1\le l \le L,$ have length $K$. Similarly, the scaled average $\hat{{\boldsymbol{\gamma}}}_{0}(v) = \sum_{l=1}^L \hat{{\mathbf{h}}}_l(v) / \| \sum_{l=1}^L \hat{{\mathbf{h}}}_l(v) \| $ consistently estimates ${\boldsymbol{\gamma}}(v)$ when the latter is well-defined, i.e., when the voxel $v$ is activated by at least one experimental condition.
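A compact sketch of this pilot fit is given below; it assumes that the per-subject design matrices, nuisance bases and voxel time courses have already been assembled as NumPy arrays, and that the penalty matrix $\mathbf{P}$ and a value of $\lambda_0$ have been chosen.

```python
import numpy as np

def pls_pilot(X, Phi, y, P, lam0):
    """Illustrative penalized least squares pilot fit at one voxel.

    X    : list of (T_j, K*L) design matrices
    Phi  : list of (T_j, q) nuisance bases
    y    : list of (T_j,) BOLD time courses at the voxel
    P    : (K, K) penalty matrix projecting away from reference HRFs
    lam0 : smoothing parameter lambda_0 >= 0
    """
    n, KL, K = len(X), X[0].shape[1], P.shape[0]
    L = KL // K
    A, b = np.zeros((KL, KL)), np.zeros(KL)
    for j in range(n):
        # R_j = I - Phi_j (Phi_j' Phi_j)^{-1} Phi_j'
        Rj = np.eye(X[j].shape[0]) - Phi[j] @ np.linalg.solve(Phi[j].T @ Phi[j], Phi[j].T)
        A += X[j].T @ Rj @ X[j]
        b += X[j].T @ Rj @ y[j]
    h_hat = np.linalg.solve(A + n * lam0 * np.kron(np.eye(L), P), b)
    d_hat = [np.linalg.solve(Phi[j].T @ Phi[j], Phi[j].T @ (y[j] - X[j] @ h_hat))
             for j in range(n)]
    H = h_hat.reshape(L, K)                # rows are the unconstrained blocks h_l
    beta0 = np.linalg.norm(H, axis=1)      # pilot amplitude estimates
    gamma0 = H.sum(axis=0) / np.linalg.norm(H.sum(axis=0))   # pilot shape estimate
    return h_hat, d_hat, beta0, gamma0
```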
### Step 2: Estimation of the noise structure {#step-2-estimation-of-the-noise-structure .unnumbered}
We turn to the estimation of the noise parameters $\sigma^2_{\varepsilon m} $ and $\boldsymbol{\theta}_{\varepsilon m} $ in each parcel $\mathcal{D}_m, \, 1\le m \le M$. For each subject $ 1 \le j \le n $ and voxel $ v \in \mathcal{D}_m$, consider the residual vector ${\mathbf{r}_j}(v) = {\mathbf{y}_j}(v) - {\mathbf{X}_j}(\hat{{\boldsymbol{\beta}}}_0(v)\otimes \hat{{\boldsymbol{\gamma}}}_0(v)) - {\boldsymbol{\Phi}_j}\hat{\mathbf{d}}_j(v) = {\mathbf{R}_j}({\mathbf{y}_j}(v) - {\mathbf{X}_j}(\hat{{\boldsymbol{\beta}}}_0(v)\otimes \hat{{\boldsymbol{\gamma}}}_0(v)) )$ resulting from step 1. Given and the consistency of $\hat{\mathbf{h}} = \hat{\mathbf{h}}(v)$ in step 1, it holds that ${\mathbf{r}_j}(v) \approx {\mathbf{X}_j}{\boldsymbol{\xi}_j}(v)+ {\boldsymbol{\varepsilon}_j}(v)$ for $n$ large enough and $\lambda_0$ small. The random effects ${\boldsymbol{\xi}_j}(v)$ and ${\boldsymbol{\varepsilon}_j}(v)$ can thus be predicted by least squares based on ${\mathbf{r}_j}(v)$, yielding $$\label{prediction xi eps}
\hat{{\boldsymbol{\xi}}}_j(v) = ({\mathbf{X}_j}'{\mathbf{X}_j})^{-1} {\mathbf{X}_j}'{\mathbf{r}_j}(v), \quad {\hat{\boldsymbol{\varepsilon}}_j}(v) = {\mathbf{r}_j}(v) - {\mathbf{X}_j}\hat{{\boldsymbol{\xi}}}_j(v) .$$ If the design matrix ${\mathbf{X}_j}$ is not full rank, the inverse $({\mathbf{X}_j}'{\mathbf{X}_j})^{-1}$ in the above formula is not defined and can be replaced by its pseudoinverse $ ({\mathbf{X}_j}'{\mathbf{X}_j})^{+} $. We then solve the Yule-Walker equations (see e.g., [@BrockwellDavis06 p. 239]) associated with ${\hat{\boldsymbol{\varepsilon}}_j}(v)$, producing consistent estimates of $\sigma^2_{\varepsilon m} $ and $\boldsymbol{\theta}_{\varepsilon m} $ for each subject $ 1\le j \le n$ and voxel $ v\in \mathcal{D}_m$. By taking the medians of these estimates across subjects and voxels, we obtain robust estimates $\hat{\sigma}^2_{\varepsilon m} $ and $\hat{\boldsymbol{\theta}}_{\varepsilon m} $.
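A sketch of this step is given below; the function names are illustrative, `eps` collects the predicted noise series $\hat{\boldsymbol{\varepsilon}}_j(v)$ over the subjects and voxels of a parcel, `p` is the assumed AR order, and the innovation variance of the fitted AR($p$) model is what is returned.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker(e, p):
    """AR(p) coefficients and innovation variance from one predicted noise series."""
    e = e - e.mean()
    T = len(e)
    g = np.array([e[:T - k] @ e[k:] / T for k in range(p + 1)])  # autocovariances
    theta = solve_toeplitz(g[:p], g[1:])      # Yule-Walker equations
    sigma2 = g[0] - theta @ g[1:]             # innovation variance
    return theta, sigma2

def parcel_ar_estimates(eps, p):
    """Median-aggregated AR estimates over all (subject, voxel) series of a parcel."""
    thetas, sig2s = zip(*(yule_walker(e, p) for e in eps))
    return np.median(np.vstack(thetas), axis=0), np.median(sig2s)
```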
### Step 3: Estimation of the temporal dependence in subject-specific effects {#sub: estim temp corr space var .unnumbered}
We estimate the temporal correlation parameters $ \rho_{\xi l }(k), \,1\le k \le K-1, \, 1\le l \le L,$ by Maximum Likelihood (ML). For computational efficiency, ML estimates are separately produced at a small number of voxels and aggregated with a suitable statistic such as the median or trimmed mean. In practice, we propose to select a random sample of about 1000 voxels for the ML estimation. Following a common usage, we first perform a few iterations of the EM algorithm to provide good starting values for the optimization of the likelihood function.
In the rest of this section, we fix a voxel and omit the index $v$ for conciseness. Writing $ \boldsymbol{\sigma}_{\xi }^2 = ( \sigma_{\xi 1}^2,\ldots, \sigma_{\xi L}^2)' $ and $\boldsymbol{\rho}_{\xi}= (\rho_{\xi 1 }(1),\rho_{\xi 1 }(2),\ldots, \rho_{\xi L}(K-1))'$, we optimize the log-likelihood function (multiplied by $-2$) $$\label{loglik}
\begin{split}
& L ({\boldsymbol{\beta}}, {\boldsymbol{\gamma}},{\mathbf{d}}, \boldsymbol{\sigma}_{\xi }^2, \boldsymbol{\rho}_{\xi },\sigma_{\varepsilon m}^2 , \boldsymbol{\theta}_{\varepsilon m}) \\
& \qquad =
\sum_{j=1}^n \ln\left| {\mathbf{V}_j}\right|
+ \sum_{j=1}^n \left( {\mathbf{y}_j}- {\mathbf{X}_j}({\boldsymbol{\beta}}\otimes {\boldsymbol{\gamma}}) - {\boldsymbol{\Phi}_j}{\mathbf{d}}_j \right)' \mathbf{V}_j^{-1} \left( {\mathbf{y}_j}- {\mathbf{X}_j}({\boldsymbol{\beta}}\otimes {\boldsymbol{\gamma}}) - {\boldsymbol{\Phi}_j}{\mathbf{d}}_j \right)
\end{split}$$ with respect to $\boldsymbol{\sigma}_{\xi }^2 $ and $ \boldsymbol{\rho}_{\xi }$ while fixing ${\boldsymbol{\beta}}, {\boldsymbol{\gamma}},{\mathbf{d}},\sigma_{\varepsilon m}^2$, and $\boldsymbol{\theta}_{\varepsilon m} $ to their previously estimated values. Note that although we are only concerned here with the estimation of $ \boldsymbol{\rho}_{\xi }$, the likelihood must also be optimized with respect to $\boldsymbol{\sigma}_{\xi }^2 $. These variance parameters, which must be estimated at each voxel, will be assessed more efficiently in step 4. The implementation of the EM algorithm and likelihood optimization is described in Appendix \[app: temp dependence\]. More details on the EM algorithm for linear mixed models can be found in e.g., ([@Pawitan2001], chap. 12).
### Step 4: Estimation of the between-subjects variance {#sec: VLS .unnumbered}
For each voxel $v$, we estimate the between-subjects variances $\boldsymbol{\sigma}_{\xi }^2 (v)$ by a variance least squares approach (e.g., [@Demidenko2004], chap. 3) that consists of minimizing the distance between the residual covariance matrix and the theoretical covariance matrix: $$\label{VLS objective}
\min_{ \boldsymbol{\sigma}_{\xi }^2 }\ \sum_j \Big\| {\mathbf{r}_j}(v) {\mathbf{r}_j}'(v) -
{\mathbf{X}_j}\big({\mathbf{D}}_{\xi}(v) \otimes {\mathbf{I}}_K\big)\hat{{\mathbf{T}}}_\xi {\mathbf{X}_j}'
- \hat{{\mathbf{V}}}_{\varepsilon jm} \Big\|^2_F$$ subject to the constraint $\sigma_{\xi l}^2 \ge 0$ for $1\le l \le L$. Recall that the residuals ${\mathbf{r}_j}(v)$ are defined in step 2 of this section, ${\mathbf{D}}_{\xi} = \mathrm{diag} ( \sigma_{\xi 1}^2,\ldots, \sigma_{\xi L}^2 )$, and $ \hat{{\mathbf{V}}}_{\varepsilon jm}$ and $\hat{{\mathbf{T}}}_\xi$ are the estimates of ${\mathbf{V}}_{\varepsilon jm}$ and ${\mathbf{T}}_\xi$ obtained in steps 2-3. The notation $\| {\mathbf{A}}\|_F $ stands for the Frobenius norm $ \mathrm{tr}({\mathbf{A}}'{\mathbf{A}})^{1/2}$ of a matrix ${\mathbf{A}}$. Problem is a standard quadratic programming problem that can be expressed more simply as $$\label{VLS reloaded}
\min_{ \boldsymbol{\sigma}_{\xi }^2 } \Big\{ (\boldsymbol{\sigma}_{\xi }^2) ' {\mathbf{A}}\boldsymbol{\sigma}_{\xi }^2
- 2 \mathbf{b}' \boldsymbol{\sigma}_{\xi }^2 \Big\} ,$$ where ${\mathbf{A}}$ is a $L\times L$ matrix with $(l,l')$ entry $\sum_{j=1}^n \mathrm{tr} \big( {\mathbf{X}_{jl}}\hat{{\mathbf{T}}}_{\xi l} {\mathbf{X}_{jl}}' \mathbf{X}_{jl'} \hat{{\mathbf{T}}}_{\xi l'} \mathbf{X}_{jl'}' \big)$ and $\mathbf{b}$ is a vector of length $L$ with $l^{th}$ entry $\sum_{j=1}^n \big\{ {\mathbf{r}_j}'(v) {\mathbf{X}_{jl}}\hat{{\mathbf{T}}}_{\xi l} {\mathbf{X}_{jl}}' {\mathbf{r}_j}(v)
- \mathrm{tr} \big( \hat{{\mathbf{T}}}_{\xi l} {\mathbf{X}_{jl}}'
\hat{{\mathbf{V}}}_{\varepsilon jm} {\mathbf{X}_{jl}}\big) \big\} $. The solutions to - can be computed by various methods (e.g., [@Nocedal2006], p. 449) that are widely available in software packages.
### Step 5: Generalized least squares estimation of the HRF {#sec: GLS .unnumbered}
The pilot estimation of the HRF can be improved upon in two ways: (i) by accounting for the dependence structure of the BOLD signal, and (ii) by imposing the form to the HRF estimates. To integrate these features in the estimation, we use a penalized, constrained, generalized least squares approach. For a given voxel $v$, let $\hat{{\mathbf{V}}}_j(v) ={\mathbf{X}_j}(\hat{\mathbf{D}}_{\xi}(v) \otimes {\mathbf{I}}_K) \hat{{\mathbf{T}}}_\xi {\mathbf{X}_j}'+
\hat{\mathbf{V}}_{\varepsilon jm}$ be the estimate of ${\mathbf{V}_j}(v)$ resulting from steps 1-4. We seek to solve $$\label{constrained gls}
\min_{{\boldsymbol{\beta}},{\boldsymbol{\gamma}}, {\mathbf{d}}} \Bigg\{
\sum_{j=1}^n \Big\| {\mathbf{y}_j}(v) - {\mathbf{X}_j}({\boldsymbol{\beta}}\otimes{\boldsymbol{\gamma}}) - {\boldsymbol{\Phi}_j}{\mathbf{d}_j}\Big\|_{\hat{\mathbf{V}}_j^{-1}(v)}^2 + n \lambda\, {\boldsymbol{\gamma}}' {\mathbf{P}}{\boldsymbol{\gamma}}\Bigg\}$$ under the constraint $\| {\boldsymbol{\gamma}}\|^2 = 1$. Like in the pilot estimation, the nuisance parameter ${\mathbf{d}}$ can be eliminated from . To that intent, let ${\tilde{\mathbf{R}}_j}(v) = \mathbf{I}_T - {\boldsymbol{\Phi}_j}( {\boldsymbol{\Phi}_j}' {\hat{\mathbf{V}}_j^{-1}}(v) {\boldsymbol{\Phi}_j})^{-1} {\boldsymbol{\Phi}_j}' {\hat{\mathbf{V}}_j^{-1}}(v)$ be the projection matrix on the orthogonal space of ${\boldsymbol{\Phi}_j}$ in the metric ${\hat{\mathbf{V}}_j^{-1}}(v)$. Then is equivalent to $\min_{{\boldsymbol{\beta}},{\boldsymbol{\gamma}}} \big\{
\sum_{j=1}^n \big\| {\tilde{\mathbf{R}}_j}(v) {\mathbf{y}_j}(v) - {\tilde{\mathbf{R}}_j}(v) {\mathbf{X}_j}({\boldsymbol{\beta}}\otimes{\boldsymbol{\gamma}}) \big\|_{{\hat{\mathbf{V}}_j^{-1}}(v)}^2 + n \lambda\, {\boldsymbol{\gamma}}' {\mathbf{P}}{\boldsymbol{\gamma}}\big\}
$.
Because of the tensor product ${\boldsymbol{\beta}}\otimes {\boldsymbol{\gamma}}$ and the quadratic constraint $\|{\boldsymbol{\gamma}}\|^2=1$, problem is nonlinear and has no closed-form solutions. However, is a separable least squares problem: for a fixed ${\boldsymbol{\gamma}}$, solving with respect to ${\boldsymbol{\beta}}$ reduces to a generalized least squares problem that admits a closed-form solution. For a fixed ${\boldsymbol{\beta}}$, solving with respect to ${\boldsymbol{\gamma}}$ is a quadratically constrained quadratic program that requires little more than a singular value decomposition. As a result, can be efficiently solved in an iterative way.
For conciseness, we omit the index $v$ from notations in the remainder of the section. Let $ {\mathbf{M}}= \sum_{j=1}^n {\mathbf{X}_j}' {\tilde{\mathbf{R}}_j}' {\hat{\mathbf{V}}_j^{-1}}{\tilde{\mathbf{R}}_j}{\mathbf{X}_j}$ and ${\boldsymbol{\eta}}= \sum_{j=1}^n {\mathbf{X}_j}' {\hat{\mathbf{V}}_j^{-1}}{\tilde{\mathbf{R}}_j}{\mathbf{y}_j}$. The solutions $\hat{{\boldsymbol{\beta}}}$ and $\hat{{\boldsymbol{\gamma}}}$ of are obtained by cycling through the following equations until convergence:
$$\begin{aligned}
\label{update beta gls} \hat{{\boldsymbol{\beta}}} & = \big[ \left( {\mathbf{I}}_L \otimes \hat{{\boldsymbol{\gamma}}} \right)' {\mathbf{M}}\left( {\mathbf{I}}_L \otimes \hat{{\boldsymbol{\gamma}}} \right) \big]^{-1} \left( {\mathbf{I}}_L \otimes \hat{{\boldsymbol{\gamma}}} \right)' {\boldsymbol{\eta}}\, , \\
\label{Lagrange gls} \hat{C} & = \Big\{ C : \big\| \big[ \big( \hat{{\boldsymbol{\beta}}} \otimes {\mathbf{I}}_K \big)' {\mathbf{M}}\big( \hat{{\boldsymbol{\beta}}} \otimes {\mathbf{I}}_K \big) + n \lambda {\mathbf{P}}+ C \, {\mathbf{I}}_K \big]^{-1} \big( \hat{{\boldsymbol{\beta}}} \otimes {\mathbf{I}}_K \big)' {\boldsymbol{\eta}}\big\| = 1 \Big\} , \\
\label{update gamma gls} \hat{{\boldsymbol{\gamma}}} & = \big[ \big( \hat{{\boldsymbol{\beta}}} \otimes {\mathbf{I}}_K \big)' {\mathbf{M}}\big( \hat{{\boldsymbol{\beta}}} \otimes {\mathbf{I}}_K \big) + n \lambda {\mathbf{P}}+ \hat{C} \, {\mathbf{I}}_K \big]^{-1} \big( \hat{{\boldsymbol{\beta}}} \otimes {\mathbf{I}}_K \big)' {\boldsymbol{\eta}}\, ,\end{aligned}$$
with $\hat{{\boldsymbol{\gamma}}} $ initially set to the pilot estimator $ \hat{{\boldsymbol{\gamma}}}_0 $. Equation corresponds to the generalized least squares problem that updates $ \hat{{\boldsymbol{\beta}}}$ for a given $ \hat{{\boldsymbol{\gamma}}}$. Equations - correspond to the quadratically constrained quadratic program that updates $ \hat{{\boldsymbol{\gamma}}}$ for a given $ \hat{{\boldsymbol{\beta}}}$ using the method of Lagrange multipliers. The Lagrange multiplier $\hat{C}$ is computed by performing the singular value decomposition of $ (\hat{{\boldsymbol{\beta}}} \otimes {\mathbf{I}}_K )' \, {\mathbf{M}}\, ( \hat{{\boldsymbol{\beta}}} \otimes {\mathbf{I}}_K ) + n \lambda {\mathbf{P}}$ and numerically finding the root of a monotone function (see e.g., [@Golub2013 chap. 6] for details).
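A sketch of one possible implementation of this alternating scheme is given below. The inputs $\mathbf{M}$, $\boldsymbol{\eta}$, $\mathbf{P}$, $\lambda$ and the pilot $\hat{\boldsymbol{\gamma}}_0$ are assumed to be precomputed; an eigendecomposition plays the role of the singular value decomposition, and the Lagrange multiplier is obtained by root-finding on the norm constraint.

```python
import numpy as np
from scipy.optimize import brentq

def alternating_gls(M, eta, P, lam, gamma0, L, K, n, n_iter=50, tol=1e-8):
    """Sketch of the alternating beta/gamma updates for one voxel."""
    gamma, beta = gamma0.copy(), np.zeros(L)
    for _ in range(n_iter):
        G = np.kron(np.eye(L), gamma[:, None])          # I_L x gamma, shape (KL, L)
        beta_new = np.linalg.solve(G.T @ M @ G, G.T @ eta)

        B = np.kron(beta_new[:, None], np.eye(K))       # beta x I_K, shape (KL, K)
        Q = B.T @ M @ B + n * lam * P                   # quadratic part for gamma
        w, U = np.linalg.eigh(Q)
        z = U.T @ (B.T @ eta)

        # ||gamma(C)||^2 = sum_i z_i^2 / (w_i + C)^2 decreases in C > -min(w);
        # find C so that the norm constraint ||gamma|| = 1 holds (assumed attainable
        # on this interval, which is the non-degenerate case).
        f = lambda C: np.sum((z / (w + C)) ** 2) - 1.0
        lo = -w.min() + 1e-10
        hi = lo + np.abs(z).sum() + 1.0
        while f(hi) > 0:
            hi += np.abs(z).sum() + 1.0
        C = brentq(f, lo, hi)
        gamma_new = U @ (z / (w + C))

        done = np.linalg.norm(beta_new - beta) + np.linalg.norm(gamma_new - gamma) < tol
        beta, gamma = beta_new, gamma_new
        if done:
            break
    return beta, gamma
```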
Inference {#sec: inference}
---------
In this section we discuss the sampling distribution of the HRF estimates and illustrate how to perform inference on model parameters. As before, we omit the voxel index $v$ from notations for conciseness. Recall that for a given voxel, the HRF shape parameter ${\boldsymbol{\gamma}}$ is well defined only if at least one condition induces an activation in the voxel. Under this assumption, for a sufficiently large sample size $n$ and sufficiently small smoothing parameter $\lambda$, the sampling distributions of $\hat{{\boldsymbol{\beta}}}$ and $\hat{{\boldsymbol{\gamma}}}$ can be respectively approximated by $N({\boldsymbol{\beta}}, [ ( {\mathbf{I}}_L \otimes {\boldsymbol{\gamma}})' {\mathbf{M}}( {\mathbf{I}}_L \otimes {\boldsymbol{\gamma}}) ]^{-1} )$ and $N({\boldsymbol{\gamma}}, [ ( {\boldsymbol{\beta}}\otimes{\mathbf{I}}_K )'{\mathbf{M}}( {\boldsymbol{\beta}}\otimes {\mathbf{I}}_K ) ]^{-1} )$, where the matrices $\hat{{\mathbf{V}}}_j $ are replaced by the true covariances ${\mathbf{V}_j}$ in ${\mathbf{M}}$ and where $N(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ denotes the multivariate normal distribution with mean $\boldsymbol{\mu} $ and covariance matrix $\boldsymbol{\Sigma}$. Further, neglecting the uncertainty about ${\boldsymbol{\gamma}}$ (and about the covariance matrices $\mathbf{V}_j$) in the estimation of ${\boldsymbol{\beta}}$ and vice-versa, we obtain $$\label{eq: inference}
\begin{split}
\smallskip
\hat{{\boldsymbol{\beta}}} & \approx N\left( {\boldsymbol{\beta}}, \big[ \left( {\mathbf{I}}_L \otimes \hat{{\boldsymbol{\gamma}}} \right)' {\mathbf{M}}\left( {\mathbf{I}}_L \otimes \hat{{\boldsymbol{\gamma}}} \right) \big]^{-1} \right) , \\
\hat{{\boldsymbol{\gamma}}} &\approx N\left( {\boldsymbol{\gamma}}, \big[ \big( \hat{{\boldsymbol{\beta}}} \otimes{\mathbf{I}}_K \big)'{\mathbf{M}}\big( \hat{{\boldsymbol{\beta}}} \otimes {\mathbf{I}}_K \big) \big]^{-1} \right).
\end{split}$$ The first result allows us to perform inference to detect activation, as is commonly performed in the GLM setting. The second result allows us to test hypotheses regarding the shape of the HRF. For example, after computing the values of ${\boldsymbol{\gamma}}$ that correspond to the canonical HRF, we could test for deviations in the shape across the brain based on the fact that $\big(\hat{{\boldsymbol{\gamma}}}-{\boldsymbol{\gamma}}\big)' \big( \hat{{\boldsymbol{\beta}}} \otimes{\mathbf{I}}_K \big)'{\mathbf{M}}\big( \hat{{\boldsymbol{\beta}}} \otimes {\mathbf{I}}_K \big) \, \big(\hat{{\boldsymbol{\gamma}}}-{\boldsymbol{\gamma}}\big)$ follows a $\chi^2$ distribution with $K$ degrees of freedom. A theoretical justification of (\[eq: inference\]) is provided in Appendix B.
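As a concrete illustration (ours; the function names are not part of the model), the following sketch computes the activation $z$-scores and the $\chi^2$ shape statistic implied by (\[eq: inference\]):

```python
import numpy as np
from scipy.stats import norm, chi2

def activation_test(beta_hat, gamma_hat, M, L, K):
    """One-sided Wald tests of H0: beta_l = 0, using the approximate
    covariance [(I_L kron gamma)' M (I_L kron gamma)]^{-1}."""
    G = np.kron(np.eye(L), gamma_hat[:, None])
    cov_beta = np.linalg.inv(G.T @ M @ G)
    z = beta_hat / np.sqrt(np.diag(cov_beta))
    return z, norm.sf(z)                       # z-scores, one-sided p-values

def hrf_shape_test(gamma_hat, gamma_ref, beta_hat, M, K):
    """Chi-square test (K degrees of freedom) for deviation of the
    estimated HRF shape from a reference shape, e.g. the canonical HRF."""
    B = np.kron(beta_hat[:, None], np.eye(K))
    precision = B.T @ M @ B                    # inverse covariance of gamma_hat
    d = gamma_hat - gamma_ref
    stat = float(d @ precision @ d)
    return stat, chi2.sf(stat, df=K)
```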
Recall that the above inference procedures rely on the assumption that the voxel under study is activated in at least one condition (otherwise ${\boldsymbol{\gamma}}$ is not well defined). This assumption amounts to the fact that at least one of the unconstrained HRF coefficients $\gamma_{lk}$ is not zero. It can easily be tested using the pilot estimator $\hat{\mathbf{h}}$ of the $\gamma_{lk}$ defined in step 1 of Section \[sec: estimation\]. More precisely, $\hat{\mathbf{h}}$ has a normal distribution with asymptotic bias zero and asymptotic variance $( \sum_{j=1}^n {\mathbf{X}_j}' {\mathbf{R}_j}{\mathbf{X}_j})^{-1} (\sum_{j=1}^n {\mathbf{X}_j}' {\mathbf{R}_j}{\mathbf{V}_j}{\mathbf{R}_j}{\mathbf{X}_j})
( \sum_{j=1}^n {\mathbf{X}_j}' {\mathbf{R}_j}{\mathbf{X}_j})^{-1} $ for large $n$ and small $\lambda_0$. If the null hypothesis $\gamma_{lk}=0$ for all $k,l$ fails to be rejected, then no further inference should be performed for this voxel. Another sensible approach is to first use (\[eq: inference\]) to test for activations in all voxels and then to infer the HRF shape in activated voxels.
Simulations
-----------
The basic framework for this simulation study is similar to the one found in Lindquist et al. (2008), where a number of different HRF modeling approaches were evaluated. Inside a static brain slice, with dimensions $51 \times 40$, a set of $25$ identically sized squares, with dimensions $4 \times 4$, were placed to represent active regions (see Fig. 1A). Within each square, we simulated BOLD fMRI signal based on different stimulus functions, which varied systematically across the squares in terms of onset and duration. From left to right the onset of activation varied from the first to the fifth TR. From top to bottom, the duration of activation varied from one to nine TRs in steps of two. To create the response, we convolved the stimulus function in each square with SPMs canonical HRF, using a modified nonlinear convolution that includes an exponential decay to account for refractory effects with stimulation across time, with the net result that the BOLD response saturates with sustained activity in a manner consistent with observed BOLD responses (Wager et al., 2005). This procedure gave rise to a set of $25$ distinct HRF shapes. Fig. 1B shows examples of the $5$ HRFs with no onset shift, which are representative of the remaining HRFs.
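For concreteness, a simplified sketch (ours) of how such a voxel time series can be generated: a boxcar stimulus is linearly convolved with a double-gamma canonical HRF and white noise is added. The HRF parameters below are common defaults (an assumption), and the exponential-decay saturation used in the actual simulations is omitted.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def canonical_hrf(t):
    # double-gamma HRF with commonly used default parameters (assumed here)
    return gamma_dist.pdf(t, 6) - gamma_dist.pdf(t, 16) / 6.0

def simulate_bold(onsets, duration, n_scans, tr=1.0, beta=1.0, sigma=np.sqrt(3.0)):
    """Boxcar stimulus convolved with the canonical HRF, plus white noise."""
    t = np.arange(n_scans) * tr
    stim = np.zeros(n_scans)
    for onset in onsets:
        stim[(t >= onset) & (t < onset + duration)] = 1.0
    hrf = canonical_hrf(np.arange(0.0, 32.0, tr))
    signal = np.convolve(stim, hrf)[:n_scans]
    return beta * signal + np.random.normal(0.0, sigma, n_scans)

# e.g. one square: 1 s TR, 30 s inter-stimulus interval, 10 epochs, 5 TR duration
y = simulate_bold(onsets=np.arange(0, 300, 30), duration=5, n_scans=300)
```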
In total we performed five simulation studies in order to evaluate the properties of the proposed model. Below follows a description of each.\
[*Simulation 1:*]{} In this simulation the TR was assumed to be $1$s long and the inter-stimulus interval was set to $30$s. This activation pattern was repeated to simulate a total of $10$ epochs. To simulate a sample of subjects and group random-effects, we generated $15$ subject datasets, which consisted of the BOLD time series at each voxel plus white noise, creating a plausible effect size (Cohen’s d$=0.5$) based on observed effect sizes in the visual and motor cortex (Wager et al., 2005). The value of $\sigma^2_{\varepsilon}$ was set to $3$. In addition, a random between-subject variation with a standard deviation of size one third of the within-subject variation was added to each subject’s time course.\
[*Simulation 2:*]{} The second simulated data set was constructed in precisely the same manner as outlined in Simulation 1, except here instead of using white noise to simulate within-subject variation we used an AR(1) noise process with $\theta_{\varepsilon} = 0.3$.\
[*Simulation 3:*]{} The third simulated data set was constructed using the exact same process with AR(1) noise described in Simulation 2, except here instead of using SPMs canonical HRF we used a subject-specific HRF. These were randomly generated using 20 B-spline basis sets with weights drawn from a normal distribution with mean equal to the weights corresponding to the canonical HRF and standard deviation $0.1$.\
[*Simulation 4:*]{} The fourth simulated data set was constructed in precisely the same manner as outlined in Simulation 1, except here we allowed for two separate conditions. For both conditions the inter-stimulus interval was set to $30$s and the activation pattern was repeated to simulate a total of $10$ epochs. However, the two conditions were interleaved to begin $15$s apart from one another. The $\beta$ values for the two conditions were set to $0.5$ and $1$, respectively. All other parameters were set according to the description of Simulation 1.\
[*Simulation 5:*]{} The fifth simulated data set was constructed in precisely the same manner as outlined in Simulation 1, except here we used a fast event-related design. The inter-stimulus interval was randomized across each trial using a uniform distribution between $10$ and $20$s. All other parameters were set according to the description of Simulation 1.\
For each of the five simulations, the basic data sets of dimensions $51 \times 40 \times 300$ were fit voxel-wise using both the standard GLM/OLS approach and our proposed hierarchical approach. For Simulations 1-3 an event-related stimulus function with a single spike repeated every $30$s was used for fitting the models to the data set described above. For Simulation 4 this was supplemented by a second stimulus function with a single spike repeated every $30$s corresponding to the timing of the second condition. Finally, in Simulation 5 we used a stimulus function with a single spike repeated according to the outcomes of the randomization scheme outlined above.
This implies that for each simulation the square in the upper left-hand corner of Fig. 1A is correctly specified for the standard GLM while the remaining squares have activation profiles that are mis-specified to various degrees. When fitting our method we used $20$ B-spline basis functions of order $6$. While our proposed method provides multi-subject estimates and a framework for direct inference on these parameters, the standard OLS (ordinary least squares) approach to multi-subject analysis in fMRI involves using a two-stage model. It begins by fitting individual regression coefficients for each subject using a standard GLM. Thereafter group estimates of the parameters are obtained by averaging across subjects and variance components are obtained by computing the variance of the estimates across subjects. Since this analysis is performed on the estimated regression coefficients, the variance will contain contributions from both the standard error of the estimates and the between-subject variance components. This method is the most popular method in the neuroimaging community for estimating the parameters of a mixed-effects model [@mumford2009simple].
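For reference, a minimal sketch (ours) of this two-stage summary-statistics analysis for a single voxel: per-subject OLS fits followed by a one-sample $t$-test across subjects.

```python
import numpy as np
from scipy.stats import t as t_dist

def two_stage_ols(Y, X):
    """Y: list of per-subject time series; X: list of matching design matrices.
    Returns one-sided t statistics and p-values for the group mean of each
    regression coefficient."""
    betas = np.array([np.linalg.lstsq(Xj, yj, rcond=None)[0]
                      for Xj, yj in zip(X, Y)])      # (n_subjects, p)
    n = betas.shape[0]
    se = betas.std(axis=0, ddof=1) / np.sqrt(n)      # mixes estimation error
    tstat = betas.mean(axis=0) / se                  # and between-subject variance
    return tstat, t_dist.sf(tstat, df=n - 1)
```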
After estimation we performed population level inference to determine whether $\beta$ was significantly different from $0$. In each case we performed a one-sided test. In order to control for multiple comparisons we used an FDR-controlling procedure with $q=0.05$ [@Genovese].
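For completeness, a sketch (ours) of the Benjamini-Hochberg step-up procedure on which this kind of FDR control is typically based:

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg procedure: boolean mask of rejected null hypotheses
    controlling the false discovery rate at level q."""
    p = np.asarray(pvals, dtype=float).ravel()
    m = p.size
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])        # largest index meeting the bound
        reject[order[:k + 1]] = True
    return reject.reshape(np.shape(pvals))
```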
Experimental Data
-----------------
All subjects ($n = 20$) provided informed consent in accord with the Declaration of Helsinki, and the Columbia University Institutional Review Board approved all procedures. Subjects were all right-handed, as assessed by the Edinburgh Handedness Inventory, and free of self-reported history of psychiatric and neurological disorders, excessive caffeine or nicotine use, and illicit drug use. They were pre-screened during an initial calibration session to ensure that stimuli were painful and that they could rate pain reliably ($r \ge .65$ between applied temperature and pain rating). During fMRI scanning, $48$ thermal stimuli, $12$ at each of $4$ temperatures, were delivered to the left forearm using a Peltier device (TSA-II, Medoc, Inc.) with an fMRI-compatible $1.5$ mm-diameter thermode. Temperatures were calibrated individually for each participant before scanning to be warm, mildly painful, moderately painful, and near tolerance. Heat stimuli were preceded by a $2$s warning cue and $6$s anticipation period, lasted $10$s in duration ($1.5$s ramp up/down, $7$s at peak), and were followed by a $30$s inter-trial interval (ITI). At a time $14$s into the ITI, participants were asked to rate the painfulness of the stimulus on an 8-point visual analogue scale using an fMRI-compatible trackball (Resonance Technologies, Inc.). Functional T2\*-weighted EPI-BOLD images (TR = $2$s, $3.5 \times 3.5 \times 4$mm voxels, ascending interleaved acquisition) were collected during $6$ functional runs each consisting of $6$ minutes $8$s. Images were corrected for slice-timing acquisition delay and realigned to adjust for head motion using SPM5 software (http://www.fil.ion.ucl.ac.uk/spm/). A high-resolution anatomical image (T1-weighted spoiled-GRASS \[SPGR\] sequence, $1 \times 1 \times 1$mm voxels, TR $= 19$ms) was collected after the functional runs and coregistered to the mean functional image using a mutual information cost function (SPM5, with manual checking and adjustment of starting values to ensure satisfactory alignment for each participant), and was then segmented and warped to the Montreal Neurologic Institute template (avg152T1.nii) using SPM5’s “generative” segmentation [@ashburner2005unified]. Warps were applied to functional images. Functional images were smoothed with a $6$ mm-FWHM Gaussian kernel, high-pass filtered with a $120$s ($.0083$ Hz) discrete cosine basis set (SPM5), and winsorized at $2$ standard deviations prior to analysis. Each of the models described above was fit to data from each voxel in a single axial slice (z $= -22$mm) that covered several pain-related regions of interest, including the anterior cingulate cortex. Separate HRFs were estimated for stimuli of each temperature, though we focus on the responses to the highest heat level in the results.
We fit the data using our proposed hierarchical approach with $12$ B-spline basis functions of order $6$. As the fMRI data consisted of $6$ functional runs for each subject, we combined the results across runs in a manner corresponding to a fixed-effects model, assuming the same HRF for all runs for a given subject and voxel. As an alternative it would be possible to extend our model to allow $\beta$ to be a random effect across runs in a manner analogous to that outlined in [@BadilloVC13]. After estimation we performed population level inference to determine whether $\beta$ was significantly different from $0$. In each case we performed a one-sided test. In order to control for multiple comparisons we used an FDR-controlling procedure with $q=0.05$ [@Genovese].
RESULTS
=======
Simulations
-----------
Figs. 2-4 show results of the first three simulation studies. Each contains estimates of $\beta$, $\theta_{\varepsilon}$, $\sigma^2_{\varepsilon}$, the $t$-map, and the thresholded $t$-map, obtained using both the standard GLM/OLS approach and our hierarchical model. In each case, GLM/OLS (first row) gives reasonable results for delayed onsets within $3$s and widths up to $3$s, corresponding to squares in the upper left-hand corner. However, its performance worsens dramatically as onset and duration increase. As pointed out in Lindquist et al. (2008), this is natural as the GLM is correctly specified for the square in the upper left-hand corner, but not well equipped to handle large deviations from this model. Interestingly, in Fig. 2 we see that this model misspecification gives rise to a positive autocorrelation in several of the most severely affected squares, even though we used white noise in that particular simulation.
Almost all of these problems are solved using our hierarchical model. Irrespective of the shape of the underlying HRF, we were able to efficiently recover both $\beta$ and the variance component in each square. These improved estimates lead to significantly improved sensitivity and specificity in the population-level hypothesis tests.
The results of Simulation 4 are shown in Fig. 5. Here the first column contains the t-map corresponding to the test of whether the contrast, $\beta_1 - \beta_2$, between the parameters of the two conditions was significantly different from zero, while the second column shows the analogous thresholded t-maps. Again, our hierarchical model is able to detect significant regions with both high sensitivity and specificity. In contrast, the GLM/OLS approach performs poorly, even in the upper-left hand corner where it is expected to be optimal.
Fig. 6 shows results for Simulation 5 where we used a rapid event-related task. The first column contains the t-map corresponding to testing whether the parameter $\beta$ was significantly different from zero, and the second column corresponds to the thresholded t-maps. The GLM/OLS approach performs somewhat better than in the other simulations, perhaps due to the increased number of trials. However, our hierarchical model is again able to detect significant regions with both a high degree of sensitivity and specificity.
To illustrate further, we see in Fig. 7 the results of $100$ replications of the first simulation. Fig. 7A shows the portion of times the standard GLM/OLS approach gave significant results in each voxel across the simulated brain. Fig. 7B shows the same results for our hierarchical model. Clearly, our hierarchical model is able to consistently detect truly activated regions, while avoiding spurious findings. Not surprisingly, the GLM/OLS approach only performs well in squares where it is correctly defined. Fig. 7C shows the portion of times the estimated HRF significantly deviated from SPMs canonical form ($\alpha <0.05$). This test was performed by first computing values of $\gamma$ corresponding to the canonical HRF and thereafter using the $\chi^2$ test described in Section \[sec: inference\]. The results show, again not surprisingly, that the HRF in voxels in the lower right-hand corner of the brain deviates significantly from the canonical form. Interestingly, these voxels correspond closely to voxels that were erroneously deemed non-active in Fig. 7A. This leads us to believe that our approach could have an alternative use as a diagnostic tool for assessing the performance of standard GLM analyses.
Fig. 8 shows the method's ability to recover the time-to-peak and width of the underlying HRF used to generate the data. The left-hand column shows the true values and the right-hand column the mean estimated values across the $100$ replications. Clearly, we are able to capture the true value of the time-to-peak extremely accurately. However, it appears that the estimates of the width, while close, are somewhat confounded by the changes in onset, with the best results occurring when there are no onset shifts present.
Experimental Data
-----------------
The results of the pain experiment are shown in Fig. 9 for a single axial slice (z $= -22$mm). The location of the slice used and an illustration of key areas of interest are shown in Fig. 9A. The highlighted areas are the rostral dorsal anterior cingulate cortex (rdACC) and the secondary somatosensory cortex (S2); two brain regions known from previous work to be involved in the processing of pain intensity [@ferretti2003functional; @peyron2000functional]. Activation in S2 is thought to be related to sensory-discriminant aspects of pain-processing, while rdACC has been shown to be related to expectancy [@atlas2012dissociable].
In Fig. 10 we see examples of the estimated HRFs, obtained using our approach, from voxels chosen because they lay in the center of the rdACC and S2, respectively. These results include both the subject-specific and group-level estimates. The shapes of both group-level HRFs are significantly different from the canonical HRF ($p <0.01$) using the $\chi^2$ test. Note that the shapes of the HRFs are also quite different from one another. For rdACC, the response is significantly wider than what we would expect from the canonical HRF. For S2, on the other hand, the onset is significantly delayed. However, interestingly both HRFs appear to reach their peak at roughly the same time point following activation.
Due to the apparent variability in the HRF across the slice, it would be problematic to analyze this data set using a canonical HRF or one that uses a constrained basis set. In fact, the GLM showed no activation in either rdACC or S2. However, the flexibility of our approach allows for large deviations in the shape of the HRF across voxels. In Fig. 9B we see an activation map obtained using our method. In particular, note that there is significant activation in S2 in response to the noxious stimulation. In previous analysis of this data set [@lindquist8], activations in this region were particularly difficult to detect using standard GLM methods (e.g., the canonical HRF plus its temporal and dispersion derivatives, or the finite impulse response (FIR) basis set). The inverse-logit (IL) model was the only approach that showed activation in S2 contralateral to noxious stimulation. The activation shown here is extremely robust in comparison.
DISCUSSION
==========
In this paper we introduce a new approach towards the simultaneous detection of activation and the estimation of the HRF for multi-subject fMRI. The suggested approach circumvents a number of shortcomings in the standard approach for performing group analysis. In these approaches there is often a tension between flexible modeling of the HRF at the first level and straightforward inference in the second level. For example, if multiple basis sets are used in the first-level GLM, then it is difficult to determine an appropriate contrast to bring forward to the second-level. Often, researchers use only a subset of the basis functions and therefore potentially ignore important information contained in those left behind. The proposed approach allows the shape of the hemodynamic response function to vary across regions and subjects, as when using basis sets, while still providing a straightforward way to estimate population-level activation using all the information from the first level analysis.
An additional benefit of our model is that the suggested inferential framework not only provides a means for performing the standard tests for determining whether a voxel is significantly active, but also allows one to test whether the estimated HRF deviates from some canonical shape (e.g., SPMs canonical HRF). This type of inference has been under-utilized in the field, but we feel it can be extremely useful for diagnostic purposes, as it can help identify regions that would normally not be deemed active when using a canonical HRF.
In addition, our method could prove useful in situations when the standard HRF is either ill-fitting, such as in studies of young or elderly populations, or when the exact onset time or width of activation is unknown and added flexibility is needed to properly fit the data. An example of the latter is the thermal pain data presented in this paper. In previous studies [@lindquist8], activations in S2 were particularly difficult to detect using either the canonical HRF plus its temporal and dispersion derivatives, or the finite impulse response (FIR) basis set. However, using our approach, the activation was extremely robust in comparison.
To date the method has only been implemented in a voxel-wise manner. Hence, we make the common, but rather implausible, assumption of independence between voxels with regard to both the HRF shape and amplitude. However, we do allow for the possibility of a parcel-specific noise structure. We are currently in the process of extending the method to also estimate spatial dependences in the data. For that purpose two types of approaches can be considered: one consists of spatially regularizing the estimation in local neighborhoods; the other is to integrate a functional parcellation of the brain. In the latter case, one can either resort to an existing atlas (see e.g. [@Karahano2013]) or define a data-driven parcellation (see e.g., [@Chaari2012hemo] for such a parcellation at the subject level). However, to keep the manuscript manageable, we defer further discussion of this issue to future work.
The presented simulations and data set are identical to those used in a previous study [@lindquist8], where we evaluated the performance of seven different HRF models. These included: SPMs canonical HRF; the canonical HRF plus its temporal derivative; the canonical HRF plus its temporal and dispersion derivatives; the finite impulse response (FIR) basis set; a regularized version of the FIR model (denoted the smooth FIR); a nonlinear model with the same functional form as the canonical HRF but with 6 variable parameters; and the inverse logit (IL) model. The results of that work showed that it was surprisingly difficult to accurately recover true task-evoked changes in BOLD signal and that there were substantial differences among models in terms of power, bias and parameter confusability. While the derivative models were accurate for very short shifts in latency they became progressively less accurate as the shift increased. The IL model and the smooth FIR model showed the least amount of biases, and the IL model showed by far the least amount of confusability of all the models that were examined. Both these methods were clearly able to handle even large amounts of model misspecification and uncertainty about the exact timing of the onset and duration of activation.
The suggested model clearly outperformed each of the 7 other models in the same battery of tests. For reasons of space we only present the canonical HRF for comparison, and we encourage interested readers to look back at [@lindquist8] for more results. The hierarchical model was found to have a superb balance of sensitivity and specificity that none of the other models was able to obtain. In addition, in the previous work the IL model was the only model that showed significant activation in S2 contralateral to noxious stimulation. Here the proposed model shows extremely robust signal in this region. For these reasons we believe the proposed model is a useful approach towards effectively modeling multi-subject fMRI data.
A Matlab implementation of the proposed methodology is available upon request. The code runs in a reasonable time for fMRI data sets of moderate size. By optimizing the code and running it in parallel on voxels and/or on subjects, our methodology can scale up to large fMRI data sets. Specifically, step 1 of the estimation algorithm can be solved in closed form and requires only a few matrix multiplications. Step 2 is also very fast due to the definition of the noise parameters at the parcel level and to the computational efficiency of the Yule-Walker equations. Step 3 (maximum likelihood/EM algorithm) would be very slow if it were run for each voxel but is in reality only carried out for a small number of voxels. Step 4 consists of a large number of standard quadratic programming problems that can be solved quickly and in parallel. Step 5 is arguably the slowest part of the estimation procedure because of the necessity to compute inverse data covariance matrices. However, linear algebra tricks can reduce the dimension of the matrices to be inverted from the number of scans ($T_j$) to the number of regressors ($KL$).
Acknowledgements {#acknowledgements .unnumbered}
================
We thank two anonymous reviewers for remarks that helped us improve the quality of this paper, and Tor Wager for supplying the data. This research was partially supported by NIH grant R01EB016061.
[**Appendix**]{}
Estimation of the dependence in subject effects {#app: temp dependence}
===============================================
EM algorithm
------------
We fix a voxel and omit the index $v$ from notations. Since the random effects ${\boldsymbol{\xi}_j}$ are independent, identically distributed, and stationary, it holds that $\sigma_{\xi l}^2 = \mathbb{E}(\| {\boldsymbol{\xi}}_{jl}\|^2 )/K$ for each subject ($1\le j \le n$) and condition ($1\le l \le L$). Hence, we first estimate $\sigma_{\xi l}^2$ by the empirical average $\sigma_{\xi l}^{2(0)} = (1/n) \sum_{j=1}^n \|\hat{{\boldsymbol{\xi}}}_{jl} \|^2 / K $. We initially assume working independence for the ${\boldsymbol{\xi}_{jl}}$ so that $\rho_{\xi l}^{(0)}(k) = \delta_{0k}$ for each lag ($1\le k \le K-1$) and condition. At the $(r+1)^{th}$ iteration ($r\ge 0$) of the EM algorithm, the E-step computes the conditional expectation of (minus twice the logarithm of) the complete likelihood, i.e., the likelihood of the augmented data $({\mathbf{y}}_1,\ldots,{\mathbf{y}}_n,{\boldsymbol{\xi}}_1,\ldots, {\boldsymbol{\xi}}_n)$. The conditioning variables are ${\mathbf{y}}_1,\ldots,{\mathbf{y}}_n $, the current estimators $ \boldsymbol{\sigma}_{\xi}^{2(r)},\boldsymbol{\rho}_{\xi}^{(r)}$, and $\hat{{\boldsymbol{\beta}}}_0, \hat{{\boldsymbol{\gamma}}}_0, \hat{\sigma}_{\varepsilon m}^2 , \hat{\boldsymbol{\theta}}_{\varepsilon m } $. Up to constant terms, the conditional expectation is $$\label{EM E step}
\begin{split}
\hspace*{-3mm}Q \big(\boldsymbol{\sigma}_{\xi}^2,\boldsymbol{\rho}_{\xi} \big| \boldsymbol{\sigma}_{\xi}^{2(r)},\boldsymbol{\rho}_{\xi}^{(r)}\big)
&=
nK \ln \left| {\mathbf{D}}_{\xi} \right| + n \ln \left| {\mathbf{T}}_{\xi }\right| \\
&\quad + \sum_{j=1}^n {\boldsymbol{\xi}}_j^{(r)'} {\mathbf{T}}_{\xi }^{-1}\big( {\mathbf{D}}_{\xi}^{-1}\otimes {\mathbf{I}}_K\big) {\boldsymbol{\xi}}_j^{(r)}
+ \sum_{j=1}^n \mathrm{tr} \big\{ {\mathbf{T}}_{\xi }^{-1}\big( {\mathbf{D}}_{\xi}^{-1}\otimes {\mathbf{I}}_K\big) \mathbf{B}_j^{(r)} \big\} ,
\end{split}$$ where $ \mathbf{B}_j^{(r)}=\big[ {\mathbf{X}_j}' \hat{{\mathbf{V}}}_{\varepsilon jm}^{-1} {\mathbf{X}_j}+ \big({\mathbf{T}}_{\xi}^{(r)}\big)^{-1}
\big(( {\mathbf{D}}_{\xi}^{(r)})^{-1} \otimes {\mathbf{I}}_K\big) \big]^{-1} $, $ {\boldsymbol{\xi}}^{(r)}_j =\mathbf{B}_j^{(r)} {\mathbf{X}_j}' \hat{{\mathbf{V}}}_{\varepsilon jm}^{-1} \,{\mathbf{r}_j}$ is the predicted random effect for the $j^{th}$ subject, and ${\mathbf{r}_j}$ is the residual vector defined in step 2 of section \[sec: estimation\].
For $1\le l \le L$, the derivative of $Q$ with respect to $\sigma_{\xi l}^2$ is $$\label{dQ-dsigma2}
\frac{\partial Q}{\partial \sigma_{\xi l}^2} = \frac{nK}{ \sigma_{\xi l}^2}
- \frac{1}{\sigma_{\xi l}^4} \sum_{j=1}^n {\boldsymbol{\xi}}_{jl}^{(r)'} {\mathbf{T}}_{\xi l}^{-1} {\boldsymbol{\xi}}_{jl}^{(r)}
- \frac{1}{\sigma_{\xi l}^4} \sum_{j=1}^n \mathrm{tr} \big\{ {\mathbf{T}}_{\xi l }^{-1} \,
\mathbf{B}_{jll}^{(r)}\big\},$$ where the matrix $ \mathbf{B}_{j}^{(r)} $ has been partitioned in blocks $ \mathbf{B}_{jl l'}^{(r)} , 1\le l,l'\le L,$ of size $K\times K$.
Writing $ \mathbf{C}_l^{(r)} = \sum_{j=1}^n \big( {\boldsymbol{\xi}}_{jl}^{(r)} {\boldsymbol{\xi}}_{jl}^{(r)' } + \mathbf{B}_{jll}^{(r)} \big)$ and equating with zero, we get $$\begin{aligned}
\label{sigma xi EM}
\sigma_{\xi l}^2 & =
\frac{1}{nK}\bigg( \sum_{j=1}^n {\boldsymbol{\xi}}_{jl}^{(r)'} {\mathbf{T}}_{\xi l}^{-1} {\boldsymbol{\xi}}_{jl}^{(r)}+ \sum_{j=1}^n \mathrm{tr} \big\{ {\mathbf{T}}_{\xi }^{-1}
\mathbf{B}_{jll}^{(r)} \big\}\bigg)\nonumber \\
& = \frac{1}{nK}\,
\mathrm{tr} \big( {\mathbf{T}}_{\xi l}^{-1}\mathbf{C}_l^{(r)} \big).\end{aligned}$$
After plugging in (\[sigma xi EM\]), the variance-profile function $Q_p$ to be optimized in the M step of the algorithm is $$\begin{aligned}
\label{profiled Q}
Q_p\big(\boldsymbol{\rho}_{\xi} \big| \boldsymbol{\sigma}_{\xi}^{2(r)},\boldsymbol{\rho}_{\xi}^{(r)} \big) &= nK \ln \left| {\mathbf{D}}_{\xi} \right| + n \ln \left| {\mathbf{T}}_{\xi }\right| + n \nonumber \\
& = nK \sum_{l=1}^L \ln \big( \mathrm{tr} \big( {\mathbf{T}}_{\xi l}^{-1}\mathbf{C}_l^{(r)} \big) \big) - nK \ln(nK) +
n\sum_{l=1}^L \ln \left| {\mathbf{T}}_{\xi l}\right| + n .\end{aligned}$$
For computational speed, we may use any suitable gradient-based optimization method. Let ${\mathbf{D}}_k$ be the $K\times K$ matrix whose $(i,j)$ entry is 1 if $|i-j|=k$ and 0 otherwise. Then ${\mathbf{T}}_{\xi l} = \sum_{k=0}^{K-1} \rho_{\xi l}(k) {\mathbf{D}}_k$ and the gradient of $Q_p$ is $$\label{gradient profiled Q}
\frac{\partial Q_p}{\partial \rho_{\xi l }(k)} = -
nK \, \frac{ \mathrm{tr} \big( {\mathbf{T}}_{\xi l}^{-1} {\mathbf{D}}_{k} {\mathbf{T}}_{\xi l}^{-1} \mathbf{C}_l^{(r)} \big) } { \mathrm{tr} \big( {\mathbf{T}}_{\xi l}^{-1}\mathbf{C}_l^{(r)} \big)}
+ n\, \mathrm{tr} \big( {\mathbf{T}}_{\xi l}^{-1} {\mathbf{D}}_{k} \big)$$ for $1\le k \le K-1$ and $1\le l \le L$. The box constraints $-1\le \rho_{\xi l}(k) \le 1$ and positive definiteness of ${\mathbf{T}}_{\xi l}$ are enforced during the optimization. The updated variance estimator $\sigma_{\xi l}^{2(r+1)}$ is obtained by plugging $\rho_{\xi}^{(r+1)}$ in (\[sigma xi EM\]). Writing ${\boldsymbol{\Gamma}}^{(r)} = \big(\mathrm{vec}({\mathbf{T}}_{\xi 1}^{-1}\mathbf{C}^{(r)}_1 {\mathbf{T}}_{\xi 1}^{-1}),\ldots, \mathrm{vec}({\mathbf{T}}_{\xi L}^{-1} \mathbf{C}^{(r)}_L {\mathbf{T}}_{\xi L}^{-1})\big)$, ${\mathbf{D}}= \big(\mathrm{vec}({\mathbf{D}}_1),\ldots,\mathrm{vec}({\mathbf{D}}_{K-1})\big) $, $\mathbf{t}^{(r)}=\big( \mathrm{tr} \big( {\mathbf{T}}_{\xi 1}^{-1}\mathbf{C}_1^{(r)} \big),\ldots, \mathrm{tr} \big( {\mathbf{T}}_{\xi L}^{-1}\mathbf{C}_L^{(r)} \big)\big)'$, and $\mathbf{S} = \big( \mathrm{vec} ({\mathbf{T}}_{\xi 1}^{-1}), \ldots, \mathrm{vec} ({\mathbf{T}}_{\xi L}^{-1})\big)$, the gradient can be compactly written as $$\frac{\partial Q_p}{ \partial \boldsymbol{\rho}_{\xi} }= - nK \, \frac{\mathrm{vec} \big( {\mathbf{D}}' {\boldsymbol{\Gamma}}^{(r)} \big) }{ \mathbf{t}^{(r)} \otimes \mathbf{1}_{ K-1} }
+n\, \mathrm{vec} \big( {\mathbf{D}}' \mathbf{S} \big)\, ,$$ where the division is taken element-wise and $ \mathbf{1}_{ K-1}$ is a vector containing $(K-1)$ ones.
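To make the EM computations concrete, the sketch below (ours; it assumes the random-effect vector is stacked condition by condition, each block of length $K$) evaluates the E-step quantities $\mathbf{B}_j^{(r)}$, ${\boldsymbol{\xi}}_j^{(r)}$ and $\mathbf{C}_l^{(r)}$, and the gradient (\[gradient profiled Q\]) used in the M step.

```python
import numpy as np
from scipy.linalg import block_diag

def em_e_step(X_list, Vinv_list, r_list, T_list, sigma2_xi, K, L):
    """E step: posterior covariances B_j, predicted random effects xi_j,
    and the per-condition accumulators C_l."""
    prior_prec = block_diag(*[np.linalg.inv(T_list[l]) / sigma2_xi[l]
                              for l in range(L)])            # (LK, LK)
    C = [np.zeros((K, K)) for _ in range(L)]
    xi_all, B_all = [], []
    for Xj, Vinv, rj in zip(X_list, Vinv_list, r_list):
        Bj = np.linalg.inv(Xj.T @ Vinv @ Xj + prior_prec)
        xij = Bj @ Xj.T @ Vinv @ rj
        xi_all.append(xij)
        B_all.append(Bj)
        for l in range(L):
            s = slice(l * K, (l + 1) * K)
            C[l] += np.outer(xij[s], xij[s]) + Bj[s, s]
    return xi_all, B_all, C

def profiled_Q_gradient(T_list, C_list, n, K):
    """Gradient of Q_p with respect to the correlations rho_{xi l}(k)."""
    L = len(T_list)
    grad = np.zeros((L, K - 1))
    D = [np.eye(K, k=k) + np.eye(K, k=-k) for k in range(1, K)]   # D_k matrices
    for l in range(L):
        Tinv = np.linalg.inv(T_list[l])
        TinvC = Tinv @ C_list[l]
        t_l = np.trace(TinvC)
        for k in range(1, K):
            grad[l, k - 1] = (-n * K * np.trace(Tinv @ D[k - 1] @ TinvC) / t_l
                              + n * np.trace(Tinv @ D[k - 1]))
    return grad
```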
Maximum Likelihood Estimation
-----------------------------
For a given voxel $v$, the partial derivatives of the likelihood function are $$\begin{aligned}
\frac{\partial L}{\partial \sigma_{\xi l}^2 } & = \sum_{j=1}^n
\mathrm{tr}\big( {\mathbf{X}_{jl}}' \mathbf{V}_j^{-1}(v) {\mathbf{X}_{jl}}{\mathbf{T}}_{\xi l } \big)
- \sum_{j=1}^n {\mathbf{r}_j}' \mathbf{V}_j^{-1}(v) {\mathbf{X}_{jl}}{\mathbf{T}}_{\xi l } {\mathbf{X}_{jl}}' \mathbf{V}_j^{-1}(v) {\mathbf{r}_j}(v) \label{pdLsigma} \\
\noalign{and} \frac{\partial L}{\partial \rho_{\xi l}(k) } & = \sigma_{\xi l}^2 \sum_{j=1}^n
\mathrm{tr}\big( {\mathbf{X}_{jl}}' \mathbf{V}_j^{-1}(v) {\mathbf{X}_{jl}}{\mathbf{D}}_k \big)
- \sigma_{\xi l}^2 \sum_{j=1}^n {\mathbf{r}_j}' \mathbf{V}_j^{-1}(v) {\mathbf{X}_{jl}}{\mathbf{D}}_k {\mathbf{X}_{jl}}' \mathbf{V}_j^{-1}(v) {\mathbf{r}_j}(v) \label{pdLrho}\end{aligned}$$ for $1\le k \le K-1$ and $ 1\le l \le L$. Based on (\[pdLsigma\])-(\[pdLrho\]) and a suitable gradient-based optimization procedure, we obtained the ML estimators $\hat{\sigma}_{\xi l}^2 (v) $ and $\hat{\rho}_{\xi l}(k,v)$. We then define the aggregated correlation estimate $\hat{\rho}_{\xi l}(k)$ as the median of the $\hat{\rho}_{\xi l}(k,v)$ across voxels where the estimation was carried out.
Sampling distribution of the HRF estimators
===========================================
Here we provide the theoretical justification of the large-sample approximation to the sampling distribution of the estimators $\hat{{\boldsymbol{\beta}}}(v)$ and $\hat{{\boldsymbol{\gamma}}}(v)$.
First, the estimators used in this paper rely on standard statistical procedures whose consistency properties are well documented in the literature. In step 1, the penalized least squares estimator $\hat{\mathbf{h}}(v)$ of the HRF coefficients $\gamma_{lk}(v)$ is consistent as $n\to\infty $ and $\lambda_0 \to 0$. Note that increasing the number of scans $T_j$ reduces the influence of the noise $\varepsilon$ on the estimation but not the sampling variability (subject effects $\xi$). In fact, the variance of the pilot estimator $\hat{\mathbf{h}}(v)$ is dominated by the sampling variability: it is of order $\mathcal{O}(\sum_j T_j^2 / (\sum_j T_j)^2 )$, i.e. $\mathcal{O}(1/n)$ if the $T_j$ are of comparable size. Also, the parameter $\lambda_0$ governing the penalty on HRF shapes lying outside the null space $\boldsymbol{\Psi}$ must go to zero to render the pilot estimator asymptotically unbiased. In step 2, the Yule-Walker estimators of the noise parameters $\sigma_{\varepsilon m}^2$ and $\boldsymbol{\theta}_{\varepsilon m}$ are consistent as $\max_j T_j \to \infty$. The consistency of steps 3-5 derives from the large-sample properties of least squares and maximum likelihood estimators and from the consistency of the previous estimation steps. Note that the estimators of the noise and random effects parameters are averaged across subjects and space (except for the variance estimators $\hat{\sigma}_{\xi l}^2(v)$). As a consequence, their variance is generally small in comparison to the variance of the HRF estimators.
We now turn to the large-sample distribution of $\hat{{\boldsymbol{\beta}}}(v)$ and $\hat{{\boldsymbol{\gamma}}}(v)$. Assuming that for each subject, stimuli of each type are presented sufficiently often, the design matrices ${\mathbf{X}_j}$ are of order $\mathcal{O}(\sqrt{T_j})$ in norm and the matrix ${\mathbf{M}}= {\mathbf{M}}(v)$ is of order $\mathcal{O}(\sum_j T_j)$ in probability. Given the consistency of $\hat{{\boldsymbol{\gamma}}}(v)$, $\hat{{\mathbf{V}}}_j(v)$, the normality of the data, and the fact that $\mathrm{Var}(\boldsymbol{\eta}) \approx {\mathbf{M}}$, one can apply Slutsky’s theorem and the Law of Large Numbers in (\[update beta gls\]) to obtain the large-sample approximation $ \hat{{\boldsymbol{\beta}}}(v) \to N({\boldsymbol{\beta}}(v) , [ ( {\mathbf{I}}_L \otimes {\boldsymbol{\gamma}}(v) )' {\mathbf{M}}( {\mathbf{I}}_L \otimes {\boldsymbol{\gamma}}(v) ) ]^{-1} )$ (with $\hat{{\mathbf{V}}}_j^{-1}(v)$ in ${\mathbf{M}}$ replaced by ${\mathbf{V}}_j^{-1}(v)$) as $n,\sum_j T_j \to\infty$ and $\lambda \to 0$. Turning to the estimator $\hat{{\boldsymbol{\gamma}}}(v)$, simple algebraic manipulations in (\[Lagrange gls\]) and (\[update gamma gls\]) show that $\hat{C} = -n\lambda\, \hat{{\boldsymbol{\gamma}}}(v)' {\mathbf{P}}\hat{{\boldsymbol{\gamma}}}(v)$. In other words, the terms $\hat{C}{\mathbf{I}}_K$ and $n\lambda {\mathbf{P}}$ in (\[update gamma gls\]) are of order $\mathcal{O}(n \lambda)$ in probability and thus negligible in comparison to ${\mathbf{M}}$. Applying the same arguments as with $\hat{{\boldsymbol{\beta}}}(v)$, we obtain the limit distribution $ \hat{{\boldsymbol{\gamma}}}(v) \to N({\boldsymbol{\gamma}}(v), [ ( {\boldsymbol{\beta}}(v) \otimes{\mathbf{I}}_K )'{\mathbf{M}}( {\boldsymbol{\beta}}(v) \otimes {\mathbf{I}}_K ) ]^{-1} )$.
![[Overview of simulation set-up (A) A set of 25 equally sized squares were placed within a static brain image to represent regions of interest. BOLD signals were simulated based on different stimulus functions, which varied systematically across the squares in their onset and duration of neuronal activation. From left to right the onset of activation varied between the squares from the first to the fifth TR. From top to bottom, the duration of activation varied from one to nine TR in steps of two. (B) The five HRFs with varying duration. The plot illustrates differences in time-to-peak and width attributable to changes in duration.]{}[]{data-label="SimDesc"}](SimDescription.pdf){width="90.00000%"}
![[Results of the first simulation shown for the standard GLM/OLS approach (top row), and our hierarchical model (bottom row). From left-to-right the columns represent the estimated values of $\beta$, $\theta_{\varepsilon}$, $\sigma^2_{\varepsilon}$, $t$-map and thresholded $t$-map. ]{}[]{data-label="Sim1Res"}](Sim1Res.pdf){width="100.00000%"}
![[Results of the second simulation shown for the standard GLM/OLS approach (top row), and our hierarchical model (bottom row). From left-to-right the columns represent the estimated values of $\beta$, $\theta_{\varepsilon}$, $\sigma^2_{\varepsilon}$, the $t$-map and the thresholded $t$-map. ]{}[]{data-label="Sim2Res"}](Sim2Res.pdf){width="100.00000%"}
![[Results of the third simulation shown for the standard GLM/OLS approach (top row), and our hierarchical model (bottom row). From left-to-right the columns represent the estimated values of $\beta$, $\theta_{\varepsilon}$, $\sigma^2_{\varepsilon}$, $t$-map and thresholded $t$-map. ]{}[]{data-label="Sim3Res"}](Sim3Res.pdf){width="100.00000%"}
![[Results of the fourth simulation shown for the standard GLM/OLS approach (top row), and our hierarchical model (bottom row). From left-to-right the columns represent the estimated and thresholded $t$-map for testing whether the contrast between the two conditions was significantly different from 0.]{}[]{data-label="Sim4"}](Sim4.pdf){width="70.00000%"}
![[Results of the fifth simulation shown for the standard GLM/OLS approach (top row), and our hierarchical model (bottom row). From left-to-right the columns represent the estimated and thresholded $t$-map. ]{}[]{data-label="Sim5"}](Sim5.pdf){width="70.00000%"}
![[The results of $100$ replications of the first simulation. (A) The portion of times the standard GLM/OLS approach gave significant results in each voxel. (B) The same results for our hierarchical model. Clearly the hierarchical model is able to effectively separate signal from noise in a more consistent manner than the GLM/OLS. (C) The portion of times the estimated HRF deviated from the canonical form using our model.]{}[]{data-label="Sim3b"}](Sim3b.pdf){width="90.00000%"}
![[The results of $100$ replications of the first simulation. (Top row) The true and estimated values of the time-to-peak for the group-level HRF. (Bottom row) Same results for the width.]{}[]{data-label="Sim3bTW"}](Sim3bTTP_Width.pdf){width="70.00000%"}
![[(A) The location of the slice and an illustration of areas of interest. Both rdACC and S2 are regions known to process pain intensity. (B) A statistical map obtained using the proposed hierarchical model. ]{}[]{data-label="Results"}](Results.pdf){width="90.00000%"}
![[Estimates of the subject-specific HRF computed using voxels from the rdACC and S2. The group-level estimates are shown in bold.]{}[]{data-label="ResultsHRF"}](ResultsHRF.pdf){width="70.00000%"}
[**Jack polynomial fractional quantum Hall states and their generalizations**]{}
Wendy Baratta and Peter J. Forrester
Department of Mathematics and Statistics, University of Melbourne,\
Victoria 3010, Australia\
> In the study of fractional quantum Hall states, a certain clustering condition involving up to four integers has been identified. We give a simple proof that particular Jack polynomials with $\alpha = - (k+1)/(r-1)$, $(r-1)$ and $(k+1)$ relatively prime, and with partition given in terms of its frequencies by $[n_00^{(r-1)s}k 0 ^{r-1}k 0 ^{r-1}k \cdots 0 ^{r-1} m]$ satisfy this clustering condition. Our proof makes essential use of the fact that these Jack polynomials are translationally invariant. We also consider nonsymmetric Jack polynomials, symmetric and nonsymmetric generalized Hermite and Laguerre polynomials, and Macdonald polynomials from the viewpoint of the clustering.
Introduction
============
The symmetric Jack polynomials $P_{\kappa}(z;\alpha)$, $z:=(z_1,\ldots,z_N)$ a coordinate in $\mathbb C^N$, $\alpha$ a scalar and $\kappa$ a partition, are an orthogonal, homogeneous basis for symmetric functions generalizing the Schur ($\alpha=1$) and zonal ($\alpha=2$) polynomials. They appear in physics in random matrix theory [@Fo10 Ch. 12 & 13], [@De08] and in the study of quantum many body wave functions [@Fo10 Ch. 11], [@BH08a; @BH08]. Here we will be interested in the latter interpretation.
There are two classes of quantum many body systems for which Jack polynomials are relevant, one involving the $1/r^2$ pair potential in one dimension and the other corresponding to certain fractional quantum Hall states. Regarding the former [@Fo10 Ch. 11], with the domain a unit circle, the corresponding Schrödinger operator reads $$\label{HC}
H^{(C)}:=-\sum_{j=1}^N\frac{\partial^2}{\partial \theta^2_j}+\frac{\beta}{4}\bigg(\frac{\beta}{2}-1\bigg)\sum_{1\leq j <k \leq N}\frac{1}{\sin^2(\theta_k-\theta_j)/2},$$ where $\beta$ parametrizes the coupling. With $z_j=e^{i\theta_j}$ the ground state wave function for (\[HC\]) is proportional to $$\label{J1}
\psi_0^{(C)}(z):=|\Delta(z)|^{\beta/2},\qquad \Delta(z):=\prod_{1\leq j <k \leq N}(z_j-z_k)$$ and with $\alpha:=2/\beta$, a complete set of eigenfunctions is given in terms of Jack polynomials by [@Fo10 eq. (13.199)] $$\label{J1a}
\psi_0^{(C)}(z)z^{-l}P_{\kappa}(z;\alpha)\qquad (l=0,1,\ldots)$$ where $$\label{J1b}
z^\kappa:=z_1^{\kappa_1}z_2^{\kappa_2}\ldots z_N^{\kappa_N}$$ and for $l>0$ it is required that $\kappa_N=0$.
Next we will revise how certain fractional quantum Hall states relate to Jack polynomials [@BH08a; @BH08]. An infinite family of bosonic fractional quantum Hall states, indexed by a positive integer $k$, is due to Read and Rezayi [@RR99]. For a system of $kN$ particles, these are defined up to normalization as $$\label{RR}
\psi_{RR}^{(k)}={\rm Sym} \prod_{s=1}^k\prod_{1\leq i_s <j_s \leq N}(z_{i_s}-z_{j_s})^2$$ where Sym denotes symmetrization (see (\[21\]) below). Note that the $kN$ particles are thus partitioned into $k$ groups of $N$. Setting $k=1$ we read off that $$\psi_{RR}^{(1)}=\prod_{1\leq j<k\leq N}(z_j-z_k)^2$$ which is the filling factor $\nu=1/2$ bosonic Laughlin state. For $k=2$ it turns out that [@RR99] $$\label{Pf}
\psi_{RR}^{(2)}={\rm Pf}\Big[\frac{1}{z_k-z_l}\Big]_{k,l=1,\ldots,2N} \prod_{1\leq i<j \leq 2N}(z_{i}-z_{j}),$$ where the diagonal entry is to be replaced by zero if $k=l$, which is the filling factor $\nu=1$ Moore-Read state [@MR91]. As noted in [@RR99], $\psi_{RR}^{(k)}$ is characterized by the requirements that it be symmetric, and exhibit the factorization property $$\label{RRz}
\psi_{RR}^{(k)}(z_1,\ldots,z_{(N-1)k},\underbrace{z,\ldots,z}_{k \;\; {\rm times}}\,)=\prod_{l=1}^{(N-1)k}(z_l-z)^2\psi_{RR}^{(k)}(z_1,\ldots,z_{(N-1)k}).$$ It is at this stage the Jack polynomials show themselves. Thus it is a remarkable finding of recent times [@BH08a] that (\[RRz\]) is satisfied by $$\label{J3a}
\psi_{RR}^{(k)}(z)=P_{(2\delta)^k}(z;-k-1),$$ where $\delta:=(N-1,N-2,\ldots,1,0)$, $2\delta$ means each part of $\delta$ is multiplied by $2$, and $(2\delta)^k$ means each part of $2\delta$ is repeated $k$ times.
The relation (\[J3a\]) is one result in a broader theory relating Jack polynomials to quantum Hall states. This comes about by generalizing (\[RRz\]) to the so called $(k,r)$ clustering property [@BH08] $$\label{J4}
\psi^{(k, r)}(z_1,\ldots,z_{N-k},\underbrace{z,\ldots,z}_{k \;\; {\rm times}}\,)=\prod_{l=1}^{N-k}(z_l-z)^r\psi^{(k,r)}(z_1,\ldots,z_{N-k})$$ (see also [@WW08; @LWWW09] in relation to general factorizations of quantum Hall states). For $k=1$ and $r$ even this, together with the requirement that $\psi^{(k,r)}$ be symmetric, implies $$\label{J4a}
\psi^{(1,r)}(z_1,\ldots,z_N)=\prod_{1\leq j<k \leq N}(z_j-z_k)^r$$ which is the filling factor $\nu=1/r$ bosonic Laughlin state. For $k=2$, $N\mapsto 2N$ and $r$ odd it is known [@BH08a] that (\[J4\]) is satisfied by $$\psi^{(2,r)}(z_1,\ldots,z_{2N})={\rm Pf}\Big[\frac{1}{z_k-z_l}\Big]_{k,l=1,\ldots,2N}\prod_{1\leq i<j \leq 2N}(z_i-z_j)^r$$ which is the $\nu=1/r$ Moore-Read state.
For general $k\in \mathbb{Z}^+$ it was conjectured in [@BH08a] and later proved in [@ES09] using methods from conformal field theory (see also the related works [@BGS09; @ERS10; @EBS10]) that for $k+1$ and $r-1$ relatively prime, (\[J4\]) is satisfied by $$\label{J51}
\psi^{(k,r)}(z_1,\ldots, z_N)=P_{\kappa(k,r)}(z_1,\ldots, z_N;-(k+1)/(r-1)),$$ where $\kappa(k,r)$ is the staircase partition [@JL10] $$\label{J52}
(((\beta+1)r+1)^k,(\beta r+1)^k,\ldots,(r+1)^k).$$ In (\[J52\]) $\beta\in \mathbb{Z}^+$ must be related to $N$ by $$\label{J53}
N=\frac{k+1}{r-1}+k(\beta+2),$$ and the notation $(\kappa_1^{n_1},\kappa_2^{n_2},\ldots,\kappa_p^{n_p})$ means that the part $\kappa_1$ is repeated $n_1$ times, $\kappa_2$ is repeated $n_2$ times etc. Alternatively, if $f_j$ denotes the frequency of the part equal to $j$ in $\kappa$ (e.g. if $\kappa=211100$ then $f_0=2,\,f_1=3,\,f_2=1$), $\kappa$ is specified in terms of its frequencies according to [@BH08a] $$\label{J54}
\kappa(k,r)=[k 0^{r-1}k 0^{r-1}k 0^{r-1}k\ldots].$$
We see from (\[J52\]) or (\[J54\]) that $$\label{4.1}
\kappa_i-\kappa_{i+k}\geq r$$ which in [@BH08a] was interpreted as a generalized exclusion principle. The significance of such partitions in Jack polynomial theory was first noticed by Feigin et al. [@FJMM02], who showed that the set of Jack polynomials $\{P_{\kappa(k,r)+\mu}(z;-(k+1)/(r-1)) \}_\mu$ forms a basis for the set of symmetric functions vanishing when $k+1$ variables coincide.
The Laughlin, Moore-Read and Read-Rezayi states are all translationally invariant, and so satisfy $$\label{Lp}
L^+\psi=0,\qquad L^+:=\sum_{j=1}^N\frac{\partial}{\partial z_j}.$$ This can also be interpreted as a highest weight condition in a raising and lowering operator formalism of angular momentum on the sphere, projected onto the plane [@LL97]. The companion lowest weight condition is that $$\label{Lm}
\bigg(\sum_{j=1}^Nz_j^2 \frac{\partial}{\partial z_j}-N_\phi \sum_{j=1}^Nz_j\bigg)\psi=0$$ where $N_\phi$ is interpreted as the monopole charge. When (\[Lp\]) and (\[Lm\]) are satisfied, as is the case for the Laughlin, Moore-Read and Read-Rezayi states, $N_\phi$ must obey $$\sum_{j=1}^Nz_j\frac{\partial}{\partial z_j} \psi=\frac{N}{2}N_\phi\psi.$$ Most importantly, the Jack polynomials (\[J51\]) satisfy both (\[Lp\]) and (\[Lm\]) and so are well founded quantum Hall states.
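As a simple illustrative check (not needed in the sequel), take $N=2$ and the $\nu=1/2$ Laughlin state $\psi=(z_1-z_2)^2$, for which $N_\phi=2$. Then $$\sum_{j=1}^2\frac{\partial \psi}{\partial z_j}=2(z_1-z_2)-2(z_1-z_2)=0,\qquad \sum_{j=1}^2\Big(z_j^2\frac{\partial \psi}{\partial z_j}-N_\phi z_j\psi\Big)=2(z_1^2-z_2^2)(z_1-z_2)-2(z_1+z_2)(z_1-z_2)^2=0,$$ while $\sum_{j=1}^2z_j\frac{\partial \psi}{\partial z_j}=2\psi$, in keeping with $\frac{N}{2}N_\phi=2$.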
The contribution to the study of these so called Jack states in this paper relates to viewing the clustering condition (\[J4\]), and related factorization formulas, as identities in Jack polynomial theory. We know that symmetric Jack polynomial theory has a number of extensions and generalizations. In particular there are multivariable classical orthogonal polynomials which appear in the study of the eigenfunctions of variants of the Calogero-Sutherland Schrödinger operator [@BF97b]; there are nonsymmetric versions of the Jack polynomials and the multivariable classical orthogonal polynomials [@Ch95; @BF98b]; and there are $q$-generalizations by way of Macdonald polynomial theory [@Ma95 Ch. VI]. It is our aim to initiate a study of the clustering condition (\[J4\]), and related factorizations in the context of these additional families of polynomials.
In Section 2 we revise Jack polynomial theory, and its extensions to generalized Hermite and Laguerre polynomials, and Macdonald polynomials, as needed for use in subsequent sections. In Section 3 we show that the case $k=1$ of the clustering (\[J4\]) can be solved in terms of Jack polynomials involving an arbitrary partition (actually this is an already known result). We provide too a similar solution in terms of nonsymmetric Jack polynomials, symmetric and nonsymmetric generalized Hermite and Laguerre polynomials, and symmetric Macdonald polynomials (the latter after an appropriate $(q,t)$-generalization).
In Section 4 we provide a very simple proof that (\[J51\]) satisfies (\[J4\]). The main ingredient in our proof is the fact that the Jack polynomials (\[J51\]) are translationally invariant. This proof also applies to the more general clustering (\[25.1\]) below, first isolated in [@BH08]. We show that the symmetric generalized Hermite and Laguerre polynomials coincide with the Jack polynomials under the conditions that the latter satisfy (\[25.1\]). We provide a Macdonald polynomial analogue of (\[25.1\]), but we do not have a proof.
Preliminary theory
==================
Jack polynomials
----------------
Let $\kappa:=(\kappa_1,\ldots,\kappa_N)$ denote a partition of non-negative integers such that $\kappa_1\geq\kappa_2\geq\ldots\geq \kappa_N$ and $|\kappa|:=\sum_{j=1}^N\kappa_j$ be its modulus, $l(\kappa)$ its length (i.e. number of non-zero parts) and define $z^\kappa$ as in (\[J1b\]). The monomial symmetric functions $m_\kappa(z)$ are specified by $$m_\kappa(z)=\frac{1}{C}{\rm Sym} \, z^{\kappa}$$ where $$\label{21}
{\rm Sym}f(z_1,\ldots, z_N)=\sum_{\sigma \in S_N}f(z_{\sigma(1)},\ldots, z_{\sigma(N)})$$ and the normalization $C$ is chosen so that the coefficient of $z^\kappa$ in $m_\kappa$ is unity.
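For example, with $N=3$ and $\kappa=(2,1,1)$ one has $C=2$ and $$m_{(2,1,1)}(z)=z_1^2z_2z_3+z_1z_2^2z_3+z_1z_2z_3^2.$$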
Continuing with the definitions, let $<$ be a partial ordering on partitions with $|\mu|=|\kappa|$, $\mu\not=\kappa$, specified by $\mu<\kappa$ iff $$\sum_{j=1}^p \mu_j \leq \sum_{j=1}^p\kappa_j \qquad (p=1,\ldots,N).$$ The symmetric Jack polynomial $P_\kappa(z;\alpha)$, labelled by a partition $\kappa$ and dependent on a scalar parameter $\alpha$, can be specified as the polynomial eigenfunction of the differential operator $$\label{H1}
\widetilde{H}^{(C)}:=\sum_{j=1}^N\Big(z_j\frac{\partial}{\partial z_j} \Big)^2+\frac{2}{\alpha}\sum_{1\leq j<k\leq N}\frac{z_j+z_k}{z_j-z_k} \Big(\frac{\partial}{\partial z_j}-\frac{\partial}{\partial z_k} \Big)$$ with eigenvalue $$\label{H1a}
e(\kappa;\alpha)=\sum_{j=1}^N\kappa_j(\kappa_j-1)+(\alpha(N-1)+1)|\kappa|-2\alpha\sum_{j=1}^N(j-1)\kappa_j,$$ and having the structure $$P_{\kappa}(z;\alpha)=m_\kappa(z)+\sum_{\mu<\kappa}a_{\kappa \mu}m_\mu(z)$$ for some coefficients $a_{\kappa \mu}\in \mathbb{Q}(\alpha)$. In fact $P_{\kappa}(z;\alpha)$ is the unique symmetric polynomial eigenfunction of $\widetilde{H}_N^{(C)}$ with leading term $m_\kappa(z)$ and eigenvalue (\[H1a\]). We remark too that $\widetilde{H}^{(C)}$ is related to the Calogero-Sutherland operator (\[HC\]) by $$\label{HH}
|\Delta(z)|^{-1/\alpha}(H^{(C)}-E_0^{(C)})|\Delta(z)|^{1/\alpha}=\widetilde{H}^{(C)},$$ where $E_0^{(C)}$ is the ground state energy.
Fundamental to the theory of the integrability properties of (\[HC\]) is the more general Schrödinger operator $$\label{HCE}
H^{(C,Ex)}=-\sum_{j=1}^N\frac{\partial^2}{\partial \theta_j^2}+\frac{\beta}{4}\sum_{1\leq j < k \leq N}\frac{(\beta/2-s_{jk})}{\sin^2(\theta_j-\theta_k)/2}$$ where $s_{jk}$ is the operator acting on a function $f(\theta_1,\ldots,\theta_N)$ by interchanging $\theta_j$ and $\theta_k$. The ground state wave function is again proportional to (\[J1\]). Conjugating by this state as in (\[HH\]) gives the transformed operator $$\begin{aligned}
\widetilde{H}^{(C,Ex)}=&|\Delta(z)|^{-1/\alpha}(H^{(C,Ex)}-E_0^{(C)})|\Delta(z)|^{1/\alpha} \notag \\
=&\sum_{j=1}^N\Big(z_j\frac{\partial}{\partial z_j}\Big)^2+\frac{(N-1)}{\alpha}\sum_{j=1}^Nz_j\frac{\partial}{\partial z_j } \notag\\
&+\frac{2}{\alpha}\sum_{1\leq j <k \leq N}\frac{z_jz_k}{z_j-z_k}\bigg(\Big(\frac{\partial}{\partial z_j }-\frac{\partial}{\partial z_k } \Big) - \frac{1-s_{jk}}{z_j-z_k} \bigg) \label{HCE1}.\end{aligned}$$ The significance of (\[HCE\]) shows itself upon the introduction of the mutually commuting Cherednik operators [@Ch95], [@Fo10 Def. 11.4.3] $$\label{xi}
\xi_i:=\alpha z_i d_i+1-N+\sum_{p=i+1}^Ns_{ip} \qquad (i=1,\ldots,N),$$ where $d_i$ denotes the type $A$ Dunkl operator [@Du89], [@Fo10 Def. 11.4.2] $$\label{di}
d_i:=\frac{\partial}{\partial z_i}+\frac{1}{\alpha}\sum_{\substack{k=1 \\ \not= i}}^N\frac{1-s_{jk}}{z_i-z_k}.$$ Thus $$\widetilde{H}^{(C,Ex)}=\frac{1}{\alpha^2}\sum_{j=1}^n\Big( \xi_i+\frac{N-1}{2} \Big)^2 -E_0^{(C)}.$$
With $\eta$ denoting a composition $\eta=(\eta_1,\ldots,\eta_N)$ $(\eta_j\in\mathbb{Z}_{\geq0})$, $\{\xi_i\}$ permits a complete set of simultaneous polynomial eigenfunctions $\{ E_{\eta}(z;\alpha)\}_{\eta},$ $$\xi_i E_\eta(z;\alpha)=\overline{\eta}_i E_\eta(z;\alpha) \qquad (i=1,\ldots,N),$$ where the eigenvalue $\overline{\eta}_i$ is specified by $$\label{ni}
\overline{\eta}_i=\alpha \eta_i - \#\{k<i|\eta_k\geq \eta_i\}-\#\{k>i|\eta_k>\eta_i\}.$$ The $E_\eta(z;\alpha)$ are referred to as the nonsymmetric Jack polynomials, and analogous to (\[H1a\]) they exhibit the structure $$\label{di1}
E_\eta(z;\alpha)=z^\eta+\sum_{\nu \prec \eta}\widetilde{a}_{\eta \nu} z^\nu.$$ With $\rho^+$ denoting the partition corresponding to the composition $\rho$, in (\[di1\]) $\prec$ denotes the Bruhat ordering on compositions, defined by the statement that $\nu\prec\eta$ if $\nu^+\prec \eta^+$, or in the case $\nu^+=\eta^+$, if $\nu=\prod_{l=1}^rs_{i_lj_l}\eta$ where $\eta_{i_l}>\eta_{j_l}$, $i_l<j_l$.
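For example, with $N=3$ and $\eta=(1,0,2)$, (\[ni\]) gives $$\overline{\eta}_1=\alpha-1,\qquad \overline{\eta}_2=-2,\qquad \overline{\eta}_3=2\alpha,$$ a permutation of the eigenvalues $(2\alpha,\alpha-1,-2)$ associated with the partition $\eta^+=(2,1,0)$.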
The operator (\[HCE\]) also permits a complete set of symmetric, and anti-symmetric, polynomial eigenfunctions. The complete set of symmetric eigenfunctions are the symmetric Jack polynomials. Since Sym commutes with (\[HCE1\]) we have $$\label{J5a}
{\rm Sym} \, E_\eta(z;\alpha)=a_\eta P_{\eta^+}(z;\alpha)$$ for some $a_\eta$ (see [@Fo10 eq. (12.101)]).
The complete set of anti-symmetric polynomial eigenfunctions of (\[HCE1\]) are referred to as the anti-symmetric Jack polynomials [@BF99]. They are denoted $S_{\kappa+\delta}(z;\alpha)$ where $\delta$ is as in (\[J3a\]), and have the structure $$S_{\kappa+\delta}(z;\alpha)=\Delta(z)\Big(m_\kappa+\sum_{\sigma<\kappa}\widehat{a}_{\kappa \sigma}m_\sigma\Big)$$ (cf. (\[H1a\])). With $${\rm Asym} \, f(z_1,\ldots, z_N):=\sum_{P\in S_N}\epsilon(P)f(z_{P(1)},\ldots,z_{P(N)}),$$ where $\epsilon(P)$ denotes the signature of $P$, analogous to (\[J5a\]), for $\rho^+=\kappa+\delta$, we have $$\label{J5b}
{\rm Asym} \, E_\rho(z)=c_\rho S_{\kappa+\delta}(z;\alpha)$$ for some $c_\rho$ (see [@Fo10 eq. (12.113)]). Furthermore, the symmetric and anti-symmetric Jack polynomials are related by [@Fo10 eq. (12.118)] $$\label{J6a}
S_{\kappa+\delta}(z;\alpha)=\Delta(z)P_{\kappa}(z;\alpha/(1+\alpha)).$$
Generalized classical polynomials
---------------------------------
The quantum many body system on a circle with $1/r^2$ pair potential, as specified by the Schrödinger operator (\[HC\]), can also be defined on a line with an harmonic confining potential. When generalized to include exchange terms the Schrödinger operator for the latter reads [@Fo10 Prop. 11.3.1] $$\label{517}
H^{(H,Ex)}:=-\sum_{j=1}^N\frac{\partial^2}{\partial x_j^2}+\frac{\beta^2}{4}\sum_{j=1}^Nx_j^2+\beta\sum_{1\leq j< k \leq N}\frac{\beta/2-s_{jk}}{(x_j-x_k)^2}.$$ This has ground state wave function proportional to $$\label{518}
\psi_0^{(H)}(x)=\prod_{l=1}^Ne^{-\beta x_l ^2/4}\prod_{1\leq j<k\leq N}|x_k-x_j|^{\beta/2},$$ and furthermore permits a complete set of eigenfunctions of the form $$\psi_0^{(H)}(x)E_\eta^{(H)}(\sqrt{\beta/2}x;\alpha),$$ where $\{E_\eta^{(H)}(y;\alpha) \}$ are referred to as the generalized nonsymmetric Hermite polynomials. These polynomials are eigenfunctions of the transformed operator $$\begin{aligned}
\widetilde{H}^{(H,Ex)}&:=-\frac{2}{\beta}(\psi_0^{(H)}(x))^{-1}(H^{(H,Ex)}-E_0^{(H)})\psi_0^{(H)}(x) \notag \\
&=\sum_{j=1}^N\Big(\frac{\partial^2}{\partial y_j^2} -2 y_j \frac{\partial}{\partial y_j}\Big)+\frac{2}{\alpha}\sum_{j<k}\frac{1}{y_j-y_k}\bigg( \Big(\frac{\partial}{\partial y_j}-\frac{\partial}{\partial y_k}\Big)-\frac{1-s_{jk}}{y_j-y_k}\bigg), \label{HEx}\end{aligned}$$ where $E_0^{(H)}$ denotes the ground state energy and we have changed variables $y_j=\sqrt{\beta/2}x_j$.
The operator (\[HEx\]) permits a decomposition in terms of the generalized Laplacian $$\Delta_A:=\sum_{i=1}^Nd_i^2,$$ where $d_i$ denotes the Dunkl operator (\[di\]). With the $d_i$ defined in terms of $\{y_i\}$, a direct calculation shows [@Fo10 Prop. 11.5.1] $$\label{J7a}
\Delta_A=\widetilde{H}^{(H,Ex)}+2\sum_{j=1}^Ny_j\frac{\partial}{\partial y_j}.$$ Moreover, $\Delta_A$ can be used to generate the $E_\eta^{(H)}$ from the nonsymmetric Jack polynomials according to [@Fo10 eq. (13.91)] $$\label{J7}
{\rm exp} \Big(-\frac{1}{4}\Delta_A \Big)E_\eta(y;\alpha)=E_\eta^{(H)}(y;\alpha).$$ And with symmetric $P_\kappa^{(H)}$ and anti-symmetric $S_{\kappa+\delta}^{(H)}$ generalized Hermite polynomials constructed from the $E_\eta^{(H)}$ by the analogue of (\[J5a\]) and (\[J5b\]), the appropriate modification of (\[J7\]) generates these polynomials from their symmetric and anti-symmetric counterparts.
It is well known [@OP83] that (\[517\]) is related to the $A$ type root system. There is also a Calogero-Sutherland system on the half line $x\geq0$ with $B$ type symmetry (unchanged by $x\mapsto -x$), specified by the Schrödinger operator $$\begin{aligned}
H^{(L,Ex)}:=&-\sum_{j=1}^N\frac{\partial^2}{\partial x_j^2}+\frac{\beta^2}{4}\sum_{j=1}^Nx_j^2+\frac{(\beta a + 1)}{2}\sum_{j=1}^N\frac{(\beta a +1) /2-\sigma_j}{x_j^2} \notag\\
&+\beta\sum_{1\leq j<k\leq N}\bigg(\frac{\beta/2-s_{jk}}{(x_j-x_k)^2}+\frac{\beta/2-\sigma_j \sigma_k s_{jk}}{(x_j+x_k)^2} \bigg). \label{11.54}
\end{aligned}$$ Here $\sigma_j$ is the operator which replaces the coordinate $x_j$ by $-x_j$.
The ground state wave function is proportional to $$\psi_0^{(L)}(x^2)=\prod_{l=1}^Nx_l^{(\beta a + 1)/2}e^{-\beta x_l^2/4}\prod_{1\leq j < k \leq N}|x_k^2-x_j^2|^{\beta/2}$$ and there is a complete set of even eigenfunctions of the form [@BF98b] $$\psi_0^{(L)}(x^2)E_{\eta}^{(L)}\Big( \frac{\beta}{2}x^2;\alpha\Big),$$ where $\{ E_\eta^{(L)}(y^2;\alpha)\}$ are referred to as the generalized nonsymmetric Laguerre polynomials. The latter are eigenfunctions of the transformed operator $$\begin{aligned}
\widetilde{H}^{(L,Ex)}:=&\frac{2}{\beta}(\psi_0^{(L)})^{-1}(H^{(L,Ex)}-E_0^{(L)})\psi_0^{(L)}(x^2) \notag \\
=&\frac{1}{4}\sum_{j=1}^N\Big(\frac{\partial^2}{\partial y_j^2}-2y_j\frac{\partial}{\partial y_j}+(2a+1)\frac{1}{y_j}\frac{\partial}{\partial y_j} \Big) \notag
\\
&+\frac{1}{\alpha}\sum_{j<k}\frac{1}{y_j^2-y_k^2}
\bigg( \Big( y_j\frac{\partial}{\partial y_j}-y_k\frac{\partial}{\partial y_k}\Big)
-\frac{y_j^2+y_k^2}{y_j^2-y_k^2}(1-s_{jk})\bigg),
\label{LEx}\end{aligned}$$ where as in (\[HEx\]) we have changed variables $y_j=\sqrt{\beta/2}x_j$. Analogous to (\[J7a\]), with $$d_i^{(B)}:=\frac{\partial}{\partial y_i}+\frac{1}{\alpha}\sum_{p=1}^N\Big( \frac{1-s_{ip}}{y_i-y_p}+\frac{1-\sigma_i\sigma_p s_{ip}}{y_i+y_p}\Big)+\frac{(a+1/2)}{y_i}(1-\sigma_i)$$ we have [@Fo10 eq. (11.78)] $$\label{LEy}
\Delta_B:=\sum_{i=1}^N(d_i^{(B)})^2=4 \Big(\widetilde{H}^{(L,Ex)}+\frac{1}{2}\sum_{j=1}^Ny_j\frac{\partial}{\partial y_j}\Big),$$ provided $\Delta_B$ is restricted to act on functions even in each $y_j$. We can use this operator to compute the nonsymmetric Laguerre polynomials in terms of the nonsymmetric Jack polynomials according to [@Fo10 eq. (13.116)] $$\label{J9}
{\rm exp}\Big( -\frac{1}{4} \Delta_B \Big)E_\eta(y^2;\alpha)=E_\eta^{(L)}(y^2;\alpha).$$
Macdonald polynomials
---------------------
Macdonald polynomials [@Ma95] generalize Jack polynomials. They were introduced into the study of fractional quantum Hall states in the recent work [@JL10].
The symmetric Macdonald polynomials $P_{\kappa}(z;q,t)$ can be uniquely characterized as the symmetric polynomial solutions of the eigenvalue equation $$\label{Ma}
M_1 P_\kappa(z;q,t)=e(\kappa;q,t)P_\kappa(z;q,t),$$ with a structure the same as exhibited in (\[H1a\]) for the symmetric Jack polynomials. Here $$\label{Mb}
M_1:=\sum_{i=1}^N\prod_{\substack{j=1\\\not=i}}^N\frac{tz_i-z_j}{z_i-z_j}T_{q,z_i},$$ where $T_{q,z_i}$ acts on $f(z_1,\ldots, z_N)$ by the replacement $z_i \mapsto q z_i$, and the eigenvalue has the explicit form $$\label{Mc}
e(\kappa;q,t)=\sum_{i=1}^Nq^{\kappa_i}t^{N-i}.$$ They relate to the Jack polynomials by $$\label{Md}
\lim_{q\rightarrow 1}P_\kappa(z;q,q^{1/\alpha})=P_\kappa(z;\alpha).$$
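For orientation, the eigenvalue equation (\[Ma\]) is easy to check symbolically in the smallest cases. The sketch below does so for $N=2$, using the elementary two-variable expansions $P_{(1)}=m_{(1)}$ and $P_{(2)}(z;q,t)=m_{(2)}+\frac{(1+q)(1-t)}{1-qt}\,m_{(1,1)}$ (standard formulas quoted here for illustration only).

```python
# Symbolic check of (Ma), (Mb), (Mc) for N = 2 (illustrative only).
import sympy as sp

z1, z2, q, t = sp.symbols('z1 z2 q t')

def M1(f):
    # Macdonald operator (Mb) for N = 2
    return ((t*z1 - z2)/(z1 - z2))*f.subs(z1, q*z1) \
         + ((t*z2 - z1)/(z2 - z1))*f.subs(z2, q*z2)

def e(kappa):
    # eigenvalue (Mc) for N = 2
    return q**kappa[0]*t + q**kappa[1]

P1 = z1 + z2                                               # P_(1) = m_(1)
P2 = z1**2 + z2**2 + (1 + q)*(1 - t)/(1 - q*t)*z1*z2        # P_(2)(z;q,t)

for P, kap in [(P1, (1, 0)), (P2, (2, 0))]:
    assert sp.cancel(M1(P) - e(kap)*P) == 0
```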
The nonsymmetric Macdonald polynomials $E_\eta(z;q,t)$ can be characterized as the simultaneous polynomial eigenfunctions $$Y_iE_\eta(z;q,t)=\overline{\eta}_iE_\eta(z;q,t)\qquad (i=1,\ldots,N)$$ with structure as in (\[di1\]). Here $$Y_i:=t^{-N+i}T_i\ldots T_{N-1}\omega T_1^{-1}\ldots T_{i-1}^{-1}$$ where, with $s_i:=s_{i,i+1}$ $$\begin{aligned}
T_i&:=t+\frac{tz_i-z_{i+1}}{z_i-z_{i+1}}(s_i-1), \\
\omega&:=s_{N-1}\ldots s_2 T_{q,z_1},\end{aligned}$$ and $$\overline{\eta}_i := q^{\eta_i} t^{-l_\eta'(i)}, \quad
l_\eta'(i)=\#\{j<i|\eta_j\geq \eta_i\}-\#\{j>i|\eta_j>\eta_i\}.$$
Introducing the $t$-symmetrization and $t$-antisymmetrization operators by $$\label{UU}
U^+:=\sum_{\sigma\in S_N}T_\sigma,\qquad U^-:=\sum_{\sigma\in S_N}\Big(-\frac{1}{t}\Big)^{l(\sigma)}T_\sigma,$$ where $\sigma:=s_{i_l(\sigma)}\ldots s_{i_1}$ is a minimal length decomposition in terms of transpositions, and $T_\sigma:=T_{i_l(\sigma)}\ldots T_{i_1}$, we have [@Ma99] $$\label{Me}
U^+E_\eta(z;q,t)=a_\eta(q,t)P_{\eta^+}(z;q,t)$$ for some (known) $a_\eta(q,t)$. Also, with $S_{\kappa+\delta}(z;q,t)$ defined as the $t$-antisymmetric polynomial eigenfunctions of the eigenvalue equation $$\Big(\sum_{i=1}^N Y_i\Big) S_{\kappa+\delta}(z;q,t)=\Big( \sum_{i=1}^N\overline{\eta}_i\Big) S_{\kappa+\delta}(z;q,t)$$ we have $$U^- E_{\delta+\eta}(z;q,t)=b_\eta(q,t)S_{\delta+\eta^+}(z;q,t)$$ for some (known) $b_\eta(q,t)$. And analogous to (\[J6a\]) these $t$-antisymmetric Macdonald polynomials are related to their symmetric counterparts by [@Ma99] $$\label{5Pa}
S_{\delta+\kappa}(z;q,t)=t^{-N(N-1)/2}\Delta_t(z)P_\kappa(z;q,qt),$$ where $$\Delta_t(z):=\prod_{1\leq j<k \leq N}(t z_j-z_k).$$ With $t=q^{1/\alpha}$ we see from (\[Md\]) that in the limit $q\rightarrow 1$ (\[5Pa\]) reduces to (\[J6a\]).
The clustering condition for $k=1$
==================================
Symmetric Jack polynomials
--------------------------
Iterating (\[J4\]) with $k=1$ gives (\[J4a\]). This is a symmetric function for $r$ even, and an anti-symmetric function for $r$ odd. According to (\[J51\]), for $r$ even we have $$\label{J8.1}
\prod_{1\leq j < k \leq N} (z_j-z_k)^r=P_{r\delta}(z_1,\ldots,z_N;-2/(r-1)),$$ where the partition $r\delta$ is specified as in (\[J3a\]). In the context of fractional quantum Hall states, this result was first proved in [@BH08]. The method used was to observe that for the product formula (\[J4a\]) $$D_i \psi^{(1,r)}(z)=0, \qquad D_i:=\frac{\partial}{\partial z_i}-r\sum_{\substack{j=1 \\ \not=i}}^N\frac{1}{z_i-z_j}.$$ Consequently $$\sum_{j=1}^N(z_jD_j)^2\psi^{(1,r)}(z)=0.$$ On the other hand, by direct calculation $$\sum_{j=1}^N(z_jD_j)^2=\Big(\widetilde{H}^{(C)}-e(r\delta;\alpha) \Big) \bigg|_{\alpha=-2/(r-1)},$$ and we know that the unique symmetric polynomial null vector of this latter operator with leading term $z^{r\delta}$ is the Jack polynomial in (\[J8.1\]).
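To see (\[J8.1\]) in the simplest case, take $N=2$ and $r=2$ and use the elementary two-variable expansion $P_{(2)}(z_1,z_2;\alpha)=m_{(2)}+\frac{2}{\alpha+1}m_{(1,1)}$. Then $\alpha=-2/(r-1)=-2$ and $$P_{2\delta}(z_1,z_2;-2)=z_1^2+z_2^2-2z_1z_2=(z_1-z_2)^2,$$ which is indeed $\prod_{1\leq j<k\leq 2}(z_j-z_k)^2$.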
In fact the result (\[J8.1\]) is a special case of a more general identity, already known in the Jack polynomial literature [@Op98], [@Fo10 Ex. 12.6 q.5].
\[Prop1\] Let $r>0$ be even. For a general partition $\kappa$, $l(\kappa)\leq N$, $$\label{578}
P_{r\delta+\kappa}(z;-2/(r-1))=(\Delta(z))^rP_\kappa(z;2/(r+1)).$$
According to the theory below (\[HC\]) $$\label{579}
(H^{(C)}-E_0^{(C)})\Big( |\Delta(z)|^{1/\alpha}P_\kappa(z;\alpha) \Big)=\varepsilon_\kappa\,|\Delta(z)|^{1/\alpha}P_\kappa(z;\alpha)$$ for the corresponding eigenvalue $\varepsilon_\kappa$. Suppose we replace $1/\alpha$ by $1-1/\alpha$ in this equation. Making use of the simple but crucial observation that $$H^{(C)}\bigg|_{1/\alpha \mapsto 1-1/\alpha}=H^{(C)},$$ then applying (\[HH\]) we see $$\label{580}
\widetilde{H}^{(C)}|\Delta(z)|^{1-2/\alpha}P_\kappa(z;1/(1-1/\alpha))=e(\kappa;1/(1-1/\alpha))|\Delta(z)|^{1-2/\alpha}P_\kappa(z;1/(1-1/\alpha)).$$ The next step is to replace $1/\alpha$ by $-\alpha+1/2$, replace $\kappa$ by $\kappa+(\alpha(N-1))^N$ and to use the basic property of Jack polynomials $$P_{\kappa+p^N}(z;\alpha)=z^{p^N}P_\kappa(z;\alpha)$$ to deduce from (\[580\]) that for $\alpha$ a non-negative integer $$\begin{aligned}
&\widetilde{H}^{(C)}\bigg|_{\alpha \mapsto 1/(-\alpha+1/2)}\Big((\Delta(z))^{2\alpha}P_\kappa(z;1/(\alpha+1/2))\Big) \notag \\
&\hspace{4cm}=e(\kappa+(\alpha(N-1))^N;1/(\alpha+1/2))(\Delta(z))^{2\alpha}P_\kappa(z;1/(\alpha+1/2)) \label{J10}.\end{aligned}$$ Furthermore, we can check from the definition (\[H1\]) that $$\label{J10a}
e(\kappa+(\alpha(N-1))^N;1/(\alpha+1/2))=e(\kappa+2\alpha\delta;1/(-\alpha+1/2)).$$ This tells us that (\[J10\]) is the eigenequation for the Jack polynomial $P_{\kappa+2\alpha\delta}(z;1/(-\alpha+1/2))$. We know too that the latter is the unique polynomial eigenfunction of this eigenequation with leading term $z^{\kappa+2\alpha\delta}$ so (\[578\]) follows. $\square$
Nonsymmetric Jack polynomials
-----------------------------
A feature of (\[578\]) is that $r$ must be even. It turns out that in the case of the nonsymmetric Jack polynomials the analogue of $r$ must be odd.
\[Prop2\] For $l>0$ and odd, and $\kappa$ a partition with $l(\kappa)\leq N$ we have $$\label{J11}
E_{\kappa+l\delta}(z;-2/l)=(\Delta(z))^{l}E_\kappa(z;2/l).$$
Both sides have the same leading term $z^{\delta l+ \kappa}$. Hence it suffices to show that for $l$ odd $(\Delta(z))^{l}E_\kappa(z;2/l)$ is a simultaneous polynomial eigenfunction of each $\xi_i |_{\alpha=-2/l}$ $(i=1,\ldots, N)$ with eigenvalue $(\overline{\kappa_i+l \delta_i}) |_{\alpha=-2/l}$. Now, we see from (\[di\]) that for $l$ odd $$\begin{aligned}
&-\frac{2}{l}z_i d_i\bigg|_{\alpha=-2/l}\bigg((\Delta(z))^l E_\kappa(z;2/l) \bigg) \\
&\hspace{2cm}=-\frac{2}{l}(\Delta(z))^l z_i d_i\bigg|_{\alpha=2/l} E_\kappa(z;2/l) \\
&\hspace{2cm}=(\Delta(z))^l \Big(-\frac{2}{l}z_i\frac{\partial}{\partial z_i}-\sum_{\substack{k=1 \\ \not= i}}^N\frac{z_i(1-s_{ik})}{z_i-z_k} \Big) E_\kappa(z;2/l). \end{aligned}$$ Substituting in (\[xi\]), again making essential use of $l$ being odd, shows $$\begin{aligned}
&\xi_i\bigg|_{\alpha=-2/l}\Big( (\Delta(z))^l E_\kappa(z;2/l) \Big) \\
&\hspace{2cm}=-(\Delta(z))^l\Big(\xi_i\bigg|_{\alpha=2/l}-2(1-N) \Big) E_\kappa(z;2/l) \\
&\hspace{2cm}=-\bigg(\overline{\kappa}_i\bigg|_{\alpha=2/l}-2(1-N)\bigg)(\Delta(z))^lE_\kappa(z;2/l)\\
&\hspace{2cm}=(\overline{\kappa_i+l\delta_i})\bigg|_{\alpha=-2/l}(\Delta(z))^lE_\kappa(z;2/l),\end{aligned}$$ where the final equality follows from (\[ni\]), which is the sought equation. $\square$
Note that after writing $l=r-1$, ($r$ even), $\kappa\mapsto\kappa+\delta$ and symmetrizing both sides (\[J11\]) reads $$\label{12.1}
(\Delta(z))^{r-1}{\rm Asym} \;E_{\kappa+\delta}(z;2/(r-1))={\rm Sym}\; E_{r \delta +\kappa}(z;-2/(r-1)).$$ Recalling (\[J5b\]), (\[J5a\]) and (\[J6a\]) we see that this reclaims (\[578\]).
Generalized Hermite and Laguerre polynomials
--------------------------------------------
We will first show that the nonsymmetric generalized Hermite and Laguerre polynomials satisfy a factorization formula structurally identical to (\[J11\]). By symmetrization we can then deduce the analogues of (\[578\]).
\[K\] Let $l>0$ be odd, and let $\kappa$ be a partition $l(\kappa)\leq N$. We have $$\begin{aligned}
E_{\kappa+l\delta}^{(H)}(z;-2/l)&=(\Delta(z))^{l}E^{(H)}_\kappa(z;2/l) \label{13.1}, \\
E^{(L)}_{\kappa+l\delta}(z;-2/l)&=(\Delta(z))^{l}E^{(L)}_\kappa(z;2/l).\label{13.2}\end{aligned}$$ Also, with $r$ even and positive $$\begin{aligned}
P_{\kappa+r\delta}^{(H)}(z;-2/(r-1))&=(\Delta(z))^{r}P^{(H)}_\kappa(z;2/(r+1)), \label{13.3} \\
P^{(L)}_{\kappa+r\delta}(z;-2/(r-1))&=(\Delta(z))^{r}P^{(L)}_\kappa(z;2/(r+1)).\label{13.4}\end{aligned}$$
First, we can note (\[13.3\]), (\[13.4\]) follow from (\[13.1\]), (\[13.2\]) by symmetrizing both sides to deduce the analogue of (\[12.1\]), then using the Hermite and Laguerre analogues of (\[J5a\]), (\[J5b\]) and (\[J6a\]).
To derive (\[13.1\]) and (\[13.2\]) we use the explicit forms of $\Delta_A$ and $\Delta_B$ as given in (\[J7a\]), (\[LEy\]) together with (\[HEx\]), (\[LEx\]), and the identity $$\frac{1}{(a-b)(a-c)}+\frac{1}{(b-a)(b-c)}+\frac{1}{(c-a)(c-b)}=0$$ to show that for $l$ odd $$\Delta_A\Big |_{\alpha=-2/l}\Big((\Delta(z))^lf(z) \Big)=(\Delta(z))^l \Delta_A\Big |_{\alpha=2/l}f(z)$$ and $$\Delta_B\Big |_{\alpha=-2/l}\Big((\Delta(z^2))^lf(z^2) \Big)=(\Delta(z^2))^l \Delta_B\Big |_{\alpha=2/l}f(z^2).$$ This shows that when applying the exponential operators in (\[J7\]) and (\[J9\]) to (\[J11\]), the nonsymmetric Jack polynomials therein are transferred to their generalized Hermite and Laguerre counterparts, as stated in (\[13.1\]) and (\[13.2\]). $\square$
Setting $\kappa=0^N$ in Proposition \[K\] shows that for $l$ odd $$\label{14.1}
E_{l\delta}(z;-2/l)=E_{l \delta}^{(H)}(z;-2/l)=E_{l\delta}^{(L)}(z;-2/l)=(\Delta(z))^l$$ and for $r$ even $$\label{14.2}
P_{r\delta}(z;-2/(r-1))=P_{r \delta}^{(H)}(z;-2/(r-1))=P_{r\delta}^{(L)}(z;-2/(r-1))=(\Delta(z))^r.$$ The first of these formulas dramatically displays the special properties which take effect for $\alpha=-2/l$, $l$ odd. Thus the generalized Hermite and Laguerre polynomials generally have the structure in terms of the Jack polynomials $$\begin{aligned}
E_\eta^{(H)}(z;\alpha)&=E_\eta(z;\alpha)+\sum_{|\nu|<|\eta|}b_{\eta \nu}E_\nu(z;\alpha) \\
E_\eta^{(L)}(z;\alpha)&=E_\eta(z;\alpha)+\sum_{|\nu|<|\eta|}\widetilde{b}_{\eta \nu}E_\nu(z;\alpha).\end{aligned}$$ However for $\alpha=-2/l$, $l$ odd and a special choice of $\eta$, all but the leading term vanishes and the generalized Hermite and Laguerre polynomials become homogeneous. Similar remarks apply in relation to (\[14.2\]).
Symmetric Macdonald polynomials
-------------------------------
Let us set $$\label{DD}
D_l(z;q)=\prod_{i=1}^lD_1(z;q^{2i-1}),\qquad D_1(z;q):=\prod_{i=1}^{N}\prod_{\substack{j=1\\j\not=i}}^N(qz_j-z_i).$$ For all $i,j\in\{1,\ldots,N \},$ $i\not=j$ and $0\leq s \leq 2l$ we see that $D_l(z;q)=0$ when $$\label{20.1}
z_j=z_it q^s,\qquad t=q^{-(2l-1)}.$$ This is (a special case of) the wheel condition, first identified in [@FJMM03] in relation to the vanishing properties of Macdonald polynomials with $t^{k+1}q^{r-1}=1$, and partitions with parts satisfying (\[4.1\]). For $k=1$ and $r$ even the theory of [@FJMM03] tells us that the corresponding Macdonald polynomial with partition satisfying (\[4.1\]) vanishes when (\[20.1\]) holds with $q\mapsto q^{1/2}$ and $l=r/2$ ($r$ even) and thus that they contain $D_{r/2}(z;q^{1/2})$ as a factor. Consistent with the Jack polynomial identity (\[578\]), the remaining factor is another Macdonald polynomial.
\[PD\] Let $D_l$ be as in (\[DD\]), and let $r>0$ be even. We have $$\label{20.c}
P_{\kappa+r\delta}(z;q,q^{-(r-1)/2})=(-q^{-1/2})^{r^2N(N-1)/8}D_{r/2}(z;q^{1/2})P_\kappa(z;q,q^{(r+1)/2}).$$
Our strategy is to make use of the characterization of the Macdonald polynomials as eigenfunctions of (\[Ma\]). We first observe that both sides of (\[20.c\]) are polynomials with leading term the monomial symmetric function $m_{\kappa+r\delta}(z)$. It remains then to check that the RHS satisfies the same eigenvalue equation as the LHS.
Noting that $$T_{q,z_i}D_{r/2}(z;q^{1/2})=\prod_{\substack{j=1 \\ j\not=i}}^N\frac{(q^{(r+1)/2}z_i-z_j)}{(q^{-(r-1)/2}z_i-z_j)}D_{r/2}(z;q^{1/2}),$$ and recalling the definition (\[Mb\]), we see $$\begin{aligned}
&M_1\bigg|_{t=q^{-(r-1)/2}}\Big(D_{r/2}(z;q^{1/2})P_\kappa(z;q,q^{(r+1)/2})\Big) \notag \\
&\hspace{1cm}=D_{r/2}(z;q^{1/2})\sum_{i=1}^N\bigg( \prod_{\substack{j=1 \\ j\not= i}}^N\frac{(q^{(r+1)/2}z_i-z_j)}{z_i-z_j}\bigg) \; T_{q,z_i}P_{\kappa}(z;q,q^{(r+1)/2}) \notag\\
&\hspace{1cm}=D_{r/2}(z;q^{1/2})\, e(\kappa;q,q^{(r+1)/2})P_\kappa(z;q,q^{(r+1)/2}). \label{20.d}\end{aligned}$$ But according to the definition (\[Mc\]) $$e(\kappa;q,q^{(r+1)/2})=e(\kappa+r\delta;q,q^{-(r-1)/2}),$$ which is the eigenvalue of the left hand side of (\[20.c\]) under $M_1|_{t=q^{-(r-1)/2}}$, establishing the sought eigenequation. $\square$
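The simplest instance of (\[20.c\]) can also be confirmed by a short computer algebra check. The sketch below treats $N=2$, $r=2$, $\kappa=0^N$ (so the exponent $r^2N(N-1)/8$ equals $1$), writing $q^{1/2}=s$ to stay with rational functions and using the standard two-variable expansion $P_{(2)}(z;q,t)=m_{(2)}+\frac{(1+q)(1-t)}{1-qt}m_{(1,1)}$.

```python
# Check of (20.c) for N = 2, r = 2, kappa = 0^N (illustrative only).
import sympy as sp

z1, z2, s = sp.symbols('z1 z2 s', positive=True)   # s stands for q^{1/2}
q, t = s**2, 1/s                                   # t = q^{-(r-1)/2} with r = 2

lhs = z1**2 + z2**2 + (1 + q)*(1 - t)/(1 - q*t)*z1*z2   # P_(2)(z;q,q^{-1/2})
D   = (s*z2 - z1)*(s*z1 - z2)                           # D_{r/2}(z;q^{1/2}) for r = 2
rhs = (-1/s)*D                                          # prefactor (-q^{-1/2})^{r^2 N(N-1)/8}

assert sp.cancel(lhs - rhs) == 0
```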
We remark that the case $\kappa = 0^N$ of Proposition \[PD\] was known previously [@BL08 Thm. 3.2]. Also, we draw attention to the work [@Lu10] in which a factorization identity for $P_\kappa(z;q,q^k)$, with $z$ an infinite number of variables corresponding to the alphabet ${1 - q \over 1 - q^k} \mathbb X$, is established.
Nonsymmetric Macdonald polynomials
----------------------------------
In the work [@Ka05c] vanishing properties of the nonsymmetric Macdonald polynomials with $t^{k+1}q^{r-1}=1$ were studied. It was found that for $k=1$ the relevant wheel condition is $$z_j=z_it q^s,\qquad t=q^{-(r-1)/2}$$ where $0\leq s \leq r-2$, and it is required $j<i$ if $s=0$, and $i\not=j$ otherwise. It follows that for $l$ odd $$\label{22.1}
E_{\kappa+l \delta}(z;q,q^{-l/2})=D_{(l-1)/2}(z;q)\prod_{1\leq i < j \leq N}(q^{(l-1)/2}z_j-z_i)f(z),$$ where $f(z)$ is a homogeneous polynomial of degree $|\kappa|$. However, unlike the case of the Jack limit (\[J11\]), computer algebra computations show that in general $f(z)$ is not itself a single nonsymmetric Macdonald polynomial.
The clustering condition for $k>1$
==================================
Jack polynomials
----------------
The case $k=1$ is special because the clustering condition determines the explicit form (\[J8.1\]) of the wave function. This is not the case for $k>1$, and moreover the solution of (\[J4\]) in terms of the Jack polynomials (\[J51\]) appears not to be a result available from the existing Jack polynomial literature. Our first point of interest is to provide a self-contained derivation from the perspective of Jack polynomial theory. This can be carried out as a corollary of the following general formula, the derivation of which is quite elementary.
\[G\] Let the part $0$ in $\kappa=(\kappa_1,\ldots, \kappa_N)$ have frequency $f_0$ so that $l(\kappa)=N-f_0$, and set $\widetilde{\kappa}=(\kappa_1,\ldots,\kappa_{N-f_0})-(\kappa_{N-f_0})^{N-f_0}$. For the Jack polynomials $P_\kappa(z;\alpha)$ and $P_{\widetilde{\kappa}}(z;\alpha)$ let $\alpha$ be such that they are translationally invariant, and thus $$\begin{aligned}
P_\kappa(z_1,\ldots, z_N;\alpha)&=P_{\widetilde{\kappa}}(z_1-1,\ldots, z_N-1;\alpha)\label{23.1}\\
P_{\widetilde{\kappa}}(z_1,\ldots, z_{N-f_0};\alpha)&= P_{\widetilde{\kappa}}(z_1-1,\ldots, z_{N-f_0}-1;\alpha)\label{23.2}.\end{aligned}$$ We have $$\begin{aligned}
&P_\kappa(z_1,\ldots, z_N;\alpha)\bigg|_{z_{N-f_0+1}=\cdots=z_N=1} \notag \\
&\hspace{2cm}=\prod_{l=1}^{N-f_0}(z_l-1)^{\kappa_{N-f_0}}P_{\widetilde{\kappa}}(z_1,\ldots, z_{N-f_0};\alpha). \label{23.3}\end{aligned}$$
We have $$\begin{aligned}
&P_\kappa(z_1,\ldots,z_{N-f_0},\underbrace{1,\ldots,1}_{f_0\;\;times};\alpha) \\
&\hspace{2cm}=P_\kappa(z_1-1,\ldots,z_{N-f_0}-1,\underbrace{0,\ldots,0}_{f_0\;\;times};\alpha) \\
&\hspace{2cm}=P_{\widetilde{\kappa}+(\kappa_{N-f_0})^{N-f_0}}(z_1-1,\ldots,z_{N-f_0}-1;\alpha)\\
&\hspace{2cm}=\prod_{l=1}^{N-f_0}(z_l-1)^{\kappa_{N-f_0}}P_{\widetilde{\kappa}}(z_1-1,\ldots,z_{N-f_0}-1;\alpha)\\
&\hspace{2cm}=\prod_{l=1}^{N-f_0}(z_l-1)^{\kappa_{N-f_0}}P_{\widetilde{\kappa}}(z_1,\ldots,z_{N-f_0};\alpha),\end{aligned}$$ where the first equality follows from (\[23.1\]), the second from the stability property of the Jack polynomials, the third from the simple property of Jack polynomials that $P_{\kappa+p^N}(z;\alpha)=z^{p^N}P_\kappa(z;\alpha)$ and the fourth from the assumption (\[23.2\]).$\square$
According to Proposition \[G\], in order to verify that (\[J51\]) satisfies (\[J4\]), it suffices to establish (\[23.1\]) and (\[23.2\]). But these in turn are immediate corollaries of the highest weight condition (\[Lp\]) (indeed (\[Lp\]) implies $\psi$ must be a symmetric function in $z_i-\overline{z}$, $\overline{z}=\sum_{j=1}^Nz_j/N$; see e.g. [@Li10]), which we know from [@BH08a] is satisfied by (\[J51\]).
In [@BH08] the class of partitions $\kappa$ for which $P_\kappa(z;-(k+1)/(r-1))$ satisfies the highest weight condition was extended from (\[J54\]) to include a positive integer $s\geq1$ specified by $$\label{25x}
\kappa(k,r,s)=[n_00^{(r-1)s}k 0 ^{r-1}k 0 ^{r-1}k 0 ^{r-1} k \cdots]$$ where $N - l(\kappa) = (k+1)s - 1 =: n_0$ (the case $s=1$ corresponds to (\[J54\])). This generalization was itself generalized in [@JL10] to include a further positive integer $1 \le m \le k$ specified by $$\label{25xx}
\kappa(k,r,s,m)=[n_00^{(r-1)s}k 0 ^{r-1}k 0 ^{r-1}k \cdots 0 ^{r-1} m].$$ It follows from Proposition \[G\] that with $\alpha=-(k+1)/(r-1)$, $(k+1)$ and $(r-1)$ relatively prime $$\begin{aligned}
&P_{\kappa(k,r,s,m)}(z_1,\ldots, z_N;\alpha)\bigg|_{z_{N-n_0+1}=\ldots=z_N=z}\notag \\
&\hspace{2cm}=\prod_{j=1}^{N-n_0}(z_j-z)^{(r-1)s+1}P_{\kappa(k,r,1,m)}(z_1,\ldots,z_{N-n_0};\alpha)\label{25.1}\end{aligned}$$ (homogeneity of the Jack polynomials has been used to go from setting $z_{N-n_0+1}=\ldots=z_N=1$ to setting $z_{N-n_0+1}=\ldots=z_N=z$).
The partition (\[25x\]), written therein in terms of occupation numbers, is an example of a staircase partition. The special case of staircase partitions, consisting of a single part $r$ repeated say $g$ times is referred to as a rectangular partition. It was shown in [@JL10] that with $$\label{25.2}
\alpha=-\frac{N+1-g}{r-1},\qquad N\geq 2g$$ and $N+1-g$, $r-1$ relatively prime, the Jack polynomial $P_{r^{g}}(z;\alpha)$ satisfies the highest weight condition. It then follows from Proposition \[G\] that $$\label{26}
P_{r^g}(z_1,\ldots,z_N;-(N+1-g)/(r-1))\bigg|_{z_{g+1}=\ldots=z_N=1}=\prod_{l=1}^{g}(z_l-1)^r.$$
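As a check on the simplest admissible case of (\[26\]): for $N=2$, $g=1$, $r=2$, (\[25.2\]) gives $\alpha=-2$, and the two-variable expansion $P_{(2)}(z_1,z_2;-2)=z_1^2+z_2^2-2z_1z_2$ reduces at $z_2=1$ to $z_1^2-2z_1+1=(z_1-1)^2$, as required.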
We now turn our attention to the case of nonsymmetric Jack polynomials in the context of the clustering condition (\[25.1\]). Unless $s=1$ and $m=k$, the nonsymmetric counterparts of the Jack polynomials in (\[25.1\]) do not satisfy the highest weight condition and so Proposition \[G\] does not apply. In fact the computer algebra computations show that there is no longer a factorization of the same type as (\[25.1\]). We find instead a structure $$\label{26.1}
E_{\kappa(k,r,s,m)}(z_1,\ldots,z_N;\alpha)\bigg|_{z_{N-n_0+1}=\ldots=z_N=z}=\prod_{j=1}^{N-n_0}(z_j-z)^{(r-1)s}f(z,z_1,\ldots, z_{N-n_0}).$$ This differs from (\[25.1\]) in that the exponent in the first lot of factorized products is reduced by $1$, and the variable $z$ becomes a part of the remaining factor $f$.
Generalized Hermite and Laguerre polynomials
--------------------------------------------
In Jack polynomial theory the (symmetric) binomial coefficients can be specified by the generalized binomial expansion [@Fo10 eq. (12.179)] $$\label{B1}
{P_\kappa(1+x;\alpha) \over P_\kappa((1)^N;\alpha)} =
\sum_{\mu \subseteq \kappa} \Big ( { \kappa \atop \mu} \Big )
{P_\mu(x;\alpha) \over P_\mu((1)^N;\alpha)}.$$ It is known [@BF97b] that the symmetric Laguerre polynomials can be expanded in terms of the symmetric binomial coefficients according to $$\begin{aligned}
\label{B2}
&&P_\kappa^{(L)}(x;\alpha) = (-1)^{|\kappa|} P_\kappa((1)^N;\alpha) [a+h]_\kappa^{(\alpha)}
\sum_{\mu \subseteq \kappa} \Big ( { \kappa \atop \mu} \Big )
{ (-1)^{|\mu|} P_\mu(x;\alpha) \over [a+h]_\mu^{(\alpha)} P_\mu((1)^N;\alpha)},\end{aligned}$$ where $h := 1 + (N-1)/\alpha$ and $$[u]_\kappa^{(\alpha)} := \prod_{j=1}^N {\Gamma(u - (j-1)/\alpha + \kappa_j) \over
\Gamma(u - (j-1)/\alpha)}.$$
Suppose now that $\alpha$ and $\kappa$ are such that $P_\kappa(x;\alpha)$ satisfies the highest weight condition. We then have $P_\kappa(1+x;\alpha) = P_\kappa(x;\alpha)$, and this in turn implies that $ P_\kappa((1)^N;\alpha) = 0$. Moreover, it follows from (\[B1\]) that we also have $$P_\kappa((1)^N;\alpha) \Big ( { \kappa \atop \mu} \Big )
{P_\mu(x;\alpha) \over P_\mu((1)^N;\alpha)} = 0, \qquad \mu \ne \kappa.$$ Using this in (\[B2\]), together with the fact that $[a+h]_\kappa^{(\alpha)}/[a+h]_\mu^{(\alpha)}$ must be finite for $\mu \subseteq \kappa$ we see that all terms in (\[B2\]) must vanish except for the term $\mu = \kappa$. Thus in this setting we have in general $$\label{B3}
P_\kappa^{(L)}(x;\alpha) = P_\kappa(x;\alpha).$$ Also, by inspection of the orthogonalities of the generalized Hermite and Laguerre polynomials (see e.g. [@Fo10 Ch. 13]), it is easy to see that $$\label{B4}
\lim_{a \to \infty} \Big ( {1 \over \sqrt{2a}} \Big )^{|\kappa|}
P_\kappa^{(L)}(a + \sqrt{2a} x;\alpha) = P_\kappa^{(H)}(x;\alpha).$$ Suppose $\alpha$ and $\kappa$ are such that $P_\kappa(x;\alpha)$ satisfies the highest weight condition. Then (\[B3\]) holds and, substituted in (\[B4\]), implies $$\label{B5}
P_\kappa^{(H)}(x;\alpha) = P_\kappa(x;\alpha).$$ The identities (\[B3\]) and (\[B5\]) tell us that the symmetric generalized Hermite and Laguerre polynomials satisfy the analogues of (\[25.1\]) and (\[26\]).
With regards to the nonsymmetric Hermite and Laguerre polynomials, computer algebra computations indicate a factorization having the structure (\[26.1\]), but again we have not been able to identify the analogue of the function $f$.
Macdonald polynomials
---------------------
Set $\alpha = - (k+1)/(r-1)$, $(k+1)$ and $(r-1)$ relatively prime and require that $z_i = q^{(i-1)/\alpha} z$, $(i=1,\dots,(k+1)s - 1)$. Computer algebra computations indicate that the Macdonald polynomial analogue of the identity (\[25.1\]) is $$\begin{aligned}
&P_{\kappa(k,r,s,m)}(z_1,\ldots, z_N;q,q^{1/\alpha})\bigg|_{z_{i}= q^{(i-1)/\alpha} z \: \: (i=1,\dots,
(k+1)s - 1)}\notag \\
&\hspace{2cm}=
\prod_{j=-(r-1)(s-1)}^{r-1}
\prod_{i=s(k+1)}^{N}(z_i-zq^{k/\alpha + j} )
P_{\kappa(k,r,1,m)}(z_{s(k+1)},\ldots,z_{N};q,q^{1/\alpha})\label{23.8}.\end{aligned}$$ In the case $k=s=m=1$ we see that this is consistent with the case $\kappa = 0^N$ of (\[20.c\]). For $s=1$, $m=k$, the wheel condition [@FJMM03] tells us that with $$z_1 = x, \quad z_2 = tx, \quad \dots, \quad z_k = t^{k-1} x,$$ the Macdonald polynomial $P_{\kappa(k,r,s,m)}(z;q,q^{1/\alpha})$ must vanish for $z_j = t^k q^u x$ ($u=0,\dots,r-1$). We see that the conjectured identity (\[23.8\]) is consistent with this requirement. Furthermore, in the limit $q \to 1$ (\[23.8\]) reduces to the Jack polynomial identity (\[25.1\]).
Our derivation of (\[25.1\]) relied on the corresponding Jack polynomials satisfying the highest weight condition. The significance of the latter being that it implied the translation invariance conditions (\[23.1\]), (\[23.2\]). Let $${\partial \over \partial_q z_i} f(z) :=
{f(z) - T_{q,z_i} f(z) \over (1 - q) z_i}.$$ The analogue of the highest weight condition for the Macdonald polynomials, $\psi$ say, in (\[23.8\]) is [@JL10] $$\label{Lqt}
L^{+(q,t)} \psi = 0, \qquad
L^{+(q,t)} := \sum_{i=1}^N \Big ( \prod_{j=1 \atop \ne i}^N
{t z_i - z_j \over z_i - z_j} \Big ) {\partial \over \partial_q z_i}$$ (compare the definition of $L^{+(q,t)}$ with the definition of $M_1$ in (\[Mb\])). But as yet we have not seen how to make use of this property in providing a proof of (\[23.8\]).
It is shown in [@JL10] that the Macdonald polynomials $P_{r^g}(z;q,q^{1/\alpha})$, with $\alpha$ as in (\[25.2\]), also satisfy the $(q,t)$-highest weight condition (\[Lqt\]). Computer algebra computations indicate that $$P_{r^g}(z,zq^{1/\alpha},\dots, zq^{(N-g-1)/\alpha},z_{N-g+1},\dots,z_N) =
\prod_{l=N-g+1}^N \prod_{j=0}^{r-1} (z_l - q^{1/\alpha + j}z),$$ although as with (\[23.8\]) we are yet to find a proof.
Concluding remarks
==================
The focus of our study has been the setting within Jack polynomial theory — interpreted broadly to include generalized Hermite and Laguerre polynomials, and Macdonald polynomials — of the $(k,r)$ clustering condition (\[J4\]) and its generalization (\[25.1\]). In the case $k=1$ we were able to identify more general identities in Jack and Macdonald polynomial theory containing (\[J4\]) as a special case. For general $(k,r)$, by making essential use of the highest weight condition (\[Lp\]), we found a simple derivation of the Jack polynomial solution to (\[J4\]), and more generally of the clustering property (\[25.1\]). However our derivation does not apply to the Macdonald analogue (\[23.8\]) of this clustering, formulated on the basis of computer algebra computations, which remains a conjecture. We should point out that there is interest in clustering/vanishing properties of Macdonald polynomials with $t^{k+1} q^{r-1}=1$ for their application to certain statistical mechanical models based on the Temperley-Lieb algebra [@KP07; @RSZ07; @dGPS09].
To finish, we make note of a $(q,t)$-generalization of the Read-Rezayi state (\[RR\]), $$\psi_{RR}^{(k;q,t)}(z_1,\dots,z_{kN}) = U^+
\prod_{s=1}^k \prod_{1 \le i_s < j_s \le N} (z_{i_s} - t z_{j_s}) (t z_{i_s} - z_{j_s})
\Big |_{t = q^{-1/(k+1)}},$$ where $U^+$ is the $t$-symmetrization operator in (\[UU\]). Thus computer algebra computations indicate that $$\psi_{RR}^{(k;q,t)}(z_1,\dots,z_{kN}) \propto
P_{\kappa(k,2,1)}(z_1,\dots,z_{kN};q,q^{-1/(k+1)})$$ as is consistent with (\[23.8\]). We remark that $(q,t)$-generalizations of quantum Hall states seem first to have been considered in the work of Kasatani and Pasquier [@KP07].
Acknowledgements {#acknowledgements .unnumbered}
----------------
This work was supported by the Australian Research Council.
[10]{}
T.H. Baker and P.J. Forrester, *The [Calogero-Sutherland]{} model and generalized classical polynomials*, Commun. Math. Phys. **188** (1997), 175–216.
[to3em]{}, *Nonsymmetric Jack polynomials and integral kernels*, Duke Math. J. **95** (1998), 1–50.
[to3em]{}, *Symmetric [Jack]{} polynomials from nonsymmetric theory*, Annals Comb. **3** (1999), 159–170.
B.A. Bernevig, V. Gurarie, and S.H. Simon, *Central charge and quasihole scaling dimensions from model wave functions: relating jack wavefunctions to ${\mathcal W}$-algebras*, J. Phys. A **42** (2009), 245206.
B.A. Bernevig and F.D.M. Haldane, *Model fractional quantum [H]{}all states and [J]{}ack polynomials*, Phys. Rev. Lett. **100** (2008), 246802.
[to3em]{}, *Generalized clustering conditions of Jack polynomials at negative [J]{}ack parameter $\alpha$*, Phys. Rev. B **77** (2008), 184502.
A. Boussicault and J.-G. Luque, *Staircase [M]{}acdonald polynomials and the $q$-discriminant*, [DMTCS]{} Proceedings [FPSAC]{} 2008, Maison de l’informatique et des mathématiques discrètes, Nancy, France, 2008, pp. 381–392.
I.V. Cherednik, *Double affine [H]{}ecke algebras and [M]{}acdonald’s conjectures*, Ann. Math. **141** (1995), 191–216.
C.F. Dunkl, *Difference-differential operators associated to reflection groups*, Trans. Amer. Math. Soc. **311** (1989), 167–183.
J. de Gier, A. Ponsaing and K. Shigechi, *Exact finite size groundstate of the O$(n=1)$ loop model with open boundaries*, J. Stat. Mech. **2009** (2009), P04010.
P. Desrosiers, *Duality in random matrix ensembles for all $\beta$*, Nucl. Phys. B **817** (2009), 224–251.
B. Estienne, B.A. Bernevig, and R. Santachiara, *Electron-quasihole duality and second order differential equation for [R]{}ead-[R]{}ezayi and [J]{}ack wavefunctions*, arXiv:1005.3475, 2010.
B. Estienne, N. Regnault, and R. Santachiara, *Clustering properties, [J]{}ack polynomials and unitary conformal field theories*, Nucl. Phys. B **824 \[FS\]** (2010), 539–562.
B. Estienne and R. Santachiara, *Relating [J]{}ack wavefunctions to [WA]{}${}_{k-1}$ theories*, J. Phys. A **42** (2009), 445209.
B. Feigin, M. Jimbo, T. Miwa, and E. Mukhin, *A differential ideal of symmetric polynomials spanned by [J]{}ack polynomials at $\beta = -
(r-1)/(k+1)$*, Int. Math. Res. Not. **2002** (2002), 1223–1237.
[to3em]{}, *Symmetric polynomials vanishing on the shifted diagonals and [M]{}acdonald polynomials*, Int. Math. Res. Not. **2003** (2003), 1015–1034.
P.J. Forrester, *Log-gases and random matrices*, Princeton University Press, Princeton, NJ, 2010.
Th. Jolicoeur and J.G. Luque, *Highest weight [M]{}acdonald and [J]{}ack polynomials*, arXiv:1003.3475, 2010.
M. Kasatani, *Subrepresentations in the polynomial representations of the double affine Hecke algebra of type $GL_n$ at $t^{k+1} q^{r-1}=1$*, Int. Math. Res. Not. **2005** (2005), 1717–1742.
M. Kasatani and V. Pasquier, *On polynomials interpolating between the stationary state of a $O(n)$ model and a Q.H.E. ground state*, Comm. Math. Phys. **276** (2007), 397–435.
J. Liptrap, *On translation invariant symmetric polynomials and [H]{}aldane’s conjecture*, Topology and Physics: Proceedings of the Nankai International Conference in memory of Xiai-Song Lin, Nankai Tracts in Mathematics, vol. 19, World Scientific, Singapore, 2008, pp. 279–287.
Y.-M. Lu, X.-G. Wen, Z. Wang, and Z. Wang, *Non-[A]{}belian quantum [H]{}all states and their quasiparticles: from the pattern of zeros to vertex algebra*, arXiv:0910.3988, 2009.
J.-G. Luque, *Macdonald polynomials at $t=q^k$*, J. Algebra **324**, 36–50, 2010.
C. Lucia de Souza Batista and D. Li, *Analytic calculations of trial wave functions of the fractional quantum Hall effect on the sphere*, Phys. Rev. B **55** (1997), 1582–1595.
I.G. Macdonald, *Hall polynomials and symmetric functions*, 2nd ed., Oxford University Press, Oxford, 1995.
D. Marshall, *Symmetric and nonsymmetric [Macdonald]{} polynomials*, Annals Comb. **3** (1999), 385–415.
G. Moore and N. Read, *Nonabelions in the fractional quantum [H]{}all effect*, Nucl. Phys. B **360** (1991), 362–396.
M.A. Olshanetsky and A.M. Perelomov, *Quantum integrable systems related to [L]{}ie algebras*, Phys. Rep. **94** (1983), 313–404.
E.M. Opdam, *Lectures on [D]{}unkl operators*, Math. Soc. Japan Mem. **8** (1998), 1–62.
A.V. Razumov and Y.G. Stroganov and P. Zinn-Justin, *Polynomial solutions of the qKZ equation and ground state of XXZ spin chain at $\Delta = - 1/2$*, J. Phys. A **40** (2007) 11827.
N. Read and E. Rezayi, *Beyond paired fractional quantum [H]{}all states: parafermions and incompressible states in the first excited [L]{}andau level*, Phys. Rev. B **59** (1999), 8084.
X.-G. Wen and Z. Wang, *Classification of symmetric polynomials of infinite variables: construction of [A]{}belian and non-[A]{}belian quantum [H]{}all states*, Phys. Rev. B **77** (2008), 235108.
---
abstract: 'We report time and angle resolved spectroscopic measurements in optimally doped Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$. The photoelectron intensity maps are monitored as a function of temperature, photoexcitation density and delay time from the pump pulse. We evince that thermal fluctuations are effective only for temperatures near the critical value whereas photoinduced fluctuations scale linearly at low pumping fluence. The minimal energy to fully disrupt the superconducting gap slightly increases when moving off the nodal direction. No evidence of a pseudogap arising from phenomena other than pairing has been detected in the explored region of reciprocal space. On the other hand, an intermediate coupling model of the photoinduced phase transition can consistently explain the gap filling in the near as well as in the off-nodal direction. Finally, we observed that nodal quasiparticles develop a faster dynamics when pumping the superconductor with fluence large enough to induce the total collapse of the gap.'
author:
- 'Z. Zhang$^{1}$, C. Piovera$^{2}$, E. Papalazarou$^{3}$, M. Marsi$^{3}$, M. d’Astuto$^{1}$, C. J. van der Beek $^{1}$, A. Taleb-Ibrahimi${^4}$, and L. Perfetti$^{2}$'
title: 'Photoinduced filling of near nodal gap in Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$'
---
Introduction
============
The superconductors of the copper-oxide family have been the subject of extensive investigation and are still a matter of fierce debate. After 30 years of research, some issues have been settled, whereas others remain controversial. The evolution of the superconducting order parameter with temperature and doping level is an exemplary case. It is nowadays well established that the single particle gap near the nodal direction is a hallmark of superconductivity. Moving halfway from the nodal direction, it is possible to observe that a remnant pairing persists up to temperatures higher than the critical value $T_c$ [@Campuzano; @Kaminski; @Shin]. This fluctuating regime can be explained by intermediate coupling models [@Perali] with a finite rate of pair-breaking [@Norman; @Dessau; @Shin]. Finally, it has been established that the antinodal region displays an additional pseudogap that interplays and, eventually, competes with superconductivity [@Kaminski; @Shen].
In recent years, the field has been enriched by experimental protocols capable of detecting the single particle spectra out of equilibrium [@Perfetti_ARPES; @Smallwood_science; @Current]. Smallwood *et al.* reported the collapse and subsequent recovery of the single particle gap after photoexcitation by a short laser pulse [@Smallwood_science; @Smallwood_gap]. Their data indicate that the gap is more robust when moving towards the antinodes. Moreover, the minimal fluence necessary to melt the near nodal gap has been related to a qualitative change in the dynamics of nodal Quasi-Particles (QPs). Soon after, Ishida *et al.* reproduced the gap melting, observed Bogoliubov excitations and outlined a concurrent reduction of QP coherence [@Ishida]. Apparently, the recovery of the gapped phase proceeds within several picoseconds, due to electronic cooling by phonon emission. On this same timescale, the dynamics of a reforming condensate also affects the electromagnetic response function [@Demsar; @Kaindl] and the transient population of hot electrons [@Smallwood_science]. Indeed, the cooling of QPs displays a dramatic slowdown when the system enters the superconducting phase [@Piovera; @Kirchmann]. Such an effect persists at photoexcitation fluences much larger than the threshold value necessary for the complete disruption of the superfluid density. As a consequence, we proposed that a remnant pairing may protect QPs from energy dissipation. Alternatively, Smallwood *et al.* developed a model that would explain the slow cooling of QPs as the result of a dynamical gap opening [@Smallwood_Model].
This article reports additional measurements on the photoinduced collapse of the superconducting gap in optimally doped Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ (Bi2212). We aim at a comparative study of the melting of Cooper pairs by thermal excitations and by ultrafast laser pulses. In both cases the near nodal gap is filled by the appearance of a fluctuating condensate. The threshold fluence necessary to completely fill the gap depends slightly on the azimuthal angle and becomes larger towards the antinodes. The section of the Brillouin zone that can be explored by a photon source centered at 6.3 eV allows us to cover roughly half of the Fermi surface. By extending recent experiments [@Smallwood_gap] to higher pumping fluence, we exclude that a pseudogap survives at high photoexcitation density. Instead, the gradual filling of the gap can be accurately reproduced by an intermediate coupling model accounting for the finite rate of pair-breaking [@Perali].
The second novelty of our work is the comparison of the transient state following photoexcitation with adiabatic heating across the phase transition. On one hand, we find that photoinduced and thermal fluctuations result in a similar collapse of the gapped spectral function. On the other hand, the evolution of the superconducting state clearly depends on the applied perturbation. Thermal fluctuations are effective only for temperatures near the critical value, whereas photoinduced fluctuations scale linearly with the pumping fluence. The peculiar behavior of the photoexcited state suggests that Cooper pairs scatter with a non-thermal distribution of excited phonons.
Finally, the fluence dependence of the gap is compared with the energy dissipation of low energy QPs. We use 600 fs pulses to perform high resolution spectroscopy of the superconducting gap and 80 fs pulses to accurately follow the QP dynamics. The data show that nodal QPs develop a fast relaxation component at the same threshold fluence where the gap collapses [@Smallwood_QP]. These results are discussed in terms of a dynamical gap opening [@Smallwood_Model].
The article is organized as follows: section \[sec2\] reports the experimental configurations employed for time resolved ARPES measurements, sections \[sec3\] and \[sec4\] focus on the dependence of the near nodal gap on temperature and pumping fluence, section \[sec5\] extends the equilibrium and non-equilibrium spectroscopy to the off nodal direction, and section \[sec6\] includes a general discussion of the experimental data and the comparative analysis of the gap evolution with the energy dissipation of nodal QPs.
![(a) Fermi surface of Bi2212 in the first Brillouin zone. The red and blue line indicate cuts in reciprocal space where time resolved measurements have been performed. (b) Superconducting gap with $d$-wave symmetry[]{data-label="Fig1"}](Fig1.jpg){width="1\columnwidth"}
Methods {#sec2}
=======
We investigate single crystals of optimally doped Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ (Bi2212) with transition temperature $T_c=91$ K. Time resolved photoemission experiments are performed on the FemtoARPES setup [@FemtoARPES], using a Ti:sapphire laser system delivering 35 fs pulses at 1.55 eV (780 nm) with 250 kHz repetition rate. Part of the laser beam is used to generate 6.3 eV photons through cascade frequency mixing in BBO crystals. The 1.55 eV and 6.3 eV beams are employed to photoexcite the sample and induce photoemission, respectively. Two different setups have been optimized for complementary experiments on the electron dynamics. The quasiparticle relaxation has been probed with short pulses of 80 fs duration and 70 meV bandwidth. These pulses can follow the temporal evolution of excited electrons with high accuracy but do not provide the required energy resolution to monitor the superconducting gap. As a consequence, high resolution measurements have been done with UV pulses of 600 fs duration and spectral bandwidth below 15 meV. In each case the field amplitude of the probe has been reduced until space charge distortions dropped below the detectable limit. The incident photoexcitation fluence on the sample is evaluated by imaging the pump and probe beam in the focal plane. All reported measurements have been performed on freshly cleaved crystals at the base pressure of $7\times10^{-11}$ mbars.
Near nodal gap in Equilibrium {#sec3}
=============================
![The data in this image have been acquired in equilibrium conditions along a direction cutting the Fermi surface at azimuthal angle $\phi=30^\circ$. Photoelectron intensity maps at 100 K (a) and 40 K (b). (c) Differential signal between the intensity map collected at 100 K and 40 K. Energy distribution curves extracted at the Fermi wavevector for several temperatures above (d) and below $T_c$ (e). (f) Symmetrized EDCs extracted at the Fermi wavevector for different temperatures. Fitting curves (solid lines) based on the self energy $\Sigma(k,\omega)=-i \gamma+\Delta^2/(\omega+\epsilon_k+i\gamma)$ are superimposed to experimental data (marks).[]{data-label="Fig2"}](Fig2.jpg){width="1\columnwidth"}
![The data in this image have been acquired at azimuthal angle of $30^\circ$ and base temperature of 40 K for different values of photoexcitation fluence. (a) Photoelectron intensity map collected 0.6 ps after the arrival of a pump pulse depositing 40 $\mu J/cm^2$. (b) Pump-on minus pump-off intensity difference upon photoexcitation with 40 $\mu J/cm^2$. EDCs (c) and symmetrized EDCs (d) extracted at the Fermi wavevector for different pumping fluence. Fitting curves (solid lines) are superimposed to symmetrized EDCs (marks).[]{data-label="Fig3"}](Fig3.jpg){width="1\columnwidth"}
As shown in Fig. \[Fig1\], we define the zero of the azimuthal angle as the $\bar{M}-Y$ direction in reciprocal space, so that the nodal point is at $\phi=45^\circ$. First we analyze the development of a near nodal gap for azimuthal angle $\phi = 30^\circ$. Figure \[Fig2\](a) shows the ARPES intensity maps acquired at the equilibrium temperature of 100 K. The QP peak gains coherence and intensity for excitation energy above -70 meV. This energy scale is related to collective modes coupling to single-particle excitations and coincides with a kink in the QP dispersion [@Damascelli]. Figure \[Fig2\](b) shows the intensity map collected after cooling the sample in the superconducting phase at 40 K. In contrast to the previous case, a small but detectable gap inhibits the quasiparticle crossing of the Fermi level. We show in Fig. \[Fig2\](c) the difference intensity map between data acquired at 100 K and 40 K. Such a plot visualizes the redistribution of spectral weight upon thermal melting and serves as a reference for the data on the photoinduced phase transition.
The gradual formation of the superconducting gap is obtained by acquiring intensity maps at several intermediate temperatures. Figure \[Fig2\](d,e) show the Energy Distribution Curves (EDCs) extracted at the Fermi wavevector for each one of these maps. The EDCs are normalized only by the total acquisition time, making possible a direct comparison of the relative intensity between different curves. As shown in Fig. \[Fig2\](d), the peak of the EDCs merely loses intensity as long as the temperature is above the critical value. Instead, the shift of the leading edge observed in Fig. \[Fig2\](e) is the hallmark of an electronic gap developing below $T_c$. Following a common procedure in data treatment [@Shin; @Kaminski; @Smallwood_gap; @Campuzano], we show in Fig. \[Fig2\](f) the symmetrized EDCs at different equilibrium temperatures. The distance of the peaks from the Fermi level is a phenomenological indicator of the gap magnitude and attains the value of $\cong20$ meV at 40 K.
As originally proposed by Norman et al. [@Norman], we model the spectral function at the Fermi wavevector by the phenomenological self energy $\Sigma(k,\omega)=-i \gamma+\Delta^2/(\omega+\epsilon_k+i\gamma)$, where $\epsilon_k$ is the band dispersion, $\Delta$ is the superconducting gap and $\gamma$ is the pair-breaking rate. Perali *et al.* have shown that such a phenomenological expression is a reasonable approximation as long as the asymmetry of the spectral function is not too large [@Perali]. In the intermediate coupling regime the pair-breaking rate $\gamma$ increases substantially near the transition temperature and overcomes $\Delta$ in the normal phase. The resulting spectral function $A(k,\omega)=-\frac{1}{\pi}$Im$(1/(\omega-\epsilon_k-\Sigma))$ is convoluted in energy and momentum to account for the finite experimental resolution. Finally, a smooth background has been included to reproduce the incoherent spectral weight adding up at high energy. As shown by Fig. \[Fig2\](f), the curves obtained from this self energy accurately fit the experimental data. The fitting parameters of Fig. \[Fig4\](a) indicate that $\Delta$ displays minor variations for $T\leq T_c$. As expected, the superconducting phase transition is ruled by the sudden increase of the pair-breaking rate $\gamma$ when the system approaches the critical point. The gap is completely filled once $\gamma$ overcomes the value of $\Delta$. We stress that this scenario is different from the weak coupling BCS limit. In the latter case the temperature window where fluctuations become visible is not detectable and the pair-breaking rate is negligibly small.
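For concreteness, the line shape entering these fits is straightforward to generate numerically. The sketch below (an illustration only: the parameter values are arbitrary, and resolution broadening and the smooth background are omitted) evaluates $A(k_F,\omega)$ from the self energy above, multiplies it by a Fermi function and symmetrizes it, showing how the spectral weight at $\omega=0$ grows once $\gamma$ becomes comparable to $\Delta$.

```python
# Sketch of the phenomenological line shape used in the fits (illustrative values only).
import numpy as np

def spectral_function(omega, delta, gamma, eps_k=0.0):
    # Sigma(k,omega) = -i*gamma + delta^2/(omega + eps_k + i*gamma)
    sigma = -1j*gamma + delta**2/(omega + eps_k + 1j*gamma)
    return -np.imag(1.0/(omega - eps_k - sigma))/np.pi

omega = np.linspace(-0.1, 0.1, 2001)        # binding energy (eV)
kT = 8.617e-5*40                            # k_B T at 40 K (eV)
fermi = 1.0/(np.exp(omega/kT) + 1.0)

delta = 0.020                               # 20 meV near-nodal gap
for gamma in (0.002, 0.010, 0.020, 0.040):  # pair-breaking rate (eV)
    edc = spectral_function(omega, delta, gamma)*fermi
    sym_edc = edc + edc[::-1]               # symmetrization about the Fermi level
    print(gamma, sym_edc[len(omega)//2]/sym_edc.max())   # relative weight at omega = 0
```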
Near nodal gap in photoexcited samples {#sec4}
======================================
Next we investigate the evolution of the near nodal gap upon sudden photoexcitation with 1.5 eV pulses. Figure \[Fig3\](a) shows photoelectron intensity maps acquired at the azimuthal angle of 30 degrees, equilibrium temperature of 40 K and incident fluence of $ F= 40 \mu J/cm^2$. The images have been collected for a delay time corresponding to the maximal pump-probe signal. We show in Fig. \[Fig3\](b) difference intensity maps between the pump-on and pump-off case. Strong similarities between Fig. \[Fig3\](b) and Fig. \[Fig2\](c) indicate that photoexcitation and thermal fluctuations result in a comparable transfer of spectral weight.
![Evolution of the gap $\Delta$ and pair-breaking rate $\gamma$ extracted from fits to the symmetrized EDCs of Fig. \[Fig2\](f) and Fig. \[Fig3\](d).[]{data-label="Fig4"}](Fig4.jpg){width="1\columnwidth"}
![a) Intensity map of symmetrized EDCs as a function of pump probe delay. The data have been acquired at 40 K, $\phi=30^\circ$ and pump fluence $F = 51 \mu J/cm^2$. b) Symmetrized EDCs (marks) extracted at selected pump probe delays are compared with the fitting curves (solid lines). c) Temporal evolution of the gap $\Delta$ (red circles) and of the pair-breaking rate $\gamma$ (dark squares). (d) the temporal evolution of the gap filling factor $\rho$ (filled bars) is compared with $1.8/(1+\Delta/\gamma)$ (blue circles). The solid line is an exponential fitting function with time constant $\tau=4.5$ ps.[]{data-label="Fig5"}](Fig5.jpg){width="0.95\columnwidth"}
The description of the non-equilibrium case is first performed by acquiring photoelectron intensity maps at intermediate pumping fluences. We observed that photoexcitation also generates a photoinduced band displacement smaller than 3 meV. The quantitative estimate of such rigid shift is obtained by fitting momentum distribution curves at energy below -70 meV. Although the physical origin of the spectral shift is still debated [@Smallwood_Shift; @Bovensiepen_Shift], we suspect the occurrence of photoinduced fields at the sample surface. In the following, this minor correction has been considered when constructing the symmetrized EDCs. Figure \[Fig3\](c) shows EDCs extracted at the Fermi wavevector for increasing pumping fluence. Also in this case, the symmetrized EDCs of Figure \[Fig3\](d) are accurately fitted by the model spectral function.
We compare in Fig. \[Fig4\] the fitting parameters of the curves in Fig. \[Fig2\](f) (thermal heating) and Fig. \[Fig3\](d) (sudden photoexcitation). In both cases the collapse of the superconducting phase is dominated by the increase of the pair-breaking rate $\gamma$. Once $\gamma>\Delta$ the binding energy of Cooper pairs is ill defined and the uncertainty in $\Delta$ becomes increasingly large. Note in Fig. \[Fig4\](a) that $\gamma$ is appreciably different from zero only if the temperature is larger than $0.7T_c\cong 60$ K. We evince that superconducting fluctuations are thermally activated near the transition temperature. On the other hand, even photon pulses with arbitrarily small fluence can generate a detectable pair-breaking. In the weak perturbation regime, the density of the photoinduced fluctuations is proportional to the number of absorbed photons. As a consequence the pair-breaking rate in Fig. \[Fig4\](b) displays a linear scaling for $F<20 \mu$J/cm$^2$.
The dynamics of gap filling is captured by acquiring photoelectron intensity maps at variable pump probe delays. Figure \[Fig5\](a) shows on a false color scale the temporal evolution of the symmetrized EDCs while Fig. \[Fig5\](b) displays symmetrized EDCs and fitting curves for selected delays. At 0.6 ps, the photoexcitation by $51 \mu J/cm^2$ results in the complete filling of the superconducting gap. We show in Fig. \[Fig5\](c) the temporal evolution of the parameters $\Delta$ and $\gamma$ as a function of delay time. The near nodal gap is fully melted as long as $\gamma>\Delta$, namely during the first 2 picoseconds. For practical purposes, it is useful to quantify the melting of the superconducting phase by the filling factor $\rho$. This phenomenological observable is independent of the specific model and saturates in the regime of the collapsed gap. We obtain the filling factor by integrating the symmetrized EDCs in an energy window of 20 meV around the Fermi level and linearly rescaling the resulting values so that $\rho$ lies in the interval $[0,1]$. Figure \[Fig5\](d) shows that the evolution of $\rho$ nearly follows $1.8/(1+\Delta/\gamma)$. Therefore, the increase of $\rho$ depends both on the gap shrinking and on $\gamma$. The filling factor is correctly fit by an exponential function with recovery time of $4.5 \pm 0.5$ ps. Our data are in qualitative agreement with the work of Smallwood *et al.* [@Smallwood_gap]. However, the analysis of ref. [@Smallwood_gap] does not consider the finite pair-breaking rate. Therefore, the fitting parameters cannot be directly compared. We outline here that superconducting models with finite $\gamma$ have a natural link to established interpretations of the transient optical measurements [@Kabanov; @Demsar_THz]. Indeed, it has often been proposed that Cooper pairs are broken by the inelastic scattering with the non-equilibrium phonons that follow from the photoexcitation process. Within this framework, the recovery of superconductivity takes place via the heat transfer towards less harmful modes and has been successfully simulated by Rothwarf-Taylor equations [@Kabanov].
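Operationally, the filling factor defined above amounts to a simple post-processing of the measured symmetrized EDCs; a schematic version is sketched below (the array names are placeholders, and the rescaling endpoints correspond to the most gapped and the fully filled curves of the data set).

```python
# Schematic extraction of the gap filling factor rho (array names are placeholders).
# `energies` is the binding-energy axis (eV); `sym_edcs` is a list of symmetrized EDCs,
# one per pump-probe delay (or per fluence/temperature).
import numpy as np

def filling_factor(energies, sym_edcs, window=0.020):
    mask = np.abs(energies) <= window/2.0              # 20 meV window centered at E_F
    weights = np.array([np.trapz(edc[mask], energies[mask]) for edc in sym_edcs])
    # linear rescaling so that rho spans [0, 1]
    return (weights - weights.min())/(weights.max() - weights.min())
```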
Spectroscopy of the gap away from the nodal direction {#sec5}
=====================================================
Figure \[Fig6\](a) displays a photoelectron intensity map acquired at azimuthal angle $\phi=23^\circ$ and 40 K. Due to bilayer splitting, the map contains two parallel QP bands that are not spectrally resolved. A strong renormalization of QP dispersion takes place for excitation energy above -70 meV. The presence of the superconducting gap induces the weak backfolding of dispersive peaks at the Fermi wavevector. We show in Fig. \[Fig6\](b) the pump-on minus pump-off map acquired just after optical pumping with $66 \mu J/cm^2$. The observed contrast visualizes the transfer of spectral weight from the QP peak to the gapped spectral region. Symmetrized EDCs of Fig. \[Fig6\](c) display the photoinduced filling of the gap at different pumping fluence. Any signature of pairing is lost at high photoexcitation density. We show in Fig. \[Fig6\](d) that a similar spectral evolution takes place in equilibrium by increasing the sample temperature. Simulations based on the intermediate coupling model have been superimposed to the experimental EDCs. The partially filled gap above $T_c$ is a signature of superconducting fluctuations [@Alloul; @Bergeal; @Ong; @Perfetti_THz]. At $\phi=23^\circ$ the condition $\gamma>\Delta$ takes place at $T\cong 1.3 T_c$ so that preformed Cooper pairs exist for $T_c<T<1.3T_c$. Note that such a gradual filling of the gap along the off nodal direction cannot be captured by models neglecting the pair-breaking [@Smallwood_gap]. Figure \[Fig6\](e) shows that $\gamma$ increases appreciably only above 60 K. This result is consistent with the superconducting phase transition being ruled by thermal fluctuations in a restricted temperature window around $T_c$. However, the sudden photoexcitation generates superconducting fluctuations via a distinct mechanism. Figure \[Fig6\](f) confirms that $\gamma$ scales linearly at low pumping fluence. This finding is the signature of non-thermal processes leading to pair-breaking. Besides the purely electronic excitations, fluctuations can also originate from the high average occupation of harmful phonons. The emergent role of such hot phonons would also explain the large discrepancy between the threshold energy for a photoinduced phase transition and the condensation energy measured in thermal equilibrium [@Demsar_THz]. Finally, Fig. \[Fig7\](a) shows the temporal evolution of symmetrized EDCs at $\phi=23^\circ$. The dynamics of the filling factor in Fig. \[Fig7\](b) indicates that the recovery of the gapped phase takes place on the characteristic timescale of $3.5 \pm 0.7$ ps. Within the error bars, this value is consistent with the one obtained at $\phi=30^\circ$.
![The data of this image have been acquired at azimuthal angle $\phi=23^\circ$. a) Intensity map in equilibrium at the base temperature of 40 K. b) Pump-on minus pump-off signal acquired at 40 K with photoexcitation fluence $F= 66 \mu J/cm^2$. c) Symmetrized EDCs acquired at 40 K and just after photoexcitation with different pumping fluence. Fitting curves are superimposed to the experimental data. d) Symmetrized EDCs (marks) acquired in equilibrium conditions at different sample temperatures. Fitting curves (solid lines) are superimposed to the experimental data. Evolution of gap $\Delta$ (red circles) and pair-breaking rate $\gamma$ (dark squares) with temperature (e) and photoexcitation fluence (f).[]{data-label="Fig6"}](Fig6.jpg){width="1\columnwidth"}
![a) Intensity map of symmetrized EDCs as a function of pump probe delay. The data have been acquired at 40 K, $\phi =23^\circ$ and pump fluence $F= 66 \mu J/cm^2$. b) Temporal evolution of the gap filling factor and exponential function with time constant $\tau=3.5$ ps. The filling factor is extracted from the simmetrized EDCs by integrating the spectral intensity in an energy window of 20 meV centered at the Fermi level.[]{data-label="Fig7"}](Fig7.jpg){width="1\columnwidth"}
Gap evolution and quasiparticles dissipation {#sec6}
============================================
![a) Filling factor of superconducting gap at $\phi=30^\circ$ (red circles) and $\phi=23^\circ$ (blue circles) extracted at roughly 300 fs after photoexcitation with a variable pump fluence. b) Temporal evolution of nodal quasiparticle intensity integrated in an energy window from the Fermi level up to excitation energy of 50 meV. Comparison between the QP relaxation after photoexcitation with $40 \mu J/cm^2$ (green triangles) and $240 \mu J/cm^2$ (black circles). c) The filling factor of the gap (open circles) and the fast component of the nodal QP relaxation (dark squares).[]{data-label="Fig8"}](Fig8.jpg){width="1\columnwidth"}
We show in Fig. \[Fig8\](a) the filling factor of the superconducting gap as a function of pump fluence acquired at azimuthal angles $\phi=30^\circ$ and $\phi=23^\circ$. Our data extend the work of Smallwood *et al.* [@Smallwood_gap] and show that no signature of pairing can be detected above $40 \mu J/cm^2$. By this means we exclude that spectra at $\phi\geq 23^\circ$ develop a pseudogap surviving up to high photoexcitation density. Note in Figure \[Fig8\](a) that the filling factor of the superconducting gap depends slightly on the azimuthal angle. For a given fluence, the relative weight of in-gap states is larger towards the node ($\phi=30^\circ$) than farther for it ($\phi=23^\circ$). The anisotropic character of gap melting is common both to the photoinduced and to the thermally heated state [@Shin]. This finding is directly linked to the $d$-wave symmetry of the superconducting gap, leading to $\gamma>\Delta$ at larger fluence (or temperature), when moving off the nodal direction.
Next we compare the temporal evolution of the gap with the energy relaxation of nodal QPs. In the latter case we employ the setup providing 80 fs probe pulses with 70 meV bandwidth. Figure \[Fig8\](b) shows the temporal evolution of the nodal QP signal integrated in the energy window \[0,50\] meV. After photoexcitation with fluence $F= 40 \mu J/cm^2$ the QPs follow a single exponential relaxation with a decay constant of 2.5 ps. The relaxation of hot electrons becomes qualitatively different when pumping the sample with more intense pulses. Upon photoexcitation with $F=240 \mu J/cm^2$ the dynamics displays an initial decay with an inverse rate of 0.5 ps. At longer delays the cooling time converges to 2.5 ps, independently of the photoexcitation intensity. These results are consistent with the data first reported by Cortés *et al.* [@Bovensiepen_QP] and have been thoroughly discussed in ref. [@Piovera]. Phenomenologically, the relaxation of the energy-integrated QP signal can be correctly reproduced by a double exponential decay. The fast component becomes visible above a threshold fluence and gains weight upon increasing the photoexcitation density. We compare in Fig. \[Fig8\](c) the amplitude of the fast QP decay with the filling factor of the superconducting gap. The fast dissipation rate becomes detectable at the same threshold fluence where the superconducting gap has fully collapsed. Such a connection has already been outlined in ref. [@Smallwood_QP], although at fluence values lower than those of our work. Smallwood *et al.* proposed that a dynamical gap opening affects the recovery rate of photoexcited quasiparticles [@Smallwood_Model]. In this case the reforming pairs not only hinder quasiparticle dissipation but also act as a heat bath for nodal QPs. The energy released by Cooper pair formation can be transferred effectively to the QPs, drastically reducing their cooling time. We support this plausible scenario, which is likely valid also in the intermediate coupling regime.
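The double-exponential parametrization mentioned above can be written down and fitted as follows; this is only a schematic illustration with synthetic numbers (the 0.5 ps and 2.5 ps time constants are those quoted in the discussion), not the analysis performed on the measured transients:

```python
import numpy as np
from scipy.optimize import curve_fit

def qp_transient(t, a_fast, a_slow, tau_fast=0.5, tau_slow=2.5):
    """Double-exponential model of the energy-integrated QP signal (t in ps);
    the fast ~0.5 ps channel is present only above the threshold fluence."""
    return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

t = np.linspace(0.0, 10.0, 200)
synthetic = qp_transient(t, 0.6, 0.4) + 0.01 * np.random.default_rng(0).normal(size=t.size)
popt, _ = curve_fit(lambda tt, af, asl: qp_transient(tt, af, asl), t, synthetic, p0=(0.5, 0.5))
print("fitted fast/slow amplitudes:", popt)
```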
Conclusions and Acknowledgments
===============================
In conclusion, our time-resolved ARPES data of Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ compare the collapse of the superconducting gap upon adiabatic heating and sudden photoexcitation. The intermediate coupling model correctly accounts for the gap filling, both near and off the nodal direction. Within this framework, the melting of the superconducting phase takes place when the pair-breaking rate overcomes the electronic gap. Interestingly, the superconducting fluctuations are thermally activated in equilibrium conditions, whereas they scale linearly with fluence in the weak photoexcitation regime. This finding suggests that non-equilibrium phonons are the main source of pair-breaking in the transient state. The data in the off-nodal direction report a full collapse of the gap for $F>40\ \mu$J/cm$^2$ and exclude the presence of a pseudogap unrelated to superconductivity for $\phi\geq 23^\circ$. Finally, we compare the narrow-bandwidth spectroscopy of the gap with a short-pulse spectroscopy of the QP relaxation. The latter develops a faster component at the threshold fluence where the gap has fully collapsed. Dynamical reformation of the superconducting gap is a plausible explanation for such dynamics.
This work is supported by the “Investissements d’Avenir” LabEx PALM (grant No. ANR-10-LABX-0039PALM), by the China Scholarship Council (CSC, Grant No. 201308070040), by the EU/FP7 under the contract Go Fast (Grant No. 280555), and by the Région Ile-de-France through the program DIM OxyMORE. We are thankful to Andrea Gauzzi for critically reading and discussing the manuscript.
[99]{}
A. Kanigel, U. Chatterjee, M. Randeria, M. R. Norman, S. Souma, M. Shi, Z. Z. Li, H. Raffy, and J. C. Campuzano Phys. Rev. Lett. **99**, 157001 (2007).
Takeshi Kondo, Yoichiro Hamaya, Ari D. Palczewski, Tsunehiro Takeuchi, J. S. Wen, Z. J. Xu, Genda Gu, Jörg Schmalian and Adam Kaminski, Nature Physics **7**, 21–25 (2011).
Takeshi Kondo, W. Malaeb, Y. Ishida, T. Sasagawa, H. Sakamoto, Tsunehiro Takeuchi, T. Tohyama and S. Shin, Nature Communications **6**, 7699 (2015).
A. Perali, P. Pieri, G. C. Strinati, and C. Castellani, Phys. Rev. B **66**, 024510 (2002).
M. R. Norman, M. Randeria, H. Ding and J. C. Campuzano, Phys. Rev. B **57**, R11093 (1998).
T. J. Reber, N. C. Plumb, Y. Cao, Z. Sun, Q. Wang, K. McElroy, H. Iwasawa, M. Arita, J. S. Wen, Z. J. Xu, G. Gu, Y. Yoshida, H. Eisaki, Y. Aiura, and D. S. Dessau Phys. Rev. B **87**, 060506(R) (2013).
M. Hashimoto, E. A. Nowadnick, R.-H. He, I. M. Vishik, B. Moritz, Y. He, K. Tanaka, R. G. Moore, D. Lu, Y. Yoshida, M. Ishikado, T. Sasagawa, K. Fujita, S. Ishida, S. Uchida, H. Eisaki, Z. Hussain, T. P. Devereaux and Z.-X. Shen, Nature Materials **14**, 37 (2015).
L. Perfetti, P. A. Loukakos, M. Lisowski, U. Bovensiepen, H. Eisaki, and M. Wolf Phys. Rev. Lett. **99**, 197001 (2007).
Christopher L. Smallwood, James P. Hinton, Christopher Jozwiak, Wentao Zhang, Jake D. Koralek, Hiroshi Eisaki, Dung-Hai Lee, Joseph Orenstein, and Alessandra Lanzara, Science **336**, 1137 (2012).
A. Kaminski, S. Rosenkranz, M. R. Norman, M. Randeria, Z. Z. Li, H. Raffy, and J. C. Campuzano Phys. Rev. X **6**, 031040 (2016).
Christopher L. Smallwood, Wentao Zhang, Tristan L. Miller, Chris Jozwiak, Hiroshi Eisaki, Dung-Hai Lee, and Alessandra Lanzara, Phys. Rev. B **89**, 115126 (2014).
Y. Ishida, T. Saitoh, T. Mochiku, T. Nakane, K. Hirata and S. Shin, Scientific Reports **6**, 18747 (2016).
J. Demsar, B. Podobnik, V. V. Kabanov, Th. Wolf, and D. Mihailovic, Phys. Rev. Lett. **82**, 4918 (1999).

R. A. Kaindl, M. A. Carnahan, D. S. Chemla, S. Oh, and J. N. Eckstein, Phys. Rev. B **72**, 060510 (2005).
C. Piovera, Z. Zhang, M. d’Astuto, A. Taleb-Ibrahimi, E. Papalazarou, M. Marsi, Z. Z. Li, H. Raffy, and L. Perfetti, Phys. Rev. B **91**, 224509 (2015).
S.-L. Yang, J. A. Sobota, D. Leuenberger, Y. He, M. Hashimoto, D. H. Lu, H. Eisaki, P. S. Kirchmann, and Z.-X. Shen Phys. Rev. Lett. **114**, 247001 (2015).
Christopher L. Smallwood, Tristan L. Miller, Wentao Zhang, Robert A. Kaindl, and Alessandra Lanzara Phys. Rev. B **93**, 235107 (2016).
Christopher L. Smallwood, Wentao Zhang, Tristan L. Miller, Gregory Affeldt, Koshi Kurashima, Chris Jozwiak, Takashi Noji, Yoji Koike, Hiroshi Eisaki, Dung-Hai Lee, Robert A. Kaindl, and Alessandra Lanzara Phys. Rev. B **92**, 161102(R) (2015).
J. Faure, J. Mauchain, E. Papalazarou, W. Yan, J. Pinon, M. Marsi, and L. Perfetti, Rev. Sci. Inst. **83**, 043109 (2012).
Andrea Damascelli, Zahid Hussain, and Zhi-Xun Shen, Rev. Mod. Phys. **75**, 473 (2003).
Tristan L. Miller, Christopher L. Smallwood, Wentao Zhang, Hiroshi Eisaki, Joseph Orenstein, and Alessandra Lanzara Phys. Rev. B **92**, 144506 (2015).
J. D. Rameau, S. Freutel, L. Rettig, I. Avigo, M. Ligges, Y. Yoshida, H. Eisaki, J. Schneeloch, R. D. Zhong, Z. J. Xu, G. D. Gu, P. D. Johnson, and U. Bovensiepen, Phys. Rev. B **89**, 115115 (2014).
M. Beyer, D. Staedter, M. Beck, H. Schaefer, V. V. Kabanov, G. Logvenov, I. Bozovic, G. Koren, and J. Demsar, Phys. Rev. B **83**, 214515 (2011).
V. V. Kabanov, J. Demsar, and D. Mihailovic, Phys. Rev. Lett. **95**, 147002 (2005).
F. Rullier-Albenque, H. Alloul, and G. Rikken, Phys. Rev. B **84**, 014522 (2011).
N. Bergeal, J. Lesueur, M. Aprili, G. Faini, J. P. Contour, and B. Leridon, Nature Phys. **6**, 296 (2010).

Yayu Wang, L. Li, M. J. Naughton, G. D. Gu, S. Uchida, and N. P. Ong, Phys. Rev. Lett. **95**, 247002 (2005).
L. Perfetti, B. Sciolla, G. Biroli, C. J. van der Beek, C. Piovera, M. Wolf, and T. Kampfrath Phys. Rev. Lett. **114**, 067003 (2015).
R. Cortés, L. Rettig, Y. Yoshida, H. Eisaki, M. Wolf, and U. Bovensiepen, Phys. Rev. Lett. **107**, 097002 (2011).
---
abstract: 'We define a natural conformally invariant measure on unrooted Brownian loops in the plane and study some of its properties. We relate this measure to a measure on loops rooted at a boundary point of a domain and show how this relation gives a way to “chronologically add Brownian loops” to simple curves in the plane.'
author:
- 'Gregory F. Lawler[^1]'
- 'Wendelin Werner[^2]'
title: The Brownian loop soup
---
Introduction
============
The recent study of conformally invariant scaling limits of two-dimensional lattice systems has shown that measures on paths that satisfy conformal invariance (or conformal covariance) and a certain restriction property are important. In particular, in [@LSWrest], it is shown how to construct “restriction” measures by dynamically adding bubbles to Schramm-Loewner evolution (SLE) curves. As announced there, this construction has an equivalent formulation in terms of a Brownian soup of loops. The purpose of this paper is to describe these Brownian measures and to prove this equivalence.
This description will be given without reference to SLE and is interesting on its own, but since this is what initiated our own interest, let us now describe the link with SLE. In [@S], Oded Schramm introduced the SLE processes. These are the only random non-self-crossing curves in a domain that combine conformal invariance and a certain Markovian-type property. The definition of SLE is based on these two facts and can be viewed as a dynamic construction: one constructs the law of the curve on the time-interval $[t, t+dt]$ given $\gamma [0,t]$ and then iterates this procedure. In [@LSWrest], following ideas of [@LW2] and partially motivated by the problem of the self-avoiding walks in the plane (see [@LSWsaw]), a different approach to SLE was described. Basically, one looks at how the law of the random curve (seen globally) is distorted by an infinitesimal perturbation of the domain it is defined in. It turns out that a one-dimensional family of random sets is in some sense invariant under such perturbation. These are called restriction measures in [@LSWrest], where it is shown that all of these measures are closely related to Brownian excursions. The law of SLE, except for the special case of the SLE with parameter $
\kappa=8/3$, is not a restriction measure. However, one can measure precisely the “restriction defect” (i.e., the Radon-Nikodym derivative) with a term involving the Schwarzian derivatives of the corresponding conformal maps. One interpretation for SLE$_\kappa$ with $\kappa < 8/3$ goes as follows: if one adds a certain Poissonian cloud of Brownian bubbles to the SLE curve, then the resulting set is restriction invariant. This can be understood simply when $\kappa=2$. In that case, the SLE curve is [@LSWlesl] the scaling limit of the loop-erased random walk. In the scaling limit, the corresponding random walk converges to the Brownian excursion (which is restriction invariant). Hence, it is not surprising that if one puts the erased Brownian bubbles back onto the SLE$_2$ curve, one obtains a restriction measure. This Poissonian cloud of Brownian bubbles provides a simple geometric picture of the distortion of the law of SLE under perturbation of the boundary of a domain. This “variational” approach to SLE is closely related to some conformal field theory considerations of e.g. [@BPZ; @Ca], as pointed out in [@FW; @FW2]. The density of the Poissonian cloud in particular plays the role of the (negative of the) central charge of the corresponding model in the theoretical physics language.
We will describe various measures on Brownian paths with an emphasis on two measures, the Brownian loop measure and the Brownian bubble measure. The latter was already defined and used in [@LSWrest] for the previously described reasons.
The Brownian loop measure is an infinite measure on unrooted Brownian loops in the plane. It is defined on the set of periodic continuous functions in the plane, where two functions are considered to be indistinguishable if one is obtained by a simple translation in time ($t \mapsto t + c$), and we call these equivalence classes “unrooted loops”. The Brownian loop measure is scale invariant, and translation-invariant. Furthermore, it is conformally invariant in the following sense: If there exists a conformal map $\phi$ from $D$ onto $D'$, then the image under $\phi$ of the Brownian loop measure restricted to those loops that stay in $D$ is exactly the Brownian loop measure restricted to those loops that stay in $D'$. This property is in fact very closely related to the restriction property.
This measure can also be considered a measure on “hulls” (compact sets $K$ such that ${\mathbb{C}}\setminus K$ is connected) by “filling in” the bounded loops. It is possible to argue that the Brownian loop measure is the only measure on hulls that is conformally invariant in the previous sense.
The Brownian loop soup of intensity $\lambda >0$ is a realization of a Poisson point process of density $\lambda$ times the Brownian loop measure. In other words, a sample of the Brownian loop soup is a countable family of Brownian unrooted loops. There is no non-intersection condition or other interaction between the loops. Each loop will intersect countably many other loops in the same realization of the loop soup. Although for some purposes it is sufficient to consider the hull generated by a loop, we will study the measure on loops with time parametrization in this paper. This is partially motivated by possible future applications.
A bubble in a domain $D$ will be a continuous path $\gamma [0,T]$ such that $\gamma (0,T) \subset D$ and $\gamma (0) = \gamma (T) \in \partial D$. We say that the bubble is rooted at $x$ if $\gamma (0) = x$. The Brownian bubble measure was introduced in [@LSWrest] in order to construct the restriction measures via SLE. The Brownian bubble measure in $D$, rooted at $x \in \partial D$, is a $\sigma$-finite measure on Brownian loops that start and end at $x$, and otherwise stay in $D$. The description is simplest if the considered domain is the upper half-plane $\H$, and the root is the origin. We will see that it can be considered as a conditioned version of the Brownian loop measure. The relation between these two measures will lead to an equivalence that we now describe.
Loosely speaking, the relation is as follows. Imagine that a realization of the loop soup in $\H$ has been chosen, but we cannot see a loop until we visit a point on that loop. Suppose that we travel along a simple curve $\eta$ with $\eta (0) = 0$ and $\eta (0, \infty) \subset \H$. At each time $t$ at which one encounters a loop in the loop soup for the first time, we can see the whole loop. This prescribes the order in which one finds the loops that intersect the curve $\eta[0,\infty)$. These loops are a priori unrooted; however, we can make them into rooted loops by starting a loop found at time $t$ at the point $\eta(t)$. If we use this point as a root, the loop becomes a bubble in the domain $\H \setminus \eta[0,t]$. The point is that this loop is “distributed” according to the bubble measure.
More precisely, let $\eta$ be as before. We do not make smoothness assumptions on $\eta$; in fact, the cases of most interest to us are SLE curves that have Hausdorff dimension greater than one. Assume that $\eta$ is parametrized by its “half-plane capacity” (as is customary for Loewner chains in the upper half-plane), i.e., that for all $t$, there exists a (unique) conformal map $\tilde g_t$ from $\H \setminus \eta [0,t]$ onto $\H$ such that $\tilde g_t(z) = z + 2t/z + o(1/z)$ when $z \to \infty$. We let $g_t(z) = \tilde g_t(z) - \tilde g_t(\eta_t)$.
Suppose that the countable collection of loops $\{\gamma_1,\gamma_2,\ldots\}$ in $\Half$ is a realization of the Brownian loop soup in $\H$ with intensity $\lambda
> 0$. This is a random family of equivalence classes of curves $\gamma_j:[0,t_j] \rightarrow \Half$ with $\gamma_j(0) = \gamma_j(t_j)$, under the equivalence $\gamma^1 \sim \gamma^2$ if the time-lengths $t^1$ and $t^2$ of $\gamma^1$ and $\gamma^2$ are identical, and if for some $r$, $\gamma^1(t) = \gamma^2(t+r)$ for all $t$ (with addition modulo $t^1$). For each $j$, let $$r_j = \inf\{s: \eta(s) \in \gamma_j[0,t_j] \}$$ with $r_j = \infty$ if $\eta[0,\infty) \cap \gamma_j[0, t_j] = \emptyset$. It is not difficult to see that with probability one for each $j$ with $r_j < \infty$ there is a unique $t \in [0,t_j)$ such that $\gamma_j(t) = \eta(r_j)$. Then we can choose the representative $\gamma_j$ so that $\gamma_j(0) = \eta(r_j)$. Note that $\gamma_j$ is a bubble in $\H \setminus \eta [0, r_j]$. We define $\tilde \gamma_{r_j}$ as the image of $\gamma_j$ under the mapping $g_{r_j}$, where the time-parametrization of $\tilde \gamma_{r_j}$ is obtained from that of $\gamma_j$ using the usual Brownian time-change under conformal maps. Note that each $\tilde \gamma_{r_j}$ is a bubble rooted at the origin in $\H$. Here, the parametrization of $\eta$ by its half-plane capacity is important since we index $\tilde \gamma$ by the time $r_j$.
\[ls=bs\] The process $(\tilde \gamma_r , r \ge 0)$ is a Poisson point process with intensity $\lambda$ times the Brownian bubble measure in $\H$.
Of course, this statement depends on the precise definitions of these measures, but it shows that adding the Poisson cloud of bubbles (as in [@LSWrest]) to the path $\eta$ is exactly the same as adding to $\eta$ the set of loops in a loop soup that it does intersect.
One direct application is that adding Brownian bubbles to $\eta$ or to the time-reversal of $\eta$ (that is, viewing $\eta$ as a curve from $\infty$ to the origin) is the same (from the point of view of the outside hulls). In the case where $\eta$ is chordal $SLE_2$, it corresponds to the fact that loop-erasing a random walk does not depend (in law) on the chosen time-orientation. This result is more generally closely related to the question of reversibility of the SLE’s.
One other application is to the “duality” conjecture for the SLEs, see [@Dub]: indeed, from the point of view of the outside hulls, adding the loops of the loop soup to a curve or to its outer boundary is the same. Hence, the same is true if one adds (dynamically) bubbles to a curve or to its outer boundary. This leads to an identity in law between the set obtained by adding the same loop soup to a process closely related to $SLE_\kappa$ and the set obtained by adding it to a process closely related to $SLE_{16/\kappa}$. See [@Dub] for more details. Theorem \[ls=bs\] is also used in [@Wgi].
Another main point is just the definition of the Brownian loop measure. Despite its simplicity (and maybe its importance) and its nice properties, it does not seem (to our knowledge) to have been considered before.
The technical aspects of the present paper are not difficult. Once one has the correct definitions, the proofs are more or less standard exercises on Brownian motions, excursion theory and Green functions. In order to keep the pace of the paper flowing, we will at times be somewhat informal (we will not always describe precisely how to take the limit of one measure on paths, etc.), leaving the gaps to the interested reader. We will however not completely omit these problems (see, e.g., the next section).
The paper is organized as follows. In the next section, we mainly introduce some notation. In Section 3, we define some measures on Brownian paths, among which the Brownian bubbles. These are not new, but it is convenient to summarize some of their features in order to simplify the relation with the Brownian loop measure. This measure is defined and studied in Section 4, the relation with the bubble measure is described in Section 5. The final section is devoted to the question of time-parametrization of the Brownian “loop-adding” procedure.
Notations
=========
We will write $\Disk$ for the unit disk, $\Half = \{x+iy:
y > 0\}$ for the upper half-plane, and $\Disk_+$ for $ \Disk \cap \Half = \{z \in \Half:
|z| < 1 \} . $
Let $\fincurves$ be the set of all parametrized continuous planar curves $\gamma$ defined on a time-interval $[0,t_\gamma]$. We consider $\fincurves$ as a metric space with the metric $$\label{jan20.1}
\finmetric(\gamma,\gamma^1) =
\inf_\theta \; [\; \sup_{0 \leq s \leq t_\gamma}
|s - \theta(s) | + |\gamma(s) - \gamma^1(\theta(s))|
\; ],$$ where the infimum is over all increasing homeomorphisms $\theta: [0,t_\gamma] \rightarrow [0,t_{\gamma^1}]$. Note that $\fincurves$ under this metric does not identify curves that are the same modulo time-reparametrization.
If $\mu$ is any measure on $\fincurves$, we let $|\mu| = \mu(\fincurves)$ denote the total mass. If $0 < |\mu| < \infty$, then we let $\mu^\normed
= \mu/|\mu|$ be $\mu$ normalized to be a probability measure.
Let $\metricmeasure$ denote the set of finite Borel measures on $\fincurves$. This is a metric space under the Prohorov metric $d$ (see, e.g., for details). When we say that a sequence of measures converges it will be with respect to this metric. Recall that one standard way to show that two probability measures $\mu$ and $\nu$ are close with respect to this metric is via coupling: one finds a probability measure $m$ on $\fincurves \times \fincurves$ whose first marginal is $\mu$, whose second marginal is $\nu$, and such that $$m [\{(\gamma^1,\gamma^2): \finmetric(\gamma^1,\gamma^2)
> \epsilon\}] \leq \epsilon .$$ To show that a sequence of finite measures $\mu_n$ converges to a finite measure $\mu$, it suffices to show that $|\mu_n| \rightarrow |\mu|$ and $\mu_n^\# \rightarrow
\mu^\#.$
If $D$ is a domain, we say that $\gamma$ is in $D$ if $\gamma(0,t_\gamma) \subset D$; note that we do not require the endpoints of $\gamma$ to be in $D$. Let $\fincurves(D)$ be the set of $\gamma \in \fincurves$ that are in $D$. If $z,w \in {\mathbb{C}}$, let $\fincurves_z$ (resp., $\fincurves^w)$ be the set of $\gamma \in \fincurves$ with $\gamma(0) = z$ (resp., $\gamma(t_\gamma) = w$). We let $\fincurves_z^w = \fincurves_z \cap \fincurves^w$ and we define $\fincurves_z(D),\fincurves^w(D),\fincurves_z^w(D)$ similarly.
If $\gamma,\gamma^1 \in \fincurves$ with $\gamma(t_\gamma) = \gamma^1(0)$, we define the concatenation $\gamma \oplus \gamma^1$ by $t_{\gamma \oplus \gamma^1} = t_\gamma +
t_{\gamma^1}$ and $$\gamma \oplus \gamma^1(t) =
\left\{ \begin{array}{ll}
\gamma(t), & 0 \leq t \leq t_\gamma \\
\gamma^1(t - t_\gamma), & t_\gamma \leq
t \leq t_\gamma + t_{\gamma^1}.
\end{array} \right.$$ For every $w$, the map $(\gamma,\gamma^1) \mapsto
\gamma \oplus
\gamma^1$ is continuous from $\fincurves^w \times
\fincurves_w$ to $\fincurves$.
Suppose $f:D \rightarrow
D'$ is a conformal transformation and $\gamma
\in \fincurves(D)$. Let $$s_t =s_{t,\gamma} =
\int_0^t |f'(\gamma(s))|^2 \; ds .$$ If $s_{t} < \infty$ for all $t < t_\gamma$, we define $f\circ \gamma$ by $f \circ \gamma(s_t) = f(\gamma(t))$. If $s_{t_\gamma}
< \infty$ and $f$ extends continuously to the endpoints of $\gamma$, then $f \circ \gamma \in \fincurves(D')$ and $t_{f \circ \gamma} = s(t_\gamma)$. If $\mu$ is a measure supported on the set of curves $\gamma$ in $\fincurves(D)$ such that $f \circ \gamma$ is well defined and in $\fincurves(D')$, then $f \circ \mu$ will denote the measure $$f \circ \mu(V) = \mu [\{\gamma:
f \circ \gamma \in V \}] .$$
If $\gamma \in \fincurves$, we let $\gamma^R$ denote the time reversal of $\gamma$, i.e., $t_{\gamma^R}
= t_\gamma$ and $\gamma^R(s) = \gamma(t_\gamma - s),
0 \leq s \leq t_\gamma$. Similarly if $\mu$ is measure on $\fincurves$, we define the measure $\mu^R$ in the obvious way.
Suppose $\{\mu_D\}$ is a family of measures indexed by a family of domains $D$ in ${\mathbb{C}}$. We say that $\mu_D$ satisfies the [*restriction property*]{} if
- $\mu_D$ is supported on $\fincurves(D)$;
- if $D' \subset D$, then $\mu_{D'}$ is $\mu_D$ restricted to the curves in $\fincurves(D')$.
Note that if $\mu$ is any measure on $\fincurves$ and $\mu_D$ is defined as $\mu$ restricted to $\fincurves(D)$, then the family $\{\mu_D\}$ satisfies the restriction property. Conversely, suppose that
- $\{\mu_D\}$ satisfies the restriction property
- $D_n$ is an increasing sequence of domains whose union is ${\mathbb{C}}$
- $\mu = \lim_{n \rightarrow \infty} \mu_{D_n}$.
Then, for each $D$, $\mu_D$ is $\mu$ restricted to $\fincurves(D)$.
If $A$ is any compact set, we define $\rad(A) = \sup\{|z| : z \in A\}$. The [*half-plane capacity*]{} of a subset $A$ of ${\overline \H}$ is defined by $$\label{jan23.5}
\hcap(A) = \lim_{y \rightarrow \infty}
y\,\E^{iy}[\Im(B_{\rho_A})] ,$$ where $\rho_A = \inf\{t: B_t \in A \cup {\mathbb{R}}\}.$ It is not difficult to see that the limit exists, satisfies the scaling rule $\hcap(rA)
= r^2\, \hcap(A)$, and is monotone in $A$. If $A$ is such that $\Half \setminus
A$ is simply connected, then we use $\tilde g_A$ to denote the unique conformal transformation of $\Half \setminus A$ onto $\Half$ such that $\tilde g_A(z) - z = o(1)$ as $z \rightarrow \infty$. Then $\tilde g_A$ has an expansion at infinity $$\tilde g_A(z) = z + \frac{\hcap(A)}{z}
+ O(|z|^{-2}) .$$ Since $z \mapsto z + (1/z)$ maps $\{z \in \Half: |z| > 1\}$ conformally onto $\Half$, we can see that $\hcap(\overline {\Disk_+}) = 1$.
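For instance (a standard example, recalled here only for illustration), for a vertical slit $[0,ih]$ one has $$\tilde g_{[0,ih]}(z) = \sqrt{z^2+h^2}
= z + \frac{h^2}{2z} + O(|z|^{-3}) , \qquad \hcap([0,ih]) = \frac{h^2}{2},$$ in agreement with the scaling rule $\hcap(rA) = r^2\, \hcap(A)$.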
If $\eta:[0,\infty) \rightarrow {\mathbb{C}}$ is a curve, we will sometimes write $\tilde g_t$ for $\tilde g_{\eta[0,t]}$ and define $g_t (z)
= \tilde g_t (z) - \tilde g_t (\eta_t)$ as in the introduction.
Brownian bridges, Brownian bubbles
==================================
We will start defining some bridge-type Brownian measures on curves that we will use. These are measures on Brownian paths with prescribed starting point and prescribed terminal point. Since we are interested in conformally invariant properties, the standard bridges with prescribed time duration are not well-suited.
In our notation, $\mu_D(z,w)$ will always be a measure on Brownian paths that remain in the domain $D$, that start at $z$ and end at $w$, but this notation will have different meanings depending on whether $z,w$ are boundary or interior points of the domain $D$. We hope this will not cause confusion. Since the content of this section is rather standard, we will just review these definitions. The excursion measures have been defined in [@LW2; @Vi; @LSWrest], the bubble measures in [@LSWrest].
First definitions
-----------------
### Interior to interior
Let $\mu(z,\cdot;t)$ denote the law of a standard complex Brownian motion $(B_s, 0 \le s \le t)$, with $B_0 = z$, viewed as an element of $\fincurves$. We can write $$\mu(z,\cdot;t) = \int_{\mathbb{C}}\mu(z,w;t) \; dA(w) ,$$ where $A$ denotes area and $\mu(z,w;t)$ is a measure supported on $\gamma \in \fincurves_z^w$ with $t_\gamma = t$. In other words, $\mu (z,w;t)$ is $|\mu(z,w;t)|$ times the law $\mu^\normed (z,w;t)$ of the Brownian bridge from $z$ to $w$ in time $t$, where $|\mu(z,w;t)| =
(2 \pi t)^{-1} \exp\{-|z-w|^2/(2t)\}$.
The measure $\mu(z,w)$ is defined by $$\mu(z,w) = \int_0^\infty \mu(z,w;t) \; dt .$$ This is a $\sigma$-finite measure (the integral explodes at infinity, so that the total mass of paths with large time duration is infinite; when $z=w$, it also diverges at $0$).
The measure $\mu (z,z)$ is an infinite measure on Brownian loops that start and end at $z$. We can write $$\mu (z,z) = \int_0^\infty \frac {1}{2 \pi t} \, \mu^\normed (z,z;t)\; dt.$$ where $\mu^\#(z,z;t)$ is the usual probability measure of a Brownian bridge from $z$ to $z$.
If $D$ is a domain and $z,w \in D$, we define $\mu_D(z,w)$ to be $\mu(z,w)$ restricted to $\fincurves(D)$. For fixed $z, w$, the family $\{ \mu_D (z,w), D \supset \{ z, w \} \}$ clearly satisfies the restriction property.
If $z\neq w$, and if the domain $D$ is such that a Brownian motion in $D$ eventually exits $D$, then $|\mu_D(z,w)| < \infty$. In fact, $$|\mu_D(z,w)| = \frac { G_D(z,w)}\pi ,$$ where $G_D$ denotes the Green’s function normalized so that $G_\Disk(0,z) = -\log |z|$. Note that $\mu_D(z,z)$ is well defined and has infinite total mass. The reversibility of the Brownian bridge immediately implies that $[\mu_D(z,w)]^R = \mu_D(w,z)$.
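One way to see this identity (we only unpack the definitions): $|\mu_D(z,w;t)|$, the mass of the time-$t$ bridges from $z$ to $w$ that stay in $D$, is the transition density $p_D(t,z,w)$ of Brownian motion killed upon leaving $D$, so that $$|\mu_D(z,w)| = \int_0^\infty p_D(t,z,w)\; dt ,$$ which is the expected occupation density at $w$ of Brownian motion started at $z$ and killed at $\p D$; with the normalization $G_\Disk(0,z) = -\log|z|$, this occupation density is exactly $G_D(z,w)/\pi$. This is also the normalization behind the bound in Lemma \[sl1\] below.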
### Interior to boundary
Let $D$ be a connected domain in ${\mathbb{C}}$ whose boundary is a finite union of curves (we allow the curves to be in the sphere and for infinity to be a boundary point). We will call $\p D$ nice if it is piecewise analytic, i.e., if it is a finite union of analytic curves. A nice boundary point will be any point at which the boundary is locally an analytic curve.
Let $B$ be a Brownian motion starting at $z \in D$ and stopped at its exit time of $D$, i.e., at $$\tau_D = \inf\{t: B_t \not\in D\}.$$ Define $\mu_{D}(z,\p D)$ to be the law of $(B_t, 0 \leq t \leq \tau_D)$. If $D$ has a nice boundary we can write $$\mu_{D}(z,\p D) = \int_{\p D} \mu_D(z,w)
\; |dw| ,$$ where $\mu_D(z,w)$ for $z \in D$ and $w \in \p D$ denotes a measure supported on $\fincurves_z^w(D)$ with total mass $H_D(z,w)$, where $H_D(z,w)$ denotes the usual Poisson kernel. The normalized probability measure $\mu_D^\normed(z,w)$ is the law of Brownian motion conditioned to exit $D$ “at $w$”.
First properties
----------------
### Conformal invariance
It is well known that planar Brownian motion is conformally invariant. In our interior to interior notation, this can be phrased as follows. Suppose $f:D \rightarrow
D'$ is a conformal transformation and $z,w$ are two interior points in $D$. Then, $$f \circ \mu_D(z,w) = \mu_{f(D)}(f(z),f(w)) .$$ If $z \neq w$, this is a combination of the two classical results: $G_{f(D)}(f(z),f(w)) = G_D(z,w)$ and $[f \circ \mu_D]^\normed
(z,w) = \mu_{f(D)}^\normed(f(z),f(w)).$ For $z=w$ (in which case the measures are infinite), one can prove this by taking a limit.
Similarly, in the interior to boundary case, if $z \in D$ is an interior point and $w$ a boundary point, and if both $w$ and $f(w)$ are nice, then $$f \circ \mu_D (z,w) = |f'(w)| \ \mu_{f(D)} ( f(z), f(w) ).$$ This is a consequence of the two relations: $H_D(z,w) = |f'(w)| \; H_{D'}(f(z),
f(w))$ and $f \circ \mu_D^\normed(z,w) =
\mu_{f(D)}^\normed(f(z),f(w))$. It implies that one can define the probability measure $\mu_D^\normed(z,w)$ for any simply connected $D$ and any boundary point (i.e. prime end) $w$ by conformal invariance. For instance, it suffices to put $\mu_D^\normed(z,w) = f \circ \mu_{\Disk}^\normed(0,1)$ where $f:\Disk \rightarrow D$ is the conformal transformation with $f(0) = z$ and $f(1) = w$.
### Regularity
Note that the measures $\mu_D(z,w)$ are continuous functions of $z,w$ in the Prohorov metric. For instance, for any two interior points $z_0 \not= w_0$ in the fixed domain $D$, the mapping $(z,w) \mapsto \mu_D (z,w)$ is continuous at $(z_0, w_0)$. This can for instance be proved using a coupling argument.
Similarly, it is not difficult to show in the interior to boundary case that for a fixed boundary point $w$, the mapping $z \mapsto \mu_D (z, w)$ is continuous. When one wishes to let $w$ vary, one can for instance first note that $w \mapsto \mu_{\Disk} (0, w)$ is clearly continuous on the unit circle. Furthermore, for a conformal map $f$ from $\Disk$ onto $D$, the derivative $f'$ is uniformly bounded when restricted to any $r \Disk$ for $r <1$, so that one can control the variation of the time-parametrization. We will discuss this in more detail later in the (slightly more complicated) case of the excursion measures.
### Relation between the two
If $z,w$ are distinct points in $D$, then the normalized interior to interior measure $\mu^\#_D(z,w)$ can be given as a limit of boundary measures. Let $D_\epsilon = \{z' \in D: |z' - w| > \epsilon \}$, and let $\nu_\epsilon$ denote $\mu_{D_\epsilon}(z,\p D_\epsilon)$ restricted to curves whose terminal point is at distance $\epsilon$ from $w$. As $\epsilon \rightarrow 0+$, $|\nu_\epsilon| \sim
G_D(z,w) \, [\log(1/\epsilon)]^{-1} $ and $\nu_\epsilon^\# \rightarrow
\mu^\#_D(z,w)$.
The interior to boundary measure can also be viewed as the limit of an appropriately rescaled interior to interior measure: If $w_n \in D$ and $w_n \rightarrow w$ where $w \in \p D$, then it is not hard to show that the corresponding probability measures converge $\mu^\normed_D(z,w_n) \rightarrow
\mu^\normed_D(z,w)$, for instance using a coupling argument. Also, if $w$ is a nice boundary point, and ${\bf n}_w$ denotes the inward normal at $w$, then as $\epsilon
\rightarrow 0+$, $$G_D(z,w - \epsilon {\bf n}_w) \sim
2 \pi \epsilon
H_D(z,w)$$ (the multiplicative constant can be worked out immediately using the case $D = \Disk, z=0,w=1$). Hence, $$\label{apr7.1}
\lim_{\epsilon \rightarrow 0+}
\frac{1}{2 \epsilon} \, \mu_D(z,
w - \epsilon {\bf n}_w) =
\mu_D(z,w),$$ for any interior point $z$ and any nice boundary point $w$.
Excursion measures
------------------
### Definition and conformal invariance
Suppose that $D$ is a nice domain, and that $z$ and $w$ are different nice boundary points of $D$. We will define the Brownian measure on paths from $z$ to $w$ in $D$. This Brownian excursion measure $\mu_D (z,w)$ can be defined by various means (see e.g. [@LW2; @Vi; @LSWrest]). It can be viewed as limits of the previous measures: $$\label {c1}
\mu_D(z,w) = \lim_{\epsilon \rightarrow 0+}
\frac{1}{2 \epsilon^2} \mu_D(z + \epsilon {\bf n}_z,
w + \epsilon{\bf n}_w) = \lim_{\epsilon \rightarrow 0+}
\frac 1 { \epsilon} \mu_D(z+ \epsilon {\bf n}_z, w) .$$ Again we can write $\mu_D(z,w) = H_D(z,w) \, \mu^\normed_D(z,w)$ where $$H_D(z,w) = \lim_{\epsilon
\rightarrow 0+} \epsilon^{-1} H_D(z + \epsilon {\bf n}_z,w).$$ Under this normalization $H_\Half(0,x) = 1/ (\pi x^2)$.
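For concreteness, this normalization can be checked directly from the explicit Poisson kernel of the upper half-plane, $H_\Half(z,x) = \Im(z)/(\pi |z-x|^2)$ for interior $z$ and $x \in {\mathbb{R}}$: $$H_\Half(0,x) = \lim_{\epsilon \rightarrow 0+} \epsilon^{-1}\, H_\Half(i\epsilon, x)
= \lim_{\epsilon \rightarrow 0+} \frac{1}{\pi\,(x^2+\epsilon^2)} = \frac 1{\pi x^2} .$$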
The probability measures $\mu^\normed_D(z,w)$ are conformally invariant, i.e., $$f \circ \mu^\normed_D (z,w) = \mu^\normed_{f(D)} (f(z), f(w))$$ for a conformal transformation such that the four boundary points $z$, $w$, $f(z)$ and $f(w)$ are nice. This shows that one can define $\mu^\normed_D(z,w)$ by conformal invariance even if $z,w$ are not nice boundary points.
It is sometimes easier to consider $\mu_\Half^\normed(0,\infty)$ where $\Half$ denotes the upper half-plane. This is the distribution of $\Half$-excursions, which are Brownian motions in the first component and independent three-dimensional Bessel processes in the second component, see [@Vi; @LSWrest]. One could choose this as the definition of $\mu_\Half^\normed (0, \infty)$, define the measures $\mu_D^\normed (z,w)$ by conformal invariance, and define the measures $\mu_D (z,w)$ by multiplying by the total mass, and finally verify (\[c1\]). (The measure $\mu_\Half^\normed(0,\infty)$ is not supported on $\fincurves$ since curves under this measure have infinite time duration; however, this does not present a problem. In particular, the image of $\mu_\Half^\normed(0,\infty)$ under a conformal transformation onto a bounded domain is supported on paths of finite time duration.)
If $f: D \rightarrow D'$ is a conformal transformation, and $z,w,f(z),f(w)$ are nice boundary points, then [@Vi; @LSWrest] $$f \circ \mu_D(z,w) = |f'(z)| \; |f'(w) | \;
\mu_{f(D)}(f(z),f(w))$$ The “integrated measure” $$\mu_{\p D} := \int_{\p D} \int_{\p D}
\mu_D(z,w) \; |dz| \; |dw| ,$$ is therefore conformally invariant: $$f \circ \mu_{\p D}
= \mu_{\p f(D)}$$ as was pointed out in [@LW2].
### Regularity
We now study the regularity of the excursion measures with respect to the domain $D$. For this we will need some simple lemmas.
\[sl1\] For any simply connected domain $D$ and any two distinct points $w$ and $w'$ on the boundary of $D$, the expected time spent in an open subset $U$ of $D$ by an excursion defined under the probability measure $\mu_D^\normed (w,w')$ is bounded from above by $2 \, {\rm area} (U) / \pi$.
If $z \in D$, let $G^\normed_{D}(w,w';z)$ denote the Green’s function for $\mu^\normed_D(w,w')$. This can be obtained as the limit of $G^\normed_{D}(w_n,w';z)$ where $w_n$ is a sequence of points in $D$ converging to $w$. If $f:D \rightarrow
D'$ is a conformal transformation, then $$G^\normed_{D}(w,w';z) =
G^\normed_{f(D)}(f(w),f(w');f(z)) .$$ Also $$\label{jan19.1}
G^\normed_\Half(0,\infty;z) =
\lim_{\epsilon \rightarrow 0+} \frac{\Im(z)}
{2 \, \epsilon}
\log \frac{\Re(z)^2 + (\epsilon + \Im(z))^2}
{\Re(z)^2 + (\epsilon - \Im(z))^2 } =
2 \; \frac{\Im(z)^2}{|z|^2} \leq 2 .$$ By conformal invariance, we get $G^\normed_D(w,w';z) \leq 2 $ for all simply connected $D$ and all $w,w',z$. This readily implies the lemma.
There is a constant $c < \infty$ such that the following holds. Suppose $D,D'$ are simply connected domains and $f:D \rightarrow D'$ is a conformal transformation with $|f(z_1)-z_1| \leq \delta \leq 1$ for all $z_1 \in D$. Then for any path $\gamma$ in $D$, $$\begin{aligned}
\finmetric(\gamma,f\circ \gamma)
&\leq&
c \; [ \; \delta + \delta^{1/2} t_\gamma +
\int_0^{t_{\gamma}} 1\{\dist(\gamma(s),
\p D) \leq \delta^{1/2} \} \; ds \\
&& +
\int_0^{t_{f \circ \gamma}}
1\{\dist(f \circ \gamma(u),
\p D') \leq c \delta^{1/2} \} \; du\; ] .
\end{aligned}$$
For any $\gamma$ let $$\theta_\gamma(s) = \int_0^{s}
|f'(\gamma(r))|^2 \; dr .$$ Then, $$\finmetric(\gamma,f\circ \gamma)
\leq \sup_{0 \leq s \leq t_\gamma}
|s- \theta_\gamma(s)|
+ \sup_{0 \leq s \leq t_\gamma}
|\gamma(s) - f(\gamma(s))| .$$ The second term on the right is bounded above by $\delta$ and the first term is bounded above by $ \int_0^{t_\gamma} Y_s \; ds , $ where $Y_s = | \, |f'(\gamma(s))|^2 - 1 \, |$. Let $$\tilde D = \tilde D_\delta = \{
z \in D: \dist(z,\p D) \leq \delta^{1/2} \}.$$ For $z \in D \setminus \tilde D$, a standard estimate gives $|f'(z) - 1| \leq c\, \delta/\dist(z,\p D)
\leq c \, \delta^{1/2}$. Hence, $$\int_0^{t_\gamma} Y_s \, 1_{\gamma(s)
\not\in \tilde D} \, ds \ \leq c \,
\delta^{1/2} \, t_\gamma
.$$ For the other part, write $$\int_0^{t_\gamma} Y_s \, 1_{\gamma(s)
\in \tilde D} \, ds
\leq \int_0^{t_\gamma} 1_{\gamma(s)
\in \tilde D} \, ds +
\int_0^{t_\gamma} |f'(\gamma(s))|^2 \;
1_{\gamma(s) \in \tilde D} \, ds .$$ The first term on the right hand side is the amount of time that $\gamma$ spends within distance $\delta^{1/2}$ of the boundary. Since $|f(z) - z| \leq \delta$, the second term on the right is less than the amount of time that $f \circ \gamma$ spends within distance $2\delta^{1/2}$ of $\p D'$. Combining these estimates gives the lemma. .
\[sl3\] Suppose that $D \subset \Disk_+$ is a simply connected domain with $\Disk_+ \setminus D \subset \delta \Disk_+$ for some $\delta > 0$. Let $z,z',w$ be points on $\partial D$ with $|z|= |z'|= 1$, $|w| \le \delta$ and $|z-z'| \le \delta$. Then, the distance between $\mu_D^\normed (z,w)$ and $\mu_{\Disk_+}^\normed (z', 0)$ goes to zero with $\delta$, uniformly with respect to the choice of $w,z,z'$ and $D$.
Let $f$ denote the conformal mapping from $D$ onto $\Disk_+$ such that $f(w)=0$, $f(z)=z'$ and $|f'(i)| = 1$. It is standard that for some constant $c$, $| f(x) - x|\le c \delta$ for all $x \in D$. The total area in $D$ or $D'$ of the set of points that is at distance less than $\delta^{1/2}$ from the boundary is no larger than $c' \delta^{1/2}$. Hence, a combination of the two previous lemmas shows that $$\E [ \finmetric (\gamma, f \circ \gamma) ] \le c \, \delta^{1/2} ,$$ where the expectation is with respect to $\mu_D^\#(z,w)$. Since the law of $f \circ \gamma$ is $\mu_{\Disk_+}^\normed (z',0)$, the lemma follows.
Brownian Bubbles
----------------
### Definition
The Brownian bubble measure in $\H$ at the origin is the $\sigma$-finite measure $$\label{jan22.1}
\bubble_\Half(0) = \lim_{z \rightarrow
0} \frac{\pi}{\Im(z)}\, \mu_\Half(z,0)
\;\;\;\; (z \in \Half)$$ or equivalently (see (\[apr7.1\])), $$\label{jan23.4}
\bubble_\Half(0) = \lim_{z,w \rightarrow
0} \frac{\pi}{2\, \Im(z)\, \Im(w)} \, \mu_\Half(z,w)
\;\;\;\; (z,w \in \Half).$$ When we speak of the limit, we mean that for every $r > 0$, if we restrict the measures on the right to loops that intersect the circle of radius $r$ (so that this is a finite measure), then the limit exists and equals $\bubble_\Half(0;r)$ which is $\bubble_\Half(0)$ restricted to loops that intersect $\{|z| = r\}$. It is not hard to show the limit exists and the normalization is chosen so that $| \bubble_\Half(0;r)| = 1/r^2$. If $r > 0$ and $f_r(z)
= rz$, then $\bubble_\Half(0)$ satisfies the scaling rule $$f_r \circ \bubble_\Half(0) =
r^2 \; \bubble_\Half(0) .$$ We can also define $\bubble_D(z)$ for other domains, at least if $\p D$ is smooth near $z$, using conformal covariance, $$f \circ \bubble_D(z) = |f'(z)|^2 \;
\bubble_{f(D)}(f(z)) .$$ These measures satisfy the restriction property: if $D' \subset D$, then $\bubble_{D'}(0)$ is $\bubble_D(0)$ restricted to loops that are in $D'$.
Suppose $D \subset \Half$ is a simply connected domain containing $r \Disk_+$ for some $r > 0$, and let $A$ be the image of $\Half \setminus
D$ under the map $z \mapsto - 1/z$. Then [@LSWrest] tells us that the $\bubble_\Half(0)$ measure of the set of loops that do not stay in $D$ is $\hcap(A)$. The reader can check that both the definition in the present paper and the definition in [@LSWrest] give measure $1$ to the set of loops that intersect the unit circle, and hence the two definitions use the same normalization. In particular, this shows immediately that
\bubble_\Half (0) [\{ \gamma \ : \gamma (0, t_\gamma) \not\subset D \} ]
= \frac {- S_\Phi (0)}6
,$$ where $\Phi$ is a conformal map from $D$ onto $\H$ that keeps the origin fixed, say, and $S_\Phi$ denotes the Schwarzian derivative $$S_\Phi(z) = \frac{\Phi'''(z)}{\Phi'(z)} - \frac{3 \Phi''(z)^2} {2 \Phi'(z)^2} .$$
### Path decomposition
The next proposition relates $\bubble_\Half(0)$ to excursion measures. This expression for $\bubble_\Half(0)$ splits the bubble at the point $s e^{i \theta}$ at which its distance to the origin is maximal.
One has $$\begin{aligned}
\label{jan23.1}
\bubble_\Half(0) &=& \pi \int_0^\infty \int_0^\pi
[\mu_{r \Disk_+}(0, r e^{i \theta})
\oplus \mu_{r \Disk_+}(re^{i \theta},0)]\; r \; d\theta \; dr\\
\label {bis}
&=&
\int_0^\infty
\frac {4}{\pi r^3} \int_0^\pi
[\mu_{r \Disk_+}^\#(0, r e^{i \theta})
\oplus \mu_{r \Disk_+}^\#(re^{i \theta},0)]\;
\sin^2 \theta \; d\theta \; dr. \end{aligned}$$
Let $r > 0, \delta > 0$. By the strong Markov property, $$\begin{aligned}
\lefteqn {\bubble_\Half(0;r) - \bubble_\Half(0;r+ \delta)} \\
& = & \lim_{\epsilon \rightarrow 0+}
\frac \pi \epsilon
\int_0^\pi
[\mu_{r \Disk_+}(\epsilon i, r e^{i \theta})
\oplus \mu_{(r+\delta) \Disk_+}(re^{i \theta},0)]
\; r \; d\theta . \\
& = & \pi \int_0^\pi
[\mu_{r \Disk_+}(0, r e^{i \theta})
\oplus \mu_{(r+\delta) \Disk_+}(re^{i \theta},0)]
\; r \; d\theta .\end{aligned}$$ But as $$\lim_{\delta \to 0+}
\delta^{-1} \mu_{(r+\delta) \Disk_+}(re^{i \theta},0) = \mu_{r \Disk_+}(re^{i \theta},0),$$ we get that $$\frac{d}{dr} \bubble_\Half(0;r)
= - \pi \int_0^\pi [\mu_{r \Disk_+}(0, r e^{i \theta})
\oplus \mu_{r \Disk_+}(re^{i \theta},0)]\; r \; d\theta,$$ which gives (\[jan23.1\]). Identity (\[bis\]) follows from the fact (see Lemma \[apr3.lemma1\]) that $$|\mu_{r \Disk_+}(0, r e^{i \theta})| = r^{-2} \;
|\mu_{\Disk_+}(0,e^{i \theta})| =
\frac {2 \sin \theta}{\pi r^2}.$$
Note that $$\frac d {dr} | \bubble_\Half(0;r)|
= - 4/(\pi r^3) \int_0^\pi \sin^2 \theta \; d\theta = -2 / r^3 ,$$ which is consistent with $| \bubble_\Half(0;r)|
= r^{-2}$.
Similarly, one can decompose the Brownian bubble measure in $\H$ at the point where the imaginary part is maximal. This gives a joint description of the real and imaginary parts using one-dimensional excursions and Brownian bridges (as briefly mentioned in [@LSWrest]).
(Unrooted) loop measure
=======================
Definition, restriction and conformal invariance
------------------------------------------------
We will now define the most important object for this paper, the Brownian loop measure $\loopmeasure$. Let $\percurves$ be the set of loops, i.e., the set of $\gamma \in \fincurves$ with $\gamma(0) = \gamma(t_\gamma)$. Such a $\gamma$ can also be considered as a function with domain $(-\infty,\infty)$ satisfying $\gamma(s) = \gamma(s + t_\gamma)$.
Define $\theta_r: \percurves
\rightarrow \percurves$ by $t_{\theta_r\gamma}
= t_\gamma$ and $\theta_r\gamma(s) = \gamma(s+ r)$. We say that two loops $\gamma$ and $\gamma'$ are equivalent if for some $r$, $\gamma' = \theta_r \gamma$. We write $[\gamma]$ for the equivalence class of $\gamma$. Let $\unrootedcurves$ be the set of [*unrooted loops*]{}, i.e., the equivalence classes in $\percurves$. Note that $\unrootedcurves$ is a metric space under the metric $$\unrootedmetric(\gamma,\gamma') = \inf_{r \in [0, t_\gamma]}
\finmetric(\theta_r\gamma, \gamma').$$
Any measure supported on $\percurves$ gives a measure on $\unrootedcurves$ by “forgetting the root”, i.e., by considering the map $\gamma \mapsto [\gamma]$. If $D$ is a domain, we define $\percurves(D),\unrootedcurves(D)$ to be the set of loops that lie entirely in $D$, i.e., $\gamma[0,t_\gamma] \subset D$.
We define the[ *Brownian loop measure*]{} $\loopmeasure$ on $\unrootedcurves$ by $$\loopmeasure
= \int_{\mathbb{C}}\frac{1}{t_\gamma} \, \mu (z,z)\; {dA(z)}=
\int_{\mathbb{C}}\int_0^\infty \frac {1}{2 \pi t^2 }\mu^\normed (z,z; t)\; dt \; dA (z),$$ where $dA$ denote the Lebesgue measure on ${\mathbb{C}}$. We insist on the fact that the measure $\loopmeasure$ is a measure on [*unrooted*]{} loops.
We will call a Borel measurable function $T: \percurves \rightarrow
[0,\infty)$ a [*unit weight*]{} if for every $\gamma \in \percurves$, $$\int_0^{t_\gamma} T(\theta_r\gamma) \; dr
= 1 .$$ One example of a unit weight is $T(\gamma) =
1/t_\gamma$. Note that $\loopmeasure$ satisfies $$\label{jan14.1}
\loopmeasure = \int_{\mathbb{C}}T\; \mu(z,z) \;
dA(z)$$ (considered as a measure on $\unrootedcurves$) for any unit weight $T$.
If $D$ is a domain, we define $\loopmeasure_D$ to be $\loopmeasure$ restricted to the curves in $\unrootedcurves(D)$; this is the same as the right-hand side of (\[jan14.1\]) with $D$ replacing ${\mathbb{C}}$ and $\mu_D(z,z)$ replacing $\mu(z,z)$. By construction, the family $\{\loopmeasure_D\}$ satisfies the restriction property. Not as obviously, these measures are also conformally invariant:
If $f: D
\rightarrow D'$ is a conformal transformation, then $f \circ \loopmeasure_D = \loopmeasure_{f(D)}$.
Showing this requires two observations. One, which we have already noted, is the conformal invariance of interior to interior measures, $f \circ \mu_D(z,z)
= \mu_{f(D)}(f(z),f(z))$. The other is the fact that we can define a unit weight $T_f$ by $T_f(\gamma) = 1/t_\gamma$ if $\gamma \not\in \percurves(D)$, and if $\gamma
\in \percurves(D)$, $T_f(\gamma) = |f'(\gamma(0))|^2/
t_{f \circ \gamma}$. To check that this is a unit weight, note that $$\int_0^{t_\gamma} T_f(\theta_r\gamma) \; dr
= (1/t_{f \circ \gamma}) \int_0^{t_\gamma}
|f'(\gamma(r))|^2 \; dr = 1 .$$ Therefore, $$\begin{aligned}
f \circ \loopmeasure_D & = &
f \circ \int_D T_f \; \mu_D(z,z) \, dA(z)\\
& = & \int_D
(1/t_{f \circ \gamma}) \;
|f'(z)|^2\; f \circ \mu_D(z,z) \; dA(z) \\
& = & \int_D
(1/t_{f \circ \gamma}) \;
\mu_{D'}(f(z),f(z)) \; [|f'(z)|^2\;dA(z)] \\
& = & \int_{D'} T \, \mu_{D'}(w,w)\; dA(w)
= \loopmeasure_{D'}.\end{aligned}$$ Here $T$ denotes the simple unit weight $T(\gamma) = 1/t_\gamma$.
Note that the same argument shows that $\loopmeasure$ is invariant under the inversions $z \mapsto 1 / (z-z_0)$ for all fixed $z_0$.
Decompositions
--------------
The definition of $\loopmeasure$ makes it conformally invariant and hence independent of the choice of coordinate axes. It will be however convenient to have expressions for $\loopmeasure$ that do depend on the axes. We will write the measure on unrooted loops $[\gamma]$ as a measure on rooted loops by choosing the representative $\gamma$ whose initial point is the (unique) point on the loop of minimal imaginary part (the same works of course also for the maximal imaginary part). Note that this choice of “root” of the loop is not conformally invariant.
\[prop.jan23.1\] $$\loopmeasure = \frac 1 {2\pi} \; \int_{-\infty}^\infty
\int_{-\infty}^\infty \bubble_{\Half + iy}
(x+iy) \; dx \; dy$$
There are various simple ways to prove this. The main point is to get the multiplicative constants right. We therefore opt for a self-contained elementary proof that does not rely on other multiplicative conventions (i.e., excursions). We start by recalling some facts about one dimensional Brownian motion. Suppose $Y_t$ is a one-dimensional Brownian motion started at the origin. Let $t^*$ be the time in $[0,1]$ at which $Y_t$ is minimal, let $M = Y_{t^*}$, and let $\Psi = (Y_0 - M) \, (Y_1 - M)$. It is easy to see that the law of $t^*$ is the arcsine law with density $1/ ( \pi \sqrt { t (1-t)})$ on $[0,1]$. Given $t^*$, $Y_0 - M$ and $Y_1-M$ are independent random variables with the distribution of Brownian motion “conditioned to stay positive”. It is not difficult to show that $\E[Y_0 - M \mid t^* = t] =
\sqrt{\pi t/2}$ and hence that $\E [ \Psi] = 1/2$.
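Indeed, by time reversal one also has $\E [Y_1 - M \mid t^* = t] = \sqrt{\pi (1-t)/2}$, so that, using the conditional independence and the arcsine density of $t^*$, $$\E[\Psi] = \int_0^1 \sqrt{\frac{\pi t}2}\, \sqrt{\frac{\pi (1-t)}2}\; \frac{dt}{\pi \sqrt{t(1-t)}} = \int_0^1 \frac{dt}2 = \frac 12 .$$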
We now define a unit weight $T_\epsilon$ on $\gamma \in \percurves$ that will approximate the Dirac mass at the time of the minimal imaginary part of $\gamma$. If $t_\gamma <
\epsilon$, then $T_\epsilon(\gamma)
= 1/t_\gamma$. Suppose $t_\gamma \geq
\epsilon$ and there is a unique $r_0 \in [0,t_\gamma)$ such that $\Im[\gamma(r)] < \Im[\gamma(t)]$ for $t \in [0,t_\gamma) \setminus \{r_0\}$. Then $T_\epsilon(\theta_r \gamma) =
1/\epsilon$ for $r_0 - \epsilon \leq
r \leq r_0$ and $T_\epsilon(\theta_r \gamma)
=0$ for other $r_0 < t < r_0 + \epsilon$ (here $\gamma$ is considered as a periodic function of period $t_\gamma$). If no such unique $r_0$ exists, set $T_\epsilon (\gamma) = 1/t_\gamma$ (the choice here is irrelevant since this is a set of loops of measure zero). Note that the measures $\mu(z,z)$ are supported on loops for which a unique $r_0$ exists. It is easy to see that $T_\epsilon$ is a unit weight, and hence for every $\epsilon$, $$\loopmeasure = \int_{\mathbb{C}}T_\epsilon
\; \mu(z,z) \; dA(z) = \lim_{\epsilon
\rightarrow 0+}
\int_{\mathbb{C}}\epsilon^{-1}
\; \mu(z,z; \ge \epsilon) \; dA(z) ,$$ where $\mu (z,z; \ge \epsilon )$ denotes $\mu(z,z)$ restricted to curves $\gamma$ with $t_\gamma \geq \epsilon$ and $$\inf\{\Im(\gamma(t)); 0 \leq t
\leq \epsilon\} = \inf\{\Im(\gamma(t)):
0 \leq t < t_\gamma \} .$$ For fixed $\epsilon$, $\int_{\mathbb{C}}\epsilon^{-1}
\; \mu(z,z;\geq \epsilon) \; dA(z)$ is the same as $\loopmeasure$ restricted to curves with $t_\gamma \geq \epsilon$. Let us consider the measure $\epsilon^{-1} \; \mu(z,z;\geq \epsilon)$. For ease let $z=0$. Start a Brownian motion $B_t$ at $0$ and let it run until time $\epsilon$; let us write $B_\epsilon
= \sqrt{\epsilon}w$. We let $ - b
\sqrt{\epsilon} = \min\{\Im(B_t):
0 \leq t \leq \epsilon\}. $ Then given $B_t,
0 \leq t \leq \epsilon$, the remainder of the curve is obtained from the measure $\epsilon^{-1}
\; \mu_{\Half - i b \sqrt {\epsilon}}
(0,w \sqrt{\epsilon})$. As $\epsilon \rightarrow
0+$, this looks like $\epsilon^{-1} \, \mu_\Half
(i b \sqrt \epsilon, i(b+w) \sqrt{\epsilon})$, which in turn has the same limit as $\epsilon^{-1} \, b [b + \Im(w)] \, \mu_\Half
(i \sqrt \epsilon, i \sqrt \epsilon) .$ Hence (see (\[jan23.4\])), $$\begin{aligned}
\lim_{\epsilon \rightarrow 0+}
\epsilon^{-1} \;
\mu(0,0;\ge \epsilon) & = & \lim_{\epsilon
\rightarrow 0+} \E[b(b+\Im(w))] \;
\epsilon^{-1} \mu_\Half
(i \sqrt \epsilon, i \sqrt \epsilon) \\
& = & \frac {1}{2 \pi}
\bubble_\Half(0) . \end{aligned}$$
The next proposition is similar. It gives an expression for $\loopmeasure_\Half$ by associating to an unrooted loop the rooted loop whose root has maximal absolute value.
\[jan23.prop2\] $$\loopmeasure_\Half =
\frac 1 {2 \pi} \int_0^\infty \int_0^\pi \bubble_{r \Disk_+}
(r e^{i \theta}) \; d \theta \; r \, dr .$$
Let $$\begin{aligned}
\rect & =& \{x+iy: -\infty < x < \infty , 0 < y < \pi \} \\
\rect_b & = & \{z \in \rect: \Re(z) < b\}
\end{aligned}$$ and let $\phi (z) = e^z$. Conformal invariance tells us that $\phi \circ \loopmeasure_\rect = \loopmeasure_\Half$. But Proposition \[prop.jan23.1\] (rotated ninety degrees) and restriction tell us that $$\loopmeasure_\rect = \frac 1 {2 \pi} \, \int_{-\infty}^\infty
\int_0^\pi \bubble_{\rect_x}(x+iy) \; dy \; dx.$$ The scaling rule for $\bubble$ gives $\phi \circ \bubble_{\rect_x}(x+iy) = e^{2x}
\; \bubble_{e^x \Disk_+}(e^{x+iy})$. Therefore $$\begin{aligned}
\loopmeasure_\Half = \phi \circ \loopmeasure_\rect
& = & \frac 1 {2\pi} \,
\int_{-\infty}^\infty \int_0^\pi
e^{2x}
\; \bubble_{e^x \Disk_+}(e^{x+iy}) \; dy \; dx \\
& = & \frac 1 {2\pi} \,
\int_0^\infty \int_0^\pi
\bubble_{r \Disk_+}(re^{iy} ) \; dy \; r \, dr . \end{aligned}$$
[**Remark.**]{} If we combine this description with the invariance of the unrooted loop measure under the inversion $z \mapsto -1/z$, we get that, when $r \to 0$, the measure $\loopmeasure_\Half$ restricted to those loops that intersect $r\Disk_+$ is close to a multiple of the Brownian bubble measure in $\H$ at the origin.
Bubbles and loops
=================
The goal of this section is to derive the relation between the Poissonian cloud of loops that intersect a given curve and the Poisson point process of bubbles that we briefly described in the introduction. In order to prove this, we need a clean generalization of the previous remark to shapes other than disks, and to show that the convergence holds uniformly over all shapes.
Some estimates
--------------
We will need some standard estimates about the Poisson kernel on rectangles and half-infinite rectangles, or, more precisely, on the images of these domains under the exponential map.
\[apr3.lemma1\] There exist a constant $c$ such that if $r \in (0,1/2) $ and $\theta,\varphi \in (0,\pi)$, $$\label{C}
|H_{\Disk_+}(r e^{i\theta},e^{i \varphi})
- \frac 2 \pi \, r\, \sin \theta \, \sin \varphi| \leq
c \, r^2 \, \sin \theta \, \sin \varphi ,$$ $$\label{A}
|H_{\Half \setminus \overline{\Disk_+}}(r^{-1} e^{i \theta},
e^{i\varphi}) - \frac 2 \pi \, r \, \sin \theta \, \sin \varphi | \leq
c \, r^2 \sin \theta \, \sin \varphi.$$
The map $f(z) = -z -(1/z)$ maps $\Disk_+$ onto $\Half$. Hence $$\begin{aligned}
H_{\Disk_+}(r e^{i\theta},e^{i\varphi}) & = &
|f'(e^{i\varphi})| \, H_\Half(f(re^{i\theta}),f(e^{i\varphi}) )\\
& = & 2 \, \sin \varphi \, H_\Half(f(re^{i\theta}),f(e^{i\varphi})).\end{aligned}$$ But if $|z| \geq 5/2$ and $|x'| \leq 2$, $$H_\Half(z,x') = \frac{\Im(z)}{\pi \; [(\Re(z) - x')^2 + \Im(z)^2]}
= \frac{\Im(z)}{\pi \, |z|^2} \, [1 + O(\frac{1}{|z|})] ,$$ and $$f(r e^{i\theta}) = \frac{1}{r} \, e^{i(\pi - \theta)} \, +O(r),$$ $$\Im[f(r e^{i \theta})] = \frac{\sin \theta}{r} + \sin \theta \, O(r) .$$ This gives the first expression, and the second is obtained from the first using the map $z \mapsto -1/z$.
There exists a constant $c$ such that if $e^{-s} \in (3/4,1)$, $r \in (0,1/2)$, and $\theta,\varphi \in (0,\pi)$, then $$\label{B}
|H_{ \Disk_{+,r}}(e^{-s + i \theta},re^{i\varphi}) -
\frac{4}{\pi} \, \sinh s \, \sin \theta \, \sin \varphi| \leq
c \, r \, s \, \sin \theta \, \sin \varphi ,$$ where $\Disk_{+,r} = \{z \in \Disk_+: |z| > r \}.$
Separation of variables gives an exact form for the Poisson kernel on a rectangle, and the logarithm maps $\Disk_{+,r}$ onto a rectangle. Doing this we see, in fact, that $$H_{ \Disk_{+,r}}(e^{-s + i \theta},re^{i\varphi})
= \frac{4}{\pi \, r} \sum_{n=1}^\infty \sin(n\theta) \, \sin(n \varphi) \,
\sinh(ns) \; \frac{r^n}{1 + r^{2n}} ,$$ from which the estimate comes easily.
Bubble measure and loop measure {#estimatesec}
-------------------------------
Suppose $V_n$ is a sequence of sets in $\Half$ with $u_n = \rad(V_n) \rightarrow 0$ and such that $\Half \setminus V_n$ is simply connected. Let $m_n$ be $\loopmeasure_\Half$ restricted to loops that intersect both $V_n$ and the unit circle. We set $ h_n = \hcap (V_n)$.
\[p.tech\] When $n \to \infty$, $$m_n = \frac {h_n}2 \, \bubble_\Half (0;1) ( 1 + o(1)),$$ where $o(1)$ is uniformly bounded by a function of $u_n$ that goes to zero with $u_n$.
Note that scaling implies the corresponding results for the measures restricted to paths that intersect any given circle $r \partial \Disk$. We will use the decomposition of the measures according to the point at which the loop (or the bubble) has maximal absolute value. Recall that $$m_n = \frac 1 {2 \pi} \int_1^\infty \int_0^\pi \bubble_{r \Disk_+}
(r e^{i \theta}| V_n) \; d \theta \; r \, dr,$$ where $\bubble_{r \Disk_+}
(r e^{i \theta}| V_n)$ denotes $\bubble_{r \Disk_+}
(r e^{i \theta})$ restricted to loops that intersect $V_n$. Recall also that $$\label {recall}
\bubble_\Half(0;1) =
\int_1^\infty
\frac {4}{\pi r^3}
\int_0^\pi
[\mu_{r \Disk_+}^\#(0, r e^{i \theta})
\oplus \mu_{r \Disk_+}^\#(re^{i \theta},0)]\;
\sin^2 \theta \; d\theta \; dr.$$ It is not difficult to show that $$[\bubble_{r \Disk_+} (r e^{i \theta}|V_n)]^\#
\rightarrow
\mu_{r\Disk_+}^\#(r e^{i \theta },0) \oplus
\mu_{r\Disk_+}^\#(0,r e^{i \theta }),$$ uniformly on $\{1 \leq r \leq R\}$ and $\theta \in (0, \pi)$. (Note that there is a conformal transformation $g: \Half \setminus V_n
\rightarrow \Half$ with $\max|g(z) -z| \leq
c \,u_n $).
We now focus on the total masses. We claim that $$\label{jan23.2}
|\bubble_{ \Disk_+}
( e^{i \theta}| V_n) |= 4\,
h_n \sin^2 \theta \; [1+ O(u_n)] .$$ By the scaling rules for $\bubble$ and $\hcap$ this implies that for all $r \geq 1$, $$|\bubble_{ r\Disk_+}
( re^{i \theta}| V_n) |= 4\;r^{-4} \;
h_n \sin^2 \theta \; [1+ O(u_n)],$$ and the proposition follows, using (\[recall\]).
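To spell out that last step: inserting the scaled estimate together with the convergence of the normalized measures into the decomposition of $m_n$ yields, up to a multiplicative error $1 + o(1)$, $$m_n \approx \frac 1{2\pi} \int_1^\infty \int_0^\pi 4 \, r^{-4} \, h_n \sin^2\theta \;
[\mu_{r \Disk_+}^\#(0, r e^{i \theta}) \oplus \mu_{r \Disk_+}^\#(re^{i \theta},0)] \; d\theta \; r\, dr
= \frac{h_n}2 \int_1^\infty \frac 4{\pi r^3} \int_0^\pi
[\mu_{r \Disk_+}^\#(0, r e^{i \theta}) \oplus \mu_{r \Disk_+}^\#(re^{i \theta},0)] \, \sin^2\theta \; d\theta \; dr ,$$ and by (\[recall\]) the right-hand side is exactly $\frac{h_n}2 \, \bubble_\Half(0,1)$.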
To prove (\[jan23.2\]), we first note that $$|\bubble_{ \Disk_+}
( e^{i \theta}| V_n) |
= \lim_{ \epsilon \to 0+} \frac {\pi}{\eps}
\mu_{\Disk_+} ( \exp (- \eps + i \theta), \exp ( i \theta)) [ B \hbox { hits } V_n ].$$ The estimates (\[A\]) and (\[B\]) show that the following two measures are very close (for all large $R$ and small $r$):
- The measure of $1_{\sigma_u < T} \arg (B_{\sigma_u})$ when $B$ is defined under the measure $R \P^{iR}$.
- The measure of $1_{\sigma_u < T, \sigma_u < \sigma_1} \arg (B_{\sigma_u})$ when $B$ is defined under the measure $$(2 \,
\sin \theta \sinh(\eps))^{-1} \P^{(1-\eps) \exp (i \theta)}.$$
After these hitting times, it is possible to “couple” the two paths up to their first hitting of $V_n$. After the hitting of $V_n$, we want to estimate the probability that the path goes back to the unit circle without hitting ${\mathbb{R}}$ and that it hits it in a neighborhood of $\exp (i \theta)$. By (\[C\]), this will occur with probability $$\frac {2}{\pi} \Im (B_{\rho_{V_n}}) \sin \theta \, d\theta .$$ Hence, we finally get that (recall that the estimates are uniform in $\theta$, $\epsilon$ and $R$) $$\begin{aligned}
|\bubble_{ \Disk_+} ( e^{i \theta} \mid V_n ) |
& \sim & \lim_{\eps \to 0, R \to \infty}
\frac {2 \, \pi}{\eps} {R \sin \theta}{\sinh \eps}
\E^{iR} [ \frac {2}{\pi} \Im ( B_{\rho_{V_n}} ) \sin \theta ] \\
&\sim &
4 \sin^2 \theta \lim_{R \to \infty} R \E^{iR} [ \Im ( B_{\rho_{V_n}} )]
\\
& \sim&
4 h_n \sin^2 \theta\end{aligned}$$ when $u_n \to 0$.
Bubble soup and loop soup {#soupsec}
-------------------------
We define a [*bubble soup*]{} with intensity $\lambda \geq 0$ to be a Poisson point process with intensity $\lambda \bubble_\Half$. One can also view it as a Poissonian sample from the measure $\lambda \bubble_\Half(0) \times ({\rm length})$ on $\fincurves_0^0(\Half)
\times [0,\infty)$. We can write a realization of the bubble soup as a countable collection $ {\cal U} = \{(\gamma_j,s_j)\}$. Recall that the law of ${\cal U}$ is characterized by the fact that:
- For any two disjoint measurable subsets $U_1$ and $U_2$ of $\fincurves_0^0(\Half) \times [0,\infty)$, ${\cal U} \cap U_1$ and ${\cal U} \cap U_2$ are independent.
- The law of the number of elements in ${\cal U} \cap U$ is the Poisson law with mean $\lambda \bubble_\Half(0) \times ({\rm length}) [ U] $ (when this quantity is finite).
We will think of the bubble $\gamma_j$ as being created at time $s_j$. Clearly, with probability one $s_j \neq s_k$ for $j\neq k$.
A [*Brownian loop soup*]{} with intensity $\lambda$ is a Poissonian sample from the measure $\lambda \loopmeasure$. We will use ${\cal L}_{\mathbb{C}}$ to denote a realization of the loop soup. A sample of the Brownian loop soup is a countable collection of (unrooted) Brownian loops in the plane. We will use ${\cal L}$ to denote the family of loops in ${\cal L}_{\mathbb{C}}$ that are in $\H$. This is the Brownian loop soup in the half-plane.
If $D \subset \Half$ is a domain, then we write
- ${\cal L}(D)$ for the family of loops in ${\cal L}$ that are in $D$
- ${\cal L}^\perp(D)$ for ${\cal L} \setminus {\cal L}(D)$, i.e., the family of loops that intersect $\Half \setminus D$.
By definition, for any fixed $D$, the two random families ${\cal L}(D)$ and ${\cal L}^\perp(D)$ are independent.
Note that the (law of the) families ${\cal L} (D)$ inherit the conformal invariance and restriction properties of the Brownian loop measure.
Now suppose that $\eta:[0,\infty) \rightarrow
{\mathbb{C}}$ is a simple curve with $\eta(0,\infty) \subset \Half$ and $|\eta(t)|\rightarrow \infty$ as $t \rightarrow \infty$. Assume that $\eta$ is parametrized by capacity, i.e., that $\hcap[\eta[0,t]] =2 t$. Let $H_t = \Half \setminus \eta[0,t]$ and let $g_t$ be the unique conformal transformation of $H_t$ onto $\Half$ such that $g_t(\eta(t)) = 0$ and $g_t(z)\sim z $ as $z \rightarrow \infty$. We let $f_t = g_t^{-1}$ which maps $\Half$ conformally onto $H_t$ with $f_t(0) = \eta(t)$.
Given a realization ${\cal U}$ of the bubble soup, consider the set of loops $${\cal U}_{\eta,t} = \{f_{s_j} \circ \gamma_j:
(\gamma_j,s_j) \in {\cal U}, s_j \leq t \} .$$ We consider this as a realization of [*unrooted*]{} loops by forgetting the roots.
\[maintheorem\] For every $t < \infty$, if ${\cal U}$ is a bubble soup with intensity $\lambda > 0$, then ${\cal U}_{\eta,t}$, considered as a collection of unrooted loops, is a realization of ${\cal L}^\perp(H_t)$ with intensity $\lambda$.
It is useful to consider this theorem in the other direction, i.e. to see that it is equivalent to Theorem \[ls=bs\]. Let ${\cal L}$ be a realization of the loop soup in $\Half$ with intensity $\lambda$. We write elements of ${\cal L}$ as $[\gamma]$ since they are equivalence classes of loops. We write $V_\gamma$ for the hull generated by $[\gamma]$, i.e. $V_\gamma$ is the complement of the unbounded component of ${\mathbb{C}}\setminus \gamma[0,t_{\gamma}]$ (this does not depend on the choice of representative of $[\gamma]$). Let $\eta$ be as before and let us write ${\cal L}^\perp = \{[\gamma_1],[\gamma_2],\ldots\}$ for ${\cal L}^\perp(\Half \setminus \eta[0,\infty))$, i.e., for the set of loops in ${\cal L}$ that intersect $\eta[0,\infty)$. For every $[\gamma_j] \in {\cal L}^\perp$, let $r_j$ denote the smallest $r$ such that $\eta(r) \in \gamma_j[0,t_{\gamma_j}]$. Note that this does not depend on which representative $\gamma_j$ of $[\gamma_j]$ we choose.
Let us now briefly justify the fact that with probability one, for each $j$ there is a unique representative of $[\gamma_j]$, which we write as just $\gamma_j$, such that $\gamma_j(0) = \eta(r_j)$ and $\gamma_j(0,t_{\gamma_j})
\subset H_{r_j}$. It follows for instance readily from the fact that if $B$ is a Brownian bridge (from $z$ to $w$ in time $t$), conditioned to stay in $\H$, then for each rational $0 < q_1 < q_2 < t$, if one defines the first time $s(q_1)$ at which $\eta$ hits $B[0,q_1]$, then $${{\bf P}}[\{\, s(q_1) < \infty; \; \eta(s(q_1)) \in B[q_2,t] \, \}] = 0 ,$$ since complex Brownian motion does not hit points.
From now on we consider $[\gamma_j]$ as a rooted loop by choosing this representative $\gamma_j$. Note that this choice depends on $\eta$. The set of times $ {\cal T} = \{r_j: \gamma_j \in {\cal L}^\perp \}$ is countable and dense in $[0,\infty)$ since with probability one for each rational $t$ there exist loops in ${\cal L}$ of arbitrarily small diameter surrounding $\eta(t)$. Also, $r_j \neq r_k$ if $j \neq k$. We let ${\cal L}^\perp_t$ denote the set of $\gamma_j \in {\cal L}$ with $r_j \leq t$, i.e., the set of loops that intersect $\eta[0,t]$. Recall that if $t < t_1$, then ${\cal L}^\perp_t$ and ${\cal L}^\perp_{t_1} \setminus {\cal L}^\perp_t$ are independent.
For $r > 0$, let ${\cal L}_t^\perp(r)$ denote the set of $\gamma_j
\in {\cal L}_t^\perp$ such that $\rad[g_{r_j} \circ \gamma_j] := \sup\{|g_{r_j}
\circ \gamma_j(s)|: 0 \leq s \leq t_{g_{r_j} \circ \gamma_j}\} \geq r$. Note that with probability one ${\cal L}_t^\perp(r)$ is finite for each $t < \infty, r > 0$. It suffices to show that for every $r > 0$ the set of loops $$\{g_{r_j} \circ \gamma_j: \gamma_j \in {\cal L}_t^\perp(r)\}$$ is a Poissonian realization of the measure $ \lambda t \, \bubble_\Half(0;r)$. We only need to do this for the case $r=1$; the other cases are essentially the same. Let ${\cal A}_t = \{g_{r_j}
\circ \gamma_j: \gamma_j \in {\cal L}_t^\perp(1)\}$. We have already noted that for $\epsilon > 0$, ${\cal A}_{t + \epsilon}\setminus {\cal A}_t$ is independent of ${\cal A}_t$. If $t > 0$, the curve $\eta^t(s) = g_t[\eta(t+s)], 0 \leq s < \infty$, is also a simple curve parametrized by capacity. Conformal invariance of $\loopmeasure$ tells us that the distribution of $g_t \circ[{\cal A}_{t + \epsilon}
\setminus {\cal A}_t]$, derived from the curve $\eta$, is the same as the distribution of ${\cal A}_\epsilon$ derived from the curve $\eta^t$. Hence it suffices to prove the two conditions above for $t=0$. But this is the estimate that was done in §\[estimatesec\] so we have the result.
Together with (\[schw\]), this immediately implies the following fact: Suppose that $D \subset \H$ is simply connected, and that the curve $\eta (0,T] \subset D$ is parametrized as before. Define $D_t= g_t (D)$ (where $g_t$ is the conformal map from $\H \setminus \eta [0,t]$ onto $\H$ with $g_t (z) \sim z$ at infinity, and $g_t (\eta(t)) = 0$). As in (\[schw\]), define also a conformal map $\phi_t$ from $D_t$ onto $\H$ that fixes the origin. Then, $$\label {schw2}
{{\bf P}}[ \forall \gamma \in {\cal L}\ : \ \gamma \cap \eta [0,T] =
\emptyset \hbox { or } \gamma \subset D ]
= \exp ( \lambda \int_0^T \frac {S_{\phi_t} (0)} 6 \, dt ).$$
Parametrization
---------------
Suppose $\eta:[0,\infty) \rightarrow {\mathbb{C}}$ is a curve as before, and let ${\cal L}^\perp_t$ denote a realization of the loop soup in $\Half$ restricted to curves that intersect $\eta[0,t]$. We are going to show that if the path $\eta[0,\infty)$ has dimension strictly less than two, then the sum of all the time-lengths of the loops in ${\cal L}^\perp_t$ is almost surely finite. This will imply that one can construct a continuous path by attaching these loops “chronologically” to $\eta$.
\[parameterlemma\] Suppose that for some $\epsilon > 0$ and $T>0$, $$\label{parameter2}
\lim_{\delta \rightarrow 0+}
\delta^{-\epsilon} \;
\area(\{z: \dist[z,\eta[0,T]] \leq
\delta\}) = 0 .$$ Then with probability one, $$\sum_{\gamma \in {\cal L}^\perp_{T}}
t_\gamma < \infty .$$
Fix $T, \epsilon$, and let $r = \rad(\eta[0,T]) <
\infty$. Constants in this proof may depend on $T ,r, \eps$. It suffices to prove two facts: $$\#\{\gamma \in {\cal L}^\perp_{T}:
t_\gamma > 1\} < \infty \hbox { a.s., and }\;\;\;
\E[ \sum_{\gamma \in {\cal L}^\perp_{T}}
t_\gamma \; 1_{t_\gamma \leq 1}] < \infty .$$ Note that the first one is equivalent to $$\loopmeasure_\Half [\{\gamma \in {\cal L}_T^\perp \ : \ t_\gamma > 1 \} ]
< \infty .$$ But on the one hand $$\begin{aligned}
\loopmeasure [ \{ \gamma \ : \ \gamma \subset 2r \Disk ,\ t_\gamma > 1 \} ]
&=& \int_{2r \Disk} \int_1^\infty dA (z) \frac {dt}{2 \pi t^2}
\mu^\# (z,z; t) [\{ \gamma \ : \ \gamma \subset 2r \Disk \} ]\\
&\le& \frac {A (2r \Disk)}{2 \pi}
< \infty .\end{aligned}$$ On the other hand, $$\begin{aligned}
\lefteqn {
\loopmeasure_\Half [ \{ \gamma \ : \ t_\gamma > 1 ,\ \gamma \not\subset 2r \Disk
,\ \gamma \cap r\Disk \not= \emptyset \} ]
} \\
&\le&
\frac {1}{2 \pi} \int_0^r \int_0^\pi \bubble_{u\Disk_+} (u\exp (i \theta))
[ \{\gamma \ : \ \gamma \not\subset 2r \Disk \} ] du\ d\theta
.\end{aligned}$$ It is easy (using conformal invariance) to see that $\bubble_{u\Disk_+} (u\exp (i \theta))
[ \{ \gamma \not\subset 2r \Disk \} ]$ is bounded independently of $u \le r$ and $\theta \in [0, \pi]$. Hence, the last displayed expression is finite, which completes the proof of the fact that the number of loops in ${\cal L}$ of time-length greater than one that intersect $\eta [0,T]$ is almost surely finite.
Note that $$\E[ \sum_{\gamma \in {\cal L}^\perp_{T}}
t_\gamma \; 1_{t_\gamma \leq 1}]
= \int_\Half \int_0^1 |\tilde \mu_\Half(z,z;t)| \, dt \; dA(z),$$ where $\tilde \mu_\Half(z,z;t)$ denotes $\mu_\Half(z,z;t)$ restricted to loops that intersect $\eta[0,T]$ (the $t_\gamma^{-1}$ in the definition of the loop measure cancels with the $t_\gamma$ in the expression on the left hand side). Let $$F(z) = \int_0^1 |\tilde \mu_\Half(z,z;t)|
\; dt ,$$ and let $d_z = \dist(z,\eta[0,T])$. It is standard to see that there exist constants $c,a$ such that $$|\tilde \mu_\Half(z,z;t)| \leq c \,t^{-1} \,
e^{-a \, d_z^2/t} .$$ Hence, we get $F(z) \leq c \; \log (1/d_z)$ and $$\label{parameter3}
\area\{z: F(z) \geq s\} \leq
\area\{z: \dist(z,\eta[0,T]) \leq e^{-s/c} \}
\leq e^{-s \epsilon/ c} .$$ Also we get $F(z) \leq c e^{-a |z|^2}$ for $|z| \geq 3r$, and hence we can see that $\int F(z) \; dA(z)
< \infty$.
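To make the last step explicit: choose $s_0$ so that (\[parameter3\]) holds for $s \geq s_0$. Then $$\int_{3r\Disk} F(z) \, dA(z) = \int_0^\infty \area\{z \in 3r\Disk : F(z) \geq s\} \, ds
\leq s_0 \, \area(3r\Disk) + \int_{s_0}^\infty e^{-s\epsilon/c} \, ds < \infty ,$$ while the region $\{|z| \geq 3r\}$ contributes at most $\int c \, e^{-a|z|^2} \, dA(z) < \infty$.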
[**Remark.**]{} From (\[parameter3\]) we can see that (\[parameter2\]) can be weakened to $${\rm area}\{z: \dist(z,\eta[0,T]) \leq e^{-s} \} \leq g(s) ,$$ where $\int_1^\infty g(s) \, ds < \infty$. However, if $\eta$ is space-filling, then the result does not hold as the following shows:
If $D$ is any nonempty open domain, then $\sum_{\gamma \in {\cal L}(D)} t_\gamma = \infty$ almost surely.
Note first that $$\E[\sum_{\gamma \in {\cal L}(D)} t_\gamma] = \infty .$$ This can be seen easily from the scaling rule $$\E[\sum_{\gamma \in {\cal L}(rD )} t_\gamma]
= r^2 \, \E[\sum_{\gamma \in {\cal L}(D)} t_\gamma] .$$ For example, if $D$ is a square, we can divide $D$ into $4$ squares of half the side length, $D_1,\ldots,D_4$. The scaling rule tells us that $$\label{parameter4}
\E[\sum_{\gamma \in {\cal L}(D)} t_\gamma] =
\sum_{j=1}^4 \; \E[\sum_{\gamma \in {\cal L}(D_j)} t_\gamma] .$$ But $$\sum_{\gamma \in {\cal L}(D)} t_\gamma =
\sum_{j=1}^4 \sum_{\gamma \in {\cal L}(D_j)} t_\gamma +
\sum_{{\cal L}
(D) \setminus [{\cal L}(D_1) \cup \cdots \cup {\cal L}(D_4)]}t_\gamma.$$ Since the last term has strictly positive expectation, the expectations in (\[parameter4\]) must be infinite.
Furthermore (by dividing the square into $2^m$ smaller squares), $\sum_{\gamma \in {\cal L}(D)} t_\gamma$ is larger than the mean of the values of $2^m$ independent copies of itself (i.e. the same random variable with infinite expectation) for any $m$. The result follows.
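One way to spell out this last step: if $D$ is subdivided into $n = 4^m$ sub-squares of area $\area(D)/n$, then by the scaling rule each $\sum_{\gamma \in {\cal L}(D_j)} t_\gamma$ has the law of $n^{-1} \sum_{\gamma \in {\cal L}(D)} t_\gamma$, these $n$ sums are independent, and their total is dominated by $\sum_{\gamma \in {\cal L}(D)} t_\gamma$. Hence $\sum_{\gamma \in {\cal L}(D)} t_\gamma$ stochastically dominates the average of $n$ i.i.d. nonnegative random variables with infinite mean; by the strong law of large numbers this average exceeds any fixed $K$ with probability tending to one as $m \to \infty$, so that $${{\bf P}}[ \sum_{\gamma \in {\cal L}(D)} t_\gamma \geq K ] = 1 \quad \hbox{ for every } K .$$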
With Lemma \[parameterlemma\] we can give a Brownian parametrization to the curve “$\eta$ with the loops added.” Let ${\cal L}$ be a realization of the Brownian loop soup, and let $\{[\gamma_1],[\gamma_2],\ldots\}$ be the (unrooted) loops that intersect $\eta[0,\infty)$. As before, choose $r_j$ and representative $\gamma_j$ so that $\gamma_j(0) =\eta(r_j)$ and $\gamma_j[0,t_{\gamma_j}] \cap \eta[0,r_j) = \emptyset$. Define $$S(r-) = \sum_{r_j < r} t_{\gamma_j},
\;\;\;\;\; S(r+) = \sum_{r_j \leq r} t_{\gamma_j}.$$ Then $S(r)$ is an increasing function with jumps at $r_j$ of size $t_{\gamma_j}$. Define the process $Y_s$ by $$Y_{S(r-)} = \eta(r) ,$$ and if $S(r-) < S(r+)$, $$Y_{S(r-) + s} = \gamma_j(s),
\;\;\;\; 0 \leq s \leq t_{\gamma_j} .$$ The density of the loop soup readily implies that $t \mapsto Y_t$ is continuous (provided $\eta$ is a simple curve for instance).
The results of [@LSWlesl] strongly suggest that the following conjecture holds.
If the curve $\eta$ is chordal $SLE_2$, and $\lambda = 1$, then the law of $Y$ is $\mu_\Half^\normed (0, \infty)$.
There seem to be different possible ways to prove this. One can use the convergence of loop-erased random walk to the $SLE_2$ curve [@LSWlesl]. The main missing step is the convergence of discrete bubbles towards the Brownian bubbles.
[**Acknowledgements.**]{} We would like to thank Oded Schramm for many inspiring conversations. Part of this work was carried at the Centre Emile Borel of the Institut Henri Poincaré.
[99]{}
Gregory Lawler
Department of Mathematics
310 Malott Hall
Cornell University
Ithaca, NY 14853-4201, USA
[email protected]
Wendelin Werner
Laboratoire de Mathématiques
Bât. 425
Université Paris-Sud
91405 Orsay cedex, France
[email protected]
[^1]: Cornell University; Research supported in part by the National Science Foundation
[^2]: Université Paris-Sud and IUF
---
abstract: 'Stochastic cooling of trapped atoms is considered for a laser-beam configuration with beam waists equal to or smaller than the extent of the atomic cloud. It is shown that various effects appear due to this transverse confinement, among them heating of the transverse kinetic energy. Analytical results for the cooling, in dependence on the size and location of the laser beam, are presented for the case of a non-degenerate vapour.'
address: 'Emmy–Noether Nachwuchsgruppe “Kollektive Quantenmessung und Rückkopplung an Atomen und Molekülen”, Fachbereich Physik, Universität Rostock, Universitätsplatz 3, D-18051 Rostock, Germany'
author:
- 'D. Ivanov and S. Wallentowitz'
title: Transverse confinement in stochastic cooling of trapped atoms
---
Introduction
============
Cooling techniques for atoms play a crucial role in modern physics. They have allowed for the localisation of atomic gases by weak trapping forces and step by step have enabled the advance into the domain of ultracold gases. At such low temperatures the peculiar quantum-statistical properties of the atoms become manifest in various effects of cold atomic collisions. Bose-Einstein condensation [@bec; @bec2; @bec3; @bec4; @bec5] and the recent production of Fermi-Dirac degenerate gases [@fermi; @fermi2; @fermi3; @fermi4] are limiting cases of the now existing experimental feasibilities. Moreover, the implementation of atom-lasers [@atom-laser; @atom-laser2; @atom-laser3] and microstructured traps on so-called atom chips [@bec-micro; @bec-micro2; @bec-micro3; @bec-micro4] show the vast potential of applications.
The typical strategy to generate a Bose-Einstein condensate from a moderately cold sample of trapped atoms is to apply different cooling techniques in sequence: it usually starts with laser cooling [@laser-cooling; @laser-cooling2; @laser-cooling3] and ends with evaporative cooling [@evap-cooling; @evap-cooling2; @evap-cooling3]. The latter technique seems to be unbeaten as of yet for the final cooling step. Laser cooling, that relies on cycling transitions where photons are spontaneously emitted, does not provide the ultimate cooling power, due to the reabsorption and scattering of the emitted photons. However, the drawback of evaporative cooling is well known to be its intrinsic loss of atoms. Since hot atoms are released from the trapping potential to reach a colder sample, a substantial atom loss has to be taken into account that ultimately limits the size of the condensed sample. Furthermore, as does sympathetic cooling [@symp-cooling], it requires sufficiently strong atomic collisions for thermal re-equilibration.
Given a prepared sample of condensed atoms, a multitude of technical and possibly fundamental noise effects lead to a finite lifetime of the condensate state. Among these noise effects there are collisions with background vapour, electromagnetic noise sources via the trapping potential, scattering of light, etc. The study of these detrimental disturbances and the development of methods to reduce their impact on the condensate will be a challenging task for the future. One way to compensate for such heating effects may be simply the continuous application of cooling during the entire experiment. This may partially compensate the heating and thus extends the lifetime of the condensate. The only technique working so far at these temperatures is evaporative cooling. Its continuous application, however, would be rather unfortunate for the condensate, since though the lifetime of the condensate may be extended, its size in terms of atom number will continuously decrease.
Some years ago, Raizen et al. have proposed the use of stochastic cooling for trapped atoms [@stochastic-raizen]. It is a successful method in high-energy physics [@stochastic-cooling; @stochastic-cooling2] where the transverse motion of a particle beam has to be collimated and cooled. Clearly, the energies involved there are not in the regime important for an application to trapped atoms. However, classical numerical simulations have shown the feasibility of stochastic cooling also for trapped atoms [@stochastic-raizen; @stochastic-raizen-josa]. Furthermore, it has been recently shown, that also at ultralow temperatures this technique reveals cooling [@stochastic]. Thus it may perhaps be utilized to stabilise an atomic Bose-Einstein condensate.
It should be pointed out that on the single-atom level feedback control of atomic position has been theoretically studied [@single-fb; @single-fb2] and experimentally realised in optical lattices [@morrow] and high-quality cavity fields [@rempe].
In this paper we extend our analysis of stochastic cooling of trapped atoms to include also effects due to the transverse confinement of atoms. The latter has its origin in the finite beam waist of the employed control-laser beam. At temperatures above the condensation point we give analytic results of the cooling and discuss its optimisation with respect to size and location of the control-laser beam waist.
In Sec. \[sec:stoch-cooling\] we explain the method of stochastic cooling of atoms and derive the expression for the single-atom density matrix after the single cooling step. Given this result at hand, in Sec. \[sec:energy-change\] we calculate the total energy change of atoms due to a single step of stochastic cooling in terms of quantum-statistical averages. In Sec. \[sec:non-degenerate-atomic-vapor\] the regime of a non-degenerate gas is considered for which analytical expressions for the energy contributions are obtained. Moreover, the dependence of cooling on geometrical parameters is discussed. Finally, in Sec. \[sec:conclusions\] conclusions are given.
Stochastic cooling {#sec:stoch-cooling}
==================
The method of stochastic cooling of trapped atoms consists of the repeated application of two operations: the measurement of the momentum of atoms and the subsequent application of a kick to compensate for the measured momentum. Several aspects are important and should be emphasised for an understanding of the working of this technique. First of all it is not done on a single atom but on a large set of atoms. Those atoms that are subject to measurement and kick are specified by their spatial location in a given volume of space. In the experiment that volume is defined by the spatial extent of the laser beams that implement the required operations.
Since in the experiment, at the time of measurement of momentum, it is usually unknown how many atoms contributed to the measured signal, the momentum per atom averaged over the atomic ensemble is not the measured observable. To obtain this momentum per atom one would in fact need knowledge on the precise number of atoms, that contributed to the measured signal. What can, instead, be assessed by the measurement is the total momentum of the atoms. This is the sum over the atomic momenta, since each atom equally contributes to the signal.
Given the measured total momentum of the set of atoms, an (optical) field is turned on to compensate it, providing the necessary kick by its interaction with the atoms. Since each atom separately interacts with the field, one can only apply a common kick to each atom. The determination of the required kick per atom necessarily involves a characterisation of the number of atoms in the set, given that only the total momentum is known. Since the atom number will not be measured, a priori information is required for estimating the actual atom number. Clearly, in this way atom-number fluctuations, whether classical or quantum in nature, cannot be coped with, which reveals an intrinsic source of imperfection of the method.
Furthermore, since a measurement is performed on a single system rather than a series of measurements on identically prepared systems, what is measured are not ensemble averages. Depending on the measurement resolution, strong correlations between the atoms are induced by the measurement projection, since a huge number of microstates of the atoms may be associated with the same observed measurement outcome and together form a complicated superposition state.
Single-atom density matrix {#sec:density}
--------------------------
Here we consider a full three-dimensional model and take into account the spatial confinement of the volume where atoms are manipulated, see Fig. \[fig:geometry\]. For simplicity we assume only one laser-beam profile, despite the fact that several laser beams are involved in the implementation of the required operations [@recoil-induced; @recoil-induced2; @recoil-induced3]. The control-laser beam is directed along the $z$-axis and its transverse profile in $x$ and $y$ directions is described by the beam-waist function $w_\perp({\mathbf{r}}) \!=\!
w_\perp(x,y)$. Thus, the $z$ component of the total momentum of atoms inside the beam $\hat{P}_w$ is measured and then compensated to zero by means of a negative feedback loop. Using the atomic field operator $\hat{\phi} ({\mathbf{r}})$ for bosonic atoms, i.e. with commutator $$\label{eq:boson-field}
[\hat{\phi}
({\mathbf{r}}), \hat{\phi}^\dagger ({\mathbf{r}}')] \!=\! \delta({\mathbf{r}} \!-\!
{\mathbf{r}}') ,$$ this observable can be written as ($\hbar \!=\! 1$)[^1] $$\label{eq:momentum}
\hat{P}_w = -i \int \! dV \, w_\perp({\mathbf{r}}) \, \hat{\phi}^\dagger({\mathbf{r}})
\, \partial_z \hat{\phi}({\mathbf{r}}) .$$
![Geometry of the feedback setup. The laser beam is aligned along the $z$ axis, it determines the size of the feedback region.[]{data-label="fig:geometry"}](fig1.eps){width="50.00000%"}
The many-body quantum state of the atomic cloud after a single operation of stochastic cooling can be given as an integral over all possible measurement outcomes $P$ for the measured total momentum: $$\label{eq:uncond_rho}
\hat{\varrho}_+ = \int \! dP \, \hat{U}(P) \, \hat{M}(P) \, \hat{\varrho}_-
\, \hat{M}^\dagger(P) \, \hat{U}^\dagger(P) .$$ In this expression the initial many-body density operator is denoted as $\hat{\varrho}_-$ and the final one, after the single feedback operation, is denoted as $\hat{\varrho}_+$. It is assumed here, that the measurement and shift of momentum can be performed on a time scale $\Delta t$ much faster than the characteristic dynamics of the free system, i.e. $\Delta t \!\ll\!
\omega^{-1}$ with $\omega$ being the trap frequency. Then measurement and shift can be taken as instantaneous processes without time delay between them.
The measurement of momentum $\hat{P}_w$ with specific outcome $P$ is described by the resolution amplitude [@res-amplitude; @res-amplitude2] $$\label{eq:POVM}
\hat{M}(P) = \sqrt{\frac{1}{\sqrt{2 \pi} \sigma}} \, {\rm exp}
\left\{ -\frac{(P \!-\! \hat{P}_w )^2}{4 \, \sigma^2}\right\} ,$$ where $\sigma$ denotes the measurement resolution. Applied on a momentum eigenstate $| P_0 \rangle$ it gives the probability amplitude to observe the value $P$.[^2] A quasi canonically conjugate operator for the centre of mass of the atoms in the laser beam, being experimentally accessible, can be defined analogously as [@wal-feedback] $$\label{eq:coordinate}
\hat{Q}_w = \frac{1}{N_e} \int \! dV \, w_\perp({\mathbf{r}}) \,
\hat{\phi}^\dagger({\mathbf{r}}) \, z \, \hat{\phi}({\mathbf{r}}) .$$ Since $N_e$ is an estimated atom number, the commutator relation of operator $\hat{P}_w$ and $\hat{Q}_w$ reveals a deviation from the usual canonical form: $$\label{eq:commutator}
[ \hat{Q}_w , \hat{P}_w ] = i \hat{N}_w / N_e .$$ Here the true atom-number is defined as the operator $$\label{eq:number}
\hat{N}_w = \int \! dV \, w_\perp^2({\mathbf{r}}) \, \hat{\phi}^\dagger({\mathbf{r}})
\hat{\phi}({\mathbf{r}}) .$$ The modified commutation relation (\[eq:commutator\]) has an impact on the action of the shift operator $\hat{U}(P)$, that is supposed to produce a shift of $\hat{P}_w$ by $-P$, and that is defined as $$\label{eq:shift}
\hat{U}(P) = \exp \!\big( -i P \hat{Q}_w \big) .$$ Transforming the observable $\hat{P}_w$ by use of the unitary transformation (\[eq:shift\]) we obtain $$\label{eq:P-shift}
\hat{U}^\dagger(P) \, \hat{P}_w \, \hat{U}(P) = \hat{P}_w - P \hat{N}_w /
N_e .$$ Thus an optimal shift by $-P$ is produced only on average when additionally estimating $$\label{eq:opt-estimate}
N_e \!=\! \langle \hat{N}_w \rangle .$$ Nevertheless, atom-number fluctuations will always deteriorate the perfection of the shift operation. In view of the lack of knowledge on the true atom number $\hat{N}_w$, the estimate (\[eq:opt-estimate\]) represents an optimum. Thus in the following we use this justifiable estimate.
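For completeness, a short verification of Eq. (\[eq:P-shift\]): since $\hat{Q}_w$ and $\hat{N}_w$ are both built from the density $\hat{\phi}^\dagger({\mathbf{r}}) \hat{\phi}({\mathbf{r}})$ weighted by functions of position only, they commute with each other, so that the operator expansion terminates after the first commutator, $$\hat{U}^\dagger(P) \, \hat{P}_w \, \hat{U}(P) = \hat{P}_w + i P \, [ \hat{Q}_w , \hat{P}_w ] + \frac{(iP)^2}{2} \, [ \hat{Q}_w , [ \hat{Q}_w , \hat{P}_w ] ] + \ldots = \hat{P}_w - P \hat{N}_w / N_e ,$$ where Eq. (\[eq:commutator\]) has been used.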
For our further derivations it is convenient to calculate the single-atom density matrix from Eq. (\[eq:uncond\_rho\]). It is defined as $$\label{eq:single-atom-dm}
\sigma({\mathbf{r}}_1, {\mathbf{r}}_2) = \langle \hat{\phi}^\dagger({\mathbf{r}}_2)
\hat{\phi}({\mathbf{r}}_1) \rangle ,$$ and can be calculated as a trace over the many-body density operator given in Eq. (\[eq:uncond\_rho\]). In this way the single-atom density matrix after $(+)$ an operation of stochastic cooling reads $$\label{eq:reduced}
\sigma_+({\mathbf{r}}_1, {\mathbf{r}}_2) = \int \! dP \left\langle
\hat{M}^\dagger(P) \, \hat{U}^\dagger(P) \,
\hat{\phi}^\dagger({\mathbf{r}}_2) \, \hat{\phi}({\mathbf{r}}_1)
\, \hat{U}(P) \, \hat{M}(P) \right\rangle_- ,$$ where $\langle \ldots\rangle_-$ denotes tracing over the many-body density operator $\hat{\varrho}_-$, that represents the quantum state before the feedback operation.
The action of $\hat{U}(P)$ on a field operator results as a c-number exponential factor $$\label{eq:U-transform}
\hat{U}^\dagger(P) \, \hat{\phi}({\mathbf{r}}) \, \hat{U}(P) =
\hat{\phi}({\mathbf{r}}) \, \exp \!\left[ - i z w_\perp({\mathbf{r}}) P / \langle
\hat{N}_w \rangle \right] .$$ Moreover, using the Fourier representation of the resolution amplitude $$\label{eq:M-fourier}
\hat{M}(P) = \int \! dq \, \underline{M}(q) \, e^{i q (P -
\hat{P}_w )} ,$$ with $$\label{eq:M-fourier-def}
\underline{M}(q) = \sqrt[4]{\frac{2 \sigma^2}{\pi}} \exp(-\sigma^2
q^2)$$ the single-atom density matrix can be rewritten as $$\begin{aligned}
\label{eq:reduced2}
\fl
\sigma_+({\mathbf{r}}_1, {\mathbf{r}}_2) & = & \int \! dP \! \int \! dq \!\int \! dq'
\left\langle \ e^{i q \hat{P}_w} \hat{\phi}^\dagger({\mathbf{r}}_2)
\hat{\phi}({\mathbf{r}}_1) \, e^{-i q' \hat{P}_w} \right\rangle_-
\underline{M}^\ast(q) \, \underline{M}(q')
\nonumber \\
\fl & & \times \, \exp \! \left\{ i P \left[ \frac{z_2 w_\perp({\mathbf{r}}_2)
\!-\! z_1 w_\perp({\mathbf{r}}_1)}{\langle \hat{N}_w \rangle} + q' \!-\! q
\right] \right\} .\end{aligned}$$ This result can be further simplified using the transformation $$\label{eq:P_action}
\exp \left[ i q \hat{P}_w \right] \, \hat{\phi}({\mathbf{r}}) \, \exp \left[ -i q
\hat{P}_w \right]
= \hat{\phi}\big( x, y, z \!-\! q w_\perp({\mathbf{r}}) \big) ,$$ which results in $$\begin{aligned}
\label{eq:reduced3}
\fl \sigma_+({\mathbf{r}}_1, {\mathbf{r}}_2) = \int \! dP \int \! dq \int \! dq' \,
\underline{M}^\ast(q) \underline{M}(q')
\exp \! \left\{ i
P \left[ \frac{z_2 w_\perp({\mathbf{r}}_2) \!-\! z_1 w_\perp({\mathbf{r}}_1)}{\langle
\hat{N}_w \rangle} +
q' \!-\! q \right] \right\} \nonumber \\
\times \, \left\langle \hat{\phi}^\dagger \big(x_2, y_2, z_2 \!-\! q
w_\perp({\mathbf{r}}_2)\big) \, \hat{\phi} \big(x_1, y_1, z_1 \!-\! q
w_\perp({\mathbf{r}}_1) \big) \,
e^{i (q \!-\! q') \hat{P}_w} \right\rangle . \end{aligned}$$ Performing then the $P$ and $q'$ integrations we finally obtain $$\begin{aligned}
\label{eq:single-atom-density-matrix}
\fl \sigma_+({\mathbf{r}}_1, {\mathbf{r}}_2) = 2 \pi \! \int \! dq \,
\underline{M}^\ast(q) \underline{M}\big(q \!+\! [z_1 w_\perp({\mathbf{r}}_1)
\!-\! z_2 w_\perp({\mathbf{r}}_2)]/ \langle \hat{N}_w \rangle \big) \nonumber \\
\fl \times \left\langle
\hat{\phi}^\dagger \big( x_2, y_2, z_2 \!-\! q w_\perp({\mathbf{r}}_2) \big)
\, \hat{\phi} \big(x_1, y_1, z_1 \!-\! q w_\perp({\mathbf{r}}_1) \big)
\, e^{i \hat{P}_w
[z_2 w_\perp({\mathbf{r}}_2) - z_1 w_\perp({\mathbf{r}}_1)] / \langle \hat{N}_w
\rangle} \right\rangle_- ,\end{aligned}$$ which shows that the single-atom density matrix after the feedback depends in general on higher atom-atom correlations before the feedback. In Eq. (\[eq:single-atom-density-matrix\]) this is encoded by the occurrence of the exponential operator.
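As a side remark on the transformation (\[eq:P\_action\]) used above: it follows from the commutator $[\hat{\phi}({\mathbf{r}}), \hat{P}_w] = -i \, w_\perp({\mathbf{r}}) \, \partial_z \hat{\phi}({\mathbf{r}})$, which is obtained from Eqs. (\[eq:boson-field\]) and (\[eq:momentum\]). Since $w_\perp$ does not depend on $z$, iterating this commutator gives $$e^{i q \hat{P}_w} \, \hat{\phi}({\mathbf{r}}) \, e^{-i q \hat{P}_w}
= \sum_{k=0}^\infty \frac{[ - q \, w_\perp({\mathbf{r}}) \, \partial_z ]^k}{k!} \, \hat{\phi}({\mathbf{r}})
= \hat{\phi}\big( x, y, z \!-\! q w_\perp({\mathbf{r}}) \big) ,$$ i.e. $\hat{P}_w$ generates translations along $z$ that are weighted by the local beam profile.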
Feedback-induced change of energy {#sec:energy-change}
=================================
Important features of the application of a single feedback step can be given in analytical form. For example, the difference of energy after and before the feedback step $\Delta E$ can be calculated from Eq. (\[eq:single-atom-density-matrix\]). The information contained therein allows us to recognise noise sources and determine optimal parameters for maximum cooling. The parameters that can be optimised are the measurement resolution $\sigma$ and the geometrical characteristics given by the size and location of the control-laser beam with respect to the trapping potential.
In the following we use the thermal equilibrium state of the atomic ensemble to calculate the average energy change. This allows us to obtain a natural description of cooling in terms of energy $\Delta E (T)$ that is subtracted or, possibly, added by a feedback operation at a certain temperature point. However, this approach does not necessarily reflect the most general experimental situation, since specific correlations generated step by step in the feedback process are not taken into account. These correlations may possibly lead to enhancement of cooling via various effects, as shown in Refs [@stochastic-raizen; @stochastic-raizen-josa]. Thus the results of this paper, where we assume a thermal equilibrium state, represent the leading cooling/heating mechanisms for the quasi-equilibrium case.
Having the expression for the single-atom density matrix (\[eq:single-atom-density-matrix\]) the change of total energy due to the application of a single feedback step can be formulated. The Hamiltonian of the system of non-interacting atoms in the isotropic, harmonic trap potential is $$\label{eq:hamilton}
\hat{H} = \int \! dV \, \hat{\phi}^\dagger({\mathbf{r}})
\left[ - \frac{\nabla^2}{2m}
+ \frac{m\omega^2}{2} \, {\mathbf{r}}^2 \right]
\hat{\phi}({\mathbf{r}}) ,$$ where $m$ and $\omega$ are the atomic mass and the vibrational trap frequency, respectively. We divide this Hamiltonian into parts describing the energy of the motion in $z$ direction, i.e. in longitudinal direction with respect to the measured momentum $\hat{P}_w$, and into parts related to the energy of the transverse motion in the $xy$ plane.
The average total energy of the system in a given many-body quantum state can be written as the sum of these different contributions as $$\label{eq:E-def}
E = \langle \hat{H} \rangle = T_\parallel + T_\perp + V_\parallel + V_\perp
.$$ Using the definition of the single-atom density matrix (\[eq:single-atom-dm\]), the corresponding kinetic parts of Eq. (\[eq:E-def\]) are given as $$\begin{aligned}
\label{eq:K-parallel-def}
T_\parallel & = & - \frac{1}{2m} \int \! dV \! \int \! dV' \, \delta({\mathbf{r}}
\!-\! {\mathbf{r}}') \, \partial_z^2 \, \sigma({\mathbf{r}}, {\mathbf{r}}') , \\
\label{eq:K-perp-def}
T_\perp & = & - \frac{1}{2m} \int \! dV \! \int \! dV' \, \delta({\mathbf{r}} \!-\! {\mathbf{r}}') \,
\nabla_\perp^2
\, \sigma({\mathbf{r}}, {\mathbf{r}}') ,\end{aligned}$$ where $\nabla_\perp^2 \!=\! \partial_x^2 \!+\! \partial_y^2$, and the potential-energy contributions read $$\begin{aligned}
\label{eq:V-parallel-def}
V_\parallel & = & \frac{m\omega^2}{2} \int \! dV \, z^2 \,
\sigma({\mathbf{r}}, {\mathbf{r}}) , \\
\label{eq:V-perp-def}
V_\perp & = & \frac{m\omega^2}{2} \int \! dV \, (x^2 \!+\! y^2) \,
\sigma({\mathbf{r}}, {\mathbf{r}}) .\end{aligned}$$
A change of the average energy due to the application of a single step of stochastic cooling is dominantly generated by the energy exchange with the externally applied optical fields that implement the momentum shift of atoms. The unidirectional flow of energy from the system to the optical fields is determined by the irreversibility introduced in the quantum measurement process. Apart from that, however, there are several sources of heating, among them also the back-action noise of the measurement itself. Especially at ultralow temperatures a decay or even reversal of the net energy flow may be expected, when the detrimental heating terms compensate the sought cooling effect of the momentum shift. In the following we will extract all these energetic terms by considering the feedback-induced change of energy based on Eqs (\[eq:E-def\])–(\[eq:V-perp-def\]). More specifically, we consider the change of the average energy, i.e. the difference between the energy after a single step of stochastic cooling and that before, $$\label{eq:DE-total}
\Delta E = E_+ \!-\! E_- = \Delta T_\parallel + \Delta T_\perp + \Delta
V_\parallel + \Delta V_\perp .$$
Energy change in the longitudinal motion
----------------------------------------
The dominant change of energy will occur in the potential and kinetic energies associated with the longitudinal motion in $z$ direction. The longitudinal potential energy $V_\parallel$ is given by Eq. (\[eq:V-parallel-def\]) and together with Eq. (\[eq:single-atom-density-matrix\]) it can be shown that the change of longitudinal potential energy is $$\label{eq:DV-parallel}
\Delta V_\parallel = \frac{m\omega^2}{8 \sigma^2} \langle \hat{N}_w \rangle
.$$ This positive energy contribution arises from the back-action noise of the total-momentum measurement in $z$ direction. The centre-of-mass $z$ coordinate of the affected atoms, associated with the total mass $m \langle \hat{N}_w
\rangle$, is then subject to an increased uncertainty of the size $(2\sigma)^{-1}$, which Eq. (\[eq:DV-parallel\]) shows to introduce a heating term in the potential energy.
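This is consistent with a simple centre-of-mass estimate, which is not part of the derivation above but serves as a plausibility check: the affected atoms carry the total mass $m \langle \hat{N}_w \rangle$, and adding the variance $(2\sigma)^{-2}$ to their centre-of-mass coordinate costs the potential energy $$\frac{m \langle \hat{N}_w \rangle \, \omega^2}{2} \, \frac{1}{4 \sigma^2} = \frac{m\omega^2}{8 \sigma^2} \, \langle \hat{N}_w \rangle ,$$ in agreement with Eq. (\[eq:DV-parallel\]).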
For the kinetic energy of the motion in $z$ direction a more involved calculation of the second-order $z$-derivative of the density matrix (\[eq:single-atom-density-matrix\]) is required. After some lengthy but straightforward calculation the following expression for the feedback-induced change of longitudinal kinetic energy is then obtained: $$\label{eq:DK-parallel}
\Delta T_\parallel = \frac{\sigma^2}{2 m \langle
\hat{N}_w \rangle} -
\frac{\langle \hat{P}_w^2 \rangle}{2m \langle \hat{N}_w \rangle} +
\frac{\langle \Delta \hat{N}_w \hat{P}_w^2
\rangle}{2m \langle \hat{N}_w \rangle^2} .$$ Here $\Delta \hat{N}_w \!=\! \hat{N}_w \!-\! \langle \hat{N}_w \rangle$ is the fluctuation of the actual atom number in the control beam around its average. The first term in Eq. (\[eq:DK-parallel\]) is the kinetic energy left in the system by the imprecise total-momentum measurement with resolution $\sigma$. The second term is the sought cooling effect, where the centre-of-mass kinetic energy of the affected atoms is removed from the system. The last term, though, arises from quantum fluctuations of the number of atoms in the control-laser beam. This heating term appears since in the momentum-shift operation the actual atom number $\hat{N}_w$ is not known but only estimated by $\langle \hat{N}_w \rangle$. Thus this term represents a quantum-statistical imperfection of the feedback loop of stochastic cooling.
Energy change in the transverse motion
--------------------------------------
At first sight one may guess that the transverse motion is not affected by the feedback loop, since only momentum in $z$ direction is measured and shifted. However, since the atoms that contribute to the measured signal are confined within the laser beam waist $w_\perp({\mathbf{r}})$, the measurement of $\hat{P}_w$ also contains an indirect measurement of the transverse position of atoms with a resolution roughly given by the diameter of the beam. Thus back-action noise in the transverse momenta can be expected, which may lead to further contributions to the kinetic energy. It remains to show how large these energy contributions are compared with those emerging from the longitudinal motion.
From Eqs (\[eq:single-atom-density-matrix\]) and (\[eq:V-perp-def\]) it can be easily seen that the potential energy in transverse $x$ and $y$ directions is unchanged, i.e. $$\label{eq:DV-perp}
\Delta V_\perp = 0 .$$ This result is obvious since only the momentum in $z$ direction with a transverse spatial confinement is measured without affecting the noise in the transverse coordinates. Let us therefore consider the kinetic energy of the transverse coordinates as defined in Eq. (\[eq:K-perp-def\]). Calculating the required second-order derivatives of Eq. (\[eq:single-atom-density-matrix\]) and performing the integrations, after some lengthy but straightforward calculus, we obtain for the change in transverse kinetic energy $$\begin{aligned}
\label{eq:K-perp}
\fl \Delta T_\perp & = & \frac{1}{2 m} \int \! dV \, [\nabla
w_\perp({\mathbf{r}})]^2 \bigg\{ \frac{m}{2\sigma^2} \langle
\hat{T}_\parallel({\mathbf{r}}) \rangle + \frac{1}{\langle \hat{N}_w \rangle^2}
\left[\sigma^2 z^2 \!+\! {\textstyle\frac{3}{4}} \,
w_\perp^2({\mathbf{r}}) \right] \langle \hat{\phi}^\dagger({\mathbf{r}})
\hat{\phi}({\mathbf{r}}) \rangle \nonumber \\
\fl & & + \frac{1}{2 \sigma^2 \langle \hat{N}_w \rangle^2} \left[ \sigma^2
z^2 + {\textstyle\frac{1}{4}} \, w_\perp^2({\mathbf{r}}) \right] \langle \{
\hat{\phi}^\dagger({\mathbf{r}}) \hat{\phi}({\mathbf{r}}), \hat{P}_w^2 \} \rangle
\\
\fl & & + \frac{1}{4 \sigma^2 \langle \hat{N}_w \rangle}
w_\perp({\mathbf{r}}) \langle \{ \hat{p}_z({\mathbf{r}}),
\hat{P}_w \} \rangle \bigg\}
- \frac{1}{2 m \langle \hat{N}_w \rangle} \int \! dV \, z \, \nabla
w_\perp({\mathbf{r}}) \!\cdot\! \langle \{ \hat{{\mathbf{p}}}({\mathbf{r}}) , \hat{P}_w \}
\rangle , \nonumber \end{aligned}$$ where $\{ \hat{A} , \hat{B} \} \!=\! \hat{A} \hat{B} \!+\! \hat{B} \hat{A}$ is the anti-commutator and the momentum density reads $$\label{eq:vector-p}
\hat{{\mathbf{p}}}({\mathbf{r}}) = - \frac{i}{2} \left\{ \hat{\phi}^\dagger ({\mathbf{r}})
\nabla \hat{\phi} ({\mathbf{r}}) - \left[\nabla \hat{\phi}^\dagger ({\mathbf{r}})
\right] \hat{\phi} ({\mathbf{r}}) \right\} .$$ For a thermal equilibrium state, space dependent averages will have a symmetry with respect to $z \!\to\! -z$, and thus the second integral in Eq. (\[eq:K-perp\]) can be shown to vanish as an odd moment of $z$.
Gaussian beam-waist function
----------------------------
At this point the specific form of the beam-waist function shall be introduced. We consider here a Gaussian beam with the following definition for $w_\perp({\mathbf{r}})$: $$\label{eq:w-perp-def}
w_\perp({\mathbf{r}}) = \exp\!\left[- \frac{(x \!-\! x_0)^2 \!+\! (y \!-\!
y_0)^2}{2 r_0^2} \right] .$$ In this way the area $A_0$ of the integrated beam intensity, $$\label{eq:area}
A_0 = \int \! dA \, w_\perp^2({\mathbf{r}}) = \pi r_0^2 ,$$ allows us to interpret $r_0$ as an effective radius of the laser beam. The squared gradient of $w_\perp({\mathbf{r}})$ results from Eq. (\[eq:w-perp-def\]) as $$\label{eq:sqr-grad}
[\nabla w_\perp({\mathbf{r}})]^2 = w_\perp^2({\mathbf{r}}) \, \frac{
(x \!-\! x_0)^2 \!+\! (y \!-\! y_0)^2}{r_0^4} ,$$ and using these results, Eq. (\[eq:K-perp\]) reads $$\begin{aligned}
\label{eq:K-perp2}
\fl \Delta T_\perp & = & \frac{1}{2 m} \int \! dV \,
w_\perp^2({\mathbf{r}}) \bigg\{
\frac{m}{2\sigma^2} \langle
\hat{T}_\parallel({\mathbf{r}}) \rangle
+ \frac{1}{\langle \hat{N}_w \rangle^2} \left[\sigma^2 z^2 \!+\!
{\textstyle\frac{3}{4}} \,
w_\perp^2({\mathbf{r}}) \right]
\langle \hat{\phi}^\dagger({\mathbf{r}}) \hat{\phi}({\mathbf{r}}) \rangle
\nonumber \\
\fl & & + \frac{1}{2 \sigma^2 \langle \hat{N}_w \rangle^2} \left[ \sigma^2
z^2 + {\textstyle\frac{1}{4}} \, w_\perp^2({\mathbf{r}}) \right] \langle \{
\hat{\phi}^\dagger({\mathbf{r}}) \hat{\phi}({\mathbf{r}}), \hat{P}_w^2 \} \rangle
\nonumber \\
\fl & & + \frac{1}{4 \sigma^2 \langle \hat{N}_w \rangle}
w_\perp({\mathbf{r}}) \langle \{ \hat{p}_z({\mathbf{r}}), \hat{P}_w \}
\rangle \bigg\} \, \frac{
(x \!-\! x_0)^2 \!+\! (y \!-\! y_0)^2}{r_0^4} .\end{aligned}$$ This kinetic-energy change will determine the heating effect due to the transverse confinement of atoms in the laser beam.
Non-degenerate atomic vapour {#sec:non-degenerate-atomic-vapor}
============================
For ultracold temperatures near the condensation temperature $T_0$ the longitudinal energy change has been discussed already in the approximation of a rectangular shape of the beam waist in Ref. [@stochastic]. In the following we evaluate the complete energy change (\[eq:DE-total\]), with the contributions given by Eqs. (\[eq:DV-parallel\]), (\[eq:DK-parallel\]), (\[eq:DV-perp\]) and (\[eq:K-perp2\]). We consider the regime of a non-degenerate gas, where the thermal de Broglie wavelength is much smaller than the interatomic distance. In that case the feature of indistinguishability of atoms can be neglected, while keeping the full wave mechanics of the single atom. That is, the single atom’s position and momentum still obey the canonical commutator relation, from which several important effects emerge.
Calculating expectation values for a thermal state in the canonical ensemble at temperature $T$ and total atom number $N$, treating the atoms as distinguishable particles, the number of atoms in the laser beam, for example, results as $$\label{eq:Nw}
\langle \hat{N}_w \rangle = N \frac{ s^2}{2
\!+\! s^2} \exp\!\left( - \frac{d^2}{2 \!+\! s^2} \right) .$$ Here we used the scaled distance $d$ of the control beam from the trap origin and the scaled beam radius $s$, defined by $$\label{eq:d-scaled}
d = \sqrt{(x_0^2 \!+\! y_0^2)} / L_{\rm th}, \qquad s = r_0 / L_{\rm
th} .$$ The rms extension of the atomic cloud is given by $$\label{eq:Lth}
L_{\rm th} = \Delta x_0 \left[ \tanh \left( \frac{\omega}{2 k_{\rm B} T}
\right) \right]^{-1/2} ,$$ with $\Delta x_0 \!=\! \sqrt{1 / (2 m \omega)}$ being the ground-state position uncertainty in the trap potential and $k_{\rm B}$ being the Boltzmann constant. In the following we also use the size of the atomic cloud in units of the ground-state uncertainty: $$\label{eq:lth}
l_{\rm th} = L_{\rm th} / \Delta x_0 .$$
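Equation (\[eq:Nw\]) can be checked directly; this is a consistency check rather than part of the original derivation. For distinguishable atoms in thermal equilibrium the transverse position distribution is Gaussian with variance $L_{\rm th}^2$ per coordinate, so that $$\langle \hat{N}_w \rangle = \int \! dx \, dy \; w_\perp^2({\mathbf{r}}) \, \frac{N}{2\pi L_{\rm th}^2} \, e^{-(x^2+y^2)/(2 L_{\rm th}^2)}
= N \, \frac{r_0^2}{2 L_{\rm th}^2 + r_0^2} \, \exp\!\left( - \frac{x_0^2 + y_0^2}{2 L_{\rm th}^2 + r_0^2} \right)
= N \, \frac{s^2}{2 + s^2} \, \exp\!\left( - \frac{d^2}{2 + s^2} \right) ,$$ using $w_\perp^2({\mathbf{r}}) = \exp[-((x \!-\! x_0)^2 \!+\! (y \!-\! y_0)^2)/r_0^2]$; the same Gaussian convention also reproduces Eqs. (\[eq:area\]) and (\[eq:sqr-grad\]).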
Longitudinal energy change
--------------------------
The complete change of energy in the longitudinal motion in units of vibrational energy quanta reads $$\begin{aligned}
\label{eq:dE-parallel}
\Delta E_\parallel / \omega & = & \frac{1}{4} \left[
\frac{(\sigma / \Delta
p_0)^2}{\langle \hat{N}_w \rangle} + \frac{\langle \hat{N}_w
\rangle}{(\sigma / \Delta p_0)^2} \right] - \frac{l_{\rm
th}^2}{4}
\\ \nonumber
& & + \frac{l_{\rm th}^2}{4 N} \left\{ \frac{(2 \!+\!
s^2)^2}{s^2 (4 \!+\!
s^2)} \exp\!\left[ \frac{4 d^2}{(2 \!+\! s^2) (4 \!+\! s^2)}
\right] - 1 \right\} ,\end{aligned}$$ where $\Delta p_0 \Delta x_0 =\frac{1}{2}$ so that $\Delta p_0 = \sqrt{m
\omega / 2}$. The first term represents the measurement-induced noise leading to an increase of kinetic and potential energies. Taking into account only the energy change as given here this heating effect can be minimised by adapting the measurement resolution to the number of atoms in the control-laser beam as $$\label{eq:sigma-opt}
\sigma = \Delta p_0 \sqrt{ \langle \hat{N}_w \rangle} .$$ The minimum heating due to this noise results then as one half energy quantum.
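This optimum follows by elementary minimisation: writing $u = (\sigma/\Delta p_0)^2$, the measurement-noise term in Eq. (\[eq:dE-parallel\]) reads $\frac{1}{4} ( u / \langle \hat{N}_w \rangle + \langle \hat{N}_w \rangle / u )$, which is minimal at $u = \langle \hat{N}_w \rangle$ with minimum value $\frac{1}{2}$, i.e. one half of a vibrational quantum.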
The sought cooling effect is represented by the second term in Eq. (\[eq:dE-parallel\]), which is the centre-of-mass kinetic energy of the atoms addressed by the feedback. This value depends neither on the size nor on the location of the feedback region. For large temperatures $l_{\rm th} \!
\rightarrow \! \sqrt{2 k_{\rm B} T / \omega}$, so that the subtracted kinetic energy reduces to $k_{\rm B} T / (2 \omega)$, manifesting the removed energy as being given by the equipartition theorem.
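Explicitly, spelling out this limit with Eqs. (\[eq:Lth\]) and (\[eq:lth\]), $$l_{\rm th}^2 = \coth\!\left( \frac{\omega}{2 k_{\rm B} T} \right) \approx \frac{2 k_{\rm B} T}{\omega} \quad (k_{\rm B} T \gg \omega) , \qquad \hbox{so that} \qquad \frac{l_{\rm th}^2}{4} \approx \frac{k_{\rm B} T}{2 \omega} .$$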
Finally, the third term in Eq. (\[eq:dE-parallel\]) represents quantum noise due to the transverse spatial confinement of atoms subject to the feedback. It is produced by atom-number fluctuations that crucially depend, via the average atom number, on the size and location of the control-laser beam. This noise is always positive, thus leading to an unavoidable heating contribution, and vanishes only for $s \!\to\! \infty$. The limit $s \!\to\! \infty$ realizes the situation where all atoms are inside the control-laser beam, which then contains exactly $N$ atoms with vanishing atom-number fluctuations. Moreover, the strength of this heating term diminishes with increasing total number of atoms, as can be observed in Fig. \[fig:long\].
There the contours $\Delta E_\parallel \!=\! 0$ have been plotted in the parameter space $(s,d)$ for varying total atom numbers at fixed temperature $T
\!=\! 10 \, T_0$, where $$\label{eq:T0}
T_0 = \frac{\omega}{k_{\rm B}} \left( \frac{N}{\zeta (3)} \right)^{1/3},$$ is the condensation temperature in the thermodynamic limit [@trapped_bec] with $\zeta (n)$ being the Riemann $\zeta$ function. These contours represent the boundaries between cooling (-) on the right-hand side of the contour and heating (+) on its left-hand side. They show that finer spatial resolutions $s$ and larger distances from the trap centre $d$ are allowed for cooling when the total atom number increases.
![Boundaries between cooling (-) and heating (+) for the temperature $T \!=\! 10 \, T_0$ and varying total numbers of atoms. $\sigma$ is chosen as the optimal value given in Eq. (\[eq:sigma-opt\]).[]{data-label="fig:long"}](fig2.eps){width="50.00000%"}
Whereas the measurement-induced noise is constant and negligibly small, the major contribution leading to a restriction of the parameters $s$ and $d$ comes from the atom-number fluctuations in the chosen beam waist.
Transverse energy change
------------------------
The energy change in the transverse motion can be obtained in the same way as that for the longitudinal one, cf. Eq. (\[eq:dE-parallel\]), and reads $$\begin{aligned}
\label{eq:dE-transverse}
\fl \Delta E_\perp / \omega & = & \frac{N}{4 \langle \hat{N}_w
\rangle} \left[ \frac{\langle \hat{N}_w \rangle}{
(\sigma / \Delta p_0)^2} + \frac{(\sigma / \Delta p_0)^2}{ \langle
\hat{N}_w \rangle}
\right]\frac{4 + s^2 (2 +
d^2)}{(2 + s^2)^3} \exp\!\left[ - \frac{d^2}{2 + s^2} \right]
\nonumber \\
\fl & & + \frac{N}{4 \langle \hat{N}_w \rangle} \left[ \frac{l_{\rm th}^2 +
l_{\rm th}^{-2}}{\langle \hat{N}_w \rangle} + \frac{2}{(\sigma / \Delta
p_0)^2} \right] \frac{8 + s^2 (2 + d^2)}{(4 + s^2)^3} \exp\!\left[ -
\frac{2 d^2}{4 + s^2} \right]
\nonumber \\
\fl & & + \frac{N ( N \!-\! 1)}{\langle \hat{N}_w \rangle^2} \frac{l_{\rm
th}^2}{4}
\frac{s^2 [ 4 + s^2 (2 + d^2) ]}{(2 +
s^2)^4} \exp\!\left[ -
\frac{2d^2}{2 + s^2} \right]
\nonumber \\
\fl & & +
\frac{1}{4 (\sigma/\Delta p_0)^2 \langle \hat{N}_w \rangle^2} \Bigg\{
N \frac{12 + s^2 (2 + d^2)}{(6 + s^2)^3} \exp\!\left[ -
\frac{3 d^2}{6 + s^2} \right]
\nonumber \\
\fl & & + N ( N \!-\! 1) \frac{s^2}{2 + s^2} \frac{8 + s^2 (2+
d^2)}{(4 + s^2)^3} \exp\!\left[ -
\frac{d^2 (8 + 3 s^2)}{(4 + s^2) ( 2 + s^2)} \right] \Bigg\} .\end{aligned}$$ As mentioned before these heating contributions vanish in the limit $s
\rightarrow \infty$. Moreover, they depend on the total number of atoms $N$, which is also mediated by the dependence on $\langle \hat{N}_w \rangle$, see Eq. (\[eq:Nw\]), and possibly on $\sigma$.
Let us first consider the emerging changes in the boundary between cooling and heating, when now also the transverse energy change is taken into account. That is, we look for the contour in $(s,d)$ parameter space that satisfies $\Delta E \!=\! 0$, where $\Delta E \!=\! \Delta E_\parallel \!+\! \Delta
E_\perp$. In Fig. \[fig:trans1\] this contour (solid curve) is shown and compared to the boundary based only on longitudinal terms (dashed curve) for a total atom number of $N \!=\! 10^6$.
![Contours $\Delta E_\parallel \! =\! 0$ (dashed lines) and $\Delta
E \! = \! 0$ (solid lines) for different temperatures and $ N \!
= \! 10^6 $.[]{data-label="fig:trans1"}](fig3.eps){width="50.00000%"}
It is clearly observed that the additional heating terms due to the transverse confinement of atoms in the control beam lead to a shift of the boundary to larger values of $s$ and smaller values of $d$. The former is due to the fact that the measurement of the total momentum in $z$ direction of atoms in the beam indirectly also represents a measurement of the transverse coordinates within the beam-waist size. This leads again to measurement back-action noise, which is naturally reduced by increasing $s$.
The fact that the boundary is shifted to smaller values of $d$ is due to relative atom-number fluctuations that decrease for increasing atom numbers found near to the trap centre. It should be noted that either curve actually contains two different curves at temperatures $T \!=\! 10 \, T_0$ and $T \!=\!
10000 \, T_0$, which however cannot be distinguished in the plot. The only explicit dependence in Eqs. (\[eq:dE-parallel\]) and (\[eq:dE-transverse\]) is due to the occurrence of $l_{\rm th}$. For $N \!
\gg \! 1$, however, this dependence is very weak. Nevertheless, all features discussed here and in the following implicitly depend on temperature via the chosen scaling of $s$ and $d$ by $L_{\rm th}$.
Assuming that for increasing total atom number the optimised measurement resolution $\sigma$ increases, as for the value given in Eq. (\[eq:sigma-opt\]) for longitudinal terms, for large atom numbers $N
\!\gg\! 1$ an asymptotic expansion can be given for Eq. (\[eq:dE-transverse\]): $$\label{eq:dE-transverse-asymptotic}
\Delta E_\perp / \omega = \frac{1}{4} \left[ \frac{\langle \hat{N}_w
\rangle}{ (\sigma / \Delta p_0)^2} + \frac{(\sigma / \Delta p_0)^2}{
\langle \hat{N}_w \rangle} + l_{\rm th}^2 \right]\frac{4 \!+\! s^2 (2
\!+\! d^2)}{s^2 (2 \!+\! s^2)^2} ,$$ where Eq. (\[eq:Nw\]) has been used. In the same limit the longitudinal energy change can be approximated to finally obtain the total change of energy as $$\label{eq:dE-asymptotic}
\fl \Delta E / \omega =
\frac{1}{4} \left[ \frac{(\sigma / \Delta
p_0)^2}{\langle \hat{N}_w \rangle} \!+\! \frac{\langle \hat{N}_w
\rangle}{(\sigma / \Delta p_0)^2} \right] \left[ 1 \!+\! \frac{4
\!+\! s^2 (2 \!+\!
d^2)}{s^2 (2 \!+\! s^2)^2} \right]
- \frac{l_{\rm
th}^2}{4} \left[ 1 \!-\! \frac{4 \!+\! s^2 (2 \!+\!
d^2)}{s^2 (2 \!+\! s^2)^2} \right] .$$ This asymptotic expansion represents a good approximation starting already from atom numbers $N \!>\! 100$, as can be seen from Fig. \[fig:trans2\]. There it is observed that for fixed temperature the boundaries converge quickly to the asymptotic one for $N \rightarrow \infty$.
![Boundaries $\Delta E \! = \! 0$ for $T \! = \! 10 \, T_0$ and varying total atom numbers: $N \!=\! 1$ (dotted curve), $N \!=\! 10$ (dashed curve), $N \!=\! 100$ (dot-dashed curve), $N \!=\! 10^6$ (solid curve).[]{data-label="fig:trans2"}](fig4.eps){width="50.00000%"}
Equation (\[eq:dE-asymptotic\]) shows that again the optimal value for the measurement resolution is given by (\[eq:sigma-opt\]), for which the final result reads $$\label{eq:dE-asymptotic-opt}
\Delta E / \omega =
\frac{1}{2} \left[ 1 + \frac{4
\!+\! s^2 (2 \!+\!
d^2)}{s^2 (2 \!+\! s^2)^2} \right]
- \frac{l_{\rm
th}^2}{4} \left[ 1 - \frac{4 \!+\! s^2 (2 \!+\!
d^2)}{s^2 (2 \!+\! s^2)^2} \right] .$$ For this expression the solution of the condition $\Delta E \!=\! 0$ can be analytically given as $$\label{eq:d(s)}
d(s) = (2 \! + \! s^2) \sqrt{\frac{l_{\rm th}^2 \! - \! 2}{l_{\rm th}^2 \! +
\! 2} - \frac{2}{s^2 (2 \!+\! s^2)}} .$$ For $d \!=\! 0$ the corresponding value for $s$ is the minimal beam-waist radius. This minimal radius is obtained as $$\label{eq:s-min}
s_{\rm min} = \left( \sqrt{1 \!+\! 2 \, \frac{l_{\rm th}^2 \!+\! 2}{l_{\rm
th}^2 \!-\! 2}} - 1 \right)^{\frac{1}{2}} .$$ In Fig. \[fig:temper\] this function is shown in dependence on $\l_{\rm
th}^2$, which for high temperatures is proportional to $T$. At $l_{\rm th}^2
\!=\! 2$ the removed centre-of-mass kinetic energy exactly compensates the measurement-induced heating, which requires $s_{\rm min} \rightarrow \infty$. For values $l_{\rm th}^2 \!<\! 2$ cooling does not occur, since the unavoidable measurement-induced noise can no longer be compensated. However, the corresponding temperatures for $l_{\rm th}^2 \!\leq\! 2$ are below the condensation temperature $T_0$, where our approach for a non-degenerate gas is no longer valid. For this regime see Ref. [@stochastic].
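For completeness, the algebra behind the expressions for $d(s)$ and $s_{\rm min}$: abbreviating $G = [4 + s^2(2+d^2)]/[s^2(2+s^2)^2]$, Eq. (\[eq:dE-asymptotic-opt\]) gives $$\Delta E = 0 \quad \Longleftrightarrow \quad G = \frac{l_{\rm th}^2 - 2}{l_{\rm th}^2 + 2} \quad \Longleftrightarrow \quad d^2 = (2+s^2)^2 \, \frac{l_{\rm th}^2 - 2}{l_{\rm th}^2 + 2} - 2 - \frac{4}{s^2} ,$$ which is the expression for $d(s)$ given above. Setting $d = 0$ turns this into the quadratic equation $s^4 + 2 s^2 = 2 (l_{\rm th}^2 + 2)/(l_{\rm th}^2 - 2)$ for $s^2$, whose positive root yields $s_{\rm min}$.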
![Dependence of $s_{\rm min}$ on $l_{\rm th}^2$ for large total number of atoms $N \rightarrow \infty$.[]{data-label="fig:temper"}](fig5.eps){width="50.00000%"}
For larger values of $l_{\rm th}^2$, or $T$ correspondingly, the limiting value $s_{\rm min} \rightarrow (\sqrt{3} \!-\! 1)^{1/2} \! \approx \! 0.86$ is reached. In this regime only a fraction of the atomic cloud needs to be subject to the feedback loop since $s_{\rm min} \!<\! 1$. The unscaled minimum beam-waist radius is thus 86% of the rms extension of the atomic cloud.
Conclusions {#sec:conclusions}
===========
In summary we have studied the effects of transverse confinement in stochastic cooling of trapped atoms. It could be clearly shown that these effects are substantial for the cooling process and that they impose a minimum size of the control-laser beam as well as a maximum displacement of the beam from the trap centre.
In the regime of non-degenerate gases analytical expressions could be derived that contain the full quantum-fluctuation effects. Among these effects are atom-number fluctuations that appear due to the finite volume of the control beam. They appear in the form of an imperfection of the feedback loop and in the form of back-action noise due to the indirect measurement of transverse coordinates.
Acknowledgements {#acknowledgements .unnumbered}
================
This research was supported by Deutsche Forschungsgemeinschaft.
References {#references .unnumbered}
==========
[99]{}
Anderson M H, Ensher J R, Matthews M R, Wieman C E and Cornell E A 1995 [*Science*]{} [**269**]{} 198
Bradley C C, Sackett C A, Tollett J J and Hulet R G 1995 [ *Phys. Rev. Lett.*]{} [**75**]{} 1687
Davies K B, Mewes M- O, Andrews M R, van Druten N J, Durfee D S, Kurn D M and Ketterle W 1995 [*Phys. Rev. Lett.*]{} [**75**]{} 3969
Bradley C C, Sackett C A and Hulet R G 1997 [*Phys. Rev. Lett.*]{} [**78**]{} 985
Fried D G, Killian T C, Willmann L, Landhuis D, Moss S C, Kleppner D and Greytak T J 1998 [*Phys. Rev. Lett.*]{} [**81**]{} 3811
DeMarco B and Jin D S 1999 [*Science*]{} [**285**]{} 1703
Truscott A G, Strecker K E, McAlexander W I, Patridge G B and Hulet R G 2001 [*Science*]{} [**291**]{} 2570
Schreck F, Khaykovich L, Corwin K L, Ferrari G, Bourdel T, Cubizolles J and Salomon C 2001 [*Phys. Rev. Lett.*]{} [**87**]{} 080403
Granade S R, Gehm M E, O’Hara K M and Thomas J E 2002 [ *Phys. Rev. Lett.*]{} [**88**]{} 120405
Mewes M-O, Andrews M R, Kurn D M, Durfee D S, Townsend C G and Ketterle W 1997 [*Phys. Rev. Lett.*]{} [ **78**]{} 582
Bloch I, Hänsch T W and Esslinger T 1999 [ *Phys. Rev. Lett.*]{} [**82**]{} 3008
Hagley E W, Deng L, Kozuma M, Wen J, Helmerson K, Rolston S L and Phillips W D 1999 [*Science*]{} [**283**]{} 1706
Reichel J, Hänsel W and Hänsch T W 1999 [*Phys. Rev. Lett.*]{} [**83**]{} 3398
Cassettari D, Hessmo B, Folman R, Maier T and Schmiedmayer J 2000 [*Phys. Rev. Lett.*]{} [**85**]{} 5483
Hänsel W, Reichel J, Hommelhoff J and Hänsch T W 2001 [*Phys. Rev. Lett.*]{} [**86**]{} 608
Ott H, Fortagh J, Schlotterbeck G, Grossmann A and Zimmermann C 2001 [*Phys. Rev. Lett.*]{} [**87**]{} 230401
Chu S 1998 [*Rev. Mod. Phys.*]{} [**70**]{} 685
Cohen–Tannoudji C N 1998 [*Rev. Mod. Phys.*]{} [ **70**]{} 707
Phillips W D 1988 [*Rev. Mod. Phys.*]{} [**70**]{} 721
Masuhara N, Doyle J M, Sandberg J C, Kleppner D, Greytak T J, Hess H F and Kochanski G P 1988 [*Phys. Rev. Lett.*]{} [**61**]{} 935
Davis K B, Mewes M- O, Joffe M A, Andrews M R and Ketterle W 1995 [*Phys. Rev. Lett.*]{} [**74**]{} 5202
Davis K B, Mewes M- O, Joffe M A, Andrews M R and Ketterle W 1995 [*Phys. Rev. Lett.*]{} [**75**]{} 2909
Myatt C J, Burt E A, Ghrist R W, Cornell E A and Wieman C E 1997 [*Phys. Rev. Lett.*]{} [**78**]{} 586
Raizen M G, Koga J, Sundaram B, Kishimoto Y, Takuma H and Tajima T 1998 [*Phys. Rev. A*]{} [**58**]{} 4757
Möhl D, Petrucci G, Thorndahl L and van der Meer S 1980 [*Phys. Reports*]{} [**58**]{} 73
van der Meer S 1985 [*Rev. Mod. Phys.*]{} [ **57**]{} 689
Ivanushkin P S, Sundaram B and Raizen M G 2003 [*J. Opt. Soc. Am. B*]{} [**20**]{} 1141
Ivanov D, Wallentowitz S and Walmsley I A 2003 [ *Phys. Rev. A*]{} [**67**]{} 061401(R)
Caves C M, Milburn G J 1987 [*Phys. Rev. A*]{} [**36**]{} 5543
Mancini S, Vitali D and Tombesi P 2000 [*Phys. Rev. A*]{} [**61**]{} 053404
Morrow N V, Dutta S K and Raithel G 2002 [ *Phys. Rev. Lett.*]{} [**88**]{} 093003
Fischer T, Maunz P, Pinkse P W H, Puppe T and Rempe G 2002 [*Phys. Rev. Lett.*]{} [**88**]{} 163002
Guo J, Berman P R, Dubetsky B and Grynberg G 1992 [*Phys. Rev. A*]{} [**46**]{} 1426
Courtois J-Y, Grynberg G, Lounis B and Verkerk P 1994 [*Phys. Rev. Lett.*]{} [**72**]{} 3017
Meacher D R, Boiron D, Metcalf H, Salomon C and Grynberg G 1994 [*Phys. Rev. A*]{} [**50**]{} R1992
Barchielli A, Lanz L and Prosperi G M 1983 [*Nuovo Cimento B*]{} [**72**]{} 79
Caves C M 1986 [*Phys. Rev. D*]{} [**33**]{} 1643
Neumark M A 1943 [*Dokl. Acad. Sci. USSR*]{} [**41**]{} 359
Wallentowitz S 2002 [*Phys. Rev. A*]{} [**66**]{} 032114
Dalfovo F, Giorgini S, Pitaevskii L P and Stringari S 1999 [*Rev. Mod. Phys.*]{} [**71**]{} 463
[^1]: Throughout the paper we use $\hbar = 1$.
[^2]: The set of operators $\hat{M}^\dagger(P) \hat{M} (P)$ forms a positive operator-valued measure [@neumark].
DPNU-94-35\
hep-th/9408096\
August 1994
[**Topology and quantization\
of abelian sigma model\
in $ (1 + 1) $ dimensions**]{}[^1]\
[Shogo Tanimura]{}[^2]\
[*Department of Physics, Nagoya University,\
Nagoya 464-01, Japan*]{}\
It is known that there exist an infinite number of inequivalent quantizations on a topologically nontrivial manifold even if it is a finite-dimensional manifold. In this paper we consider the abelian sigma model in $ (1+1) $ dimensions to explore a system having infinite degrees of freedom. The model has a field variable $ \phi : S^1 \to S^1 $. An algebra of the quantum field is defined respecting the topological aspect of this model. A central extension of the algebra is also introduced. It is shown that there exist an infinite number of unitarily inequivalent representations, which are characterized by a central extension and a continuous parameter $ \alpha $ $ ( 0 \le \alpha < 1 ) $. When the central extension exists, the winding operator and the zero-mode momentum obey a nontrivial commutator.
Introduction
============
In both field theory and string theory there are several models which have manifold-valued variables. For instance, the nonlinear sigma model has a field variable $ \phi : {\mbox{\boldmath $ R $}}^{\, 3} \to G/H $, where $ G/H $ is a homogeneous space. This manifold $ G/H $ is closely related to vacua associated with spontaneous symmetry breaking. As another example, the toroidal compactification model of the closed bosonic string has a variable $ X : S^1 \to T^{\, n} $. To study global aspects of these models in quantum theory, we should have a quantization scheme that respects their topological nature. However, in the usual scheme of canonical quantization and perturbation theory, the global aspects are obscure.
On the other hand it is known [@OK], [@Tani2] that there exist an infinite number of inequivalent quantizations on a topologically nontrivial manifold even if it is a finite-dimensional manifold. Unfortunately it remains difficult to extend those quantization schemes to include field theory.
In this paper we consider the abelian sigma model in $ (1+1) $ dimensions to explore a system having infinite degrees of freedom. In the context of classical theory, a field variable of the model is a map from $ S^1 $ to $ S^1$. An algebra of the quantum field is defined respecting the topological aspect of this model. Special attention is paid to the zero-mode and the winding number. A central extension of the algebra is also introduced. Representation spaces of the algebra are constructed using the Ohnuki-Kitakado representation and the Fock representation. It is shown that there exist an infinite number of unitarily inequivalent representations, which are parametrized by a continuous parameter $ \alpha $ $ ( 0 \le \alpha < 1 ) $. It is expected that this model gives physical insight into nonlinear sigma models of field theory and orbifold models of string theory.
Ohnuki-Kitakado representation
==============================
Here we briefly review quantum mechanics of a particle on $ S^1 $ considered by Ohnuki and Kitakado [@OK]. They assume that a unitary operator $ \hat{U} $ and a self-adjoint operator $ \hat{P} $ satisfy the commutation relation $$[ \, \hat{P} \, , \, \hat{U} \, ] = \hat{U}.
\label{2.1}$$ An irreducible representation of the above algebra is defined to be quantum mechanics on $ S^1 $. The operators $ \hat{U} $ and $ \hat{P} $ are called a position operator and a momentum operator, respectively. It is shown below that this naming is reasonable.
A representation space is constructed as follows. Let $ | \alpha {\rangle}$ be an eigenvector of $ \hat{P} $ with a real eigenvalue $ \alpha $; $ \hat{P} \, | \alpha {\rangle}= \alpha \, | \alpha {\rangle}$. Assume that $ {\langle}\alpha | \alpha {\rangle}= 1 $. The commutator (\[2.1\]) implies that the operator $ \hat{U} $ increases an eigenvalue of $ \hat{P} $ by a unit. If we put $$| n + \alpha {\rangle}:= \hat{U}^n \, | \alpha {\rangle}\;\;\;
( n = 0 , \pm 1 , \pm 2 , \cdots ),
\label{2.2}$$ it is easily seen that $$\begin{aligned}
&&
\hat{P} \, | n + \alpha {\rangle}= ( n + \alpha ) \, | n + \alpha {\rangle},
\label{2.3}
\\
&&
\hat{U} \, | n + \alpha {\rangle}= | n + 1 + \alpha {\rangle}.
\label{2.4}\end{aligned}$$ Unitarity of $ \hat{U} $ and self-adjointness of $ \hat{P} $ imply that $${\langle}m + \alpha | n + \alpha {\rangle}= \delta_{ m \, n }.
\label{2.5}$$ Let $ H_\alpha $ denote the Hilbert space defined by completing the space of finite linear combinations of $ | n + \alpha {\rangle}\, ( n = 0 , \pm 1 , \pm 2 , \cdots ) $. By (\[2.3\]) and (\[2.4\]), $ H_\alpha $ becomes an irreducible representation space of the algebra (\[2.1\]).
$ H_\alpha $ and $ H_\beta $ are unitarily equivalent if and only if the difference $ ( \alpha - \beta ) $ is an integer. Consequently there exists an inequivalent representation for each value of the parameter $ \alpha $ ranging over $ 0 \le \alpha < 1 $. At this point, quantum mechanics on $ S^1 $ is in contrast to quantum mechanics on $ {\mbox{\boldmath $ R $}}$. For the one on $ {\mbox{\boldmath $ R $}}$, it is well-known that the algebra of the canonical commutation relations has a unique irreducible representation up to unitary equivalence.
To clarify the physical meaning of the parameter $ \alpha $, they [@OK] study eigenstates of the position operator $ \hat{U} $. If we put $$| \lambda {\rangle}:=
\sum_{n \, = \, - \infty}^{\infty} \,
e^{ - i \, n \, \lambda } \, | n + \alpha {\rangle},
\label{2.6}$$ it follows that $$\begin{aligned}
&&
\hat{U} \, | \lambda {\rangle}= e^{ i \, \lambda } \, | \lambda {\rangle},
\label{2.7}
\\
&&
| \lambda + 2 \pi {\rangle}= | \lambda {\rangle},
\label{2.8}
\\
&&
{\langle}\lambda | \lambda' {\rangle}=
2 \pi \, \delta( \lambda - \lambda' ).
\label{2.9}\end{aligned}$$ In the last equation it is assumed that the $ \delta $-function is periodic with periodicity $ 2 \pi $. It is also easily seen that $$\exp( - i \mu \hat{P} ) | \lambda {\rangle}=
e^{ -i \, \alpha \, \mu } \, | \lambda + \mu {\rangle},
\label{2.10}$$ which says that $ \hat{P} $ is a generator of translation along $ S^1 $. It should be noticed that an extra phase factor $ e^{ - i \, \alpha \, \mu } $ is multiplied. These states $ | \lambda {\rangle}\, ( 0 \le \lambda < 2 \pi ) $ define a wave function $ \psi ( \lambda ) $ for an arbitrary state $ | \psi {\rangle}\in H_\alpha $ by $ \psi ( \lambda ) := {\langle}\lambda | \psi {\rangle}$. This definition gives an isomorphism between $ H_\alpha $ and $ L^2 ( S^1 ) $ that is a space of square-integrable functions on $ S^1 $. A bit calculation shows that the operators act on the wave function as $$\begin{aligned}
&&
\hat{U} \psi ( \lambda )
:= {\langle}\lambda | \hat{U} | \psi {\rangle}= e^{ i \, \lambda } \, \psi ( \lambda ),
\label{2.11}
\\
&&
\hat{P} \psi ( \lambda )
:= {\langle}\lambda | \hat{P} | \psi {\rangle}= \left( - i \frac{\partial}{\partial \lambda} + \alpha \right)
\psi ( \lambda ).
\label{2.12}\end{aligned}$$ In the last expression the parameter $ \alpha $ is interpreted as the vector potential for magnetic flux $ \Phi = 2 \pi \alpha $ surrounded by $ S^1 $. Physical significance of $ \alpha $ is further discussed in the reference [@Tani].
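As an aside (ours, not part of the original paper), the relation (\[2.1\]) can be checked directly in the realization (\[2.11\])-(\[2.12\]) by a short symbolic computation; the `sympy` library is assumed to be available.

```python
import sympy as sp

lam, alpha = sp.symbols('lambda alpha', real=True)
psi = sp.Function('psi')(lam)          # an arbitrary wave function psi(lambda)

U = lambda f: sp.exp(sp.I*lam)*f                   # position operator, Eq. (2.11)
P = lambda f: -sp.I*sp.diff(f, lam) + alpha*f      # momentum operator, Eq. (2.12)

# [P, U] psi - U psi should vanish identically, confirming Eq. (2.1)
print(sp.simplify(P(U(psi)) - U(P(psi)) - U(psi)))   # -> 0
```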
Algebra
=======
Fundamental algebra
-------------------
Next we would like to extend Ohnuki-Kitakado’s quantum mechanics on $ S^1 $ to a field-theoretical model. We shall propose an algebra of the model. To motivate the definition of the algebra we recall another expression of the canonical commutation relations. If we put $ \hat{V}(a) := \exp ( - i \sum_j \, a_j \hat{p}_j ) $ for real numbers $ a = ( a_1 , \cdots , a_n ) $, $ \hat{V}(a) $ is a unitary operator and satisfies $$\begin{aligned}
&&
\hat{x}_j \, \hat{x}_k = \hat{x}_k \, \hat{x}_j,
\label{3.1}
\\
&&
\hat{V}(a)^\dagger \, \hat{x}_j \, \hat{V}(a) = \hat{x}_j + a_j,
\label{3.2}
\\
&&
\hat{V}(a) \, \hat{V}(b) = \hat{V}( a + b ).
\label{3.3}\end{aligned}$$ Geometrical meaning of the above algebra is obvious; positions $ \hat{x} $’s are simultaneously measurable and movable by the displacement operator $ \hat{V}(a) $; displacement operators satisfy associativity.
Quantum mechanics on $ S^1 $ is easily generalized to the one on $ n $-dimensional torus $ T^{\, n} = ( S^1 )^n $. For this purpose we introduce unitary operators $ \hat{U}_j $ and self-adjoint operators $ \hat{P}_j ( j = 1 , \cdots , n ) $. Put $ \hat{V}( \mu ) := \exp( -i \sum_j \, \mu_j \hat{P}_j ) $ for $ \mu = ( \mu_1 , \cdots , \mu_n ) \in {\mbox{\boldmath $ R $}}^n $. Naive generalization of (\[2.1\]) leads the following relations; $$\begin{aligned}
&&
\hat{U}_j \, \hat{U}_k = \hat{U}_k \, \hat{U}_j,
\label{3.4}
\\
&&
\hat{V}(\mu)^\dagger \, \hat{U}_j \, \hat{V}(\mu)
= e^{ i \mu_j } \, \hat{U}_j,
\label{3.5}
\\
&&
\hat{V}(\mu) \, \hat{V}(\nu) = \hat{V}( \mu + \nu ).
\label{3.6}\end{aligned}$$ Representations of this algebra are constructed by tensor products of Ohnuki-Kitakado representations $ H_{\alpha_1} \otimes \cdots \otimes H_{\alpha_n} $. Therefore irreducible representations are parametrized by $ n $-tuple parameter $ \alpha = ( \alpha_1 , \cdots , \alpha_n ) $.
Now we turn to the abelian sigma model in $ (1+1) $ dimensions. The space-time is $ S^1 \times {\mbox{\boldmath $ R $}}$ and the target space is $ S^1 $, on which the group $ U(1) $ acts. In the classical sense the model has a field variable $ \phi \in Q = {\mbox{Map}}(S^1; S^1) $. Let $ \Gamma = {\mbox{Map}}(S^1; U(1)) $ be a group under pointwise multiplication. The group $ \Gamma $ acts on the configuration space $ Q $ by pointwise action; for $ \gamma \in \Gamma $ and $ \phi \in Q $ let us define $ \gamma \cdot \phi \in Q $ by $$( \gamma \cdot \phi ) ( \theta ) := \gamma(\theta) \cdot \phi(\theta)
\;\;\;
(\theta \in S^1),
\label{3.7}$$ where $ \theta $ denotes a point of the base space. In the right-hand side the multiplication indicates the action of $ U(1) $ on $ S^1 $.
To quantize this system let us assume that $ \hat{\phi}(\theta) $ is a unitary operator for each point $ \theta \in S^1 $ and $ \hat{V}(\gamma) $ is a unitary operator for each element $ \gamma \in \Gamma $. Moreover we assume the following algebra $$\begin{aligned}
&&
\hat{\phi}(\theta) \, \hat{\phi}(\theta') =
\hat{\phi}(\theta') \, \hat{\phi}(\theta),
\label{3.8}
\\
&&
\hat{V}(\gamma)^\dagger \, \hat{\phi}(\theta) \, \hat{V}(\gamma) =
\gamma(\theta) \, \hat{\phi}(\theta),
\label{3.9}
\\
&&
\hat{V}(\gamma) \, \hat{V}(\gamma') =
e^{ - i \, c ( \gamma, \gamma' ) } \, \hat{V}(\gamma \cdot \gamma')
\;\;\;
( \gamma, \, \gamma' \in \Gamma ).
\label{3.10}\end{aligned}$$ At the last line a function $ c : \Gamma \times \Gamma \to {\mbox{\boldmath $ R $}}$ is called a central extension, which satisfies the cocycle condition $$c( \gamma_1 , \gamma_2 )
+ c( \gamma_1 \gamma_2 , \gamma_3 )
= c( \gamma_1 , \gamma_2 \gamma_3 )
+ c( \gamma_2 , \gamma_3 )
\;\;\;
( \mbox{mod} \: 2 \pi ).
\label{3.11}$$ If $ c \equiv 0 $, the algebra (\[3.8\])-(\[3.10\]) is a straightforward generalization of (\[3.4\])-(\[3.6\]) to a system with infinite degrees of freedom. We call the algebra (\[3.8\])-(\[3.10\]) the fundamental algebra of the abelian sigma model.
To clarify geometrical implication of the algebra we shall decompose the degrees of freedom of $ \phi \in Q $ and $ \gamma \in \Gamma $. In the classical sense we may rewrite $ \phi : S^1 \to S^1 \cong U(1) $ by $$\phi(\theta) = U \, e^{ i \, ( \varphi(\theta) + N \theta ) },
\label{3.12}$$ where $ U \in U(1) $, $ N \in {\mbox{\boldmath $ Z $}}$ and $ \varphi $ satisfies the no zero-mode condition; $${\mbox{Map}}_0 (S^1; {\mbox{\boldmath $ R $}}) :=
\{
\varphi : S^1 \to {\mbox{\boldmath $ R $}}\, | \,
C^\infty , \,
\int_0^{2 \pi} \varphi(\theta) \, d \theta = 0
\}.
\label{3.13}$$ The decomposition (\[3.12\]) says that $ Q \cong S^1 \times {\mbox{Map}}_0 ( S^1; {\mbox{\boldmath $ R $}}) \times {\mbox{\boldmath $ Z $}}$. Geometrical meaning of this decomposition is apparent; $ U $ describes the zero-mode or collective motion of the field $ \phi $; $ \varphi $ describes fluctuation or local degrees of freedom of $ \phi $; $ N $ is nothing but the winding number. Topologically nontrivial parts are $ U $ and $ N $.
Similarly $ \gamma : S^1 \to U(1) $ is also rewritten as $$\gamma ( \theta ) = e^{ i \, ( \mu + f(\theta) + m \, \theta ) },
\label{3.14}$$ where $ \mu \in {\mbox{\boldmath $ R $}}$, $ f \in {\mbox{Map}}_0 ( S^1 ; {\mbox{\boldmath $ R $}}) $ and $ m \in {\mbox{\boldmath $ Z $}}$. The action (\[3.7\]) of $ \gamma $ (\[3.14\]) on $ \phi $ (\[3.12\]) is decomposed into $$\begin{aligned}
U & \to & e^{ i \mu } \, U,
\label{3.15}
\\
\varphi(\theta) & \to & \varphi(\theta) + f(\theta),
\label{3.16}
\\
N & \to & N + m.
\label{3.17}\end{aligned}$$ So the first component of $ \gamma $ (\[3.14\]) translates the zero-mode; the second one gives a homotopically trivial deformation; the third one increases the winding number.
As a nontrivial central extension for $ \gamma $ (\[3.14\]) and $$\gamma' ( \theta ) = e^{ i \, ( \nu + g(\theta) + n \, \theta ) },
\label{3.18}$$ we define $$c( \gamma , \gamma' ) :=
\frac{1}{4 \pi} \int_0^{2 \pi}
\left( f'(\theta) g(\theta) - f(\theta) g'(\theta) \right) d \theta
+ m \, \nu
- n \, \mu.
\label{3.19}$$ This central extension is the simplest nontrivial one that is invariant under the action of $ {\mbox{Diff}}( S^1 ) $; $ c( \gamma \circ \omega , \gamma' \circ \omega ) = c( \gamma , \gamma' ) $ for any $ \omega \in {\mbox{Diff}}( S^1 ) $. The group $ \Gamma $ associated with such an invariant central extension is called a Kac-Moody group. The relation (\[3.10\]) means that $ \hat{V} $ is a unitary representation of the Kac-Moody group. For the classification of central extensions see the literature [@Segal].
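As a numerical aside (ours), the cocycle condition (\[3.11\]) can be verified for the central extension (\[3.19\]) on randomly generated elements of $ \Gamma $, each represented by a zero mode $ \mu $, a few Fourier coefficients of $ f $ and a winding number $ m $; all values below are arbitrary test data.

```python
import numpy as np

N = 2048
theta = np.linspace(0.0, 2.0*np.pi, N, endpoint=False)
dtheta = 2.0*np.pi/N

def make_f(coeffs):
    """Zero-mode-free f(theta) and f'(theta) from {n: (a_n, b_n)} cos/sin coefficients."""
    f, df = np.zeros(N), np.zeros(N)
    for n, (a, b) in coeffs.items():
        f  += a*np.cos(n*theta) + b*np.sin(n*theta)
        df += n*(-a*np.sin(n*theta) + b*np.cos(n*theta))
    return f, df

def c(g1, g2):
    """Central extension c(gamma, gamma') of Eq. (3.19); gamma = (mu, (f, f'), m)."""
    (mu1, (f1, df1), m1), (mu2, (f2, df2), m2) = g1, g2
    integral = np.sum(df1*f2 - f1*df2)*dtheta
    return integral/(4.0*np.pi) + m1*mu2 - m2*mu1

def compose(g1, g2):
    """Pointwise product of gamma's: zero modes, fluctuations and windings add."""
    return (g1[0] + g2[0], (g1[1][0] + g2[1][0], g1[1][1] + g2[1][1]), g1[2] + g2[2])

rng = np.random.default_rng(1)
def random_gamma():
    coeffs = {n: tuple(rng.normal(size=2)) for n in (1, 2, 3)}
    return (rng.normal(), make_f(coeffs), int(rng.integers(-3, 4)))

g1, g2, g3 = random_gamma(), random_gamma(), random_gamma()
lhs = c(g1, g2) + c(compose(g1, g2), g3)
rhs = c(g1, compose(g2, g3)) + c(g2, g3)
print(abs(lhs - rhs))   # ~1e-15: the cocycle condition (3.11) holds (here even without mod 2*pi)
```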
Algebra without central extension
---------------------------------
According to decomposition of classical variables (\[3.12\]) and (\[3.14\]), quantum operators are also to be decomposed. For simplicity we consider the fundamental algebra (\[3.8\])-(\[3.10\]) without the central extension, that is, here we restrict $ c \equiv 0 $.
Corresponding to (\[3.12\]) we introduce a unitary operator $ \hat{U} $, self-adjoint operators[^3] $ \hat{\varphi} ( \theta ) $ for each $ \theta \in S^1 $ constrained by $$\int_0^{ 2 \pi } \hat{\varphi} ( \theta ) \, d \theta = 0,
\label{3.20}$$ and a self-adjoint operator $ \hat{N} $ satisfying $$\exp( 2 \pi i \, \hat{N} ) = 1,
\label{3.21}$$ which is called the integer condition for $ \hat{N} $. We demand that the quantum field $ \hat{\phi} ( \theta ) $ is expressed by them as $$\hat{\phi} ( \theta )
=
\hat{U} \, e^{ i \, ( \hat{\varphi} ( \theta ) + \hat{N} \theta ) }.
\label{3.22}$$
Next, corresponding to (\[3.14\]) we introduce a self-adjoint operator $ \hat{P} $, self-adjoint operators $ \hat{\pi} ( \theta ) $ for each $ \theta \in S^1 $ constrained by $$\int_0^{ 2 \pi } \hat{\pi} ( \theta ) \, d \theta = 0,
\label{3.23}$$ and a unitary operator $ \hat{W} $. When $ \gamma $ is given by (\[3.14\]), the operator $ \hat{V} ( \gamma ) $ is defined by $$\hat{V} ( \gamma )
=
e^{ - i \, \mu \, \hat{P} } \,
\exp
\left[
- i \int_0^{2 \pi} f( \theta ) \, \hat{\pi} ( \theta )
\, d \theta
\right]
\hat{W}^m.
\label{3.24}$$
Using these operators the fundamental algebra is now rewritten as $$\begin{aligned}
&&
[ \, \hat{P} , \hat{U} \, ] = \hat{U},
\label{3.25}
\\
&&
[ \, \hat{\varphi} ( \theta ) , \hat{\pi} ( \theta' ) \, ]
=
i \Bigl( \delta( \theta - \theta' ) - \frac{1}{2 \pi} \Bigr),
\label{3.26}
\\
&&
[ \, \hat{N} , \hat{W} \, ] = \hat{W},
\label{3.27}\end{aligned}$$ and all other commutators vanish. In (\[3.26\]) it is understood that the $ \delta $-function is defined on $ S^1 $. These commutators are equivalent to $$\begin{aligned}
&&
e^{ i \, \mu \, \hat{P} } \, \hat{U} \, e^{ - i \, \mu \, \hat{P} }
=
e^{ i \, \mu } \, \hat{U},
\label{3.28}
\\
&&
\exp
\left[
i \int_0^{2 \pi} f( \theta ) \hat{\pi} ( \theta )
\, d \theta
\right]
\, \hat{\varphi} ( \theta ) \,
\exp
\left[
- i \int_0^{2 \pi} f( \theta ) \hat{\pi} ( \theta )
\, d \theta
\right]
=
\hat{\varphi} ( \theta ) + f ( \theta ),
\label{3.29}
\\
&&
\hat{W}^\dagger \, \hat{N} \, \hat{W} = \hat{N} + 1.
\label{3.30}\end{aligned}$$ This algebra realizes (\[3.15\])-(\[3.17\]) by means of (\[3.9\]). Observing the relation (\[3.30\]) we call $ \hat{N} $ and $ \hat{W} $ the winding number and the winding operator, respectively. The remaining task is then to construct representations of the algebra.
Algebra with central extension
------------------------------
Before constructing representations we reexpress the fundamental algebra with the central extension (\[3.19\]) respecting the decomposition (\[3.12\]) and (\[3.14\]). The decomposition (\[3.22\]) of $ \hat{\phi} $ still works. On the other hand the decomposition (\[3.24\]) of $ \hat{V} $ should be modified a little. We formally introduce an operator $ \hat{\Omega} $ by $$\hat{W} = e^{ - i \, \hat{\Omega} }.
\label{3.31}$$ Although $ \hat{W} $ itself is well-defined, $ \hat{\Omega} $ is ill-defined. If $ \hat{\Omega} $ existed, (\[3.27\]) would imply $ [ \, \hat{N} , \hat{\Omega} \, ] = i $, which is nothing but the canonical commutation relation. Therefore $ \hat{N} $ would have a continuous spectrum, which contradicts the integer condition (\[3.21\]). Consequently $ \hat{\Omega} $ must be eliminated after calculation. Bearing the above remark in mind, we replace (\[3.24\]) by $$\hat{V} ( \gamma )
=
\exp
\left[
- i
\Bigl(
\mu \hat{P}
+
\int_0^{2 \pi} f( \theta ) \, \hat{\pi}(\theta )
\, d \theta
+
m \, \hat{\Omega}
\Bigr)
\right].
\label{3.32}$$ For the central extension (\[3.19\]) it is verified that the following commutation relations should be added to (\[3.25\])-(\[3.27\]) to satisfy the fundamental algebra; $$\begin{aligned}
&&
[ \, \hat{\Omega} , \hat{P} \, ] = 2 i,
\label{3.33}
\\
&&
[ \, \hat{\pi} (\theta) , \hat{\pi} (\theta') \, ]
=
- \, \frac{i}{\pi} \, \delta' ( \theta - \theta' ).
\label{3.34}\end{aligned}$$ Using (\[3.31\]) Eq. (\[3.33\]) implies $$[ \, \hat{P} , \hat{W} \, ] = - 2 \, \hat{W},
\label{3.35}$$ which says that the zero-mode momentum $ \hat{P} $ is decreased by two units when the winding number $ \hat{N} $ is increased by one unit under the operation of $ \hat{W} $. This is an inevitable consequence of the central extension. We call this phenomenon “twist”. Using (\[3.33\]) the decomposition (\[3.32\]) results in $$\hat{V} ( \gamma )
=
e^{ - i \, \mu \, m }
\,
\exp
\left[
- i
\Bigl(
\mu \hat{P}
+
\int_0^{2 \pi} f( \theta ) \, \hat{\pi}(\theta )
\, d \theta
\Bigr)
\right]
\hat{W}^m .
\label{3.37}$$
Here we summarize an intermediate result: with the notations (\[3.22\]) and (\[3.37\]) and the constraints (\[3.20\]), (\[3.21\]) and (\[3.23\]), the algebra (\[3.25\]), (\[3.26\]), (\[3.27\]), (\[3.34\]) and (\[3.35\]) is equivalent to the fundamental algebra (\[3.8\]), (\[3.9\]) and (\[3.10\]) including the central extension (\[3.19\]). Noticeable effects of the central extension are the twist relation (\[3.35\]) and the anomalous commutator (\[3.34\]). These features also affect the representations of the algebra, as seen in the following sections.
Representations
===============
Without the central extension
-----------------------------
Now we proceed to construct representations of the algebra defined by (\[3.25\])-(\[3.27\]) and other vanishing commutators with the constraints (\[3.20\]), (\[3.21\]) and (\[3.23\]).
Remember that $ \hat{P} $ and $ \hat{N} $ are self-adjoint and that $ \hat{U} $ and $ \hat{W} $ are unitary. Both of the relations (\[3.25\]) and (\[3.27\]) are isomorphic to (\[2.1\]). Hence the Ohnuki-Kitakado representation works well for them. $ \hat{P} $ and $ \hat{U} $ act on the Hilbert space $ H_\alpha $ via (\[2.3\]) and (\[2.4\]). $ \hat{N} $ and $ \hat{W} $ act on another Hilbert space $ H_\beta $ via $$\begin{aligned}
&&
\hat{N} \, | n + \beta {\rangle}= ( n + \beta ) \, | n + \beta {\rangle},
\label{4.1}
\\
&&
\hat{W} \, | n + \beta {\rangle}= | n + 1 + \beta {\rangle}.
\label{4.2}\end{aligned}$$ The value of $ \alpha $ is arbitrary. However $ \beta $ is restricted to be an integer if we impose the condition (\[3.21\]).
For $ \hat{\varphi} $ and $ \hat{\pi} $ the Fock representation works. We define operators $ \hat{a}_n $ and $ \hat{a}_n^\dagger $ by $$\begin{aligned}
&&
\hat{\varphi} (\theta)
=
\frac{1}{2 \pi} \, \sum_{ n \ne 0 } \,
\sqrt{ \frac{ \pi }{ | n | } } \,
( \hat{a}_n \, e^{ i \, n \, \theta }
+ \hat{a}_n^\dagger \, e^{ - i \, n \, \theta } ),
\label{4.3}
\\
&&
\hat{\pi} (\theta)
=
\frac{i}{2 \pi} \, \sum_{ n \ne 0 } \,
\sqrt{ \pi | n | } \,
( - \hat{a}_n \, e^{ i \, n \, \theta }
+ \hat{a}_n^\dagger \, e^{ - i \, n \, \theta } ).
\label{4.4}\end{aligned}$$ In the Fourier series the zero-mode $ n = 0 $ is excluded because of the constraints (\[3.20\]) and (\[3.23\]). It is easily verified that the commutator (\[3.26\]) is equivalent to $$[ \, \hat{a}_m , \hat{a}_n^\dagger ] = \delta_{ m \, n }
\;\;\;
( m , n = \pm 1 , \pm 2 , \cdots )
\label{4.5}$$ with the other vanishing commutators. Hence the ordinary Fock space $ F $ gives a representation of $ \hat{a} $’s and $ \hat{a}^\dagger $’s.
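As a small consistency check (ours), the expansions (\[4.3\])-(\[4.4\]) together with (\[4.5\]) reproduce the c-number commutator of Eq. (\[3.26\]); a truncated mode sum smeared against a smooth test function converges to $ g(\theta) - \bar{g} $, as the kernel $ \delta(\theta-\theta') - 1/2\pi $ requires. The test function below is an arbitrary choice.

```python
import numpy as np

# Truncated mode sum for [phi(theta), pi(theta')] / i following from (4.3)-(4.5):
#   (1/pi) * sum_{n=1}^{N} cos(n*(theta - theta')),
# which tends to delta(theta - theta') - 1/(2*pi), i.e. Eq. (3.26).
N_modes, M = 400, 4096
thetap = np.linspace(0.0, 2.0*np.pi, M, endpoint=False)
dthetap = 2.0*np.pi/M

def kernel(theta):
    x = theta - thetap
    return sum(np.cos(n*x) for n in range(1, N_modes + 1))/np.pi

g = np.exp(np.cos(thetap))                  # arbitrary smooth periodic test function
g_mean = np.sum(g)*dthetap/(2.0*np.pi)

theta0 = 1.3
print(np.sum(kernel(theta0)*g)*dthetap)     # smeared commutator / i
print(np.exp(np.cos(theta0)) - g_mean)      # the two values agree
```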
Consequently the tensor product space $ H_\alpha \otimes F \otimes H_0 $ gives an irreducible representation of the fundamental algebra without the central extension. The inequivalent ones are parametrized by $ \alpha $ $ ( 0 \le \alpha < 1 ) $.
A remark is in order here; the coefficients in front of $ \hat{a} $’s in (\[4.3\]) and (\[4.4\]) are chosen to diagonalize the Hamiltonian of free field $$\begin{aligned}
\hat{H}
& := &
\frac12
\int_0^{ 2 \pi }
\left[
\Bigl( \frac{1}{ 2 \pi } \hat{P} + \hat{\pi}(\theta) \Bigr)^2
+
\Bigl( \partial \hat{\varphi}(\theta) + \hat{N} \Bigr)^2
\right]
d \theta
\nonumber
\\
& = &
\frac12
\Bigl( \frac{1}{ 2 \pi } \hat{P}^2 + 2 \pi \hat{N}^2 \Bigr)
+
\sum_{ n \ne 0 } | n |
\Bigl( \hat{a}_n^\dagger \, \hat{a}_n + \frac12 \Bigr).
\label{4.6}\end{aligned}$$ This Hamiltonian corresponds to the Lagrangian density $${\cal L} =
\frac{1}{2} \, \partial_\mu \phi^\dagger \, \partial^{\, \mu} \phi.
\label{4.7}$$ Interacting field theory will be briefly discussed later.
With the central extension
--------------------------
Next we shall construct representations of the algebra defined by (\[3.25\]), (\[3.26\]), (\[3.27\]), (\[3.34\]) and (\[3.35\]) and other vanishing commutators with the constraints (\[3.20\]), (\[3.21\]) and (\[3.23\]). The way of construction is similar to the previous one.
Taking account of the twist relation (\[3.35\]), the representation of $ \hat{P} $, $ \hat{U} $, $ \hat{N} $ and $ \hat{W} $ is given by $$\begin{aligned}
&&
\hat{P} \, | \, p + \alpha ; \, n {\rangle}= ( p + \alpha ) \, | \, p + \alpha ; \, n {\rangle},
\label{4.8}
\\
&&
\hat{U} \, | \, p + \alpha ; \, n {\rangle}= | \, p + 1 + \alpha ; \, n {\rangle},
\label{4.9}
\\
&&
\hat{N} \, | \, p + \alpha ; \, n {\rangle}= n | \, p + \alpha ; \, n {\rangle},
\label{4.10}
\\
&&
\hat{W} \, | \, p + \alpha ; \, n {\rangle}= | \, p - 2 + \alpha ; \, n + 1 {\rangle}.
\label{4.11}\end{aligned}$$ The inner product is defined by $${\langle}p + \alpha ; \, m \, | \, q + \alpha ; \, n {\rangle}=
\delta_{ p \, q } \, \delta_{ m \, n }
\;\;\;
( p , q , m , n \in {\mbox{\boldmath $ Z $}}).
\label{4.12}$$ The Hilbert space formed by completing the space of linear combinations of $ | \, p + \alpha ; \, n {\rangle}$ is denoted by $ T_\alpha $. ($ T $ indicates “twist”.)
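For illustration (ours), the twist relation (\[3.35\]) and the commutator (\[3.27\]) can be checked directly on the basis kets using the actions (\[4.8\])-(\[4.11\]); states are encoded as dictionaries over the labels $ (p, n) $, and the value of $ \alpha $ is arbitrary.

```python
alpha = 0.5                    # arbitrary value of the continuous parameter

def P(state):                  # Eq. (4.8): P |p+alpha; n> = (p+alpha) |p+alpha; n>
    return {(p, n): (p + alpha)*c for (p, n), c in state.items()}

def N(state):                  # Eq. (4.10): N |p+alpha; n> = n |p+alpha; n>
    return {(p, n): n*c for (p, n), c in state.items()}

def W(state):                  # Eq. (4.11): W |p+alpha; n> = |p-2+alpha; n+1>
    return {(p - 2, n + 1): c for (p, n), c in state.items()}

def commutator(A, B, state):
    out = dict(A(B(state)))
    for k, v in B(A(state)).items():
        out[k] = out.get(k, 0.0) - v
    return out

ket = {(5, -2): 1.0}                              # the basis ket |5+alpha; -2>
print(commutator(P, W, ket) == {(3, -1): -2.0})   # True: [P, W] = -2 W, Eq. (3.35)
print(commutator(N, W, ket) == {(3, -1): 1.0})    # True: [N, W] = W, Eq. (3.27)
```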
Let us turn to $ \hat{\varphi} $ and $ \hat{\pi} $. Considering the anomalous commutator (\[3.34\]), after a tedious calculation we obtain a Fourier expansion $$\begin{aligned}
&&
\hat{\varphi} (\theta)
=
\sum_{ n \ne 0 } \,
\frac{1}{ \sqrt{ 2 | n | } } \,
( \hat{a}_n \, e^{ i \, n \, \theta }
+ \hat{a}_n^\dagger \, e^{ - i \, n \, \theta } ),
\label{4.13}
\\
&&
\hat{\pi} (\theta)
=
\frac{i}{2 \pi} \, \sum_{ n = 1 }^{ \infty } \,
\sqrt{ 2 n } \,
( - \hat{a}_n \, e^{ i \, n \, \theta }
+ \hat{a}_n^\dagger \, e^{ - i \, n \, \theta } )
\label{4.14}\end{aligned}$$ and the commutation relations (\[4.5\]). It should be noticed that only positive $ n $’s appear in the expansion of $ \hat{\pi} $ even though both positive and negative $ n $’s appear in $ \hat{\varphi} $. The algebra (\[4.5\]) is also represented by the Fock space $ F $. Hence the tensor product space $ T_\alpha \otimes F $ gives an irreducible representation of the fundamental algebra with the central extension for each value of $ \alpha \, ( 0 \le \alpha < 1 ) $.
Summary and discussion
======================
In this paper we have defined the algebra of the abelian sigma model in $ (1+1) $ dimensions and constructed representation spaces. In the context of classical theory, this model has a field variable $ \phi \in Q = {\mbox{Map}}( S^1 ; S^1 ) $. The degrees of freedom are decomposed as $$Q \cong S^1 \times {\mbox{Map}}_0 ( S^1 ; {\mbox{\boldmath $ R $}}) \times {\mbox{\boldmath $ Z $}}\label{5.1}$$ by (\[3.12\]). The right-hand side is a direct product of topological spaces. The first component represents the zero-mode; the second one describes the fluctuation mode; the third one corresponds to the winding number. The topological nature is concentrated in the first and the third components. On the other hand the group $ \Gamma = {\mbox{Map}}( S^1 ; U(1) ) $ acts on $ Q $ transitively. Its covering group $ \tilde{\Gamma} $ is also decomposed as $$\tilde{\Gamma} \cong {\mbox{\boldmath $ R $}}\times {\mbox{Map}}_0 ( S^1 ; {\mbox{\boldmath $ R $}}) \times {\mbox{\boldmath $ Z $}}\label{5.2}$$ by (\[3.14\]). The right-hand side is a direct product of topological groups. We assign the algebra (\[3.8\])-(\[3.10\]) to $ Q $ and $ \tilde{\Gamma} $. According to (\[5.1\]) and (\[5.2\]), the algebra is decomposed into (\[3.25\])-(\[3.27\]). When the central extension (\[3.19\]) is included, the anomalous commutator (\[3.34\]) and the twist relation (\[3.35\]) must be added. An irreducible representation space is constructed by the tensor product of two Ohnuki-Kitakado representations with one Fock representation. We obtain inequivalent ones parametrized by a parameter $ \alpha \, ( 0 \le \alpha < 1 ) $. The anomalous commutator eliminates the negative modes from $ \hat{\pi} (\theta) $ by means of (\[4.14\]). The twist relation causes (\[4.11\]); the winding operator $ \hat{W} $ increases the winding number by one unit and simultaneously decreases the zero-mode momentum by two units.
The physical implications of our model are not yet clear. The roles of the parameter $ \alpha $, the anomalous commutator and the twist relation are to be examined further. As a model with interaction, the sine-Gordon model $${\cal L} =
\frac12 \partial_\mu \psi (x) \partial^{\, \mu} \psi (x)
+
\kappa^2 \, \cos ( \psi(x) )
\label{5.3}$$ may be interesting. Substitution of $ \phi = e^{ i \, \psi} $ defines the corresponding Hamiltonian $$\begin{aligned}
\hat{H}
& := &
\frac12
\int_0^{ 2 \pi }
\left[
\Bigl( \frac{1}{ 2 \pi } \hat{P} + \hat{\pi} \Bigr)^2
+
\partial \hat{\phi}^\dagger \, \partial \hat{\phi}
-
\kappa^2 ( \hat{\phi} + \hat{\phi}^\dagger )
\right]
d \theta
\nonumber
\\
& = &
\frac12
\Bigl( \frac{1}{ 2 \pi } \hat{P}^2 + 2 \pi \hat{N}^2 \Bigr)
+
\sum_{ n \ne 0 } | n |
\Bigl( \hat{a}_n^\dagger \, \hat{a}_n + \frac12 \Bigr)
\nonumber
\\
&&
-
\frac{\kappa^2}{2}
\int_0^{ 2 \pi } \Bigl(
\hat{U} e^{i \, ( \hat{\varphi} ( \theta ) + \hat{N} \theta ) }
+
\hat{U}^\dagger
e^{ - i \, ( \hat{\varphi} ( \theta ) + \hat{N} \theta ) }
\Bigr) d \theta.
\label{5.4}\end{aligned}$$ The last term yields a highly nonlinear, complicated interaction. It is known [@Coleman] that this model has a topological soliton, which behaves like a fermion. It is expected that our formulation may shed light on the soliton physics of sigma models.
From the points of view of both field theory and string theory, it is desirable to extend our model to nonabelian cases and to higher dimensions. Our model has a field configuration manifold $ Q = {\mbox{Map}}( S^1 ; S^1 ) $. The most general model has $ Q = {\mbox{Map}}( M ; N ) $. (1) An immediate extension is a choice $ M = S^1 $ and $ N = T^{\, n} = ( S^1 )^n $. This is the toroidal compactification model of string theory [@Narain]. (2) A rather easy extension is a choice $ M = S^n $ or $ T^{\, n} $ and $ N = S^1 $. For $ M = S^n ( n \ge 2 ) $ there is no winding number. Yet we should pay attention to the zero-mode. There may be nontrivial central extensions. (3) Another nontrivial extension is a choice $ M = S^n $ or $ T^{\, n} $ and $ N = G $, that is, a Lie group. This corresponds to a chiral Lagrangian model. For a nonabelian group $ G $, we know neither the existence nor the uniqueness of the decomposition $$\Gamma
:= {\mbox{Map}}( S^n ; G )
\cong G \times {\mbox{Map}}_0 ( S^n ; \mbox{Lie}(G) ) \times \pi_n(G),
\label{5.5}$$ where $ \pi_n $ denotes the $ n $-th homotopy group and $${\mbox{Map}}_0 ( S^n ; \mbox{Lie}(G) )
:=
\{
\, g : S^n \to \mbox{Lie}(G)
\, | \,
C^\infty, \,
\int_{S^n} g = 0
\}.
\label{5.6}$$ Even if it exists, it may not be a direct product of topological groups because of nonabelian nature. (4)A highly nontrivial one is a choice $ N = G/H $, that is a homogeneous space. This model is a nonlinear sigma model. Quantum mechanics on $ G/H $ is already well-established [@Tani2]. However extension to field theory remains difficult. (5)Another interesting one is a choice $ M = S^1 $ and $ N = T^{\, n}/P $, that is an orbifold. This is nothing but the orbifold model of string theory [@Sakamoto]. It is found [@Sakamoto] that zero-mode variables obey peculiar commutators and nontrivial quantization is obtained for string theory on an orbifold with a background 2-form. It is expected that our model may work as a simplified model to understand such a complicated behavior.
Acknowledgments {#acknowledgments .unnumbered}
===============
This investigation was stimulated by seminars given by Prof. Sakamoto and Dr. Tachibana.
[99]{} Y. Ohnuki and S. Kitakado, [*J. Math. Phys.*]{} [**34**]{} (1993) 2827
G. W. Mackey, “Induced Representations of Groups and Quantum Mechanics”, W. A. Benjamin, Inc., New York (1968);
N. P. Landsman and N. Linden, [*Nucl. Phys.*]{} [**B365**]{} (1991) 121;
S. Tanimura, “Quantum Mechanics on Manifolds”, Nagoya University preprint DPNU-93-21 (1993)
S. Tanimura, [*Prog. Theor. Phys.*]{} [**90**]{} (1993) 271
G. Segal, [*Commun. Math. Phys.*]{} [**80**]{} (1981) 301;
A. Pressley and G. Segal, “Loop Groups”, Oxford University Press, New York (1986)
S. Coleman, [*Phys. Rev.*]{} [**D11**]{} (1975) 2088;
S. Mandelstam, [*Phys. Rev.*]{} [**D11**]{} (1975) 3026;
S. G. Rajeev, [*Phys. Rev.*]{} [**D29**]{} (1984) 2944
K. S. Narain, M. H. Sarmadi and E. Witten, [*Nucl. Phys.*]{} [**B279**]{} (1987) 369
J. O. Madsen and M. Sakamoto, [*Phys. Lett.*]{} [**B322**]{} (1994) 91;
M. Sakamoto, [*Nucl. Phys.*]{} [**B414**]{} (1994) 267
[^1]: Contributed to Yamada Conference (XXth International Colloquium on Group Theoretical Methods in Physics; July 1994 at Toyonaka in Japan)
[^2]: e-mail address : [email protected]
[^3]: Expressing rigorously $ \hat{\varphi} (\theta) $ is an operator-valued distribution.
[G. Gyuk]{}
[Department of Physics, Enrico Fermi Institute, University of Chicago, 5630 Ellis Avenue, Chicago IL 60637-1433]{}
[NASA/Fermilab Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510-0500]{}
**ABSTRACT**
We analyze the first-year MACHO collaboration observations of microlensing towards the Galactic center using a new direct likelihood technique that is sensitive to the distribution of the events on the sky. We consider the full set of 41 events, and calculate the direct likelihood against a simply-parameterized Galactic model consisting of either a gaussian or exponential bar and a double exponential disk. Optical depth maps are calculated taking into account the contribution of both disk lenses and sources. We show that based on the presently available data, a slope in the optical depth has been clearly detected ($3\sigma$) in Galactic latitude and that there are indications of a small slope in Galactic longitude. We discuss limits that can be set on the mass, angle and axis ratio of the Galactic bulge. We show that based on microlensing considerations alone, $M_{Bulge}>1.5\times 10^{10}\Msol$ at the 90% confidence level and that the bulge inclination angle is less than $30
\deg$ also at the 90% confidence level. The most likely bar mass is $M_{Bulge}=3.5\times10^{10}\Msol$. Such a high mass would imply a low MACHO fraction for the halo. We consider disk parameters and show that there are two degeneracies between the effects of a disk and those of a bar on the optical depths. Finally, we discuss how to break these degeneracies and consider various strategies for future microlensing observations.
Introduction
============
Although the Galaxy has been studied for a long time, determining its structure has proved to be an extremely difficult task. Our basic picture of the Milky Way as a spiral galaxy with a roughly exponentially falling disk, a central bulge and an extended halo has been settled for a considerable time[@galstudy]. Unfortunately, despite our wealth of detailed knowledge of the Galaxy, gaining a more precise knowledge of its global parameters, such as we have for external galaxies, is complicated by our position. Even such basic quantities as the scale length of the disk or the rotation curve of our own Galaxy are less well known than those of many external galaxies, due to the unfavorable geometry and intervening dust and gas[@rotation; @scalelength]. Because of our privileged position within it, the Milky Way provides us with a unique opportunity for studying questions such as how galaxies form or what their major constituents are. Ironically, many of the techniques we have for studying such questions are essentially measurements of light, while the basic questions have more to do with the mass and its distribution. The translation between these measurements and the information we would like is complicated by our fundamental lack of knowledge of the stellar mass function for small masses.
Gravitational microlensing searches are particularly exciting because, unlike other astrophysical observations, they can detect objects regardless of their luminosity. Within its mass range, a microlensing search is sensitive to the integrated density in massive compact objects of any type between the observer and source star. Originally intended for probing the baryonic component of the Galactic halo, gravitational microlensing has been increasingly recognized as a powerful new way to probe the structure of our entire Galaxy[@Galprobe]. Although microlensing can also be used to probe the stellar mass function, we will concentrate on using it to directly constrain the mass density.
Over the past few years, microlensing has moved rapidly from a proposal to an established fact. With its characteristically shaped light curve and achromaticity, there can be little doubt that microlensing has been observed in abundance towards the galactic bulge. Over one hundred events have now been observed by three collaborations, MACHO, DUO and OGLE, in the general direction of Baade’s Window [@MACHO1st; @oglebulge; @DUO]. Small theoretically expected modifications to the light curve, such as effects from parallax and binary lenses have also been observed in some events, further confirming their interpretation as microlensing[@lightcurve].
One of the most exciting of the recent microlensing results has been the observation of many more microlensing events in the direction of the Galactic bulge than had been predicted. One explanation for the higher than expected event rates is that the Milky Way is actually a barred spiral, with the bar oriented almost directly towards us[@barsug]. A bar pointing at us would concentrate mass along our line of sight, increasing the number of microlensing events, but not leave an obvious signature of asymmetry in other observations. The suggestion that the Galactic bulge is actually a bar is not new. As far back as 1964 de Vaucouleurs suggested a bar as a possible explanation for similarities between the gas dynamics seen towards the Galactic center and that seen in barred galaxies[@deVauc]. This idea, however, was not universally embraced. More recently, non-circular motions of the gas towards the Galactic center have been accounted for by a bar. A variety of other observations, such as star counts and luminosity studies, also indicate such a structure[@Binney; @whitelock; @barevid]. Consistent with this, although independently not compelling, are the infrared maps from the Diffuse Infra-Red Background Experiment (DIRBE) on the COBE satellite [@Weiland; @Dwek]. Taken together, these more recent observations have led to a resurgence of the bar model. The high microlensing rate observed towards the Galactic center is not only consistent with this new picture, but can also be used as a probe to refine our knowledge of the Galactic structure and in particular the Galactic Bulge. A better understanding of the Galactic Bulge is important not only for its own sake, but also because of its interactions with the disk and halo. A bar may be intimately tied to the spiral density waves in our disk and as we discuss later, its mass plays an important role in constraining our knowledge of the baryonic content of the halo[@long].
Previous work comparing the results of microlensing searches to Galactic models has mostly focused on a single number: the microlensing optical depth in the direction of Baade’s window[@earlymap]. However, in this region the optical depth is expected to be a rapidly varying function of latitude. The use of an average quantity, such as reported by the MACHO collaboration, limits our ability to match predictions from Galactic models with the data. Further, because a given optical depth is achievable in a variety of ways we cannot discriminate between the many possible models on the basis of this single number. It is only with using the gradient information, by a comparison of the distribution of the event locations to a map of the microlensing optical depth, that the various models can be sorted out. Thus we have developed a method for calculating the relative likelihood that a given map of the optical depth would produce the observed events. Using this likelihood technique we explore the constraints that the MACHO collaboration first year events impose on Galactic structure. We consider a class of models with bars based on the G2 and E2 models of Dwek et al. [@Dwek], and a simple double exponential disk. We calculate maps of the microlensing optical depth for each of these models and compare them to the observed events, calculating the likelihood as a function of the model parameters.
The paper is structured as follows: in the second section we briefly review the phenomenon of microlensing and the optical depth for microlensing, and develop the formalism we use in our likelihood analysis. We then discuss the models of Galactic structure we adopt for this paper and what constraints exist on the parameters for these models. In section three we examine the set of events and fields that we will use and discuss the observational efficiencies and sky coverage. We continue with a look at the structure of the data, determine whether the data support a gradient in the optical depth and produce a crude map of the optical depth. Section four contains the results of applying the likelihood formalism to the models mentioned earlier. We discuss the limits that can be placed on various parameters and which parameters are correlated. The fifth section discusses future directions and examines strategies for maximizing what we learn from our microlensing investment. Finally in the last section we summarize the main results of this investigation.
Methods
=======
Microlensing Optical Depth
--------------------------
When a massive object passes by the line of sight to a distant object, the object’s image is distorted according to general relativity. Stunning confirmations of this have been seen in a variety of systems involving galaxy clusters and background quasars or galaxies where typical deflection angles are on the order of an arcsecond[@biglenses]. In microlensing the source is typically a distant star and the lens an intervening massive object. With the masses and distances much smaller the deflections are on the order of milliarcseconds, far too small to be resolved[@compendium]. The distortion of the source shape is, however, not the only effect. The intervening mass also acts as a lens, concentrating the light of the source. The amplification is given by $$A = \frac{u^2 + 2}{u \sqrt{u^2 + 4}}, \qquad r_E = \sqrt{\frac{4 G M}{c^2}\,\frac{d (D - d)}{D}},$$ where $r_E$ is the Einstein ring radius, $u$ is the dimensionless impact parameter, $r/r_E$, $M$ is the mass of the lens, $D$ is the distance to the source and $d$ is the distance to the lens[@pac]. The amplification is more easily measured. When the dimensionless impact parameter, $u$, is less than one, we say that the lens is within the microlensing tube of the source. In this regime, the amplification is greater than about 1.34, corresponding roughly to experimental cutoffs.
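For concreteness (our illustration; the lens mass and distances in the example call are made-up values, not taken from the paper), the standard point-lens expressions above give the familiar threshold amplification $A(u\!=\!1)\approx 1.34$ and Einstein radii of a few AU for stellar-mass lenses at Galactic distances.

```python
import math

G, c = 6.674e-11, 2.998e8       # SI units
Msun, kpc, AU = 1.989e30, 3.086e19, 1.496e11

def amplification(u):
    return (u*u + 2.0)/(u*math.sqrt(u*u + 4.0))

def einstein_radius(M, d, D):
    """Einstein ring radius (metres) for lens mass M at distance d, source at D."""
    return math.sqrt(4.0*G*M*d*(D - d)/(c*c*D))

print(amplification(1.0))                                 # 3/sqrt(5) ~ 1.34
# illustrative numbers only: a 0.3 Msun lens halfway to a source at 8.5 kpc
print(einstein_radius(0.3*Msun, 4.25*kpc, 8.5*kpc)/AU)    # ~2 AU
```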
What is the probability that a given star is being microlensed? If this chance is small, as it is in all cases we consider, then it can be expressed as the typical number of lenses in the microlensing tube. Thus we have the optical depth[@halocont], $$\tau = \int_0^L \frac{\rho}{M}\, \pi r_E^2 \, dl = \frac{4 \pi G}{c^2} \int_0^D \rho(d)\, \frac{d (D - d)}{D} \, dd,$$ where $\rho$ is the spatial mass density of lenses. Note that the dependence on the mass of the lens cancels out. This, together with the lack of a lens velocity dependence (which enters into calculations of the rate of microlensing events) is one of the great advantages of using the microlensing optical depth. It means we need not consider the mass spectrum of the lenses or their distribution in velocity space and can consider only their spatial mass density. We must, however, give up the possibility of using the distribution of event durations to learn about the stellar mass function or velocity distribution. In the formalism developed in the following we consider only the optical depth.
For a field with all sources at the same distance, such as the Large Magellanic Cloud or the Small Magellanic Cloud, Eq.(3) can be used directly to calculate the optical depth. If the source stars are at varying distances from the observer they will sample different optical depths. The observed depth is given by integrating over the source density along the line of sight: $$\tau = \frac{4 \pi G}{c^2}\, \frac{\int_0^\infty dD\, D^{\alpha} \rho_s(D) \int_0^D dx\, \rho_l(x)\, x (D - x)/D}{\int_0^\infty dD\, D^{\alpha} \rho_s(D)},$$ where $\rho_s$ is the mass density in source stars, $\rho_l$ is the mass density in lenses, and $\alpha$ controls the source integration volume. Two factors enter into the determination of $\alpha$. On the one hand, as the distance from the observer increases, the number of stars seen in a given solid angle will increase as the square of the distance. On the other hand, as the distance from the observer increases, fewer stars will be above the magnitude limit to be seen by the observer. For source stars on the main sequence these effects almost cancel out and $\alpha=0$ is appropriate. For giants, such as the red clump giants used in the MACHO collaboration’s analysis of a subset of their data, which can be seen throughout the bulge, $\alpha=2$ is more reasonable. Since we use the full sample, which is composed of mainly main-sequence stars, in our analysis we use $\alpha=0$.
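As a sketch of how such an integral can be evaluated in practice (ours; the uniform lens density below is a toy choice with a known closed form, not a Galactic model), the single-distance optical depth can be computed numerically and checked against the exact result:

```python
import math

G, c = 6.674e-11, 2.998e8
Msun, pc = 1.989e30, 3.086e16
kpc = 1.0e3*pc

def tau_single_distance(rho_of_x, D, n_steps=20000):
    """Optical depth for all sources at distance D (SI units), midpoint rule."""
    dx = D/n_steps
    total = sum(rho_of_x((i + 0.5)*dx)*((i + 0.5)*dx)*(D - (i + 0.5)*dx)/D
                for i in range(n_steps))*dx
    return 4.0*math.pi*G/(c*c)*total

rho0 = 0.1*Msun/pc**3            # toy, uniform lens density
D = 8.5*kpc
print(tau_single_distance(lambda x: rho0, D))     # numerical integral
print(4.0*math.pi*G/(c*c)*rho0*D*D/6.0)           # exact result for constant rho
```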
Likelihood
----------
Although the average optical depth towards Baade’s window is useful, it is not the entire story. In the standard technique one estimates the optical depth for a sample of events by summing up the total durations for stars that have been microlensed, $\hat{t}_i$, (weighted by the efficiency), and dividing by the total Exposure, $E$, (star years observed):[^1] $$\tau = \frac{\pi}{4 E} \sum_i \frac{\hat{t}_i}{\epsilon_i}.$$ This procedure inevitably loses information, because it averages over the events, ignoring their locations. For regions such as the MACHO fields, where there are strong variations of the optical depth, it is unclear how meaningful a procedure this is. The location at which one has measured the optical depth is undefined. This makes it difficult to compare one’s results to models of the optical depth. Additionally, it is difficult to reliably quantify the gradient information, even using latitude cuts, because of the uncertainty in the “distance” between the two sub-regions.
We would like to extract the gradient information and avoid the question of precisely “where” the optical depth is measured. For this we must deal with maps of the optical depth as a function of Galactic longitude and latitude and not simply average values. Central to this is an ability to quantify how well a given theoretical map of the optical depth compares to the observed events. Thus we construct a likelihood function sensitive not only to the number and durations of the events but also their positions. Constructing the required likelihood is not entirely trivial. It is instructive to look first at the case for maps of the microlensing rate. We then show how this is modified when dealing with the optical depth.
For any small patch of the sky, $dxdy$, the expected number of events in a time $T$ is $T\Gamma dx dy$, where $\Gamma$ is the rate per area. If $n_i$ is the number of events actually observed in the $i$th sky-patch then by simple Poisson statistics the likelihood of the true rate being $\Gamma(x,y)$, given the observations, is $$\prod_i e^{-\Gamma T dx dy}\, (\Gamma T dx dy)^{n_i}/{n_i!}.$$ Since for a sufficiently small patch size $n_i$ will always be either 0 or 1 we arrive at $$L = \exp\left(-\int \Gamma T\, dx dy\right) dx^N dy^N T^N \prod_{events} \Gamma(x,y),$$ where $N$ is the total number of events. Dropping the $dx^N dy^N T^N$ factor, we have the relative likelihood for a model $\Gamma(x,y)$, $$L = \exp\left(-\int \Gamma T\, dx dy\right) \prod_i^{events} \Gamma(x_i,y_i).$$
The case of optical depth is more subtle but parallel. We consider a patch $dx dy dt$, where $t$ is the time coordinate. Since the optical depth gives the probability that any given star will be lensed at a given time, the probability of observing $n_i$ microlensing events in progress in this patch is $$e^{-\sigma\tau dx dy}\, \frac{(\sigma\tau dx dy)^{n_i}}{n_i!}.$$ Here $\sigma$ is the number density of source stars observed on the sky and $\tau$ is the optical depth in the patch. Note that this is independent of $dt$. For the entire region then, our likelihood is simply $$L = \prod^{dx dy dt}_{\rm all\ cells} e^{-\sigma\tau dx dy}\, \frac{(\sigma\tau dx dy)^{n_i}}{n_i!}.$$ As before, for $dx dy$ small enough, $n_i\rightarrow 0,1$ and so $$L = \prod^{dt} \exp\left(-\int \sigma\tau\, dx dy\right) \prod^{dx dy dt}_{events} \sigma\tau\, dx dy,$$ where the second product is over all the cells containing events. Since the first term is independent of time the product over all intervals $dt$ is easy. To do the second product we note that for each event the average time spent in the microlensing tube for an event with the measured $\hat{t}$ is $\frac{\pi}{4}\hat{t}$. Hence we have $$L = \exp\left(-\frac{T}{dt}\int \sigma\tau\, dx dy\right) \prod_{events} \left(\sigma\tau(x_i,y_i) dx dy\right)^{\pi\hat{t}_i/4 dt}.$$ If all of the events have not been detected, that is the efficiency $\epsilon < 1$, then we will be missing terms in the product over events. To account for this we write $$L = \exp\left(-\frac{T}{dt}\int \sigma\tau\, dx dy\right) \prod_{events} \left(\sigma\tau(x_i,y_i) dx dy\right)^{\pi\hat{t}_i/4\epsilon_i dt}$$ where $\epsilon_i$ is the efficiency for detecting events of length $\hat{t}_i$. Efficiencies for present experiments are typically of order 0.5. There is an immediate difficulty with this equation however: as $dt\rightarrow 0$ we have infinite exponents. This comes about because we are multiplying an infinite number of finite probabilities: one for each timeslice $dt$. What this procedure ignores is that there is a characteristic time interval over which microlensing in a cell $dxdy$ will be correlated. Thus we rescale the infinities with an ad hoc correlation constant $t_0$ which we will determine later. Thus we have finally, $$L = \exp\left(-\frac{T}{t_0}\int \sigma\tau\, dx dy\right) \prod_{events} \left(\sigma\tau(x_i,y_i)\right)^{\pi\hat{t}_i/4\epsilon_i t_0}$$ where we have dropped the $dx dy$ in the product.
In the limiting case where $\tau=\tau_0$, a constant over the region of interest, we expect to recover the standard formula for optical depth. In this case our likelihood is $$L = \exp\left(-\frac{\sigma A T}{t_0}\tau_0\right) \left(\sigma\tau_0\right)^{\sum_i \pi\hat{t}_i/4\epsilon_i t_0}$$ where A is the area of the region and the sum is over the events observed. Setting the derivative with respect to $\tau_0$ equal to zero we find $$-\frac{\sigma A T}{t_0} L + \sum_i \frac{\pi\hat{t}_i}{4\epsilon_i t_0 \tau_0} L = 0.$$ So the maximum is at $$\tau_0 = \frac{1}{\sigma A T}\sum_i \frac{\pi\hat{t}_i}{4\epsilon_i} = \frac{\pi}{4E}\sum_i \frac{\hat{t}_i}{\epsilon_i},$$ which is the standard formula, since the exposure is $E = \sigma A T$. Looking at Eq. 14 we see that it can be rewritten as a scaled Poisson distribution $$L = \exp\left(-\frac{\sigma A T}{t_0}\tau_0\right) \left(\frac{\sigma A T}{t_0}\tau_0\right)^{\sum_i \pi\hat{t}_i/4\epsilon_i t_0} \left(\frac{A T}{t_0}\right)^{-\sum_i \pi\hat{t}_i/4\epsilon_i t_0}$$ with maximum given above. This form allows us to fix $t_0$. We see that the distribution above has a width $$\left(\frac{\Delta\tau}{\tau_0}\right)^2 = \left(\sum_i \frac{\pi\hat{t}_i}{4\epsilon_i t_0}\right)^{-1}.$$ We compare this to the expected uncertainty as calculated by Han & Gould [@HanGould]; matching the two widths fixes $t_0$ and completely specifies our likelihood function.
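As a cross-check (ours, with synthetic durations, efficiencies and exposure), maximizing the constant-$\tau$ likelihood numerically reproduces the standard estimator, independently of the choice of $t_0$:

```python
import numpy as np

rng = np.random.default_rng(2)
that = rng.uniform(10.0, 60.0, size=41)     # synthetic event durations (days)
eps = rng.uniform(0.3, 0.7, size=41)        # synthetic detection efficiencies
E = 12.6e6*190.0                            # exposure sigma*A*T in star-days
t0 = 25.0                                   # arbitrary; drops out of the maximum

S = np.sum(np.pi*that/(4.0*eps*t0))
taus = np.linspace(1.0e-7, 5.0e-6, 200001)
logL = -(E/t0)*taus + S*np.log(taus)        # log of Eq. (14), up to an additive constant

print(taus[np.argmax(logL)])                # numerical maximum (to grid resolution)
print(np.pi/(4.0*E)*np.sum(that/eps))       # standard estimator: the two agree
```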
The beauty of this likelihood function approach is in its flexibility. We can use it to directly compare the models to the data without first calculating an “observed” optical depth, or we can use it to explore what the data say about the optical depth. We will discuss later our construction of a primitive map of optical depth from the presently observed data. This approach also shines in the analysis of observations of a disparate collection of lines of sight, especially if some have low optical depth and produce no events. One simply uses a density function which is non-simply connected and the null data is taken into account automatically. Overlapping fields are also easy to handle in this formalism. Areas in which one has an overlap are simply counted as having twice the density.
We use our likelihood to rank the models we will discuss in the next section. It is important to note that we will be able to compute only relative likelihoods. The true structure of the Galaxy almost certainly does not fall exactly into the classes of models we consider. By considering a range of models that have passed a variety of other tests we hope to include at least some models that approximate reality.
Models
------
Our Galaxy can be described loosely as consisting of three parts: a disk, a dark halo, and a central bulge. In the following section, we describe the models we use in our calculations of optical depths. For each component, we indicate a range for its parameters that we feel is reasonable based on the literature. Because the values of these parameters are so uncertain we felt that it would indicate a false level of certainty to use a Gaussian prior for the parameters of our models. Accordingly, we have taken flat priors over the parameter ranges indicated. Since we expect the amount of microlensing due to objects in the halo to be negligible [@halocont] and almost constant over the small range of directions examined, we do not model the halo.
The Galactic density enters the equations for the optical depth in two distinct ways: source and lens. One must therefore be very careful to keep the two roles separate. If both the bulge and the disk have the same lens to source ratio (or equivalently the same mass to light ratio), then the distinction becomes meaningless and can be dropped. It is not clear that this is the case. We handle the issue in the following manner. The visible content of the disk, that which could be seen as sources, is relatively well known. On the other hand, the ratio of bulge to disk source stars in the MACHO fields has been estimated by the MACHO collaboration as around 20% based on the fraction of the 2.2 micron flux contributed by the disk[@MACHO1st]. We therefore model the disk with a source disk and a lens disk, and adjust the bulge source fraction to give approximately an 80% contribution along our line of sight.
### Bulge
Models of the Galactic bulge are very uncertain. The difficulties are both practical (most fields towards the bulge suffer extremely high extinction) and theoretical (it is extremely difficult to invert gas and stellar dynamics to obtain potentials). Nevertheless, a number of models have emerged as standards[@Dwek]. These models have widely ranging functional forms. We consider two representative forms, Dwek et al’s G2 and E2. Their densities can be expressed $$\begin{aligned}
~~~~~\rho_{G2} & = & 1.2172\frac{M_{BAR}}{8\pi x_0y_0z_0} e^{-{r_s^2}/{2}} \\
~~~~~\rho_{E2} & = & \frac{M_{BAR}}{8\pi x_0 y_0 z_0} e^{-r}\end{aligned}$$ $$\begin{aligned}
~~~~~r_s & = & {\left\{\left[\left(\frac{x}{x_0}\right)^2+\left(\frac{y}{y_0}\right)^2\right]^2+\left(\frac{z}{z_0}\right)^4\right\}^{1/4}}\nonumber\\
~~~~~r & = & {\left\{\left(\frac{x}{x_0}\right)^2+\left(\frac{y}{y_0}\right)^2+\left(\frac{z}{z_0}\right)^2\right\}^{1/2}}\nonumber\end{aligned}$$ with the major axis inclined towards us at an angle $\theta_B$, which we will take to be between $0\deg$ and $45\deg$, with the near side in the first Galactic quadrant. The G2 model is the best fit to the DIRBE infrared maps [@Dwek], while the E2 is favored by an analysis of the distribution of red clump giants in the OGLE fields[@OGLEfields]. At the location of the MACHO fields we will be analyzing, G2 models tend to have high optical depths while E2 models produce optical depth considerably less efficiently [@lateZhao]. By considering both models with a wide range of parameters we hope to cover a large portion of possible bulge models.
Estimates of bulge masses cover a wide range. More recently, however, estimates have fallen approximately in the range $(2.0 -
3.0)\times10^{10}\Msol$[@lateZhao]. To be conservative we consider the range $1.0 -
4.0\times10^{10}\Msol$. The bulge scale factors are less well known. The best known of these are the two vertical scale heights. We follow the results of Stanek et al.[@OGLEfields] and fix these at $0.43\kpc$ for the G2 models and $0.25\kpc$ for the E2 models. We consider the ranges $0.3 - 2.7\kpc$ (G2) and $0.2 - 1.8\kpc$ (E2) for both $x_0$ and $y_0$. Following the review paper by Kerr & Lynden-Bell [@R0] we fix the distance to the bulge to be the IAU recommended value of 8.5kpc.
### Disk
As discussed above, the structure of the luminous component of the disk is relatively better known than a possible dark component [@morelumdisk]. Thus we consider two separate disks. The first is a thin luminous double exponential disk, $$\rho_{lum} = \frac{\Sigma_{lum}}{2 r_z}\exp\left({-\frac{z}{r_z}}\right)\exp\left({\frac{r_0-r}{r_d}}\right),$$ composed of luminous stars that can serve as sources. The parameters for this disk are fixed: $\Sigma_{lum}=15\Msol \pc2$, $r_z=0.3 \kpc$, and $r_d=3.5\kpc$.[@GWK; @morelumdisk] Not all stars in the disk are bright enough to be seen, however, and in fact there is evidence that the disk contains considerable mass beyond that which is visible, perhaps distributed somewhat differently from the luminous mass [@sigma0; @sigma1; @sigma2]. Hence we consider also a second double exponential disk, $$\rho = \frac{\Sigma_0}{2 r_z}\exp\left({-\frac{z}{r_z}}\right)\exp\left({\frac{r_0-r}{r_d}}\right),$$ whose parameters are not fixed. This component is allowed to serve as lenses. Analysis of the vertical velocity distributions of stars in the vicinity of the sun gives $2\sigma$ upper limits on the total surface density within $1\kpc$ of the disk plane of at most $85\Msol
\pc2$[@sigma0; @sigma1; @sigma2]. Of this about $30\Msol \pc2$ is in the form of bright stars and gas, while perhaps $10\Msol \pc2$ is in the form of M dwarfs which could serve as lenses. About $10\Msol \pc2$ is contributed by the halo, more in flattened models. This leaves a maximum of $35\Msol \pc2$ for a possible dark disk. Adding in the M dwarf lenses, the surface density of lenses should be in the range $10-45\Msol
\pc2$. To be conservative we choose the range $10-55\Msol \pc2$. The scale height for the dark component of the disk is unknown. We therefore consider the range $0.2-1.5\kpc$, which corresponds to populations tighter than the visible stars at the low end, to a significantly heated population at the high end. We fix the disk scale length at a value of 3.5kpc[@scalelength].
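For orientation, a minimal sketch of how any of the density models above is turned into an optical depth along a single line of sight. The standard point-lens expression is assumed here; the averaging over the bulge-plus-disk source population described earlier, the numerical value of the constant, and the `rho_lens` callable are ours, not the paper's.

```python
import numpy as np

FOUR_PI_G_OVER_C2 = 6.0e-13   # 4*pi*G/c^2 in pc / Msol (approximate)

def optical_depth(rho_lens, d_source_pc, n_steps=2000):
    """tau(D_s) = (4 pi G / c^2) * int_0^Ds rho(D) D (D_s - D) / D_s dD for a single
    source distance; rho_lens(D) must return the lens mass density in Msol/pc^3."""
    d = np.linspace(0.0, d_source_pc, n_steps)
    integrand = rho_lens(d) * d * (d_source_pc - d) / d_source_pc
    return FOUR_PI_G_OVER_C2 * np.trapz(integrand, d)
```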
Data
====
Events & Fields
---------------
The basic requirement for microlensing searches is a very large sample of distant stars. For studies of the halo the obvious choices of fields are towards the Large Magellanic Cloud and the Small Magellanic Cloud. Searches towards the bulge are complicated by the extinction from intervening dust obscuring the Galactic Center. In the red and blue bands used by the MACHO collaboration, there are only a relatively few fields with a large enough density of stars. One of these “holes in the dust” is Baade’s Window at Galactic longitude (l) and latitude (b) (1$\deg$,-4$\deg$), towards which the first-year observations are clustered. We show in Fig. 1 a plot of the 24 fields reported on by the MACHO collaboration. For reference, also included are some of the OGLE fields, which are also of relatively high density. The MACHO collaboration observed 12.6 million stars for a period of 190 days during the 1993 season in these 24 fields[@MACHO1st]. The 41 events that pass their criteria for microlensing are plotted on Fig. 1 as dots with radii proportional to their duration. The location and duration of each event can be found in Alcock et al. Table 1[@MACHO1st]. In addition to the location and duration of the events we also need a variety of other numbers. Information on the location, size and orientation of the 24 fields can be found at the MACHO website, http://wwwmacho.anu.edu.au.
The two quantities that pose the greatest problems are the observational efficiencies and the density of the stars in each field. As of this writing, the MACHO collaboration has not completed its analysis of the full blending efficiencies for the bulge fields. As a reasonable approximation, they suggest using their sampling efficiencies with a 0.75 factor correction. Accordingly, we follow this prescription and use their standard cut sampling efficiencies from Fig. 5 of Alcock et al[@MACHO1st]. Since the fields under consideration are similar and all are crowding limited, we assume that the efficiencies are uniform across the sample. The reader is warned, however, that this is a major source of uncertainty. It is possible that the efficiency is not only a function of duration but also of position [@HanGould] which would introduce spurious spatial structure into the microlensing distribution.
The final quantity we need for our analysis is the density of observed sources. In view of the fact that the fields are crowding limited and in the absence of better data, we assume a uniform density of source stars across the fields. Accounting for overlap we get $\sigma=
1.06\times 10^6 {\rm deg}^{-2}$. We note an encouraging point. The efficiencies are likely to get better as the fields get less crowded since blending effects are smaller. On the other hand, less crowded fields mean a smaller source density. Hence the errors due to our assumptions of constant efficiencies and source densities should be in opposite directions and are likely to at least partly cancel.
Structure of the Data
---------------------
Before attempting to extract information about the structure of the Galaxy from the data, we first look at what we can learn about the structure of the data itself. We apply our likelihood formalism to the very simplest model of the optical depth possible: $$\tau = \tau_0,$$ a flat optical depth over the entire set of fields. We obtain the result $\tau_0 = 1.93\pm0.39\times 10^{-6}$ where we quote 68% confidence limits. The MACHO collaboration’s reported value of $2.4\pm0.5\times10^{-6}$ for the same events includes a “correction” factor of $1/0.8$ introduced to adjust for contamination by source stars in the disk. Undoing this “correction”, we obtain $1.9\pm0.4\times10^{-6}$, in agreement with our result. Note that our errors are from our analytic calculations and not the result of Monte Carlo simulations.
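For readers who want to reproduce the flat-depth number, here is a sketch of the calculation. The estimator below is the usual efficiency-corrected sum over event durations, and the quadrature error is only a simple stand-in for the likelihood-based confidence limits quoted above; the variable names are ours.

```python
import numpy as np

def tau_flat(t_hat, eff, n_stars, t_obs):
    """Constant optical depth from events with durations t_hat (same time units as
    t_obs) and detection efficiencies eff, given n_stars monitored for t_obs."""
    exposure = n_stars * t_obs                  # star-days if t_obs is in days
    w = t_hat / eff
    tau = np.pi / (4.0 * exposure) * w.sum()
    sigma = np.pi / (4.0 * exposure) * np.sqrt((w ** 2).sum())   # rough 1-sigma error
    return tau, sigma
```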
Gradients
---------
We next apply our formalism to a slightly more complicated set of models: those with an optical depth gradient in the $b$ and $l$ directions. We consider models of the form $$\tau = \tau_0 + \frac{d\tau}{db} (b-b_0) + \frac{d\tau}{dl} (l-l_0),$$ where $b_0=-4$ and $l_0=2$. Fig. 2 shows contours of the likelihood, marginalized over $\tau_0$, in the $\frac{d\tau}{db}$ – $\frac{d\tau}{dl}$ plane. A gradient in the optical depth in latitude is clearly indicated. The case for a slope in longitude is less clear-cut. Marginalizing over the remaining parameter in each case we obtain $\frac{d\tau}{db}=1.12\pm0.37\times10^{-6}/{\rm deg}$ and $\frac{d\tau}{dl}=-1.71\pm1.19\times10^{-7}/{\rm deg}$, again with 68% confidence limits.
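A brute-force version of this fit, for illustration only: evaluate a per-field Poisson log likelihood on a grid of $(\tau_0, d\tau/db, d\tau/dl)$ and marginalize numerically. Using only field centres is a simplification (the full likelihood is also sensitive to positions within fields), and the effective-count weighting and all names are assumptions of this sketch.

```python
import numpy as np

def grid_posterior(fields_lb, n_eff, e_over_t0, tau0s, dbs, dls, b0=-4.0, l0=2.0):
    """Log-likelihood cube for tau = tau0 + db*(b-b0) + dl*(l-l0), one Poisson term
    per field.  n_eff[j] is the effective event count in field j and e_over_t0[j]
    the corresponding exposure factor."""
    logL = np.zeros((len(tau0s), len(dbs), len(dls)))
    for j, (l, b) in enumerate(fields_lb):
        tau = (tau0s[:, None, None]
               + dbs[None, :, None] * (b - b0)
               + dls[None, None, :] * (l - l0))
        tau = np.clip(tau, 1e-12, None)          # keep the model positive
        logL += -e_over_t0[j] * tau + n_eff[j] * np.log(tau)
    post = np.exp(logL - logL.max())
    return post.sum(axis=0)                      # marginalized over tau0 -> (db, dl) plane
```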
Our latitude gradient can be compared with an estimate based on the MACHO clump giant optical depths reported for fields above and below $b=-3.5\deg$[@MACHO1st]. Scaling the calculated slope, $s=(6.32-1.57)\times10^{-6}/1.38\deg$, to the full sample and undoing their $1/0.80$ disk correction, we obtain an estimate of $\frac{d\tau}{db}=1.7\times10^{-6}/{\rm deg}$ with likely errors of at least $1.0\times10^{-6}/{\rm deg}$. This is fully consistent with our results.
Maps
----
Just how much information about the structure of the event distribution can we extract? Our likelihood method lends itself nicely to the construction of the model independent “most likely” map of the microlensing optical depth. Consider a general function $\tau(b,l)$ giving the optical depth. We would like to find the positive definite function $\tau(b,l)$ which maximizes the likelihood or, equivalently, minimizes the negative log likelihood, $$LL = \int\int \left[ \frac{T}{t_0}\tau(b,l) \sigma(b,l) - Q(b,l) \log{(\tau(b,l))} \right] db dl$$ where $$Q(b,l) = \frac{\pi}{4 t_0}\sum_{\rm events} \frac{\hat{t}_i}{\epsilon_i}\,\delta(l-l_i,b-b_i).$$ Taken alone, however, this condition is insufficient. The solutions to this equation turn out to be delta functions at the event locations, a clearly unphysical situation. What is missing is that the optical depth should be a smooth function of position on the sky. Thus we must add a smoothing term to the log likelihood to be minimized. Our smoothing term must discourage structure unmotivated by the data, yet at the same time not penalize legitimate gradients such as we have seen in the data. We choose to minimize the extrinsic curvature, given by $$K = \frac{\partial^2\tau}{\partial l^2} + \frac{\partial^2\tau}{\partial b^2} + 2\,\frac{\partial^2\tau}{\partial l\,\partial b}.$$ Thus we require a minimum of $$LL = \int\int \left[ \frac{T}{t_0} \tau(b,l) \sigma(b,l) - Q(b,l) \log{(\tau(b,l))} + \lambda K^2\right] db dl$$ where $\lambda$ controls how much smoothing we require.
We solve for $\tau$ by an iterative scheme, starting with a flat $\tau(l,b)$. Setting $\lambda$ to provide a reasonable smoothness we produce the map shown in Fig. 3. Our generated map contains few surprises. We note a pronounced tilt in galactic latitude and a small one in galactic longitude just as we saw in the earlier sections. A slight bending of the contours to wrap around the Galactic center is also present. It is important to remember that although the generated map is smooth and does not appear “noisy”, this is an artifact of the way it is created: the smoothing term ensures that the resulting map is fairly smooth. The significance of the present map is low, due to the small number of events. A considerably larger data set would be needed before the map could be used to yield detailed information.
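A crude numerical version of the map construction, for orientation only. Positivity is kept by clipping, the curvature penalty is implemented as a Laplacian-squared (thin-plate-like) term as a stand-in for the extrinsic-curvature form above, boundaries are handled periodically for brevity, and the grid, step size and $\lambda$ all need tuning; none of these choices is taken from the paper. The role of $\lambda$ is the same as above: larger values give smoother maps.

```python
import numpy as np

def _laplacian(f, dl, db):
    # discrete Laplacian with periodic boundary handling (a simplification)
    fll = (np.roll(f, -1, 1) - 2 * f + np.roll(f, 1, 1)) / dl ** 2
    fbb = (np.roll(f, -1, 0) - 2 * f + np.roll(f, 1, 0)) / db ** 2
    return fll + fbb

def likelihood_map(event_lb, weights, sigma, T, t0, l_grid, b_grid,
                   lam=1.0, n_iter=5000, step=1e-10):
    """Gradient descent on sum[(T/t0)*tau*sigma*dA - Q*log(tau)] + smoothing.
    weights[i] = pi*t_hat_i/(4*t0*eps_i) is the effective count carried by event i."""
    dl, db = l_grid[1] - l_grid[0], b_grid[1] - b_grid[0]
    Q = np.zeros((b_grid.size, l_grid.size))
    for (l, b), w in zip(event_lb, weights):
        Q[np.abs(b_grid - b).argmin(), np.abs(l_grid - l).argmin()] += w
    expo = (T / t0) * sigma * dl * db            # Poisson-mean coefficient per cell
    tau = np.full_like(Q, 2e-6)                  # start from the flat solution
    for _ in range(n_iter):
        grad = expo - Q / tau                                            # likelihood part
        grad += 2.0 * lam * _laplacian(_laplacian(tau, dl, db), dl, db)  # smoothing part
        tau = np.clip(tau - step * grad, 1e-9, None)
    return tau
```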
Results
=======
With the results from our look at the data in mind, we now consider the more realistic models of the Galaxy discussed above. For each type of bulge, G2 and E2, we calculate the likelihood as a function of the various Galactic parameters with the stated ranges. Since the functional form of the bulge is so poorly constrained, we resist the temptation to make a direct comparison between the two bulge models. Given the number of parameters in our models and the present number of events, any such comparison would be of marginal significance. Instead, we focus on the parameters which have meaning independent of the functional form such as the mass of the bar, $M_B$, the inclination angle of the bar, $\theta_B$ and the bar axis ratio, $r=\frac{x_0}{y_0}$. In this section we will discuss the limits we can put on bulges of only these functional forms.
Bar
---
Due to the large number of parameters in our full model and limits on computational power, in our exploration of the implication of the microlensing events for bulge parameters we fix the disk parameters to reasonable values: $\Sigma_0 = 30.0, r_z=0.3\kpc,$ and $r_d=3.5\kpc$, while varying $M_B$, $\theta_B$, $x_0$ and $y_0$. The bulge quantities we are most interested in, $M_B$, $\theta_B$ and $r$, should be most strongly influenced by the magnitude and longitudinal gradient of the observed optical depth. Of these, only the magnitude of the optical depth is sensitive to the disk parameters. We expect that as we increase the disk surface density, the inferred mass will decrease. We have checked our results and find that this effect amounts to a less than $0.3\times10^{10}\Msol$ shift even when we increase the disk to $55\Msol
\pc2.$ Our limits on other bulge parameters are unaffected.
### Bar Mass and Orientation
Our major results concerning the bar mass and orientation are summarized in Figs. 4&5. These figures show contours of likelihood as a function of the mass of the bar and orientation angle away from our line of sight. The two horizontal scale lengths, $x_0$ and $y_0$ were marginalized. Fig. 4 shows results for G2 models, while Fig. 5 presents E2 models. We discuss the G2 case first.
One feature of Fig. 4 is immediately apparent: mass can be traded off for angle. A ridge in the likelihood lies on the line $M_{BAR}(10^{10}\Msol)-0.11\theta_B({\rm deg})=1.4$. There are limits to this trade-off, however. If the mass of the bar is much beyond $4.0\times10^{10}\Msol$, too high an optical depth will be produced to fit the data even if the angle is increased dramatically. There is also a sharp cutoff when the mass drops to below about 1.7. At such low masses, decreasing the angle no longer helps but rather hurts since the maximum optical depth is at a non-zero angle.[@lateZhao] The situation is much the same for the E2 models (Fig. 5), except shifted by about $1.1\times10^{10}\Msol$ in bulge mass. The E2 models drop off too rapidly to be efficient at producing microlensing optical depth even when optimally aligned, and hence need considerably higher bar masses[@lateZhao]. Below a mass of about $2.0\times10^{10}\Msol$ it becomes difficult to produce the high optical depths required by the data. We show in Fig. 6 the likelihood for the bar mass now marginalized over the bar orientation as well. The 90% confidence limit is at $1.75\times10^{10}\Msol$ for the G2 bulge and $2.6\times10^{10}\Msol$ for the E2 bulge. Taking into account the uncertainty in the disk normalization, we arrive at lower bounds on the bulge mass of $1.5\times10^{10}\Msol$ (G2) and $2.3\times10^{10}\Msol$ (E2). The most likely values are around $3.5\times10^{10}\Msol$ for G2 models and beyond $4.0\times10^{10}\Msol$ for E2 models.
In Fig. 7 we plot the marginalized likelihood versus bulge inclination angle. Low inclination angles are clearly favored. The likelihood has dropped off strongly by $30\deg$ and $20\deg$ for the G2 and E2 models respectively. Beyond these angles, even bar masses as high as $4.0\times10^{10}\Msol$ cannot produce enough microlensing to be compatible with the experimental results. The $90\%$ confidence limits are $30\deg$ (G2) and $21\deg$ (E2).
Our results, using only a single year of microlensing data, fit very nicely with attempts to constrain the bulge mass and orientation angle by other means. Analysis of the stellar motions in the bulge gives a range of mass estimates from slightly below $2.0\times10^{10}\Msol$ to almost $3.0\times10^{10}\Msol$.[@Kent; @zhao; @blum] Several authors have derived bulge masses around $2.2\times10^{10}\Msol$ using a variety of methods based on modeling of the gas content of the bulge.[@Binney; @ZRS96] A simple argument by Zhao et al. gives this as an upper limit[@lateZhao]. Our limit on the bulge mass is consistent with the reported values for the bulge mass.
The OGLE collaboration has reported an analysis of the distribution of the bulge red clump giants within their fields shown in Fig. 1. They report a bulge orientation of between 20 and $30\deg$ almost independent of the bulge model.[@OGLEfields] Dwek et al. analyze the DIRBE maps of the infrared emission from the bulge and conclude that the orientation angle lies in the range $10-40\deg$. Our range of $0-30\deg$ is also consistent with the value $16\pm2\deg$ suggested by Binney et al.’s analysis of gas dynamics[@Binney].
Perhaps more important than our limits on $M_B$ and $\theta_B$ separately, are the full contours of likelihood in the $M_B$ - $\theta_B$ plane showing the correlation between high bulge mass and high orientation angle. As we discuss later, increases in the number of events at Baade’s window do little to break the $M_B$ - $\theta_B$ degeneracy, but make the ridge considerably narrower. It requires more information to uniquely pick out a bulge mass. An analysis using the tensor virial theorem for the bulge by Blum [@blum] gives exactly the opposite degeneracy: high mass is correlated with low angle. Thus the results of microlensing and dynamics arguments are complementary and may be able to break the mass-angle degeneracy for either alone.
### Bar Axis Ratios
The other bulge quantity we look at is the bar axis ratio $r=\frac{x_0}{y_0}$. Although we do not explicitly have the axis ratio as an input quantity for our bulge models, microlensing puts limits on the ratio of the scale lengths. In Figs. 8&9 we have marginalized $\theta_B$ and the $x_0$-$y_0$ pair subject to the constraint $r=\frac{x_0}{y_0}$. One can immediately see a trade-off between the axis ratio and the mass. A higher axis ratio concentrates the mass where it will do the most microlensing. Hence, in conjunction with a low orientation angle, this allows a lower mass. Since a bar-like configuration seems to be favored by the most recent data on the bulge it is interesting to note that our results do not completely rule out axisymmetric models. The preferred axisymmetric models have very high masses. On the other hand, measurements of velocity dispersions in the bulge give low constraints on masses of axisymmetric models. The strongest microlensing evidence against axisymmetric models comes from a consideration of the more limited bulge giant subsample of the events, which probe the optical depth for sources distributed throughout the bar. An axisymmetric model can not produce enough microlensing to account for the high optical depth implied by these events[@long]. Our most likely values, r=3.5 (G2) and r=2.5 (E2), are consistent with the axis ratios determined by other means. Dwek et al. give a range $2.5-5.0$ for their models[@Dwek]. Stanek et al. obtain a value in the range $2.0-2.5$ again independent of model choice[@OGLEfields]. The distribution of bulge Mira variables gives an axis ratio of 3.9.[@whitelock].
Disk-Bulge Discrimination
-------------------------
Since we hope to discriminate between the bulge and disk on the basis of the latitude gradient information, the parameters $M_B$, $\theta_B$, $r_z$, and $\Sigma_0$ are most relevant. Thus we vary these quantities while holding the two horizontal bulge scale lengths constant at $x_0=1.58\kpc$ and $y_0=0.62\kpc$ for G2 models. The results for the E2 models are similar to those for the G2 models, except for a shift in the bulge mass as noted above. Since we are interested only in the disk parameters we discuss only the results for the G2 models.
### Surface Density
Fig. 10 shows the marginalized likelihood as a function of the surface density of the disk versus mass of the bulge. It is immediately clear that we cannot uniquely fix the surface density of the disk. Rather, as was the case with the orientation angle of the bulge, $\Sigma_0$ shows a linear degeneracy with $M_{BAR}$, with the ridge of the likelihood at $M_{B}(10^{10}\Msol)+0.0088\Sigma_0(\Msol/pc^2)= 2.88.$ A heavy disk can add as much as $0.7\times10^{-6}$ to the optical depth, allowing the bar to be less massive. The widening of the likelihood contours towards the top shows a slight tendency towards a more massive disk. This tendency, although not significant, is due to the slope of the optical depth in latitude.
### Scale Height
We show in Fig. 11 the likelihood as a function of the scale height of the disk and the bar mass. The scale height is only very weakly correlated with the bar mass and is basically unconstrained. There is a spreading of the likelihood contours for low scale heights possibly favoring scale heights below about $0.6\kpc$. At low scale heights the gradient in optical depth of the disk contribution is high. As the scale height increases, the disk gradient decreases. Hence low values of $r_z$ are favored, with high values, where the gradient is low, suppressed.
### Disk Bulge Degeneracy
Our results for disk parameters are something of a disappointment. With the present data we can say virtually nothing about the disk structure. Part of the problem is the trade-off that occurs between $M_B$ and $\Sigma_0$, keeping the optical depth constant and producing the ridge in the $M_B$ versus $\Sigma_0$ likelihood plot. We had hoped, however, that gradient information would allow us to break the degeneracy between $M_B$ and $\Sigma_0$. The reason that this does not happen is not that the area that the MACHO fields span is too small to show strong structure with these few events; slopes in $b$ and $l$ are indicated. The problem lies with the position of Baade’s window and the details of the Galactic models. Firstly, because the longitudes of the bulk of the MACHO fields are low, the longitude slope expected in this region is small for virtually any model. This makes the slope in longitude a poor diagnostic. Secondly, at the location of Baade’s window the $M_B$ - $\Sigma_0$ trade-off also keeps the latitude gradient fairly constant over a large range. We show this in Fig. 12, where the latitude gradient is shown as a function of $\Sigma_0$, with the optical depth kept constant by varying the $M_B$. The different curves show various values of the scale height of the disk. Since the $b$ slope in the region of Baade’s window is constant for a wide range of models it is very difficult to discriminate among them. This is why we see only hints of structure in our likelihood plots.
Future Directions
=================
Over the next year, two collaborations, OGLE and EROS, will be moving to dedicated telescopes and fully automated analysis systems. Together with the currently running MACHO system, these groups have the potential for producing many times the data we have used here. How will such a wealth of data effect our results? To explore this question we have synthesized 4 years of observations and rerun our analysis. Our synthetic observations were constructed assuming a G2 model with $M_B=3.0\times10^{10}\Msol, x_0=1.58\kpc, y_0=0.62\kpc,
\theta_B=15\deg, \Sigma_0=30.0\Msol \pc2, r_z=0.3\kpc$. The total number of events expected was calculated as $$N_{\rm exp} = \frac{T\sigma}{t_0}\int\!\!\int \tau(l,b)\, dl\, db\;\frac{\overline{w}}{\overline{\hat{t}}} = \frac{T\sigma}{t_0}\int\!\!\int \tau(l,b)\, dl\, db\;\frac{\sum_i \hat{t}_i/\epsilon_i}{\sum_i \hat{t}_i},$$ where the average was done over the present data set and the integral is over the MACHO first-year fields. From this expected number, the actual number was picked assuming Poisson statistics. Since the rate is proportional to the optical depth (assuming constant average duration), the events were laid down randomly according to the optical depth. A duration was picked out of the efficiency weighted set of observed durations. The proposed event was then “observed” or not based on the efficiency for that duration. The final set of “observed” events was run through our analysis. The results are shown in Fig. 13. We note a number of features of the results. First is that the confidence regions have tightened somewhat as expected. Second, the basic degeneracy between $M_B$ and $\theta_B$ is unaffected. Although $\theta_B$ now seems to be quite well constrained, $M_B$ still varies over a wide range. As we discussed earlier, part of the problem is in the distribution of our present lines of sight. Baade’s window is a poor location for determining longitude slope, and the latitude slope is correlated with the optical depth, making it less useful as a diagnostic tool. If we wish to determine the bulge parameters solely from microlensing, simply collecting more data in the same fields will not easily break the degeneracies between parameters. We must look to expanding our range of fields.
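A sketch of the event-synthesis loop just described; the efficiency function, the random-number handling and all names are ours, and `eff_of_t` is assumed to be a vectorized callable giving the detection efficiency as a function of duration.

```python
import numpy as np

def synthesize_events(tau_map, l_grid, b_grid, n_expected, obs_durations, eff_of_t,
                      seed=0):
    """Poisson total count, positions proportional to tau(l,b), durations resampled
    from the efficiency-weighted observed set, then an efficiency acceptance test."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(n_expected)
    p = tau_map.ravel() / tau_map.sum()          # rate ~ tau for a fixed mean duration
    ib, il = np.unravel_index(rng.choice(tau_map.size, size=n, p=p), tau_map.shape)
    w = 1.0 / eff_of_t(obs_durations)            # de-bias the observed duration sample
    t_hat = rng.choice(obs_durations, size=n, p=w / w.sum())
    keep = rng.random(n) < eff_of_t(t_hat)       # re-impose the detection efficiency
    return l_grid[il][keep], b_grid[ib][keep], t_hat[keep]
```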
So, what is the best strategy to use? Microlensing searches are costly and very time-consuming: we would like to find a strategy that maximizes the scientific return. We attempt to address this issue by analyzing a set of very stylized strategies that will allow us some insight into real searches.
Let us assume that we observe four identical fields centered at $(1.0,-3.0)$, $(1.0+\Delta l,-3.0)$, $(1.0,-3.0-\Delta b)$ and $(1.0+\Delta
l,-3.0-\Delta b)$. Further, we take the limit as the size of the fields goes to zero but the exposure for each field, $E$, stays constant. In this way the only gradient information will come from the field separation, and not from the distribution of events within the fields. Each choice of $\Delta
l$, $\Delta b$ will be a distinct strategy. Let $\hat{t}_i^j$ represent the durations of the events observed in the $j$-th field. Then the likelihood for a given model, $\tau(l,b)$ is $$L=\prod_j e^{-\frac{E}{t_0}\tau_j}\left(\frac{E}{t_0}\tau_j\right)^{\frac{\pi}{4t_0}\sum_i\hat{t}_i^j/\epsilon_i}$$ where $\tau_j$ is the predicted optical depth at the $j$-th field. Let $\tau^0(l,b)$ represent the optical depth of the underlying model. Then on average $\frac{\pi}{4t_0}\sum_i\frac{\hat{t}_i^j}{\epsilon_i}=\frac{E}{t_0}\tau^0_j$ and we will have $$L=\prod_j e^{-\frac{E}{t_0}\tau_j}\left(\frac{E}{t_0}\tau_j\right)^{\frac{E}{t_0}\tau^0_j}.$$ Using this likelihood we can now determine how well a given strategy can recover our underlying model. Figs. 14,15&16 show the magnitude of the 68% confidence intervals in various quantities as a function of the longitude and latitude separation of the fields assuming twice the exposure of present experiments. We used the same model as in the previous section. The first thing one notices is that the varying strategies don’t make as much difference as might be hoped. As the separation of fields is increased, our lever arm for making determinations of the quantities increases. However, at the same time the outer fields are in regions with low $\tau$ and hence few events and poor statistics. These two effects tend to cancel out, making dramatic improvements difficult. Nevertheless, what can we learn from these graphs? First of all, the present data correspond to roughly $\Delta l=3.0$, $\Delta b=2.0$. This is very close to the worst possible region for determination of all three parameters.
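The confidence-interval sizes of the kind shown in Figs. 14, 15 & 16 can be approximated in a few lines once the average likelihood above is in hand; a minimal sketch, profiling over one parameter at a time (the helper names are ours):

```python
import numpy as np

def mean_logL(tau_model, tau_true, fields_lb, e_over_t0):
    """Average log-likelihood of a candidate model for point-like fields, using the
    on-average effective counts (E/t0)*tau_true at each field."""
    lam = np.array([e_over_t0 * tau_model(l, b) for l, b in fields_lb])
    lam0 = np.array([e_over_t0 * tau_true(l, b) for l, b in fields_lb])
    return np.sum(-lam + lam0 * np.log(lam))

def width_68(param_grid, logL_profile):
    """Size of the 68% interval from a 1-D profile likelihood: 2*DeltaLogL < 1."""
    keep = 2.0 * (logL_profile.max() - logL_profile) < 1.0
    p = param_grid[keep]
    return p.max() - p.min()
```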
If we wish to improve our determination of $M_B$ the results suggest a relatively large $\Delta b$ and a smallish $\Delta l$. For $\Sigma_0$, on the other hand, we should have a high $\Delta l$ and $\Delta
b$ is irrelevant. Finally, for $\theta_B$, a moderate $\Delta l$ is required and again $\Delta b$ is mostly unimportant. A single search strategy to fix the parameters would need to probe the entire range of scales in longitude. Coverage in latitude appears to be less important; only the large scales need to be probed. The best strategy would seem to be one which includes many fields scattered over the entire bulge instead of concentrated in one region. Although the optical depth at any given location would be less well defined, better limits on the global parameters would be obtained.
Conclusions
===========
In this paper we have developed a novel likelihood technique for the analysis of the microlensing data towards the bulge. We construct a likelihood that is sensitive to the spatial distribution of the events. Our technique is both more flexible than calculations that have been done before, and allows for a direct comparison of the data to models of the mass distribution for the Galaxy. It is particularly good for dealing with data from more than one line of sight, field overlaps and variations in the density of stars observed. Its sensitivity to the position of events makes it ideal for determining gradients in the optical depth. Applying this technique to the first year MACHO data we have confirmed the strong slope in latitude found by the MACHO collaboration, and found hints of one in longitude. We have, for the first time, given a quantitative measure of these slopes, $\frac{d\tau}{dl}=-1.71\pm1.19\times10^{-7}/{\rm deg}$ and $\frac{d\tau}{db}=1.12\pm0.37\times10^{-6}/{\rm deg}$. We also apply our analysis to constructing a crude “most likely” map of the microlensing optical depth over the observed region.
We confront a set of Galactic models consisting of a bulge with either G2 or E2 functional form and an exponential disk, with the data. With only one season of microlensing data, we can already set meaningful limits on various bulge parameters. We find that $M_B>1.5
(2.0)\times10^{10}\Msol$, for the G2 (E2) based models. Most likely values for the bulge mass are much higher: $M_B=3.5 (>4.0)\times10^{10}\Msol$. Previous work has shown that such high bulge masses imply low halo MACHO fractions[@long]. A massive bulge puts tight constraints on the contribution of the disk to the rotation curve at small radii. A small disk, however, leaves more room for the halo in the outer rotation curve, implying a massive halo. Since microlensing results towards the LMC fix the MACHO content of the halo, a massive halo implies a smaller MACHO fraction.
We also constrain the inclination angle of the bulge finding that $\theta_B < 30\deg (21\deg)$, consistent with other measurements. Our most likely values for the axis ratio of the bulge, $r=\frac{x_0}{y_0} = 3.5
(2.5)$, are consistent with determinations by other methods. We note that axisymmetric bulge models are not entirely ruled out with the full sample of events. Such models, however, typically need very high ($\approx4.0\times10^{10}\Msol)$ bulge masses. Such high bulge masses are unlikely for axisymmetric bulge models[@kentpriv]. No limits could be set on the disk component due to a degeneracy in the latitude slope of the optical depth between the bulge and disk contributions. We discuss what can be expected with an increase in the number of seasons of data.
Finally, we have attempted to quantitatively discuss various strategies for microlensing searches and conclude that a strategy observing many fields well scattered in longitude offers the best return in determining bulge and disk parameters. The distribution of fields in latitude is less important. Despite the difficulties, this is perhaps more important than simply getting the optical depth more accurately at one location as it will allow a better determination of the relative contributions of the bar and the disk. Ideally, the optical depth can be mapped over a wide range in both latitude and longitude, yielding detailed information about the mass distribution in the inner Galaxy. The field of microlensing promises to be an eventful one for the foreseeable future!
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported by the DOE (at Chicago and Fermilab) and by NASA (at Fermilab through grant NAG 5-2788). I would like to thank Craig J. Copi for many stimulating conversations, and for his gracious work on the hMgo package. I would also like to thank Michael Turner, Evalyn Gates and Robert Nichol.
[std]{}
S. Kent, private communication (1996).
Figure Captions {#figure-captions .unnumbered}
===============
[**Figure 1:**]{} [$20\deg \times 20\deg$ region centered at the galactic center. Light boxes are the 24 MACHO 1st year fields. Smaller heavy boxes are OGLE fields. The 41 events we analyse are shown as dots with radii proportional to the event duration.]{}
[**Figure 2:**]{} [Likelihood contours for $l$ and $b$ gradients in the optical depth. Solid lines are the 68% confidence contours. Dotted lines denote 95% confidence. Dashed lines denote 99% confidence. Also included, to guide the eye are long dashed lines for 38% confidence.]{}
[**Figure 3:**]{} [“Most likely” map of the optical depth based on the first year events. The solid lines show contours of $4.0, 3.0, 2.0,
1.0, {\rm and} 0.5\times10^{-6}$ from the top down.]{}
[**Figure 4:**]{} [Contours of likelihood in the $\theta_B - M_B$ plane for G2 models with $x_0$ and $y_0$ marginalized. Disk values were held fixed at $\Sigma_0=30\Msol \pc2$ and $r_z=0.3\kpc$. Contours are as in Fig. 2.]{}
[**Figure 5:**]{} [Same as Fig. 4. but for E2 models.]{}
[**Figure 6:**]{} [Likelihoods from Figs. 4&5 now with $\theta_B$ marginalized. Solid line for G2 models, dashed line for E2 models.]{}
[**Figure 7:**]{} [Same as Fig. 6, but with $M_B$ marginalized.]{}
[**Figure 8:**]{} [Contours of likelihood in the $M_B$ - $r$(axis ratio) plane. $\theta_B$ and one degree of freedom from $x_0$, $y_0$ have been marginalized. As in Fig. 4, disk parameters have been fixed. Contours are as in Fig. 2.]{}
[**Figure 9:**]{} [Same as Fig. 8 but for E2 models.]{}
[**Figure 10:**]{} [Contours of likelihood in the $M_B$ - $\Sigma_0$ plane. $\theta_B$ and $r_z$ have been marginalized. Bulge parameters $x_0$ and $y_0$ have been held fixed at $x_0=1.58\kpc$ and $y_0=0.62\kpc$. Contours are as in Fig. 2.]{}
[**Figure 11:**]{} [Contours of likelihood in the $M_B$ - $r_z$ plane. $\theta_B$ and $\Sigma_0$ have been marginalized. Bulge parameters $x_0$ and $y_0$ have been held fixed at $x_0=1.58\kpc$ and $y_0=0.62\kpc$. Contours are as in Fig. 2.]{}
[**Figure 12:**]{} [Slope in latitude for the optical depth as a function of $\Sigma_0$, the disk surface density. The total optical depth has been held constant by varying $M_B$ as $\Sigma_0$ varies. The lines represent: solid ($r_z=0.3\kpc$), dashed ($r_z=0.5\kpc$), long dashed ($r_z=1.0\kpc$), and dot-dashed ($r_z=1.5\kpc$). Total variation across our range of disk surface densities is less than 10%. ]{}
[**Figure 13:**]{} [Likelihood in the $M_B$ - $\theta_B$ plane for a simulated 4 seasons of data based on a G2 model with $M_B=3.0\times10^{10}\Msol, x_0=1.58\kpc, y_0=0.62\kpc, \theta_B=15\deg,
\Sigma_0=30.0\Msol \pc2, r_z=0.3\kpc$. Contours same as Fig. 2. ]{}
[**Figure 14:**]{} [Contours of the size of the 68% confidence regions in the determination of $M_B$. Values are calculated as a function of the latitude and longitude spacing of the observed fields. Solid: 1.46, Dotted: 1.53, Dashed: 1.61, Long Dash: 1.69$\times10^{10}\Msol$ ]{}
[**Figure 15:**]{} [Same as Fig. 14 but for $\theta_B$. Solid: 16$\deg$, Dotted: 19$\deg$, Dashed: 23$\deg$, Long Dash: 26$\deg$]{}
[**Figure 16:**]{} [Same as Fig. 14, but for $\Sigma_0$. Solid: 29, Dotted: 35, Dashed: 42, Long Dash: 48$\Msol \pc2$.]{}
[^1]:
---
abstract: 'If $k$ is a field, $A$ and $B$ $\!k\!$-algebras, $M$ a faithful left $\!A\!$-module, and $N$ a faithful left $\!B\!$-module, we recall the proof that the left $\!A\otimes_k B\!$-module $M\otimes_k N$ is again faithful. If $k$ is a general commutative ring, we note some conditions on $A,$ $B,$ $M$ and $N$ that do, and others that do not, imply the same conclusion. Finally, we note a version of the main result that does not involve any algebra structures on $A$ and $B.$'
address: |
Department of Mathematics\
University of California\
Berkeley, CA 94720-3840, USA
author:
- 'George M. Bergman'
title: Tensor products of faithful modules
---
`Comments, corrections, and related references welcomed, as always!`\
[TeX]{}ed
[^1]
I needed Theorem \[T.OX\] below, and eventually found a roundabout proof of it. Ken Goodearl found a simpler proof, which I simplified further to the argument given here. But it seemed implausible that such a result would not be classical, and I posted a query [@query], to which Benjamin Steinberg responded, noting that Passman had proved the result in [@DP Lemma 1.1]. His proof is virtually identical to one below.
In the mean time, I had made some observations on what is true when the base ring $k$ is not a field, and on the “irrelevance” of the algebra structures of $A$ and $B,$ and added these to the write-up. So while I no longer expect to publish this note, I will arXiv it, and keep it online as an unpublished note, to make those observations available. (Ironically, I eventually found an easier way to get the result for which I had needed Theorem \[T.OX\].)
The main statement, and a generalization {#S.main}
========================================
Except where the contrary is stated, we understand algebras to be associative, but not necessarily unital.
For $k$ a commutative ring, $A$ and $B$ $\!k\!$-algebras, $M$ a left $\!A\!$-module, and $N$ a left $\!B\!$-module, we recall the natural structure of left $\!A\otimes_k B\!$-module on $M\otimes_k N$: An element $$\begin{minipage}[c]{35pc}\label{d.aOXb}
$f\ =\ \sum_{1\leq i\leq n}\ a_i\otimes b_i\ \in\ A\otimes_k B,$
\quad where $a_i\in A,\ b_i\in B,$
\end{minipage}$$ acts on decomposable elements $u\otimes v$ $(u\in M,\,v\in N)$ of $M\otimes_k N$ by $$\begin{minipage}[c]{35pc}\label{d.aOXb(uOXv)}
$u\otimes v\ \mapsto\ \sum_i\ a_i u\,\otimes\,b_i v.$
\end{minipage}$$ Since the right-hand side is bilinear in $u$ and $v,$ this map extends $\!k\!$-linearly to general elements of $M\otimes_k N.$ The resulting action is easily shown to be compatible with the $\!k\!$-algebra structure of $A\otimes_k B.$
\[T.OX\] Let $k$ be a field, $A$ and $B$ $\!k\!$-algebras, $M$ a faithful left $\!A\!$-module, and $N$ a faithful left $\!B\!$-module. Then the left $\!A\otimes_k B\!$-module $M\otimes_k N$ is also faithful.
Given nonzero $f\in A\otimes_k B,$ we wish to show that it has nonzero action on $M\otimes_k N.$ Clearly, we can choose an expression for $f$ such that the $b_i$ are $\!k\!$-linearly independent. (We could simultaneously make the $a_i$ $\!k\!$-linearly independent, but will not need to.) Since we have assumed $f$ nonzero, not all of the $a_i$ are zero; so as $M$ is a faithful $\!A\!$-module, we can find $u\in M$ such that not all the $a_i u\in M$ are zero. Hence there exists a $\!k\!$-linear functional $\varphi: M\to k$ such that not all of the $\varphi(a_i u)$ are zero. Since the $b_i$ are $\!k\!$-linearly independent, the element $\sum_i \varphi(a_i u)\,b_i\in B$ will thus be nonzero. So as $N$ is a faithful $\!B\!$-module, we can choose $v\in N$ such that $$\begin{minipage}[c]{35pc}\label{d.neq0}
$(\,\sum_i\,\varphi(a_i u)\,b_i)\,v\ \neq\ 0$\quad in $N.$
\end{minipage}$$
We claim that for the above choices of $u$ and $v,$ if we apply $f$ to $u\otimes v\in M\otimes_k N,$ the result, i.e., the right-hand side of the formula giving the action of $f$ on decomposable elements, is nonzero. For if we apply to that element the map $\varphi\otimes{\mathrm}{id}_N:
M\otimes_k N\to k\otimes_k N\cong N,$ we get the nonzero element displayed above. Thus, as required, $f$ has nonzero action on $M\otimes_k N.$
The above result assumes $k$ a field. Succumbing to the temptation to examine what the method of proof can be made to give in the absence of that hypothesis, we record
\[C.OX\] Let $k$ be a commutative ring, $A$ and $B$ $\!k\!$-algebras, $M$ a faithful left $\!A\!$-module, and $N$ a faithful left $\!B\!$-module.
Suppose, moreover, that elements of $M$ can be separated by $\!k\!$-module homomorphisms $M\to k,$ and that every finite subset of $B$ belongs to a free $\!k\!$-submodule of $B.$
Then the left $\!A\otimes_k B\!$-module $M\otimes_k N$ is again faithful.
Exactly like the proof of Theorem \[T.OX\]. The added hypothesis on $B$ is what we need to conclude that any element of $A\otimes_k B$ can be written in the form $\sum_i a_i\otimes b_i$ with $\!k\!$-linearly independent $b_i;$ the added hypothesis on $M$ is what we need to construct $\varphi.$ (Of course, the parenthetical comment in the proof of Theorem \[T.OX\] about also making the $a_i$ $\!k\!$-linearly independent does not go over.)
Warren Dicks (personal communication) pointed out early on another proof of Theorem \[T.OX\]: The actions of $A$ and $B$ on $M$ and $N$ yield embeddings $A\to{\mathrm}{End}_k(M)$ and $B\to{\mathrm}{End}_k(N).$ Taking $\!k\!$-bases $X$ and $Y$ of $M$ and $N,$ one can regard the underlying vector spaces of ${\mathrm}{End}_k(M)$ and ${\mathrm}{End}_k(N)$ as $M^X$ and $N^Y.$ By two applications of [@Bourbaki II, §3, 7, Cor. 3 to Prop. 7, p.AII.63], or one of [@KG Theorem 2], one concludes that the natural map $M^X\otimes_k N^Y\to(M\otimes_k N)^{X\times Y}$ is an embedding, and deduces that the map $A\otimes_k B\to{\mathrm}{End}(M\otimes_k N)$ is an embedding, the desired conclusion.
Counterexamples to variant statements {#S.cegs}
=====================================
The hypotheses of the above corollary are strikingly asymmetric in the pairs $(A,M)$ and $(B,N).$ We can, of course, get the same conclusion if we interchange the assumptions on these pairs. But what if we try to use one or the other hypothesis on both pairs; or concentrate both hypotheses on one of them?
It turns out that none of these modified hypotheses guarantees the stated conclusion. Here are three closely related constructions that give the relevant counterexamples. In all three examples, the $\!k\!$-algebras $A$ and $B$ are in fact commutative and unital.
\[L.OX.cegs\] Let $C$ be a commutative principal ideal domain with infinitely many prime ideals, and let $P_0,$ $P_1$ be two disjoint infinite sets of nonzero prime ideals of $C.$ Then for $k,$ $A,$ $M,$ $B,$ $N$ specified in each of the following three ways, the $\!A\!$-module $M$ and the $\!B\!$-module $N$ are faithful, and $A\otimes_k B$ is nonzero, but $M\otimes_k N$ is zero, hence not faithful. In each example, the variant of the hypotheses of Corollary \[C.OX\] satisfied by that example is noted at the end of the description.
(i) Let $A=B=k=C,$ and let $M=\bigoplus_{p\in P_0} k/p$ and $N=\bigoplus_{p\in P_1} k/p.$ In this case, every finite subset of $A$ or of $B$ trivially lies in a free $\!k\!$-submodule of that algebra.
In the remaining two examples, let $C^+$ be the commutative $\!C\!$-algebra obtained by adjoining to $C$ an indeterminate $y_p$ for each $p\in P_0\cup P_1,$ and imposing the relations $p\,y_p=\{0\}$ for each such $p.$
(ii) Let $k=C^+,$ let $M$ be the ideal of $k$ generated by the $y_p$ for $p\in P_0,$ let $N$ be the ideal of $k$ generated by the $y_p$ for $p\in P_1,$ let $A=k/N,$ and let $B=k/M.$ Note that since $MN=\{0\},$ we may regard $M$ as an $\!A\!$-module and $N$ as a $\!B\!$-module. In this case, $M$ and $N$ embed in $k,$ hence $\!k\!$-module homomorphisms from each of those modules into $k$ separate elements.
(iii) Let $k=A=C^+,$ and let $M$ be the ideal of $A$ generated by all the $y_p.$ The partition of our infinite family of primes into $P_0$ and $P_1$ will not be used here. On the other hand, let $B$ be the field of fractions of $C,$ made a $\!k\!$-algebra by first mapping $k$ to $C$ by sending the $y_p$ to $0,$ then mapping $C$ to its field of fractions; and let $N$ be any nonzero $\!B\!$-vector-space. In this case, every finite subset of $A$ trivially lies in a free $\!k\!$-submodule, and $\!k\!$-module homomorphisms into $k$ clearly separate elements of $M.$
The fact that $P_0$ and $P_1$ are infinite sets of primes in the commutative principal ideal domain $C$ implies that each of those sets has zero intersection, hence that the $M$ and $N$ of (i) are faithful $\!C\!$-modules, equivalently, are a faithful $\!A\!$-module and a faithful $\!B\!$-module. That their tensor product is zero is clear.
Similar considerations show that in (ii), any element of $A$ or $B$ with nonzero constant term in $C$ acts nontrivially on $M,$ respectively, $N.$ On the other hand, a nonzero element of $A$ or $B$ with zero constant term, i.e., a nonzero element $u$ of the ideal $M$ or $N,$ will act nontrivially on the module $M,$ respectively, $N,$ because for some $p$ in $P_0,$ respectively $P_1,$ the element $u$ must involve a nonzero polynomial in $y_p,$ which will have nonzero action on $y_p\in M$ or $N$ as the case may be. (Since $y_p\,y_{p'}=\{0\}$ for $p\neq p',$ every element of $M$ or $N$ is a sum of one-variable polynomials in the various $y_p$ with zero constant term.) Again, it is clear that $M\otimes_k N=\{0\}.$ On the other hand, $A\otimes_k B = (k/M)\otimes_k (k/N)\cong k/(M+N)\cong C\neq\{0\}.$
In case (iii), $M$ is faithful over $A$ for the same reason as in (ii), while faithfulness of $N$ over $B$ is clear, as is the condition $M\otimes_k N=\{0\}.$ On the other hand, since $A$ admits a homomorphism into $C,$ we have $A\otimes_k B\neq\{0\}.$
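As an illustration of construction (i) in its most familiar form, take $C=\mathbb Z$ and for $P_0,$ $P_1$ any two disjoint infinite sets of primes (say the primes congruent to $1,$ respectively $3,$ modulo $4).$ Then $$M\ =\ \bigoplus_{p\in P_0}\mathbb Z/p\mathbb Z
\quad\mbox{and}\quad
N\ =\ \bigoplus_{q\in P_1}\mathbb Z/q\mathbb Z$$ are faithful $\!\mathbb Z\!$-modules, since no nonzero integer is divisible by every member of an infinite set of primes, while $$M\otimes_{\mathbb Z}N\ \cong\ \bigoplus_{p\in P_0,\,q\in P_1}\ \mathbb Z/p\mathbb Z\otimes_{\mathbb Z}\mathbb Z/q\mathbb Z\ \cong\ \bigoplus_{p\in P_0,\,q\in P_1}\ \mathbb Z/\!\gcd(p,q)\mathbb Z\ =\ \{0\},$$ each summand vanishing because $p$ and $q$ are distinct primes.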
We remark that in the above constructions, the condition that the principal ideal domain $k$ have infinitely many primes can be weakened to say that it has at least two primes, $p_0$ and $p_1,$ by replacing the modules $\bigoplus_{p\in P_0} k/p$ and $\bigoplus_{p\in P_1} k/p$ in (i) with $\bigoplus_{n>0} k/p_0^n$ and $\bigoplus_{n>0} k/p_1^n,$ and making similar adjustments in (ii) and (iii). (In the last, only one prime is needed.) One just has to be a little more careful in the proofs.
In another direction, one may ask:
\[Q.OX\] If we add to the hypotheses of Corollary \[C.OX\] the condition that $M$ be finitely generated as an $\!A\!$-module, and/or that $N$ be finitely generated as a $\!B\!$-module, can some of the other hypotheses of that corollary be weakened, dropped, or modified perhaps in some of the ways that Lemma \[L.OX.cegs\] shows is [*not*]{} possible without such finite generation conditions?
$A$ and $B$ don’t have to be algebras {#S.nonalg}
=====================================
The sharp-eyed reader may have noticed that the proof of Theorem \[T.OX\] makes no use of the algebra structures of $A$ and $B.$ This led me to wonder whether the result was actually a special case of a statement that involved no such structure. As Theorem \[T.nonalg\] below shows, the answer is, in a way, yes. But as the second proof of that theorem shows, one can equally regard Theorem \[T.nonalg\] as a special case of Theorem \[T.OX\].
Given a commutative ring $k,$ and $\!k\!$-modules $A,$ $M_0$ and $M_1,$ let us define an [*action*]{} of $A$ on $(M_0,M_1)$ to mean a $\!k\!$-linear map $A\to{\mathrm}{Hom}_k(M_0,M_1),$ which will be written $(a,u)\mapsto au$ $(a\in A,\ u\in M_0,\ au\in M_1);$ and let us call such an action [*faithful*]{} if it is one-to-one as a map $A\to{\mathrm}{Hom}(M_0,M_1).$
Given two actions, one of a $\!k\!$-module $A$ on a pair $(M_0,M_1)$ and the other of a $\!k\!$-module $B$ on a pair $(N_0,N_1),$ we see that an action of $A\otimes_k B$ on $(M_0\otimes_k N_0,\,M_1\otimes_k N_1)$ can be defined just as for algebras, with each element acting by .
\[T.nonalg\] Let $k$ be a field, and suppose we are given an action of a $\!k\!$-vector-space $A$ on a pair $(M_0,M_1)$ and an action of a $\!k\!$-vector-space $B$ on a pair $(N_0,N_1).$
Then if each of these actions is faithful, so is the induced action of the $\!k\!$-vector-space $A\otimes_k B$ on the pair $(M_0\otimes_k N_0,\,M_1\otimes_k N_1).$
Exactly like the proof of Theorem \[T.OX\]. (Note that $u$ will be chosen from $M_0,$ while $\varphi$ will be a $\!k\!$-linear functional on $M_1;$ and that $v$ will be chosen from $N_0$ to make the analogue of the displayed condition hold in $N_1.)$\
[*Second proof.*]{} Let us make $A$ and $B$ into $\!k\!$-algebras by giving them zero multiplication operations. Then we can make the vector space $M=M_0\oplus M_1$ a left $\!A\!$-module using the action $a(u_0,u_1)=(0,a\,u_0),$ and similarly make $N=N_0\oplus N_1$ a $\!B\!$-module. The faithfulness hypotheses on the given actions clearly make these modules faithful.
Hence by Theorem \[T.OX\], $M\otimes_k N$ is a faithful $\!A\otimes_k B\!$-module. Now $M\otimes_k N$ is a fourfold direct sum $(M_0\otimes_k N_0)\oplus (M_0\otimes_k N_1)\oplus
(M_1\otimes_k N_0)\oplus (M_1\otimes_k N_1);$ but the action of $A\otimes_k B$ annihilates all summands but the first, and has image in the last, so its faithfulness means that every nonzero element of $A\otimes_k B$ induces a nonzero map from $M_0\otimes_k N_0$ to $M_1\otimes_k N_1,$ which is the desired conclusion.
Of course, the analog of Corollary \[C.OX\] holds for actions as in Theorem \[T.nonalg\].
Remarks {#S.remarks}
=======
Composing the proof of Theorem \[T.nonalg\] from Theorem \[T.OX\], and the proof of Theorem \[T.OX\] from Theorem \[T.nonalg\], we see that the general case of Theorem \[T.OX\] follows from the zero-multiplication case of the same theorem. This is striking, since the main interest of the result is for algebras with nonzero multiplication.
One may ask why I made the convention that algebras are associative, if the algebra operations were not used in the theorem. The answer is that there is no natural definition of a module over a not-necessarily-associative algebra. (There is a definition of a module over a Lie algebra, based on the motivating relation between Lie algebras and associative algebras. But there is no natural definition of a Lie or associative structure on a tensor product of Lie algebras, so the result can’t be used in that case.)
Which of Theorems \[T.OX\] and \[T.nonalg\] is the “nicer” result? I would say that Theorem \[T.nonalg\] shows with less distraction what is going on, while Theorem \[T.OX\] is likely to be more convenient for applications.
[00]{}
George Bergman, [*Is this result on tensor products of faithful modules known?,*]{} MathOverflow question, <http://mathoverflow.net/questions/247332/is-this-result-on-tensor-products-of-faithful-modules-known>.
N. Bourbaki, [*Éléments de mathématique. Algèbre,*]{} Chapitres 1 à 3. Hermann, Paris 1970. MR0274237
K. R. Goodearl, [*Distributing tensor product over direct product,*]{} Pacific J. Math. [**43**]{} (1972) 107–110. MR0311714
D. S. Passman, [*Elementary bialgebra properties of group rings and enveloping rings: an introduction to Hopf algebras,*]{} Comm. Algebra [**42**]{} (2014) 2222–2253. MR3169701
[^1]: This preprint is readable online at <http://math.berkeley.edu/~gbergman/papers/unpub/>
---
author:
- '[**Peggy Cénac**]{}'
- '[**Basile de Loynes**]{}'
- '[**Arnaud Le Ny**]{}'
- '[**Yoann Offret**]{}'
bibliography:
- 'biblio-vnby.bib'
title: '**Persistent random walks I : recurrence *versus* transience**'
---
------------------------------------------------------------------------
[ We consider a walker on the line that at each step keeps the same direction with a probability which depends on the discrete time already spent in the direction the walker is currently moving. More precisely, the associated left-infinite sequence of jumps is supposed to be a Variable Length Markov Chain (VLMC) built from a probabilized context tree given by a double-infinite comb. These walks with memories of variable length can be seen as generalizations of Directionally Reinforced Random Walks (DRRW) introduced in [@Mauldin1996 Mauldin & al., Adv. Math., 1996] in the sense that the persistence times are anisotropic. We give a complete characterization of the recurrence and the transience in terms of the probabilities to persist in the same direction or to switch. We point out that the underlying VLMC is not supposed to admit any stationary probability. Actually, the most fruitful situations emerge precisely when there is no such invariant distribution. In that case, the recurrent and the transient property are related to the behaviour of some embedded random walk with an undefined drift so that the asymptotic behaviour depends merely on the asymptotics of the probabilities of change of directions unlike the other case in which the criterion reduces to a drift condition. Finally, taking advantage of this flexibility, we give some (possibly random and lacunar) perturbations results and treat the case of more general probabilized context trees built by grafting subtrees onto the double-infinite comb.]{}\
[ Persistent random walk . Directionally reinforced random walk . Variable length Markov chain . Variable length memory . Probabilized context tree . Recurrence . Transience . Random walk with undefined mean or drift - Perturbation criterion]{}\
[ 60G50 . 60J15 . 60G17 . 60J05 . 37B20 . 60K35 . 60K37 ]{}\
------------------------------------------------------------------------
Introduction {#intro}
============
Classical random walks are usually defined from a sequence of independent and identically distributed (*i.i.d.*) increments $\{X_k\}_{k\geq 1}$ by $$\label{def-persist-part}
S_0:=0 \quad \textrm{ and } \quad S_n:=\displaystyle\sum_{k=1}^n X_k \quad \textrm{for all integers} \quad n \geq 1.$$
When the jumps are defined as a (finite-order) Markov chain, a short memory in the dynamics of the stochastic paths is introduced and the random walk $\{S_{n}\}_{n\geq 0}$ itself is no longer Markovian. Such a process is called in the literature a *persistent* random walk, a *Goldstein-Kac* random walk or also a *correlated* random walk. Concerning the genesis of the theory, we allude to [@Goldstein:51; @Kac:74; @renshaw81; @weissbook94; @eckstein00; @weiss02] as regards the discrete-time situation but also its connections with the continuous-time telegraph process.
In this paper, we aim at investigating the asymptotic behaviour of a one-dimensional random walk in a particular random environment, where the latter is partially the trajectory of the walk itself (the past), which acts as a reinforcement. Roughly speaking, we consider a walker that at each step keeps the same direction (or switches) with a probability which directly depends on the time already spent in the direction the walker is currently moving. In order to take into account possibly infinite reinforcements, we need to consider a two-sided process of jumps $\{X_n\}_{n\in\mathbb Z}$ with a variable finite but possibly unbounded memory.
The following probabilistic presentation of the Variable Length Markov Chains (VLMC), initially introduced in [@Rissanen], comes from [@ccpp]. Besides, we refer to [@INSR:INSR062_1 pp. 117-134] for an overview on VLMC.
VLMC and associated persistent random walks
-------------------------------------------
Introduce the set $\mathcal{L}= {\mathcal{A}}^{-{\mathbb{N}}}$ of left-infinite words on a given alphabet ${\mathcal{A}}$ and consider a complete tree on this alphabet, *i.e.* a tree such that each node has $0$ or $\mathsf{card}({\mathcal{A}})$ children, whose leaves $\mathcal{C}$ are words (possibly infinite) on $\mathcal A$. To each leaf $c\in\mathcal C$, called a context, is attached a probability distribution $q_{c}$ on $\mathcal A$. Endowed with this probabilistic structure, such a tree is named a probabilized context tree. Different context trees lead to different probabilistic impacts of the past and different dependencies. The model of (very) persistent random walks we consider in this paper corresponds to the double infinite comb which is given in Figure \[double-peigne\].
![\[double-peigne\] Probabilized context tree (double infinite comb).](tree){width="0.8\linewidth"}
For this particular context tree, the set of leaves $\mathcal{C}$ is defined from a binary alphabet $\mathcal{A}:=\{{\mathtt u},{\mathtt d}\}$ consisting of a letter ${\mathtt u}$, for moving up, and a letter ${\mathtt d}$ for moving [down]{}. More precisely, it is given by $$\label{leaf}
\mathcal{C}:=\{{\mathtt u}^n{\mathtt d}: n\ge 1\}\cup \{{\mathtt u}^\infty\}\cup\{{\mathtt d}^n{\mathtt u}: n\ge 1\}\cup \{{\mathtt d}^\infty\},$$ where ${\mathtt u}^n{\mathtt d}$ represents the word ${\mathtt u}\ldots {\mathtt u}{\mathtt d}$ composed with $n$ characters ${\mathtt u}$ and one character ${\mathtt d}$. Note that the set of leaves contains also two infinite words ${\mathtt u}^\infty$ and ${\mathtt d}^\infty$. The distributions $q_{c}$ are then Bernoulli distributions and we set, for any $\ell\in\{{\mathtt u},{\mathtt d}\}$ and $n\geq 1$, $$\label{eq:double-peigne}
q_{{\mathtt u}^n {\mathtt d}}({\mathtt u}) = 1-q_{{\mathtt u}^n{\mathtt d}}({\mathtt d}) =: 1-\alpha^{\mathtt u}_n,\quad
q_{{\mathtt d}^n{\mathtt u}}({\mathtt d}) = 1-q_{{\mathtt d}^n{\mathtt u}}({\mathtt u})=:1-\alpha^{\mathtt d}_n\quad \mbox{and}\quad q_{\ell^\infty}(\ell)=:1-\alpha_{\infty}^{\ell}.$$ It turns out from equalities (\[changedir\]) below that $\alpha_{n}^\ell$ given in (\[eq:double-peigne\]) stands for the probability of changing letter after a run of length $n$ (possibly infinite) of letters $\ell$.
For a general context tree and any left-infinite word $U\in\mathcal L$, the prefix ${\smash{\raisebox{3.5pt}{\!\!\!\begin{tabular}{c}$\hskip-4pt\scriptstyle\longleftarrow$ \\[-7pt]{\rm pref}\end{tabular}\!\!}}}(U)\in\mathcal C$ is defined as the shortest suffix of $U$, read from right to left, appearing as a leaf of the context tree. In symbols, it is given for any $\ell\in\{{\mathtt u},{\mathtt d}\}$ and $n\geq 1$ by $${\smash{\raisebox{3.5pt}{\!\!\!\begin{tabular}{c}$\hskip-4pt\scriptstyle\longleftarrow$ \\[-7pt]{\rm pref}\end{tabular}\!\!}}}(\ldots {\mathtt d}{\mathtt u}^{n})={\mathtt u}^n{\mathtt d},\quad {\smash{\raisebox{3.5pt}{\!\!\!\begin{tabular}{c}$\hskip-4pt\scriptstyle\longleftarrow$ \\[-7pt]{\rm pref}\end{tabular}\!\!}}}(\ldots {\mathtt u}{\mathtt d}^{n})={\mathtt d}^n{\mathtt u}\quad\mbox{and}\quad {\smash{\raisebox{3.5pt}{\!\!\!\begin{tabular}{c}$\hskip-4pt\scriptstyle\longleftarrow$ \\[-7pt]{\rm pref}\end{tabular}\!\!}}}(\ell^{\infty})=\ell^{\infty}.$$ Then the associated VLMC, entirely determined by $\{q_{c} : c \in \mathcal{C}\}$ and $U_0\in\mathcal L$, is the $\mathcal{L}$-valued Markov chain $\{U_n\}_{n\geq 0}$ given, for any $\ell\in\mathcal A$ and $n\geq 0$, by the transitions $$\label{eq:def:VLMC}
{\mbox{$\mathbb{P}$}}(U_{n+1} = U_n\ell| U_n)=q_{{\smash{\raisebox{2.5pt}{\!\!\!\begin{tabular}{c}$\hskip-2pt\scriptscriptstyle\longleftarrow$ \\[-9pt]{$\scriptstyle\hskip 1pt\rm pref$}\end{tabular}\!\!}}}(U_n)}(\ell).$$ Since $\mathcal L$ is naturally endowed with a one-sided space-shift corresponding to the left inverse of the usual time-shift, the whole past can be recovered from the state of $U_n$ for any $n \geq 0$. Thus we can introduce $X_n\in\mathcal A$ for any $n\in\mathbb Z$ as the rightmost letter of $U_n$. In particular, we can write $$U_{n}=\cdots X_{n-1}X_{n}.$$ Supposing the context tree is infinite, the so called letter process $\{X_n\}_{n\in\mathbb Z}$ is not a finite-order Markov chain in general. Furthermore, given an embedding $\mathcal A{}G$ of the alphabet in an additive group $G$, the resulting random walk $\{S_n\}_{n\geq 0}$ defined in is no longer Markovian and somehow very persistent.
In the sequel, we make the implicit coding $\{{\mathtt d},{\mathtt u}\}\simeq \{-1,1\}\subset\mathbb R$ (${\mathtt d}$ for a descent and ${\mathtt u}$ for a rise) so that the letter process $\{X_{n}\}_{n\in\mathbb Z}$ represents the jumps in $\{-1,1\}$ of the persistent random walk $S$ taking its values in $\mathbb Z$. In other words, the persistent random walk is defined by the transitions probabilities of changing directions, that is, for all $n \geq 1$, $m \geq 0$ $$\label{changedir}
\mathbb P(S_{m+1}=S_{m}+ 1 \,|\, U_{m}=\cdots {{\mathtt u}} {\mathtt d}^{n})=\alpha^{{\mathtt d}}_{n}\quad\mbox{and}\quad \mathbb P(S_{m+1}=S_{m}- 1 \,|\, U_{m}=\cdots {{\mathtt d}} {\mathtt u}^{n})=\alpha^{{\mathtt u}}_{n}.$$
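To make the transition mechanism (\[changedir\]) concrete, here is a minimal simulation sketch; it is not part of the original development, the language (Python), the function name and the initialization just after an up-down breaking time are our own choices, and the transition sequences are passed as callables $n\mapsto\alpha_n^{\ell}$.

``` python
import random

def simulate_persistent_walk(alpha_u, alpha_d, n_steps, seed=0):
    """Simulate S_0, ..., S_{n_steps} following the transitions (changedir).

    alpha_u(n) (resp. alpha_d(n)) is the probability of changing direction
    after a run of n rises (resp. n descents).  As in the text, we start just
    after an up-down breaking time, i.e. with a current run of one descent.
    """
    rng = random.Random(seed)
    s, path = 0, [0]
    direction, run_length = -1, 1          # -1 stands for 'd', +1 for 'u'
    for _ in range(n_steps):
        change = alpha_d(run_length) if direction == -1 else alpha_u(run_length)
        if rng.random() < change:          # the walk switches direction
            direction, run_length = -direction, 1
        else:                              # the current run goes on
            run_length += 1
        s += direction
        path.append(s)
    return path

# Example with harmonic transitions alpha_n = 1/(2n), as in the example of Section 3.
print(simulate_persistent_walk(lambda n: 0.5 / n, lambda n: 0.5 / n, 10_000)[-1])
```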
Overview of the results
-----------------------
In this paper, necessary and sufficient conditions on the transition probabilities $\alpha_{n}^{\ell}$ for the corresponding persistent random walk $S$ to be recurrent or transient are investigated.
Under some moment conditions on the persistence times (to be defined) whose distributions depend only on the $\alpha_n^\ell$’s, a Strong Law of Large Numbers (SLLN) as well as a Central Limit Theorem (CLT) are stated in [@peggy]. These conditions imply that the underlying VLMC admits a unique stationary probability distribution.
In the following, we extend these results by providing a complete characterization of the recurrence and the transience. We also slightly relax the assumptions for the SLLN. These results are robust in the sense that they rely neither on the existence of a stationary probability nor on any reversibility property. A summary of the different situations is given in Table \[tableau\]. The generalization of the CLT of [@peggy] to running times that are not square integrable is being drafted for a forthcoming companion paper.
Basically, when the random elapsed times between two changes of direction (also termed the persistence times) $\tau^{{\mathtt u}}$ and $\tau^{{\mathtt d}}$ are integrable (or at least one of them is), the recurrence criterion reduces to a classical null drift condition. In the remaining case, recurrence requires the distribution tails of the persistence times to be comparable. Thus, in the former case the criterion involves the parameters $\alpha_n^\ell$ globally, whereas in the latter case it only depends on their asymptotics. It follows that, in the undefined drift context, we can slightly perturb the symmetric configuration while remaining recurrent, contrary to the well-defined drift case, for which a perturbation of exactly one transition can lead to a transient behaviour. In fact, in the undefined drift case, the persistent random walk remains recurrent or transient as long as the perturbation stays asymptotically controlled.
The proofs of these results rely on the study of the skeleton (classical) random walk $$\{M_{n}\}_{n\geq 0}:= \{S_{T_{n}}\}_{n\geq 0},$$ where $\{T_{n}\}_{n\geq 0}$ is the (classical) random walk of up-down breaking times. Then we use classical results, especially the structure theorem in [@Erickson2] of Erickson on random walks with undefined mean. These random walks together with the length of runs, also called the times of change of direction, are illustrated in Figure \[marche\] at the beginning of Section \[def-marche-persist\].
Related results
---------------
First, some optimal stopping rules have been obtained in [@Allaart2001; @Allaart2008] for the walks considered. Secondly, recurrence and transience as well as scaling limits have been widely investigated for correlated random walks, that is, in our framework, persistent random walks with a probabilized context tree of finite depth (in particular, the increment process is a finite-order Markov chain). For instance, regarding persistent random walks in random environments (where the latter are the transition probabilities to change direction) CLT are proved in [@Toth84; @Toth86]. Besides, recurrence and transience have been studied in [@renshaw83; @Lenci] for correlated random walks in dimension two.
Closely related to our model, Directionally Reinforced Random Walks (DRRW) have been introduced in [@Mauldin1996] to model ocean surface wave fields. Those are nearest-neighbour random walks on $\mathbb Z^{d}$ which keep their direction during random times $\tau$, independently and identically drawn after every change of direction, the new direction being chosen independently and uniformly among the other ones. In dimension one, our model generalizes these random walks since asymmetric transition probabilities $(\alpha_{n}^{{\mathtt u}})$ and $(\alpha_{n}^{{\mathtt d}})$ lead in general to running times $\tau^{{\mathtt u}}$ and $\tau^{{\mathtt d}}$ with distinct distributions.
Due to their symmetry, the recurrence criterion of DRRW in dimension one takes the simple form given in [@Mauldin1996 Theorem 3.1., p. 244], and we obviously retrieve this particular result in our more general situation (see Proposition \[crit-rec\] and Theorem \[undefinedrt\]). Under some significant moment conditions on the running time, it is stated in [@Mauldin1996 Theorem 3.3. and Theorem 3.4., p. 245] that these random walks are recurrent in $\mathbb Z^{2}$, when the waiting time between changes of direction is square integrable, and transient in $\mathbb Z^{3}$ under the weaker assumption of a finite expectation. In higher dimensions, they are shown to be always transient. In dimension three, the latter result has been recently improved in [@Rainer2007 Theorem 2., p. 682] by removing the integrability condition. Also, the assertion in [@Mauldin1996] that the DRRW is transient when its embedded random walk of successive locations of change into the first direction is transient has been partially invalidated in [@Rainer2007 Theorem 4., p. 684]. Thus, even in the symmetric situation, the characterization of recurrence or transience is a difficult task. The case of anisotropic persistent random walks built from VLMC in higher dimension is a work in progress and this paper is somehow a first step.
Besides, as regards the scaling limits of DRRW, we refer to [@Scalingdirectionaly1998; @Rastegar:2012; @Scalingdirectionaly2014], where diffusive and super-diffusive behaviours are revealed. We expect in a forthcoming paper to extend these results to the asymmetric situation and to fill some gaps left open.
Finally, these walks are also somehow very similar to some continuous-time random motion, also called random flights, for which the changes of directions are Poisson random clocks. They have been extensively considered as generalizations of the Goldstein-Kac telegraph process. A sample of this field can be found for instance in [@Kolesnik:2005; @Orsingher2007; @Kolesnik:2008].
Outline of the article
----------------------
The paper is organized as follows. Section \[def-marche-persist\] is devoted to the presentation of the general framework, including the main assumptions and notations summarized in Figure \[marche\]. Section \[Rec-et-Trans\] is focused on recurrence and transience. It is first shown that the recurrent or transient behaviour of $S$ can be deduced from the oscillating or drifting behaviour of the skeleton classical random walk $M$. From this follows an almost sure comparison lemma involving stochastic domination in the context of couplings. The main result of this paper requires discriminating between two situations, depending on whether the persistent random walk admits an almost sure drift or not. The bulk of the work in the latter case consists of giving a characterization in a form as simple as possible by applying the results in [@Erickson2]. To this end we need to reduce the problem to the study of a derived random walk which can be viewed as a randomized version of the embedded random walk $M$. Finally, in Section \[perturbations\] we give some perturbation results in the undefined drift case, together with an [*a.s.*]{} comparison result which weakens the assumptions of the former one and provides all the information needed in this specific undefined drift context.
Settings and assumptions {#def-marche-persist}
========================
![Persistent random walk[]{data-label="marche"}](Marche2){width="0.80\linewidth"}
Foremost, we refer to Figure \[marche\] that illustrates our notations and assumptions by a realization of a linear interpolation of our persistent random walk $\{S_n\}_{n\geq 0}$, built from a double infinite comb given in Figure \[double-peigne\] with probabilities of changing directions $(\alpha_{n}^{{\mathtt u}})$ and $(\alpha_{n}^{{\mathtt d}})$ as in (\[eq:double-peigne\]).
Renewal hypothesis
------------------
In order to avoid the trivial situations in which the persistent random walk can stay almost surely frozen in one of the two directions, we make the assumption below. It roughly means that the underlying VLMC denoted by $U$ given in (\[eq:def:VLMC\]) has a renewal property. We refer to [@Comets; @Gallo] for more refinement about regenerative schemes and related topics.
\[A1\] For any initial distribution $\mu$ on $\mathcal L$, $$\label{a0}
\mathbb P_{\mu}\left(\overleftarrow{\rm pref}(U_{n})={\mathtt d}{\mathtt u}\quad i.o.\right)
=
\mathbb P_{\mu}\left((X_{n-1},X_{n})=({\mathtt u},{\mathtt d}) \quad i.o.\right)=1,$$ where the abbreviation [*i.o.*]{} means that the events depending on $n$ occur for infinitely many $n$. It turns out that this hypothesis is equivalent to each of the following statements:
1. For any $\ell\in\{{\mathtt u},{\mathtt d}\}$ and $r\geq 1$, $\alpha_{\infty}^{\ell}\neq 0$ and $$\label{a1}
\prod_{k=r}^{\infty}(1-\alpha_{k}^\ell)=0.$$
2. For any $\ell\in\{{\mathtt u},{\mathtt d}\}$ and $r\geq 1$, $\alpha_{\infty}^{\ell}\neq 0$ and, either there exists $n\geq r$ such that $\alpha_{n}^{\ell}=1$, or $$\label{a1b}
\sum_{k=r}^{\infty} \alpha_{k}^{\ell}=\infty.$$
Furthermore, since between two up-down events $(X_{n-1},X_{n})=({\mathtt u},{\mathtt d})$ there is at least one down-up event $(X_{n-1},X_{n})=({\mathtt d},{\mathtt u})$ and reciprocally, assumption (\[a0\]) can be alternatively stated switching ${\mathtt u}$ and ${\mathtt d}$.
This assumption rules out too strong a reinforcement, that is, a too fast decay of the probabilities of change of direction. Sequences of transitions satisfying this assumption are said to be *admissible*. Below are given typical examples for which the assumption holds or fails.
### Examples of admissible or inadmissible sequences {#examples-of-admissible-or-inadmissible-sequences .unnumbered}
[*Denoting for any integer $p\geq 0$, the $p$-fold composition of the logarithm function by $$\log_{[p]}:=\log\circ\cdots\circ\log,$$ it is obvious that (\[a1b\]) holds for instance when $$\label{ex2moins}
\frac{1}{n\log(n)\cdots\log_{[p]}(n)}\underset{n\to\infty}{=}{\mathcal O}(\alpha_{n}^{\ell}),$$ where $\mathcal O$ stands for the big O notation. In contrast, it fails when there exists $\varepsilon>0$ such that $$\alpha_{n}^{\ell}\underset{n\to\infty}{=}
{\mathcal O} \left(\frac{1}{n\log(n)\cdots\log_{[p-1]}(n)(\log_{[p]}(n))^{1+\varepsilon}}\right).$$*]{}
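As a quick numerical illustration of this dichotomy (again a sketch of ours, not part of the original text; the truncation levels and the two test sequences are arbitrary choices), one can track the partial sums appearing in (\[a1b\]): they keep growing, albeit very slowly, in the admissible case and stabilize in the inadmissible one.

``` python
import math

def partial_sum_alpha(alpha, n_max):
    # Partial sum of the series in (a1b), starting at k = 2 to avoid log(1) = 0.
    return sum(alpha(k) for k in range(2, n_max + 1))

admissible   = lambda n: 1.0 / (n * math.log(n))        # satisfies (ex2moins): divergent sum
inadmissible = lambda n: 1.0 / (n * math.log(n) ** 2)   # summable: condition (a1b) fails

for name, a in (("1/(n log n)", admissible), ("1/(n log^2 n)", inadmissible)):
    print(name, [round(partial_sum_alpha(a, 10 ** p), 3) for p in (3, 4, 5, 6)])
```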
Persistence times and embedded random walk
------------------------------------------
Thereafter, under Assumption \[A1\], we can consider the sequence of almost surely finite breaking times $(B_n)_{n\geq 0}$ (see Figure \[marche\]) defined inductively for all $n\geq 0$ by $$B_0=\inf\left\{k\geq 0 : X_{k}\neq X_{k+1}\right\}\quad\mbox{and}\quad
B_{n+1}=\inf\left\{k>B_{n} : X_{k}\neq X_{k+1}\right\}.$$ For the sake of simplicity, throughout this paper, we deal implicitly with the conditional probability $$\label{condition}
\mathbb P(\;\cdot\;\mid \overleftarrow{\rm pref}(U_{1})={\mathtt d}{\mathtt u})=\mathbb P(\;\cdot\;\mid (X_{0},X_{1})=({\mathtt u},{\mathtt d})).$$ In particular, $B_{0}=0$ with probability one. In other words, we suppose that the initial time is a so called up-down breaking time. Thanks to Assumption \[A1\] and to the renewal properties of the chosen variable length Markov chain $U$, the latter condition can be assumed without loss of generality and has no fundamental importance in the long time behaviour of $S$. Furthermore, the lengths of rises $(\tau^{\mathtt u}_{n})$ and of descents $(\tau^{\mathtt d}_{n})$ are then defined for all $n\geq 1$ by $$\label{tau}
\tau_n^{{\mathtt d}}:=B_{2n-1}-B_{2n-2}
\quad\mbox{and}\quad
\tau_n^{{\mathtt u}} :=B_{2n}-B_{2n-1}.$$ Due to [@peggy Proposition 2.6.] for instance, $(\tau_n^{\mathtt d})$ and $(\tau_n^{\mathtt u})$ are independent sequences of [*i.i.d.*]{} random variables. Besides, an easy computation leads to their distribution tails and expectations, given and denoted for any $\ell\in\{{\mathtt u},{\mathtt d}\}$ and all integers $n\geq 1$ by $$\label{def-tail}
\mathcal T_{\ell}(n):={\mbox{$\mathbb{P}$}}(\tau^{\ell}_{1} \geq n)=\prod_{k=1}^{n-1}(1-\alpha_{k}^\ell)\quad\mbox{and}\quad \Theta_\ell(\infty):={\mbox{$\mathbb{E}$}}[\tau^{\ell}_{1}]=\sum_{n=1}^{\infty}\prod_{k=1}^{n-1}(1-\alpha_{k}^\ell).$$
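For concreteness, the quantities in (\[def-tail\]) are straightforward to evaluate numerically from the transitions; the short sketch below (our own, with a finite truncation standing in for $\Theta_\ell(\infty)$) is only an illustration and not part of the original text.

``` python
def tail(alpha, n):
    """T_ell(n) = P(tau^ell >= n) = prod_{k=1}^{n-1} (1 - alpha(k)), cf. (def-tail)."""
    prod = 1.0
    for k in range(1, n):
        prod *= 1.0 - alpha(k)
    return prod

def truncated_mean(alpha, m):
    """Partial sum sum_{n=1}^{m} T_ell(n); its limit as m grows is Theta_ell(infty)."""
    total, prod = 0.0, 1.0
    for n in range(1, m + 1):
        total += prod              # at this point prod equals T_ell(n)
        prod *= 1.0 - alpha(n)
    return total

# Harmonic transitions alpha_n = 1/(2n): the tails decay like n^{-1/2},
# so the truncated means keep growing (infinite mean length of runs).
alpha = lambda n: 0.5 / n
print([round(truncated_mean(alpha, 10 ** p), 1) for p in (2, 3, 4, 5)])
```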
At this stage, we exclude for simplicity the situation of almost surely constant length of runs which trivializes the analysis of the underlying persistent random walk. Besides, note that the persistent random walk $S$ can be equivalently defined either via the distribution tails ${\mathcal{T}}_\ell$ or the probabilities $(\alpha_n^\ell)$ with $\ell\in\{{\mathtt d},{\mathtt u}\}$. Thus, depending on the context, we will choose the more suitable description of the parameters of the model.
In order to deal with a more tractable random walk built with possibly unbounded but *i.i.d.* increments, we consider the underlying skeleton random walk $\{M_{n}\}_{n\geq 0}$ associated with the even breaking times random walk $\{T_{n}\}_{n\geq 0}$ (up-down breaking times) defined for all $n\geq 0$ by $$\label{marche-paire}
T_{n}:=\sum_{k=1}^{n}(\tau_{k}^{{\mathtt d}}+\tau_{k}^{{\mathtt u}})\quad\mbox{and}\quad
M_{n}:=S_{T_{n}}=\sum_{k=1}^{n}(\tau^{{\mathtt u}}_{k}-\tau^{{\mathtt d}}_{k}).$$ The so called (almost sure) drift $\mathbf d_{{\scriptscriptstyle T}}$ of the increasing random walk $T$ always exists whereas that of $M$, denoted by $\mathbf d_{{\scriptscriptstyle M}}$, is well-defined whenever one of the expectations $\Theta_{{\mathtt u}}(\infty)$ or $\Theta_{{\mathtt d}}(\infty)$ defined in the right-hand side of (\[def-tail\]) is finite. They are given by $$\label{drift-def1}
{\mathbf d_{{\scriptscriptstyle T}}}=\Theta_{{\mathtt d}}(\infty)+\Theta_{{\mathtt u}}(\infty)\in[0,\infty]\quad\mbox{and}\quad {\mathbf d_{{\scriptscriptstyle M}}}:=\Theta_{{\mathtt u}}(\infty)-\Theta_{{\mathtt d}}(\infty)\in{\mathbb{R}}\cup\{\pm \infty\}.$$ Furthermore, if the drift of $M$ is well-defined, we can set (extended by continuity whenever necessary) $$\label{drift-def2}
{\mathbf d}_{{\scriptscriptstyle S}}:=\frac{\Theta_{{\mathtt u}}(\infty)-\Theta_{{\mathtt d}}(\infty)}{\Theta_{{\mathtt u}}(\infty)+\Theta_{{\mathtt d}}(\infty)}\in [-1,1].$$ In regards to the convergence (\[LFGN1\]), the latter quantity is naturally termed the (almost sure) drift of $S$. For the record, we give some relevant examples for which the mean of the length of run is finite or not.
In the following, two non-negative sequences $(a_n)$ and $(b_n)$ are said to be of the same order, and we shall write it $a_n \asymp b_n$, when there exists a positive constant $c$ such that for $n$ sufficiently large, $$c^{-1} a_n \leq b_n \leq c a_n.$$
### Typical examples of running time means {#typical-examples-of-running-time-means .unnumbered}
[*Simple calculations ensure that the expectation $\Theta_{\ell}(\infty)$ is finite whenever there exists $p\geq 0$ and $\varepsilon>0$ such that either there exists $n\geq 1$ with $\alpha_{n}^{\ell}=1$ or, for $n$ sufficiently large, $$\label{ex1}
\alpha_{n}^{\ell}\geq \frac{1}{n}+\frac{1}{n\log(n)}+\cdots+\frac{1}{n\log(n)\cdots\log_{[p-1]}(n)}+\frac{1+\varepsilon}{n\log(n)\cdots\log_{[p]}(n)}.$$ On the contrary, $\Theta_{\ell}(\infty)$ is infinite whenever there exists $p\geq 0$ such that, for all $n\geq 1$ we have $\alpha_{n}^{\ell}\neq 1$ and for $n$ large enough, $$\label{ex2}
\alpha_{n}^{\ell}\leq \frac{1}{n}+\frac{1}{n\log(n)}+\cdots+\frac{1}{n\log(n)\cdots\log_{[p]}(n)}.$$ The proofs of these claims follow from the computation of the asymptotics of the distribution tails. More precisely, when the transitions belongs to $[0,1)$ and are given, for positive parameters $\lambda_{0},\lambda_{1},\cdots,\lambda_{p}$ and large $n$, by $$\label{ex4}
\alpha_{n}^{\ell}:=\frac{\lambda_0}{n}+\frac{\lambda_{1}}{n\log(n)}+\cdots+\frac{\lambda_{p}}{n\log(n)\cdots\log_{[p]}(n)},$$ then, $$\label{tailasymp}
{\mathcal{T}}_{\ell}(n) \asymp \frac{1}{n^{\lambda_{0}}(\log(n))^{\lambda_{1}}\cdots(\log_{[p]}(n))^{\lambda_{p}}},$$ and the claims follow.* ]{}
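The order of magnitude claimed in (\[tailasymp\]) is also easy to check numerically. In the rough sketch below (our own; the parameters $\lambda_0=1/2$, $\lambda_1=1$ are arbitrary and the product is started at $n=2$ so that the logarithm is defined, which only affects constants), the renormalized tail indeed stabilizes.

``` python
import math

lam0, lam1 = 0.5, 1.0

def alpha(n):
    # alpha_n as in (ex4) with p = 1, defined here for n >= 2 only
    return lam0 / n + lam1 / (n * math.log(n))

prod = 1.0                       # proportional to the tail T(n), cf. (def-tail)
for n in range(2, 10 ** 6 + 1):
    prod *= 1.0 - alpha(n)
    if n in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
        # (tailasymp): T(n) * n^{lam0} * (log n)^{lam1} should stay of order one
        print(n, round(prod * n ** lam0 * math.log(n) ** lam1, 4))
```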
Recurrence and transience {#Rec-et-Trans}
=========================
First recall that a stochastic process $\{M_n\}_{n\geq 0}$ on the grid $\mathbb Z$ is said to be recurrent if for any $x\in\mathbb Z$, $$\sup\{n\geq 0 : M_n=x\}=\infty\quad a.s.,$$ and it is said to be transient (respectively transient to $\infty$, transient to $-\infty$) if $$\lim_{n\to\infty} |M_n|=\infty\quad\left(\mbox{resp.}\quad
\lim_{n\to\infty} M_n=\infty,\quad\lim_{n\to\infty} M_n=-\infty\right)\quad a.s..$$
Below, it is shown that the recurrence or the transience property of the persistent random walk $S$ is completely determined by the oscillating or drifting behaviour of the underlying skeleton random walk $M$ for which suitable criteria are available. Also, a comparison theorem, involving only the distribution tails of the length of runs, is given.
Equivalent criteria and comparison lemma
----------------------------------------
The following structure theorem, stated in [@Feller Theorem 1., Chap. XII and Theorem 4., Chap. VI] for instance, describes the long time behaviour of a one-dimensional random walk.
\[RW\] Any random walk $M$ on $\mathbb R$ which is not almost surely constant satisfies either $$\limsup_{n\to\infty} M_{n}=\infty\quad\mbox{and}\quad\liminf_{n\to\infty} M_{n}=-\infty\quad a.s.\quad(\mbox{said to be oscillating}),$$ or $$\lim_{n\to\infty} M_{n}=\infty\quad(\mbox{resp.}\; -\infty)\quad a.s.\quad(\mbox{said to be drifting to}\;\pm\infty).$$ Moreover, when the drift of $M$ denoted by ${\mathbf d}_{{\scriptscriptstyle M}}$ is well-defined, then $M$ is oscillating if and only if ${\mathbf d}_{{\scriptscriptstyle M}}=0$ whereas $M$ is drifting to $\infty$ (resp. $-\infty$) if and only if $\mathbf d_{{\scriptscriptstyle M}}>0$ (resp. $\mathbf d_{{\scriptscriptstyle M}}<0$). In any case, $$\lim_{n\to\infty} \frac{M_{n}}{n}={\mathbf d}_{{\scriptscriptstyle M}}\quad a.s..$$
Our strategy to study recurrence *versus* transience consists in reducing the determination of the type of the persistent random walk $S$ defined in the previous section to the study of some properties of the underlying embedded random walk $M$ associated with the up-down breaking times given in (\[marche-paire\]) and illustrated in Figure \[marche\]. This is made clear by the following lemma.
\[eqrec\] The persistent random walk $S$ is either recurrent or transient according to the type (in the sense of Theorem \[RW\]) of the classical random walk $M$ of even breaking times. More precisely, one has:
a) $S$ is recurrent if and only if $M$ is oscillating.
b) $S$ is transient to $\infty$ (resp. $-\infty$) if and only if $M$ is drifting to $\infty$ (resp. $-\infty$).
First, when $M$ is oscillating, $S$ is recurrent. Next, if $M$ is drifting to $-\infty$, then $S$ is transient to $-\infty$ since the trajectory of $S$ always lies under the broken line formed by the $M_n$’s. Finally, from Theorem \[RW\], the oscillating and drifting to $\pm\infty$ behaviours form, up to a null set, a partition of the universe. Therefore, it only remains to prove that if $M$ is drifting to $\infty$, then $S$ is transient to $\infty$. It is worth noting that we assume the initial time to be an up-down breaking time as in Figure \[marche\], so that the geometric argument considered above does not apply straightforwardly. Nonetheless, the expected assertion follows by remarking that, up to an independent random variable, the skeleton random walk at odd breaking times (down-up breaking times) is equal in distribution to $M$, which ends the proof of the lemma.
Let us end this part with a comparison lemma helpful to study some perturbed persistent random walks (see Section \[perturbations\]) but also to prove the extended SLLN in Proposition \[crit-rec\]. It means that if the rises of $S$ are stochastically smaller than those of $\widetilde S$ and the opposite for the descents, then $S$ is stochastically smaller than $\widetilde S$.
\[comp\] Let $S$ and $\widetilde S$ be two persistent random walks such that the associated distribution tails of their length of runs satisfy for all $n\geq 1$, $$\label{comp1}
{\mathcal{T}}_{{\mathtt u}}(n) \leq \widetilde {{\mathcal{T}}}_{\mathtt u}(n) \quad\mbox{and}\quad{\mathcal{T}}_{{\mathtt d}}(n) \geq\widetilde{{\mathcal{T}}}_{{\mathtt d}}(n).$$ Then there exists a coupling, still denoted by $(S,\widetilde S)$ up to a slight abuse, such that for all $n \geq 1$, $$\label{comp2}
S_n\leq \widetilde S_n\quad a.s..$$
This lemma can equivalently be stated in terms of the transition probabilities of change of directions, denoted respectively by $(\alpha^{\ell}_n)$ and $(\widetilde \alpha^{\ell}_n)$ for any $\ell\in\{{\mathtt d},{\mathtt u}\}$, with the same conclusions, by considering instead the equivalent hypothesis requiring that for all $n\geq 1$, $$\label{comp3}
{\widetilde \alpha}^{{\mathtt u}}_{n}\leq \alpha_{n}^{{\mathtt u}}\quad\mbox{and}\quad\widetilde \alpha^{{\mathtt d}}_{n}\geq \alpha_{n}^{{\mathtt d}}.$$
Let $(\tau_{n}^{\ell})$ and $(\widetilde{\tau}_{n}^{\ell})$ be the associated lengths of runs and $G_{\ell}$ and $\widetilde{G}_{\ell}$ be the left continuous inverse of their cumulative distribution functions. Then inequalities in (\[comp1\]) yield that for all $x\in [0,1]$, $$G_{{\mathtt u}}(x)\leq \widetilde G_{{\mathtt u}}(x)\quad\mbox{and}\quad
G_{{\mathtt d}}(x)\geq \widetilde G_{{\mathtt d}}(x).$$ Then we can construct a coupling (see for instance the book [@Thorisson Chap. 1.3.]) of the lengths of runs such that, with probability one, for all $n\geq 1$, $$\tau_{n}^{{\mathtt u}}\leq \widetilde \tau_{n}^{\,{\mathtt u}}\quad\mbox{and}\quad \tau_{n}^{{\mathtt d}}\geq \widetilde \tau_{n}^{\,{\mathtt d}}.$$ To be more specific, considering two independent sequences $(V^{\ell}_{n})$ of uniform random variables on $[0,1]$, we can set $$\tau_{n}^{\ell}:=G_{\ell}(V_{n}^{\ell})\quad\mbox{and}\quad \tilde{\tau}_{n}^{\ell}:=\widetilde G_{\ell}(V_{n}^{\ell}).$$ Consequently, there exists a coupling of the persistent random walks $S$ and $\widetilde S$ satisfying inequality (\[comp2\]) since they are entirely determined by these lengths of runs.
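The monotone coupling used in this proof is completely explicit; the following sketch (ours, with a generalized-inverse helper named by us) draws a pair of run lengths from the same uniform variables, which is precisely the mechanism invoked above.

``` python
import random

def generalized_inverse(tail, u):
    """Left-continuous inverse of the c.d.f. of an integer run length with
    distribution tail T: G(u) = min{n >= 1 : 1 - T(n+1) >= u}."""
    n = 1
    while 1.0 - tail(n + 1) < u:
        n += 1
    return n

def coupled_runs(tail_small, tail_large, n_runs, seed=0):
    """Draw pairs (tau_k, tilde_tau_k) from the SAME uniforms V_k; if
    tail_small <= tail_large pointwise, then tau_k <= tilde_tau_k surely."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_runs):
        v = rng.random()
        pairs.append((generalized_inverse(tail_small, v),
                      generalized_inverse(tail_large, v)))
    return pairs

# Example: T(n) = 1/n versus the heavier tail T~(n) = 1/sqrt(n).
pairs = coupled_runs(lambda n: 1.0 / n, lambda n: n ** -0.5, 5)
print(pairs)
assert all(a <= b for a, b in pairs)
```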
With respect to the considerations above, it seems natural to distinguish two cases, depending on whether at least one of the mean lengths of runs $\Theta_{\mathtt u}(\infty)$ and $\Theta_{\mathtt d}(\infty)$ given in (\[def-tail\]) is finite or both are infinite. The former case corresponds to the situation in which the drift of $M$ is well-defined and is considered in the next section. The latter case, in which the definition of the drift is meaningless, is considered apart in Section \[und-drift\].
Well-defined Drift case {#wd-drift}
-----------------------
In this part, assume that the drift is well defined, that is, $\Theta_{\mathtt u}(\infty)$ or $\Theta_{\mathtt d}(\infty)$ is finite, so that $\mathbf d_{{\scriptscriptstyle S}}$ given in (\[drift-def2\]) is well-defined. We will highlight a Strong Law of Large Numbers (SLLN) for the persistent random walk and we shall prove a null drift recurrence criterion, similar to the one in the classical context of random walks with integrable jumps.
\[crit-rec\] The persistent random walk $S$ is recurrent if and only if $\mathbf d_{{\scriptscriptstyle S}}=0$ and transient otherwise. Furthermore, one has $$\label{LFGN1}
\lim_{n\to\infty}\frac{S_{n}}{n}=\mathbf d_{{\scriptscriptstyle S}}\in[-1,1]\quad a.s..$$
First remark that, in this setting, the recurrence criterion is a straightforward consequence of Theorem \[RW\] and Lemma \[eqrec\]. Besides, the law of large numbers (\[LFGN1\]) when $\Theta_{{\mathtt u}}(\infty)$ and $\Theta_{{\mathtt d}}(\infty)$ are both finite is already proved in [@peggy Proposition 4.5, p. 33] under the assumption that $\mathbf d_{{\scriptscriptstyle S}}\in(-1,1)$. Then by symmetry it only remains to consider the situation with $\Theta_{{\mathtt u}}(\infty)=\infty$ and $\Theta_{{\mathtt d}}(\infty)<\infty$ (and thus $\mathbf d_{{\scriptscriptstyle S}}=1$). Note that it is sufficient to prove the lower bound in (\[LFGN1\]) since $S_{n}\leq n$ for all $n\geq 0$. To this end, we shall construct for any $0<\varepsilon<1$ a persistent random walk $S^{\,\varepsilon}$ such that for all $n\geq 0$, $$\label{minor}
S_n^{\,\varepsilon}\leq S_n\quad\mbox{and}\quad \liminf_{n\to\infty}\frac{S^{\,\varepsilon}_{n}}{n}\geq 1-\varepsilon\quad a.s..$$ More specifically, introduce for any $\ell\in\{{\mathtt d},{\mathtt u}\}$ the truncated mean of runs defined for all $m\geq 1$ by $$\label{truncatedmeaninit}
\Theta_{\ell}(m):=\sum_{n=1}^{m}\prod_{k=1}^{n-1}(1-\alpha_{k}^\ell),$$ and choose $N\geq 1$ sufficiently large so that $$\label{approx}
\frac{\Theta_{{\mathtt u}}(N)-\Theta_{{\mathtt d}}(\infty)}{\Theta_{{\mathtt u}}(N)+\Theta_{{\mathtt d}}(\infty)}\geq
1-\varepsilon.$$ Then, we can consider a persistent random walk $S^{\,\varepsilon}$ associated with the transitions $$\label{suite-admissibles-recurrence}
\alpha_{n}^{{\mathtt u},\varepsilon}:=
\left\{
\begin{array}{lll}
\alpha_{n}^{{\mathtt u}},&\mbox{when} & 1\leq n\leq N-1,\\
1,&\mbox{when}& n\geq N,\\
\end{array}
\right.
\quad\mbox{and}\quad
\alpha_n^{{\mathtt d},\varepsilon}:= \alpha_n^{{\mathtt d}}.$$ Finally, noting that the drift of $S^{\,\varepsilon}$ is nothing but the left-hand side of (\[approx\]), the latter SLLN together with the comparison Lemma \[comp\] lead to (\[minor\]) and end the proof.
Undefined drift case {#und-drift}
--------------------
In this section we consider the remaining case in which both $\Theta_{\mathtt u}(\infty)$ and $\Theta_{\mathtt d}(\infty)$ are infinite. In this case, the information given by the expectation of one increment of $M$ is no longer sufficient to discriminate between transience and recurrence.
In fact, following Erickson [@Erickson2 Theorem 2., p. 372], the oscillating or drifting behaviour of the skeleton random walk $M$ is characterized through the cumulative distribution function of its increments $(Y_{n}):=(\tau^{{\mathtt u}}_{n}-\tau^{{\mathtt d}}_{n})$, especially if the mean is undefined. Roughly speaking, the criterion of Erickson together with Lemma \[eqrec\] implies that the persistent random walk $S$ is recurrent if the distribution tails of the positive and negative parts of an increment are comparable, and transient otherwise.
However, Erickson’s criterion does not suit to our context since the distribution of an increment is not explicitly given by the parameters of the model, but merely by the convolution of two *a priori* known distributions. More precisely, this criterion requires to settle whether the quantities $$\label{erick1}
J_{+}:=\sum_{n=1}^{\infty}\frac{n\mathbb P(Y_{1}=n)}{\sum_{k=1}^{n} \mathbb P(Y_{1}\leq -k)}\quad\mbox{and}\quad J_{-}:=\sum_{n=1}^{\infty}\frac{n\mathbb P(Y_{1}=-n)}{\sum_{k=1}^{n} \mathbb P(Y_{1}\geq k)},$$ are finite or infinite which is clearly not convenient in concrete cases. To circumvent these difficulties, we consider a sequence $(\xi_n)$ of non-degenerate [*i.i.d.*]{} Bernoulli random variables with parameter $p\in(0,1)$, independent of the sequences of length of runs $(\tau_n^{\mathtt u})$ and $(\tau_n^{\mathtt d})$. Then we introduce the following classical random walk defined for all $n\geq 0$ by $$\label{Mxi-rw}
M^{{\scriptscriptstyle \xi}}_n := \sum_{k=1}^n Y_k^{{\scriptscriptstyle \xi}},\quad \mbox{with}\quad Y_k^{{\scriptscriptstyle \xi}}:=\xi_k \tau_k^{\mathtt u}- (1-\xi_k) \tau_k^{\mathtt d}.$$ In fact, the original random walk $M$ is built from a strict alternation of rises and descents. The random walk $M^{{\scriptscriptstyle \xi}}$ can be seen as a randomized version of $M$ in the sense that the choice of a descent or a rise in the alternation is determined by flipping a coin. It turns out that the randomly modified random walk $M^{{\scriptscriptstyle \xi}}$ and the embedded random walk $M$ share the same behaviour in the sense of Theorem \[RW\] above.
When the drift is well-defined this result is only true in the symmetric situation $p=1/2$. The fact that it holds for arbitrary $p\in(0,1)$ may be surprising at first sight, but it is due to the general fact that the position of a one-dimensional random walk without mean (undefined or infinite) is primarily given by the last big jumps. For more details one can consult [@Kesten; @kesten:prob; @kesten:sol].
The proof of the following lemma is postponed to the end of this part.
\[lemme\] In the setting of Theorem \[RW\] the random walks $M$ and $M^{{\scriptscriptstyle \xi}}$ are of the same type.
Therefore, in order to obtain the oscillating or drifting property of $M$ we can apply the criterion of Erickson to $M^{{\scriptscriptstyle \xi}}$. It is then not difficult to see that the criterion consists of determining the convergence or divergence of the more tractable series (compare to (\[erick1\])) given, for any $\ell_{1},\ell_{2}$ in $\{{\mathtt u},{\mathtt d}\}$, by $$\label{jgen0}
J_{\ell_{1}\mid\ell_{2}}:=\sum_{n=1}^{\infty} \frac{n\mathbb P(\tau^{\ell_{1}}=n)}{\sum_{k=1}^{n} \mathbb P(\tau^{\ell_{2}}\geq k)}=\sum_{n=1}^{\infty} \frac{n(-\Delta {\mathcal{T}}_{\ell_{1}}(n))}{\sum_{k=1}^{n} {\mathcal{T}}_{\ell_{2}}(k)},$$ where $\Delta V(n)$ denotes the forward discrete derivative at point $n$ of the real sequence $(V_{n})$, [*i.e.*]{} $$\Delta V(n)=V(n+1)-V(n).$$ As stated in the theorem \[undefinedrt\] below, the criterion can be rewritten with the series defined by $$\label{kgen}
K_{\ell_{1}\mid\ell_{2}}:=\sum_{n=1}^{\infty}\left (1-\frac{n {\mathcal{T}}_{\ell_2}(n)}{\sum_{k=1}^n {\mathcal{T}}_{\ell_2}(k)} \right ) \frac{\mathcal T_{\ell_{1}}(n)}{\sum_{k=1}^{n} \mathcal T_{\ell_{2}}(k)}.$$ Compared with $J_{\ell_{1}\mid\ell_{2}}$, the quantities $K_{\ell_{1}\mid\ell_{2}}$ have the advantage of involving only the distribution tails and not their derivatives, *i.e.* their densities. The distribution tails are obviously more tractable in computations because of their monotonicity. It turns out that the quantity $$\label{error-term}
1-\frac{n {\mathcal{T}}_\ell(n)}{\sum_{k=1}^n {\mathcal{T}}_\ell(k)}$$ may be arbitrarily small, for instance when the distribution corresponding to ${\mathcal{T}}_\ell$ is slowly varying. We refer to [@BGT] for further considerations about regularly and slowly varying functions. However, as soon as this quantity is well-controlled, typically when it stays away from $0$, the criterion can be rewritten in terms of $$\widetilde K_{\ell_1\mid\ell_2}:=\sum_{n=1}^\infty \frac{{\mathcal{T}}_{\ell_1}(n)}{\sum_{k=1}^n {\mathcal{T}}_{\ell_2}(k)}.$$ Finally, it is worth noting that the quantity in (\[error-term\]) remains non-negative, as shown in the proof of Theorem \[undefinedrt\] below. We also want to point out that the Stolz–Cesàro lemma [@StolCes; @Stoltz2011], a discrete L'Hôpital's rule, can be convenient to study the asymptotics of the latter quantities.
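Although convergence or divergence of these series is a purely asymptotic matter, their partial sums are easy to explore numerically from the tails alone. The rough sketch below (ours, not part of the original text; the truncation level and the power-law test tails are arbitrary, and finite partial sums can of course only suggest divergence) evaluates the partial sums of $J_{\ell_1\mid\ell_2}$ and $K_{\ell_1\mid\ell_2}$ for tails of the harmonic type ${\mathcal{T}}(n)\asymp n^{-\lambda}$ appearing in the example below.

``` python
def partial_J_K(tail_1, tail_2, n_max):
    """Partial sums up to n_max of J_{1|2} and K_{1|2} defined in (jgen0), (kgen);
    tail_i(n) stands for P(tau^{ell_i} >= n)."""
    J = K = 0.0
    cum2 = 0.0                                    # running sum_{k=1}^{n} T_2(k)
    for n in range(1, n_max + 1):
        cum2 += tail_2(n)
        density_1 = tail_1(n) - tail_1(n + 1)     # P(tau^{ell_1} = n)
        J += n * density_1 / cum2
        K += (1.0 - n * tail_2(n) / cum2) * tail_1(n) / cum2
    return J, K

# Power-law tails T(n) = n^{-lambda}, as produced by harmonic transitions.
for lam_u, lam_d in ((0.5, 0.5), (0.3, 0.7)):
    t_u = lambda n, a=lam_u: n ** -a
    t_d = lambda n, a=lam_d: n ** -a
    k_ud = partial_J_K(t_u, t_d, 10 ** 5)[1]      # partial sum of K_{u|d}
    k_du = partial_J_K(t_d, t_u, 10 ** 5)[1]      # partial sum of K_{d|u}
    print((lam_u, lam_d), round(k_ud, 2), round(k_du, 2))
```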
\[undefinedrt\] \[cas-sans-drift\] The persistent random walk $S$ is recurrent if and only if $$\label{J-infini}
J_{{\mathtt u}\mid{\mathtt d}}=\infty\quad \mbox{and}\quad J_{{\mathtt d}\mid{\mathtt u}}=\infty,$$
and transient to $\infty$ (resp. transient to $-\infty$) if and only if $$\label{un-J-fini}
J_{{\mathtt u}\mid{\mathtt d}}=\infty\quad\mbox{and}\quad J_{{\mathtt d}\mid{\mathtt u}}<\infty\quad(\mbox{resp. }\quad J_{{\mathtt u}\mid {\mathtt d}}<\infty\quad\mbox{and}\quad J_{{\mathtt d}\mid{\mathtt u}}=\infty).$$ Moreover, when $J_{{\mathtt u}\mid{\mathtt d}}=\infty$ (resp. $J_{{\mathtt d}\mid{\mathtt u}}=\infty$), $$\label{limsup}
\limsup_{n\to\infty}\frac{S_{n}}{n}=1 \quad \quad \left ( \mbox{resp.}\quad \liminf_{n\to\infty}\frac{S_{n}}{n}=-1 \right ) \quad a.s..$$ Alternatively, the quantities $J_{{\mathtt u}|{\mathtt d}}$ and $J_{{\mathtt d}|{\mathtt u}}$ can be substituted with ${K}_{{\mathtt u}|{\mathtt d}}$ and $K_{{\mathtt d}|{\mathtt u}}$ respectively.
Note that the case in which $J_{{\mathtt u}\mid{\mathtt d}}$ and $J_{{\mathtt d}\mid{\mathtt u}}$ are both finite does not appear in the theorem. In fact, it follows from [@Erickson2] that, in such a case, the drift of the persistent random walk $S$ is well-defined and belongs to $(-1,1)$ which is excluded from this section. This theorem ends the characterization of the type of persistent random walks. In Table \[tableau\] the conditions for the recurrence and the transience are summarized and we give some applications of these criteria below.
  --------------------------------------- ------------------------------------------------------------------------ ---------------------------------------------------------------------------- ----------------------------------------------------------------------------
                                           Recurrent                                                                Transient to $+\infty$                                                       Transient to $-\infty$
  $\Theta_{\mathtt d}(\infty) < \infty$    $\mathbf{d}_{{\scriptscriptstyle S}}=0$                                  $\mathbf{d}_{{\scriptscriptstyle S}} \in (0,1)$                              $\mathbf{d}_{{\scriptscriptstyle S}} \in (-1,0)$
  $\Theta_{\mathtt d}(\infty) = \infty$    $J_{{\mathtt u}\mid{\mathtt d}}=J_{{\mathtt d}\mid{\mathtt u}}=\infty$    $\infty = J_{{\mathtt u}\mid{\mathtt d}} > J_{{\mathtt d}\mid{\mathtt u}}$   $\infty = J_{{\mathtt d}\mid{\mathtt u}} > J_{{\mathtt u}\mid{\mathtt d}}$
  --------------------------------------- ------------------------------------------------------------------------ ---------------------------------------------------------------------------- ----------------------------------------------------------------------------

  : \[tableau\]Recurrence and transience criteria.
### Example of harmonic transitions {#example-of-harmonic-transitions .unnumbered}
[*Consider sequences of transitions $(\alpha_{n}^{{\mathtt u}})$ and $(\alpha_{n}^{{\mathtt d}})$ in $[0,1)$ with $\lambda_{\mathtt u}$ and $\lambda_{{\mathtt d}}$ in $(0,1)$ such that for sufficiently large $n$, $$\label{harmonic}
\alpha_{n}^{{\mathtt u}}=\frac{\lambda_{{\mathtt u}}}{n}\quad\mbox{and}\quad \alpha_{n}^{{\mathtt d}}=\frac{\lambda_{{\mathtt d}}}{n}.$$ Then, using the asymptotics of the tails given in (\[tailasymp\]), the corresponding persistent random walk is recurrent if and only if $\lambda_{{\mathtt u}}=\lambda_{{\mathtt d}}$.*]{}
We need to stress that, for this toy example, the result does not hold if the equalities in (\[harmonic\]) are replaced by asymptotic equivalences. These conditions on the transitions are somehow very stiff since the distribution tails of the lengths of runs involve an infinite product. Still, interesting perturbation criteria in various contexts are given in Section \[perturbations\].
First, the statements (\[J-infini\]) and (\[un-J-fini\]) related to the recurrence and transience properties are direct consequences of Erickson's criteria [@Erickson2] and of Lemma \[lemme\] since the two-sided distribution tails of the increments of the random walk $M^{{\scriptscriptstyle \xi}}$ given in (\[Mxi-rw\]) satisfy, for all $n\geq 1$, $$\mathbb P(Y^{{\scriptscriptstyle \xi}}_{1}\geq n)=\mathbb P(\xi_{1}=1)\mathbb P(\tau^{{\mathtt u}}\geq n)\quad\mbox{and}\quad \mathbb P(Y^{{\scriptscriptstyle \xi}}_{1}\leq -n)= \mathbb P(\xi_{1}=0)\mathbb P(\tau^{{\mathtt d}}\geq n).$$
Besides, from the equalities $$T_{n}=\sum_{k=1}^{n}\tau_{k}^{{\mathtt u}}+\sum_{k=1}^{n}\tau_{k}^{{\mathtt d}}\quad\mbox{and}\quad
S_{T_{n}}=\sum_{k=1}^{n}\tau_{k}^{{\mathtt u}}-\sum_{k=1}^{n}\tau_{k}^{{\mathtt d}}$$ we can see that (\[limsup\]) is satisfied if for all $c>0$, $$\label{bigjump}
{\mbox{$\mathbb{P}$}}\left ( \tau_n^{\mathtt u}\geq c \sum_{k=1}^{n} \tau_k^{\mathtt d}\quad i.o.\right ) =1.$$ In fact, using the Kolmogorov’s zero–one law, we only need to prove that this probability is not zero. To this end, we can see that [@Kesten Theorem 5., p. 1190] applies and it follows that $$\label{kesten}
\limsup_{n\to\infty}\frac{(Y_{n}^{{\scriptscriptstyle \xi}})^{+}}{\sum_{k=1}^{n} (Y_{k}^{{\scriptscriptstyle \xi}})^{-}}=\limsup_{n\to\infty}\frac{\xi_{n}\tau^{{\mathtt u}}_{n}}{\sum_{k=1}^{n}(1-\xi_{k})\tau_{k}^{{\mathtt d}}}=\infty\quad a.s..$$ Roughly speaking, this theorem states that the position of a one-dimensional random walk with an undefined mean is essentially given by the last big jump. Introducing the counting process given for all $n\geq 1$ by $$\label{count}
N_{n}:=\#\{1\leq k\leq n : \xi_{k}= 0\},$$ we shall prove that $$\label{randomwalk}
\left\{\sum_{k=1}^{n}(1-\xi_{k})\tau_{k}^{{\mathtt d}}\right\}_{n\geq 1}\overset{\mathcal L}{=}
\left\{\sum_{k=1}^{N_{n}}\tau_{k}^{{\mathtt d}}\right\}_{n\geq 1}.$$ For this purpose, we will see that the sequences of increments consists of independent random variables and are equal in distribution in the following sense $$\label{randomwalkincr}
\left\{(1-\xi_{n})\tau^{{\mathtt d}}_{n}\right\}_{n\geq 1}\overset{\mathcal L}{=} \left\{(1-\xi_{n})\tau^{{\mathtt d}}_{N_{n}}\right\}_{n\geq 1}.$$ First note that for any $n\geq 1$, $$\label{loi}
\mathbb P((1-\xi_{n})\tau_{N_{n}}^{{\mathtt d}}=0)=\mathbb P(\xi_{1}=1)=\mathbb P((1-\xi_{n})\tau_{n}^{{\mathtt d}}=0).$$ Moreover, up to a null set, we have $\{\xi_{n}=0\}=\{N_{n}=N_{n-1}+1\}$ and $N_{n-1}$ is independent of $\xi_{n}$ and of the lengths of runs. We deduce that for any $k\geq 1$, $$\label{loi2}
\mathbb P((1-\xi_{n})\tau_{N_{n}}^{{\mathtt d}}=k)=\mathbb P(\xi_{1}=0,\tau_{1}^{{\mathtt d}}=k)=\mathbb P((1-\xi_{n})\tau_{n}^{{\mathtt d}}=k).$$ Hence the increments of the random walks in (\[randomwalk\]) are identically distributed. Since the increments on left-hand side in (\[randomwalkincr\]) are independent, it only remains to prove the independence of those on the right-hand side in to obtain the equality in distribution in . Let us fix $n\geq 1$ and set for any non-negative integers $k_{1},\cdots,k_{n}\geq 0$, $$I_{n}:=\{1\leq j\leq n : k_{j}\neq 0\} \quad\mbox{and}\quad m_{n}:=\mathsf{card}(I_{n}).$$ Remark that $\ell\longmapsto m_{\ell}$ is increasing on $I_{n}$ and up to a null set,$$\bigcap_{\ell\notin I_{n}}\{\xi_{\ell}=1\}\cap\bigcap_{\ell\in I_{n}}\{\xi_{\ell}=0\}\subset \{N_{n}=m_{n}\}.$$ Then using (\[loi\]) and (\[loi2\]) together with the independence properties we can see that $$\mathbb P\left(\bigcap_{j=1}^{n} \{(1-\xi_{j})\tau_{N_{j}}^{{\mathtt d}}=k_{j}\}\right)=
\mathbb P\left(\bigcap_{\ell\notin I_{n}}\{\xi_{\ell}=1\}\cap\bigcap_{\ell\in I_{n}}\{\xi_{\ell}=0,\tau_{m_{\ell}}^{{\mathtt d}}=k_{\ell}\}\right)=
\prod_{j=1}^{n} \mathbb P((1-\xi_{j})\tau_{N_{j}}^{{\mathtt d}}=k_{j}),$$ which ends the proof of (\[randomwalk\]).
Next, by the standard LLN for *i.i.d.* sequences, we obtain that for any integer $q$ greater than $1/p$, with probability one, the events $\{N_{n}\geq \lfloor n/q\rfloor\}$ hold for all sufficiently large $n$. We deduce by (\[randomwalk\]) and (\[kesten\]) that $$\label{bigjump2}
{\mbox{$\mathbb{P}$}}\left(\tau_n^{\mathtt u}\geq c \sum_{k=1}^{\left\lfloor n/q\right\rfloor} \tau_k^{\mathtt d}\quad i.o. \right)=1.$$ As a consequence, $$\label{bigjump3}
\mathbb P\left(\bigcup_{\ell=0}^{q-1}
\left\{\tau_{qn+\ell}^{{\mathtt u}}\geq c \sum_{k=1}^{n} \tau_k^{\mathtt d}\right\}\quad i.o.\right)=1.$$ Again, applying Kolmogorov's zero-one law, we get that the $q$ sequences of events (having the same distribution) in the latter equation occur infinitely often with probability one. We deduce that (\[bigjump\]) is satisfied and this achieves the proof of (\[limsup\]).
For the alternative form of the theorem, it remains to prove that $J_{{\mathtt u}|{\mathtt d}}=\infty$ if and only if $K_{{\mathtt u}\mid{\mathtt d}}=\infty$. Summing by parts (the so called *Abel transformation*) we can write for any $r\geq 1$, $$\sum_{n=1}^r \frac{n (-\Delta {\mathcal{T}}_{{\mathtt u}}(n))}{\sum_{k=1}^n {\mathcal{T}}_{\mathtt d}(k)} = \left[1-\frac{(r+1){\mathcal{T}}_{{\mathtt u}}(r+1)}{\sum_{k=1}^{r+1} {\mathcal{T}}_{\mathtt d}(k)}\right]+\sum_{n=1}^{r} \Delta\left( \frac{n}{\sum_{k=1}^n {\mathcal{T}}_{\mathtt d}(k)}\right){\mathcal{T}}_{{\mathtt u}}(n+1).$$ Besides, a simple computation gives $$\label{positif}
\Delta \left (\frac{n}{\sum_{k=1}^n {\mathcal{T}}_{\mathtt d}(k)} \right ) = \frac{\sum_{k=1}^{n} {\mathcal{T}}_{\mathtt d}(k)-n {\mathcal{T}}_{\mathtt d}(n+1)}{\sum_{k=1}^{n+1}{\mathcal{T}}_{\mathtt d}(k)
\sum_{k=1}^{n\phantom{+1}}{\mathcal{T}}_{\mathtt d}(k)}
=\frac{\mathbb E[\tau^{{\mathtt d}}\mathds 1_{\tau^{{\mathtt d}}\leq n}]}{\sum_{k=1}^{n+1}{\mathcal{T}}_{\mathtt d}(k)
\sum_{k=1}^{n\phantom{+1}}{\mathcal{T}}_{\mathtt d}(k)}\geq 0.$$ It follows that $$\label{eqJK1}
\sum_{n=1}^r \frac{n (-\Delta {\mathcal{T}}_{{\mathtt u}}(n))}{\sum_{k=1}^n {\mathcal{T}}_{\mathtt d}(k)} = \left[1-\frac{(r+1){\mathcal{T}}_{{\mathtt u}}(r+1)}{\sum_{k=1}^{r+1} {\mathcal{T}}_{\mathtt d}(k)}\right]+\sum_{n=1}^{r} \left(1-\frac{n{\mathcal{T}}_{{\mathtt d}}(n+1)}{\sum_{k=1}^{n}{\mathcal{T}}_{\mathtt d}(k)}\right)\frac{{\mathcal{T}}_{{\mathtt u}}(n+1)}{\sum_{k=1}^{n+1}{\mathcal{T}}_{\mathtt d}(k)}.$$ Due to the non-negativeness of the forward discrete time derivative (\[positif\]) and due to our assumption for $\Theta_{{\mathtt u}}(\infty)$ and $\Theta_{{\mathtt d}}(\infty)$ to be infinite, the general term of the series in the right-hand side of the latter equation is non-negative and (up to a shift) equivalent to that of $K_{{\mathtt u}|{\mathtt d}}$. Moreover, we get again from (\[positif\]) that $$\label{eqJK2}
\frac{r {\mathcal{T}}_{\mathtt u}(r)}{\sum_{k=1}^r {\mathcal{T}}_{\mathtt d}(k)}= \sum_{m=r+1}^\infty \frac{r(-\Delta {\mathcal{T}}_{\mathtt u}(m))}{\sum_{k=1}^r {\mathcal{T}}_{\mathtt d}(k)} \leq \sum_{m=r+1}^\infty \frac{m (-\Delta {\mathcal{T}}_{\mathtt u}(m))}{\sum_{k=1}^m {\mathcal{T}}_{\mathtt d}(k)}.$$ Thus, if $J_{{\mathtt u}|{\mathtt d}}$ is infinite then so is $K_{{\mathtt u}\mid{\mathtt d}}$. Conversely, the finiteness of $J_{{\mathtt u}\mid{\mathtt d}}$ together with the estimate (\[eqJK2\]) implies that the first term on the right-hand side of (\[eqJK1\]) remains bounded, which achieves the proof.
Deeply exploiting Theorem \[RW\], stating that any non-constant random walk is, with probability one, either oscillating or drifting to $+\infty$ or $-\infty$ (trichotomy), the proof is organized as follows:
1. At first, we shall prove the result in the symmetric case $p=1/2$.
2. Secondly, we shall deduce the statement for any arbitrary $p\in(0,1)$ from the latter particular case.
To this end, assume that the supremum limit of $M^{{\scriptscriptstyle \xi}}$ is [*a.s.*]{} infinite. Following exactly the same lines as in the proof of (\[limsup\]), we obtain that $M$ is non-negative infinitely often with probability one. Applying Theorem \[RW\] we deduce that the supremum limit of $M$ is also [*a.s.*]{} infinite. Thereafter, again from the latter theorem and by symmetry, we only need to prove that if $M^{{\scriptscriptstyle \xi}}$ is drifting, then so is $M$. When the [*i.i.d.*]{} Bernoulli random variables $(\xi_{n})$ are symmetric, that is $p=1/2$, it is a simple consequence of the equalities $$M^{{\scriptscriptstyle \xi}}
\overset{\mathcal L}{=}
M^{{\scriptscriptstyle 1}-\xi}\quad\mbox{and}\quad M=M^{{\scriptscriptstyle \xi}}+M^{{\scriptscriptstyle 1}-\xi}.$$ At this stage it is worth noting that the lemma is proved in the symmetric situation.
When $p\neq 1/2$, we shall prove that if the supremum limit of $M$ is [*a.s.*]{} infinite, then so is the supremum limit of $M^{{\scriptscriptstyle \xi}}$. Since the converse has already been proved at the beginning, we shall deduce our lemma. It is not difficult to see that if the supremum limit of $M$ is [*a.s.*]{} infinite, then so is the supremum limit of the subordinated random walk $Q=\{M_{qn}\}_{n\geq 0}$ for any $q \geq 1$ (see [@Erickson] for instance). Moreover, given an [*i.i.d.*]{} sequence of random variables $(\epsilon_{n})$ independent of the lengths of runs and distributed as symmetric Bernoulli distributions, we deduce from the first point that $Q$ and $Q^{\epsilon}$ are of the same type. Therefore, the supremum limit of $Q^{\epsilon}$ is also [*a.s.*]{} infinite and mimicking the proof of (\[bigjump\]) we get that for all $c>0$, $$\label{bigjump6}
1=\mathbb P\left(\sum_{\ell=1}^{q}\tau_{q(n-1)+\ell}^{{\mathtt u}}\geq c\sum_{k=1}^{qn}\tau_{k}^{{\mathtt d}}\quad i.o.\right)\leq \mathbb P\left(\bigcup_{\ell=1}^{q}\left\{\tau_{q(n-1)+\ell}^{{\mathtt u}}\geq \frac{c}{q}\sum_{k=1}^{qn}\tau_{k}^{{\mathtt d}}\right\}\quad i.o.\right).$$ Since the $q$ sequences of events in the right-hand side of the latter inequality are identically distributed and independent, we get from the zero-one law that, for any integers $r,q\geq 1$, and all $c>0$, $$\label{bigjump7}
\mathbb P\left(\tau_{rn}^{{\mathtt u}}\geq c\sum_{k=1}^{qn}\tau_{k}^{{\mathtt d}}\quad i.o.\right)=1.$$ Let $N$ be the renewal process associated with $(\xi_{n})$ and given in (\[count\]). Similarly to (\[randomwalk\]) we show that $$\label{randomwalk2}
\{M_{n}^{{\scriptscriptstyle \xi}}\}_{n\geq 1}=\left\{\sum_{k=1}^{n-N_{n}}\tau_{k}^{{\mathtt u}}-\sum_{k=1}^{N_{n}}\tau_{k}^{{\mathtt d}}\right\}_{n\geq 1}.$$ The key point is to write, for all $n\geq 1$ and non-zero $k_{1},\cdots, k_{n}$ in $\mathbb Z$, $$\mathbb P\left(\bigcap_{j=1}^{n} \{\xi_{j}\tau_{j-N_{j}}^{{\mathtt u}}-(1-\xi_{j})\tau_{N_{j}}^{{\mathtt d}}=k_{j}\}\right)=
\mathbb P\left({}
\bigcap_{\ell\in I_{n}^{{\mathtt u}}}\{\xi_{\ell}=1,\tau_{m_{\ell}^{{\mathtt u}}}^{{\mathtt u}}=k_{\ell}\}\cap
\bigcap_{\ell\in I_{n}^{{\mathtt d}}}\{\xi_{\ell}=0,\tau_{m_{\ell}^{{\mathtt d}}}^{{\mathtt d}}=k_{\ell}\} \right),$$ where $m_{n}^{\ell}$, with $\ell\in\{{\mathtt u},{\mathtt d}\}$ and $n\geq 1$, denote respectively the cardinal of the sets $$I_{n}^{{\mathtt u}}:=\{1\leq j\leq n : k_{j}> 0\}
\quad\mbox{and}\quad
I_{n}^{{\mathtt d}}:=\{1\leq j\leq n : k_{j}< 0\}.$$ Thereafter, by applying the strong law of large numbers to the counting process $N$, we get that for any integer $c$ greater than $1/p$ (choose also $c\geq 2$), the events $\{N_{n}\geq \lfloor n/c\rfloor\}$ occur for all large enough $n$ with probability one. We deduce from (\[randomwalk2\]) and (\[bigjump7\]) that $$\limsup_{n\to\infty} M_{n}^{{\scriptscriptstyle \xi}}\geq \limsup_{n\to\infty} M_{cn}^{{\scriptscriptstyle \xi}}\geq \limsup_{n\to\infty}\sum_{k=1}^{(c-1)n}\tau_{k}^{{\mathtt u}}-\sum_{k=1}^{n}\tau_{k}^{{\mathtt d}}=\infty\quad a.s..$$ This achieves the proof of the lemma.
\[asymp-rem\] Contrary to the well-defined drift case for which a small perturbation on the parameters of a recurrent persistent random walk leads in general to a transient behaviour, in the case of an undefined drift the persistent random walk may stay recurrent as long as the perturbation remains asymptotically controlled. To put it in a nutshell, the criterion is global in the former case and asymptotic in the latter case.
With respect to this remark, in the next section we give some examples of perturbations exhibiting stability or instability of the recurrence and transience properties in the context of Section \[und-drift\], that is, the undefined drift situation.
Perturbations results {#perturbations}
=====================
In this part, we still assume that $\Theta_{\mathtt u}(\infty)=\Theta_{\mathtt d}(\infty)=\infty$. In this context, since weaker conclusions suffice for our purpose, the assumptions of the comparison Lemma \[comp\] can be suitably relaxed in order to fit the subsequent applications, which can be roughly split into two cases.
The first one corresponds to the case of bounded perturbations in the sense of the condition below. In particular, under slight assumptions, it is shown that, for randomly chosen probabilities of change of directions, the recurrence or transience depends essentially on their deterministic means.
In the case of unbounded perturbations, the analysis is more subtle. In fact, we distinguish two regimes depending on whether the rough rate of the $\alpha_n^{\ell}$'s is close to the right-hand side of (\[ex2\]) (termed in the sequel the upper boundary) or to the left-hand side of (\[ex2moins\]) (the lower boundary). Here, the notion of closeness has to be understood in the sense of a large $p$ in (\[ex2moins\]) and (\[ex2\]). In short, the perturbations need to be thin (and actually thinner and thinner as $p$ increases) in the former case, whereas they can be chosen relatively thicker in the latter one. Finally, we provide examples of (possibly random) lacunary unbounded perturbations for which the persistent random walk remains recurrent with probabilities $\alpha_n^{\ell}$ of a rate close to the lower boundary.
Asymptotic comparison lemma and application
-------------------------------------------
Lemma \[asymp-comp\] below is somehow an improvement of the comparison Lemma \[comp\]. Actually, it means that, in the context of infinite means for the lengths of runs, it suffices to compare the tails in (\[comp1\]) asymptotically to obtain a comparison of the infimum and supremum limits of the corresponding persistent random walks. We point out, though, that the coupling inequality (\[comp2\]) no longer holds.
\[asymp-comp\] Let $S$ and $\widetilde S$ be two persistent random walks whose corresponding distribution tails of the lengths of runs satisfy, for $n$ large enough, $$\label{queues-asymptotiques}
{\mathcal{T}}_{\mathtt u}(n) \leq \widetilde{{\mathcal{T}}}_{\mathtt u}(n)\quad \textrm{and} \quad {\mathcal{T}}_{\mathtt d}(n) \geq \widetilde{{\mathcal{T}}}_{\mathtt d}(n).$$ Then, denoting by $K_{{\mathtt u}\mid{\mathtt d}}$ and $\widetilde{K}_{{\mathtt u}\mid{\mathtt d}}$ the associated quantities defined in (\[kgen\]), $$\label{compJ}
K_{{\mathtt u}\mid{\mathtt d}} = \infty\;\Longrightarrow \;\widetilde{K}_{{\mathtt u}\mid{\mathtt d}}=\infty.$$ Equivalently (see in particular (\[limsup\]) in Theorem \[undefinedrt\]), $$\label{equicomp}
\limsup_{n\to\infty} S_{n}=\infty
\;\Longrightarrow \;
\limsup_{n\to \infty}\widetilde S_{n}=\infty
\quad\mbox{or}\quad
\limsup_{n\to\infty} \frac{S_{n}}{n}=1\;\Longrightarrow \;
\limsup_{n\to \infty} \frac{\widetilde S_{n}}{n}=1 \quad a.s..$$ Again, the quantity $K_{{\mathtt u}\mid{\mathtt d}}$ can be substituted with $J_{{\mathtt u}\mid{\mathtt d}}$ given in (\[jgen0\]) and a similar comparison lemma can be deduced by symmetry exchanging ${\mathtt u}$ and ${\mathtt d}$.
We stress that in the proof of this lemma we make use of the quantities $K_{\ell_{1},\ell_{2}}$ and not the $J_{\ell_{1},\ell_{2}}$ ones since the comparisons of the tails (\[queues-asymptotiques\]) are required only for large $n$. Therefore, the inequalities on the associated densities (\[comp3\]) do not necessarily hold even asymptotically.
Let $N\geq 1$ be such that the inequalities of (\[queues-asymptotiques\]) are satisfied for all $n\geq N$ and consider the persistent random walks $S^{\mathtt c}$ and ${\widetilde S}^{\mathtt c}$ associated with the modified distribution tails given for any $\ell\in\{{\mathtt u},{\mathtt d}\}$ by $${\mathcal{T}}_\ell^{\mathtt c}(n)=
\left\{\begin{array}{lll}
{\mathcal{T}}_\ell(n), & \textrm{when} & n \geq N, \\
1, & \textrm{when} & n<N, \\
\end{array}\right.
\quad\textrm{and}\quad
\widetilde{{\mathcal{T}}}_\ell^{\mathtt c}(n)=
\left\{\begin{array}{lll}
\widetilde{{\mathcal{T}}}_{\ell}(n) & \textrm{when} & n\geq N, \\
1 & \textrm{when} & n<N. \\
\end{array} \right .$$ Due to Lemma \[comp\], there exists a coupling such that $S^{\mathtt c} \leq \widetilde{S}^{\mathtt c}$ a.s.. It follows from Theorem \[cas-sans-drift\] that (\[compJ\]) is satisfied with the quantities $K^{\mathtt c}_{{\mathtt u}\mid{\mathtt d}}$ and $\widetilde{K}^{\mathtt c}_{{\mathtt u}\mid{\mathtt d}}$, corresponding to the perturbed persistent random walk $S^{\mathtt c}$ and ${\widetilde S}^{\mathtt c}$, in place of those of $S$ and $\widetilde S$. To conclude the proof, it remains to show that $K^{\mathtt c}_{{\mathtt u}\mid{\mathtt d}}$ is infinite if and only if $K_{{\mathtt u}\mid{\mathtt d}}$ is infinite, and similarly for the tilde quantities. Since ${\mathcal{T}}_\ell^{\mathtt c}(n)$ and ${\mathcal{T}}_\ell(n)$ only differ for finitely many $n$, it comes that $$\frac{{\mathcal{T}}_{{\mathtt u}}^{\mathtt c}(n)}{\left (\sum_{k=1}^n {\mathcal{T}}^{\mathtt c}_{{\mathtt d}}(k)\right)^2}
\;\underset{n\to \infty}{\sim}\;{}
\frac{{\mathcal{T}}_{{\mathtt u}}(n)}{\left(\sum_{k=1}^n {\mathcal{T}}_{{\mathtt d}}(k)\right)^2}.$$ Here we recall that the denominators in the latter equation tend to infinity since the means of the lengths of runs are both supposed infinite. Moreover, for the same reasons, we can see that $$1- \frac{n {\mathcal{T}}_{\mathtt d}^{\mathtt c}(n)}{\sum_{k=1}^n {\mathcal{T}}_{\mathtt d}^{\mathtt c}(k)}
=
\frac{\sum_{k=1}^n {\mathcal{T}}_{\mathtt d}^{\mathtt c}(k) - n {\mathcal{T}}_{\mathtt d}^{\mathtt c}(n)}{\sum_{k=1}^n {\mathcal{T}}_{\mathtt d}^{\mathtt c}(k)}
\;\underset{n\to\infty}{\sim}\;
\frac{\sum_{k=1}^n {\mathcal{T}}_{\mathtt d}(k) - n {\mathcal{T}}_{\mathtt d}(n)}{\sum_{k=1}^n {\mathcal{T}}_{\mathtt d}(k)}=
1- \frac{n {\mathcal{T}}_{\mathtt d}(n)}{\sum_{k=1}^n {\mathcal{T}}_{\mathtt d}(k)}.$$ Indeed, the numerators around the equivalence symbol are nothing but truncated means of the lengths of runs and go to infinity. Consequently, the proof follows from the two latter equations and the expression given in (\[kgen\]).
Now we apply this lemma to the study of the recurrence and the transience of persistent random walks when the associated distribution tails are of the same order. As the previous Remark \[asymp-rem\] has already emphasized, the recurrent or the transient behaviour of a persistent random walk, provided the persistence times are not integrable, depends only on the asymptotic properties of their distribution tails. More precisely, the following property holds.
\[sameorder\] Two persistent random walks $S$ and $\widetilde S$ are simultaneously recurrent or transient if their associated distribution tails of the lengths of runs satisfy, for any $\ell\in\{{\mathtt u},{\mathtt d}\}$, $$\label{comptail}
{\mathcal{T}}_\ell(n) \asymp \widetilde{{\mathcal{T}}}_\ell(n).$$
It follows from the assumptions that there exist positive constants $c_{\mathtt u}$ and $c_{\mathtt d}$ such that for $n$ sufficiently large, $${\mathcal{T}}_{\mathtt u}(n) \leq c_{{\mathtt u}} \widetilde{{\mathcal{T}}}_{\mathtt u}(n)
\quad\textrm{and}\quad
{\mathcal{T}}_{\mathtt d}(n) \geq c_{{\mathtt d}}^{-1} \widetilde{{\mathcal{T}}}_{\mathtt d}(n).$$ Consider now truncated distribution tails satisfying, for all sufficiently large $n$, $$\widetilde {\mathcal{T}}_{\mathtt u}^{\mathtt c}(n):=c_{{\mathtt u}}\widetilde{{\mathcal{T}}}_{\mathtt u}(n)
\quad\textrm{and}\quad
\widetilde {{\mathcal{T}}}_{\mathtt d}^{\mathtt c}(n):=c_{{\mathtt d}}^{-1} \widetilde{{\mathcal{T}}}_{\mathtt d}(n).$$ Then according to Lemma \[asymp-comp\] the corresponding quantities (\[kgen\]) satisfy $$K_{{\mathtt u}\mid{\mathtt d}} = \infty\;\Longrightarrow\; \widetilde K_{{\mathtt u}\mid{\mathtt d}}^{\mathtt c}=\infty.$$ Besides, one can see that the general terms of the series defining $\widetilde K_{{\mathtt u}\mid{\mathtt d}}^{\mathtt c}$ and $\widetilde K_{{\mathtt u}\mid{\mathtt d}}$ are equivalent, namely $$\left (1-\frac{n \widetilde{{\mathcal{T}}}_{{\mathtt d}}^{\mathtt c}(n)}{\sum_{k=1}^n \widetilde{{\mathcal{T}}}_{{\mathtt d}}^{\mathtt c}(k)} \right ) \frac{\widetilde{\mathcal T}_{{\mathtt u}}^{\mathtt c}(n)}{\sum_{k=1}^{n} \widetilde{\mathcal T}_{{\mathtt d}}^{\mathtt c}(k)}\underset{n\to\infty}{\sim}
c_{{\mathtt u}}c_{{\mathtt d}}\left (1-\frac{n \widetilde{{\mathcal{T}}}_{{\mathtt d}}(n)}{\sum_{k=1}^n \widetilde{{\mathcal{T}}}_{{\mathtt d}}(k)} \right ) \frac{\widetilde{\mathcal T}_{{\mathtt u}}(n)}{\sum_{k=1}^{n} \widetilde{\mathcal T}_{{\mathtt d}}(k)}.$$ Therefore if $K_{{\mathtt u}\mid{\mathtt d}}$ is infinite, so is $\widetilde K_{{\mathtt u}\mid{\mathtt d}}$ and conversely by symmetry of the problem. The proposition follows from Theorem \[cas-sans-drift\].
Bounded perturbation criterion and application to random perturbations
----------------------------------------------------------------------
Let $S$ be a persistent random walk (without drift) associated with the probabilities of change of directions still denoted by $(\alpha^{\ell}_{n})$ for $\ell\in\{{\mathtt u},{\mathtt d}\}$. By a perturbation of this persistent random walk, we mean sequences $(\gamma^{\ell}_n)$ satisfying, for all $n\geq 1$, $$\label{perturb}
\widetilde{\alpha}_{n}^{\ell}:=\alpha_{n}^{\ell}+\gamma_{n}^{\ell}\in[0,1].$$ In addition, we say that the perturbation is bounded if for any $\ell\in\{{\mathtt u},{\mathtt d}\}$, $$\label{bounded-per}
\left\{\sum_{k=1}^n \log\left(1-\frac{\gamma^{\ell}_{k}}{1-\alpha^{\ell}_{k}}\right) : n\geq 1\right\} \quad\mbox{is bounded}.$$ It holds if for instance $$\label{bounded-per2}
\limsup_{n\to\infty}\alpha_{n}^{\ell}<1, \quad
\left\{\sum_{k=1}^n \gamma_{k}^{\ell} : n\geq 1\right\}\quad\mbox{is bounded,}\quad \mbox{and}\quad \sum_{n=1}^\infty (\gamma_n^\ell)^2 < \infty.$$
In the sequel, we shall denote by $\widetilde S$ the persistent random walk associated with the probabilities of change of directions given in (\[perturb\]). The proposition below shows that bounded perturbations do not change the recurrence or the transience behaviour. We point out that it generally fails in the well-defined drift case even for compactly supported perturbations.
\[bounded-per3\] Under a bounded perturbation the drift remains undefined. Moreover, the original and perturbed persistent random walks $S$ and $\widetilde{S}$ are simultaneously recurrent or transient.
Simply apply Proposition \[sameorder\] with the distribution tails associated with the persistent random walk $S$ and the perturbed one $\widetilde S$ noting that $$\log(\widetilde {\mathcal{T}}_{\ell}(n)) = \log({\mathcal{T}}_{\ell}(n)) +\sum_{k=1}^n \log\left(1-\frac{\gamma^{\ell}_{k}}{1-\alpha^{\ell}_{k}}\right).$$
This criterion leads to the next interesting example of a persistent random walk in a particular random environment. Given a probability space $(\Omega,\mathcal F,\mathbb Q)$ (the environment) we consider two sequences (not necessarily independent) of independent random variables $(A_{n}^{{\mathtt u}})$ and $(A_{n}^{{\mathtt d}})$ (not necessarily identically distributed) taking values in $[0,1)$ with $\mathbb Q$-probability one. We assume furthermore that their mean sequences $(\alpha^{{\mathtt u}}_{n})$ and $(\alpha^{{\mathtt d}}_{n})$ defined by $$\alpha^{{\mathtt u}}_{n}:=\mathbb E[A_{n}^{{\mathtt u}}]\in[0,1)\quad\mbox{and}\quad \alpha^{{\mathtt d}}_{n}:=\mathbb E[A_{n}^{{\mathtt d}}]\in[0,1),$$ lead to a persistent random walk $S$ with an undefined drift. Besides, we introduce the variance sequences $(v^{{\mathtt u}}_{n})$ and $(v^{{\mathtt d}}_{n})$ defined by $$v_{n}^{{\mathtt u}}:=\mathbb V[A_{n}^{{\mathtt u}}]\quad\mbox{and}\quad v^{{\mathtt d}}_{n}:=\mathbb V[A_{n}^{{\mathtt d}}],$$ and for $\mathbb Q$-almost all $\omega$, we denote by $S^{\omega}$ the persistent random walk in the random medium $\omega\in\Omega$, that is the persistent random walk associated with the transitions $(A_{n}^{{\mathtt u}}(\omega))$ and $(A_{n}^{{\mathtt d}}(\omega))$.
\[randomperturb\] Assume that for any $\ell\in\{{\mathtt u},{\mathtt d}\}$, $$\label{condition-var}
\limsup_{n \to \infty} \alpha_n^\ell < 1 \quad \mbox{and}
\quad \sum_{n=1}^{\infty} v_{n}^{\ell} < \infty.$$ Then, for $\mathbb Q$-almost all $\omega\in\Omega$, the drift of $S^{\omega}$ remains undefined and the latter persistent random walk is recurrent or transient simultaneously with the so called mean-persistent random walk $S$.
First note that for any $\ell\in\{{\mathtt u},{\mathtt d}\}$ and $\omega\in\Omega$, we can write $$A_n^\ell(\omega)=\alpha_n^\ell+(A_n^\ell(\omega)-\alpha_n^\ell),$$ so that $S^{\omega}$ can be seen as a random perturbation of $S$. Note that the random residual terms in the previous decomposition are centered and independent as $n$ varies. Moreover, the condition on the variances in (\[condition-var\]) implies, by Kolmogorov's one-series theorem (see for instance [@varadhan:01 Theorem 3.10, p. 46]), that $$\mathbb Q\left(\lim_{n\to\infty}\sum_{k=1}^n (A_k^\ell-\alpha_k^\ell)\ \mbox{exists in}\ \mathbb R\right)=1.$$ Again by (\[condition-var\]) and the independence of the $A_{n}^{\ell}$'s, for any $\ell\in\{{\mathtt u},{\mathtt d}\}$, we get that $$\sum_{n=1}^\infty (A_n^\ell-\alpha_n^\ell)^2 < \infty\quad\mathbb Q-a.s.$$ Then the proposition follows from Proposition \[bounded-per3\], more precisely from the conditions given in (\[bounded-per2\]).
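The almost sure convergence used above is easy to visualise numerically; the sketch below (an illustration with an invented choice of $\alpha_n$ and of the noise) tracks the partial sums $\sum_{k\le n}(A_k-\alpha_k)$, which settle down as predicted by the one-series theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**5
n = np.arange(1, N + 1)

alpha = 0.3 + 0.1 / n                              # limsup alpha_n < 1
A = alpha + rng.uniform(-0.3, 0.3, size=N) / n     # independent, mean alpha_n, Var ~ n^-2 (summable)

partial = np.cumsum(A - alpha)                     # converges a.s. by Kolmogorov's one-series theorem
print(partial[[10**2 - 1, 10**3 - 1, 10**4 - 1, N - 1]])
```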
In the following example, we retrieve the result on harmonic transitions given in (\[harmonic\]).
### Example of random harmonic transitions {#example-of-random-harmonic-transitions .unnumbered}
[*Let us consider for any $\ell\in\{{\mathtt u},{\mathtt d}\}$ a sequence of independent random variables $(\varepsilon_{n}^{\ell})$ with common mean $\lambda_{\ell}\in(0,1)$, almost surely bounded, and such that for all $n\geq 1$, $$A_{n}^{\ell}:=\frac{\varepsilon^{\ell}_{n}}{n}\in[0,1)\quad a.s.$$ Then the persistent random walk in the random environment given by the random probabilities of change of directions $(A_{n}^{\ell})$ is almost surely recurrent or almost surely transient according as the means $\lambda_{{\mathtt u}}$ and $\lambda_{{\mathtt d}}$ are equal or not.*]{}
Relative thinning and thickening of the perturbations near the boundaries
-------------------------------------------------------------------------
As opposed to a bounded perturbation, a perturbation is said to be unbounded if (\[bounded-per\]) is not satisfied. Even though general criteria for unbounded perturbations of persistent random walks could be worked out, they are tedious to state precisely and do not give much insight into the phenomena. Nevertheless, we highlight some representative families of examples.
To this end, introduce for any $p\geq 0$ the so-called boundaries defined for large $n$ by $$\label{boundaries}
\widecheck \beta^{(p)}_{n}:=\frac{1}{n\log(n)\cdots\log_{[p]}(n)}\quad\mbox{and}\quad
\quad\widehat\beta_{n}^{(p)}:=\frac{1}{n}+\frac{1}{n\log(n)}+\cdots+\frac{1}{n\log(n)\cdots\log_{[p]}(n)}.$$
Here close means $p$ large in (\[boundaries\]): we show that the closer we are to the sequence on the right-hand side of (\[boundaries\]) (the so-called upper boundary), the thinner the perturbation needs to be to avoid a phase transition between recurrence and transience. Conversely, the closer we are to the sequence on the left-hand side of (\[boundaries\]) (the so-called lower boundary), the thicker the perturbation can be while the recurrence or transience behaviour remains unchanged.
We start with the toy example of harmonic transitions. The proofs of the following three propositions rest on Theorem \[cas-sans-drift\] and on the order of the distribution tails given in (\[tailasymp\]); they are left to the reader.
Let us consider for any $\lambda\in(0,1)$ and $c\in\mathbb R$, probabilities of change of directions staying away from one and satisfying, for $n$ sufficiently large, $$\label{perturb1}
\alpha_{n}^{{\mathtt u}}:=\frac{\lambda}{n} \quad\mbox{and}\quad \alpha_{n}^{{\mathtt d}}:=\frac{\lambda}{n}+\frac{c}{n\log(n)}.$$ Then the associated persistent random walks are recurrent or transient according as $|c|\leq 1$ or $|c|>1$. In particular, the relative maximal perturbation while remaining recurrent is of order $$\frac{|\alpha_{n}^{{\mathtt d}}-\alpha_{n}^{{\mathtt u}}|}{\alpha_{n}^{{\mathtt u}}}=
\frac{1}{\lambda\log(n)}.$$
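Numerically, the role of $c$ is visible directly on the tails: assuming, as the identity in the proof of Proposition \[bounded-per3\] suggests, that ${\mathcal{T}}_\ell(n)=\prod_{k\le n}(1-\alpha^\ell_k)$, the perturbation in (\[perturb1\]) tilts ${\mathcal{T}}_{\mathtt d}$ relative to ${\mathcal{T}}_{\mathtt u}$ by a slowly varying factor of order $(\log n)^{-c}$. The sketch below (with parameters chosen by us, and the first few transitions clipped since the proposition only constrains large $n$) illustrates this.

```python
import numpy as np

N = 10**6
n = np.arange(1, N + 1)
lam, c = 0.5, 0.7

logn = np.log(np.maximum(n, 2))
a_u = lam / n
a_d = np.minimum(lam / n + c / (n * logn), 0.9)   # clipping only affects the first terms

T_u = np.cumprod(1.0 - a_u)        # assumed tail T_l(n) = prod_{k<=n} (1 - alpha_k^l)
T_d = np.cumprod(1.0 - a_d)

for m in (10**3, 10**4, 10**5, 10**6):
    print(m, (T_d[m - 1] / T_u[m - 1]) * np.log(m) ** c)   # approximately stabilizes
```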
When $\lambda=1$, the permitted perturbation leaving the recurrent behaviour unchanged is negligible with respect to that in (\[perturb1\]). More generally, this phenomenon appears near the upper boundary.
Let us consider for any $p\geq 0$ and $c\in\mathbb R$, probabilities of change of directions staying away from one and satisfying, for $n$ sufficiently large, $$\alpha_{n}^{{\mathtt u}}:=\widehat \beta_{n}^{(p)}\quad\mbox{and}\quad
\alpha_{n}^{{\mathtt d}}:=\widehat\beta_{n}^{(p)}+\frac{c}{n\log(n)\cdots\log_{[p+2]}(n)}.$$ Then the associated persistent random walks are recurrent or transient according as $|c|\leq 1$ or $|c|>1$. In particular, the relative maximal perturbation leaving unchanged the recurrent behaviour is of order $$\frac{|\alpha_{n}^{{\mathtt d}}-\alpha_{n}^{{\mathtt u}}|}{\alpha_{n}^{{\mathtt u}}}\leq
\frac{1}{\log(n)\cdots\log_{[p+2]}(n)}.$$
By contrast, close to the lower boundary, the perturbations allowed while preserving the same type of behaviour are (relatively) larger than the previous ones.
\[underbound\] Let us consider for any $p\geq 0$ and $c\in\mathbb R$, probabilities of change of directions staying away from one and satisfying, for $n$ sufficiently large, $$\alpha_{n}^{{\mathtt u}}:=\widecheck \beta_{n}^{(p)}\quad\mbox{and}\quad
\alpha_{n}^{{\mathtt d}}:=\widecheck\beta_{n}^{(p)}+\frac{c}{n\log(n)\cdots\log_{[p+1]}(n)}.$$ Then the associated persistent random walks are recurrent or transient according as $|c|\leq 1$ or $|c|>1$. In particular, the relative maximal perturbation while remaining recurrent is of order $$\frac{|\alpha_{n}^{{\mathtt d}}-\alpha_{n}^{{\mathtt u}}|}{\alpha_{n}^{{\mathtt u}}}\leq
\frac{1}{\log_{[p+1]}(n)}.$$
We now focus on some examples of lacunar perturbations of symmetric persistent random walks (thus recurrent). They illustrate that such a perturbed persistent random walk can remain recurrent while the lacunae have, in some sense, a large density in the set of integers. As previously, these examples are not as general as possible but still contain the main ideas.
### Example of harmonic lacunar perturbations along prime numbers {#example-of-harmonic-lacunar-perturbations-along-prime-numbers .unnumbered}
*Consider $\mathbb P\subset \mathbb N$ the set of prime numbers and set, for any positive integer $r$, $$\mathbb P_{r}:=\bigcup_{k=1}^{r} k\mathbb P.$$ Introduce for $\lambda\in(0,1]$ the transition probabilities (staying away from one) given for large $n$ by $$\alpha_{n}^{{\mathtt u}}:=\frac{\lambda}{n}\quad\mbox{and}\quad \alpha_{n}^{{\mathtt d}}:=\frac{\lambda}{n}\mathds 1_{\mathbb N\setminus \mathbb P_{r}} (n).$$ Then we can prove that the associated persistent random walk is recurrent if and only if $$\left(\sum_{k=1}^{r}\frac{1}{k}\right)\lambda\leq 1.$$ In particular, when $r=1$, it is transient when $\lambda=1$ and recurrent otherwise. The proof is a consequence of our results and the Prime Number Theorem, which imply $$\sum_{p\in\mathbb P\cap[0,n]}\frac{1}{p}\underset{n\to\infty}{=}\log\log(n)+\mathcal O(1).$$*
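The estimate $\sum_{p\le n}1/p=\log\log(n)+\mathcal O(1)$ invoked above (Mertens' second theorem) can be checked numerically; the following snippet is only an illustration of that bounded error term.

```python
from math import log
from sympy import primerange

for n in (10**4, 10**5, 10**6):
    s = sum(1.0 / p for p in primerange(2, n + 1))
    print(n, s - log(log(n)))     # stays bounded (Mertens' constant ~ 0.2615)
```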
### Example of random lacunar perturbations {#example-of-random-lacunar-perturbations .unnumbered}
*Fix $p\in\mathbb N$ and consider independent Bernoulli random variables $(\varepsilon_{n})$ such that for $n$ large enough, $$\mathbb P(\varepsilon_{n}=0):=\frac{1}{\log_{[p+1]}(n)}.$$ By the second Borel-Cantelli Lemma [@durrett2010 Theorem 5.4.11., p. 218], it follows that $$\frac{1}{n}\sum_{k=1}^{n}\mathds 1_{\{\varepsilon_{k}=0\}}\underset{n\to\infty}{\sim}\frac{1}{\log_{[p+1]}(n)}\quad a.s.$$ Even though the lacunar set $\mathbb L_{p}:=\{n\geq 1 : \varepsilon_n=0\}$ is [*a.s.*]{} of density zero in the set of integers, these sets become, in some sense, [*a.s.*]{} asymptotically of full density as $p$ tends to infinity. As a matter of fact, the function on the right-hand side of the latter equation decays arbitrarily slowly as $p$ tends to infinity.*
Furthermore, we can see by applying Propositions \[underbound\] and \[randomperturb\] above that the persistent random walks associated with the random transitions (staying away from one) given for $n$ sufficiently large by $$\alpha_{n}^{{\mathtt u}}:=\widecheck \beta_{n}^{(p)}\quad\mbox{and}\quad \alpha_{n}^{{\mathtt d}}:= \widecheck \beta_{n}^{(p)}\varepsilon_{n}=\widecheck \beta_{n}^{(p)}\mathds 1_{\mathbb L_{p}}(n),$$ are [*a.s.*]{} recurrent.
Perturbed probabilized context tree with grafts
-----------------------------------------------
Consider the double infinite comb defined in Figure \[double-peigne\] and attach to each finite leaf $c\in\mathcal C$ given in (\[leaf\]) another (possibly empty) context tree $\mathbb T_{c}$ (see Figure [\[double-peigne-2\]]{}). Denote by $\mathcal C_{c}$ the (possibly infinite) leaves of $\mathbb T_{c}$, so that the leaves of the resulting context tree are given by $$\widetilde{\mathcal C}:=\bigcup_{c\in\mathcal C}\mathcal C_{c}.$$ Then to each leaf $\tilde c\in\widetilde{\mathcal C}$ of the modified context tree corresponds a Bernoulli distribution $q_{\tilde c}$ on $\{{\mathtt u},{\mathtt d}\}$, so that we can define the persistent random walk $\widetilde S$ associated with this enriched probabilized context tree. We introduce the persistent random walks $\widehat S$ and $\widecheck S$ built from the double infinite comb and the respective probabilities of change of directions given by $$\widecheck \alpha_{n}^{{\mathtt u}}:= \sup\{q_{\tilde c}({\mathtt d}) : \tilde c\in \mathcal C_{{\mathtt u}^{n}{\mathtt d}}\}\quad\mbox{and}\quad
\widecheck \alpha_{n}^{{\mathtt d}}:= \inf\{q_{\tilde c}({\mathtt u}) : \tilde c\in \mathcal C_{{\mathtt d}^{n}{\mathtt u}}\},$$ and $$\widehat \alpha_{n}^{{\mathtt u}}:= \inf\{q_{\tilde c}({\mathtt d}) : \tilde c\in \mathcal C_{{\mathtt u}^{n}{\mathtt d}}\} \quad\mbox{and}\quad \widehat \alpha_{n}^{{\mathtt d}}:= \sup\{q_{\tilde c}({\mathtt u}) : \tilde c\in \mathcal C_{{\mathtt d}^{n}{\mathtt u}}\}.$$
![\[double-peigne-2\]Probabilized context tree (grafting of the double infinite comb)](tree2){width="0.8\linewidth"}
We can state the following lemma and application, whose proofs follow from the same coupling argument as in the proof of Lemma \[comp\]. Note that the situation of a well-defined drift is allowed in this context.
\[trees\] If the persistent random walks $\widecheck S$ and $\widehat S$ are of the same recurrent or transient type, then $\widetilde S$ is recurrent or transient accordingly. More precisely, we have $$\limsup_{n\to\infty} \widecheck S_{n}=\infty\quad a.s.
\;\Longrightarrow \;
\limsup_{n\to\infty} \widetilde S_{n}=\infty\quad a.s.,$$ and $$\liminf_{n\to\infty} \widehat S_{n}=-\infty\quad a.s.
\;\Longrightarrow \;
\liminf_{n\to\infty} \widetilde S_{n}=-\infty\quad a.s..$$
As an application, for instance, we can extend our results to some non-degenerate (in some sense) probabilized context trees built from a finite number of grafts on the double infinite comb.
### Example of finite number of non-degenerated grafts {#example-of-finite-number-of-non-degenerated-grafts .unnumbered}
[*Consider a persistent random walk $S$ built from a double infinite comb with an undefined drift and our original assumptions. Then perturb that context tree by attaching a finite number of trees $\mathbb T_{c}$, [*i.e.*]{} $$\mathsf{card}(\{c\in\mathcal C : \mathbb T_{c}\neq \emptyset\})<\infty.$$ Assume furthermore that the grafts satisfy, for all $n\geq 1$ such that $\mathbb T_{{\mathtt u}^{n}{\mathtt d}}\neq \emptyset$ or $\mathbb T_{{\mathtt d}^{n}{\mathtt u}}\neq \emptyset$, $$\widecheck \alpha_{n}^{{\mathtt u}}<1\quad\mbox{and}\quad {\widehat \alpha}_{n}^{{\mathtt d}}<1.$$ Then the resulting persistent random walk $\widetilde S$ is of the same type as $S$. For example, the latter condition is satisfied when the probabilized context trees $\mathbb T_{c}$ are finite and their attached Bernoulli distributions are non-degenerate. Note that in that case, the random walk is particularly persistent in the sense that the rises and descents are no longer independent; a renewal property still holds but is more cumbersome to state.*]{}
---
abstract: 'In this article, we consider polynomials of the form $f(x)=a_0+a_{n_1}x^{n_1}+a_{n_2}x^{n_2}+\cdots+a_{n_r}x^{n_r}\in \Z[x],$ where $|a_0|\ge |a_{n_1}|+\dots+|a_{n_r}|,$ $|a_0|$ is a prime power and $|a_0|\nmid |a_{n_1}a_{n_r}|$. We will show that under the strict inequality these polynomials are irreducible for certain values of $n_1$. In the case of equality, apart from its cyclotomic factors, they have exactly one irreducible non-reciprocal factor.'
author:
- |
Biswajit Koley, A.Satyanarayana Reddy[^1]\
Department of Mathematics, Shiv Nadar University, India-201314\
(e-mail: [email protected], [email protected]).
title: An irreducible class of polynomials over integers
---
[**[Key Words]{}**]{}: Irreducible polynomials, cyclotomic polynomials.\
[**[AMS(2010)]{}**]{}: 11R09, 12D05, 12D10.\
Introduction {#sec:intro}
============
The question of finding an irreducibility criterion for polynomials depending upon their coefficients has been studied extensively. One of the well-known criteria is Eisenstein’s criterion [@eisen], which demands a prime decomposition of the coefficients of a given polynomial. Another famous criterion, known as Perron’s criterion [@perron], does not require the prime decomposition of the coefficients:
Let $f(x)=x^n+a_{n-1}x^{n-1}+\dots+a_1x+a_0\in \Z[x]$ be a monic polynomial with $a_0\ne 0.$ If $|a_{n-1}|>1+|a_{n-2}|+|a_{n-3}|+\dots+|a_1|+|a_0|,$ then $f(x)$ is irreducible over $\Z.$
Finding an irreducibility criterion similar to Perron’s is of great interest to mathematicians. One of the works in this direction is due to Panitopol and Stefänescu [@LP]. They studied polynomials with integer coefficients whose constant term is a prime number and proved the following.
\[th1\] If $p$ is a prime and $p>|a_1|+\cdots+|a_n|,$ then $a_nx^n+\cdots+a_1x\pm p$ is irreducible.
A.I. Bonciocat and N.C. Bonciocat extended this work in [@bonciocat1] and [@bonciocat2] to prime powers. Before stating their results, we define $\tilde{g}(x)=x^{\deg(g)}g(x^{-1})$ as the [*reciprocal polynomial*]{} of the polynomial $g(x).$ It is easy to see that $g(x)$ and $\tilde{g}(x)$ are reducible or irreducible together, provided $g(0)\ne 0$. A polynomial $f(x)$ is said to be [*reciprocal*]{} if $f(x)=\pm \tilde{f}(x),$ otherwise it is called [*non-reciprocal*]{}. With this notation and observation we express the results of A.I. Bonciocat and N.C. Bonciocat in terms of $\tilde{f}(x)$. In [@bonciocat1] it is shown that if $p$ is a prime number, $p\nmid a_{2}a_0$, $p^u>|a_0a_{2}|p^{3e} +\sum_{i=3}^n |a_{0}^{i-1}a_{i}|p^{ie}$ and $u\not\equiv e\pmod{2}, u\ge 1, e\ge 0,$ then $a_nx^n+\cdots+a_{2}p^ex^2+a_0p^u$ is irreducible. In [@bonciocat2] they proved a similar result for $a_{1}\ne 0$ instead of $a_{2},$ that is, if $p\nmid a_0a_{1}$, $p^u>|a_{1}|p^{2e}+\sum_{i=2}^n|a_{0}^{i-1}a_{i}|p^{ie}, u\ge 1, e\ge 0,$ then $a_nx^n+\cdots+a_{1}p^ex+a_0p^u$ is irreducible.
Note that if $e=0, a_0=1$, then both of these conditions are the same as that of Theorem \[th1\]. A similar study of the irreducibility of polynomials with constant term divisible by a prime or prime power can be found in [@bksr], [@jankauskas], [@jonassen], [@lipka], [@weisner]. For example, Jonassen[@jonassen] gave a complete factorization of trinomials of the form $x^n\pm x^m\pm 4$. He proved that they are irreducible except for six distinct families of polynomials. Weisner [@weisner] proved that if $p$ is a prime number and $n\ge 2, m\ge 1,$ then $x^n\pm x\pm p^m$ is irreducible whenever $p^m>2$. The authors [@bksr] have shown that apart from cyclotomic factors, $x^n\pm x\pm 2$ has exactly one non-reciprocal irreducible factor.
Suppose $n_r>n_{r-1}>\cdots>n_1>0$ and $p$ is a prime number. Let $$\S_{n_1}=\{a_{n_r}x^{n_r}+a_{n_{r-1}}x^{n_{r-1}}+\cdots+a_{n_1}x^{n_1}+p^u\ep|u\ge 2, \ep=\pm 1, p\nmid a_{n_1}a_{n_r}\; \mbox{and} \;a_{n_i}\ne 0\}$$ and $\S_{n_1}^\prime=\{\, f\in \S_{n_1}\mid u \not\equiv 0\pmod{n_1}\,\}$. It is clear that for $n_1=1$, $\S_{n_1}'=\emptyset$. With these notations, the results of [@bonciocat1] and [@bonciocat2] can be combined as: [*$f\in \S_1\cup \S_2'$ is irreducible if $p^u>|a_{n_1}|+\cdots+|a_{n_r}|$* ]{}. The main result in this article is the following.
\[mainthm\] Let $f(x)=a_{n_r}x^{n_r}+a_{n_{r-1}}x^{n_{r-1}}+\cdots+a_{n_1}x^{n_1}+p^u\ep\in \S_1\cup \S_2'\cup \S_3'$ and $p^u>|a_{n_1}|+\cdots+|a_{n_r}|$. Then $f(x)$ is irreducible.
The above result need not be true if $f(x)\in \S_2\cup \S_3.$ For example, $x^4+4\ep x^3+3^3\in \S_3\setminus \S_3'$ and $$x^4+4\ep x^3+3^3=(x+3\ep)^2(x^2-2\ep x+3),\;\; \mbox{where}\; \ep=\pm 1.$$ In [@bonciocat1], it is given that $f(x)=x^4+(2^{k+1}-1)x^2+2^{2k}=(x^2+x+2^k)(x^2-x+2^k)\in \S_2\setminus \S_2',$ for every $k\ge 1$. Another example, $x^7+x^5+x^3+2^3=(x^3-x^2-x+2)(x^4+x^3+3x^2+2x+4)$, is the problem $007:14$ stated at the West Coast Number Theory conference in 2007 by Walsh [@myerson]. The example $$x^{12}+x^8+x^4-16=(x^3-x^2-x+2)(x^3+x^2-x-2)(x^6+3x^4+5x^2+4)$$ is taken from [@jankauskas].
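These factorizations are easy to verify with a computer algebra system; the following sympy snippet (an illustrative check only, not part of the original argument) reproduces them.

```python
from sympy import symbols, factor

x = symbols('x')
print(factor(x**4 + 4*x**3 + 27))                 # (x + 3)**2 * (x**2 - 2*x + 3)
print(factor(x**4 + (2**4 - 1)*x**2 + 2**6))      # the family above with k = 3
print(factor(x**7 + x**5 + x**3 + 2**3))          # Walsh's problem 007:14
print(factor(x**12 + x**8 + x**4 - 16))           # the example from Jankauskas
```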
The following examples illustrate the necessity of the condition $p\nmid a_{n_r}a_{n_1}$ in the definition of $\S_{n_1}:$ $$\begin{aligned}
& x^3-x^2-10x+16=(x-2)(x^2+x-8);\notag\\
& 2x^3-3x^2-27=(x-3)(2x^2+3x+9);\notag\\
& 3x^6+x^5-3x^3-81=(x^2-3)(3x^4+x^3+9x^2+27);\notag\\
& x^8+2x^6+6x^4-81=(x^2+3)(x^6-x^4+9x^2-27).\notag\end{aligned}$$ The last example shows that it is not possible to drop the condition $p\nmid a_{n_1}$ even for larger values of $n_1$. The condition on the coefficients given in Theorem \[mainthm\] forces all the roots of $f$ to lie outside the unit circle. More generally,
\[rem:non-reciprocal\] Let $f(x)=a_{n_r}x^{n_r}+a_{n_{r-1}}x^{n_{r-1}}+\cdots+a_{n_1}x^{n_1}+a_0$ be any polynomial of degree $n_r$ and $|a_0|>|a_{n_1}|+\cdots+|a_{n_r}|.$ Then every root of $f(x)$ lies outside the unit circle.
Let $z$ be a root of $f(x)$ with $|z|\le 1$. Then $f(z)=0$ and taking modulus on both sides of $$-a_0=a_{n_1}z^{n_1}+\cdots+a_{n_r}z^{n_r},$$ we get $|a_0|\le |a_{n_1}|+\cdots+|a_{n_r}|$, which contradicts the hypothesis. Therefore, all the roots of $f(x)$ lie in the region $|z|>1$.
Recall that if $z\ne 0$ is a root of a reciprocal polynomial, then so is $\frac{1}{z}.$ In other words, every reciprocal polynomial has a root that lies inside or on the unit circle. Hence, if $f(x)$ satisfies the hypothesis of Remark \[rem:non-reciprocal\], then every factor of $f(x)$ is non-reciprocal. A natural question is whether one can determine the number of irreducible factors of $f(x)$.
The remark of Schinzel given by Jankauskas in [@jankauskas] states that there are at most $\Omega(k)$ irreducible non-reciprocal factors for polynomials of the form $x^n+x^m+x^r+k, k\in \N$, where $\Omega(k)$ denotes the total number of prime factors of $k$ with repetitions. Jankauskas [@jankauskas] gave the following example $$x^{12}+x^8+x^4+52=(x^2-2x+2)(x^2+2x+2)(x^8-3x^4+13)$$ to establish the sharpness of the above remark. However, for the present family of polynomials, the number of irreducible factors is usually much less than that of $\Omega(p^u)=u$. We apply the method followed by Ljunggren[@lju] to study the behavior of the factors of $f(x)\in \S_{n_1}$ and we will show that
\[cor:n\_1=2\] Suppose $f(x)=a_{n_r}x^{n_r}+a_{n_{r-1}}x^{n_{r-1}}+\cdots+a_{n_1}x^{n_1}+p^u\ep\in \S_{n_1}$ is reducible, where $n_1\in \{2,3\}$ and $p^u>|a_{n_1}|+\cdots+|a_{n_r}|$. Then $f(x)$ has at most $n_1$ non-reciprocal irreducible factors.
Later we consider the equality condition $p^u=|a_{n_1}|+\cdots+|a_{n_r}|$. The authors have already considered the case $u=1$ and $p=|a_{n_1}|+\cdots+|a_{n_r}|$ in [@bksr]. Here we will establish similar results for $u\ge 2$.
\[cyclothm\] Let $f(x)=a_{n_r}x^{n_r}+a_{n_{r-1}}x^{n_{r-1}}+\cdots+a_{n_1}x^{n_1}+p^u\ep\in \S_1\cup \S_2'\cup \S_3'$ be reducible and $|a_{n_r}|+|a_{n_{r-1}}|+\cdots+|a_{n_1}|=p^u$. Then $f(x)=f_c(x)f_n(x)$, where $f_n(x)$ is the irreducible non-reciprocal factor of $f(x)$ and $f_c(x)=\gcd(x^{n_1}+{\operatorname{sgn}}(a_{n_1}\ep),\ldots,x^{n_r}+{\operatorname{sgn}}(a_{n_r}\ep) ),$ ${\operatorname{sgn}}(x)$ being the sign of $x\in \R.$
The following example does not satisfy the hypothesis of Theorem \[cyclothm\] and has more than one non-reciprocal factor. However, its cyclotomic factor still arises from the expression of $f_c(x)$ given in Theorem \[cyclothm\], $$4x^6+5x^2+9=(x^2+1)(2x^2-4x+3)(2x^2+4x+3).$$
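For this example, the cyclotomic part predicted by the formula for $f_c(x)$ can be checked directly; the sympy sketch below (an illustration only) computes $f_c=\gcd(x^6+1,x^2+1)$ and the non-reciprocal cofactor.

```python
from sympy import symbols, gcd, div, factor

x = symbols('x')
f = 4*x**6 + 5*x**2 + 9                  # constant 9 = 3^2 = 4 + 5, eps = +1
f_c = gcd(x**6 + 1, x**2 + 1)            # gcd(x^{n_i} + sgn(a_{n_i} eps))
q, r = div(f, f_c, x)
print(f_c, r)                            # x**2 + 1, remainder 0
print(factor(q))                         # (2*x**2 - 4*x + 3)*(2*x**2 + 4*x + 3)
```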
There are polynomials for which $p|a_{n_1}a_{n_r}$ and they may or may not be of the form $f_c(x)f_n(x)$. For example,
1. $$\begin{aligned}
& 3x^4+11x^2+2x+16=(x^2+x+2)(3x^2-3x+8);\notag\\
& 9x^5+5x^3+2x+16=(x+1)(9x^4-9x^3+14x^2-14x+16),\label{eq:eq3}
\end{aligned}$$
2. $$\begin{aligned}
& 3x^8+2x^6+9x^4+2x^2+16=(x^4-x^2+2)(3x^4+5x^2+8);\notag\\
& 9x^{10}+5x^6+2x^2+16=(x^2+1)(9x^8-9x^6+14x^4-14x^2+16),\label{eq:eq2}\end{aligned}$$
3. $$\begin{aligned}
& 3x^{12}+11x^6+2x^3+16=(x^6+x^3+2)(3x^6-3x^3+8);\notag\\
& 5x^{15}+9x^9+2x^3+16=(x+1)(x^2-x+1)(5x^{12}-5x^9+14x^6-14x^3+16),\label{eq:eq1}\end{aligned}$$
Equations (\[eq:eq3\]), (\[eq:eq2\]) and (\[eq:eq1\]) show that the form of $f_c(x)$ in Theorem \[cyclothm\] is the same when $n_1\le 3$, even though these polynomials do not belong to $\S_{n_1}$. This motivates us to show that the second part of Theorem \[cyclothm\] is true even for a larger class of polynomials.
\[pro:fc(x)\] Let $f(x)=a_{n_r}x^{n_r}+\cdots+a_{n_1}x^{n_1}+a_0\in \Z[x]$ be a polynomial with $|a_0|= |a_{n_1}|+\cdots+|a_{n_r}|$ and $f_c(x)= \gcd(x^{n_r}+{\operatorname{sgn}}(a_0a_{n_r}), x^{n_{r-1}}+{\operatorname{sgn}}(a_0a_{n_{r-1}}), \ldots, x^{n_1}+{\operatorname{sgn}}(a_0a_{n_1}))$. If $f(x)$ has a cyclotomic factor, then $f_c(x)|f(x)$ and $f_c(x)$ is the product of all cyclotomic factors of $f(x).$
The polynomial $f(x)=x^4+x^2-8$ is irreducible, but $f_c(x)=\gcd(x^4-1,x^2-1)=x^2-1$ does not divide $f(x).$ Let $g(x)=x^4+(k+1)x^3+x^2-k,$ where $k\ge 1.$ Then $g(x)=\Phi_3(x)h(x)$ for some $h(x)\in \Z[x]$, whereas $g_c(x)=\gcd(x^4-1, x^3-1, x^2-1)=x-1$ and $g(1)\ne 0$. Thus, Proposition \[pro:fc(x)\] is no longer true without the equality condition on the coefficients. One can conclude that for any $f(x)=a_{n_r}x^{n_r}+\cdots+a_{n_1}x^{n_1}+a_0\in \Z[x],$ the following hold:
1. if $|a_0|>|a_{n_1}|+\cdots+|a_{n_r}|$, then from the Remark \[rem:non-reciprocal\], $f(x)$ is not divisible by a cyclotomic polynomial.
2. if $|a_0|=|a_{n_1}|+\cdots+|a_{n_r}|$, then from the Proposition \[pro:fc(x)\], $f(x)$ has cyclotomic factors if and only if $f_c(x)\ne 1.$
Let $n$ be a positive integer. We denote by $e(n)$ the largest power of $2$ dividing $n$, that is, if $n=2^an_1$ with $n_1$ odd, then $e(n)=2^a$. Under some special restrictions on the exponents of $x$ in $f(x)$, Theorem \[cyclothm\] provides various useful irreducibility criteria for polynomials of this nature. For example,
\[cor:pos\] Suppose $f(x)=a_{n_r}x^{n_r}+a_{n_{r-1}}x^{n_{r-1}}+\cdots+a_{n_1}x^{n_1}+p^u\ep\in (\S_1\cup \S_2'\cup \S_3')\cap \Z_+[x]$ is a polynomial and $a_{n_1}+a_{n_2}+\dots+a_{n_{r-1}}+a_{n_r}=p^u$. Then $f(x)$ is irreducible if and only if there exist distinct $i,j$ such that $e(n_i)\ne e(n_j).$
A few applications of these results in the case of trinomials are shown in Section \[sec:app\].
Proofs {#sec:proofs}
======
Suppose $n,m$ are two positive integers. It is known that $(x^n-1,x^m-1)=x^{(n,m)}-1$. We will use the following lemma later in the paper to draw several consequences of Theorem \[mainthm\] and Theorem \[cyclothm\]. See [@bksr] for the detailed proof.
\[lem:basic\_cyclo\] Suppose $n,m$ are two positive integers. Then $$(x^n+1, x^m+1)=\begin{cases}
x^{(n,m)}+1 &\mbox{ if $e(m)=e(n);$}\\
1 &\mbox{ otherwise,}
\end{cases}$$ and $$(x^n+1, x^m-1)=\begin{cases} x^{(n,m/2)}+1 &\mbox{ if $e(m)\ge 2e(n);$}\\
1 &\mbox{ otherwise.}
\end{cases}$$
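The two formulas can be spot-checked symbolically; the following snippet (a sanity check with sample exponents chosen by us) verifies a few instances.

```python
from sympy import symbols, gcd

x = symbols('x')
# e(12) = 4 = e(20): common factor x^{gcd} + 1; e(12) != e(6): coprime.
print(gcd(x**12 + 1, x**20 + 1))     # x**4 + 1
print(gcd(x**12 + 1, x**6 + 1))      # 1
# e(24) = 8 >= 2*e(12): gcd(x^12+1, x^24-1) = x^{gcd(12,12)} + 1.
print(gcd(x**12 + 1, x**24 - 1))     # x**12 + 1
print(gcd(x**12 + 1, x**20 - 1))     # 1, since e(20) = 4 < 2*e(12)
```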
Since it is known that Theorem \[mainthm\] is true when $f\in \S_1\cup \S_2'$, we prioritize the irreducibility of $f\in \S_3'$. Theorem \[mainlem\] will provide an alternate proof for the irreducibility of $f\in \S_1\cup \S_2'$ by the approach followed in the proof of Lemma \[lem1\].
If $f\in \S_1\cup \S_2\cup\S_3$, then either $f(0)=p^u$ or $f(0)=-p^u.$ Since the irreducible factors of $f(x)$ and $-f(x)$ are the same up to sign, without loss of generality we will assume that at least one of the irreducible factors of $f(x)$ has a positive constant term.
\[lem1\] Let $f(x)=a_{n_r}x^{n_r}+a_{n_{r-1}}x^{n_{r-1}}+\cdots+a_{n_1}x^{n_1}+p^u\ep\in \S_3'$ be reducible. Then the constant term of one of the irreducible factors of $f(x)$ is $|f(0)|.$
Suppose $f(x)=f_1(x)f_2(x)$ is a non trivial factorization of $f(x)$ with $\deg(f_1)=s$. Let $g(x)=f_1(x)\tilde{f}_2(x)=\sum\limits_{i=0}^{n_r} b_ix^i.$ Then $\tilde{g}(x)=\sum\limits_{i=0}^n b_{n_r-i}x^i$. Since $g(x)\tilde{g}(x)=f(x)\tilde{f}(x)$, comparing the leading coefficient and the coefficient of $x^{n_r}$, we get $$b_0b_{n_r}=p^ua_{n_r}\ep; \qquad \sum\limits_{i=0}^{n_r} b_i^2=p^{2u}+\sum\limits_{i=1}^{r}a_{n_i}^2,$$ respectively. Let $b_0=p^{\alpha}d$ and $b_{n_r}=p^{u-\alpha}d_1,$ where $dd_1=a_{n_r}\ep$ and $\alpha\ge 0$. Then the equation $\sum\limits_{i=0}^{n_r} b_i^2=p^{2u}+\sum\limits_{i=1}^{r}a_{n_i}^2$ can be written as $$\sum\limits_{i=1}^{n_r-1}b_i^2=p^{2u}-p^{2\alpha}d^2-p^{2(u-\alpha)}d_1^2+\sum\limits_{i=1}^{r}a_{n_i}^2.$$ Suppose that $b_i$ is nonzero whenever $i\in\{0,j_1,j_2,\ldots,j_t,n_r\},$ where $0<j_t<j_{t-1}<\cdots<j_1<n_r.$ Then $g(x)=b_{n_r}x^{n_r}+b_{j_1}x^{j_1}+\cdots+b_{j_t}x^{j_t}+b_0$ and $$\label{eq1}
g(x)\tilde{g}(x)=p^ua_{n_r}\ep x^{2n_r}+b_{n_r}b_{j_t}x^{2n_r-j_t}+b_0b_{j_1}x^{n_r+j_1}+\cdots+p^ua_{n_r}\ep.$$
Our goal is to show that $\alpha=0$. Suppose, on the contrary, that $1\le \alpha\le u/2$.
First we will show that $1\le \alpha\le u/2$ is not possible if $n_r\ge n_1+n_{r-1}=3+n_{r-1}.$ Let $n_r\ge 3+n_{r-1}.$ Then the term with second largest exponent of $x$ in $$\label{eq2}
f(x)\tilde{f}(x)=p^ua_{n_r}\ep x^{2n_r}+a_{n_r}a_{n_1} x^{2n_r-3}+p^u\ep a_{n_{r-1}}x^{n_r+n_{r-1}}+\cdots+p^ua_{n_r}\ep,$$ is either $a_{n_r}a_{n_1}x^{2n_r-3}$ or $a_{n_r}a_{n_1}x^{2n_r-3}+p^u\ep a_{n_{r-1}} x^{n_r+n_{r-1}}$. Because of the condition $p\nmid a_{n_1}a_{n_r}$ in the definition of $\S_{n_1},$ the coefficient of the second largest exponent of $x$ in Equation (\[eq2\]) is not divisible by $p.$ Therefore, if we are able to show that the corresponding coefficient in Equation (\[eq1\]) is always divisible by $p,$ then we arrive at a contradiction, which in turn implies that the assumption $1\le \alpha\le u/2$ is not correct, and hence $\alpha$ has to be zero. So we aim to find the coefficient of the second largest exponent of $x$ in Equation (\[eq1\]) and show that it is divisible by $p.$ That coefficient depends on $j_t$ and $j_1$, and the possible cases for $j_t$ and $j_1$ are as follows.
[3]{}
1. $j_t=3$ or $j_1=n_r-3$
2. $j_t>3$ and $j_1<n_r-3$
3. $j_t>3$ and $j_1>n_r-3$
4. $j_t<3$ and $j_1<n_r-3$
5. $j_t<3$ and $j_1>n_r-3.$
If $j_t=3$ or $j_1=n_r-3,$ then we are through as $p|b_0$ and $p|b_{n_r}$. If $j_t>3, j_1<n_r-3$, then for every $i$ $$2n_r-j_i\le 2n_r-j_t<2n_r-3,$$ and for every $i\ne l$ $$n_r+j_i-j_l<n_r+j_i<2n_r-3.$$ Hence the second largest exponent in $g(x)\tilde{g}(x)$ is less than $2n_r-3$ implies that the case $j_t>3$ and $j_1<n_r-3$ cannot arise.
Let $j_t>3$ and $j_1>n_r-3.$ Then $$2n_r-j_i\le 2n_r-j_t<2n_r-3$$ for every $i$ and $j_1>n_r-3$ implies either $j_1=n_r-1$ or $j_1=n_r-2.$ If $j_1=n_r-1$, then $x^{2n_r-1}$ has coefficient $b_0b_{j_1}(\ne 0)$ in $g(x)\tilde{g}(x)$ while the term is absent in $f(x)\tilde{f}(x).$ A similar contradiction arises when $j_1=n_r-2.$
With a little work along the same lines, one can show that the case $j_t<3$ and $j_1<n_r-3$ is also not possible.
Let $j_t< 3$ and $j_1> n_r-3$. There are two possibilities: either $j_t=1, j_1>n_r-3$ or $j_t=2, j_1>n_r-3$. We consider both the cases separately.
[*Case I:*]{} Let $j_t=1$ and $j_1>n_r-3$. If $j_1=n_r-2,$ then $x^{2n_r-1}$ has coefficient $b_{n_r}b_{j_t}$ in Equation (\[eq1\]) while $x^{2n_r-1}$ is absent in Equation (\[eq2\]). So, $j_1$ has to be $n_r-1$ and $b_{n_r}b_{j_t}+b_0b_{j_1}=0$. By using the values of $b_0$ and $b_{n_r}$, we deduce that $$\label{neq1}
b_{j_1}=-\frac{p^{u-2\alpha}d_1b_{j_t}}{d},$$ and hence $p^{u-2\alpha}|b_{j_1}$. As we did for $j_1$ and $j_t$, we now consider the different possible values of $j_2$ and $j_{t-1}$. Note that it is not possible to have $j_{t-1}>3$ and $j_2<n_r-3$ simultaneously; otherwise $g(x)\tilde{g}(x)$ would have second largest exponent $<2n_r-3$.
Let $j_{t-1}=3$ or $j_2=n_r-3$. Then the coefficient of $x^{2n_r-3}$ in Equation is $$\begin{cases}
b_{n_r}b_{j_{t-1}}+b_{0}b_{j_2} &\mbox{ if $j_{t-1}=3, j_2=n_r-3$;}\\
b_{n_r}b_{j_{t-1}} &\mbox{ if $j_{t-1}=3, j_2\ne n_r-3$;}\\
b_0b_{j_2} &\mbox{ if $j_{t-1}\ne 3, j_2=n_r-3$,}
\end{cases}$$ each of them is divisible by $p$.
Let $j_{t-1}< 3$ and $j_2<n_r-3$. Since $j_{t-1}=2$, the coefficient of $x^{2n_r-3}$ in Equation (\[eq1\]) is $$\begin{cases}
b_{j_1}b_{j_{t-1}}+b_{n_r}b_{j_{t-2}} & \mbox{ if $j_{t-2}=3$;}\\
b_{j_1}b_{j_{t-1}} &\mbox{ otherwise.}
\end{cases}$$ From Equation (\[neq1\]), $p$ will divide the above coefficient provided $u\ne 2\alpha$.
If $u=2\alpha$, then Equation (\[neq1\]) reduces to $$\label{neq2}
b_{j_1}=-\frac{d_1b_{j_t}}{d}.$$ Since $j_t=1, j_1=n_r-1, j_{t-1}=2, j_2<n_r-3$, the coefficient of $x^{2n_r-2}$ in $g(x)\tilde{g}(x)$ is $b_{j_1}b_{j_t}+b_{j_{t-1}}b_{n_r}=0$. As $p|b_{n_r},$ using Equation (\[neq2\]), $p|b_{j_t}$, which in turn implies that $p|b_{j_1}$. Thus, if $u=2\alpha$, then also $p$ divides the coefficient of $x^{2n_r-3}$ in $g(x)\tilde{g}(x)$.
Let $j_{t-1}>3$ and $j_2>n_r-3$. As $j_2=n_r-2$, the coefficient of $x^{2n_r-2}$ in $g(x)\tilde{g}(x)$ is $$b_0b_{j_2}+b_{j_1}b_{j_t}=0.$$ If $u\ne 3\alpha$, then using (\[neq1\]) in the last equation, either $p|b_{j_2}$ or $p|b_{j_t}$. The coefficient of $x^{2n_r-3}$ in (\[eq1\]) is then $$\begin{cases}
b_{j_2}b_{j_t}+b_{j_3}b_0 & \mbox{ if $j_3=n_r-3;$}\\
b_{j_2}b_{j_t} &\mbox{ otherwise,}
\end{cases}$$ each of which is divisible by $p$.
Let $j_{t-1}< 3$ and $j_2> n_r-3$. Then $j_{t-1}=2, j_2=n_r-2$ and the coefficient of $x^{2n_r-2}$ in (\[eq1\]) is $$\label{chap6:eq5}
b_{n_r}b_{j_{t-1}}+b_{j_1}b_{j_t}+b_0b_{j_2}=0.$$ On the other hand, the coefficient of $x^{2n_r-3}$ in (\[eq1\]) is $$\begin{cases}
b_{j_1}b_{j_{t-1}}+b_{j_2}b_{j_t}+b_{n_r}b_{j_{t-2}}+b_0b_{j_3} &\mbox{ if $j_{t-2}= 3, j_3=n_r-3$;}\\
b_{j_1}b_{j_{t-1}}+b_{j_2}b_{j_t}+b_{n_r}b_{j_{t-2}} &\mbox{ if $j_{t-2}=3, j_3\ne n_r-3$;}\\
b_{j_1}b_{j_{t-1}}+b_{j_2}b_{j_t}+b_0b_{j_3} &\mbox{ if $j_{t-2}\ne 3, j_3=n_r-3$;}\\
b_{j_1}b_{j_{t-1}}+b_{j_2}b_{j_t} &\mbox{ if $j_{t-2}\ne 3, j_3\ne n_r-3$.}
\end{cases}$$ Let $u=2\alpha$. By using (\[neq2\]) and (\[chap6:eq5\]), we get $$p^{\alpha}d_1b_{j_{t-1}}-\frac{d_1b_{j_t}^2}{d}+p^{\alpha}db_{j_2}=0.$$ From the last equation, $p|b_{j_t}$ and hence $p|b_{j_1}$ by (\[neq2\]). This implies that the coefficient of $x^{2n_r-3}$ in (\[eq1\]) is divisible by $p$.
Let $u>2\alpha$ and $u\ne 3\alpha$. By using (\[neq1\]), Equation (\[chap6:eq5\]) reduces to $$p^{u-\alpha}d_1b_{j_{t-1}}-\frac{p^{u-2\alpha}d_1b_{j_t}^2}{d}+p^{\alpha}db_{j_2}=0.$$ If $u<3\alpha,$ then $p$ would divide $b_{j_t}$, and $p$ already divides $b_{j_1}$ by (\[neq1\]). If $u>3\alpha$, then $p|b_{j_2}$. Hence, in this particular case also, the coefficient of $x^{2n_r-3}$ in (\[eq1\]) is divisible by $p$.
[*Case II:*]{} Let $j_t=2$ and $j_1>n_r-3$. With a similar analysis, it can be seen that either $j_{t-1}=3$ or $j_2=n_r-3$ (or both) must hold. But in those cases, the corresponding coefficient is divisible by $p$ in $g(x)\tilde{g}(x)$.
If $n_r=n_{r-1}+1$ or $n_r=n_{r-1}+2$, then instead of considering the second largest exponent in (\[eq1\]) and (\[eq2\]), we consider the coefficients of $x^{2n_r-3}$ in both equations. With a similar analysis, one can show that the coefficient of $x^{2n_r-3}$ in Equation (\[eq1\]) is divisible by $p$ while it is not the case in Equation (\[eq2\]). Therefore, $\alpha$ has to be zero.
The lemma also holds for polynomials belonging to $\S_1\cup \S_2'.$ In other words,
\[mainlem\] Let $f(x)=a_{n_r}x^{n_r}+a_{n_{r-1}}x^{n_{r-1}}+\cdots+a_{n_1}x^{n_1}+p^u\ep\in \S_1\cup \S_2'\cup \S_3'$ be reducible. Then the constant term of one of the irreducible factors of $f(x)$ is $|f(0)|$.
We use the same notations as used in the proof of Lemma \[lem1\]. We have $$\label{eq4}
f(x)\tilde{f}(x)=p^ua_{n_r}\ep x^{2n_r}+a_{n_r}a_{n_1} x^{2n_r-n_1}+p^u\ep a_{n_{r-1}}x^{n_r+n_{r-1}}+\cdots+p^ua_{n_r}\ep,$$ and $$\label{eq5}
g(x)\tilde{g}(x)=p^ua_{n_r}\ep x^{2n_r}+b_{n_r}b_{j_t}x^{2n_r-j_t}+b_0b_{j_1}x^{n_r+j_1}+\cdots+p^ua_{n_r}\ep.$$
It is sufficient to consider $n_1=1,2$. If $n_1=1$, then either $j_t=1$ or $j_1=n_r-1.$ The coefficient of $x^{2n_r-1}$ is then divisible by $p$ in (\[eq5\]) but not in (\[eq4\]).
Suppose $n_1=2$ and $n_r\ge 2+n_{r-1}$. If $j_t=2$ or $j_1=n_r-2,$ then the coefficient of $x^{2n_r-2}$ is divisible by $p$ in (\[eq5\]) but not in (\[eq4\]). Since the term $x^{2n_r-1}$ is absent in (\[eq4\]), we cannot have $j_t=1, j_1<n_r-1$ or $j_t>1, j_1=n_r-1$. Hence, $j_t=1, j_1=n_r-1$ and $b_{n_r}b_{j_t}+b_0b_{j_1}=0$. Since $u$ is odd, this would imply $p|b_{j_1}.$ Also $j_t<j_{t-1}$ and $j_2<j_1$ imply that either $j_{t-1}=2$ or $j_2=n_r-2.$ Then the coefficient of $x^{2n_r-2}$ in Equation (\[eq5\]) is $$\begin{cases}
b_{j_1}b_{j_t}+b_0b_{j_2}+b_{n_r}b_{j_{t-1}} & \mbox{ if $j_{t-1}=2, j_2=n_r-2$;}\\
b_{n_r}b_{j_{t-1}}+b_{j_1}b_{j_t} & \mbox{ if $j_{t-1}=2, j_2\ne n_r-2$;}\\
b_{j_1}b_{j_t}+b_0b_{j_2} &\mbox{ if $j_{t-1}\ne 2, j_2=n_r-2$;}\\
b_{j_1}b_{j_{t}} &\mbox{ if $j_{t-1}\ne 2, j_2\ne n_r-2$,}
\end{cases}$$ each divisible by $p$. Similarly, if $n_r=n_{r-1}+1,$ then one can arrive at the same kind of contradiction by comparing the coefficients of $x^{2n_r-2}$ in Equations (\[eq4\]) and (\[eq5\]). Hence $\alpha=0.$
\[\][**Proof of Theorem \[mainthm\]:**]{} Follows from Theorem \[mainlem\] and Remark \[rem:non-reciprocal\].
\[\][**Proof of Corollary \[cor:n\_1=2\]:**]{} Let $f(x)\in \S_2$. From the proof of Theorem \[mainlem\], if $f(x)$ is reducible then $\alpha$ has to be either $u/2$ or $0$. If $f(x)\in \S_3$ is reducible, then from the proof of Lemma \[lem1\], $\alpha$ has to be either $u/3$ or $0$. Because of the hypothesis, in either case, $\alpha$ can’t be $0$, from which the result follows.
\[\][**Proof of Theorem \[cyclothm\]:**]{} Suppose $f(x)=f_1(x)f_2(x)$ is a proper factorization of $f(x)$. By using Theorem \[mainlem\], without loss of generality, we can assume that $|f_1(0)|=1$, $|f_2(0)|=p^u.$ As a consequence, $f_2(x)$ is irreducible.
With a proof similar to that of Remark \[rem:non-reciprocal\] one can show that all the roots of $f(x)$ lie in the region $|z|\ge 1$. Let $z_1, z_2, \ldots, z_s$ be all the roots of $f_1(x)$, where $\deg(f_1)=s<\deg(f).$ Then $$\prod\limits_{i=1}^s |z_i|=\frac{1}{|d|},$$ where $d$ is the leading coefficient of $f_1(x)$, dividing $a_{n_r}$. Since the $z_i$'s are roots of $f(x)$, we have $|z_i|\ge 1$ and hence $|d|=1$. Consequently, all the roots of $f_1(x)$ lie on the unit circle, and by Kronecker's theorem $\pm f_1(x)$ is a product of cyclotomic polynomials.
The second part is a special case of the proof of Proposition \[pro:fc(x)\].
[**Proof of Proposition \[pro:fc(x)\]:**]{} Let $\zeta$ be a primitive $t^{\text{th}}$ root of unity with $f(\zeta)=0$. Then $$\label{eq7}
-a_0=a_{n_1}\zeta^{n_1}+a_{n_2}\zeta^{n_2}+\cdots+a_{n_r}\zeta^{n_r}.$$ Taking the modulus on both sides, $$|a_0|=|a_{n_1}\zeta^{n_1}+a_{n_2}\zeta^{n_2}+\cdots+a_{n_r}\zeta^{n_r}|=\sum\limits_{i=1}^r|a_{n_i}|.$$ By the triangle inequality, the last two equations hold if and only if the ratio of any two of the summands is a positive real number. Therefore, $a_{n_r}\zeta^{n_r-n_i}/a_{n_i}=|a_{n_r}\zeta^{n_r-n_i}/a_{n_i}|$ gives $\zeta^{n_r-n_i}={\operatorname{sgn}}(a_{n_r}a_{n_i})$ for $1\le i\le r-1$. From (\[eq7\]), we have $$-a_0= a_{n_i}\zeta^{n_i}\left[ \left|\frac{a_{n_1}}{a_{n_i}}\right|+\dots+\left|\frac{a_{n_{i-1}}}{a_{n_i}}\right|+1+\left|\frac{a_{n_{i+1}}}{a_{n_i}}\right|+\dots+\left|\frac{a_{n_r}}{a_{n_i}}\right|\right],$$ so that $\zeta^{n_i}=-{\operatorname{sgn}}(a_0a_{n_i})$. Multiplying the relations $\zeta^{n_r-n_i}={\operatorname{sgn}}(a_{n_r}a_{n_i})$ and $\zeta^{n_i}=-{\operatorname{sgn}}(a_0a_{n_i})$, one also recovers $\zeta^{n_r}=-{\operatorname{sgn}}(a_0a_{n_r})$. All the remaining relations satisfied by $\zeta$ can be derived from these $r$ equations. Conversely, if $\zeta$ satisfies each of the $r$ equations $x^{n_i}+{\operatorname{sgn}}(a_0 a_{n_i})=0,$ then $f(\zeta)=0$. It remains to show the separability of the cyclotomic part of $f(x)$. Let $\zeta$ be a root of unity satisfying $x^{n_i}+{\operatorname{sgn}}(a_0 a_{n_i})=0$ for $1\le i\le r$ and $f(\zeta)=0, f'(\zeta)=0$. Using the $r$ relations satisfied by $\zeta$ in $f'(\zeta)=0$, we derive that $$n_r|a_{n_r}|+\cdots+|a_{n_1}|n_1=0,$$ which is not possible.
\[\][**Proof of Corollary \[cor:pos\]:**]{} For any $g(x)\in \Z[x]$, $g(x)$ has cyclotomic factor if and only if $g(x^d)$ has a cyclotomic factor, $d\ge 1$. Therefore it is sufficient to prove the result for polynomials whose exponents are relatively prime. The result follows from Theorem \[cyclothm\], and Lemma \[lem:basic\_cyclo\].
We now see an application of Corollary \[cor:pos\]. In [@bksr] we showed that, if $(n_1,n_2,\ldots, n_p)=1$ then $x^{n_1}+x^{n_2}+\cdots+x^{n_p}+p$ is irreducible. The same result is true when the prime number is replaced with a prime power whose exponent is divisible by neither $2$ nor $3$. In particular, if $p$ is a prime number and $n_1>n_2>\ldots> n_p$ are positive integers with $(n_1,n_2, \ldots, n_p)=1, n_p\le 3$, then $x^{n_1}+x^{n_2}+\cdots+x^{n_p}+p^u$ is irreducible for any integer $u$ with $(u,6)=1$.
Let $f(x)\in \S_1\cup \S_2'\cup \S_3'$ be a polynomial with $n_{j-1}=n_j-1$ for some $j$, $2\le j\le r$ and $|a_{n_r}|+|a_{n_{r-1}}|+\cdots+|a_{n_1}|=p^u$. Then $f(x)$ is reducible if and only if either $f(1)=0$ or $f(-1)=0$.
From Lemma \[lem:basic\_cyclo\], $(x^n\pm 1 , x^{n-1}\pm 1)$ is either $1$ or $x\pm 1$. Hence the proof of the above Corollary follows directly by applying Theorem \[cyclothm\].
Applications {#sec:app}
============
Suppose $u\ge 2$ and $a,b,p\in \N$, with $p$ a prime number and $p\nmid ab$. In this section we consider trinomials of the form $f(x)=ax^n+b\ep_1x^m+p^u\ep_2,$ where $\ep_i\in \{\, -1,+1\,\}$ and $n>m>0$. The results of the previous section apply to such trinomials; here we discuss the reducibility of $f$ in the case $p^u=a+b$. From the above, we know
Let $f(x)=ax^{n}+b\ep_1x^{m}+p^u\ep_2\in \S_1\cup \S_2'\cup \S_3'$ and $p^u=a+b$. Then apart from cyclotomic factors, $f(x)$ has exactly one irreducible non-reciprocal factor.
From Theorem \[cyclothm\], all such $f(x)$ are separable over $\Q$. Here $m\le 3$. However, one can find the separability criterion for a bigger class of trinomials with arbitrary values of $m$ by using the discriminant formula.
\[th0\] The discriminant of the trinomial $x^n+ax^m+b$ is $$D=(-1)^{\binom{n}{2}}b^{m-1}\left[ n^{n/d}b^{(n-m)/d}-(-1)^{n/d}(n-m)^{(n-m)/d}m^{m/d}a^{n/d}\right]^d,$$ where $d=(n,m),$ and $a,b\in \Z\setminus \{\, 0\,\}$.
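As a sanity check of the displayed formula, the following sympy sketch compares it with `sympy.discriminant` on a few sample trinomials (the helper `trinomial_disc` and the chosen test cases are ours, purely for illustration).

```python
from sympy import symbols, discriminant
from math import gcd

x = symbols('x')

def trinomial_disc(n, m, a, b):
    """Closed formula of Theorem [th0] for the discriminant of x^n + a*x^m + b."""
    d = gcd(n, m)
    N, M = n // d, m // d
    bracket = n**N * b**(N - M) - (-1)**N * (n - m)**(N - M) * m**M * a**N
    return (-1)**(n*(n - 1)//2) * b**(m - 1) * bracket**d

for (n, m, a, b) in [(2, 1, 3, 5), (3, 1, 2, -7), (4, 2, 3, 5)]:
    assert trinomial_disc(n, m, a, b) == discriminant(x**n + a*x**m + b, x)
print("closed formula agrees with sympy.discriminant on the sample trinomials")
```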
Note that if $h(x)\in \Z[x],h(0)\ne 0,$ then $h(x)$ is separable if and only if $h(x^k)$ is separable for every $k\ge 1.$ Hence in order to check separability of polynomials whose constant term is nonzero, it is sufficient to consider the polynomials whose gcd of the exponents is $1.$
\[sepa\] Let $a,b,p\in \N$, $p$ be a prime number, $p\nmid ab$ and $(a,b)=1$, $(n,m)=1$. Then $f(x)=ax^{n}+b\ep_1x^{m}+p^u\epsilon_2,$ where $u\ge 2, b<p^u$ and $\epsilon_i\in\{-1,1\}$ is not separable over $\Q$ if and only if $b=n, p|m, p^{u(n-m)}a^m=(n-m)^{n-m}m^m, \ep_2^{n-m}(-\ep_1)^n=1$.
From Theorem \[th0\], the discriminant of $f$ is $$D_f=(-1)^{\binom{n}{2}}(p^u\epsilon_2)^{n-m}a^{n-m-1}\left[ n^{n/d}(p^u\epsilon_2)^{(n-m)/d}a^{m/d}-(-1)^{n/d}(n-m)^{(n-m)/d}m^{m/d}(b\epsilon_1)^{n/d}\right]^d,$$ where $d=(n,m)$. Here $d=1.$ Now $f(x)$ has a multiple root if and only if $D_f=0$, [*i.e.,*]{} $$\label{eq:separable}
n^{n}(p^u\epsilon_2)^{n-m}a^m=(-1)^{n}(n-m)^{n-m}m^{m}(b\epsilon_1)^{n}.$$
Since $(n,m)=(n,n-m)=1$ and $n^{n}|(-1)^{n}(n-m)^{n-m}m^{m}(b\epsilon_1)^{n}$, we have $n|b.$ Let $b=ns$ for some $s\in \N.$ Equation (\[eq:separable\]) then becomes $$(p^u\epsilon_2)^{n-m}a^m=(-1)^{n}(n-m)^{n-m}m^{m}(\epsilon_1)^{n}s^n.$$ Thus we have $s^n|(p^u\epsilon_2)^{n-m}a^m$ and from the hypothesis we have $(p,s)=(a,s)=1.$ Hence $s=1$ and $b=n.$ The last equation reduces to $$p^{u(n-m)}\ep_2^{n-m}a^m=(-\ep_1)^n(n-m)^{n-m}m^m.$$
As a consequence, either $p|m$ or $p|n-m.$ In order to show that $p|m,$ it is sufficient to show $p\nmid n-m.$ If $n-m=1$, then $p\nmid n-m.$ Suppose $n-m>1$ and $p|n-m.$ From the above equation we have $p^u|n-m.$ Consequently, $p^u\le n-m<n=b<p^u$, a contradiction. Finally, the equations $$p^{u(n-m)}a^m=(n-m)^{n-m}m^m,$$ and $\ep_2^{n-m}(-\ep_1)^n=1$ follow easily from the last equation. The converse part is clear; note that it does not require $p|m.$
The following example illustrates all the conditions given in Theorem \[sepa\]. Let $p$ be an odd prime. Then we have $$x^{p+1}+(p+1)x^p+p^p=(x+p)^2g(x),\mbox{where}\;\;g(x)\in \Z[x].$$
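The double root at $x=-p$ is easy to confirm; the snippet below (an illustration for $p=3,5,7$) checks that $(x+p)^2$ indeed divides the polynomial, with multiplicity exactly $2$.

```python
from sympy import symbols, factor_list

x = symbols('x')
for p in (3, 5, 7):
    f = x**(p + 1) + (p + 1)*x**p + p**p
    mults = dict(factor_list(f)[1])
    assert mults[x + p] == 2              # (x + p)^2 divides f, as claimed
    print(p, factor_list(f)[1])
```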
Let $f(x)=ax^{n}+b\ep_1x^{m}+p^u\ep_2\in (\S_2\setminus \S_2')\cup (\S_3\setminus \S_3')$, where $\epsilon_i\in\{-1,1\}$, $(a,b)=(m,n)=1$ and $b<p^u.$ Then $f(x)$ is separable over $\Q$ except for $x^3+3\ep_1x^2-4\ep_1$ and $x^4+4\ep_1x^3+27$.
m=2:
:   Let $f\in \S_2\setminus \S_2'$ satisfy the conditions of the hypothesis. If $f$ is not separable, then from Theorem \[sepa\], $p|m$ implies $p=2$ and $n$ is odd. Further we have $2^{u(n-2)}a^2=(n-2)^{n-2}4.$ It is easy to see that if $u\ne 2$ or $n\ne 3,$ then we get a contradiction. If $u=2$ and $n=3,$ then $f(x)=x^3+3\ep_1x^2-4\ep_1=(x-\ep_1)(x+2\ep_1)^2$ follows from $\ep_2\ep_1=-1$.
m=3:
:   Let $f\in \S_3\setminus \S_3'$ satisfy the conditions of the hypothesis. If $f$ is not separable, then from Theorem \[sepa\], $p|m$ implies $p=3$ and $3\nmid n-3$ (since $3\nmid n=b$). If $n\ge 5,$ then, $u$ being greater than $1$, we would need $3|n-3$, a contradiction. If $n=4,$ then $u=3$ and $a^3=1$. In other words, $f(x)=x^4+4\ep_1x^3+27\ep_2$. From $\ep_2(-\ep_1)^4=1$, we get $\ep_2=1$ and $x^4+4\ep_1x^3+27=(x+3\ep_1)^2(x^2-2\ep_1x+3)$.
Since trinomials in $\S_1\cup \S_2'\cup \S_3'$ have simple zeros, $\Phi_t(x)^2\nmid f(x)$ for any $t\ge 1$. With this observation, we will characterize the irreducibility criteria for trinomials. One can recall Proposition \[pro:fc(x)\] to know the cyclotomic factors of $ax^n+b\ep_1 x^m+\ep_2 p^u,$ where $p^u=a+b$ and $u\ge 2$ for any $m\ge 1.$
\[thm:redoftri\] Let $p$ be a prime, $a,b\in \N$ and $u\ge 2, p^u=a+b$. Let $f(x)=ax^{n}+b\ep_1x^{m}+p^u\ep_2\in \S_1\cup \S_2'\cup \S_3'$ be a polynomial of degree $n$.
1. \[thm:redoftri:1\] If $\ep_1=1$, then $f(x)$ is reducible if and only if $\ep_2=-1$ or $e(n)=e(m)$, and in that case $f_c(x)=x^{(n,m)}+{\operatorname{sgn}}(\ep_2).$
2. \[thm:redoftri:2\] If $\ep_1=-1,$ then $f(x)$ is reducible if and only if $\ep_2e(m)>\ep_2e(n).$ Moreover the reciprocal part of $f(x)$ is $f_c(x)=x^{(n,m)}+1$.
We prove the result only for the case $\ep_1=\ep_2=-1;$ the remaining three cases can be proved along similar lines. In this case, we have to show that $f(x)$ is reducible if and only if $e(m)<e(n).$
From Theorem \[cyclothm\], $f(x)$ is irreducible if and only if $f_c(x)=1$, where $f_c(x)$ is the greatest common divisor of $x^n-1$ and $x^m+1$. From Lemma \[lem:basic\_cyclo\], we have $$f_c(x)=(x^{n}-1, x^{m}+1)=\begin{cases}
x^{(n/2,m)}+1 & \mbox{ if $e(n)\ge 2e(m);$}\\
1 & \mbox{ otherwise.}
\end{cases}$$
Thus $f(x)$ is reducible if and only if $e(n)\ge 2e(m)$, that is, if and only if $e(m)<e(n).$
Let $d=\gcd(m,n)$. Then there exist $n_1,m_1\in \N$ such that $n=dn_1, m=dm_1$ and $(n_1,m_1)=1.$ If $\ze$ denotes a primitive $2d^{th}$ root of unity, then $$a \ze^{dn_1}-b\ze^{dm_1}-p^u=a+b-p^u=0,$$ as $\ep_1=\ep_2=-1$, $n_1$ is even (since $e(n)>e(m)$) and $p^u=a+b.$
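For a concrete illustration of item 1 with $\ep_1=\ep_2=1$, take $a=3$, $b=5$ and $p^u=2^3$ (so $u=3$ is odd and $m=2$ keeps $f$ in $\S_2'$); the sympy check below (our own example, not from the text) contrasts $e(n)=e(m)$ with $e(n)\ne e(m)$.

```python
from sympy import symbols, factor_list

x = symbols('x')
f_red = 3*x**6 + 5*x**2 + 8       # e(6) = e(2) = 2: reducible, f_c = x^2 + 1
f_irr = 3*x**4 + 5*x**2 + 8       # e(4) = 4 != e(2) = 2: irreducible
print(factor_list(f_red)[1])      # x**2 + 1 times one non-reciprocal factor
print(factor_list(f_irr)[1])      # a single irreducible factor
```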
We conclude the paper with a few comments on possible generalizations of Theorem \[mainthm\]. It is natural to ask whether Theorem \[mainthm\] can be extended to arbitrary $n_1$. The example $$x^8+x^6+x^4+4=(x^4-x^3+x^2-2x+2)(x^4+x^3+x^2+2x+2),$$ given by Jankauskas [@jankauskas], shows that such a generalization is not possible, as $x^8+x^6+x^4+4\in\S_4'$ and is reducible.
The example $x^{p+1}+(p+1)x^p+p^p$ shows that Theorem \[mainthm\] is not true for $f\in\S_p\backslash \S_p'$, $p$ being an odd prime. We conjecture that
Let $p$ and $q$ be two prime numbers, $u\ge 2$ and $\ep\in \{-1,1\}$. Suppose $f(x)=a_{n_r}x^{n_r}+\cdots+a_{q}x^{q}+p^u\ep\in \S_q'.$ If $p^u>|a_q|+\cdots+|a_{n_r}|,$ then $f(x)$ is irreducible.
A.I. Bonciocat and N.C. Bonciocat, [*Some classes of irreducible polynomials*]{}, Acta Arith., 123(2006), 349–360.
A.I. Bonciocat and N.C. Bonciocat, [*On the irreducibility of polynomials with leading coefficient divisible by a large prime power*]{}, Amer. Math. Monthly, 116 (8) (2009), 743–745.
F.G.M. Eisenstein, [*Uber die Irreducibilitat und einige andere Eigenschaften der Gleichung, von welcher die Theilung der ganzen Lemniscate abhangt*]{}, J. reine angew. Math., 39(1850), 166–167.
C.R. Greenfield, D. Drucker, *On the Discriminant of a Trinomial*, Linear Algebra its Appl., 62(1984), 105–112.
J. Jankauskas, [*On the reducibility of certain quadrinomials*]{}, Glas. Mat. Ser. III, 45 (65)(2010), 31–41.
A.T. Jonassen, [*On the irreducibility of the trinomials $x^n\pm x^m\pm 4$*]{}, Math. Scand., 21(1967), 177–189.
Biswajit Koley, A. Satyanarayana Reddy, [*An irreducibility criterion of polynomials over integers*]{}, Bulletin mathématique de la Société des Sciences Mathématiques de Roumanie, to appear.
S. Lipka, [*Über die Irreduzibilität von Polynomen*]{}, Math. Ann., 118(1941), 235–245.
W. Ljunggren, [*On the irreducibility of certain trinomials and quadrinomials*]{}, Math. Scand., 8 (1960), 65–70.
G. Myerson, [*Western Number Theory Problems*]{}, 17–19 Dec. 2007, 6. Available online at http://www.math.colostate.edu/ achter/wntc/problems/problems2007.pdf.
L. Panitopol, D. Stefänescu, *Some criteria for irreducibility of polynomials*, Bull. Math. Soc. Sci. Math. R. S. Roumanie (N. S.), 29 (1985), 69–74.
O. Perron, [*Neue kriterien für die irreduzibilität algebraischer gleichungen*]{}, J. reine angew. Math., 132 (1907), 288–307.
L. Weisner, [*Criteria for the irreducibility of polynomials*]{}, Bull. Amer. Math. Soc., 40(1934), 864–870.
[^1]: The research of this author is supported by Matrics MTR/2019/001206 of SERB, India.
---
abstract: 'A big challenge in continuous variable quantum key distribution is to prove security against arbitrary coherent attacks including realistic assumptions such as finite-size effects. Recently, such a proof has been presented in \[Phys. Rev. Lett. 109, 100502 (2012)\] for a two-mode squeezed state protocol based on a novel uncertainty relation with quantum memories. But the transmission distances were fairly limited due to a direct reconciliation protocol. We prove here security against coherent attacks of a reverse reconciliation protocol under similar assumptions but allowing distances of over $16$ km for experimentally feasible parameters. We further clarify the limitations when using the uncertainty relation with quantum memories in security proofs of continuous variable quantum key distribution.'
author:
- Fabian Furrer
title: Reverse Reconciliation Continuous Variable Quantum Key Distribution Based on the Uncertainty Principle
---
Introduction
============
The most advanced quantum information technology is quantum key distribution (QKD), which is the art of using quantum properties to distribute a secure key between two remote parties. Its challenge lies in combining state-of-the-art experimental implementations with newly developed quantum information theoretic principles to ensure its security. There exist two different types of implementation, each with its own benefits. More established is the encoding of the information in a quantum system with discrete degrees of freedom, such as the polarization of a photon. Such discrete variable protocols are usually based on single-photon sources and detectors, with the latter suffering from low efficiency at room temperature and being susceptible to loopholes (see, e.g., [@Makarov11; @Eisaman2011]). The advantage of such protocols is that, conditioned on the arrival of a single photon, the channel noise is generally weaker, allowing for long distances.
An alternative implementation encodes the information into the quadratures of the electromagnetic field (see the recent review [@Weedbrook2011] and references therein). Since the quadratures have a continuous spectrum, these are called continuous variable QKD protocols. In contrast to discrete variable protocols, they are based on variants of homodyne detection, which is a robust and efficient measurement technique already used in current telecommunication systems. Although CV QKD systems are secure against blinding attacks, they are particularly vulnerable to manipulations of the phase reference signal (local oscillator) (see, e.g., [@Haeseler2008; @jouguet2013b]). Since the information is directly encoded in the phase and amplitude of the laser beam, the fiber losses severely damp the transmitted signal and, with it, the encoded information. Nevertheless, it was shown in [@Grosshans03] that a key can be generated for arbitrary losses using reverse reconciliation protocols. This has recently also been experimentally demonstrated against restricted attacks [@jouguet2012].
Until recently, the security of continuous variable QKD protocols had only been analyzed in the asymptotic limit assuming an infinite number of communication rounds (see, e.g., [@Grosshans03; @Weedbrook04; @Leverrier2011]). For protocols based on a Gaussian phase and amplitude modulation this simplifies the security analysis tremendously. For instance, so-called collective attacks, in which each signal is attacked independently and identically, are then as powerful as general (coherent) attacks [@Renner_Cirac_09]. Moreover, it has been shown that Gaussian collective attacks are optimal among all collective attacks [@Cerf2006; @Navascues2006]. But these powerful results can no longer be applied if finite-size effects due to only a finite number of communication rounds are considered. Furthermore, even under a restricted set of collective Gaussian attacks, a significantly lower key rate is obtained for feasible block lengths [@Leverrier2010].
A big challenge in continuous variable QKD is to prove security against coherent attacks including all finite-size effects. Since the Hilbert space of the system is infinite-dimensional, certain techniques that are standard for discrete variable security proofs cannot be applied. For instance, the exponential quantum de Finetti theorem [@renner_nature] or the post-selection technique [@Renner_Postselection], which are used to lift security against collective attacks to security against coherent attacks, do not directly apply in infinite dimensions (cf. [@Renner_Cirac_09]). Recently, the post-selection technique has been extended in order to apply it to continuous variable QKD [@leverrier13], but its practical implementation relies on a cumbersome symmetry step which is impractical for real-life applications.
Another promising approach has been presented in [@Furrer12], which is based on a newly extended uncertainty relation including the effect of entangled observers [@berta09; @Berta13]. The corresponding protocol is based on the distribution of entangled two-mode squeezed states and homodyne detection, implemented in [@Eberle2013]. The uncertainty relation allows one to bound the information of an eavesdropper Eve solely by the correlation strength between the honest parties Alice and Bob. It thus has the advantage that no tomography, or equivalently, no quantum channel estimation is necessary, with the consequence that the proof does not rely on a reduction to collective attacks. But in [@Furrer12; @Furrer12E] only losses up to $20$% could be tolerated since a direct reconciliation protocol was used. Moreover, the potential and limitations of the proof technique have not been fully investigated.
Here, we show that using a reverse reconciliation protocol significantly higher losses of over $50$% can be tolerated enabling transmission distances of over $16$ km including finite-size effects. This makes the protocol suitable for practical short distance continuous variable QKD providing security against coherent attacks. The security proof has the advantage that it does not require any assumptions on Alice’s measurement device and is thus one-sided device independent.
Compared to [@Furrer12], the reverse reconciliation protocol requires Bob to apply a test to control the energy of the incoming signal. The test is based on a beam splitter that reflects a negligible part of the signal, which is then measured with a heterodyne detector. We then show that, conditioned on the outcomes of the heterodyne detector being sufficiently small, the probability that Eve uses a large-energy attack can be neglected. This test further allows one to overcome the problem that homodyne detectors only operate faithfully in a limited detection range. Moreover, we provide a new statistical estimation procedure that enables us to deal with high-energy signals, which was not possible in [@Furrer12].
We also clarify the theoretical limitations of the proof technique based on the extended uncertainty relation. In particular, we provide the optimal key rate in the asymptotic limit of an infinite number of exchanged signals and without statistical uncertainty. Unfortunately, it turns out that even under these ideal conditions the tolerated losses are limited. An investigation of the asymptotic key rate for a broad range of continuous variable protocols based on the uncertainty relation has recently been given in [@Walk14].
The paper is organized as follows. We start in Section \[sec:KeyRate\] by introducing the security definitions and the classical part of the protocol. This enables us to give a general formula for the key rate presented in . In Section \[sec:Setup\], we discuss the experimental setup and how the raw key is formed. The different steps of the protocol are then listed in Section \[sec:Protocol\] together with the assumptions. The main result is Theorem \[thm:KeyLength\] which gives the explicit formula for the key length. In Section \[sec:KeyRates\], we present plots of the key rates for experimentally feasible parameters. The security analysis is given in Section \[sec:SecAnalysis\]. The tightness of the security proof is analyzed in Section \[sec:Tightness\]. Eventually, we conclude our results in Section \[sec:Conclusion\].
Security of a QKD Protocol and Finite-Key Rate {#sec:KeyRate}
==============================================
Security Definitions
--------------------
A generic QKD protocol consists of two phases. The first phase is given by the quantum part and includes the transmission and measurement of the quantum system. The second phase is purely classical and consists of the extraction of a secure key from the measured data by means of classical post-processing. In the following, we consider an entanglement-based scenario in which the source is trusted and located in Alice’s laboratory. She then sends one part of the quantum system through a quantum channel to Bob. It is always understood that Alice’s and Bob’s laboratories are closed, that is, no unwanted information can leak to an eavesdropper. Once all quantum systems are distributed, Alice and Bob perform measurements to obtain the data from which the raw keys $X_A$ and $X_B$ are formed. At the same time a parameter estimation test is done which concludes whether one proceeds with the key extraction or one aborts the protocol. Since the key generation is a statistical process, one can assign a probability $p_{\textnormal{pass}}$ to the event that the parameter test is passed.
Given that the parameter estimation test is passed, Alice and Bob proceed with the classical post-processing to generate the final keys $S_A$ and $S_B$, respectively. Here, $S_A$ and $S_B$ are classical random variables which might be correlated with a quantum system $E$ held by an eavesdropper. The associated classical-quantum state $\rho_{S_AS_BE}$ can conveniently be written as $$\label{eq:KeyState}
\rho_{S_AS_BE} = \sum_{s_A,s_B} p(s_A,s_B) \kettbra{s_A,s_B} \otimes \rho_E^{s_A,s_B} \, ,$$ where the classical values for the keys $s_A$ and $s_B$ are associated with orthonormal states $\ket{s_A,s_B}$ in a Hilbert space. Here, $p(s_A,s_B) $ denotes the distribution of keys and $\rho_E^{s_A,s_B} $ the quantum state of the eavesdropper conditioned on $S_A=s_A$ and $S_B=s_B$.
We characterize a quantum key distribution protocol by its correctness and secrecy. For that we use a notion of security which is composable and based on the approach developed in [@renner:04n; @BenOr05; @Renner_ComposableSecurity]. A protocol is called $\epsilon_c$*-correct* if the probability that $S_A$ is not equal to $S_B$ is smaller than $\epsilon_c$: $$\label{eq:correctness}
\text{Pr}[S_A \neq S_B ] \leq \epsilon_c \, .$$ Roughly speaking, a protocol is secret if the key $S_B$ is almost uniformly distributed and completely uncorrelated to Eve’s system $E$. The ideal state is thus given by $\text{u}_{S_B} \otimes \rho_E$, where $\text{u}_{S_B}$ denotes the uniform distribution over all keys and $\rho_E$ is the reduction of the state to system $E$. We then say that a protocol is $\epsilon_s$*-secret* if $$p_{\textnormal{pass}} \, \Vert \rho_{S_BE} - \text{u}_{S_B} \otimes \rho_E \Vert_1 \leq \epsilon_s\, ,$$ where $\Vert \cdot \Vert_1$ denotes the trace norm. Finally, a protocol is called $\epsilon_{\textnormal{sec}}$*-secure* if it is $\epsilon_c$-correct and $\epsilon_s$-secret with $\epsilon_c+\epsilon_s\leq \epsilon_{\textnormal{sec}}$. Note that the above security definition is composable in the sense that security is guaranteed if any part of the key is used for any other cryptographic protocol. This follows from the monotonicity of the trace distance.
Classical Post-Processing {#sec:ClPostPro}
-------------------------
As discussed in the previous section, the classical post-processing transforms the raw keys $X_A$ and $X_B$ into the final keys $S_A$ and $S_B$. In the first step of this post-processing an information reconciliation protocol is applied to diminish the discrepancy between $X_A$ and $X_B$. It was shown in [@Grosshans03] that it is beneficial for continuous variable QKD protocols to use a reverse reconciliation scheme in which Alice corrects her raw key $X_A$ in order to match $X_B$. This is especially crucial for long distance QKD. Throughout this paper, we assume that a one-way reverse reconciliation protocol is used in which $\ell_{{\textnormal{IR}}}$ bits of information about $X_B$ are sent to Alice via an authenticated classical channel. Given this information and $X_A$, Alice outputs a guess $ X^{{\textnormal{c}}}_A$ of $X_B$.
In order to ensure correctness for the raw keys $X^{{\textnormal{c}}}_A$ and $X_B$ (and thus for the keys), Bob applies to $X_B$ a function drawn at random from a family of two-universal hash functions [@Carter79; @Wegman81] with an output alphabet of size $1/\epsilon_c$. He then sends Alice over an authenticated public channel a description of the applied function together with the obtained value. This leaks an additional $\log 1/\epsilon_c$ bits of information, where the logarithm is always taken to base $2$. Alice applies the function to her corrected raw key $ X^{{\textnormal{c}}}_A$ and checks if the obtained value matches the one from Bob. If this is the case, they proceed with the protocol; otherwise they abort [^1]. This then ensures that the generated key is $\epsilon_c$-correct.
In a second step of the classical post-processing the raw key is hashed to a sufficiently small alphabet by means of a family of two-universal hash functions such that the key is $\epsilon_{{\textnormal{sec}}}$-secure. Let us assume that the output of the hash functions is a bit string of length $\ell_{\epsilon_{\textnormal{sec}}}$. For finite-dimensional $E$ systems it has been shown in [@RennerPhD; @Tomamichel10] that the length of the bit string $\ell_{\epsilon_{\textnormal{sec}}}$ can be expressed by the smooth min-entropy $H^\epsilon_{\min}(X_B|E)$ which is related to the maximal probability that Eve guesses $X_B$ correctly (see Section \[sec:UR\]). This result has been extended in [@Furrer11] to the case where Eve’s system E is modeled by an infinite-dimensional Hilbert space, which is necessary for applications to continuous variable systems. In particular, it holds that if $\epsilon \leq (\epsilon_s-\epsilon_1)/(2p_{\textnormal{pass}})$, $$\label{eq:KeyLength}
H_{\min}^\epsilon(X_B|E)_\rho- \ell_{{\textnormal{IR}}} - \log \frac{1}{\epsilon_1^2\epsilon_c} +2$$ is a tight lower bound on the key length $\ell_{\epsilon_{\textnormal{sec}}}$ with $\epsilon_{{\textnormal{sec}}} = \epsilon_c + \epsilon_s$ (see, e.g., [@FurrerPhD] for details). The state $\rho_{X_B E}$ for which the smooth min-entropy is evaluated corresponds to the classical-quantum state describing the joint state of Bob’s raw key $X_B$ and Eve’s system $E$ conditioned on the protocol passing. The goal of the security analysis is to obtain a tight lower bound on this expression using the data collected in the parameter estimation step.
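As a small numerical illustration (all numbers below are hypothetical placeholders and not taken from the protocol), the key length can be evaluated directly; note that the $\epsilon$-dependent terms only cost a few tens of bits:

```python
import math

def key_length(h_min_eps, leak_ir, eps_1, eps_c):
    """Extractable key length H_min^eps(X_B|E) - l_IR - log(1/(eps_1^2 eps_c)) + 2, logs base 2."""
    return math.floor(h_min_eps - leak_ir - math.log2(1 / (eps_1**2 * eps_c)) + 2)

# Hypothetical numbers: 10^8 raw-key symbols carrying 0.5 bit of smooth min-entropy each,
# 0.3 bit of reconciliation leakage per symbol, and eps_1 = eps_c = 10^-9;
# the epsilon terms amount to roughly 90 bits in total.
print(key_length(h_min_eps=0.5e8, leak_ir=0.3e8, eps_1=1e-9, eps_c=1e-9))
```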
The Protocol and Key Rates {#sec:ProtocolKeyRates}
==========================
Experimental Setup and Generation of Data {#sec:Setup}
-----------------------------------------
The protocol is similar to the one in [@Furrer12] and consists of the distribution of an entangled two-mode squeezed state and homodyne detection, as first proposed in [@Cerf01]. Additionally, Bob performs a test in order to estimate whether the incoming signal exceeds a certain energy threshold. This test allows one to exclude high energy eavesdropping attacks and to restrict to a bounded measurement range. This is crucial for carrying out the finite-size statistical estimation with reliable error bounds. The test requires only two additional homodyne detectors.
The source is assumed to be in Alice’s laboratory and generates a two-mode squeezed entangled state, often referred to as an EPR state. This can be implemented by mixing two squeezed modes on a balanced beam splitter [@Furusawa98]. The important characteristic of a two-mode squeezed state is that there are two quadratures with a phase difference of $\pi/2$ for which the two modes are highly correlated. We call these quadratures amplitude $Q$ and phase $P$ in the following. Alice then keeps one mode in her laboratory and performs at random an amplitude or phase measurement using a homodyne detector, where the probability for phase is $0<r<1$. The other mode is sent through a fiber to Bob, who likewise performs at random an amplitude or phase measurement with probability $1-r$ and $r$, respectively. Due to the properties of the two-mode squeezed state, Alice’s and Bob’s measurement outcomes are highly correlated if they both perform the amplitude or both the phase measurement, and uncorrelated otherwise.
Before Bob measures the amplitude or phase of the incoming signal he performs an energy test. In particular, he mixes the signal with a vacuum mode $a$ using a beam splitter with almost perfect transmittance $T$. The reflected signal $a'$ is measured via heterodyne detection, that is, mode $a'$ is mixed with another vacuum mode $b$ by a balanced beam splitter and homodyne detection is performed to measure the amplitude of one output $q_{t^1}$ (mode $t^1$) and the phase $p_{t^2}$ of the other output (mode $t^2$). The setup is illustrated in Figure \[fig:Energy\]. Bob then simply checks whether $|q_{t^1}|$ and $|p_{t^2}|$ are smaller than a prefixed value $\alpha$ for every incoming signal and aborts otherwise. In the following we denote the corresponding test by ${\mathcal{T}}(\alpha,T)$. In Section \[sec:Test\], we show that, conditioned on ${\mathcal{T}}(\alpha,T)$ passing, the probability for large amplitude and phase measurement outcomes can be bounded.
![\[fig:Energy\] The diagram shows the measurement setup of Bob’s test ${\mathcal{T}}(\alpha,T)$. He mixes the incoming signal with a vacuum mode $a$ using a beam splitter with very low reflectivity $1-T$ and applies a heterodyne detection on the reflected signal $a'$. The test then consists of checking whether the absolute value of the outcome of the amplitude measurement of mode $t^1$ and the phase measurement on $t^2$ is smaller than $\alpha$. ](fig1.pdf){width="8.8cm"}
While theoretically the spectrum of a homodyne measurement is the real line, any practical implementation is limited by a certain precision. We account for that by grouping outcomes into intervals of length $\delta$, where $\delta$ should be larger than the precision of the homodyne detector. We then choose an $M\geq 0$ smaller than the detector threshold and group the measurements into intervals $$\begin{aligned}
I_1 & =(-\infty,-M+\delta] \, , \\
I_k &=(-M +(k-1)\delta,-M +k\delta] , \ k=2,...,2M/\delta -1 \, , \\
I_{2M/\delta} & = (M-\delta,\infty) \, ,\end{aligned}$$ where we assume that $2M/\delta$ is in $\mathbb N$. We thus associate with any measurement result a value in ${\mathcal{X}}=\{1,2,...,2M/\delta\}$.
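A minimal sketch of this discretization (the function name and parameter values are ours and only serve as an illustration):

```python
import numpy as np

def discretize(q, M, delta):
    """Map continuous quadrature outcomes to indices in {1, ..., 2M/delta}.

    Values in (-M+(k-1)*delta, -M+k*delta] are mapped to k; everything below
    -M+delta falls into the first bin and everything above M-delta into the last bin.
    """
    k = np.ceil((np.asarray(q, dtype=float) + M) / delta).astype(int)
    return np.clip(k, 1, int(round(2 * M / delta)))

# Example: M = 20 and delta = 0.5 give the alphabet {1, ..., 80}.
print(discretize([-25.0, -19.9, 0.0, 19.9, 25.0], M=20, delta=0.5))
```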
It is important for the protocol to have high correlations between Alice’s and Bob’s outcome in the index set ${\mathcal{X}}$. But due to losses in the fiber, Bob’s amplitude and phase quadratures $Q_B$ and $P_B$ will be damped. In order to account for that, we scale the quadrature measurements of Alice’s detector $Q_A$ and $P_A$ before grouping them into the intervals using the transformations $$Q_A \mapsto \tilde Q_A = t_q Q_A \ \text{and} \ P_A\mapsto \tilde P_A = t_p P_A \, .$$ The scaling factors $t_q$ and $t_p\leq 1$ are adjusted according to the channel losses in the transmission of the mode to Bob.
For the following, we also need a function to measure the strength of the correlations between two strings $X,Y\in {\mathcal{X}}^N$. For that we introduce the average distance $$\label{eq:Dist}
d(X,Y) = \frac 1 N \sum^N_{k=1} \vert X^k - Y^k\vert \, .$$ We further define the average second moment of the difference between the strings by $$\label{eq:DistVar}
d_2(X,Y) = \frac 1 N \sum_{k=1}^N \vert X^k - Y^k\vert^2 \, .$$ Moreover, we define the average second moment for the discretized phase and amplitude measurements $X\in{\mathcal{X}}^N$ by $$\label{eq:Var}
\text{m}_2(X)=\frac 1N \sum_{k=1}^N (X^k-M/\delta)^2 \, .$$ Here, we subtract $M/\delta$ since in the absence of an eavesdropper the average value of $X$ will be (approximately) $M/\delta$ such that $\text{m}_2(X)$ simplifies to the variance. This holds because the first moments of the amplitude and phase measurements in the absence of Eve are $0$, which implies that the first moments of the discretized value will be approximately $M/\delta$.
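These statistics can be computed directly from the discretized strings; a short sketch (function names are ours):

```python
import numpy as np

def d(X, Y):
    """Average distance d(X, Y) of Eq. (eq:Dist)."""
    return np.mean(np.abs(np.asarray(X, float) - np.asarray(Y, float)))

def d2(X, Y):
    """Average second moment of the difference, Eq. (eq:DistVar)."""
    return np.mean(np.abs(np.asarray(X, float) - np.asarray(Y, float)) ** 2)

def m2(X, M, delta):
    """Average second moment m_2(X) around the central index M/delta, Eq. (eq:Var)."""
    return np.mean((np.asarray(X, float) - M / delta) ** 2)

# Example on two short strings over the alphabet {1, ..., 80} (M = 20, delta = 0.5):
X, Y = [39, 41, 40, 38], [40, 40, 41, 38]
print(d(X, Y), d2(X, Y), m2(X, M=20, delta=0.5))
```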
The Protocol {#sec:Protocol}
------------
The protocol depends on the total number of prepared two-mode squeezed states $N_{\textnormal{tot}}$, the probability that Alice and Bob perform a phase measurement $r$, the interval length for the data generation $\delta$, the threshold parameters $\alpha$ and $M$ (see Section \[sec:Setup\]), and a fixed value $d_0>0$ used in the parameter estimation test. All classical communication is assumed to be authenticated. The different steps in the protocol are as follows.
1. *Distribution & Measurement:* Alice prepares $N_{\textnormal{tot}}$ two-mode squeezed states and sends one mode of each to Bob, upon which both measure, for each mode, the phase with probability $r$ and the amplitude with probability $(1-r)$. Moreover, Bob applies the test ${\mathcal{T}}(\alpha,T)$, that is, he checks if $|q_{t^1}|,|p_{t^2}|\leq \alpha$ is satisfied for all of the $N_{\textnormal{tot}}$ incoming modes and aborts the protocol otherwise (see Figure \[fig:Energy\]).
2. *Data Generation:* Alice and Bob publicly announce their basis choices. We denote by $n$ and $k$ the number of events in which Alice and Bob both chose the amplitude and the phase measurement, respectively. From the events with the same basis choice, they use the amplitude measurements to form $X_A$ and $X_B$ in ${\mathcal{X}}^n$ and the phase measurements to form $Y_A^{{\textnormal{PE}}}$ and $Y_B^{{\textnormal{PE}}}$ in ${\mathcal{X}}^k$ according to Section \[sec:Setup\]. Alice and Bob further each form a string containing all of their discretized phase measurements, denoted by $Y_A^{P}$ and $Y_B^{P}$, respectively, where we assume that both have length $m$.
3. *Parameter Estimation:* Using classical communication, they compute the distance $d^{{\textnormal{PE}}}=d(Y_A^{{\textnormal{PE}}},Y_B^{{\textnormal{PE}}})$ as in and check if $d^{{\textnormal{PE}}} \leq d_0$. If this does not hold they abort the entire protocol. Otherwise, they proceed with the protocol and compute the second moment of the distance $V_d^{\textnormal{PE}}= d_2(Y_A^{{\textnormal{PE}}},Y_B^{{\textnormal{PE}}}) $ according to . Moreover, they individually compute the average second moments of all their phase measurements $V_{Y_A}^{\textnormal{PE}}=\text{m}_2(Y_A^P)$ and $V_{Y_B}^{\textnormal{PE}}=\text{m}_2(Y_B^P)$ according to .
4. *Classical Post-Processing:* They run a classical post-processing protocol as described in Section \[sec:ClPostPro\], first applying a one-way reverse reconciliation protocol and then hashing the corrected raw keys $X^{{\textnormal{c}}}_A$ and $X_B$ to final keys $S_A$ and $S_B$ of length $\ell$.
The crucial point is now to obtain a tight bound on the number of secure bits $\ell$ one can generate by the above protocol. Such a bound always relies on a set of assumptions. Such assumptions can, for instance, be a restriction on the attacks of the eavesdropper or simplifications used to model the experimental setup. We thus start with a detailed description of our assumptions before presenting the key length formula.
We always assume that Alice’s and Bob’s laboratories are secure and closed, that is, no unwanted information leaks from their laboratories. It is further very important to assume that all random numbers used for the basis choice and the classical post-processing are truly random and independent. This implies, for instance, that Alice’s and Bob’s basis choices are random and independent, which is crucial for the security. While these assumptions are at the basis of most security analyses, the following ones are specific to our measurement setup and security proof.
1. *Assumptions.* We assume that Bob’s sequential measurements of the values in ${\mathcal{X}}$ are independent and correspond to perfect amplitude and phase measurements of the intervals $I_k$ defined in Section \[sec:Setup\]. Hence, they can be modeled by integration of the spectrum of one-mode amplitude and phase operators with a perfect phase difference of $\pi/2$ [^2]. The same applies to Bob’s test measurement performed in ${\mathcal{T}}(\alpha,T)$.
We note that (A) includes the assumption that the local phase reference used by Bob is trusted. This can be practically justified by either monitoring the phase reference or generating it independently directly on Bob’s side. For possible attacks on the local oscillator and countermeasures see, for instance, [@jouguet2013b]. We emphasize that we do not make any assumptions on Eve’s attacks and that there are no requirements on Alice’s measurement device. The latter is sometimes referred to as one-sided device independent [@tomamichel11].
As we will discuss in detail in Section \[sec:SecAnalysis\], security will be inferred from the uncertainty principle with quantum memory for continuous variable systems [@Berta13]. The principle says that Eve’s information about the amplitude measurements is bounded by an overlap term of Bob’s measurements expressed by $$\label{eq:overlap1}
c(\delta) \approx \delta ^2 /2\pi \, ,$$ and the uncertainty of Alice about Bob’s phase measurement. The latter can be estimated using the distance $d_0$ and the function $$\label{eq:gamma}
\gamma(t) = (t+\sqrt{1+t^2})\Big(\frac{t}{\sqrt{1+t^2}-1}\Big)^{t} \, .$$
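Both quantities are straightforward to evaluate; the following minimal sketch (function names are ours) uses the approximation for $c(\delta)$ quoted above, which is valid for $\delta \leq 1$:

```python
import math

def c_approx(delta):
    """Overlap c(delta) ~ delta^2 / (2*pi), valid for spacings delta <= 1."""
    return delta**2 / (2 * math.pi)

def gamma_bound(t):
    """gamma(t) = (t + sqrt(1+t^2)) * (t / (sqrt(1+t^2) - 1))**t, Eq. (eq:gamma)."""
    s = math.sqrt(1 + t * t)
    return (t + s) * (t / (s - 1)) ** t

# Example with delta = 0.2 and an estimated distance of 0.3 index units:
# the two numbers are the per-signal uncertainty term and the correlation penalty (in bits).
print(-math.log2(c_approx(0.2)), math.log2(gamma_bound(0.3)))
```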
Moreover, we use the test ${\mathcal{T}}(\alpha,T)$ to upper bound the probability that Bob measures an amplitude or phase quadrature larger than $M$ by (see Lemma \[lem:FailureProb\]) $$n \Gamma(M,T,\alpha) \propto n \exp\big({{\ -\frac{(\mu M - \alpha)^2}{T(1+\lambda)/2}}}\big)\,$$ where $\mu=\sqrt{\frac{1-T}{2T}}$. Hence, the probability can be made sufficiently small by tuning the parameters $\alpha$, $T$, and $M$. Using large deviation bounds for the statistical estimation of the raw key sample we then obtain the following bound on the key length.
\[thm:KeyLength\] Let us consider the above protocol with parameters $(N_{\textnormal{tot}},r,\delta,M,\alpha,d_0)$ and assume that the conditions in (A) are satisfied. We further assume that the reconciliation protocol broadcasts $\ell_{\textnormal{IR}}$ bits of classical information and that the correctness test based on two-universal hashing onto an alphabet of size $1/\epsilon_c$ is passed. Then, if the protocol passes, an $\epsilon_{c}$-correct and $\epsilon_s$-secret key of length $$\label{thm,eq:KeyLengthCoherent}
n [\log \frac{1}{c(\delta)}-\log \gamma(d_0 + \mu)] - \ell_{{\textnormal{IR}}} - \log \frac{1}{\epsilon_1^2\epsilon_c} +2 ,$$ can be extracted, where $$\label{eq:mu}
\mu= \sqrt{2\log \xi^{-1} } \frac{(n+k) \sigma_* }{k\sqrt{n}} + \frac{4 (M/\delta) \log \xi^{-1}}{3} \frac{n+k}{n k} \, ,$$ with $$\begin{aligned}
\sigma_*^2 = & \ \frac kN (V_{d}^{\textnormal{PE}}- \frac kN (d^{\textnormal{PE}})^2) \ + \frac kN \big( V_{Y_A}^{\textnormal{PE}}+ V_{Y_B}^{\textnormal{PE}}+ 2 \frac{\nu}{\delta^2} \big) \nonumber
\\
& + 2 \frac kN \sqrt{(V_{Y_A}^{\textnormal{PE}}+\frac \nu{\delta^2}) (V_{Y_B}^{\textnormal{PE}}+\frac \nu{\delta^2})} \, , \label{eq:Sigma}\end{aligned}$$ for the smallest $\nu$ for which $$\begin{aligned}
\label{eq:xi}
\xi =& \, \Big(\epsilon_s - \epsilon_1 - 2\sqrt{2 n \ \Gamma(M,T,\alpha) }\Big)^2 \\
& \, - 2\exp\Big(-2(\nu/M)^2\frac{n m^2}{(n+m) (m+1)} \Big) \nonumber \end{aligned}$$ is positive and $\epsilon_1 - 2\sqrt{1 -p_E^n } < \epsilon_s$. In the case that there is no $\nu$ such that $\xi$ is positive or $\epsilon_1 - 2\sqrt{2 \Gamma(M,T,\alpha) } < \epsilon_s$ is not satisfied, the key length is $0$.
The proof of the above theorem will be given in Section \[sec:SecAnalysis\]. Before that we present some estimates of the obtained key rates for experimentally feasible parameters.
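For such estimates, the key length of Theorem \[thm:KeyLength\] can be evaluated numerically. The following sketch (with our own variable names; all inputs are to be supplied from the protocol parameters and the parameter estimation data) follows the formulas of the theorem literally for one fixed value of $\nu$; in the key-rate plots, $r$ and $\delta$ are additionally optimized:

```python
import math

def key_length_theorem(n, k, m, M, delta, T, alpha, d0, leak_ir,
                       eps_s, eps_1, eps_c, d_pe, Vd_pe, Vya_pe, Vyb_pe, nu):
    """Key length of Theorem [thm:KeyLength] for one fixed nu; logs base 2 as in the text.
    Assumes the parameter-estimation inputs are consistent (e.g. sigma_*^2 >= 0)."""
    def gam(t):  # gamma(t) of Eq. (eq:gamma)
        s = math.sqrt(1 + t * t)
        return (t + s) * (t / (s - 1)) ** t
    c = delta**2 / (2 * math.pi)                 # overlap c(delta), approximation for delta <= 1
    mu_t = math.sqrt((1 - T) / (2 * T))          # mu of the energy test
    lam = ((2 * T - 1) / T) ** 2
    Gam = 0.5 * (math.sqrt(1 + lam) + math.sqrt(1 + 1 / lam)) \
        * math.exp(-(mu_t * M - alpha) ** 2 / (T * (1 + lam) / 2))   # Lemma [lem:FailureProb]
    if eps_s - eps_1 - 2 * math.sqrt(2 * n * Gam) <= 0:              # side condition of the theorem
        return 0
    xi = (eps_s - eps_1 - 2 * math.sqrt(2 * n * Gam)) ** 2 \
        - 2 * math.exp(-2 * (nu / M) ** 2 * n * m**2 / ((n + m) * (m + 1)))   # Eq. (eq:xi)
    if xi <= 0:
        return 0
    log_xi_inv = math.log2(1 / xi)
    N = n + k
    sig2 = (k / N) * (Vd_pe - (k / N) * d_pe**2) \
        + (k / N) * (Vya_pe + Vyb_pe + 2 * nu / delta**2) \
        + 2 * (k / N) * math.sqrt((Vya_pe + nu / delta**2) * (Vyb_pe + nu / delta**2))  # Eq. (eq:Sigma)
    mu = math.sqrt(2 * log_xi_inv) * N * math.sqrt(sig2) / (k * math.sqrt(n)) \
        + (4 * (M / delta) * log_xi_inv / 3) * N / (n * k)                              # Eq. (eq:mu)
    ell = n * (math.log2(1 / c) - math.log2(gam(d0 + mu))) \
        - leak_ir - math.log2(1 / (eps_1**2 * eps_c)) + 2
    return max(0, math.floor(ell))
```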
![\[fig:EC95\] The plot shows the key rate $\ell/N_{\textnormal{tot}}$ for squeezing and antisqueezing of $11$dB and $16$dB and reconciliation efficiency $\beta =0.95$ depending on the number of signals $N_{\textnormal{tot}}$. Bob’s total losses $\eta_B$ are $0.45$ (solid line), $0.50$ (dashed line) and $0.55$ (dash-dotted line). Since the source is assumed to be in Alice’s laboratory her losses are set to $\eta_A=0$. We set the excess noise $\eta_{\textnormal{ex}}= 0.01$, the security parameters to $\epsilon_s=\epsilon_c=10^{-9}$, and the test parameters to $T=0.99$ and $\alpha = 28$. ](fig2.pdf){width="8.8cm"}
Discussion of Key Rates {#sec:KeyRates}
-----------------------
For the following, we consider a two-mode squeezed state with squeezing $\lambda_{\textnormal{sq}}$ and antisqueezing $\lambda_{\textnormal{asq}}$ given by $$\label{eq:CM}
\Gamma = \left( \begin{array}{cc}
\Gamma_A & \Gamma_{\text{cor}} \\
\Gamma_{\text{cor}} & \Gamma_B
\end{array} \right) \, ,$$ where $\Gamma_A = \Gamma_B = a \idty$ and $ \Gamma_{\text{cor}} = \sqrt{a^2-b^2} Z$ with $a=\frac 12 (10^{-\frac{\lambda_{\textnormal{sq}}}{10}} + 10^{\frac{\lambda_{{\textnormal{asq}}}}{10}})$, $b=10^{\frac{\lambda_{\textnormal{asq}}-\lambda_{\textnormal{sq}}}{20}}$ and $Z={\textnormal{diag}}(1,-1)$. The fiber losses of the channel are simulated by mixing the signal with vacuum at a beam splitter. We quantify the losses on Alice’s and Bob’s arm by $\eta_A$ and $\eta_B$, which specify the reflectivity of the beam splitter, and thus, the amount of vacuum in the outgoing signal. We further include excess noise $\eta_{\textnormal{ex}}$ modeled as a classical Gaussian noise channel acting on the variances of quadratures as $V\mapsto V + \eta_{\textnormal{ex}}tV_{{\textnormal{vac}}}$ with $t$ the transmittance of the channel and $V_{{\textnormal{vac}}}$ the variance of the vacuum (see, e.g., [@Lodewyck2007; @Weedbrook2011]). This transforms the covariance matrix above to $$\label{eq:LossModel}
\left( \begin{array}{cc}
\bar\eta_A \Gamma_A + (\eta_A+\eta_{\textnormal{ex}}\bar\eta_A )\Gamma_{{\textnormal{vac}}} & \sqrt{\bar\eta_A\bar\eta_B}\, \Gamma_{\text{cor}} \\
\sqrt{\bar \eta_A \bar\eta_B} \, \Gamma_{\text{cor}} & \bar\eta_B \Gamma_B+ (\eta_B+\eta_{\textnormal{ex}}\bar\eta_B)\Gamma_{{\textnormal{vac}}}
\end{array} \right)$$ where $\bar \eta_A = 1-\eta_A$, similar $\bar \eta_B$ and $\Gamma_{\textnormal{vac}}$ denotes the covariance matrix of the one-mode vacuum.
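A minimal numerical sketch of this state and channel model (function names are ours; the squeezing convention follows the expressions for $a$ and $b$ above, and the one-mode vacuum covariance is taken as the identity in the units used here):

```python
import numpy as np

def epr_covariance(sq_db, asq_db):
    """Two-mode squeezed-state covariance matrix of Eq. (eq:CM)."""
    v_sq, v_asq = 10 ** (-sq_db / 10), 10 ** (asq_db / 10)
    a = 0.5 * (v_sq + v_asq)
    b = 10 ** ((asq_db - sq_db) / 20)
    Z = np.diag([1.0, -1.0])
    GA = GB = a * np.eye(2)
    Gcor = np.sqrt(a**2 - b**2) * Z
    return np.block([[GA, Gcor], [Gcor, GB]])

def apply_losses(G, eta_A, eta_B, eta_ex):
    """Loss and excess-noise channel of Eq. (eq:LossModel) acting on the covariance matrix."""
    tA, tB = 1 - eta_A, 1 - eta_B
    GA, GB, Gcor = G[:2, :2], G[2:, 2:], G[:2, 2:]
    vac = np.eye(2)  # one-mode vacuum covariance matrix (assumed normalized to the identity)
    GA_out = tA * GA + (eta_A + eta_ex * tA) * vac
    GB_out = tB * GB + (eta_B + eta_ex * tB) * vac
    Gcor_out = np.sqrt(tA * tB) * Gcor
    return np.block([[GA_out, Gcor_out], [Gcor_out, GB_out]])

# Example: 11 dB / 16 dB squeezing, 50% losses on Bob's arm, 1% excess noise.
G = apply_losses(epr_covariance(11, 16), eta_A=0.0, eta_B=0.5, eta_ex=0.01)
print(np.round(G, 3))
```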
In the protocol, the scaling factors for Alice’s measurement $t_q$ and $t_p$ have to be adjusted. In an experiment, $t_q$ and $t_p$ should be chosen such that the distance $d^{{\textnormal{PE}}}$ is small. A convenient way to achieve this is to determine $\tilde Q_A$ and $\tilde P_A$ such that the second moments of Alice’s and Bob’s (continuous) amplitude and phase measurements match. These values can be determined locally and communicated in the classical post-processing step.
The important parameter of the protocol that is directly related to the state is $d_0$, which should be chosen such that with high probability the distance $d^{\textnormal{PE}}$ computed for many samples of the Gaussian state given by the covariance matrix is smaller than $d_0$.
The leakage in the reconciliation protocol $\ell_{{\textnormal{IR}}}$ is set to [@Leverrier2010] $$\ell_{{\textnormal{IR}}} = H(X_B) - \beta I(X_B:X_A) \, ,$$ where $H(X_B) $ denotes the Shannon entropy of $X_B$, $I(X_A:X_B)$ the mutual information between $X_A$ and $X_B$, and $\beta$ the efficiency of the reconciliation protocol. The efficiency in the Shannon limit is $\beta=1$, while $\beta<1$ for any finite $n$.
It is now important that the protocol is robust, that is, it passes with high probability if no eavesdropper is present. This means that the test ${\mathcal{T}}(\alpha,T)$ has to pass with high probability for the above two-mode squeezed state. The probability that ${\mathcal{T}}(\alpha,T)$ fails can easily be upper bounded, using the Gaussian tail bound from the proof of Lemma \[lem:FailureProb\], by $$\sqrt{ 8\pi} \sigma_t N_{\textnormal{tot}}\text{e}^{-\alpha^2 /(2 \sigma_t^2)}$$ where $\sigma_t$ is the maximum of the standard deviations of the outcome distributions of $q_{t^1}$ and $ p_{t^2}$. Hence, by setting $\alpha = \sqrt{2\sigma_t^2 \ln(\sqrt{8\pi}\sigma_t N_{\textnormal{tot}}/\epsilon_{\mathcal{T}})}$ we ensure that the test ${\mathcal{T}}(\alpha,T)$ fails with probability smaller than $\epsilon_{{\mathcal{T}}}$. Depending on $\alpha$ and $T$, we then choose $M$ such that $2\sqrt{2 n \Gamma(M,T,\alpha)}=\epsilon_2$ is smaller than $\epsilon_s$.
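The threshold $\alpha$ can thus be computed directly; a small sketch with a placeholder value for $\sigma_t$ (which, for $T$ close to one, is of the order of the vacuum standard deviation):

```python
import math

def test_threshold(sigma_t, N_tot, eps_T):
    """Threshold alpha such that the energy test fails with probability smaller than eps_T."""
    return math.sqrt(2 * sigma_t**2 * math.log(math.sqrt(8 * math.pi) * sigma_t * N_tot / eps_T))

# Placeholder inputs: sigma_t of order one, 10^9 signals, abort probability 10^-9.
print(test_threshold(sigma_t=1.5, N_tot=1e9, eps_T=1e-9))
```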
![\[fig:EC90\] The plot shows the key rate $\ell/N_{\textnormal{tot}}$ for squeezing and antisqueezing of $11$dB and $16$dB and reconciliation efficiency $\beta =0.90$ depending on the number of signals $N_{\textnormal{tot}}$. Bob’s total losses $\eta_B$ are $0.40$ (solid line), $0.45$ (dashed line) and $0.50$ (dash-dotted line). The other parameters are as in Figure \[fig:EC95\]. ](fig3.pdf){width="8.8cm"}
We define the key rate as $\ell/N_{\textnormal{tot}}$ where $\ell$ is taken as in Theorem \[thm:KeyLength\] and optimized over the probability $r$ for choosing amplitude or phase. For that we simply express $n$, $k$, and $m$ in terms of $N_{\textnormal{tot}}$ and $r$. We further optimize the key rate over the spacing $\delta$ under the constraint $1\geq \delta \geq 0.01$ to account for the resolution of the detector. The security parameters are chosen as $\epsilon_s=\epsilon_c=10^{-9}$. Moreover, we set $\epsilon_{{\mathcal{T}}}=10^{-9}$, $T=0.99$ and $\epsilon_2 = \epsilon_s/10$, for which we find that $\alpha \leq 28 $ and $M\leq 8000$ in units of $\hbar =2$ for relevant values of $N_{\textnormal{tot}}$ and realistic squeezing strengths.
In Figures \[fig:EC95\] and \[fig:EC90\] we plot the key rate against the total number of exchanged signals $N_{\textnormal{tot}}$ for a reconciliation efficiency of $\beta=0.95$ and $\beta=0.9$, respectively. The squeezing and antisqueezing are chosen as $\lambda_{\textnormal{sq}}=11$ dB and $\lambda_{\textnormal{asq}}= 16$ dB, which has been achieved experimentally in the laboratory [@Eberle11] at $1550$ nm. Note that these squeezing values already include the efficiency of the homodyne detection. We further set the excess noise to $\eta_{{\textnormal{ex}}}=0.01$ in the plots. We note that a reconciliation efficiency of about $0.9$ is more realistic with current non-binary error correction codes. The maximal amount of losses for which a secure key rate is still obtained is slightly above $55$% for $\beta =0.95$ and $50$% for $\beta =0.9$. The key rate in dependence of the distance for different values of $\beta$ is plotted in Figure \[fig:Dist\]. For that we used a loss rate of $0.20$ dB per km and additional coupling losses of $0.05$. We see that for the same squeezing values as above and an error correction efficiency of $0.95$, a positive key rate can be obtained for over $16$ km.
![\[fig:Dist\] The key rate is plotted against the distance for $N_{\textnormal{tot}}=10^9$, squeezing and antisqueezing of $11$dB and $16$dB and reconciliation efficiency $\beta$ of $0.95$ (solid line), $0.90$ (dashed line) and $0.85$ (dash-dotted line). We assumed losses of $0.20$dB per km plus $0.05$ coupling losses. All the other parameters are as in Figure \[fig:EC95\]. ](fig4.pdf){width="8.8cm"}
Security Analysis {#sec:SecAnalysis}
=================
Estimation of Eve’s Information by the Uncertainty Principle with Quantum Memories {#sec:UR}
----------------------------------------------------------------------------------
The first step of the security proof is the same as in [@Furrer12] except that the roles of Alice and Bob are exchanged and that the basis choices for parameter estimation and key generation are different. We start with the definition of the min- and max-entropies.
Let $X$ be a random variable over a countable set ${\mathcal{X}}$ distributed according to $p_x$. Suppose further that $X$ is correlated to a quantum system B associated with Hilbert space ${\mathcal{H}}_B$ and corresponding state space ${\mathcal{S}}({\mathcal{H}}_B)=\{\rho_B |\, \rho_B\geq 0,\, {\operatorname{tr}}\rho_B = 1\}$. The min-entropy of a classical quantum state $\rho_{XB}=\sum_{x} p_x \kettbra x \otimes \rho_B^x$ with $\rho_B^x\in{\mathcal{S}}({\mathcal{H}}_B)$ is defined as the negative logarithm of the optimal success probability to guess $X$ given access to the quantum memory $B$ [@koenig08]. In formulas, this is $$\label{minEnt}
H_{\min}(X|B)_\rho = -\log \Big( \sup_{\{E_x\}} \sum_x p_x {\operatorname{tr}}(E_x \rho_B^x) \Big) \, ,$$ where the supremum is taken over all positive operator valued measures (POVM) $\{E_x\}$, i.e., $E_x\geq 0$ and $\sum_x E_x =\idty$. A further entropy related to the min-entropy via the uncertainty relation is the max-entropy which is defined as $$H_{\max}(X|B)= 2 \log \Big( \sup_{\sigma_B} \sum_x \sqrt{F(p_x\rho_B^x,\sigma_B)} \Big) \, ,$$ where the supremum runs over all states $\sigma_B\in{\mathcal{S}}({\mathcal{H}}_B)$ and $F(\rho,\sigma)=({\operatorname{tr}}\vert\sqrt{\rho}\sqrt{\sigma}\vert)^2 $ denotes the fidelity.
The corresponding smooth min- and max-entropy are then obtained by optimizing the min- and max-entropy over nearby states. The closeness of states is measured with the purified distance ${\mathcal{P}}(\rho,\sigma) = \sqrt{1-F(\rho,\sigma)}$ [@Tomamichel09]. We also allow for sub-normalized states defining the smooth min- and max-entropy as $$\begin{aligned}
H_{\min}^\epsilon(X|B)_\rho &= \sup_{\tilde\rho_{XB}} H_{\min}(X|B)_{\tilde\rho} \, , \\
H_{\max}^\epsilon(X|B)_\rho &= \inf_{\tilde\rho_{XB}} H_{\max}(X|B)_{\tilde\rho} \, ,\end{aligned}$$ where the supremum and infimum are taken over sub-normalized states, i.e., $\tilde\rho_{XB} \geq 0$ and ${\operatorname{tr}}\tilde\rho_{XB} \leq 1$, with ${\mathcal{P}}(\rho_{XB},\tilde\rho_{XB})\leq \epsilon$.
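For purely classical side information the optimal strategy in the definition of the min-entropy is simply to guess, for each value of the side information, the most probable $X$; the following small sketch (our own example numbers) illustrates this special case of the definition:

```python
import numpy as np

def h_min_classical(p_xy):
    """H_min(X|Y) for a classical joint distribution p_xy[x, y]:
    minus log2 of the average optimal guessing probability of X given Y."""
    p_guess = np.sum(np.max(p_xy, axis=0))   # guess the most likely x for each y
    return -np.log2(p_guess)

# Example: X is a uniform bit and Y equals X with probability 0.9.
p = np.array([[0.45, 0.05],
              [0.05, 0.45]])
print(h_min_classical(p))   # -log2(0.9), roughly 0.152 bits
```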
Let us now consider the situation in the protocol. According to the key length formula of Section \[sec:ClPostPro\], we have to bound the smooth min-entropy of the state associated with Bob’s raw key $X_B$ and Eve’s system $E$. Suppose that $\rho_{A^nB^nE}$ denotes the state of the $n$ modes on which the amplitude measurements for the raw key generation are performed, conditioned on the event that the protocol passes. The state $\rho_{X_BE}$ of $X_B$ and $E$ can then be obtained by measuring the amplitudes of $B^n$ according to the discretization induced by the intervals $\{I_k\}$. But since the intervals $I_1$ and $I_{2M/\delta}$ are of infinite length, any uncertainty relation becomes trivial for the associated measurements.
In order to avoid this problem, let us introduce phase and amplitude measurement with discretization $\{\tilde I_k\}_{k\in\mathbb Z}$, where $$\begin{aligned}
\tilde I_k =(-M +(k-1)\delta,-M +k\delta] , \ k\in\mathbb Z \, .\end{aligned}$$ We note that $\tilde I_k = I_k$ for $k=2,3,...,2M/\delta-1$. We denote by $\tilde X_B$ ($\tilde Y_B^{{\textnormal{key}}}$) the classical random variable corresponding to Bob’s discretized amplitude (phase) measurement outcome $k\in\mathbb Z$. Moreover, the classical quantum state of $\tilde X_B$ ($\tilde Y_B^{{\textnormal{key}}}$) and $A^nE$ is denoted by $\rho_{\tilde X_B A^nE}$ ($\rho_{\tilde Y_B^{\textnormal{key}}A^nE}$). As we will see below, the energy test ensures that the purified distances between $\rho_{ X_BE}$ and $\rho_{\tilde X_BE}$ as well as between $\rho_{ Y^{\textnormal{key}}_BA^n}$ and $\rho_{\tilde Y^{\textnormal{key}}_BA^n}$ are small.
Let us assume for now that ${\mathcal{P}}(\rho_{ X_BE},\rho_{\tilde X_BE})$ and ${\mathcal{P}}(\rho_{ Y^{\textnormal{key}}_BA^n},\rho_{\tilde Y^{\textnormal{key}}_BA^n})$ are smaller than $\tilde\epsilon$. We then find that $$\begin{aligned}
\label{eq:boundMin}
H_{\min}^{\epsilon +\tilde\epsilon}(X_B|E)_\rho &\geq H_{\min}^{\epsilon}(\tilde X_B|E)_{\rho} \, , \\
H_{\max}^{\epsilon +\tilde\epsilon}(\tilde Y_B|A^n)_\rho &\leq H_{\max}^{\epsilon}(Y_B^{\textnormal{key}}|A^n)_{\rho} \, ,\label{eq:boundMax}\end{aligned}$$ which is a simple consequence of the definition of smooth min- and max-entropy. The uncertainty relation in [@Furrer11] then provides the inequality [^3] $$\label{eq:URsmooth}
H_{\min}^{\epsilon}(\tilde X_B|E)_{\rho} \geq - n \log c(\delta) - H_{\max}^{\epsilon}(\tilde Y_B|A^n)_{\rho} \, ,$$ with $$\label{eq:overlap2}
c(\delta) = \frac{1}{2\pi}\delta^2\cdot S_{0}^{(1)}\left(1,\frac{\delta^2}{4}\right)^{2} \, ,$$ where $S_{0}^{(1)}(\cdot,x)$ is the $0$th radial prolate spheroidal wave function of the first kind. In the regime of interest $\delta \leq 1$, $c(\delta)$ can be approximated as in .
If we now combine the above inequalities with the key length formula from Section \[sec:ClPostPro\], we obtain a lower bound on the key length given by $$\label{eq:KeyLength2}
-n\log c(\delta) - H_{\max}^\epsilon(Y_B^{\textnormal{key}}|A^n)_\rho - \ell_{{\textnormal{IR}}} - \log \frac{1}{\epsilon_1^2\epsilon_c} +2 \, ,$$ where $\epsilon \leq (\epsilon_s-\epsilon_1)/(2p_{\textnormal{pass}}) - 2\tilde\epsilon$. In the next section we use the energy test to give a bound on $\tilde\epsilon$.
Failure Probability of the Energy Test {#sec:Test}
--------------------------------------
The goal of this section is to give a bound on the purified distance of $\rho_{ X_BE}$ and $\rho_{\tilde X_BE}$ as well as $\rho_{ Y^{\textnormal{key}}_BA^n}$ and $\rho_{\tilde Y^{\textnormal{key}}_BA^n}$. It turns out that they can be bounded by the probability that the energy test is passed although an amplitude or phase larger than $M$ is measured. We start with $\rho_{ X_BE}$ and $\rho_{\tilde X_BE}$.
In a first step we compute that $$\label{eq:Distance}
{\mathcal{P}}(\rho_{ X_BE},\rho_{\tilde X_BE}) \leq \sqrt{1-\Pr[ \wedge_i \{|q_i| \leq M\} | \rho_{A^nB^nE}]^2} \,$$ where $\{|q_i| \leq M\} $ denotes the event that the absolute value of the continuous amplitude measurement of Bob’s ith mode is smaller than $M$. This follows directly from the properties of the fidelity of a classical quantum state $$\begin{aligned}
{F(\rho_{ X_BE},\rho_{\tilde X_BE})}^{1/2}
&= \sum_{k=2}^{2M/\delta-1} F(p_k\rho^k_E, p_k\rho_E^k)^{1/2} \\
&+ \sum_{k=1,{2M}/{\delta}} F(p_k\rho^k_E + q_k\sigma^k_E, p_k\rho_E^k)^{1/2} \\
&\geq \sum_{k=1}^{2M/\delta} F(p_k\rho^k_E, p_k\rho_E^k)^{1/2} \, \\
& = \sum_{k=1}^{2M/\delta} p_k \, ,\end{aligned}$$ where $p_k$ is the probability of measuring an amplitude in the interval $\tilde I_k$, $\rho_E^k$ the corresponding conditional state of Eve, and $q_i$, $\sigma^i_E$ for $i=1,2M/\delta$ are defined similarly for amplitude measurements smaller than $-M$ and larger than $M$, respectively. The inequality follows from $F(\rho +\sigma,\rho)^{1/2}\geq F(\rho,\rho)^{1/2}$ for any two non-normalized states $\rho$ and $\sigma$. Note now that the last line of the above computation is nothing else than $\Pr[ \wedge_i \{|q_i| \leq M\} | \rho_{A^nB^nE}]$, such that the bound follows from the definition of the purified distance.
We then denote the probability that Bob measures an amplitude with absolute value larger than $M$, conditioned on the protocol passing, by $$\begin{aligned}
p_{\textnormal{fail}}& = \Pr[ \neg \wedge_i \{|q_i| \leq M\} | {\textnormal{pass}}] \\
& = 1- \Pr[ \wedge_i \{|q_i| \leq M\} | \rho_{A^nB^nE}] \, , \label{eq: Prob1}\end{aligned}$$ where the second equality follows since $\rho_{A^nB^nE}$ is the state conditioned on the protocol passing. Using $\neg \wedge_i \{|q_i| \leq M\} = \vee_i\{|q_i| > M\}$ and Bayes’ theorem, we obtain by simple manipulations $$\begin{aligned}
p_{\textnormal{fail}}& = \frac{1}{{p_{\textnormal{pass}}}} \Pr [ \vee_i\{|q_i| > M\} \wedge {\textnormal{pass}}] \\
& \leq \frac{1}{{p_{\textnormal{pass}}}} \sum_i \Pr [|q_i |> M \wedge {\textnormal{pass}}] \\
& \leq \frac{1}{{p_{\textnormal{pass}}}} \sum_i \Pr [|q_i| > M \wedge |q_{t^1_i}|\leq \alpha ] \, ,\end{aligned}$$ where the last inequality holds since passing the protocol implies that the energy test is passed, which in turn implies that $|q_{t^1_i}|\leq \alpha$. We now bound each term individually by $$\begin{aligned}
&\Pr \big[|q_i| > M \wedge |q_{t^1_i}|\leq \alpha \big] \\
& = \int_{|x|\geq M} \Pr[q_i=x] \Pr \big[|q_{t^1_i}|\leq \alpha \ \big| \ q_i = x \big] \ {\textnormal{d}}x \\
& \leq \sup_{|x|\geq M} \Pr \big[|q_{t^1_i}|\leq \alpha \ \big| \ q_i = x \big] \, , \end{aligned}$$ where the supremum in the last line refers to the essential supremum.
We then show the following lemma.
\[lem:FailureProb\] Let us assume that the energy test ${\mathcal{T}}(\alpha,T)$ is passed and set $\mu=\sqrt{\frac{1-T}{2T}}$ and $\lambda = (\frac{2T-1}{T})^2$. If $\alpha\leq \mu M$, then it holds that $\sup_{|x|\geq M} \Pr [|q_{t^1_i}|\leq \alpha \ | \ q_i = x ]$ is upper bounded by $$\label{lem,eq:FailureProb}
\Gamma(M,T,\alpha):=\frac{\sqrt{1+\lambda} + \sqrt{1+\lambda^{-1}}}{2} \exp\big({{\ -\frac{(\mu M - \alpha)^2}{T(1+\lambda)/2}}}\big)\, .$$
In the following, we suppress the index $i$ since the argument applies independently to all possible incoming modes. We further label the different modes in the energy test setup as in Figure \[fig:Energy\]; in particular, the amplitude entering the bound is measured on the transmitted mode $s'$. We are interested in computing $\Lambda_x=\Pr [|q_{t^1}|\leq \alpha \ | \ q_{s'} = x ]$, and without loss of generality we can assume that $x\geq 0$.
In order to compute $\Lambda_x$, we write the characteristic function $\chi_{\text{out}}$ of the output state of modes $s'$, $t^1$, and $t^2$ in terms of the characteristic function $\chi_{\text{in}}$ of the input state of modes $a$, $b$, and $s$. Let $B$ be the matrix describing the linear transformation of the coordinates of the phase space induced by the beam splitters, that is, $r_{\text{out}} = B r_{\text{in}}$, where $r_{\text{in}}=(q_a,p_a,q_b,p_b,q_s,p_s)$ and $r_{\text{out}}=(q_{s'},p_{s'},q_{t^1},p_{t^1},q_{t^2},p_{t^2})$. For the following it will be important that $q_{s'} = \sqrt{T}q_s + \sqrt{1-T} q_a$ and $q_{t^1}= \sqrt{1/2} q_b + \sqrt{T/2} q_a + \sqrt{(1-T)/2} q_s$.
We then have that $\chi_{\text{out}}(r_{\text{out}}) = \chi_{\text{in}}(B^{-1}r_{\text{out}})$, where $\chi_{\text{in}}(r_{\text{in}}) = \chi_{{\textnormal{vac}}}(q_a,p_a) \chi_{{\textnormal{vac}}}(q_b,p_b) \chi_{s}(q_s,p_s) $ has product form. Integrating over all output modes under the condition $|q_{t^1}|\leq \alpha$ and changing variables $r_{\text{in}} = B^{-1} r_{\text{out}}$, we obtain that the probability $\Pr [|q_{t^1}|\leq \alpha ]$ is given by $$\begin{aligned}
\int_{\tilde A} \chi_{{\textnormal{vac}}}(q_a) \chi_{{\textnormal{vac}}}(q_b) \chi_s(q_s) \ {\textnormal{d}}q_a \ {\textnormal{d}}q_b \ {\textnormal{d}}q_s \, ,\end{aligned}$$ where $\chi_{*}(q)=\int \ {\textnormal{d}}p \chi_{*}(q,p)$ and $\tilde A$ is determined by the condition $$|q_{t^1}|=| \sqrt{1/2} q_b + \sqrt{T/2} q_a + \sqrt{(1-T)/2} q_s | \leq \alpha \, .$$ In order to condition on $q_{s'} = x$, we set $\chi_s(q_s) = \delta( q_s - [\sqrt{1/T} x + \sqrt{(1-T)/T} q_a])$ where $\delta$ denotes the Dirac delta distribution and we used that $q_s' = \sqrt{T}q_s + \sqrt{1-T} q_a$. Hence, integrating over $q_s$ results in $$\begin{aligned}
\label{eq:int1}
\Lambda_x \leq \int_{A}\chi_{{\textnormal{vac}}}(q_a) \chi_{{\textnormal{vac}}}(q_b) \ {\textnormal{d}}q_a \ {\textnormal{d}}q_b \, ,\end{aligned}$$ where $A=\{(q_a,q_b) | \ d_1q_a + d_2 q_b + \mu x \leq \alpha \} $. Here, we obtained $A$ from $\tilde A$ by setting $q_s=\sqrt{1/T} x + \sqrt{(1-T)/T}q_a$ and removing the absolute value.
In order to bound the integral in , we split the area $A$ into $A_1=A\cap\{q_a \geq 0\} $ and $A_2 = A\backslash A_1$. If we set $l(q_b)=\max \{0, 1/d_1(\mu x-\alpha -d_2q_b)\}$, we get that the integration over $A_1$ amounts to $$\begin{aligned}
& \frac{1}{2\pi} \int_{-\infty}^\infty \ {\textnormal{d}}q_b \ e^{-q_b^2/2 }\int_{l(q_b)}^\infty {\textnormal{d}}q_a \ e^{-q_a^2/2} \\
&\leq \frac{1}{2\sqrt{2\pi}} \int_{-\infty}^\infty \ {\textnormal{d}}q_b \ e^{-q_b^2/2-l(q_b)^2/2} \, ,\label{eq:int2}\end{aligned}$$ where the inequality follows from $$\label{eq:Tailbound}
\int_l^\infty e^{-q^2/2} {\textnormal{d}}q \ \leq \sqrt{\pi/2} \ e^{-l^2/2}$$ for $l\geq 0$. A straightforward calculation of the remaining Gaussian integral gives $$\begin{aligned}
\label{eq:Bound1}
\frac12 \sqrt{1+\lambda^{-1}} \exp{\Big( -\frac{(\mu x - \alpha)^2}{T(1+\lambda)/2}\Big)} \, . \end{aligned}$$
In order to compute the integral over $A_2$, we note first that $A_2=\{(q_a,q_b) | q_a\leq 0 \ , \ -\infty < q_b \leq u(q_a) \}$ with $u(q_a)=1/d_2[d_1 q_a -(\mu x-\alpha)]$. Using that $u(q_a)\leq 0$ for all $q_a\leq 0$, we can again apply the Gaussian tail bound to obtain $$\begin{aligned}
& \frac{1}{2\pi} \int_{-\infty}^0\ {\textnormal{d}}q_a \ e^{-q_a^2/2 }\int_{-\infty}^{u(q_a)}{\textnormal{d}}q_b \ e^{-q_b^2/2} \\
&\leq \frac{1}{2\sqrt{2\pi}} \int_{-\infty}^\infty \ {\textnormal{d}}q_a \ e^{-q_a^2/2-u(q_a)^2/2} \, , \label{eq:int3}\end{aligned}$$ where we also extended the integration over $q_a$ to run over the whole real line. Finally, the same calculation as before shows that is given by $$\begin{aligned}
\label{eq:Bound2}
\frac12 \sqrt{1+\lambda} \exp{\Big( -\frac{(\mu x - \alpha)^2}{T(1+\lambda)/2}\Big)} \, .\end{aligned}$$ We can thus conclude that $\Lambda_x$ is bounded by the sum of the two contributions computed above. Finally, the supremum over $x$ is attained for $x=M$, which completes the proof.
By means of Lemma \[lem:FailureProb\], we can now bound $p_{\textnormal{fail}}\leq n\Gamma(M,T,\alpha)/p_{\textnormal{pass}}$. Using the bound on the purified distance derived above together with $(1-p_{\textnormal{fail}})^2 \geq 1-2 p_{\textnormal{fail}}$, we finally arrive at $$\begin{aligned}
{\mathcal{P}}(\rho_{ X_BE},\rho_{\tilde X_BE}) \leq \sqrt{\frac{ { 2 n \ \Gamma(M,T,\alpha)}}{{p_{\textnormal{pass}}}}} \, .\end{aligned}$$
Let us now consider the case of $\rho_{ Y^{\textnormal{key}}_BA^n}$ and $\rho_{\tilde Y^{\textnormal{key}}_BA^n}$. It is easy to see that the same strategy can be applied as in the previous situation. This is simply based on the fact that $|p_{t^2}| \leq \alpha$ if the test ${\mathcal{T}}(\alpha,T)$ is passed. Hence, following exactly the same steps for the phase measurements as before for the amplitude, we find that also $$\begin{aligned}
{\mathcal{P}}(\rho_{ Y^{\textnormal{key}}_BA^n},\rho_{\tilde Y^{\textnormal{key}}_BA^n}) \leq \sqrt{\frac{ { 2 n \ \Gamma(M,T,\alpha)}}{{p_{\textnormal{pass}}}}} \, ,\end{aligned}$$ holds.
Summarizing the above arguments, we have thus shown that the expression above is a lower bound on the key length if we set $$\label{eq:tildeEps}
\tilde \epsilon = \sqrt{\frac{ { 2 n \ \Gamma(M,T,\alpha)}}{{p_{\textnormal{pass}}}}} \, .$$
Statistical Estimation of the Max-Entropy
-----------------------------------------
The goal of this section is to use the information from the parameter estimation step to upper bound the smooth max-entropy $H_{\max}^\epsilon(Y_B^{\textnormal{key}}|A^n)_\rho$. In a first step, we apply Alice’s scaled and discretized phase measurement to $A^n$, mapping it to a classical outcome $Y^{\textnormal{key}}_A$ also in ${\mathcal{X}}^n$. Using now that the smooth max-entropy can only increase under processing of the side information [@Tomamichel09; @Furrer11], we obtain that $$\label{eq:DataPr}
H^\epsilon_{\max}(Y_B^{\textnormal{key}}|A^n)_\rho \leq H^\epsilon_{\max}(Y_B^{\textnormal{key}}|Y^{\textnormal{key}}_A)_\rho \, .$$
We next note that it has been shown in [@Furrer12] that if $X$ and $Y$ are random variables on ${\mathcal{X}}^n\times {\mathcal{X}}^n$ distributed according to $Q_{XY}$ for which $\text{Pr}_{Q}[d(X,Y)\geq d] \leq \epsilon^2$ holds, it follows that $$\label{HmaxBound}
H^\epsilon_{\max}(X|Y)_Q \leq n \log \gamma(d) \, ,$$ with $\gamma$ as defined above. In order to apply this result to bound $H^\epsilon_{\max}(Y_B^{\textnormal{key}}|Y^{\textnormal{key}}_A)_\rho$, we have to find an estimate of $d^{\textnormal{key}}=d(Y_B^{\textnormal{key}},Y^{\textnormal{key}}_A)$ that fails with probability at most $\epsilon^2$. For that we use a large deviation bound and estimate the probability that $d^{\textnormal{key}}=d(Y_B^{\textnormal{key}},Y^{\textnormal{key}}_A)$ is larger than $d_0 +\mu$, where conditioned on passing $d_0\geq d^{{\textnormal{PE}}}=d(Y_A^{{\textnormal{PE}}},Y_B^{{\textnormal{PE}}})$. But since the alphabet size scales with $M$ and is thus very large, a direct application of a large deviation bound would result in a large failure probability. This can be avoided by employing a strategy that splits the problem into two estimation steps.
In the first step, we bound in Lemma \[lem:LargeDeviation1\] the probability that $\text{m}_2(Y_A^{\textnormal{key}})$ is larger than $V_{Y_A}^{\textnormal{PE}}+\nu$, respectively, that $\text{m}_2(Y_B^{\textnormal{key}})$ is larger than $V_{Y_B}^{\textnormal{PE}}+\nu$. This will be done using Serfling’s large deviation bound [@Serfling74]. Given that $\text{m}_2(Y_A^{\textnormal{key}}) \leq V_{Y_A}^{\textnormal{PE}}+\nu$ and $\text{m}_2(Y_B^{\textnormal{key}}) \leq V_{Y_B}^{\textnormal{PE}}+\nu$, we can bound the average variance of the distance $d(Y_B^{\textnormal{key}},Y_A^{\textnormal{key}})$ on $Y_B^{\textnormal{key}}\times Y_A^{\textnormal{key}}$, and thus, of the total population $Y_A^{\textnormal{tot}}\times Y_B^{\textnormal{tot}}$ formed by $Y_B^{\textnormal{key}}\times Y_A^{\textnormal{key}}$ and $Y_B^{\textnormal{PE}}\times Y_A^{\textnormal{PE}}$. Indeed, denoting $N=n+k$, we can bound the average variance of the population by $$\begin{aligned}
\sigma^2 & = \frac 1 N\sum_i \vert (Y_A^{\textnormal{tot}})_i - (Y_B^{\textnormal{tot}})_i \vert^2 - d(Y_B^{\textnormal{tot}},Y_A^{\textnormal{tot}})^2 \\
& \leq \frac k N V^{\textnormal{PE}}_d + \frac 1N \sum_i \vert (Y_A^{\textnormal{key}})_i - (Y_B^{\textnormal{key}})_i \vert^2 - (\frac k N d^{\textnormal{PE}})^2 \\
& \leq \frac k N (V^{\textnormal{PE}}_d - \frac k N (d^{\textnormal{PE}})^2) + \frac 1N \sum_i (\vert (Y_A^{\textnormal{key}})_i\vert + \vert( Y_B^{\textnormal{key}})_i\vert )^2 \end{aligned}$$ where we used that $d^{\textnormal{tot}}= \frac k N d^{\textnormal{PE}}+ \frac n N d^{\textnormal{key}}$. Applying the Cauchy-Schwarz inequality, we can then bound $\sum_i (\vert (Y_A^{\textnormal{key}})_i\vert + \vert( Y_B^{\textnormal{key}})_i\vert )^2$ by $$\begin{aligned}
k \Big(\text{m}_2(Y_A^{\textnormal{key}}) + \text{m}_2(Y_B^{\textnormal{key}}) + 2\big(\text{m}_2(Y_A^{\textnormal{key}}) \text{m}_2(Y_B^{\textnormal{key}})\big)^{\frac 12} \Big) \, .\end{aligned}$$ Hence, given that $\text{m}_2(Y_A^{\textnormal{key}}) \leq V_{Y_A}^{\textnormal{PE}}+\nu/\delta^2$ and $\text{m}_2(Y_B^{\textnormal{key}}) \leq V_{Y_B}^{\textnormal{PE}}+\nu/\delta^2$ hold, we find that $\sigma \leq \sigma_*$ with $\sigma_*$ as defined in the theorem.
In the second step, we bound in Lemma \[lem:LargeDeviation2\] the probability that $d^{\textnormal{key}}=d(Y_B^{\textnormal{key}},Y^{\textnormal{key}}_A)$ is larger than $d^{\textnormal{PE}}+\mu$ for a fixed and bounded $\sigma$. Combining these two steps, we can then estimate $$\begin{aligned}
\nonumber
{\ensuremath{\mathrm{Pr}[d^{\textnormal{key}}\geq d_0 + \mu | {\textnormal{pass}}]}}
& \leq {\ensuremath{\mathrm{Pr}[d^{\textnormal{key}}\geq d^{\textnormal{PE}}+ \mu | {\textnormal{pass}}]}} \\ \nonumber
& \leq \frac 1{p_{\textnormal{pass}}} {\ensuremath{\mathrm{Pr}[d^{\textnormal{key}}\geq d^{\textnormal{PE}}+ \mu ]}}
\\ & \leq \frac 1{p_{\textnormal{pass}}} \Big( {\ensuremath{\mathrm{Pr}[\text{m}_2(Y_A^{\textnormal{key}}) > V_{Y_A}^{\textnormal{PE}}+\nu]}}\ \nonumber
\\ \nonumber
& \quad + {\ensuremath{\mathrm{Pr}[\text{m}_2(Y_B^{\textnormal{key}}) > V_{Y_B}^{\textnormal{PE}}+\nu]}} \\ & \quad + {\ensuremath{\mathrm{Pr}[d^{\textnormal{key}}\geq d^{\textnormal{PE}}+ \mu | \text{C}]}}\Big) \label{eq:Prob1}\end{aligned}$$ where $C$ denotes the condition that $\text{m}_2(Y_B^{\textnormal{key}}) \leq V_{Y_B}^{\textnormal{PE}}+\nu$ and $\text{m}_2(Y_A^{\textnormal{key}}) \leq V_{Y_A}^{\textnormal{PE}}+\nu$.
\[lem:LargeDeviation1\] Let $Y$ be a string in ${\mathcal{X}}^{n+m}$ and $Y^{P}$ be a random sample without replacement from $Y$ of length $m$ with $\text{m}_2(Y^P)=V_{Y}^{\textnormal{PE}}$. Then, for the average second moment of the remaining sample $Y^{\textnormal{key}}$ of length $n$, it holds that $$\begin{aligned}
{\ensuremath{\mathrm{Pr}[\text{m}_2( Y^{\textnormal{key}}) \geq V_{Y}^{\textnormal{PE}}+ \nu ]}}
\leq \exp\Big(\frac{-2 \nu^2\delta^4 n m^2}{M^4(n+m) (m+1)} \Big) \, .\end{aligned}$$
The proof is similar to strategies applied in [@tomamichellim11; @Furrer12] and based on a large deviation bound for random sampling without replacement by Serfling [@Serfling74]. Denoting the population mean of the variance by $V_{Y}=\text{m}_2(Y)$ and $V^{\textnormal{key}}_{Y}=\text{m}_2(Y^{\textnormal{key}})$, we have that $$\label{eq:Population}
n V^{\textnormal{key}}_{Y} + m V_{Y}^{\textnormal{PE}}= (n+m) V_{Y} \, .$$ The large deviation bound in [@Serfling74] implies that ${\ensuremath{\mathrm{Pr}[V^{\textnormal{key}}_{Y} \geq V_{Y} +\tilde \nu]}}$ is upper bounded by $$\exp\big( -\frac{2\tilde\nu^2 n(n+m)}{(M/\delta)^4(m+1)}\big) \, .$$ Since the bound is independent of $V_{Y}$, it is not necessary to know the actual value of $V_{Y}$. Indeed, using the relation in , we obtain the desired bound $$\begin{aligned}
{\ensuremath{\mathrm{Pr}[ V_{Y}^{\textnormal{key}} \geq V_{Y}^{\textnormal{PE}}+ \nu ]}} &\leq {\ensuremath{\mathrm{Pr}[V^{\textnormal{key}}_{Y} \geq V_{Y} + \frac{m}{m+n} \nu]}} \\
& \leq \exp\big(\frac{-2\nu^2 n m^2}{(M/\delta)^4(n+m) (m+1)} \big)\, .\end{aligned}$$
\[lem:LargeDeviation2\] Let $Y_A^{\textnormal{tot}}\times Y_B^{\textnormal{tot}}$ be in $({\mathcal{X}}\times {\mathcal{X}})^N$ with $d_{\textnormal{tot}}= d(Y_A^{{\textnormal{tot}}},Y_B^{{\textnormal{tot}}})$ and $Y_A^{\textnormal{PE}}\times Y_B^{\textnormal{PE}}$ a random sample from it without replacement of length $k$ with $d^{{\textnormal{PE}}}=d(Y_A^{{\textnormal{PE}}},Y_B^{{\textnormal{PE}}})$. Let further $\sigma^2= \frac 1N \sum_i |(Y_A^{\textnormal{tot}})_i-(Y_B^{\textnormal{tot}})_i|^2 - d_{\textnormal{tot}}^2 $ be the average variance of the population. Then, for $d^{\textnormal{key}}=d(Y_A^{{\textnormal{key}}},Y_B^{{\textnormal{key}}})$ of the remaining sample $Y_A^{\textnormal{key}}\times Y_B^{\textnormal{key}}$ of length $n=N-k$, it holds that $$\begin{aligned}
\label{lem,eq:Bernstein}
{\ensuremath{\mathrm{Pr}[d^{\textnormal{key}}\geq d^{\textnormal{PE}}+ \mu ]}}
\leq \exp\Big(\frac{-\mu^2 n (k/N)^2 }{2\sigma^2 + 4\mu/3(k/N)(M/\delta)} \Big) \, .\end{aligned}$$
The bound follows directly from Bernstein’s inequality $${\ensuremath{\mathrm{Pr}[d^{\textnormal{key}}\geq d^{\textnormal{tot}}+ \tilde\mu ]}} \leq \exp\big(- \frac{n \tilde\mu^2}{2\sigma^2 + 2\tilde\mu\vert {\mathcal{X}}\vert/3} \big) \, ,$$ which, as shown by Hoeffding [@Hoeffding1963], also holds for sampling without replacement. Using that $nd^{\textnormal{key}}+ kd^{\textnormal{PE}}= Nd^{\textnormal{tot}}$ and that $|{\mathcal{X}}|=2M/\delta$, a straightforward calculation results in the claimed bound.
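Both tail bounds are elementary to evaluate; the following sketch (our own function names, with placeholder inputs) can be used to check how small the failure probabilities become for typical block sizes:

```python
import math

def serfling_bound(nu, n, m, M, delta):
    """Lemma [lem:LargeDeviation1]: Pr[m_2(Y^key) >= V^PE + nu] for sampling without replacement."""
    return math.exp(-2 * nu**2 * delta**4 * n * m**2 / (M**4 * (n + m) * (m + 1)))

def bernstein_bound(mu, n, k, sigma, M, delta):
    """Lemma [lem:LargeDeviation2]: Pr[d^key >= d^PE + mu] given population variance sigma^2."""
    N = n + k
    return math.exp(-(mu**2) * n * (k / N) ** 2
                    / (2 * sigma**2 + (4 * mu / 3) * (k / N) * (M / delta)))

# Placeholder block sizes: n = k = m = 5*10^7 and M/delta = 4000.
print(serfling_bound(nu=5e3, n=5e7, m=5e7, M=4000, delta=1.0),
      bernstein_bound(mu=0.01, n=5e7, k=5e7, sigma=3.0, M=4000, delta=1.0))
```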
We are now ready to prove Theorem \[thm:KeyLength\]. For that we observe that the above implies that $$H_{\max}^\epsilon(Y_B^{\textnormal{key}}|Y^{\textnormal{key}}_A)_\rho \leq n \log \gamma(d_0+\mu) \, ,$$ if $\mu$ is such that the probability ${\ensuremath{\mathrm{Pr}[d^{\textnormal{key}}\geq d_0 + \mu | {\textnormal{pass}}]}}$ is smaller than $\epsilon^2$. Hence, we use Lemma \[lem:LargeDeviation1\] and Lemma \[lem:LargeDeviation2\] to bound this probability and set the resulting expression equal to $\epsilon^2$, where $\epsilon\leq (\epsilon_s-\epsilon_1)/(2p_{\textnormal{pass}}) - 2 \tilde \epsilon$ with $\tilde \epsilon$ as given by the energy test. Solving the equation for $\mu$ and using $p_{\textnormal{pass}}\leq 1$, we obtain the expression for $\mu$ stated in the theorem. This concludes the security proof.
Performance and Limitations of Security Proofs based on the Extended Uncertainty Principle {#sec:Tightness}
==========================================================================================
In Section \[sec:UR\], we have seen that the main ingredient in the security proof is the uncertainty relation with quantum memory for smooth min- and max-entropy, $$\label{eq:URsmoothEnt}
H_{\min}^{\epsilon}(Q_B^{\delta,n}|E)_{\rho} + H_{\max}^{\epsilon}(P_B^{\delta,n}|A^n)_{\rho} \geq - n \log c(\delta) \, .$$ Here, we denote by $Q_B^{\delta,n}$ and $P_B^{\delta,n}$ the classical random variables induced by an arbitrary amplitude and phase measurement with discretization into intervals of equal length $\delta$. Thus, the tightness of the bound on the optimal key rate crucially depends on how tight the uncertainty relation is for the state given in the protocol. Since we are interested in optimality in the following, and as such in the question of how much key can be extracted under normal working conditions, we can assume that Eve is absent for the moment. Then, the state is in good approximation given by the $n$-fold tensor product of identical Gaussian states described by a covariance matrix depending on coupling and channel losses as well as excess noise as described in Section \[sec:KeyRates\].
But even though we can assume that the state takes this simple form it is still very hard to compute the corresponding smooth min- and max-entropy directly. We circumvent this problem by using a further approximation. In particular, we can use the asymptotic equipartition property in infinite dimensions [@Furrer10], saying that the smooth min-entropy $\frac 1n H_{\min}^{\epsilon}(Q_B^{\delta,n}|E)_{\rho^{\otimes n}}$ can be approximated up to a correction ${\mathcal{O}}(\frac1{\sqrt{n}})$ by the von Neumann entropy $ H(Q_B^{\delta}|E)_{\rho}$. Here, $\rho_{Q_B^{\delta} E}$ is given by measuring the amplitude with a spacing $\delta$ on a single copy. The same applies for the smooth max-entropy such that $\frac 1n H_{\max}^{\epsilon}(P_B^{\delta,n}|A)_{\rho^{\otimes n}}$ can be approximated by $ H(P_B^{\delta}|A)_{\rho}$.
Furthermore, if we choose $\delta$ small enough we can approximate the von Neumann entropy of the discrete distribution over intervals of length $\delta$ by the differential von Neumann entropy [@Berta13] $$\label{eq:DeltaApprox}
H(Q_B^{\delta}|E)_{\rho} \approx h(Q_B|E) -\log \delta \, ,$$ where $h(Q_B|E)$ denotes the differential quantum conditional entropy of the continuous amplitude measurement. Similarly, we have that $ H(P_B^{\delta}|A)_{\rho} \approx h(P_B|A) -\log \delta $. Using that $c(\delta)\approx \delta^2/(2\pi)$, we can thus conclude that in the asymptotic limit the above inequality is well approximated by $$\label{eq:URvN}
h(Q_B|E) + h(P_B|A) \geq \log 2\pi \, .$$
Hence, we can qualitatively investigate the tightness of the smooth-entropy relation by considering its differential version. For our situation, the latter can easily be analyzed, as the differential quantum conditional entropy can be computed for Gaussian classical and quantum states. In the following, we always choose system $E$ as the Gaussian purification of the Gaussian state between $A$ and $B$. In [@Berta13], it was shown that the relation gets approximately tight for a two-mode squeezed state without losses and squeezing above $10$ dB. However, tightness holds only conditioned on Alice’s quantum system but not after she performs her quadrature measurement. The data processing inequality only ensures that $$h(P_B|A) \leq h(P_B|P_A) \, ,$$ but equality does not always hold, even for the optimal measurement on $A$. Unfortunately, in our case it turns out that the loss through the data processing inequality is substantial (see Figure \[fig:UR\]), such that the optimality of the bound has to be analyzed for the inequality obtained after applying the data processing inequality, $$\label{eq:URvNdp}
h(Q_B|E) + h(P_B|P_A) \geq \log 2\pi \, .$$ In Figure \[fig:UR\], we plot the gaps in both inequalities for the same parameters of the state for which the key rates are plotted in Section \[sec:KeyRates\]. We see that, unfortunately, the gap between the left-hand side and the right-hand side increases for high losses. We further note that an increase of the initial squeezing hardly changes the gap for losses above $30$%.
![\[fig:UR\] The gaps between the l.h.s. and r.h.s. of the two uncertainty relations discussed in the text (solid and dashed lines) are plotted for a two-mode squeezed state with squeezing and antisqueezing of $11$dB and $16$dB against the losses on Bob’s mode. The losses on Alice’s mode and the excess noise are set to $0$. The gap shown by the solid line is the amount by which the bound on the key rate is reduced in the asymptotic limit compared with the optimal key rate. ](fig5.pdf){width="8.8cm"}
This gap severely limits the tolerated losses and noise, and also causes the finite-key rates presented in Section \[sec:KeyRates\] to vanish for high losses. We can quantitatively analyze the effect of the untightness of the uncertainty relation on the key rate by calculating the asymptotic key rate. In this regime all the statistical estimation errors disappear and collective attacks are as strong as coherent attacks [@Renner_Cirac_09]. Using that $\ell_{\textnormal{IR}}= H(P_B^{\delta}|P_A^{\delta})$ for perfect error correction, we find for the asymptotic key rate the simple formula $$\label{eq:AsymKeyUR}
r_{\text{UR}} = \log 2\pi - 2 h(P_B|P_A) \, .$$ In contrast, the asymptotically optimal key rate given by the Devetak-Winter formula [@DevetakW] is $$\label{eq:AsymKey}
r_{\text{Opt}} = h(P_B|E) - h(P_B|P_A) \, ,$$ where we also applied the differential-entropy approximation above. In Figure \[fig:Asym\], we compare the finite-key rate from Theorem \[thm:KeyLength\] with the asymptotic key rates $r_{\text{UR}}$ and $r_{\text{Opt}}$ for the same parameters as in Figure \[fig:EC95\], except that the excess noise is set equal to $0$. We see that even the asymptotic key rate $r_{\text{UR}}$ vanishes for moderate losses of $66$%. We remark that even if the squeezing is arbitrarily high and the losses in Alice’s mode are $0$%, the maximally tolerated losses do not exceed $75$%.
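For jointly Gaussian quadratures, $h(P_B|P_A)$ depends only on the conditional variance $\mathrm{Var}(P_B)-\mathrm{Cov}(P_A,P_B)^2/\mathrm{Var}(P_A)$, so $r_{\text{UR}}$ is easy to evaluate; a minimal sketch (the numerical inputs are placeholders and must be expressed in the same units in which the entropic relation above is normalized):

```python
import math

def h_gauss_cond(var_cond):
    """Differential conditional entropy (bits) of a Gaussian with conditional variance var_cond."""
    return 0.5 * math.log2(2 * math.pi * math.e * var_cond)

def r_ur(var_cond):
    """Asymptotic key rate of Eq. (eq:AsymKeyUR): log 2*pi - 2*h(P_B|P_A)."""
    return math.log2(2 * math.pi) - 2 * h_gauss_cond(var_cond)

# The rate is positive exactly when the conditional variance is below 1/e in these units.
for v in (0.1, 0.2, 1 / math.e, 0.5):
    print(v, round(r_ur(v), 3))
```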
![\[fig:Asym\] The loss dependence of the finite-key rate for $N_{\textnormal{tot}}=10^{11}$ (solid line) is compared with the asymptotic key rate $r_{\text{UR}}$ (dashed), the optimal key rate $r_{\text{Opt}}$ (dashed-dotted), and the asymptotic key rate for direct reconciliation $r_{\text{DR}}$ (dotted). The squeezing and antisqueezing are set to $11$dB and $16$dB, and Alice’s coupling losses as well as the excess noise to $0$. The reconciliation efficiency of the non-asymptotic key rate is $\beta=0.95$ and the other parameters are as in Figure \[fig:EC95\]. ](fig6.pdf){width="8.8cm"}
In Figure \[fig:Asym\], we also plot the asymptotic key rate obtained via the extended uncertainty principle when using a direct reconciliation protocol. In this situation the asymptotic key rate is $r_{\text{DR}} = \log 2\pi - 2 h(P_A|P_B)$. The plot shows that we have obtained a finite-key rate in the case of reverse reconciliation for losses much larger than what can ultimately be tolerated in the case of direct reconciliation.
Conclusion {#sec:Conclusion}
==========
We have presented a security proof against coherent attacks, including finite-size effects, for a reverse reconciliation continuous variable QKD protocol. The protocol is based on the generation of two-mode squeezed states and homodyne detection. Security has been certified under realistic assumptions for transmission losses of up to $50$%, using experimental parameters demonstrated in [@Eberle11]. A remaining challenge in an implementation of the presented protocol is the reconciliation step. However, recent advances in non-binary error correction codes suggest that reconciliation efficiencies above $90$% are realistic.
We further investigated the tightness of the security analysis based on the uncertainty relation with quantum memory and showed that even in the asymptotic limit the maximally tolerated losses are bounded. The reason is that the uncertainty relation is not perfectly tight, and for high losses the trade-off between Eve’s knowledge and the correlations between Bob and Alice becomes very small. Hence, in the high-loss regime a very tight bound on Eve’s information is crucial.
*Acknowledgements.—* I gratefully acknowledge valuable discussions with Joerg Duhme, Vitus Händchen and Takanori Sugiyama. I’m especially grateful to Anthony Leverrier who provided very helpful comments on a first version and proposed using a test like in Figure \[fig:Energy\] to control the energy of Eve’s attack. This work is supported by the Japan Society for the Promotion of Science (JSPS) by KAKENHI grant No. 24-02793.
[10]{}
L. Lydersen, M. K. Akhlaghi, A. H. Majedi, J. Skaar, and V. Makarov. Controlling a superconducting nanowire single-photon detector using tailored bright illumination. , 13:113042, 2011.
M. D. Eisaman, J. Fan, A. Migdall, and S. V. Polyakov. Invited review article: Single-photon sources and detectors. , 82(7), 2011.
C. Weedbrook, S. Pirandola, R. García-Patrón, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and Seth Lloyd. Gaussian quantum information. , 84:621–669, 2012.
H. Häseler, T. Moroder, and N. Lütkenhaus. Testing quantum devices: Practical entanglement verification in bipartite optical systems. , 77:032303, 2008.
P. Jouguet, S. Kunz-Jacques, and E. Diamanti. Preventing calibration attacks on the local oscillator in continuous-variable quantum key distribution. , 87:062313, 2013.
F. Grosshans, G. van Assche, J. Wenger, R. Brouri, N. J. Cerf, and P. Grangier. Quantum key distribution using gaussian-modulated coherent states. , 421:238–241, 2003.
P. Jouguet, S. Kunz-Jacques, A. Leverrier, P. Grangier, and E. Diamanti. Experimental demonstration of long-distance continuous-variable quantum key distribution. , 7:378–381, 2012.
C. Weedbrook, A.M. Lance, W.P. Bowen, T. Symul, T.C. Ralph, and P.K. Lam. Quantum cryptography without switching. , 93:170504, 2004.
A. Leverrier and P. Grangier. Continuous-variable quantum-key-distribution protocols with a non-gaussian modulation. , 83:042312, 2011.
R. Renner and J. I. Cirac. de [F]{}inetti representation theorem for infinite-dimensional quantum systems and applications to quantum cryptography. , 102:110504, 2009.
R. Garcia-Patron and N. J. Cerf. Unconditional optimality of gaussian attacks against continuous-variable quantum key distribution. , 97:190503, 2006.
M. Navascues, F. Grosshans, and A. Acin. Optimality of gaussian attacks in continuous-variable quantum cryptography. , 97:190502, Nov 2006.
A. Leverrier, F. Grosshans, and P. Grangier. Finite-size analysis of a continuous-variable quantum key distribution. , 81:062343, Jun 2010.
R. Renner. Symmetry of large physical systems implies independence of subsystems. , 3:645, 2007.
M. Christandl, R. König, and R. Renner. Postselection technique for quantum channels with applications to quantum cryptography. , 102:020504, 2009.
A. Leverrier, R. Garc[í]{}a-Patr[ó]{}n, R. Renner, and N. J Cerf. Security of continuous-variable quantum key distribution against general attacks. , 110:030502, 2013.
F. Furrer, T. Franz, M. Berta, A. Leverrier, V. B. Scholz, M. Tomamichel, and R. F. Werner. Continuous variable quantum key distribution: Finite-key analysis of composable security against coherent attacks. , 109:100502, 2012.
M. Berta, M. Christandl, R. Colbeck, J. M. Renes, and R. Renner. The uncertainty principle in the presence of quantum memory. , 6:659–662, 2010.
F. Furrer, M. Berta, M. Tomamichel, V. B. Scholz, and M. Christandl. Position-Momentum Uncertainty Relations in the Presence of Quantum Memory. 2013. arXiv:1308.4527, to appear in J. Math. Phys..
T. Eberle, V. Händchen, F. Furrer, T. Franz, J. Duhme, C. Pacher, R. F. Werner, and R. Schnabel. Arbitrary-attack-proof quantum key distribution without single photons. 2014. arXiv:1406.6174.
F. Furrer, T. Franz, M. Berta, A. Leverrier, V. B. Scholz, M. Tomamichel, and R. F. Werner. Erratum: Continuous variable quantum key distribution: Finite-key analysis of composable security against coherent attacks. , 112:019902 (E), 2014.
N. Walk, T. C. Ralph, and H. M. Wiseman. Continuous variable one-sided device independent quantum key distribution. 2014. arXiv:1405.6593.
R. Renner and R. König. Universally composable privacy amplification against quantum adversaries. , 3378:407, 2005. arXiv:quant-ph/0403133v2.
M. Ben-Or, M. Horodecki, D. Leung, D. Mayers, and J. Oppenheim. The universal composable security of quantum key distribution. In [*Theory of Cryptography*]{}, volume 3378 of [*Lecture Notes in Computer Science*]{}, pages 386–406. Springer Berlin / Heidelberg, 2005.
J. Müller-Quade and R. Renner. Composability in quantum cryptography. , 11:085006, 2009.
J. L. Carter and M. N. Wegman. Universal classes of hash functions. , 18:143–154, 1979.
M. N. Wegman and J. L. Carter. New hash functions and their use in authentication and set equality. , 22:265–279, 1981.
R. Renner. . PhD thesis, ETH Zurich, 2005.
M. Tomamichel, C. Schaffner, A. Smith, and R. Renner. Leftover hashing against quantum side information. , 57:8, 2011.
M. Berta, F. Furrer, and V. B. Scholz. . 2011. arXiv:1107.5460.
F. Furrer. . PhD thesis, Leibniz University Hannover, 2012.
N. J. Cerf, M. Lévy, and G. Van Assche. Quantum distribution of [G]{}aussian keys using squeezed states. , 63:052311, 2001.
A. Furusawa, J.L. Sorensen, S.L. Braunstein, C.A. Fuchs, H.J. Kimble, and E.S Polzik. . , 282:706–709, 1998.
M. Tomamichel and R. Renner. . , 106:110506, 2011.
J. Lodewyck, M. Bloch, R. García-Patrón, S. Fossier, E. Karpov, E. Diamanti, T. Debuisschert, N. J. Cerf, R. Tualle-Brouri, S. W. McLaughlin, and P. Grangier. Quantum key distribution over $25\phantom{\rule{0.3em}{0ex}}\mathrm{km}$ with an all-fiber continuous-variable system. , 76:042305, 2007.
T. Eberle, V. Händchen, J. Duhme, T. Franz, R. F. Werner, and R. Schnabel. Strong [E]{}instein-[P]{}odolsky-[R]{}osen entanglement from a single squeezed light source. , 83:052329, 2011.
R. König, R. Renner, and C. Schaffner. . , 55:4337–4347, 2009.
M. Tomamichel, R. Colbeck, and R. Renner. Duality between smooth min- and max-entropies. , 56:4674, 2010.
R. J. Serfling. Probability inequalities for the sum in sampling without replacement. , 2:39–48, 1974.
M. Tomamichel, C. C. W. Lim, N. Gisin, and R. Renner. . , 3:634, 2012.
W. Hoeffding. Probability inequalities for sums of bounded random variables. , 58(301):13–30, 1963.
F. Furrer, J. Aberg, and R. Renner. . , 306:165–186, 2011.
I. Devetak and A. Winter. Distillation of secret key and entanglement from quantum state. , 461:207, 2005.
[^1]: Note that in a practical situation one does not need to abort the protocol and may only supply more information in the reconciliation protocol until the test is passed.
[^2]: The case of a small deviation from a phase difference of $\pi/2$ can easily be included.
[^3]: Note that the uncertainty relation in [@Furrer11] was only proven for the non-smoothed min- and max-entropy. However, the extension of the inequality to smooth entropies is straightforward using similar arguments as in [@tomamichel11].
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'In this paper the effective mass approximation and k$\cdot$p multi-band models, describing the quantum evolution of electrons in a crystal lattice, are discussed. Electrons are assumed to move in both a periodic potential and a macroscopic one. The typical period ${\epsilon}$ of the periodic potential is assumed to be very small, while the macroscopic potential acts on a much larger length scale. This homogenization asymptotics is investigated by using the envelope-function decomposition of the electron wave function. If the external potential is smooth enough, the k$\cdot$p and effective mass models, well known in solid-state physics, are proved to be close (in a strong sense) to the exact dynamics. Moreover, the position density of the electrons is proved to converge weakly to its effective mass approximation.'
author:
- 'Luigi Barletti$^{1}$ and Naoufel Ben Abdallah$^{2}$'
title: |
Quantum Transport in Crystals:\
Effective Mass Theorem and K$\cdot$P Hamiltonians
---
$^{1}$Dipartimento di Matematica, Università di Firenze, Viale Morgagni 67/A, 50134 Firenze, Italy, [*[email protected]*]{}\
$^{2}$Institut de Mathématiques de Toulouse, Université de Toulouse Univ. Paul Sabatier, 118 route de Narbonne, 31062 Toulouse, France, [*[email protected]*]{}
Introduction {#Intro}
============
The effective mass approximation is a common approximation in solid state physics [@Bastard; @Ashcroft76; @Wenckebach99]. It states, roughly speaking, that the motion of electrons in a periodic potential can be replaced, to a good approximation, by the motion of a fictitious particle in vacuum but with a modified mass, called the effective mass of the electron. This approximation is valid when the lattice period is small compared to the observation length scale; it relies on the Bloch decomposition theorem for the Schrödinger equation with a periodic potential. The effective mass is actually a tensor and depends on the energy band in which the electron “lives”. One of the most important references in the physics literature on the subject is the paper of Kohn and Luttinger [@LuttingerKohn55], which dates back to 1955. As for a rigorous mathematical treatment of this problem, we are aware of the work of Poupaud and Ringhofer [@PoupaudRinghofer96] and that of Allaire and Piatnitski [@Allaire05]. The aim of the present work is to provide an alternative mathematical treatment which is based on the original work of Kohn and Luttinger. As in [@Allaire05] (see also [@Allaire04] and [@Allaire06] for related problems), we consider the scaled Schrödinger equation $$i\partial_t\, \psi(t,x) = \left( -\frac{1}{2}\Delta + \frac{1}{{\epsilon}^2}
\,W_{\mathcal{L}}\left(\frac{x}{{\epsilon}}\right) + V\left(x,\frac{x}{{\epsilon}}\right) \right) \psi(t,x),$$ where $W_{\mathcal{L}}(z)$ is a periodic potential with the periodicity of a lattice ${\mathcal{L}}$, representing the crystal ions, while $V(x,z)$ represents an external potential. The latter is assumed to act both on the macroscopic scale $x$ and on the microscopic scale $z = x/{\epsilon}$, and to be ${\mathcal{L}}$-periodic with respect to $z$. The small parameter ${\epsilon}$ is interpreted as the so-called “lattice constant”, that is, the typical separation between lattice sites. Note that the scaling of the Schrödinger equation is a homogenization scaling [@Allaire05; @PoupaudRinghofer96]. As mentioned above, the analysis of the limit ${\epsilon}\to 0$ has been carried out in Refs. [@Allaire05] and [@PoupaudRinghofer96] by different techniques. In [@PoupaudRinghofer96], the analysis is done indirectly by means of Wigner function techniques. Using Bloch functions, which diagonalize the periodic Hamiltonian, a Wigner function is constructed. The limit ${\epsilon}\to 0$ is then taken in the Wigner equation, and the result is reinterpreted as the Wigner transform of an effective mass Schrödinger equation. In [@Allaire05], the problem is tackled differently thanks to homogenization techniques, mainly double-scale limits. The wave function is expanded on the Bloch basis and the limiting equation is obtained by expanding the Bloch functions and the energy bands around zero wavevector.
The approach we adopt in this paper is completely different from [@PoupaudRinghofer96] and somewhat related to [@Allaire05], although the techniques are different. The main idea, borrowed from the celebrated work of Kohn and Luttinger [@LuttingerKohn55], consists of expanding the wave function on a modified Bloch basis. This choice of basis does not completely diagonalize the periodic part of the Hamiltonian, but it completely separates the “oscillating” part of the wave function from its slowly varying one. By doing so, we introduce a so-called envelope function decomposition of the wave function and rewrite the Schrödinger equation as an infinite system of coupled Schrödinger equations. Each of the envelope functions has a fast oscillating scale in time, with a frequency related to the energy band at vanishing wavevector. Therefore adiabatic decoupling occurs, as is commonly the case for fast oscillating systems [@hagedorn-joye; @panati; @spohn-teufel; @teufel]. In the envelope function formulation, the action of the macroscopic potential becomes a convolution operator in both the position variable and the band index. In the limit, this operator becomes a multiplication operator in position by a matrix-valued potential (in the band index). The analysis of this limiting process is carried out through simple Fourier-type analysis and perturbation of point spectra of self-adjoint operators. The method allows us to handle an infinite number of Bloch waves and also to derive the so-called k$\cdot$p Hamiltonian as an intermediate model between the original Schrödinger equation and its limiting effective mass approximation.
The outline of the paper is as follows. Section \[sec2\] is devoted to the presentation of the functional setting and notations, as well as the main result of the paper. As mentioned above, the Schrödinger equation is reformulated as an infinite system of coupled Schrödinger equations, where the coupling comes both from the differential part and from the potential part. In Section \[sec3\], we concentrate on the potential part and analyze its limit. Section \[sec4\] is devoted to the diagonalization of the differential part and to the expansion of the corresponding eigenvalues in Fourier space. In Section \[sec5\], we analyze the convergence of the solution of the Schrödinger equation towards its effective mass approximation. The method relies on the definition of intermediate models and the comparison of their respective dynamics. Some comments are made in Section \[sec6\], while some proofs are postponed to Section \[post\].
Notations and main results {#sec2}
==========================
Bloch decomposition {#SecDEF}
-------------------
Let us consider the operator $$\label{ScaledHamiltonian}
H_{\mathcal{L}}^{\epsilon}= -\textstyle{\frac{1}{2}} \Delta + \frac{1}{{\epsilon}^2}\,W_{\mathcal{L}}\Big(\frac{x}{{\epsilon}}\Big),$$ where $W_{\mathcal{L}}$ is a bounded ${\mathcal{L}}$-periodic potential where the lattice ${\mathcal{L}}$ is defined by $$\label{DirLatt}
{\mathcal{L}}= \left\{ Lz \ \big| \ z \in {\mathbb{Z}}^d \right\} \subset {\mathbb{R}}^d,$$ where $L$ is a $d\times d$ matrix with $\det L \not= 0$. The centered fundamental domain ${\mathcal{C}}$ of ${\mathcal{L}}$ is, by definition, $$\label{Cell}
{\mathcal{C}}= \left\{ Lt \ \Big| \ t \in \Big[-{1\over 2},{1\over 2}\Big]^d \right\}.$$ Note that the volume measure ${{\left\vert {{\mathcal{C}}} \right\vert}}$ of ${\mathcal{C}}$ is given by ${{\left\vert {{\mathcal{C}}} \right\vert}} = {{\left\vert {\det L} \right\vert}}$. The [*reciprocal lattice*]{} ${\mathcal{L}}^*$ is, by definition, the lattice generated by the matrix $L^*$ such that $$\label{RecLatt}
L^T L^* = 2\pi I.$$ The [*Brillouin zone*]{} ${\mathcal{B}}$ is the centered fundamental domain of ${\mathcal{L}}^*$, i.e.[^1] $$\label{Brillo}
{\mathcal{B}}= \left\{ L^*t \ \Big| \ t \in \Big[-{1\over 2} , {1\over 2}\Big]^d \right\} .$$ Thus, we clearly have $$\label{CB}
{{\left\vert {{\mathcal{C}}} \right\vert}}\,{{\left\vert {{\mathcal{B}}} \right\vert}} = (2\pi)^d.$$ We assume without loss of generality that the periodic potential is bounded below by one ($W_{\mathcal{L}}\geq 1$). In solid state physics, $W_{\mathcal{L}}$ is interpreted as the electrostatic potential generated by the ions of the crystal lattice [@Ashcroft76]. With the change of variables $z = x/{\epsilon}$, the operator $ H_{\mathcal{L}}^{\epsilon}$ turns into ${1\over {\epsilon}^2} H_{\mathcal{L}}^1$, where $H_{\mathcal{L}}^1$ is given by \eqref{ScaledHamiltonian} with ${\epsilon}= 1$. This operator has a band structure, which is given by the celebrated Bloch theorem [@ReedSimonIV78].
\[BlochDef\] For any $k\in {\mathcal{B}}$, the fiber Hamiltonian $$\label{HkDef}
H_{\mathcal{L}}(k) = \frac{1}{2} {{\left\vert {k} \right\vert}}^2 - ik\cdot\nabla -\textstyle{\frac{1}{2}} \Delta + W_{\mathcal{L}},$$ defined on $L^2({\mathcal{C}})$ with periodic boundary conditions, has a compact resolvent. Its eigenfunctions form an orthonormal sequence $(u_{n,k})_{n\in{\mathbb{N}}}$ of periodic solutions of the eigenvalue problem $$\label{Bloch}
H_{\mathcal{L}}(k)u_{n,k} = E_n(k)u_{n,k}$$ The functions $u_{n,k}$ are the so-called [*Bloch functions*]{} and the eigenvalues $E_n(k)$ are the [*energy bands*]{} of the crystal. For each fixed value of $k\in{\mathcal{B}}$, the set $\{u_{n,k} \mid n \in {\mathbb{N}}\}$ is a Hilbert basis of $L^2({\mathcal{C}})$ [@BerezinShubin91; @ReedSimonIV78]. The Bloch waves defined for $k\in{\mathcal{B}}$ and $n\in {\mathbb{N}}$ by $${\mathcal{X}}_{n,k}^\textsc{b}(x) = {{\left\vert {{\mathcal{B}}} \right\vert}}^{-1/2} \,{\mathbbm{1}}_{\mathcal{B}}(k)\, {\mathrm{e}}^{ik\cdot x}\,u_{n,k}(x)$$ form a complete basis of $L^2({\mathbb{R}}^d)$ and satisfy the equation $$H_{\mathcal{L}}^1{\mathcal{X}}_{n,k}^\textsc{b} = E_n(k){\mathcal{X}}_{n,k}^\textsc{b}.$$ The scaled Bloch functions are given by $${\mathcal{X}}_{n,k}^{\textsc{b},{\epsilon}}(x) = {{\left\vert {{\mathcal{B}}} \right\vert}}^{-1/2} \,{\mathbbm{1}_{{\mathcal{B}}/{\epsilon}}}(k)\, {\mathrm{e}}^{ik\cdot x}\,u_{n,{\epsilon}k}(x)$$ and they satisfy $$H_{\mathcal{L}}^{\epsilon}{\mathcal{X}}_{n,k}^{\textsc{b},{\epsilon}} = \frac{E_n({\epsilon}k)}{{\epsilon}^2}{\mathcal{X}}_{n,k}^{\textsc{b},{\epsilon}}.$$
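For readers who wish to experiment with this definition, the following minimal sketch diagonalizes the fiber Hamiltonian $H_{\mathcal{L}}(k)$ numerically for a one-dimensional toy crystal with $W_{\mathcal{L}}(z) = 5 + 4\cos(2\pi z)$ (so that $W_{\mathcal{L}}\geq 1$), using a truncated plane-wave basis on ${\mathcal{C}}= [-1/2,1/2]$. The potential, the cutoff and all variable names are illustrative choices, not data taken from the paper.

```python
import numpy as np

# Toy 1D crystal: lattice L = Z, cell C = [-1/2, 1/2], Brillouin zone B = [-pi, pi].
# Periodic potential W(z) = W0 + V0*cos(2*pi*z), with W0 chosen so that W >= 1.
V0, W0 = 4.0, 5.0
M = 15                                    # plane-wave cutoff: e^{2*pi*i*m*z}, |m| <= M
ms = np.arange(-M, M + 1)

def fiber_hamiltonian(k):
    """Matrix of H_L(k) = (1/2)(-i d/dz + k)^2 + W(z) in the plane-wave basis."""
    H = np.diag(0.5 * (2.0 * np.pi * ms + k) ** 2 + W0)
    off = np.full(2 * M, V0 / 2.0)        # Fourier coefficients of V0*cos(2*pi*z)
    return H + np.diag(off, 1) + np.diag(off, -1)

ks = np.linspace(-np.pi, np.pi, 101)      # sample the Brillouin zone
bands = np.array([np.linalg.eigvalsh(fiber_hamiltonian(k)) for k in ks])
print("lowest four energy bands E_n(k) at k = 0:", np.round(bands[50, :4], 3))
```

Each column of `bands` gives one energy band $E_n(k)$ sampled over the Brillouin zone; plotting the columns against `ks` reproduces the familiar band picture.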
In order to analyze the limit ${\epsilon}\to 0$, the usual starting point is to decompose the wave function on the Bloch wave functions. This decomposition was in particular used in [@Allaire05]. This has the big advantage of completely diagonalizing the periodic Hamiltonian, but since the wave vector appears both in the plane wave ${\mathrm{e}}^{i k\cdot x}$ and in the standing periodic function $u_{n,{\epsilon}k}$, the separation between the fast oscillating scale and the slow motion carried by the plane wave is not immediate. We follow in this work the idea of Kohn and Luttinger [@LuttingerKohn55] who decompose the wave function on the basis $$\label{kl}
{\mathcal{X}}_{n,k}^\textsc{lk}(x) = {{\left\vert {{\mathcal{B}}} \right\vert}}^{-1/2} \,{\mathbbm{1}}_{\mathcal{B}}(k)\, {\mathrm{e}}^{ik\cdot x}\,u_{n,0}(x).$$ The family ${\mathcal{X}}_{n,k}^\textsc{lk}$ is also a complete orthonormal basis of $L^2({\mathbb{R}}^d)$ but only partially diagonalizes $H^1_{\mathcal{L}}$ since $$\label{HLK}
\begin{aligned}
H_{\mathcal{L}}^1{\mathcal{X}}_{n,k}^\textsc{lk} &= {{\left\vert {{\mathcal{B}}} \right\vert}}^{-1/2} {\mathbbm{1}}_{\mathcal{B}}(k)\, {\mathrm{e}}^{ik\cdot x}
\left[\frac{1}{2} {{\left\vert {k} \right\vert}}^2 - ik\cdot\nabla + E_n(0) \right]u_{n,0}
\\
&= {{\left\vert {{\mathcal{B}}} \right\vert}}^{-1/2} {\mathbbm{1}}_{\mathcal{B}}(k)\, {\mathrm{e}}^{ik\cdot x} \sum_{n'}
\left[\frac{1}{2} {{\left\vert {k} \right\vert}}^2\delta_{nn'} - ik\cdot P_{nn'}
+ E_n\delta_{nn'}\right] u_{n',0}
\\
&= \sum_{n'} \left[\frac{1}{2} {{\left\vert {k} \right\vert}}^2\delta_{nn'} - ik\cdot P_{nn'}
+ E_n\delta_{nn'}\right]{\mathcal{X}}_{n',k}^\textsc{lk}\,.
\end{aligned}$$ Here, $E_n = E_n(0)$ and $$\label{Pdef}
P_{nn'} = \int_{\mathcal{C}}{\overline}u_{n,0}(x) \nabla u_{n',0}(x)\,dx$$ are the matrix elements of the gradient operator between Bloch functions. The interest of the Luttinger-Kohn wave functions is that the wave vector $k$ only appears in the plane wave and not in the standing periodic part $u_{n,0}$. This will allow us to decompose the wave function in a nice way for which we will prove some Hilbert analysis type results. This is the envelope function decomposition that we detail in the following section.
Envelope functions
------------------
In the following, we shall use the symbol ${\mathcal{F}}$ to denote the Fourier transformation on $L^2({\mathbb{R}}^d)$ $$\label{fourier}
{\mathcal{F}}\psi (k) = {1\over (2\pi)^{d/2}} \int_{{\mathbb{R}}^d} e^{-i x\cdot k } \psi(x) \, dx$$ and ${\mathcal{F}}^* = {\mathcal{F}}^{-1}$ for the inverse transformation. We shall use a hat, $\hat \psi = {\mathcal{F}}\psi$, for the Fourier transform of $\psi$.
We define $L^2_{\mathcal{B}}({\mathbb{R}}^d) \subset L^2({\mathbb{R}}^d)$ to be the subspace of $L^2$-functions supported in ${\mathcal{B}}$: $$\label{L2Bdef}
L^2_{\mathcal{B}}({\mathbb{R}}^d) = \left\{ f \in L^2({\mathbb{R}}^d)\ \left| \
\operatorname{supp}\big(f\big) \subset {\mathcal{B}}\right.\right\}.$$ Thus, ${\mathcal{F}}^*L^2_{\mathcal{B}}({\mathbb{R}}^d)$ is the space of $L^2$-functions whose Fourier transform is supported in ${\mathcal{B}}$.
The envelope function decomposition is defined by the following theorem.
\[T1\] Let $v_n : {\mathbb{R}}^d \to {\mathbb{C}}$ be ${\mathcal{L}}$-periodic functions such that $\{ v_n \mid n \in {\mathbb{N}}\}$ is an orthonormal basis of $L^2({\mathcal{C}})$. For every $\psi \in L^2({\mathbb{R}}^d)$ there exists a unique sequence $\{ f_n \in {\mathcal{F}}^*L^2_{\mathcal{B}}({\mathbb{R}}^d)\mid n \in {\mathbb{N}}\}$ such that $$\label{EFdec}
\psi = {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2}\,\sum_n f_n\,v_n.$$ We shall denote $f_n = \pi_n(\psi)$. The decomposition satisfies the Parseval identity $$\label{Parseval}
{{\left\langle \psi,\varphi \right\rangle}}_{L^2({\mathbb{R}}^d)} = \sum_n {{\left\langle \pi_n(\psi), \pi_n(\varphi) \right\rangle}}_{L^2({\mathbb{R}}^d)}.$$ For any ${\epsilon}> 0$ we shall consider the scaled version $f_n^{\epsilon}= \pi_n^{\epsilon}(\psi)$ of the envelope function decomposition as follows: $$\label{EF}
\psi(x) = {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2}\,\sum_n f_n^{\epsilon}(x) \,v_n^{\epsilon}(x),$$ with $\hat f_n^{\epsilon}\in L^2_{{\mathcal{B}}/{\epsilon}}({\mathbb{R}}^d)$, where $$\label{vScaled}
v_n^{\epsilon}(x) = v_n\left( \frac{x}{{\epsilon}} \right).$$ We still have the Parseval identity $$\label{Parsevaleps}
{{\left\langle \psi,\varphi \right\rangle}}_{L^2({\mathbb{R}}^d)} = \sum_n {{\left\langle \pi_n^{\epsilon}(\psi), \pi_n^{\epsilon}(\varphi) \right\rangle}}_{L^2({\mathbb{R}}^d)}.$$ Finally, the Fourier transforms of the ${\epsilon}$-scaled envelope functions are given by $$\label{EFChiEps}
\hat f_n^{\epsilon}(k) = \int_{{\mathbb{R}}^d} {\overline}{\mathcal{X}}_{n,k}^{\epsilon}(x)\,\psi(x)\,dx,$$ where, for $x \in {\mathbb{R}}^d$, $k \in {\mathbb{R}}^d$, $n \in {\mathbb{N}}$, $$\label{ChiEps}
{\mathcal{X}}_{n,k}^{\epsilon}(x) = {{\left\vert {{\mathcal{B}}\,} \right\vert}}^{-1/2}\,{\mathbbm{1}}_{{\mathcal{B}}/{\epsilon}}(k) \,{\mathrm{e}}^{ik\cdot x}\,v_n^{\epsilon}(x).$$
The proof of this theorem is postponed to Section \[post\].
Note that the above result is a variant of the so-called Bloch transform. In [@allaire08], the function $$\widehat{\psi} (x,k) = {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2}\,\sum_n \widehat f_n (k) \,v_n(x)$$ is referred to as the Bloch transform of $\psi$. We also refer to [@allaire98], [@ReedSimonIV78] and [@kuch93] for Bloch wave methods in periodic media.
The functions $f_n = \pi_n(\psi)$ of Theorem \[T1\] will be called the [*envelope functions*]{} of $\psi$ relative to the basis $\{ v_n \mid n\in{\mathbb{N}}\}$, while $f_n^{\epsilon}= \pi_n^{\epsilon}(\psi)$ will be called the ${\epsilon}$-scaled envelope functions relative to the basis $\{ v_n \mid n\in{\mathbb{N}}\}$.
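As a simple illustration (an example of our own choosing, not a basis used later in the paper), take for $\{v_n\}$ the Fourier basis $v_n(z) = {{\left\vert {{\mathcal{C}}} \right\vert}}^{-1/2}{\mathrm{e}}^{i\lambda_n\cdot z}$, where $n \mapsto \lambda_n$ is an enumeration of ${\mathcal{L}}^*$. Then, for ${\epsilon}= 1$, formula \eqref{EFChiEps} together with \eqref{CB} gives explicitly $$\hat f_n(k) = {\mathbbm{1}}_{\mathcal{B}}(k)\,\hat\psi(k+\lambda_n),$$ so the envelope decomposition simply cuts $\hat\psi$ into its pieces over the translated cells ${\mathcal{B}}+\lambda_n$, and the Parseval identity \eqref{Parseval} reduces to $\sum_n \int_{\mathcal{B}}|\hat\psi(k+\lambda_n)|^2\,dk = {{\| {\hat\psi} \|}}^2_{L^2({\mathbb{R}}^d)}$.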
\[T2\] Let us consider the ${\epsilon}$-scaled envelope function decomposition of $\psi \in L^2({\mathbb{R}}^d)$. Then, for every $\theta \in L^1({\mathbb{R}}^d)$ such that $\hat\theta \in L^1({\mathbb{R}}^d)$, we have $$\lim_{{\epsilon}\to 0} \int_{{\mathbb{R}}^d} \theta(x)\left[|\psi(x)|^2 -\sum_{n} |f_n^{\epsilon}(x)|^2\right] dx = 0.$$
The proof of this theorem is also postponed to Section \[post\].
Functional spaces
-----------------
In this section, we define some functional spaces which will be used all along the paper.
\[Spaces\] We define the space ${\mathcal{L}}^2 = \ell^2\left({\mathbb{N}},L^2({\mathbb{R}}^d)\right)$ as the Hilbert space of sequences $g = (g_0,g_1,\ldots)$, $g_n = g_n(k)$, with $g_n \in L^2({\mathbb{R}}^d)$, such that $${{\| {g} \|}}_{{\mathcal{L}}^2}^2 = \sum_n {{\| {g_n} \|}}^2_{L^2({\mathbb{R}}^d)} < \infty.
\label{L2}$$ Moreover, for $\mu \geq 0$ let ${\mathcal{L}}^2_\mu$ be the subspace of all sequences $g \in {\mathcal{L}}^2$ such that $$\label{L2mu}
{{\| {g} \|}}_{{\mathcal{L}}^2_\mu}^2 = {{\| {(1+{{\left\vert {k} \right\vert}}^2)^{\mu/2}g} \|}}_{{\mathcal{L}}^2}^2
= \sum_{n} {{\| {(1+{{\left\vert {k} \right\vert}}^2)^{\mu/2}g_n} \|}}_{L^2}^2< \infty$$ and let ${\mathcal{H}}^\mu = \ell^2\left({\mathbb{N}},H^\mu({\mathbb{R}}^d)\right)$, with $$\label{Hmu}
{{\| {f} \|}}_{{\mathcal{H}}^\mu}^2 = \sum_{n} {{\| {f_n} \|}}_{{\mathcal{H}}^\mu}^2
= \sum_{n} {{\| {(1+{{\left\vert {k} \right\vert}}^2)^{\mu/2}\widehat{f}_n} \|}}_{L^2}^2< \infty.$$ It is readily seen that $f\in {\mathcal{H}}^\mu$ if and only if $\widehat{f}\in {\mathcal{L}}^2_\mu$. Let us redefine the eigenpairs $(E_n,v_n)$ of the operator $H_{\mathcal{L}}^1 = -\frac{1}{2}\Delta + W_{\mathcal{L}}$ with periodic boundary conditions by $$\label{periodic}
\left\{
\begin{aligned}
&-\textstyle{\frac{1}{2}}\Delta v_n + W_{\mathcal{L}}v_n = E_n v_n, \quad \text{\rm on ${\mathcal{C}}$}
\\[6pt]
&\int_{\mathcal{C}}|v_n|^2\, dx = 1, \quad \text{\rm $v_n$ periodic}
\end{aligned}
\right.$$ (note that $v_n = u_{n,0}$, according to Definition \[BlochDef\]). The sequence $E_n$ is increasing and tends to $+\infty$.
Let us now define the functional spaces for the external potential: $$\label{vdeff}
{\mathcal{W}}_\mu = \Big\{ V \in L^\infty({\mathbb{R}}^{2d}) \ \Big| \
V(\cdot,z+\lambda) = V(\cdot,z),\ \lambda \in {\mathcal{L}},\
{{\| {V} \|}}_{{\mathcal{W}}_\mu} < \infty \Big\},$$ where $$\label{Mdef}
{{\| {V} \|}}_{{\mathcal{W}}_\mu} = \frac{1}{(2\pi)^{d/2}}\, \mathop\mathrm{ess\,sup}\limits_{z \in {\mathcal{C}}}
\int_{{\mathbb{R}}^d} (1 + {{\left\vert {k} \right\vert}})^\mu |\hat V(k,z)|\,dk$$ and ${\displaystyle}\widehat{V}(k,z) = (2\pi)^{-d/2}\int_{{\mathbb{R}}^d} {\mathrm{e}}^{-ik\cdot x}\, V(x,z)\,dx$.
We finally define for any positive constant $\gamma$ the truncation operator $$\label{truncation}
{\mathcal{T}}_\gamma (f) = {\mathcal{F}}^* ({\mathbbm{1}}_{\gamma {\mathcal{B}}} \widehat{f}).$$ It is now readily seen that the truncation operator satisfies for any nonnegative real numbers $s,\mu$, $${{\| {f-{\mathcal{T}}_\gamma f} \|}}_{H^s} \leq C \gamma^{-\mu} {{\| {f} \|}}_{H^{s+\mu}},
\label{truc-error}$$ where $C>0$ is a suitable constant independent of $\gamma$.
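For instance, \eqref{truc-error} can be obtained as follows: if $R>0$ denotes the radius of a ball contained in ${\mathcal{B}}$, then $k \notin \gamma{\mathcal{B}}$ implies ${{\left\vert {k} \right\vert}} \geq R\gamma$, and therefore $${{\| {f-{\mathcal{T}}_\gamma f} \|}}_{H^s}^2
= \int_{k\notin\gamma{\mathcal{B}}} (1+{{\left\vert {k} \right\vert}}^2)^{s}\,|\widehat{f}(k)|^2\,dk
\leq (R\gamma)^{-2\mu} \int_{{\mathbb{R}}^d} (1+{{\left\vert {k} \right\vert}}^2)^{s+\mu}\,|\widehat{f}(k)|^2\,dk,$$ so that \eqref{truc-error} holds with $C = R^{-\mu}$.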
Main Theorem
------------
We announce in this section the main theorem of our paper. We recall that $(v_n,E_n)$ are defined by \eqref{periodic}.
\[main\] Assume that $W_{\mathcal{L}}\in L^\infty$ and that all the eigenvalues $E_n = E_n(0)$ are simple. Let $\psi^{in,{\epsilon}}$ be an initial datum in $L^2({\mathbb{R}}^d)$, let $f^{in,{\epsilon}}_n= \pi_n^{\epsilon}(\psi^{in,{\epsilon}})$ be its scaled envelope functions relative to the basis $v_n$. Assume that the sequence $f^{in,{\epsilon}}$ belongs to ${\mathcal{H}}^\mu$, with a uniform bound for the norm as ${\epsilon}$ vanishes, and that it converges in ${\mathcal{L}}^2$ as ${\epsilon}$ tends to zero to an initial datum $f^{in}$. Let $\psi^{\epsilon}$ be the unique solution of $$\label{SE1}
\begin{aligned}
&i\partial_t\, \psi^{\epsilon}(t,x) = \left( -\frac{1}{2}\Delta + \frac{1}{{\epsilon}^2}
\,W_{\mathcal{L}}\left(\frac{x}{{\epsilon}}\right) + V\left(x,\frac{x}{{\epsilon}}\right) \right) \psi^{\epsilon}(t,x),
\\
&\psi(t=0) = \psi^{in,{\epsilon}},
\end{aligned}$$ and assume that $V\in {\mathcal{W}}_\mu$ for a positive $\mu$. Then for any $\theta\in L^1({\mathbb{R}}^d)$ such that $\widehat{\theta} \in L^1({\mathbb{R}}^d)$, we have the following local uniform convergence in time $$\int |\psi^{\epsilon}(t,x)|^2\theta(x)\, dx \to \int \sum_n |h_n(t,x)|^2 \theta(x)\, dx$$ where the envelope function $h_n$ is the unique solution of the homogenized Schrödinger equation $$i\partial_t\, h_n =
- \frac{1}{2} \operatorname{div}\left( {\mathbb{M}}_n^{-1} \nabla h_n \right)
+ V_{nn}(x)\,h_n,\quad h_n(t=0) = f^{in}_n,$$ with $$V_{nn} = \int_{\mathcal{C}}V(x,z)|v_n(z)|^2\, dz$$ and $${\mathbb{M}}_n^{-1} = \nabla\otimes\nabla \,E_n(k)_{\,|k=0}
= I - 2 \sum_{n' \not= n} \frac{P_{nn'} \otimes P_{n'n}}{E_n - E_{n'}}.$$ (effective mass tensor of the $n$-th band).
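The expression for ${\mathbb{M}}_n^{-1}$ is, at least formally, nothing but second-order perturbation theory for the fiber Hamiltonian \eqref{HkDef}: treating $\frac{1}{2}{{\left\vert {k} \right\vert}}^2 - ik\cdot\nabla$ as a perturbation of $H_{\mathcal{L}}^1$ and choosing $v_n$ real (which is possible since $E_n$ is simple), so that $P_{nn} = 0$, one finds $$E_n(k) = E_n + \frac{1}{2}{{\left\vert {k} \right\vert}}^2 + \sum_{n'\neq n} \frac{{{\left\vert {k\cdot P_{n'n}} \right\vert}}^2}{E_n - E_{n'}} + O({{\left\vert {k} \right\vert}}^3)
= E_n + \frac{1}{2}\, k\cdot {\mathbb{M}}_n^{-1} k + O({{\left\vert {k} \right\vert}}^3),$$ where the second equality uses $P_{n'n} = -{\overline}{P_{nn'}}$ (integration by parts over ${\mathcal{C}}$), so that ${{\left\vert {k\cdot P_{n'n}} \right\vert}}^2 = -(k\cdot P_{nn'})(k\cdot P_{n'n})$.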
From the Schrödinger equation to the k$\cdot$p model {#sec3}
====================================================
Let $\psi^{\epsilon}(t,x)$ be the solution of the Schrödinger equation and let $f_n^{\epsilon}(t,x)$ be its ${\epsilon}$-scaled envelope functions relative to the basis $v_n$ defined above: $$\psi^{\epsilon}(t,x) = {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2}\sum_n f_n^{\epsilon}(t,x) v_n^{\epsilon}(x).$$ Let us define $$g_n^{\epsilon}(t,k) = \widehat{f}_n^{\epsilon}(t,k).$$
From now on, we will reserve the notation $f$ for functions of the position variable $x$, while $g$ will be used for functions of the wavevector $k$. Multiplying the Schrödinger equation by $\overline{{\mathcal{X}}_{n,k}^{\epsilon}(x)}$ (see Eq. \eqref{ChiEps}) and integrating over $x$ leads to the following equation $$\label{EXE2}
\begin{aligned}
i\partial_t \,g_n^{\epsilon}(t,k)
&= \frac{1}{2}\,{{\left\vert {k} \right\vert}}^2\,g_n^{\epsilon}(t,k) - \frac{i}{{\epsilon}} \sum_{n'}
k\cdot P_{nn'} g_{n'}^{\epsilon}(t,k) + \frac{1}{{\epsilon}^2}\,E_n\,g_n^{\epsilon}(t,k)
\\
&+ \sum_{n'} \int_{{\mathbb{R}}^d} U^{\epsilon}_{nn'}(k,k')\,g_{n'}^{\epsilon}(t,k') \,dk',
\end{aligned}$$ where the kernel $U_{nn'}(k,k')$ is given by $$\begin{array}{lll}
{\displaystyle}U^{\epsilon}_{nn'}(k,k') &= &{\displaystyle}\int_{{\mathbb{R}}^d} {\overline}{\mathcal{X}}_{n,k}^{\epsilon}(x)\, V\left(x,{x\over{\epsilon}}\right)\,{\mathcal{X}}_{n',k'}^{\epsilon}(x)\,dx
\\[8pt]
&=& {\displaystyle}{{\left\vert {{\mathcal{B}}\,} \right\vert}}^{-1}\,{\mathbbm{1}}_{{\mathcal{B}}/{\epsilon}}(k) \int_{{\mathbb{R}}^d} {\mathbbm{1}}_{{\mathcal{B}}/{\epsilon}}(k') \,{\mathrm{e}}^{-i(k-k')\cdot x}\,
\overline{v_n}^{\epsilon}(x)\, V\left(x,{x\over{\epsilon}}\right)\,v_{n'}^{\epsilon}(x)\,dx.
\end{array}$$ By writing $$V(x,z) v_n(z) = \sum_{n'} V_{n'n}(x) v_{n'} (z),$$ where $$\label{VnmDef}
V_{n'n}(x) = \int_{\mathcal{C}}{\overline}v_{n'}(z)\, v_n(z)\, V(x,z)\,dz = {\overline}V_{nn'}(x),$$ we can express $U^{\epsilon}_{nn'}(k,k')$ in the form $$\label{uneps}
U^{\epsilon}_{nn'}(k,k') = \frac{{\mathbbm{1}}_{{\mathcal{B}}/{\epsilon}}(k)}{{{\left\vert {{\mathcal{B}}} \right\vert}}} \sum_{m} \int_{{\mathbb{R}}^d} {\mathbbm{1}}_{{\mathcal{B}}/{\epsilon}}(k')
\,{\mathrm{e}}^{-i(k-k')\cdot x}\,\overline{v_n}^{\epsilon}(x)\, V_{mn'}(x) v_{m}^{\epsilon}(x)\,dx$$ In position variables, the envelope functions satisfy the system $$\begin{gathered}
\label{EXE}
i\partial_t \,f_n^{\epsilon}(t,x) =
{E_n\over {\epsilon}^2} f_{n}(t,x)
- \textstyle{\frac{1}{2}} \Delta\,f_n^{\epsilon}(t,x)
\\
- \frac{1}{{\epsilon}} \sum_{n'\in{\mathbb{N}}} P_{nn'}\cdot\nabla f_{n'}^{\epsilon}(t,x)
+ \sum_{n'\in{\mathbb{N}}} \int_{{\mathbb{R}}^d} V_{n n'}^{\epsilon}(x,x')\, f_{n'}^{\epsilon}(t,x')\,dx',\end{gathered}$$ where $$\begin{gathered}
V_{nn'}^{\epsilon}(x,x') =
\frac{1}{(2\pi)^d{{\left\vert {{\mathcal{B}}} \right\vert}}}\, \int_{{\mathcal{B}}/{\epsilon}} dk \int_{{\mathbb{R}}^d}dy \int_{{\mathcal{B}}/{\epsilon}} dk' \times
\\
\times \left\{ {\mathrm{e}}^{ik\cdot x} {\mathrm{e}}^{-i(k - k')\cdot y}\, {\overline}v_n^{\epsilon}(y)
V\left(y,{y\over{\epsilon}}\right) v_{n'}^{\epsilon}(y)\,{\mathrm{e}}^{-ik'\cdot x'} \right\}\end{gathered}$$ From equation \eqref{EXE} we see that the fast oscillation scales are different for different envelope functions. This will naturally lead to adiabatic decoupling (see [@hagedorn-joye; @panati; @spohn-teufel; @teufel]).
\[U\] Let us define the operator ${\mathcal{U}}^{\epsilon}$ on ${\mathcal{L}}^2$ as follows: for any element $g =(g_0, g_1, \ldots)$ of ${\mathcal{L}}^2$ $$\label{Udef}
\left( {\mathcal{U}}^{\epsilon}g \right)_n(k) =
\sum_{n'} \int_{{\mathbb{R}}^d} U^{\epsilon}_{nn'}(k,k')\,g_{n'}^{\epsilon}(k') \,dk'.$$ Let us also define the operator ${\mathbf{V}}^{\epsilon}$ on the position space ${\mathcal{L}}^2$ by $$\label{Vdef}
\left( {\mathbf{V}}^{\epsilon}f \right)_n(x) =
\sum_{n'} \int_{{\mathbb{R}}^d} V^{\epsilon}_{nn'}(x,x')\,f_{n'}^{\epsilon}(x') \,dx'.$$ We obviously have $$\widehat{{\mathbf{V}}^{\epsilon}(f)} = {\mathcal{U}}^{\epsilon}(\widehat{f}).$$
Since $v_n$ and $v_m$ are ${\mathcal{L}}$-periodic, the formal limit of $ U^{\epsilon}_{nn'}(k,k')$ is given by $$U^0_{nn'}(k,k')
= \sum_{m} { {{\left\langle v_n,v_m \right\rangle}}\over {{\left\vert {{\mathcal{B}}} \right\vert}} {{\left\vert {{\mathcal{C}}} \right\vert}}}\int_{{\mathbb{R}}^d} {\mathrm{e}}^{-i(k-k')\cdot x}\, V_{mn'}(x)\, dx
= {1 \over (2\pi)^{d/2} } \widehat{V_{nn'}}(k-k').$$ Therefore the formal limit of ${\mathcal{U}}^{\epsilon}$ is the operator ${\mathcal{U}}^0$ defined by $$\label{U0def}
\left( {\mathcal{U}}^0 g \right)_n(k) =
\sum_{n'} \frac{1}{(2\pi)^{d/2}}\int_{{\mathbb{R}}^d}\hat V_{nn'}(k-k')\,g_{n'}(k')\,dk',$$ which means that in position space the limit of ${\mathbf{V}}^{\epsilon}$ is the non-diagonal multiplication operator ${\mathbf{V}}^0$ defined by $$\label{V0def}
\left( {\mathbf{V}}^0 f \right)_n(x) =
\sum_{n'} V_{nn'}(x)\,f_{n'}(x).$$ The operators become diagonal in $n$ if $V(x,z)$ does not depend on $z$. Indeed, in this case $V_{nn'}(x) = V(x)\delta_{nn'}$. The k$\cdot$p approximation found in semiconductor theory [@Wenckebach99], consists in replacing the operator ${\mathcal{U}}^{\epsilon}$ by ${\mathcal{U}}^0$. Let us now analyze the departure of ${\mathcal{U}}^{\epsilon}$ from ${\mathcal{U}}^0$.
\[Lemma0\] Let the external potential $V(x,z)$ be in $L^\infty$. Then, for any ${\epsilon}\geq 0$, ${\mathcal{U}}^{\epsilon}$ is a bounded operator on ${\mathcal{L}}^2$ and we have the uniform bound $$\label{Knorm}
{{\| {{\mathcal{U}}^{\epsilon}} \|}} \leq {{\| {V} \|}}_{L^\infty}, \quad \forall\ {\epsilon}\geq 0.$$
Let us begin with the case ${\epsilon}= 0$. We remark that $${\mathcal{U}}^0 g = \widehat{{\mathbf{V}}^0 (f)},$$ where $f = {\mathcal{F}}^*(g)$. Let $G$ be another element of ${\mathcal{L}}^2$, and let $F$ be its back Fourier transform. We have $$\begin{aligned}
{{\left\vert {{{\left\langle {\mathcal{U}}^0 g,G \right\rangle}}} \right\vert}}
& = {{\left\vert {{{\left\langle {\mathbf{V}}^0 f, F \right\rangle}}} \right\vert}} = {{\left\vert {\sum_{nn'} \int V_{nn'}(x) f_{n'}(x) \overline{F_n}(x)\, dx} \right\vert}}
\\[6pt]
&= {{\left\vert {\sum_{nn'} \int V(x,z) v_{n'}(z)\overline{v_n}(z) f_{n'}(x)\overline{F_n}(x)\, dx\, dz} \right\vert}}
\\[6pt]
&= {{\left\vert {\int V(x,z) \left[\sum_n f_{n}(x) v_n(z) \right]\overline{\left[\sum_n F_{n}(x) v_n(z)\right]} dx\, dz} \right\vert}}
\\[2pt]
&\leq {{\| {V} \|}}_{L^\infty}\!\! \left[\int \Big| \sum_n f_{n}(x) v_n(z)\Big|^2 dx\, dz\right]^{\! {1\over 2}}
\! \left[\int \Big|\sum_n F_{n}(x) v_n(z)\Big|^2 dx\, dz \right]^{\! {1\over 2}}
\\[6pt]
&\leq {{\| {V} \|}}_{L^\infty} {{\| {f} \|}}_{{\mathcal{L}}^2}{{\| {F} \|}}_{{\mathcal{L}}^2} = {{\| {V} \|}}_{L^\infty} {{\| {g} \|}}_{{\mathcal{L}}^2}{{\| {G} \|}}_{{\mathcal{L}}^2}.
\end{aligned}$$ Since the result holds for any $g$ and $G$ in ${\mathcal{L}}^2$, this implies that ${{\| {{\mathcal{U}}^0(g)} \|}}_{{\mathcal{L}}^2} \leq {{\| {V} \|}}_{L^\infty} {{\| {g} \|}}_{{\mathcal{L}}^2}.$ For ${\epsilon}> 0$ it is enough to observe that ${\mathcal{U}}^{\epsilon}$ is unitarily equivalent to the multiplication operator by $V(x,{x\over{\epsilon}})$ in position space. More precisely, defining $f^{\epsilon}(x) = {\mathcal{F}}^*({\mathbbm{1}_{{\mathcal{B}}/{\epsilon}}}g)$ and defining $\psi^{\epsilon}(x) = \sum_n f_n^{\epsilon}(x) v_n^{\epsilon}(x)$ so that $f_n^{\epsilon}= \pi_n^{\epsilon}(\psi^{\epsilon})$, then it follows from the definition of ${\mathcal{U}}^{\epsilon}$ that $$( {\mathcal{U}}^{\epsilon}g)_n ={\mathcal{F}}\left[ \pi_n^{\epsilon}\left( V\Big(x,{x\over{\epsilon}}\Big) \psi^{\epsilon}\right)\right].$$ It is now readily seen that $${{\| {{\mathcal{U}}^{\epsilon}(g)} \|}}_{{\mathcal{L}}^2}^2 = {{\| {V\left(x,{x\over{\epsilon}}\right) \psi^{\epsilon}} \|}}_{L^2}^2
\leq {{\| {V} \|}}_{L^\infty}^2 {{\| { \psi^{\epsilon}} \|}}_{L^2}^2
\leq {{\| {V} \|}}_{L^\infty}^2 {{\| {g} \|}}_{{\mathcal{L}}^2}^2.$$
\[gammab\] For any $\gamma > 0$, let $\gamma {\mathcal{B}}$ denote the set of points $\gamma k$ with $k \in {\mathcal{B}}$. Then, for any $\gamma, \beta > 0$, $$\gamma {\mathcal{B}}+ \beta {\mathcal{B}}= (\gamma+\beta ){\mathcal{B}}.$$ Moreover, let $k\in {\mathcal{B}}$ and $k'\in {1\over 3} {\mathcal{B}}$, and let $\lambda$ be a non-vanishing element of the reciprocal lattice ${\mathcal{L}}^*$. Then $k- k' + \lambda \notin {1\over 3} {\mathcal{B}}$.
The proof of this lemma is immediate (using the fact that ${\mathcal{B}}$ is the linear deformation of a hypercube, see definition \eqref{Brillo}) and is left to the reader.
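For the reader’s convenience, one possible argument runs as follows. Write $k = L^*a$, $k' = L^*b$ and $\lambda = L^*m$ with $a \in [-\frac{1}{2},\frac{1}{2}]^d$, $b \in [-\frac{1}{6},\frac{1}{6}]^d$ and $m \in {\mathbb{Z}}^d\setminus\{0\}$. If $i$ is an index with $m_i \neq 0$, then the $i$-th coordinate of $a - b + m$ satisfies $${{\left\vert {a_i - b_i + m_i} \right\vert}} \geq 1 - \frac{1}{2} - \frac{1}{6} = \frac{1}{3} > \frac{1}{6},$$ so that $k - k' + \lambda = L^*(a-b+m) \notin \frac{1}{3}{\mathcal{B}}$. The identity $\gamma{\mathcal{B}}+ \beta{\mathcal{B}}= (\gamma+\beta){\mathcal{B}}$ follows in the same coordinates from $[-\frac{\gamma}{2},\frac{\gamma}{2}]^d + [-\frac{\beta}{2},\frac{\beta}{2}]^d = [-\frac{\gamma+\beta}{2},\frac{\gamma+\beta}{2}]^d$.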
\[Lemma01\] Let $V \in {\mathcal{W}}_0$ and $g \in {\mathcal{L}}^2$ be such that $\operatorname{supp}\big(\hat V_{nm}\big) \subset {1\over 3 {\epsilon}} {\mathcal{B}}$ and $\operatorname{supp}(g_n) \subset {1\over 3 {\epsilon}} {\mathcal{B}}$, for all $n, m \in {\mathbb{N}}$. Then, in this case, $\,{\mathcal{U}}^{\epsilon}g = {\mathcal{U}}^0 g$.
Let us first notice that $\{ {{\left\vert {{\mathcal{C}}} \right\vert}}^{-1/2}\,{\mathrm{e}}^{i\eta\cdot x} \mid \eta \in {\mathcal{L}}^* \}$ is an orthonormal basis of $L^2({\mathcal{C}})$ (the Fourier basis). We first deduce from the definition of $U^{\epsilon}_{nn'}$ and from the identity $$v_n (x) = {1\over {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2}} \sum_{\lambda \in {\mathcal{L}}^*} v_{n,\lambda} e^{i\lambda\cdot x},$$ where $v_{n,\lambda} = \langle v_n, { e^{i\lambda\cdot x}\over {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2}}\rangle$, that $$\begin{gathered}
\begin{aligned}
\left( {\mathcal{U}}^{\epsilon}g \right)_n(k) =
\sum_{\lambda,\lambda'\in {\mathcal{L}}^*}\sum_{m,n'} \int_{{\mathbb{R}}^d\times {\mathbb{R}}^d}
&e^{-i(k-k' +{\lambda-\lambda' \over {\epsilon}})\cdot x} {\mathbbm{1}_{{\mathcal{B}}/{\epsilon}}}(k') {\mathbbm{1}_{{\mathcal{B}}/{\epsilon}}}(k) \times
\\
&\times V_{nm}(x) \overline{v_{m,\lambda} } \,v_{n',\lambda'}\, g_{n'} (k')\, dx\, dk' =
\end{aligned}
\\[6pt]
{\mathbbm{1}_{{\mathcal{B}}/{\epsilon}}}(k) {(2\pi)^{d/2}} \!\!\! \sum_{\lambda,\lambda'\in {\mathcal{L}}^*}
\sum_{m,n'} \int_{{\mathcal{B}}/{\epsilon}} \widehat{V}_{nm} \left( k-k' +\textstyle{{\lambda-\lambda' \over {\epsilon}}}\right)
\overline{v_{m,\lambda} } \,v_{n',\lambda'} \, g_{n'} (k')\, dk'.\end{gathered}$$ Since the support of $g_{n'}$ is included in ${\mathcal{B}}/3{\epsilon}$ and $k\in {\mathcal{B}}/{\epsilon}$, Lemma \[gammab\] implies that the only contributing terms to the above sum are those for which $\lambda = \lambda'$. Therefore, we are led to evaluate $\sum_{\lambda} \overline{v_{m,\lambda} } \,v_{n',\lambda}$, which is equal to ${{\left\langle v_{n'}, v_m \right\rangle}} = \delta_{mn'}$ because of the orthonormality of the family $(v_n)$. Therefore $$\left( {\mathcal{U}}^{\epsilon}g \right)_n(k) = (2\pi)^{-d/2} {\mathbbm{1}_{{\mathcal{B}}/{\epsilon}}}(k)\sum_{n'} \int_{{\mathcal{B}}/{\epsilon}}
\widehat{V_{nn'}} ( k-k' ) g_{n'} (k')\, dk'.$$ Now, we can remove ${\mathbbm{1}_{{\mathcal{B}}/{\epsilon}}}(k)$ from the right hand side of the above identity, since both the support of $g_{n'}$ and that of $\widehat{V_{nn'}}$ are in ${1\over 3{\epsilon}} {\mathcal{B}}$. Hence $$\left( {\mathcal{U}}^{\epsilon}g \right)_n(k) = (2\pi)^{-d/2} \sum_{n'} \int_{{\mathbb{R}}^d}
\widehat{V_{nn'}} ( k-k' ) g_{n'} (k')\, dk' = \left( {\mathcal{U}}^0 g \right)_n(k).$$
\[T3\] Assume that $V \in {\mathcal{W}}_\mu$ for some $\mu \geq 0$. Then, a constant $c_\mu > 0$, independent of ${\epsilon}$, exists such that $$\label{Kextima}
{{\| {{\mathcal{U}}^{\epsilon}g - {\mathcal{U}}^0 g } \|}}_{{\mathcal{L}}^2} \leq {\epsilon}^\mu\,c_\mu\,{{\| {V} \|}}_{{\mathcal{W}}_\mu} \,{{\| {g} \|}}_{{\mathcal{L}}^2_\mu}$$ for all $g \in {\mathcal{L}}^2_\mu$ and for all ${\epsilon}> 0$.
Let the smoothed potential $V_s^{\epsilon}$ be defined by $$\label{UepsDef}
\hat V_s^{\epsilon}(k,z) = {\mathbbm{1}}_{{\mathcal{B}}/3{\epsilon}}(k)\,\hat V(k,z).$$ Moreover, let ${\mathcal{U}}^{\epsilon}_s$ denote the operator ${\mathcal{U}}^{\epsilon}$ with the potential $V_s$. Let us assume firstly that $\operatorname{supp}\,(g_n) \subset {\mathcal{B}}/3{\epsilon}$ for all $n \in {\mathbb{N}}$. Then, from Lemma \[Lemma01\] we have ${\mathcal{U}}^{\epsilon}_s g = {\mathcal{U}}^0_s g$ and we can write $$\label{aux0}
{{\| {{\mathcal{U}}^{\epsilon}g - {\mathcal{U}}^0 g} \|}}_{{\mathcal{L}}^2} \leq
{{\| {{\mathcal{U}}^{\epsilon}g - {\mathcal{U}}^{\epsilon}_s g} \|}}_{{\mathcal{L}}^2} + {{\| {{\mathcal{U}}^0_s g - {\mathcal{U}}^0 g} \|}}_{{\mathcal{L}}^2} .$$ Using Lemma \[Lemma0\] and the linearity of ${\mathcal{U}}^{\epsilon}$ and ${\mathcal{U}}^0$ with respect to the potential, we have $${{\| {{\mathcal{U}}^{\epsilon}g - {\mathcal{U}}^{\epsilon}_s g} \|}}_{{\mathcal{L}}^2} \leq {{\| {V-V_s^{\epsilon}} \|}}_{{\mathcal{W}}_0} \, {{\| {g} \|}}_{{\mathcal{L}}^2},
\qquad {\epsilon}\geq 0.$$ Recalling the definition \eqref{Mdef}, we also have $${{\| {V-V_s^{\epsilon}} \|}}_{{\mathcal{W}}_0} = \frac{1}{(2\pi)^{d/2}}\, \mathop\mathrm{ess\,sup}\limits_{z \in {\mathcal{C}}}
\int_{{\mathbb{R}}^d \setminus {\mathcal{B}}/3{\epsilon}} |\hat V(k,z)|\,dk$$ $$\leq \frac{1}{(2\pi)^{d/2}}\, \mathop\mathrm{ess\,sup}\limits_{z \in {\mathcal{C}}}
\int_{k\notin {\mathcal{B}}/3{\epsilon}} \left( \frac{{{\left\vert {3{\epsilon}k} \right\vert}} }{R} \right)^\mu\, |\hat V(k,z)|\,dk
\leq \left(\frac{3{\epsilon}}{R}\right)^\mu \, {{\| {V} \|}}_{{\mathcal{W}}_\mu},$$ where $R>0$ is the radius of a sphere contained in ${\mathcal{B}}$. Then (still in the case $\operatorname{supp}(g_n) \subset {\mathcal{B}}/3{\epsilon}$), from \eqref{aux0} we get $$\label{I2}
{{\| {{\mathcal{U}}^{\epsilon}g - {\mathcal{U}}^0 g} \|}}_{{\mathcal{L}}^2}
\leq 2\left(\frac{3{\epsilon}}{R}\right)^\mu {{\| {V} \|}}_{{\mathcal{W}}_\mu}\,{{\| {g} \|}}_{{\mathcal{L}}^2}.$$ Now, if $g \in {{\mathcal{L}}^2_\mu}$ (Definition \[Spaces\]), we can write (using ${\mathbbm{1}}^c = 1- {\mathbbm{1}}$) $$\label{aux2}
{{\| {{\mathcal{U}}^{\epsilon}g - {\mathcal{U}}^0 g} \|}}_{{\mathcal{L}}^2} \leq
{{\| {{\mathcal{U}}^{\epsilon}{\mathbbm{1}}_{{\mathcal{B}}/3{\epsilon}}^c g} \|}}_{{\mathcal{L}}^2} +
{{\| {({\mathcal{U}}^{\epsilon}- {\mathcal{U}}^0) {\mathbbm{1}}_{{\mathcal{B}}/3{\epsilon}} g} \|}}_{{\mathcal{L}}^2}
+ {{\| {{\mathcal{U}}^0 {\mathbbm{1}}_{{\mathcal{B}}/3{\epsilon}}^c g} \|}}_{{\mathcal{L}}^2}$$ From we have ${{\| {{\mathcal{U}}^{\epsilon}{\mathbbm{1}}_{{\mathcal{B}}/3{\epsilon}}^c g} \|}}_{{\mathcal{L}}^2} \leq {{\| {V} \|}}_{{\mathcal{W}}_0} {{\| {{\mathbbm{1}}_{{\mathcal{B}}/3{\epsilon}}^c g} \|}}_{{\mathcal{L}}^2}$, for all ${\epsilon}\geq 0$. But $${{\| {{\mathbbm{1}}_{{\mathcal{B}}/3{\epsilon}}^c g} \|}}_{{\mathcal{L}}^2}^2 = \sum_n \int_{k\notin {\mathcal{B}}/3{\epsilon}} {{\left\vert {g_n(k)} \right\vert}}^2 \,dk$$ $$\leq \sum_n \int_{k\notin {\mathcal{B}}/3{\epsilon}}
\left( \frac{{{\left\vert {3{\epsilon}k} \right\vert}} }{R} \right)^{2\mu} {{\left\vert {g_n(k)} \right\vert}}^2 \,dk
\leq \left(\frac{3{\epsilon}}{R}\right)^{2\mu}{{\| {g} \|}}_{{{\mathcal{L}}^2_\mu}}^2$$ and so we can estimate the first and third terms in the right hand side of \eqref{aux2} as follows: $${{\| {{\mathcal{U}}^{\epsilon}{\mathbbm{1}}_{{\mathcal{B}}/3{\epsilon}}^c g} \|}}_{{\mathcal{L}}^2} + {{\| {{\mathcal{U}}^0 {\mathbbm{1}}_{{\mathcal{B}}/3{\epsilon}}^c g} \|}}_{{\mathcal{L}}^2}
\leq 2 \left(\frac{3{\epsilon}}{R}\right)^{\mu} {{\| {V} \|}}_{{\mathcal{W}}_0}\,{{\| {g} \|}}_{{{\mathcal{L}}^2_\mu}}.$$ Moreover, since Eq. \eqref{I2} holds for ${\mathbbm{1}}_{{\mathcal{B}}/3{\epsilon}} g$, we can also estimate the second term: $${{\| {({\mathcal{U}}^{\epsilon}- {\mathcal{U}}^0) {\mathbbm{1}}_{{\mathcal{B}}/3{\epsilon}} g} \|}}_{{\mathcal{L}}^2} \leq
2 \left(\frac{3{\epsilon}}{R}\right)^\mu {{\| {V} \|}}_{{\mathcal{W}}_\mu}\,{{\| {g} \|}}_{{\mathcal{L}}^2}.$$ Since ${{\| {V} \|}}_{{\mathcal{W}}_0} \leq {{\| {V} \|}}_{{\mathcal{W}}_\mu}$ and ${{\| {g} \|}}_{{\mathcal{L}}^2} \leq {{\| {g} \|}}_{{{\mathcal{L}}^2_\mu}}$, from \eqref{aux2} we conclude that \eqref{Kextima} holds, with $c_\mu = 4 (3/R)^\mu$ (note that $R$ does not depend on ${\epsilon}$).
Diagonalization of the k$\cdot$p Hamiltonian {#sec4}
============================================
In this section, we consider the case $V(x,z) = 0$ and concentrate on the diagonalization of the k$\cdot$p Hamiltonian. The envelope function dynamics are then given in Fourier variables by Eq. \eqref{EXE2}, which we rewrite in the form $$\label{FKPE}
i{\epsilon}^2\partial_t \,g_n(t,k) =
\frac{1}{2}{\epsilon}^2 {{\left\vert {k} \right\vert}}^2 g_n(t,k)
- i {\epsilon}\sum_{n'} k \cdot P_{nn'} g_{n'}(t,k) + E_n g_n(t,k).$$ Putting $\xi = {\epsilon}k$, we are therefore led to consider, for any fixed $\xi \in {\mathbb{R}}^d$, the following operators, acting in $\ell^2 \equiv \ell^2({\mathbb{N}},{\mathbb{C}})$ and defined on their maximal domains: $$\label{A012def}
(A_0)_{nn'} = E_n \delta_{nn'},
\quad
\left(A_1(\xi)\right)_{nn'} = -i\xi\cdot P_{nn'},
\quad
\left(A_2(\xi)\right)_{nn'} = \frac{1}{2}\,{{\left\vert {\xi} \right\vert}}^2\,\delta_{nn'}.$$ Moreover, we put $A(\xi) = A_0 + A_1(\xi) + A_2(\xi)$, so that $$\label{Adef}
\left(A(\xi)\right)_{nn'} = E_n \delta_{nn'} -i\xi\cdot P_{nn'}
+ \frac{1}{2}\,{{\left\vert {\xi} \right\vert}}^2\,\delta_{nn'}$$ is the operator at the right-hand side of Eq. (with $\xi = {\epsilon}k$).
\[P2\] The following properties hold:
1. for any given $\xi \in {\mathbb{R}}^d$, $A_1(\xi)$ is $A_0$-bounded with $A_0$-bound less than 1, which implies that $A(\xi) = A_0 + A_1(\xi) + A_2(\xi)$ is self-adjoint on the (fixed) domain of $A_0$, that is $$\label{D0}
{\mathcal{D}}(A_0) = \Big\{ g \in \ell^2\ \Big|\ \sum_n {{\left\vert {E_n g_n} \right\vert}}^2 < \infty \Big\};$$
2. $\{A(\xi) \mid \xi \in {\mathbb{R}}^d \}$ is a holomorphic family of type (A) of self-adjoint operators [@Kato80];
3. for any given $\xi \in {\mathbb{R}}^d$, $A(\xi)$ has compact resolvent, which implies that $A(\xi)$ has a sequence of eigenvalues $\lambda_1(\xi) \leq \lambda_2(\xi) \leq \lambda_3(\xi) \leq \cdots$, with $\lambda_n(\xi) \to \infty$, and a corresponding sequence $\varphi^{(1)}(\xi)$, $\varphi^{(2)}(\xi)$, $\varphi^{(3)}(\xi) \ldots$ of orthonormal eigenvectors .
\(a) We first recall (see \eqref{periodic}) that $(v_n,E_n)$ is an eigenpair of $H_{\mathcal{L}}^1= -\frac{1}{2}\Delta + W_{\mathcal{L}}$ on the domain ${\mathrm{H}}_{\mathrm{per}}^2({\mathcal{C}})$ (the subscript “per” denoting periodic boundary conditions). The operator $A_0$ is the representation in the basis $(v_n)$ of the operator $H^1_{\mathcal{L}}$, while $A_1(\xi)$ is the representation in the same basis of $-i\xi\cdot\nabla$ with domain ${\mathrm{H}}^1({\mathcal{C}})$: $${\mathcal{D}}\left(A_0\right) \equiv {\mathrm{H}}_{\mathrm{per}}^2({\mathcal{C}}) \subset {\mathrm{H}}^1({\mathcal{C}})
\equiv {\mathcal{D}}\left(A_1(\xi)\right).$$ Then, for any given sequence $(g_n)$, denoting $g(x) = \sum_n g_n v_n(x)$, we have $$\frac{1}{2}\int_{\mathcal{C}}{{\left\vert {\nabla g(x)} \right\vert}}^2\,dx + \int_{\mathcal{C}}W_{\mathcal{L}}(x) {{\left\vert {g(x)} \right\vert}}^2\,dx
= {{\left\langle H^1_{\mathcal{L}}g, g \right\rangle}}_{L^2({\mathcal{C}})} = \sum_n E_n{{\left\vert {g_n} \right\vert}}^2.$$ Since $W_{\mathcal{L}}$ is bounded and $W_{\mathcal{L}}\geq 1$, then for $g \in {\mathcal{D}}(A_0)$ we obtain $$\label{A1extim}
{{\| {A_1(\xi) g} \|}}_{\ell^2}^2 \leq {{\left\vert {\xi} \right\vert}}^2{{\| {\nabla g} \|}}_{L^2({\mathcal{C}})}^2
\leq 2{{\left\vert {\xi} \right\vert}}^2 \sum_n E_n {{\left\vert {g_n} \right\vert}}^2,$$ where we used the notation $g$ for both $g(x) = \sum_n g_n v_n(x)$ and for the sequence $g = (g_n) \in \ell^2$. Since $E_n \to \infty$, then, for any given $0 < b < 1$, a positive integer $n(\xi)$ exists such that $2{{\left\vert {\xi} \right\vert}}^2 E_n < b E_n^2$ for $n \geq n(\xi)$ and we can write $$2{{\left\vert {\xi} \right\vert}}^2 \sum_n E_n {{\left\vert {g_n} \right\vert}}^2 \leq
2{{\left\vert {\xi} \right\vert}}^2 E_{n(\xi)} \sum_{n=1}^{n(\xi)}{{\left\vert {g_n} \right\vert}}^2
+ \sum_{n=n(\xi)}^\infty b{{\left\vert {E_ng_n} \right\vert}}^2.$$ Thus, ${{\| {A_1(\xi) g} \|}}_{\ell^2}^2 \leq 2{{\left\vert {\xi} \right\vert}}^2 E_{n(\xi)} {{\| {g} \|}}_{\ell^2}^2 + b\, {{\| {A_0g} \|}}_{\ell^2}^2$, with $b<1$, which proves point (a). The proof of the remaining points is standard (see Refs. [@BerezinShubin91; @Kato80; @ReedSimonIV78]).
Recalling Definition \[BlochDef\] and the expression \eqref{Adef} of $A(\xi)$, we see that $A(\xi)$ is nothing but the fiber Hamiltonian $H_{\mathcal{L}}(\xi)$ written in the Bloch basis $v_n = u_{n,0}$. Then, the diagonalization of $A(\xi)$ corresponds to the diagonalization of $H_{\mathcal{L}}(\xi)$ and, therefore, the eigenvalues $\lambda_n(\xi)$ coincide with the energy bands $E_n(\xi)$ inside the Brillouin zone. Moreover, $\varphi^{(n)}(\xi)$ is clearly the component expression of $u_{n,\xi}$ in the basis $\{u_{m,0}\}$, i.e. $\varphi^{(n)}_m(\xi) = {{\left\langle u_{n,\xi}, u_{m,0} \right\rangle}}_{L^2({\mathcal{C}})}$.
The eigenvalues $\lambda_n(\xi)$ have been numbered in increasing order for each $\xi$; this means that, when an eigenvalue crossing occurs, the smoothness of $\lambda_n(\xi)$ (and of $\varphi^{(n)}(\xi)$) is lost. However, since we are assuming that the eigenvalues $\lambda_n(0) = E_n$ are simple, $\lambda_n(\xi)$ and $\varphi^{(n)}(\xi)$ are analytic in a neighborhood of the origin. Of course, such a neighborhood depends on $n$. The next lemma allows us to estimate the growth of the eigenvalues and, consequently, the size of the analyticity domain.
\[P4\] For any given $\xi \in {\mathbb{R}}^d$, an integer $n_0(\xi) \geq 0$ exists such that $$\label{LambdaGrowth}
{{\left\vert {\lambda_n(\xi) - E_n} \right\vert}} \leq {{\left\vert {\xi} \right\vert}} \sqrt{2 E_n} + \frac{1}{2}{{\left\vert {\xi} \right\vert}}^2,
\quad
\text{for all $ n \geq n_0(\xi)$.}$$
The behavior of the eigenvalues $\lambda_n(\xi)$ for large $n$ will be investigated by means of the [*maxmin*]{} principle, which holds for increasingly-ordered eigenvalues [@ReedSimonI72]. Since the operators $A(\xi)$ have compact resolvent, the [*maxmin*]{} principle reads as follows: $$ \lambda_n(\xi) = \max_{S \in M_{n-1}} \; \min_{g \in S^\perp\cap{\mathcal{D}}(A_0),\ {{\| {g} \|}}=1 }
\,{{\left\langle A(\xi)g, g \right\rangle}}_{\ell^2},$$ where $M_n$ denotes the set of all subspaces of dimension $n$. In particular, $$ \lambda_n(0) = E_n = \max_{S \in M_{n-1}} \; \min_{g \in S^\perp\cap{\mathcal{D}}(A_0),\ {{\| {g} \|}}=1 }
\,{{\left\langle A_0g, g \right\rangle}}_{\ell^2}.$$ Let $g \in {\mathcal{D}}(A_0)$ with ${{\| {g} \|}}_{\ell^2} = 1$. From \eqref{A1extim} we have $${{\| {A_1(\xi) g} \|}}_{\ell^2}^2 \leq 2{{\left\vert {\xi} \right\vert}}^2 \sum_n E_n {{\left\vert {g_n} \right\vert}}^2
= 2{{\left\vert {\xi} \right\vert}}^2 {{\left\langle A_0g, g \right\rangle}}_{\ell^2}$$ and, therefore, ${{\left\vert {{{\left\langle A_1(\xi)g, g \right\rangle}}_{\ell^2}} \right\vert}} \leq {{\| {A_1(\xi)g} \|}}_{\ell^2}
\leq \sqrt{2}{{\left\vert {\xi} \right\vert}} {{\left\langle A_0g, g \right\rangle}}_{\ell^2}^{1/2}$, which, using $A(\xi) = A_0 + A_1(\xi) + A_2(\xi)$, yields $$\label{formext}
{{\left\vert { {{\left\langle A(\xi)g, g \right\rangle}}_{\ell^2} - {{\left\langle A_0g, g \right\rangle}}_{\ell^2} } \right\vert}}
\leq \sqrt{2}{{\left\vert {\xi} \right\vert}}\,{{\left\langle A_0g, g \right\rangle}}_{\ell^2}^{1/2} + \frac{1}{2}{{\left\vert {\xi} \right\vert}}^2.$$ From \eqref{formext} we get, in particular, $${{\left\langle A(\xi)g, g \right\rangle}}_{\ell^2} \leq {{\left\langle A_0g, g \right\rangle}}_{\ell^2}
+ \sqrt{2}{{\left\vert {\xi} \right\vert}}\,{{\left\langle A_0g, g \right\rangle}}_{\ell^2}^{1/2} + \frac{1}{2}{{\left\vert {\xi} \right\vert}}^2,$$ which allows us to estimate $\lambda_n(\xi)$ from above. In fact, since $x + \sqrt{2}{{\left\vert {\xi} \right\vert}} x^{1/2} + \frac{1}{2}{{\left\vert {\xi} \right\vert}}^2$ is an increasing function of $x$, we can write $$\max\min {{\left\langle A(\xi)g, g \right\rangle}}_{\ell^2} \leq \max\min
\left\{{{\left\langle A_0g, g \right\rangle}}_{\ell^2} + \sqrt{2}{{\left\vert {\xi} \right\vert}}{{\left\langle A_0g, g \right\rangle}}_{\ell^2}^{1/2} + \frac{1}{2}{{\left\vert {\xi} \right\vert}}^2\right\}$$ $$\leq \max\min {{\left\langle A_0g, g \right\rangle}}_{\ell^2} + \sqrt{2}{{\left\vert {\xi} \right\vert}} \max\min {{\left\langle A_0g, g \right\rangle}}_{\ell^2}^{1/2}
+ \frac{1}{2}{{\left\vert {\xi} \right\vert}}^2,$$ that is $$\label{ext1}
\lambda_n(\xi) \leq E_n + \sqrt{2}{{\left\vert {\xi} \right\vert}}\,E_n^{1/2} + \frac{{{\left\vert {\xi} \right\vert}}^2}{2},$$ which holds for all $n \in {\mathbb{N}}$. We now estimate $\lambda_n(\xi)$ from below, at least for large $n$. From \eqref{formext} we get $${{\left\langle A(\xi)g, g \right\rangle}}_{\ell^2} \geq {{\left\langle A_0g, g \right\rangle}}_{\ell^2} - \sqrt{2}{{\left\vert {\xi} \right\vert}}{{\left\langle A_0g, g \right\rangle}}_{\ell^2}^{1/2}
- \frac{1}{2}{{\left\vert {\xi} \right\vert}}^2$$ and we remark that $x - \sqrt{2}{{\left\vert {\xi} \right\vert}} x^{1/2} - {{\left\vert {\xi} \right\vert}}^2/2$ is an increasing function of $x$ for $x \geq {{\left\vert {\xi} \right\vert}}^2/2$. Thus, let $n_0(\xi)$ be such that $E_{n_0(\xi)} \geq {{\left\vert {\xi} \right\vert}}^2/2$ and fix $n \geq n_0(\xi)$. Let us define $$S_{n-1}^0 = \mathrm{span} \{ e^{(1)}, e^{(2)},\ldots e^{(n-1)} \},$$ where $\{ e^{(n)} \mid n \in {\mathbb{N}}\}$ is the canonical basis of $\ell^2$ (eigenbasis of $A_0$). We therefore have $$\min_{g \in S_{n-1}^{0\perp} \cap {\mathcal{D}}(A_0),\ {{\| {g} \|}}=1 } {{\left\langle A_0g,g \right\rangle}}_{\ell^2} = E_n,$$ because $S_{n-1}^{0\perp} = \mathrm{span} \{ e^{(n)}, e^{(n+1)},\ldots \}$. Thus, for every $g \in S_{n-1}^{0\perp} \cap {\mathcal{D}}(A_0)$ with ${{\| {g} \|}}_{\ell^2} = 1$, we can write $${{\left\langle A(\xi)g, g \right\rangle}}_{\ell^2} \geq {{\left\langle A_0g,g \right\rangle}}_{\ell^2} - \sqrt{2}{{\left\vert {\xi} \right\vert}}\,{{\left\langle A_0g,g \right\rangle}}_{\ell^2}^{1/2}
- \frac{1}{2}{{\left\vert {\xi} \right\vert}}^2
\geq E_n - \sqrt{2}{{\left\vert {\xi} \right\vert}}\,E_n^{1/2} - \frac{1}{2}{{\left\vert {\xi} \right\vert}}^2,$$ (because $E_n \geq E_{n_0(\xi)} \geq {{\left\vert {\xi} \right\vert}}^2/2$), and so $$\min_{g \in S_{n-1}^{0\perp} \cap {\mathcal{D}}(A_0),\ {{\| {g} \|}}=1 } {{\left\langle A(\xi)g, g \right\rangle}}_{\ell^2}
\geq E_n - \sqrt{2}{{\left\vert {\xi} \right\vert}}\,E_n^{1/2} - \frac{1}{2}{{\left\vert {\xi} \right\vert}}^2.$$ Since $S_{n-1}^0 \in M_{n-1}$, we conclude that $$\label{ext2}
\lambda_n(\xi) \geq E_n - \sqrt{2}{{\left\vert {\xi} \right\vert}}\,E_n^{1/2} - \frac{1}{2}{{\left\vert {\xi} \right\vert}}^2,
\qquad n \geq n_0(\xi),$$ which, together with \eqref{ext1}, yields the desired two-sided estimate.
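For concreteness, the two-sided estimate just obtained is easy to test numerically on a one-dimensional model. The sketch below is purely illustrative (the cosine potential, the truncation size and the values of $\xi$ are arbitrary, hypothetical choices); it assumes a non-negative periodic potential, so that the inequality ${{\| {A_1(\xi)g} \|}}_{\ell^2}^2 \leq 2{{\left\vert {\xi} \right\vert}}^2{{\left\langle A_0g, g \right\rangle}}_{\ell^2}$ used above holds, and it compares the eigenvalues $\lambda_n(\xi)$ of a plane-wave discretization of the fiber operator with the bounds $E_n \pm \sqrt{2}{{\left\vert {\xi} \right\vert}}E_n^{1/2} \pm {{\left\vert {\xi} \right\vert}}^2/2$.

```python
import numpy as np

# Illustrative 1D check (hypothetical data): fiber operator
# (1/2)(-i d/dy + xi)^2 + W(y) with W(y) = W0*(1 - cos y) >= 0, discretized in
# the plane-wave basis e^{i m y}, m = -M..M; its eigenvalues at xi = 0 are the
# E_n of the estimate, and W >= 0 gives ||A_1(xi) g||^2 <= 2 xi^2 <A_0 g, g>.
M, W0 = 16, 3.0
m = np.arange(-M, M + 1)

def A(xi):
    mat = np.diag(0.5 * (m + xi) ** 2 + W0)          # kinetic part + mean of W
    off = -0.5 * W0 * np.ones(2 * M)                 # Fourier modes of -W0*cos(y)
    return mat + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(A(0.0))                       # E_n = lambda_n(0)
for xi in (0.1, 0.3, 0.5):
    lam = np.linalg.eigvalsh(A(xi))                  # lambda_n(xi)
    upper = E + np.sqrt(2) * xi * np.sqrt(E) + 0.5 * xi ** 2
    lower = E - np.sqrt(2) * xi * np.sqrt(E) - 0.5 * xi ** 2
    mask = E >= 0.5 * xi ** 2                        # lower bound needs E_n >= xi^2/2
    print(xi,
          bool(np.all(lam <= upper + 1e-10)),
          bool(np.all(lam[mask] >= lower[mask] - 1e-10)))   # expected: True True
```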
From \eqref{ext1} and \eqref{ext2} we see that, for fixed $\xi$, the sequences $E_n$ and $\lambda_n(\xi)$ are asymptotically equivalent. Moreover, it is not difficult to prove the following.
\[AnalyticRadius\] A constant $C_0$, independent of $n$, exists such that $\lambda_{n}(\xi) \geq \lambda_{n-1}(\xi)$ for all ${{\left\vert {\xi} \right\vert}} \leq C_0(E_{n+1}-E_n)/\sqrt{E_n}$. Then, the first $N$ bands do not cross each other in a ball of radius $$R_N = C_0 \max\{E_{n+1}-E_n \mid n \leq N+1 \} / \sqrt{E_{N+1}}\,.$$
Let us now consider the family of diagonalization operators $\{ T(\xi) : \ell^2 \to \ell^2 \mid \xi \in {\mathbb{R}}^d\}$, i.e. the unitary operators that map the basis $\{e^{(n)} \mid n \in {\mathbb{N}}\}$ one-to-one onto the eigenbasis $\{\varphi^{(n)}(\xi) \mid n \in {\mathbb{N}}\}$, so that $$\label{LambdaDef}
\Lambda(\xi) = T^*(\xi) A(\xi) T(\xi) =
\begin{pmatrix}
\lambda_1(\xi) & 0 & 0 & \cdots
\\
0 & \lambda_2(\xi) & 0 & \cdots
\\
0 & 0 & \lambda_3(\xi) & \cdots
\\
\vdots & \vdots & \vdots &\ddots
\end{pmatrix}$$ For any given ${\epsilon}\geq 0$ we define a unitary operator $T_{\epsilon}$ on the space ${\mathcal{L}}^2$ (see Definition \[Spaces\]) by $$\label{Tdef}
\big( T_{\epsilon}g \big)(k) = T({\epsilon}k) g(k).$$
\[P3\] For every ${\epsilon}\geq 0$, the operator $T_{\epsilon}: {\mathcal{L}}^2 \to {\mathcal{L}}^2$ is unitary, with $T_0 = I$. Moreover, if $g \in {{\mathcal{L}}^2_\mu}$ for some $\mu > 0$, then $\lim_{{\epsilon}\to 0} {{\| {T_{\epsilon}g - g} \|}}_{{\mathcal{L}}^2} = 0$.
The first part of the statement is clear, because $$\int_{{\mathbb{R}}^d} {{\| {T({\epsilon}k) g(k)} \|}}^2_{\ell^2} \,dk
= \int_{{\mathbb{R}}^d} {{\| {g(k)} \|}}^2_{\ell^2} \,dk = {{\| {g} \|}}_{{\mathcal{L}}^2}^2$$ and $\lambda_n(0) = E_n$. Now, let $\Pi_N$ be the projection operator in $\ell^2$ on the $N$-dimensional sub-space spanned by $e^{(1)}, e^{(2)}, \ldots, e^{(N)}$ (in other words, the cut-off operator after the $N$-th component). Since the first $N$ bands do not cross in a ball of radius $R_N$ (see Corollary \[AnalyticRadius\]), then $\xi \mapsto T(\xi)\Pi_N$ is unitary analytic from $\operatorname{span}\left\{e^{(1)}, e^{(2)}, \ldots, e^{(N)}\right\}$ to $\operatorname{span}\left\{\varphi^{(1)}(\xi), \varphi^{(2)}(\xi), \ldots, \varphi^{(N)}(\xi)\right\}$, in ${{\left\vert {\xi} \right\vert}}\leq R_N$. Let $g \in {{\mathcal{L}}^2_\mu}$ and put $$g^{(N)} = \Pi_Ng, \qquad g^{(N)}_c = g - g^{(N)},$$ so that ${{\| {(T_{\epsilon}-I) g} \|}}_{{\mathcal{L}}^2} \leq {{\| {(T_{\epsilon}-I) g^{(N)}} \|}}_{{\mathcal{L}}^2}+{{\| {(T_{\epsilon}-I) g^{(N)}_c} \|}}_{{\mathcal{L}}^2}$. Let ${\epsilon}>0$ and $r>0$ be such that ${\epsilon}r \leq R_N$. Then, using the analyticity of $T({\epsilon}k)\Pi_N$ in ${{\left\vert {{\epsilon}k} \right\vert}}\leq {\epsilon}r \leq R_N$, we can write $${{\| {(T_{\epsilon}-I) g^{(N)}} \|}}_{{\mathcal{L}}^2}^2
= \int_{{\mathbb{R}}^d} {{\| {\left(T({\epsilon}k) - I\right)g^{(N)}(k)} \|}}^2_{\ell^2}\,dk$$ $$= \int_{{{\left\vert {k} \right\vert}} \leq r} {{\| {\left(T({\epsilon}k) - I\right) g^{(N)}(k) } \|}}^2_{\ell^2}\,dk
+ \int_{{{\left\vert {k} \right\vert}} > r}{{\| {\left(T({\epsilon}k) - I\right) g^{(N)}(k)} \|}}^2_{\ell^2}\,dk$$ $$\leq L_N^2 \int_{{{\left\vert {k} \right\vert}} \leq r} {{\left\vert {{\epsilon}k} \right\vert}}^2 {{\| {g^{(N)}(k)} \|}}^2_{\ell^2}\,dk
+ \frac{4}{r^{2\mu}} \int_{{{\left\vert {k} \right\vert}} > r} {{\left\vert {{\epsilon}k} \right\vert}}^{2\mu} {{\| {g^{(N)}(k)} \|}}^2_{\ell^2}\,dk,$$ for some Lipschitz constant $L_N > 0$. Now, it can be easily verified that the inequality $$\label{AuxIneq}
{{\left\vert {k} \right\vert}}^n \leq (1+{{\left\vert {k} \right\vert}}^\mu)\,r^{\max\{ n-\mu,\,0\}}$$ holds for any $r>0$, $n \geq 0$, $\mu \geq 0$, and ${{\left\vert {k} \right\vert}} \leq r$. From this (with $n=1$) we get $$\int_{{{\left\vert {k} \right\vert}} \leq r} {{\left\vert {{\epsilon}k} \right\vert}}^2 {{\| {g^{(N)}(k)} \|}}^2_{\ell^2}\,dk
\leq {\epsilon}^2 r^{2\max\{ 1-\mu,\,0\}} \int_{{{\left\vert {k} \right\vert}} \leq r}
(1+{{\left\vert {{\epsilon}k} \right\vert}}^\mu)^2 {{\| {g^{(N)}(k)} \|}}^2_{\ell^2}\,dk$$ and, therefore, $${{\| {(T_{\epsilon}- I) g ^{(N)}} \|}}_{{\mathcal{L}}^2}^2 \leq
\big(L_N^2\,{\epsilon}^2\, r^{2\max\{ 1-\mu,\,0\}}+ 4r^{-2\mu} \big)
{{\| {g} \|}}_{{{\mathcal{L}}^2_\mu}}^2.$$ Choosing $r = R_N/{\epsilon}$ we obtain $$\label{TepsIneq}
{{\| {(T_{\epsilon}- I) g ^{(N)}} \|}}_{{\mathcal{L}}^2} \leq {\epsilon}^{\min\{\mu,\,1\}}\,C(\mu,N)\,{{\| {g} \|}}_{{{\mathcal{L}}^2_\mu}},$$ where $$C(\mu,N) = \left(L_N^2 R_N^{2\max\{ 1-\mu,\,0\}} + 4R_N^{-2\mu}\right)^{1/2}.$$ Moreover, $${{\| {(T_{\epsilon}-I) g^{(N)}_c} \|}}_{{\mathcal{L}}^2} \leq {{\| {T_{\epsilon}g^{(N)}_c} \|}}_{{\mathcal{L}}^2} + {{\| {g^{(N)}_c} \|}}_{{\mathcal{L}}^2}
\leq 2{{\| {g^{(N)}_c} \|}}_{{\mathcal{L}}^2}.$$ Since ${{\| {g^{(N)}_c} \|}}_{{\mathcal{L}}^2} \to 0$ as $N \to \infty$, we can fix $N$ and, then, ${\epsilon}$ in inequality \eqref{TepsIneq} so that ${{\| {(T_{\epsilon}-I)g} \|}}_{{\mathcal{L}}^2}$ is arbitrarily small, which proves the limit.
From inequality \eqref{TepsIneq} we see that, when a finite number $N$ of bands is considered, the distance between $T_{\epsilon}$ and $I$ is of order ${\epsilon}^{\min\{\mu,\,1\}}$ for ${g^\mathit{in}}\in {{\mathcal{L}}^2_\mu}$, with $\mu >0$.
Let us now consider the second-order approximation of $\Lambda(\xi)$, $$\label{Lambda2Def}
\Lambda^{(2)}(\xi) =
\begin{pmatrix}
\lambda_1^{(2)}(\xi) & 0 & 0 & \cdots
\\
0 & \lambda_2^{(2)}(\xi) & 0 & \cdots
\\
0 & 0 & \lambda_3^{(2)}(\xi) & \cdots
\\
\vdots & \vdots & \vdots &\ddots
\end{pmatrix}$$ where $\lambda^{(2)}_n(\xi)$ is the second-order Taylor approximation of $\lambda_n(\xi)$: $$\lambda_n(\xi) = \lambda^{(2)}_n(\xi) + {\mathcal{O}}\big({{\left\vert {\xi} \right\vert}}^3\big).$$ The approximated eigenvalues $\lambda^{(2)}_n(\xi)$ can be computed by means of standard non-degenerate perturbation techniques, which yield $$\label{lambdaexp}
\lambda^{(2)}_n(\xi) = E_n + \frac{1}{2}\, \xi \cdot {\mathbb{M}}_n^{-1} \xi,$$ where $$\label{effm}
{\mathbb{M}}_n^{-1} = \nabla\otimes\nabla \,\lambda_n(\xi)_{\,|\xi=0}
= I - 2 \sum_{n'\not= n} \frac{P_{nn'} \otimes P_{n'n}}{E_n - E_{n'}}$$ is the $n$-th band effective mass tensor [@Wenckebach99] (we recall that $P_{nn'}=0$ if $n=n'$). Note that the first-order term in \eqref{lambdaexp} is zero.
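The formula \eqref{effm} can be cross-checked numerically in one dimension by comparing the perturbative sum with a finite-difference Hessian of $\lambda_n$ at $\xi=0$. The sketch below uses the same plane-wave discretization idea as above and is purely illustrative (potential, truncation and band index are arbitrary, hypothetical choices); the momentum matrix elements are taken here as $\langle v_n, -i\,\partial_y v_{n'}\rangle$, which may differ from the paper's convention for $P_{nn'}$ by a constant factor, the resulting value of ${\mathbb{M}}_n^{-1}$ being of course unaffected.

```python
import numpy as np

# Illustrative 1D cross-check (hypothetical data): compare the perturbative
# expression for 1/m*_n with a finite-difference Hessian of lambda_n at xi = 0.
# Same plane-wave discretization of (1/2)(-i d/dy + xi)^2 + W0*(1 - cos y).
M, W0, n = 16, 3.0, 0                       # truncation, potential strength, band
m = np.arange(-M, M + 1)

def A(xi):
    mat = np.diag(0.5 * (m + xi) ** 2 + W0)
    off = -0.5 * W0 * np.ones(2 * M)
    return mat + np.diag(off, 1) + np.diag(off, -1)

E, V = np.linalg.eigh(A(0.0))               # E_n and eigenvectors at xi = 0
P = V.T @ np.diag(m.astype(float)) @ V      # <v_n, -i d/dy v_n'> in this basis

# second-order perturbation theory: lambda_n''(0) = 1 + 2 sum_{k != n} |P_nk|^2 / (E_n - E_k)
pert = 1.0 + 2.0 * sum(P[n, k] ** 2 / (E[n] - E[k]) for k in range(len(E)) if k != n)

h = 1e-3                                    # centered finite difference for the Hessian
lam = lambda xi: np.linalg.eigvalsh(A(xi))[n]
fd = (lam(h) - 2.0 * lam(0.0) + lam(-h)) / h ** 2

print(pert, fd)                             # the two values agree closely
```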
The operators $\Lambda(\xi)$ and $\Lambda^{(2)}(\xi)$, which are self-adjoint on their maximal domains, generate, respectively, the exact dynamics and the effective mass dynamics (in Fourier variables and in the absence of external fields).
\[TheoEM\] Let ${g^\mathit{in}}\in {{\mathcal{L}}^2_\mu}$, for some $\mu>0$, and assume ${g^\mathit{in}}= \Pi_N{g^\mathit{in}}$ (i.e. the initial datum is confined in the first $N$ bands). Then, a constant $C(\mu,N,t) \geq 0$, independent of ${\epsilon}$, exists such that $$\label{EMExtimN}
{\displaystyle}{{\| {(\mathrm{e}^{-\frac{it}{{\epsilon}^2}\,\Lambda({\epsilon}k)} -\mathrm{e}^{-\frac{it}{{\epsilon}^2}\,\Lambda^{(2)}({\epsilon}k)}) {g^\mathit{in}}} \|}}_{{\mathcal{L}}^2}
\leq {\epsilon}^{\min\{\mu/3,\, 1\}}\, C(\mu,N,t) \,{{\| {{g^\mathit{in}}} \|}}_{{{\mathcal{L}}^2_\mu}}\,.$$
Note that, since $\Lambda({\epsilon}k)$ and $\Lambda^{(2)}({\epsilon}k)$ are diagonal, then both $\mathrm{e}^{-\frac{it}{{\epsilon}^2}\,\Lambda({\epsilon}k)}{g^\mathit{in}}$ and $\mathrm{e}^{-\frac{it}{{\epsilon}^2}\,\Lambda^{(2)}({\epsilon}k)}{g^\mathit{in}}$ remain confined in the first $N$ bands at all times. Denoting $g^{\epsilon}(t,k) = \mathrm{e}^{-\frac{it}{{\epsilon}^2}\,\Lambda^{(2)}({\epsilon}k)}{g^\mathit{in}}$, the function $h^{\epsilon}(t,k) = (\mathrm{e}^{-\frac{it}{{\epsilon}^2}\,\Lambda({\epsilon}k)} -\mathrm{e}^{-\frac{it}{{\epsilon}^2}\,\Lambda^{(2)}({\epsilon}k)}) {g^\mathit{in}}$ satisfies the Duhamel formula $$h^{\epsilon}( t,k) = \int_0^t {\mathrm{e}}^{- \frac{i(t-s)}{{\epsilon}^2}\,\Lambda({\epsilon}k)}
\,\frac{\Lambda({\epsilon}k) - \Lambda^{(2)}({\epsilon}k)}{{\epsilon}^2}\, g^{\epsilon}(s,k) \,ds,$$ so that $${{\| { h^{\epsilon}(t,k)} \|}}_{\ell^2} \leq \int_0^t \Big\|
\frac{\Lambda({\epsilon}k) - \Lambda^{(2)}({\epsilon}k)}{{\epsilon}^2}\, g^{\epsilon}(s,k)\Big\|_{\ell^2} \,ds$$ Since $\lambda_1(\xi), \ldots, \lambda_N(\xi)$ are analytic for ${{\left\vert {\xi} \right\vert}} \leq R_N$ (see Corollary \[AnalyticRadius\]), a Lipschitz constant $L'_N$ exists such that $$\Big\|\frac{\Lambda({\epsilon}k) - \Lambda^{(2)}({\epsilon}k)}{{\epsilon}^2}\, g^{\epsilon}(s,k)\Big\|_{\ell^2}
\leq {\epsilon}L'_N {{\left\vert {k} \right\vert}}^3 {{\| {g^{\epsilon}(s,k)} \|}}_{\ell^2}
= {\epsilon}L'_N {{\left\vert {k} \right\vert}}^3 {{\| {{g^\mathit{in}}(k)} \|}}_{\ell^2}$$ for all $k$ with ${{\left\vert {{\epsilon}k} \right\vert}} \leq R_N$ (where we also used the fact that the $\ell^2$ norm of $g^{\epsilon}$ is conserved during the unitary evolution). Now we can proceed as in the proof of Theorem \[P3\]: if $r>0$ is such that ${\epsilon}r \leq R_N$, then we can write $$\int_{{{\left\vert {k} \right\vert}} \leq r}{{\| { h^{\epsilon}(t,k) } \|}}^2_{\ell^2} \, dk
\leq (L'_N t {\epsilon})^2 \int_{{{\left\vert {k} \right\vert}} \leq r} {{\left\vert {k} \right\vert}}^6 {{\| {{g^\mathit{in}}(k)} \|}}^2_{\ell^2}\,dk$$ and, using inequality with $n=3$, $$\int_{{{\left\vert {k} \right\vert}} \leq r}{{\| { h^{\epsilon}(t,k) } \|}}^2_{\ell^2} \, dk
\leq \left(L'_N t\,{\epsilon}\, r^{\max\{ 3-\mu,\, 0\}}\right)^2
{{\| {{g^\mathit{in}}} \|}}_{{{\mathcal{L}}^2_\mu}}^2.$$ Moreover, $$\int_{{{\left\vert {k} \right\vert}} > r}{{\| { h^{\epsilon}(t,k)} \|}}^2_{\ell^2} \, dk
\leq \frac{1}{r^{2\mu}} \int_{{{\left\vert {k} \right\vert}} > r} {{\left\vert {k} \right\vert}}^{2\mu}
{{\| {h^{\epsilon}(t,k)} \|}}^2_{\ell^2}\,dk$$ $$\leq\frac{4}{r^{2\mu}} \int_{{{\left\vert {k} \right\vert}} > r} {{\left\vert {k} \right\vert}}^{2\mu}
{{\| {{g^\mathit{in}}(k)} \|}}^2_{\ell^2} \,dk
\leq \frac{4}{r^{2\mu}} {{\| {{g^\mathit{in}}} \|}}_{{{\mathcal{L}}^2_\mu}}^2,$$ where we used the fact that ${{\| {g^{\epsilon}(t,k)} \|}}_{\ell^2} = {{\| {{g^\mathit{in}}(k)} \|}}_{\ell^2}$ for all $t$. Hence, $${{\| {h^{\epsilon}(t)} \|}}_{{\mathcal{L}}^2}^2 \leq
\left[ \left(L'_N t\,{\epsilon}\, r^{\max\{ 3-\mu,\, 0\}}\right)^2 +4r^{-2\mu} \right]
{{\| {{g^\mathit{in}}} \|}}_{{{\mathcal{L}}^2_\mu}}^2$$ and, choosing $r = R_N/{\epsilon}^{1/3}$, we obtain ${{\| {h^{\epsilon}(t)} \|}}_{{\mathcal{L}}^2} \leq C(\mu,N,t)\, {\epsilon}^{\min\{\mu/3,\, 1\}}
\,{{\| {{g^\mathit{in}}} \|}}_{{{\mathcal{L}}^2_\mu}}$, that is inequality \eqref{EMExtimN}, with $$C(\mu,N,t) = \left[ \left(L'_N t\, R_N^{\max\{ 3-\mu,\, 0\}}\right)^2 +4R_N^{-2\mu/3} \right]^{1/2}.$$
Let ${g^\mathit{in}}\in {{\mathcal{L}}^2_\mu}$, with $\mu>0$ (but ${g^\mathit{in}}$ not necessarily confined to the first $N$ bands). Then $\lim_{{\epsilon}\to 0} {{\| {(\mathrm{e}^{-\frac{it}{{\epsilon}^2}\,\Lambda({\epsilon}k)} -\mathrm{e}^{-\frac{it}{{\epsilon}^2}\,\Lambda^{(2)}({\epsilon}k)}) {g^\mathit{in}}} \|}}_{{\mathcal{L}}^2} = 0$, uniformly in bounded time intervals.
Like in the proof of the above theorem, we define $$h^{\epsilon}( t,k) = \int_0^t {\mathrm{e}}^{- \frac{i(t-s)}{{\epsilon}^2}\,\Lambda({\epsilon}k)}
\,\frac{\Lambda({\epsilon}k) - \Lambda^{(2)}({\epsilon}k)}{{\epsilon}^2}\, g^{\epsilon}( s,k) \,ds.$$ For any given $N$ we can write $${{\| {h^{\epsilon}(t)} \|}}_{{\mathcal{L}}^2} \leq {{\| {\Pi_N h^{\epsilon}(t)} \|}}_{{\mathcal{L}}^2} + {{\| {\Pi_N^c h^{\epsilon}(t)} \|}}_{{\mathcal{L}}^2},$$ where $\Pi_N^c = I - \Pi_N$. Recalling that the evolutions are diagonal, the first term on the right-hand side corresponds to the initial datum $\Pi_N{g^\mathit{in}}$, for which \eqref{EMExtimN} holds. Using the fact that $\Pi_N$ commutes with both ${\mathrm{e}}^{-\frac{it}{{\epsilon}^2}\,\Lambda({\epsilon}k)}$ and ${\mathrm{e}}^{-\frac{it}{{\epsilon}^2}\,\Lambda^{(2)}({\epsilon}k)}$, for the second term we have $${{\| {\Pi_N^c h^{\epsilon}(t)} \|}}_{{\mathcal{L}}^2}
\leq 2{{\| {\Pi_N^c {g^\mathit{in}}} \|}}_{{\mathcal{L}}^2}.$$ Since $\Pi_N^c {g^\mathit{in}}\to 0$ in ${\mathcal{L}}^2$ as $N\to\infty$, this inequality, together with \eqref{EMExtimN}, shows that $\lim_{{\epsilon}\to 0}{{\| {h^{\epsilon}(t)} \|}}_{{\mathcal{L}}^2} = 0$, uniformly in bounded $t$-intervals.
Comparison of the models {#sec5}
========================
We are now in a position to present the various models encountered so far and to compare their respective dynamics.
We start with the exact dynamics. Let the wave function $\psi^{\epsilon}(t,x)$ be the solution of the original initial value problem. If we denote by $f_n^{in,{\epsilon}}(x)$ the ${\epsilon}$-scaled envelope functions of the initial wave function $\psi^{in,{\epsilon}}$, relative to the basis $v_n$, and by $g_n^{in,{\epsilon}}(k)$ their Fourier transforms, then the Fourier transformed envelope functions $g^{\epsilon}$ of $\psi^{\epsilon}(t,x)$ are the solutions of $$\begin{aligned}
\label{EFEexact}
&i\partial_t \,g = A_{\mathrm{kp}}^{\epsilon}g + {\mathcal{U}}^{\epsilon}g, & g^{\epsilon}(t=0) = g^{in,{\epsilon}}\quad &\text{(exact dynamics)}\end{aligned}$$ where $$\label{HKPdef}
\left( A_{\mathrm{kp}}^{\epsilon}g\right)_n(k) = \frac{1}{{\epsilon}^2} \left(A({\epsilon}k)\,g(k)\right)_n
= \left( \frac{E_n}{{\epsilon}^2} + \frac{{{\left\vert {k} \right\vert}}^2}{2} \right)g_n(k)
- \frac{i}{{\epsilon}} \sum_{n'} k \cdot P_{nn'} g_{n'}(k).$$ The k$\cdot$p approximation consists in passing to the limit ${\epsilon}\to 0$ in the external-potential term, i.e. in replacing ${\mathcal{U}}^{\epsilon}$ by ${\mathcal{U}}^0$. Therefore, we define $ g^{\epsilon}_{\mathrm{kp}}(t)$ as the solution of $$\begin{aligned}
\label{EFEkp}
&i\partial_t \,g = A_{\mathrm{kp}}^{\epsilon}g + {\mathcal{U}}^0 g,&g(t=0) = g^{in,{\epsilon}} &\quad \text{(k$\cdot$p model)}\end{aligned}$$ It is worth noting that the back Fourier transform of $g^{\epsilon}_{\mathrm{kp}}(t)$, which we will denote by $f^{\epsilon}_{\mathrm{kp}}(t,x)$, is a solution of the system $$\label{kpF}
\begin{aligned}
&i\partial_t \,f_n = {E_n\over {\epsilon}^2} f_{n}
- \frac{1}{2} \Delta\,f_n
- \frac{1}{{\epsilon}} \sum_{n'} P_{nn'}\cdot\nabla f_{n'}
+ \sum_{n'} V_{n n'} f_{n'},
\\
&f_n(t=0) = f_n^{in,{\epsilon}}(x).
\end{aligned}$$ The diagonalization of the operator $A_{\mathrm{kp}}^{\epsilon}$ performed in the previous section leads to the effective mass dynamics $$\begin{aligned}
\label{EFEem}
&i\partial_t \,g = A^{\epsilon}_{\mathrm{em}}g + {\mathcal{U}}^0 g,& g(t=0) = g^{in,{\epsilon}} \quad &\text{(effective mass model)}\end{aligned}$$ where $$\label{HEMdef}
\left( A^{\epsilon}_{\mathrm{em}}g\right)_n(k) = \frac{1}{{\epsilon}^2}\left(\Lambda^{(2)}({\epsilon}k)\,g(k)\right)_n
= \left(\frac{E_n}{{\epsilon}^2} + \frac{1}{2}\, k \cdot {\mathbb{M}}_n^{-1} k \right) g_n(k).$$ The solution of \eqref{EFEem} will be denoted by $g^{\epsilon}_{\mathrm{em}}(t,k)$ and its back Fourier transform $f^{\epsilon}_{\mathrm{em}}(t,x)$ is easily shown to be the solution of $$\label{EM1pos}
\begin{aligned}
&i\partial_t\, f_n = \frac{1}{{\epsilon}^2}\,E_n f_{n}
- \frac{1}{2} \operatorname{div}\left( {\mathbb{M}}_n^{-1} \nabla f_{n}\right)
+ \sum_{n'} V_{nn'}\,f_{n'},
\\
&f_n(t=0) = f_n^{in,{\epsilon}}(x).
\end{aligned}$$ This equation still involves oscillations in time. These oscillations can be filtered out by setting $f_{n,{\mathrm{em}}}^{\epsilon}(t,x) = h_{{\mathrm{em}},n}^{\epsilon}(t,x)\, \mathrm{e}^{-iE_n{t\over {\epsilon}^2}}$, where $h_{{\mathrm{em}},n}^{\epsilon}$ is then a solution of $$\label{hdyn}
\begin{aligned}
&i\partial_t\, h_{{\mathrm{em}},n}^{\epsilon}=
- \frac{1}{2} \operatorname{div}\left( {\mathbb{M}}_n^{-1} \nabla h_{{\mathrm{em}},n}^{\epsilon}\right)
+ \sum_{n'} {\mathrm{e}}^{i\omega_{nn'}t/{\epsilon}^2} V_{nn'}\,h_{{\mathrm{em}},n'}^{\epsilon},
\\
&h_{{\mathrm{em}},n}^{\epsilon}(t=0) = f_n^{in,{\epsilon}}(x),
\end{aligned}$$ where $$\omega_{nn'} = E_n - E_{n'}.$$ The limit $h_{{\mathrm{em}},n}$ of these functions, as ${\epsilon}\to 0$, is the solution of the system $$\label{limit}
i\partial_t\, h_{{\mathrm{em}},n} =
- \frac{1}{2} \operatorname{div}\left( {\mathbb{M}}_n^{-1} \nabla h_{{\mathrm{em}},n} \right)
+V_{nn}\,h_{{\mathrm{em}},n}, \quad h_{{\mathrm{em}},n}(t=0) = f_n^{in}(x),$$ where $f_n^{in}(x)$ is the limit of $f_n^{in,{\epsilon}}(x)$ as ${\epsilon}$ tends to zero; this will be made precise later on.
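The mechanism behind the disappearance of the off-diagonal coupling in \eqref{limit} is non-stationary phase: for a fixed gap $\omega_{nn'} \neq 0$ and a smooth amplitude $u$, one has $\int_0^t \mathrm{e}^{i\omega_{nn'}s/{\epsilon}^2}\,u(s)\,ds = {\mathcal{O}}({\epsilon}^2)$ by integration by parts, which is the argument used in the proof of Theorem \[LastTeo\] below. A minimal numerical illustration (the gap and the amplitude are arbitrary, hypothetical choices):

```python
import numpy as np

# Oscillatory coupling integral int_0^t exp(i*omega*s/eps^2) u(s) ds for a
# smooth amplitude u: the ratio |integral|/eps^2 stays bounded as eps -> 0,
# which is why the off-diagonal terms of V drop out of the limit system.
omega, t = 1.0, 1.0
u = lambda s: np.cos(s) * np.exp(-s)        # arbitrary smooth amplitude

for eps in (0.4, 0.2, 0.1, 0.05):
    s = np.linspace(0.0, t, 2_000_001)
    mid, ds = 0.5 * (s[:-1] + s[1:]), s[1] - s[0]
    integral = np.sum(np.exp(1j * omega * mid / eps ** 2) * u(mid)) * ds
    print(eps, abs(integral), abs(integral) / eps ** 2)   # last column stays O(1)
```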
The external-potential operators ${\mathcal{U}}^{\epsilon}$ and ${\mathcal{U}}^0$ have been defined earlier. The free k$\cdot$p operator $A(\xi)$ and the effective mass operator $\Lambda^{(2)}(\xi)$ (see definition \eqref{Lambda2Def}) are now re-introduced as operators acting in ${\mathcal{L}}^2$. Recalling definition \eqref{LambdaDef}, we shall also consider the diagonal k$\cdot$p operator $$\label{Lambda}
\left( \Lambda^{\epsilon}g\right)_n(k) = \frac{1}{{\epsilon}^2} \left(\Lambda({\epsilon}k)\,g(k)\right)_n
= \frac{1}{{\epsilon}^2}\,\lambda_n({\epsilon}k)\,g_n(k).$$ The operators $A_{\mathrm{kp}}^{\epsilon}$, $A^{\epsilon}_{\mathrm{em}}$ and $\Lambda^{\epsilon}$ are “fibered” self-adjoint operators in ${\mathcal{L}}^2$, with fiber space $\ell^2$. It is well known (see Ref. [@ReedSimonIV78]) that a fibered self-adjoint operator $L$ in ${\mathcal{L}}^2$ has self-adjointness domain $${\mathcal{D}}(L) = \Big\{ g \in {\mathcal{L}}^2 \;\Big|\; g(\xi) \in {\mathcal{D}}\left(L(\xi)\right) \text{\ a.e. $\xi \in {\mathbb{R}}^d$}
\text{\ and $\int_{{\mathbb{R}}^d} {{\| {L(\xi)\,g(\xi)} \|}}^2_{\ell^2}\, d\xi < \infty$} \Big\},$$ where ${\mathcal{D}}\left(L(\xi)\right)$ is the self-adjointness domain of $L(\xi)$ in $\ell^2$.
Comparison of envelope functions
--------------------------------
Assuming $V \in {\mathcal{W}}_0$ (Definition \[Spaces\]), we know from Lemma \[Lemma0\] that ${\mathcal{U}}^{\epsilon}$ and ${\mathcal{U}}^0$ are bounded (and, clearly, symmetric). Therefore, $A_{\mathrm{kp}}^{\epsilon}+ {\mathcal{U}}^{\epsilon}$, $A_{\mathrm{kp}}^{\epsilon}+ {\mathcal{U}}^0$ and $A_{\mathrm{em}}^{\epsilon}+ {\mathcal{U}}^0$ are the generators of the unitary evolution groups $$G^{\epsilon}(t) = {\mathrm{e}}^{-it( A_{\mathrm{kp}}^{\epsilon}+ {\mathcal{U}}^{\epsilon})},
\quad
G^{\epsilon}_{\mathrm{kp}}(t) = {\mathrm{e}}^{-it( A_{\mathrm{kp}}^{\epsilon}+ {\mathcal{U}}^0)},
\quad
G^{\epsilon}_{\mathrm{em}}(t) = {\mathrm{e}}^{-it( A_{\mathrm{em}}^{\epsilon}+ {\mathcal{U}}^0)}.$$ Our goal is to compare, in the limit of small ${\epsilon}$, the three mild solutions of Eqs. \eqref{EFEexact}, \eqref{EFEkp} and \eqref{EFEem}, i.e. $$\label{MildSols}
g^{\epsilon}(t) = G^{\epsilon}(t)\,g^{in,{\epsilon}},
\quad
g^{\epsilon}_{\mathrm{kp}}(t) = G^{\epsilon}_{\mathrm{kp}}(t)\,g^{in,{\epsilon}},
\quad
g^{\epsilon}_{\mathrm{em}}(t) = G^{\epsilon}_{\mathrm{em}}(t)\,g^{in,{\epsilon}},$$
\[L3\] Let $g^{in,{\epsilon}} \in {{\mathcal{L}}^2_\mu}$ and $V \in {\mathcal{W}}_\mu$ for some $\mu \geq 0$ (see Definition \[Spaces\]). Then, suitable constants $c_1(\mu,V) \geq 0$ and $c_2(\mu,V) \geq 0$, independent of ${\epsilon}$, exist such that $$\label{L3ineq}
{{\| {g^{\epsilon}_{\mathrm{kp}}(t)} \|}}_{{{\mathcal{L}}^2_\mu}} \leq
{\mathrm{e}}^{c_1(\mu,V)t} \,{{\| {g^{in,{\epsilon}}} \|}}_{{{\mathcal{L}}^2_\mu}},
\qquad
{{\| {g^{\epsilon}_{\mathrm{em}}(t)} \|}}_{{{\mathcal{L}}^2_\mu}} \leq
{\mathrm{e}}^{c_2(\mu,V)t} \,{{\| {g^{in,{\epsilon}}} \|}}_{{{\mathcal{L}}^2_\mu}},$$ for all $t \geq 0$.
We prove the lemma only for $g_{\mathrm{kp}}$, the proof for $g_{\mathrm{em}}$ being identical. We also skip the ${\epsilon}$ superscript of $g^{in,{\epsilon}}$. Let $\alpha$ be a fixed multi-index with ${{\left\vert {\alpha} \right\vert}} \leq \mu$. For $R>0$, consider the bounded multiplication operators on ${\mathcal{L}}^2$ $$\big( m_R\, g \big)_n(k) = \left\{
\begin{aligned}
&k^\alpha g_n(k), & &\text{if ${{\left\vert {k} \right\vert}}\leq R$,}
\\
&0, & &\text{otherwise,}
\end{aligned}
\right.$$ Moreover, we denote by $m_\infty$ the (unbounded) limit operator $\big( m_\infty\, g \big)_n(k) = k^\alpha g_n(k)$. Since $m_R$ (with $R < \infty$) commutes with $A_{\mathrm{kp}}^{\epsilon}$ on ${\mathcal{D}}(A_{\mathrm{kp}}^{\epsilon})$, then, by applying standard semigroup techniques, we obtain $$m_R\, g^{\epsilon}_{\mathrm{kp}}(t) = G^{\epsilon}_{\mathrm{kp}}(t)\, m_R\, {g^\mathit{in}}+ \int_0^t G^{\epsilon}_{\mathrm{kp}}(t-s)
\left[ m_R ,\,{\mathcal{U}}^0\right] g^{\epsilon}_{\mathrm{kp}}(s) \,ds$$ and, therefore, $$\label{L3aux1}
{{\| { m_R\, g^{\epsilon}_{\mathrm{kp}}(t)} \|}}_{{\mathcal{L}}^2} \leq {{\| { m_R\, {g^\mathit{in}}} \|}}_{{\mathcal{L}}^2}
+ \int_0^t {{\| { \left[ m_R,\, {\mathcal{U}}^0\right] g^{\epsilon}_{\mathrm{kp}}(s)} \|}}_{{\mathcal{L}}^2} \,ds.$$ Using the definition of ${\mathcal{U}}^0$ and the identity $k^\alpha - \eta^\alpha = \sum_{\beta < \alpha} \binom{\alpha}{\beta}
\,( k - \eta)^{\alpha-\beta}\,\eta^\beta$, we have $$\big( \left[m_\infty,\, {\mathcal{U}}^0 \right] g^{\epsilon}_{\mathrm{kp}}\big)_n(k)
= \sum_{n'} \frac{1}{(2\pi)^{d/2}} \int_{{\mathbb{R}}^d}
(k^\alpha - \eta^\alpha)\,\hat V_{nn'}(k-\eta)\,g^{\epsilon}_{{\mathrm{kp}}, n'}(\eta)\,d\eta$$ $$= \sum_{n'} \frac{1}{(2\pi)^{d/2}} \sum_{\beta < \alpha} \binom{\alpha}{\beta}
\int_{{\mathbb{R}}^d} (k - \eta)^{\alpha-\beta}\,\hat V_{nn'}(k-\eta)\,\eta^\beta g^{\epsilon}_{{\mathrm{kp}}, n'}(\eta)\,d\eta$$ Since $V \in {\mathcal{W}}_\mu$, the potential $U_{\alpha\beta}(x,z)$ such that $\hat U_{\alpha\beta}(k,z) = k^{\alpha - \beta}\hat V(k,z)$ belongs to ${\mathcal{W}}_0$, with ${{\| {U_{\alpha\beta}} \|}}_{{\mathcal{W}}_0} \leq {{\| {V} \|}}_{{\mathcal{W}}_\mu}$, and then, using , we obtain $$\label{Comm1}
{{\| {\left[m_\infty,\, {\mathcal{U}}^0 \right] g^{\epsilon}_{\mathrm{kp}}} \|}}_{{\mathcal{L}}^2} \leq
\sum_{\beta < \alpha} \binom{\alpha}{\beta} {{\| {U_{\alpha\beta}} \|}}_{{\mathcal{W}}_0}
{{\| {\eta^\beta g^{\epsilon}_{\mathrm{kp}}} \|}}_{{\mathcal{L}}^2}
\leq c_1(\mu,V) {{\| {g^{\epsilon}_{\mathrm{kp}}} \|}}_{{{\mathcal{L}}^2_\mu}},$$ where $c_1(\mu,V) = (2^d-1){{\| {V} \|}}_{{\mathcal{W}}_\mu}$. Letting $R\to+\infty$, it is not difficult to show that the dominated convergence theorem applies and yields $$\lim_{R \to +\infty} {{\| {\left[m_R,\, {\mathcal{U}}^0 \right] g^{\epsilon}_{\mathrm{kp}}} \|}}_{{\mathcal{L}}^2}
= {{\| {\left[m_\infty,\, {\mathcal{U}}^0 \right] g^{\epsilon}_{\mathrm{kp}}} \|}}_{{\mathcal{L}}^2}
\leq c_1(\mu,V) {{\| {g^{\epsilon}_{\mathrm{kp}}} \|}}_{{{\mathcal{L}}^2_\mu}}.$$ Then, passing to the limit for $R \to +\infty$ in \eqref{L3aux1}, we get $${{\| {g^{\epsilon}_{\mathrm{kp}}(t)} \|}}_{{{\mathcal{L}}^2_\mu}} \leq {{\| {{g^\mathit{in}}} \|}}_{{{\mathcal{L}}^2_\mu}}
+ c_1(\mu,V)\int_0^t {{\| {g^{\epsilon}_{\mathrm{kp}}(s)} \|}}_{{{\mathcal{L}}^2_\mu}} \,ds,$$ and, therefore, Gronwall’s Lemma yields inequality \eqref{L3ineq}.
Let us begin by comparing the exact dynamics $g^{\epsilon}(t)$ with the k$\cdot$p dynamics $g_{\mathrm{kp}}^{\epsilon}(t)$.
\[EXvsKP\] Let $g^{\epsilon}(t)$ and $g^{\epsilon}_{\mathrm{kp}}(t)$ be respectively the solutions of \eqref{EFEexact} and \eqref{EFEkp}. If ${g^{\mathit{in},{\epsilon}}}\in {{\mathcal{L}}^2_\mu}$ and $V \in {\mathcal{W}}_\mu$, for some $\mu \geq 0$, then, for any given $\tau \geq 0$, a constant $C(\mu,V,\tau) \geq 0$, independent of ${\epsilon}$, exists such that $$\label{EXvsKPeq}
{{\| {g^{\epsilon}(t) - g^{\epsilon}_{\mathrm{kp}}(t)} \|}}_{{\mathcal{L}}^2}
\leq {\epsilon}^\mu\, C(\mu,V,\tau)\, {{\| {{g^\mathit{in}}} \|}}_{{{\mathcal{L}}^2_\mu}},$$ for all $0 \leq t \leq \tau$.
The function $h^{\epsilon}(t) = g^{\epsilon}(t) - g^{\epsilon}_{\mathrm{kp}}(t)$ satisfies the integral equation $$h^{\epsilon}(t) = \int_0^t G^{\epsilon}(t-s) \big({\mathcal{U}}^0 - {\mathcal{U}}^{\epsilon}\big) g^{\epsilon}_{\mathrm{kp}}(s) \,ds$$ and, therefore, $${{\| {h^{\epsilon}(t)} \|}}_{{\mathcal{L}}^2} \leq \int_0^t {{\| {\big( {\mathcal{U}}^0 - {\mathcal{U}}^{\epsilon}\big)
g^{\epsilon}_{\mathrm{kp}}(s)} \|}}_{{\mathcal{L}}^2} \,ds.$$ From Lemma \[L3\] we have that $g^{\epsilon}_{\mathrm{kp}}(t)$ belongs to ${\mathcal{L}}^2_\mu$ for all $t$ and, therefore, we can apply Theorem \[T3\], which gives $${{\| {\big( {\mathcal{U}}^0 - {\mathcal{U}}^{\epsilon}\big) g^{\epsilon}_{\mathrm{kp}}(s)} \|}}_{{\mathcal{L}}^2} \leq
{\epsilon}^\mu c_\mu {{\| {V} \|}}_{{\mathcal{W}}_\mu}\,{{\| {g^{\epsilon}_{\mathrm{kp}}(s)} \|}}_{{{\mathcal{L}}^2_\mu}},$$ for a suitable constant $c_\mu$. Then we have $${{\| {h^{\epsilon}(t)} \|}}_{{\mathcal{L}}^2} \leq {\epsilon}^\mu\,c_\mu\,{{\| {V} \|}}_{{\mathcal{W}}_\mu}\,
\int_0^t {{\| {g^{\epsilon}_{\mathrm{kp}}(s)} \|}}_{{{\mathcal{L}}^2_\mu}}ds$$ and, by \eqref{L3ineq}, we have that \eqref{EXvsKPeq} holds with $ C(\mu,V,\tau) =
\frac{c_\mu \left({\mathrm{e}}^{c_1(\mu,V) \tau} - 1\right)}{c_1(\mu,V)} {{\| {V} \|}}_{{\mathcal{W}}_\mu}$.
We now compare the k$\cdot$p dynamics $g_{\mathrm{kp}}^{\epsilon}(t)$ with the effective mass dynamics $g_{\mathrm{em}}^{\epsilon}(t)$ (see definitions \eqref{MildSols}). Recalling the discussion in Sec. \[sec4\], we need, as an intermediate step between $g_{\mathrm{kp}}^{\epsilon}(t)$ and $g_{\mathrm{em}}^{\epsilon}(t)$, the function $g_*^{\epsilon}(t) = T^*_{\epsilon}\,g^{\epsilon}_{\mathrm{kp}}(t)$, that is $$\label{GD}
g_*^{\epsilon}(t) = T^*_{\epsilon}\,G^{\epsilon}_{\mathrm{kp}}(t)\,{g^{\mathit{in},{\epsilon}}}=
\exp\left[-it\,\left( \Lambda^{\epsilon}+ T^*_{\epsilon}\,{\mathcal{U}}^0 T_{\epsilon}\right)\right]\,T^*_{\epsilon}\,{g^{\mathit{in},{\epsilon}}},$$ representing the diagonalized k$\cdot$p dynamics (see definitions \eqref{Tdef} and \eqref{Lambda}).
\[KPvsEM\] Let $g^{\epsilon}_{\mathrm{em}}(t)$ and $g_*^{\epsilon}(t)$ be respectively defined by \eqref{MildSols} and \eqref{GD}. Let ${g^{\mathit{in},{\epsilon}}}\in {{\mathcal{L}}^2_\mu}$ and $V \in {\mathcal{W}}_\mu$, for some $\mu > 0$, and assume ${g^{\mathit{in},{\epsilon}}}= \Pi_N{g^{\mathit{in},{\epsilon}}}$ (i.e. ${g^{\mathit{in},{\epsilon}}}$ is concentrated in the first $N$ bands). Then, for any given $\tau \geq 0$, a suitable constant $C'(\mu,N,V,\tau)$, independent of ${\epsilon}$, exists such that $$\label{LastIneq}
{{\| {g_*^{\epsilon}(t) - g^{\epsilon}_{\mathrm{em}}(t)} \|}}_{{\mathcal{L}}^2}
\leq {\epsilon}^{\min\{\mu/3,\,1 \}}\,C'(\mu,N,V,\tau)\,{{\| {{g^{\mathit{in},{\epsilon}}}} \|}}_{{{\mathcal{L}}^2_\mu}},$$ for all $0 \leq t \leq \tau$.
Let $S^{\epsilon}_{\Lambda}(t) = \exp(-it \Lambda^{\epsilon})$, $S^{\epsilon}_{\mathrm{em}}(t) = \exp(-it A^{\epsilon}_{\mathrm{em}})$ and ${\mathcal{U}}^{\epsilon}_T : = T^*_{\epsilon}\,{\mathcal{U}}^0 T_{\epsilon}$. Then, $$\begin{aligned}
&g^{\epsilon}_{\mathrm{em}}(t) = S^{\epsilon}_{\mathrm{em}}(t) {g^{\mathit{in},{\epsilon}}}+ \int_0^t S^{\epsilon}_{\mathrm{em}}(t-s)\,
{\mathcal{U}}^0 g^{\epsilon}_{\mathrm{em}}(s)\,ds,
\\[4pt]
&g_*^{\epsilon}(t) = S^{\epsilon}_\Lambda(t) T_{\epsilon}^* {g^{\mathit{in},{\epsilon}}}+ \int_0^t S^{\epsilon}_\Lambda(t-s)\,
{\mathcal{U}}^{\epsilon}_T\,g_*^{\epsilon}(s)\,ds.
\end{aligned}$$ Putting $h^{\epsilon}= g_*^{\epsilon}- g^{\epsilon}_{\mathrm{em}}$, we can write $$\begin{gathered}
\label{Duhamel}
h^{\epsilon}(t) = \left( S^{\epsilon}_\Lambda- S^{\epsilon}_{\mathrm{em}}\right)(t)\, {g^{\mathit{in},{\epsilon}}}+ S^{\epsilon}_\Lambda(t) \left(T_{\epsilon}^* - I\right){g^{\mathit{in},{\epsilon}}}+ \int_0^t S^{\epsilon}_\Lambda(t-s)\,{\mathcal{U}}^{\epsilon}_T h^{\epsilon}(s)\,ds
\\
+ \int_0^t S^{\epsilon}_\Lambda(t-s) \left( {\mathcal{U}}^{\epsilon}_T - {\mathcal{U}}^0 \right) g^{\epsilon}_{\mathrm{em}}(s)\, ds
+ \int_0^t \left( S^{\epsilon}_\Lambda - S^{\epsilon}_{\mathrm{em}}\right)(t-s)\, {\mathcal{U}}^0 g^{\epsilon}_{\mathrm{em}}(s)\,ds.\end{gathered}$$ From the effective mass theorem, Theorem \[TheoEM\], a constant $C(\mu,N,t)$ exists such that $$\label{Stima1}
{{\| { \left( S^{\epsilon}_\Lambda- S^{\epsilon}_{\mathrm{em}}\right)(t)\, {g^{\mathit{in},{\epsilon}}}} \|}}_{{\mathcal{L}}^2}
\leq {\epsilon}^{\min\{\mu/3,\, 1\}}\,C(\mu,N,t) {{\| {{g^{\mathit{in},{\epsilon}}}} \|}}_{{{\mathcal{L}}^2_\mu}}.$$ Moreover, from Lemma \[L3\] we have that both $g^{\epsilon}_{\mathrm{em}}(t)$ and ${\mathcal{U}}^0 g^{\epsilon}_{\mathrm{em}}(t)$ belong to ${\mathcal{L}}^2_\mu$ for all $t$, and that a constant $C_1(\mu,V,t) \geq 0$ exists such that $$\label{Stima1.1}
{{\| {{\mathcal{U}}^0 g^{\epsilon}_{\mathrm{em}}(t)} \|}}_{{{\mathcal{L}}^2_\mu}} \leq C_1(\mu,V,t)\, {{\| {{g^{\mathit{in},{\epsilon}}}} \|}}_{{{\mathcal{L}}^2_\mu}}$$ (this stems, in particular, from the commutator inequality \eqref{Comm1}, which still holds for $g^{\epsilon}_{\mathrm{em}}$). This inequality, together with Theorem \[TheoEM\], yields $$\label{Stima2}
{{\| {\left( S^{\epsilon}_\Lambda- S^{\epsilon}_{\mathrm{em}}\right)(t-s)\, {\mathcal{U}}^0 g^{\epsilon}_{\mathrm{em}}(s)} \|}}_{{\mathcal{L}}^2}
\leq {\epsilon}^{\min\{\mu/3,\, 1\}}\,C_2(\mu,N,t-s) {{\| {{g^{\mathit{in},{\epsilon}}}} \|}}_{{{\mathcal{L}}^2_\mu}}$$ for a suitable constant $C_2(\mu,N,t) \geq 0$. In order to estimate the last integral in \eqref{Duhamel}, let us write $$\big({\mathcal{U}}^{\epsilon}_T - {\mathcal{U}}^0 \big) g^{\epsilon}_{\mathrm{em}}(s)
= \left(T_{\epsilon}^* - I\right) {\mathcal{U}}^0 g^{\epsilon}_{\mathrm{em}}(s)
+ T_{\epsilon}^*\,{\mathcal{U}}^0 \left(T_{\epsilon}- I \right) g^{\epsilon}_{\mathrm{em}}(s).$$ Using inequality \eqref{TepsIneq} together with the above ${{\mathcal{L}}^2_\mu}$ bounds, we see that another constant $C_3(\mu,N,V,t) \geq 0$ exists such that $$\label{Stima3}
{{\| {\big( {\mathcal{U}}^{\epsilon}_T - {\mathcal{U}}^0 \big) g^{\epsilon}_{\mathrm{em}}(s)} \|}}_{{\mathcal{L}}^2}
\leq {\epsilon}^{\min\{\mu,1\}}\, C_3(\mu,N,V,t)\,{{\| {{g^{\mathit{in},{\epsilon}}}} \|}}_{{{\mathcal{L}}^2_\mu}} \,.$$ In conclusion, from inequalities \eqref{Stima1}, \eqref{Stima2} and \eqref{Stima3}, and from Eq. \eqref{Duhamel}, we get $${{\| {h^{\epsilon}(t)} \|}}_{{\mathcal{L}}^2} \leq {\epsilon}^{\min\{\mu/3, 1\}} \,C_4(\mu,N,V,\tau) {{\| {{g^{\mathit{in},{\epsilon}}}} \|}}_{{{\mathcal{L}}^2_\mu}}
+ {{\| {V} \|}}_{{\mathcal{W}}_0}\int_0^t {{\| {h^{\epsilon}(s)} \|}}_{{\mathcal{L}}^2}\,ds,$$ for all $0 \leq t \leq \tau$ (here we also used the fact that all the estimation constants introduced so far are non-decreasing with respect to time). Hence, inequality \eqref{LastIneq}, with $C'(\mu,N,V,\tau) = {\mathrm{e}}^{\tau{{\| {V} \|}}_{{\mathcal{W}}_0}} C_4(\mu,N,V,\tau)$, follows from Gronwall’s Lemma.
\[CoroFinal\] Let $g^{\epsilon}_{\mathrm{kp}}(t)$ and $g_{\mathrm{em}}^{\epsilon}(t)$ be as in \eqref{MildSols}, and assume $g^{in,{\epsilon}} \in {{\mathcal{L}}^2_\mu}$, with a uniform bound as ${\epsilon}$ tends to zero. Moreover, assume $V \in {\mathcal{W}}_\mu$ for some $\mu > 0$. Then $\lim_{{\epsilon}\to 0} {{\| {g^{\epsilon}_{\mathrm{kp}}(t) - g^{\epsilon}_{\mathrm{em}}(t)} \|}}_{{\mathcal{L}}^2} =0$, uniformly in bounded time-intervals. If, in addition, $g^{in,{\epsilon}} = \Pi_Ng^{in,{\epsilon}}$ for some $N$, then, for any given $\tau \geq 0$, a constant $C''(\mu,N,V,\tau) \geq 0$, independent of ${\epsilon}$, exists such that $$\label{CoroIneq}
{{\| {g^{\epsilon}_{\mathrm{kp}}(t) - g^{\epsilon}_{\mathrm{em}}(t)} \|}}_{{\mathcal{L}}^2}
\leq {\epsilon}^{\min\{\mu/3,\,1 \}}\,C''(\mu,N,V,\tau)\,{{\| {g^{in,{\epsilon}}} \|}}_{{{\mathcal{L}}^2_\mu}},$$ for all $0 \leq t \leq \tau$.
We begin with the second statement, assuming $g^{in,{\epsilon}} = \Pi_Ng^{in,{\epsilon}}$. Using inequalities \eqref{TepsIneq} and \eqref{LastIneq}, and recalling definition \eqref{GD}, we can write $${{\| {g^{\epsilon}_{\mathrm{kp}}(t) - g^{\epsilon}_{\mathrm{em}}(t)} \|}}_{{\mathcal{L}}^2} \leq
{{\| {(T_{\epsilon}^*-I)g^{\epsilon}_{\mathrm{kp}}(t)} \|}}_{{\mathcal{L}}^2} + {{\| {g^{\epsilon}_*(t) - g^{\epsilon}_{\mathrm{em}}(t)} \|}}_{{\mathcal{L}}^2}$$ $$\leq {\epsilon}^{\min\{\mu,\,1\}}\,C(\mu,N)\,{{\| {g^{\epsilon}_{\mathrm{kp}}(t)} \|}}_{{{\mathcal{L}}^2_\mu}}
+ {\epsilon}^{\min\{\mu/3,\,1 \}}\,C'(\mu,N,V,\tau)\,{{\| {g^{in,{\epsilon}} } \|}}_{{{\mathcal{L}}^2_\mu}},$$ for $0 \leq t \leq \tau$. Then, using also \eqref{L3ineq}, inequality \eqref{CoroIneq} follows. If now $g^{in,{\epsilon}}$ simply belongs to ${\mathcal{L}}^2_\mu$, then for any fixed $N$ we can write $${{\| {(g^{\epsilon}_{\mathrm{kp}}- g^{\epsilon}_{\mathrm{em}})(t)} \|}}_{{\mathcal{L}}^2} \leq
{{\| {(G^{\epsilon}_{\mathrm{kp}}- G^{\epsilon}_{\mathrm{em}})(t)\Pi_N g^{in,{\epsilon}} } \|}}_{{\mathcal{L}}^2}
+ {{\| {(G^{\epsilon}_{\mathrm{kp}}- G^{\epsilon}_{\mathrm{em}})(t)\Pi_N^c g^{in,{\epsilon}} } \|}}_{{\mathcal{L}}^2}$$ $$\leq {\epsilon}^{\min\{\mu/3,\,1 \}}\,C''(\mu,N,\tau)\,{{\| {g^{in,{\epsilon}}} \|}}_{{{\mathcal{L}}^2_\mu}}
+ 2\,{{\| {\Pi_N^c g^{in,{\epsilon}} } \|}}_{{\mathcal{L}}^2},$$ for all $0 \leq t \leq \tau$. Since $\Pi_N^c g^{in,{\epsilon}} \to 0$ in ${\mathcal{L}}^2$ as $N\to\infty$, then we can fix $N$ large enough and, successively, ${\epsilon}$ small enough (uniformly in $0 \leq t \leq \tau$, by assumption) so that ${{\| {g^{\epsilon}_{\mathrm{kp}}(t) - g^{\epsilon}_{\mathrm{em}}(t)} \|}}_{{\mathcal{L}}^2}$ is arbitrarily small, which proves our assertion.
The following result is a direct consequence of the above comparisons.
Assume that the envelope functions $f^{in,{\epsilon}}(x)$ are bounded in ${\mathcal{H}}^\mu$ for some $\mu>0$ and that the potential $V(x,z)$ belongs to ${\mathcal{W}}_\mu$. Then we have the convergence, locally uniformly in time, $$\lim_{{\epsilon}\to 0} {{\| {f^{\epsilon}(t) - f^{\epsilon}_{\mathrm{em}}(t)} \|}}_{{\mathcal{L}}^2} = 0,$$ where $f^{\epsilon}(t,x)$ is the back Fourier transform of the solution of \eqref{EFEexact} and $f^{\epsilon}_{\mathrm{em}}(t,x)$ is the solution of \eqref{EM1pos}.
We are now able to prove the following theorem.
\[LastTeo\] Let $h_{\mathrm{em}}^{\epsilon}(t,x)$ and $h_{\mathrm{em}}(t,x)$ be the mild solutions of, respectively, Eq. \eqref{hdyn} and Eq. \eqref{limit}. Assume $\lim_{{\epsilon}\to0} {{\| {f^{in,{\epsilon}} - f^{in}} \|}}_{{\mathcal{L}}^2} = 0$ and assume that $\mu > 0$ exists such that $V \in {\mathcal{W}}_\mu$ and $f^{in,{\epsilon}}$ is bounded uniformly in ${\mathcal{H}}^\mu$. Then $$\lim_{{\epsilon}\to0} {{\| {h_{\mathrm{em}}^{\epsilon}(t) - h_{\mathrm{em}}(t)} \|}}_{{\mathcal{L}}^2} = 0,$$ uniformly in bounded time intervals.
Since the dynamics generated by \eqref{hdyn} and \eqref{limit} both preserve the ${\mathcal{L}}^2$ norm, we can assume without loss of generality that the initial conditions $f^{in,{\epsilon}}$ and $f^{in}$ are identical and denote them by ${h^\mathit{in}}\in {\mathcal{H}}^\mu$. We consider the diagonal operator $H_0$ in ${\mathcal{L}}^2$ $$\left(H_0 h\right)_n(x) =
-\frac{1}{2} \operatorname{div}\left( {\mathbb{M}}_n^{-1} \nabla h_{n} \right)(x) + V_{nn}(x)h_{n}(x).$$ We recall that the matrix $V_{nn'}$ defines a bounded operator on ${\mathcal{L}}^2$ (that is, the operator ${\mathcal{U}}^0$ in position variables). This operator, as well as its diagonal and off-diagonal parts, is bounded with bound ${{\| {V} \|}}_{{\mathcal{W}}_0}$ (see Lemma \[Lemma0\]). Then, $H_0$ is self-adjoint on the domain $${\mathcal{D}}(H_0) = \left\{ h \in {\mathcal{L}}^2 \;\left|\; h_n \in {\mathrm{H}}^2({\mathbb{R}}^d),\
\sum_{n} {{\| {\operatorname{div}({\mathbb{M}}_n^{-1} \nabla h_{n})} \|}}_{L^2({\mathbb{R}}^d)}^2 < \infty \right. \right\}.$$ Let $S(t) = \exp(-itH_0)$ denote the (diagonal) unitary group generated by $H_0$. Moreover we consider the operator $R^{\epsilon}(t)$ given by $$\left(R^{\epsilon}(t)h\right)_n(x) =
\sum_{n' \not= n} {\mathrm{e}}^{i\omega_{nn'}t/{\epsilon}^2} V_{nn'}(x)\,h_{n'}(x),$$ which, being unitarily equivalent to the off-diagonal part of ${\mathcal{U}}^0$, is again bounded by ${{\| {V} \|}}_{{\mathcal{W}}_0}$ (for all $t$). The two mild solutions satisfy $$h_{\mathrm{em}}^{\epsilon}(t) = S(t) {h^\mathit{in}}+ \int_0^t S(t-s) R^{\epsilon}(s)\, h_{\mathrm{em}}^{\epsilon}(s)\,ds,
\qquad
h_{\mathrm{em}}(t) = S(t) {h^\mathit{in}},$$ and, therefore, we need to prove that $$h_{\mathrm{em}}^{\epsilon}(t) - h_{\mathrm{em}}(t) = \int_0^t S(t-s) R^{\epsilon}(s)\, h_{\mathrm{em}}^{\epsilon}(s)\,ds$$ goes to zero as ${\epsilon}\to 0$. To this aim we resort to the usual cutoff. For any fixed $N \in {\mathbb{N}}$ we decompose the right-hand side of the previous equation as $$h_{\mathrm{em}}^{\epsilon}(t) - h_{\mathrm{em}}(t) = I_N(t) + I_N^c (t),$$ where, using the projection operators $\Pi_N$ and $\Pi_N^c = I - \Pi_N$, introduced in the proof of Theorem \[P3\], we have put $$\begin{aligned}
&I_N(t) = \int_0^t S(t-s) \Pi_N R^{\epsilon}(s) \Pi_N h_{\mathrm{em}}^{\epsilon}(s)\,ds,
\\
&I_N^c(t) = \int_0^t S(t-s)
\left[\Pi_N^c R^{\epsilon}(s) \Pi_N + \Pi_NR^{\epsilon}(s) \Pi_N^c
+ \Pi_N^c R^{\epsilon}(s) \Pi_N^c\right] h_{\mathrm{em}}^{\epsilon}(s)\,ds.
\end{aligned}$$
#### Case of regular data
We assume in this part that $V\in {\mathcal{W}}_2$ and that ${h^\mathit{in}}\in {\mathcal{H}}^2$. We fix a $\delta>0$ arbitrarily small and a maximum time $\tau$. Because $R^{\epsilon}(t)$ is uniformly bounded and ${{\| {h_{\mathrm{em}}^{\epsilon}(t)} \|}}_{{\mathcal{L}}^2} = {{\| {{h^\mathit{in}}} \|}}_{{\mathcal{L}}^2}$, a number $N(\delta,\tau)$ (independent of ${\epsilon}$) clearly exists such that ${{\| {I_N^c(t)} \|}}_{{\mathcal{L}}^2} \leq \delta$, for all $N \geq N(\delta,\tau)$ and $0 \leq t \leq \tau$. We now turn our attention to $I_N(t)$. Using the assumption $V \in {\mathcal{W}}_2$, it is not difficult to prove the following facts:
1. for every $N$, if $h \in {{\mathcal{H}}^2}$ then $\Pi_N h \in {\mathcal{D}}(H_0)$, and a constant $C_N$ exists such that ${{\| {H_0\,\Pi_Nh} \|}}_{{\mathcal{L}}^2} \leq C_N{{\| {h} \|}}_{{{\mathcal{H}}^2}}$;
2. for every $N$ a constant $C_N'$, independent of $t$ and ${\epsilon}$, exists such that, if $h \in {{\mathcal{H}}^2}$, then ${{\| {\Pi_NR^{\epsilon}(t)\Pi_Nh} \|}}_{{{\mathcal{H}}^2}} \leq C_N'{{\| {h} \|}}_{{{\mathcal{H}}^2}}$.
Moreover, in a similar way to Lemma \[L3\], we can prove the following:
1. if ${h^\mathit{in}}\in {{\mathcal{H}}^2}$, then $h_{\mathrm{em}}^{\epsilon}(t) \in {{\mathcal{H}}^2}$ for all $t$ and a function $C(t)$, bounded on bounded time intervals and independent of ${\epsilon}$, exists such that ${{\| {h_{\mathrm{em}}^{\epsilon}(t)} \|}}_{{{\mathcal{H}}^2}} \leq C(t) {{\| {{h^\mathit{in}}} \|}}_{{{\mathcal{H}}^2}}$.
Using (i), (ii) and (iii) we have that $\Pi_N R^{\epsilon}(s) \Pi_N h_{\mathrm{em}}^{\epsilon}(s) \in {\mathcal{D}}(H_0)$ and, therefore, $S(t-s) \Pi_N R^{\epsilon}(s) \Pi_N h_{\mathrm{em}}^{\epsilon}(s)$ is continuously differentiable in $s$. This makes it possible to perform an integration by parts in the integral defining $I_N(t)$. Since $$R^{\epsilon}(t) = {\epsilon}^2\, \frac{d}{dt}R^{\epsilon}_\omega(t),$$ where $$\left( R^{\epsilon}_\omega(t)h\right)_n(x) =
\sum_{n' \not= n} \frac{1}{i\omega_{nn'}}\,{\mathrm{e}}^{i\omega_{nn'}t/{\epsilon}^2} V_{nn'}(x)\, h_{n'}(x),$$ then the integration by parts yields $$\begin{gathered}
I_N(t) = {\epsilon}^2 S(t-s)\Pi_N R^{\epsilon}_\omega(s)\Pi_Nh_{\mathrm{em}}^{\epsilon}(s)\Big|^{s=t}_{s=0}
\\
- {\epsilon}^2 \int_0^t S(t-s)\Pi_N
\left[ iH_0R^{\epsilon}_\omega(s)\Pi_Nh_{\mathrm{em}}^{\epsilon}(s) + R^{\epsilon}_\omega(s)\Pi_N\frac{d}{ds}h_{\mathrm{em}}^{\epsilon}(s)\right]ds,\end{gathered}$$ where, of course, $$\frac{d}{ds}h_{\mathrm{em}}^{\epsilon}(s) = -i\left( H_0h_{\mathrm{em}}^{\epsilon}(s) + R^{\epsilon}(s)h_{\mathrm{em}}^{\epsilon}(s)\right).$$ Since $\Pi_N R^{\epsilon}_\omega(t)$ is uniformly bounded by some constant depending on $N$ (in particular, such constant will depend on $1/\min\{ \omega_{nn'} \mid n'\not= n,\ n \leq N\}$), from (i), (ii) and (iii), and using $\Pi_N H_0 = H_0\,\Pi_N$, we obtain that a constant $C_N(\tau)$, independent of ${\epsilon}$, exists such that $${{\| {I_N(t)} \|}}_{{\mathcal{L}}^2} \leq {\epsilon}^2 C_N(\tau) {{\| {{h^\mathit{in}}} \|}}_{{{\mathcal{H}}^2}},
\qquad 0 \leq t \leq \tau.$$ Thus, fixing $N \geq N(\delta,\tau)$, an ${\epsilon}$ small enough exists such that $ {{\| {I_N(t)} \|}}_{{\mathcal{L}}^2} \leq \delta$, for all $0 \leq t \leq \tau$. For such $N$ and ${\epsilon}$ we have, therefore, $${{\| {h_{\mathrm{em}}^{\epsilon}(t) - h_{\mathrm{em}}(t)} \|}}_{{\mathcal{L}}^2} \leq {{\| {I_N(t)} \|}}_{{\mathcal{L}}^2} + {{\| {I_N^c (t)} \|}}_{{\mathcal{L}}^2}
\leq 2\delta,$$ which proves the theorem in the regular case.
#### Case of general data
If $\mu\geq 2$, then there is nothing to do. Let us assume $0< \mu < 2$, let $\delta$ be a regularizing parameter, and let ${h^\mathit{in}}_\delta$ and $V_\delta$ be two regularizations of ${h^\mathit{in}}$ and of $V$ such that $${h^\mathit{in}}_\delta \in {\mathcal{H}}^2, \qquad
\lim_{\delta \to 0} {{\| {{h^\mathit{in}}_\delta - {h^\mathit{in}}} \|}}_{{\mathcal{H}}^\mu} = 0$$ and $$V_\delta \in {\mathcal{W}}_2, \qquad
\lim_{\delta \to 0} {{\| {V_\delta - V} \|}}_{{\mathcal{W}}_\mu} = 0.$$ Let $h_{{\mathrm{em}},\delta}^{\epsilon}$ and $h_{{\mathrm{em}},\delta}$ be the corresponding solutions of and with the modified initial data and potential. Then we have $$\begin{gathered}
{{\| {(h_{\mathrm{em}}^{\epsilon}- h_{\mathrm{em}})(t)} \|}}_{{\mathcal{L}}^2}
\leq {{\| {(h_{\mathrm{em}}^{\epsilon}- h_{{\mathrm{em}},\delta}^{\epsilon})(t) } \|}}_{{\mathcal{L}}^2} +
\\
+ {{\| { (h_{{\mathrm{em}},\delta}^{\epsilon}- h_{{\mathrm{em}},\delta})(t)} \|}}_{{\mathcal{L}}^2}
+ {{\| { (h_{{\mathrm{em}},\delta} - h_{\mathrm{em}})(t)} \|}}_{{\mathcal{L}}^2}.\end{gathered}$$ The above analysis of the regular case shows that, for any fixed $\delta > 0$, the second term of the right-hand side tends to zero as ${\epsilon}$ tends to zero. Thanks to Theorem \[T3\], it is easy to show that the third term of the right-hand side tends to zero as $\delta$ tends to zero and that the first term of the right-hand side also tends to zero as $\delta$ tends to zero, uniformly in ${\epsilon}$.
Convergence of the density {#EMWF}
--------------------------
In this section, we prove the convergence of the particle density towards the superposition of the envelope function densities. Namely, we have the following theorem.
\[MainTheorem\] Let $\mu > 0$ be such that $V \in {\mathcal{W}}_\mu$, and let the initial datum ${\psi^{\mathit{in},{\epsilon}}}\in L^2({\mathbb{R}}^d)$ be such that its envelope functions $(f_n^{in,{\epsilon}})$ form a bounded sequence in ${\mathcal{H}}^\mu$ which strongly converges in ${\mathcal{L}}^2$ towards the initial datum $f^{in} = (f^{in}_n)$. Then, for any given function $\theta \in L^1({\mathbb{R}}^d)$ such that $\widehat{\theta}\in L^1({\mathbb{R}}^d)$, the following convergence holds locally uniformly in time: $$\lim_{{\epsilon}\to 0} \int {{\left\vert {\psi^{\epsilon}(t,x)} \right\vert}}^2\, \theta(x) \, dx = \sum_n \int \theta (x) {{\left\vert {h_{{\mathrm{em}},n}(t,x)} \right\vert}}^2\, dx,$$ where $\psi^{\epsilon}$ is the solution of the original initial value problem and $h_{\mathrm{em}}$ is the solution of \eqref{limit}.
Let $h^{\epsilon}_n(t,x) = f^{\epsilon}_n(t,x)\, {\mathrm{e}}^{iE_n t/{\epsilon}^2}$, where $f_n^{\epsilon}$ are the envelope functions of $\psi^{\epsilon}$. We deduce from the results of the above subsection, in particular from Theorem \[LastTeo\], that $$\lim_{{\epsilon}\to 0} \sum_n {{\| {h_n^{\epsilon}(t) - h_{{\mathrm{em}},n}(t)} \|}}_{L^2({\mathbb{R}}^d)}^2 = 0.$$ Let $$\tilde{\theta}^{\epsilon}= {\mathcal{T}}_{1\over 3{\epsilon}}(\theta), \quad \widetilde{h}_n^{\epsilon}= {\mathcal{T}}_{1\over 3{\epsilon}} (h_n^{\epsilon}), \quad \widetilde{h}_{{\mathrm{em}},n}^{\epsilon}= {\mathcal{T}}_{1\over 3{\epsilon}} (h_{{\mathrm{em}}, n})$$ where ${\mathcal{T}}_\gamma$ is the truncation operator defined earlier. Recalling that $$\psi^{\epsilon}(t,x) = {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2} \sum_{n} h_{n}^{\epsilon}(t,x) {\mathrm{e}}^{-iE_nt/{\epsilon}^2} v_n^{\epsilon}(x)$$ let us define $$\tilde{\psi}^{\epsilon}(t,x) = {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2} \sum_{n} \tilde{h}_{n}^{\epsilon}(t,x) {\mathrm{e}}^{-iE_nt/{\epsilon}^2} v_n^{\epsilon}(x).$$ It is readily seen that $${{\| {\psi^{\epsilon}(t) - \tilde{\psi}^{\epsilon}(t)} \|}}_{L^2}^2
= \sum_n {{\| {h_n^{\epsilon}(t) - \tilde{h}_n^{\epsilon}(t)} \|}}_{L^2}^2
\leq C {\epsilon}^\mu {{\| {h^{\epsilon}(t)} \|}}_{{\mathcal{H}}^\mu}^2,$$ where, by Lemma \[L3\], $ {{\| {h^{\epsilon}(t)} \|}}_{{\mathcal{H}}^\mu}$ remains bounded. It is now clear that $${{\left\vert {\int \theta(x) {{\left\vert {\psi^{\epsilon}(t,x)} \right\vert}}^2\, dx - \int \theta(x) {{\left\vert {\tilde{\psi}^{\epsilon}(t,x)} \right\vert}}^2\, dx} \right\vert}}
\leq {{\| { \theta} \|}}_{L^\infty}\, {{\| {\psi^{\epsilon}(t) - \tilde{\psi}^{\epsilon}(t)} \|}}_{L^2} \left( {{\| {\psi^{\epsilon}(t)} \|}}_{L^2} + {{\| {\tilde{\psi}^{\epsilon}(t)} \|}}_{L^2} \right)$$ goes to 0 and, therefore, we can replace $\psi^{\epsilon}$ by $\tilde{\psi}^{\epsilon}$. Now, $${{\left\vert { \int \theta(x) {{\left\vert {\tilde{\psi}^{\epsilon}(t,x)} \right\vert}}^2\, dx - \int \tilde{\theta}^{\epsilon}(x) {{\left\vert {\tilde{\psi}^{\epsilon}(t,x)} \right\vert}}^2\, dx} \right\vert}}
\leq {{\| {\tilde{\psi}^{\epsilon}(t)} \|}}_{L^2}^2{{\| {\tilde{\theta}^{\epsilon}- \theta} \|}}_{L^\infty} \to 0$$ and, therefore, we can replace $\theta$ by $\tilde{\theta}^{\epsilon}$. But $$\begin{gathered}
\int \tilde{\theta}^{\epsilon}(x) {{\left\vert {\tilde{\psi}^{\epsilon}(t,x)} \right\vert}}^2\, dx =
\\
{{\left\vert {{\mathcal{C}}} \right\vert}} \left\langle \sum_{n}\tilde{\theta}^{\epsilon}(x)\tilde{h}_{n}^{\epsilon}(t,x) {\mathrm{e}}^{-iE_nt/{\epsilon}^2}
v_n^{\epsilon}(x)
\, , \,
\sum_{n} \tilde{h}_{n}^{\epsilon}(t,x) {\mathrm{e}}^{-iE_nt/{\epsilon}^2} v_n^{\epsilon}(x) \right\rangle\end{gathered}$$ and $\operatorname{supp}(\widehat{\tilde{\theta}^{\epsilon}\tilde{h}_{n}^{\epsilon}}) \subset {\mathcal{B}}/3{\epsilon}+ {\mathcal{B}}/ 3 {\epsilon}\subset {\mathcal{B}}/{\epsilon}$. Therefore the Parseval formula shows that $$\int \tilde{\theta}^{\epsilon}(x) {{\left\vert {\tilde{\psi}^{\epsilon}(t,x)} \right\vert}}^2\, dx =
\sum_n \int \widetilde{\theta}^{\epsilon}(x) {{\left\vert {\widetilde{h}_n^{\epsilon}(t,x)} \right\vert}}^2\, dx
\to \sum_n \int \theta (x) {{\left\vert {h_{{\mathrm{em}},n}(t,x)} \right\vert}}^2\, dx,$$ which completes the proof of the theorem.
Comments {#sec6}
========
One of the most restrictive hypotheses that we made in the previous sections is the simplicity of all the eigenvalues of the periodic operator ${\mathcal{H}}^1_{\mathcal{L}}$. The question of the simplicity of the eigenvalues is central in this problem, as has already been noticed in the works of Poupaud and Ringhofer [@PoupaudRinghofer96] and of Allaire and Piatnitski [@Allaire05]. In these references, the authors do not assume that all the eigenvalues are simple, but assume that the initial datum is concentrated on a finite number of bands which have multiplicity 1. The difference between our approach and that of these two references is that ours allows for an infinite number of envelope functions. Besides, the hypothesis of simplicity of all the eigenvalues at $k = 0$ can be removed and replaced by the assumption that the envelope functions of the initial datum corresponding to multiple eigenvalues vanish. The proof would, however, have to be reshuffled, and we have chosen to stick to the restrictive hypothesis of simple eigenvalues. Let us however briefly explain how we can deal with this problem. One important step is the diagonalization of the k$\cdot$p Hamiltonian, which gives rise to equation \eqref{GD}. In this formula the operator $\Lambda^{\epsilon}$ is diagonal in the $n$ index while $T^*_{\epsilon}{\mathcal{U}}^0 T_{\epsilon}$ is not (the existence of the unitary transformation is still valid even in the case of multiple eigenvalues; it is continuous, but not regular for eigenvalues with multiplicity larger than one). Because of the separation of the eigenvalues, it is easy to show that the eigenspaces with different energies are decoupled from each other (adiabatic decoupling) and we can replace $T^*_{\epsilon}{\mathcal{U}}^0 T_{\epsilon}$ by ${\mathcal{U}}^0_{nn} \delta_{nn'}$. If the initial data are concentrated only on modes with multiplicity one, then the solution itself is almost concentrated on these modes and, for these modes, we can perform the expansion of the eigenvalues and obtain the effective mass equation \eqref{limit}. Let us also mention a recent work by F. Fendt-Delebecque and F. Méhats [@fanny], where the effective mass approximation is performed for the Schrödinger equation with a large magnetic field and which relies on large-time averaging of almost periodic functions. This approach might be of help for analyzing the limit for multiple eigenvalues.
One final question which has not been addressed so far is the relationship between the regularity of the function $\psi$ and that of its corresponding sequence $f^{\epsilon}$ of envelope functions. In particular, one may look for sufficient conditions on $\psi$ ensuring that $f^{\epsilon}\in {\mathcal{H}}^\mu$. Since the envelope functions form a Fourier-like expansion of the function $\psi$ on the basis $(v_n)$, their decay as $n$ grows depends not only on the regularity of $\psi$ but also on that of the basis $(v_n)$, which itself depends on the regularity of the potential $W_{\mathcal{L}}$. We show in the following subsection some results in this direction.
Asymptotic behavior of scaled envelope functions
------------------------------------------------
In this section we study the asymptotic behavior, as ${\epsilon}$ tends to zero, of the scaled envelope functions relative to the basis $(v_n)$ introduced above.
From the definition of the envelope functions, it is readily seen that their limit as ${\epsilon}$ tends to zero is given by $$\begin{array}{lll}
{\displaystyle}\lim_{{\epsilon}\to 0} \hat f_n^{\epsilon}(k) &= &{\displaystyle}\lim_{{\epsilon}\to 0}{{\left\vert {{\mathcal{B}}\,} \right\vert}}^{-1/2} \int_{{\mathbb{R}}^d} \,{\mathbbm{1}}_{{\mathcal{B}}/{\epsilon}}(k)
\,{\mathrm{e}}^{-ik\cdot x}\,v_n\left({x\over {\epsilon}}\right)\,\psi(x)\,dx
\\[10pt]
& = & {\displaystyle}{{\left\vert {{\mathcal{B}}\,} \right\vert}}^{-1/2}{{\left\vert {{\mathcal{C}}} \right\vert}}^{-1} {{\left\langle v_n,1 \right\rangle}} \int_{{\mathbb{R}}^d} \,{\mathrm{e}}^{-ik\cdot x}\psi(x)\,dx
= {{\left\vert {{\mathcal{C}}} \right\vert}}^{-1/2} {{\left\langle v_n,1 \right\rangle}}\, \widehat{\psi}(k).
\end{array}$$ Therefore $$\lim_{{\epsilon}\to 0} \pi_n^{\epsilon}(\psi) = {{\left\vert {{\mathcal{C}}} \right\vert}}^{-1/2} {{\left\langle v_n,1 \right\rangle}}\, \psi.$$ The following proposition shows that the regularity of the crystal potential leads to decay properties of the coefficients ${{\left\langle v_n,1 \right\rangle}} = \int_{\mathcal{C}}v_n(x)\,dx$.
\[vn1\] Let $W_{\mathcal{L}}$ be in $C^\infty$. Then for any integer $p$, the coefficients ${{\left\langle v_n,1 \right\rangle}}$ satisfy the inequality $${{\left\vert {{{\left\langle v_n,1 \right\rangle}}} \right\vert}} \leq {C_p \over E_n^p},$$ where $C_p$ is a constant only depending on $\|W_{\mathcal{L}}\|_{W^{2p, \infty}}$.
We first remark that $$E_n^p {{\left\langle v_n, 1 \right\rangle}} = {{\left\langle H_{\mathcal{L}}^p v_n, 1 \right\rangle}} = {{\left\langle v_n, H_{\mathcal{L}}^p 1 \right\rangle}}$$ (where $H_{\mathcal{L}}^p$ denotes the $p$-th power of $H_{\mathcal{L}}$, not to be confused with the notation $H_{\mathcal{L}}^{\epsilon}$ introduced in Sec. \[sec2\]). Now it is readily seen that if $W_{\mathcal{L}}\in W^{2p, \infty}$, then $H_{\mathcal{L}}^p 1 \in L^\infty$ with ${{\| {H_{\mathcal{L}}^p 1} \|}}_{L^\infty} \leq C {{\| {W_{\mathcal{L}}} \|}}_{W^{2p, \infty}}$, for a suitable constant $C\geq0$. Then $$E_n^p {{\left\vert {{{\left\langle v_n, 1 \right\rangle}}} \right\vert}} \leq \|v_n\|_{L^2} {{\| {H_{\mathcal{L}}^p 1} \|}}_{L^2} \leq C_p,$$ with $C_p$ only depending on ${{\| {W_{\mathcal{L}}} \|}}_{W^{2p, \infty}}$, which ends the proof.
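This decay is very fast for smooth potentials and easy to observe numerically. The following sketch is purely illustrative (one space dimension, cosine potential, arbitrary truncation; hypothetical data); the Bloch functions are represented by their plane-wave coefficients, so that $\langle v_n,1\rangle$ is, up to a normalization constant, the zeroth Fourier coefficient of $v_n$.

```python
import numpy as np

# Illustration of the decay of <v_n, 1> (hypothetical data): 1D operator
# -(1/2) d^2/dy^2 + W0*(1 - cos y) in the plane-wave basis e^{i m y}; the
# coefficient <v_n, 1> is, up to normalization, the m = 0 component of v_n.
M, W0 = 24, 3.0
m = np.arange(-M, M + 1)
H = np.diag(0.5 * m.astype(float) ** 2 + W0)
H += np.diag(-0.5 * W0 * np.ones(2 * M), 1) + np.diag(-0.5 * W0 * np.ones(2 * M), -1)

E, V = np.linalg.eigh(H)
c = np.abs(V[M, :])        # |<v_n, 1>| up to a normalization constant
for n in range(0, 21, 4):  # odd-parity bands have <v_n, 1> = 0 exactly, by symmetry
    print(n, round(float(E[n]), 2), float(c[n]))   # c decays much faster than any power of 1/E_n
```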
We also have the following property.
\[decay\] Let $\lambda$ and $\lambda'$ be two elements of the reciprocal lattice ${\mathcal{L}}^*$. Assume that $W_{\mathcal{L}}\in C^\infty$. Then, for any integers $k, p$, we have the estimate $${{\left\vert {{{\left\langle H_{\mathcal{L}}^k {\mathrm{e}}^{i\lambda \cdot x}, H_{\mathcal{L}}^k {\mathrm{e}}^{i\lambda'\cdot x} \right\rangle}}} \right\vert}} \leq
C_{k,p}{ (1+ {{\left\vert {\lambda} \right\vert}}^{2k} {{\left\vert {\lambda'} \right\vert}}^{2k}) \over 1 + {{\left\vert {\lambda - \lambda'} \right\vert}}^{2p}},$$ for a suitable constant $C_{k,p}\geq0$.
It is clear that $H_{\mathcal{L}}^k {\mathrm{e}}^{i\lambda \cdot x} = \sum_{{{\left\vert {\alpha} \right\vert}}=0}^{2k} \lambda^\alpha V_\alpha(x) {\mathrm{e}}^{i\lambda \cdot x}$, where $V_\alpha$ contains products of $W_{\mathcal{L}}$ and its derivatives up to order $2k-{{\left\vert {\alpha} \right\vert}}$. Therefore $${{\left\langle H_{\mathcal{L}}^k {\mathrm{e}}^{i\lambda \cdot x}, H_{\mathcal{L}}^k {\mathrm{e}}^{i\lambda'\cdot x} \right\rangle}} =
\sum_{{{\left\vert {\alpha} \right\vert}}, {{\left\vert {\beta} \right\vert}} = 0}^{2 k}\lambda^\alpha (\lambda')^\beta
\int_{\mathcal{C}}V_\alpha(x) V_\beta(x) \,{\mathrm{e}}^{i(\lambda-\lambda')\cdot x}dx.$$ Now the result can be obtained by simply integrating by parts $2p$ times.
The estimate of Lemma \[decay\] is not optimal and can certainly be refined, but this is beyond the scope of our paper. The next proposition follows from the previous result.
Assume $W_{\mathcal{L}}\in L^\infty$ and let $f_n^{\epsilon}= \pi_n^{\epsilon}(\psi)$ be the envelope functions of $\psi$. Then the following estimate holds for any $\mu \geq 0$: $${{\| {f^{\epsilon}} \|}}_{{\mathcal{L}}^2_\mu}^2 =
\sum_{n} {{\| { (1+ {{\left\vert {k} \right\vert}}^2)^{\mu/2}\, \widehat{ f_n^{\epsilon}} (k)} \|}}_{L^2}^2 \leq C_\mu \|\psi\|_{H^\mu}^2.
\label{hshs}$$ If now $W_{\mathcal{L}}$ is in $C^\infty$, then the following estimate holds for any integer $s$: $$\sum_{n} E_n^{s} {{\| {f_n^{\epsilon}} \|}}_{L^2}^2 \leq C_s (\|\psi\|_{L^2}^2 +{\epsilon}^{2s} \|\psi\|_{H^s}^2).
\label{enhs}$$
Let us first prove \eqref{hshs}. Using the identity $$\widehat{ f_n^{\epsilon}}(k) = {{\left\vert {{\mathcal{B}}\,} \right\vert}}^{-1/2} \int_{{\mathbb{R}}^d} \,{\mathbbm{1}}_{{\mathcal{B}}/{\epsilon}}(k)
\,{\mathrm{e}}^{-ik\cdot x}\,v_n\left({x\over {\epsilon}}\right)\,\psi(x)\,dx,$$ as well as the decomposition $$v_n (x) = {1\over {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2}} \sum_{\lambda \in {\mathcal{L}}^*} v_{n,\lambda} e^{i\lambda\cdot x}$$ where $v_{n,\lambda} = {{\left\langle v_n, { e^{i\lambda\cdot x}\over {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2}} \right\rangle}}$, we obtain, $$\begin{gathered}
\sum_{n} {{\| { (1+ |k|^2)^{\mu/2}\,\widehat{ f_n^{\epsilon}}(k) } \|}}_{L^2}^2
\\
= \sum_n \sum_{\lambda,\lambda'} \int_{{\mathcal{B}}/{\epsilon}}
(1+|k|^2)^\mu v_{n,\lambda} \overline{v_{n,\lambda'}} \,
\widehat{\psi}\left(k -\textstyle{{\lambda\over {\epsilon}}}\right)
\overline{\widehat{\psi}}\left(k -\textstyle{{\lambda'\over {\epsilon}}}\right) dk.\end{gathered}$$ Summing first with respect to $n$ and using the identity $$\sum_nv_{n,\lambda} \overline{v_{n,\lambda'}} =
\frac{1}{{{\left\vert {{\mathcal{C}}} \right\vert}}} \,\langle e^{i\lambda\cdot x}, e^{i\lambda' \cdot x} \rangle
= \delta_{\lambda,\lambda'},$$ the right hand side of the above identity takes the simple form $${\displaystyle}\sum_{n} {{\| { (1+ |k|^2)^{\mu/2}\,\widehat{ f_n^{\epsilon}}(k) } \|}}_{L^2}^2
= {\displaystyle}\sum_{\lambda \in {\mathcal{L}}^*} \int_{{\mathcal{B}}/{\epsilon}} (1+|k|^2)^\mu
{{\left\vert {\widehat{\psi}\left(k -\textstyle{{\lambda\over {\epsilon}}}\right)} \right\vert}}^2 dk.$$ It is now readily seen that there exists a constant $c \geq 1$, only depending on the fundamental cell ${\mathcal{C}}$, such that for all $k\in{\mathcal{B}}$ and for all $\lambda \in {\mathcal{L}}^*$, we have the estimate $${{\left\vert {k} \right\vert}} \leq c {{\left\vert {k-\lambda} \right\vert}},$$ so that $$\begin{gathered}
\sum_{n} {{\| { (1+ {{\left\vert {k} \right\vert}}^2)^{\mu/2} \,\widehat{ f_n^{\epsilon}}(k)} \|}}_{L^2}^2 \leq
c^{2\mu} \sum_{\lambda \in {\mathcal{L}}^*} \int_{{\mathcal{B}}/{\epsilon}} \left(1+{{\left\vert {k-\textstyle{{\lambda \over {\epsilon}}}} \right\vert}}^2\right)^\mu
{{\left\vert {\widehat{\psi}\left(k -\textstyle{{\lambda \over {\epsilon}}}\right)} \right\vert}}^2\, dk
\\
= c^{2\mu} \int_{{\mathbb{R}}^d} (1+{{\left\vert {k} \right\vert}}^2)^\mu {{\left\vert {\widehat{\psi}(k)} \right\vert}}^2\, dk.\end{gathered}$$ This implies that a suitable constant $C_\mu$ exists such that \eqref{hshs} holds. Let us now prove \eqref{enhs}. We proceed analogously and find $$\sum_{n} E_n^{s} {{\| {f_n^{\epsilon}} \|}}_{L^2}^2
= {1\over (2\pi)^d} {\displaystyle}\sum_n\sum_{\lambda,\lambda'} \int_{{\mathcal{B}}/{\epsilon}} E_n^{s} \,
v_{n,\lambda} \overline{v_{n,\lambda'}} \,\widehat{\psi}\left(k -\textstyle{{\lambda\over {\epsilon}}}\right)
\overline{\widehat{\psi}}\left(k -\textstyle{{\lambda'\over {\epsilon}}}\right) dk.$$ As above, we first make the sum over the index $n$ and, therefore, we need to evaluate $$\sum_{n} E_n^{s} v_{n,\lambda} \overline{v_{n,\lambda'}}.$$ We first remark that $E_n^s v_{n,\lambda} ={{\left\langle H_{{\mathcal{L}}}^s v_n, { e^{i\lambda\cdot x}\over {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2}} \right\rangle}}
= {{\left\langle v_n, H_{{\mathcal{L}}}^s { e^{i\lambda\cdot x}\over {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2}} \right\rangle}}$. Therefore $$\sum_{n} E_n^{s} \, v_{n,\lambda} \overline{v_{n,\lambda'}} =
{1\over {{\left\vert {{\mathcal{C}}} \right\vert}} } {{\left\langle H_{{\mathcal{L}}}^s e^{i\lambda\cdot x}, e^{i\lambda'\cdot x} \right\rangle}}.$$ Contrary to the proof of \eqref{hshs}, the obtained formula is not diagonal in $(\lambda, \lambda')$, but Lemma \[decay\] leads to the following estimate, which holds for large enough integers $p$: $$\begin{aligned}
\sum_{n} E_n^{s} {{\| {f_n^{\epsilon}} \|}}_{L^2}^2
&\leq C_{s,p} \sum_{\lambda,\lambda'} \int_{{\mathcal{B}}/{\epsilon}}
{1 + {{\left\vert {\lambda} \right\vert}}^{2s} \over 1 + {{\left\vert {\lambda-\lambda'} \right\vert}}^{2p}}
{{\left\vert {\widehat{\psi}\left(k -\textstyle{{\lambda\over {\epsilon}}}\right)
\overline{\widehat{\psi}}\left(k -\textstyle{{\lambda'\over {\epsilon}}}\right) } \right\vert}} dk
\\[6pt]
&\leq \frac{C_{s,p}}{2} \sum_{\lambda,\lambda'} \int_{{\mathcal{B}}/{\epsilon}}
{1 + {{\left\vert {\lambda} \right\vert}}^{2s} \over 1 + {{\left\vert {\lambda-\lambda'} \right\vert}}^{2p}}
\left[{{\left\vert {\widehat{\psi}\left(k -\textstyle{{\lambda\over {\epsilon}}}\right)} \right\vert}}^2 +
{{\left\vert {\overline{\widehat{\psi}}\left(k -\textstyle{{\lambda'\over {\epsilon}}}\right) } \right\vert}}^2\right] dk
\\[6pt]
&\leq C_{s,p} \sum_{\lambda\in{\mathcal{L}}^*} \int_{{\mathcal{B}}/{\epsilon}} (1 + {{\left\vert {\lambda} \right\vert}}^{2s} )
{{\left\vert {\widehat{\psi}\left(k -\textstyle{{\lambda\over {\epsilon}}}\right)} \right\vert}}^2\, dk.
\end{aligned}$$ Note that we used the fact that, for large enough $p$, the following estimates hold with constants $C_1$ and $C_2$ only depending on $s$ and $p$ $$\sum_{\lambda \in {\mathcal{L}}^*} {1 + {{\left\vert {\lambda} \right\vert}}^{2s} \over 1 + {{\left\vert {\lambda-\lambda'} \right\vert}}^{2p}}
\leq C_1 (1+ {{\left\vert {\lambda'} \right\vert}}^{2s}),
\quad
\sum_{\lambda' \in {\mathcal{L}}^*} {1 + {{\left\vert {\lambda} \right\vert}}^{2s} \over 1 + {{\left\vert {\lambda-\lambda'} \right\vert}}^{2p}}
\leq C_2 (1+ {{\left\vert {\lambda} \right\vert}}^{2s}).$$ Now, for $\lambda \neq 0$ and ${\epsilon}k \in {\mathcal{B}}$ it is readily seen that $|\lambda| \leq c_0 {{\left\vert {\lambda -{\epsilon}k} \right\vert}}$, where $c_0$ is a positive constant independent of $\lambda$ and $k$. Therefore, $$\sum_{\lambda\in{\mathcal{L}}^*} \int_{{\mathcal{B}}/{\epsilon}} (1 + {{\left\vert {\lambda} \right\vert}}^{2s} )
{{\left\vert {\widehat{\psi}\left(k -\textstyle{{\lambda\over {\epsilon}}}\right)} \right\vert}}^2\, dk
\leq {{\| {\psi} \|}}_{L^2}^2 + {\epsilon}^{2s} c_0^{2s} {{\| {\psi} \|}}_{H^s}^2,$$ which implies that a suitable constant $C_s$ exists such that holds.
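The lattice-sum bounds invoked above can also be probed numerically. The following short sketch (in dimension $d=1$, with an arbitrary dual-lattice spacing and an arbitrary admissible choice of the exponents, namely $2p>2s+d$) is only an illustration and plays no role in the proof.

```python
# Numerical sanity check (in dimension d = 1) of the lattice-sum estimate
#   sum_{lam in L*} (1 + |lam|^{2s}) / (1 + |lam - lam'|^{2p}) <= C (1 + |lam'|^{2s}).
# The dual-lattice spacing and the exponents are arbitrary admissible choices
# (any p with 2p > 2s + d makes the sum converge).
import numpy as np

s, p = 1.5, 3.0
spacing = 2.0 * np.pi                        # spacing of the dual lattice L* (illustrative)

lam = spacing * np.arange(-2000, 2001)       # truncated dual lattice
lam_probe = spacing * np.arange(-50, 51)     # values of lam' at which the bound is probed

ratios = []
for lp in lam_probe:
    lhs = np.sum((1.0 + np.abs(lam) ** (2 * s)) / (1.0 + np.abs(lam - lp) ** (2 * p)))
    ratios.append(lhs / (1.0 + np.abs(lp) ** (2 * s)))

print("max of LHS / (1 + |lam'|^{2s}) over the probed lam' :", max(ratios))
# the maximum stays bounded when the probe range is enlarged, as the estimate requires
```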
Postponed proofs {#post}
================
This section is devoted to the proofs of some results stated in the beginning of the paper.
Proof of Theorem \[T1\]
-----------------------
For any Schwartz function $\psi$ we can write $$\psi(x) = (2\pi)^{-d/2} \int_{{\mathbb{R}}^d} \hat \psi(k)\,{\mathrm{e}}^{ik\cdot x} dk
= \sum_{\eta \in {\mathcal{L}}^*} (2\pi)^{-d/2}
\int_{{\mathcal{B}}+ \eta} \hat \psi(k)\,{\mathrm{e}}^{ik\cdot x} dk$$ $$= \sum_{\eta \in {\mathcal{L}}^*} (2\pi)^{-d/2}\, {\mathrm{e}}^{i\eta\cdot x}
\int_{{\mathcal{B}}} \hat \psi(\xi+\eta)\,{\mathrm{e}}^{i\xi\cdot x}\, d\xi
= \sum_{\eta \in {\mathcal{L}}^*} {\mathrm{e}}^{i\eta\cdot x} G_\eta(x),$$ where $$G_\eta(x) = (2\pi)^{-d/2} \int_{{\mathcal{B}}} \hat \psi(\xi+\eta)\,{\mathrm{e}}^{i\xi\cdot x}\,d\xi$$ clearly belongs to ${\mathcal{F}}^*L^2_{\mathcal{B}}({\mathbb{R}}^d)$. Moreover, we have $$\sum_{\eta \in {\mathcal{L}}^*} {{\| {G_\eta} \|}}_{L^2}^2 =
\sum_{\eta \in {\mathcal{L}}^*} {{\| {\hat G_\eta} \|}}_{L^2}^2 =
\sum_{\eta \in {\mathcal{L}}^*} \int_{{\mathbb{R}}^d} {{\left\vert {\hat \psi(\xi+\eta){\mathbbm{1}}_{\mathcal{B}}(\xi)} \right\vert}}^2 d\xi$$ $$= \sum_{\eta \in {\mathcal{L}}^*} \int_{{\mathcal{B}}+\eta} {{\left\vert {\hat \psi(\xi)} \right\vert}}^2 d\xi
= {{\| {\psi} \|}}_{L^2}^2.$$ Thus, defining $$\label{Faux}
F(x,y) = \sum_{\eta \in {\mathcal{L}}^*} {\mathrm{e}}^{i\eta\cdot x} G_\eta(y),
\qquad
(x,y) \in {\mathcal{C}}\times{\mathbb{R}}^d,$$ we have that $F \in L^2({\mathcal{C}}\times{\mathbb{R}}^d)$ and $${{\left\vert {{\mathcal{C}}} \right\vert}}^{-1}{{\| {F} \|}}_{L^2({\mathcal{C}}\times{\mathbb{R}}^d)}^2
= \sum_{\eta \in {\mathcal{L}}^*} {{\| {G_\eta} \|}}_{L^2({\mathbb{R}}^d)}^2
= {{\| {\psi} \|}}_{L^2({\mathbb{R}}^d)}^2$$ (where we used the fact that $\{ {{\left\vert {{\mathcal{C}}} \right\vert}}^{-1/2}\,{\mathrm{e}}^{i\eta\cdot x} \mid \eta \in {\mathcal{L}}^* \}$ is an orthonormal basis of $L^2({\mathcal{C}})$). Since $\{ v_n \mid n \in {\mathbb{N}}\}$ is another orthonormal basis of $L^2({\mathcal{C}})$, we can also write $$F(x,y) = {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2}\,\sum_n f_n(y) v_n(x),$$ where $$f_n(y) = {{\left\vert {{\mathcal{C}}} \right\vert}}^{-1/2} {{\left\langle F(\cdot,y),v_n \right\rangle}}_{L^2({\mathcal{C}})}.$$ Note that $\hat f_n \in L^2_{\mathcal{B}}({\mathbb{R}}^d)$ for every $n$ and that $${{\| {\psi} \|}}_{L^2({\mathbb{R}}^d)}^2 = {{\left\vert {{\mathcal{C}}} \right\vert}}^{-1} {{\| {F} \|}}_{L^2({\mathcal{C}}\times{\mathbb{R}}^d)}^2
= \sum_n{{\| {f_n} \|}}_{L^2({\mathbb{R}}^d)}^2.$$ For $y=x$, yields , at least for Schwartz functions. However, it can be easily proved that the mapping $\psi \mapsto (f_0,f_1,\ldots)$ can be uniquely extended to an isometry between $L^2({\mathbb{R}}^d)$ and $\ell^2({\mathbb{N}},{\mathcal{F}}^*L^2_{\mathcal{B}}({\mathbb{R}}^d))$, with the properties and .
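The construction used in this proof can be illustrated numerically. The sketch below (in dimension one, on a periodic grid, with an arbitrary cell length and a Gaussian test function) decomposes a given $\psi$ into the pieces ${\mathrm{e}}^{i\eta\cdot x}G_\eta(x)$ with $\widehat{G_\eta}$ supported in ${\mathcal{B}}$, and checks the norm identity $\sum_\eta \|G_\eta\|_{L^2}^2=\|\psi\|_{L^2}^2$; it is an illustration only, not part of the argument.

```python
# One-dimensional illustration, on a periodic grid, of the decomposition
#   psi(x) = sum_eta e^{i eta x} G_eta(x),   supp(G_eta^) in B,
# and of the identity sum_eta ||G_eta||^2 = ||psi||^2 used above.
# The grid, the cell length a and the test function are arbitrary choices.
import numpy as np

N, dx = 4000, 0.05                       # the domain N*dx = 200 contains 200 unit cells
x = (np.arange(N) - N // 2) * dx
a = 1.0                                  # lattice constant; dual lattice L* = (2*pi/a) Z
b = 2.0 * np.pi / a                      # width of the Brillouin zone B = [-b/2, b/2)

psi = np.exp(-x ** 2) * np.exp(0.3j * x)          # smooth, rapidly decaying test function
psi_hat = np.fft.fft(psi)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

band = np.rint(k / b).astype(int)        # index m such that k - m*b lies in B
recon = np.zeros(N, dtype=complex)
norm_sq = 0.0
for m in np.unique(band):
    piece = np.fft.ifft(np.where(band == m, psi_hat, 0.0))   # = e^{i m b x} G_{m b}(x)
    G = piece * np.exp(-1j * m * b * x)                       # envelope, spectrum in B
    recon += piece
    norm_sq += np.sum(np.abs(G) ** 2) * dx

print("reconstruction error :", np.max(np.abs(recon - psi)))        # ~ machine precision
print("sum_eta ||G_eta||^2  :", norm_sq)
print("||psi||^2            :", np.sum(np.abs(psi) ** 2) * dx)       # the two coincide
```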
Proof of Theorem \[T2\]
-----------------------
Recalling definition , let $$\tilde{\theta}^{\epsilon}= {\mathcal{T}}_{1\over 3{\epsilon}}(\theta), \qquad \tilde{f}_n^{\epsilon}= {\mathcal{T}}_{1\over 3{\epsilon}} (f_n^{\epsilon}),$$ and define $$\tilde{\psi}^{\epsilon}(x) = {{\left\vert {{\mathcal{C}}} \right\vert}}^{1/2} \sum_n \tilde{f}_n^{\epsilon}(x)\, v_n^{\epsilon}(x).$$ Then, we can write $$\begin{aligned}
&\int_{{\mathbb{R}}^d} \theta(x)\Big[ |\psi(x)|^2 -\sum_{n} |f_n^{\epsilon}(x)|^2\Big] dx
= \int_{{\mathbb{R}}^d} \theta(x)\Big[{{\left\vert {\psi(x)} \right\vert}}^2 -|\tilde{\psi}^{\epsilon}(x)|^2\Big]dx \\[6pt]
&+ \int_{{\mathbb{R}}^d} \Big[ \theta(x)- \tilde{\theta}^{\epsilon}(x) \Big] |\tilde{\psi}^{\epsilon}(x)|^2\, dx + \int_{{\mathbb{R}}^d} \tilde{\theta}^{\epsilon}(x)\Big[|\tilde{\psi}^{\epsilon}(x)|^2 -\sum_{n} |\tilde{f}_n^{\epsilon}(x)|^2\Big] dx \\[6pt]
&+ \int_{{\mathbb{R}}^d} \tilde{\theta}^{\epsilon}(x) \sum_n \Big[|\tilde{f}^{\epsilon}_n(x)|^2 - |f_n^{\epsilon}(x)|^2\Big] dx + \int_{{\mathbb{R}}^d} \Big[ \tilde{\theta}^{\epsilon}(x)- \theta(x) \Big] \sum_n |f^{\epsilon}_n(x)|^2\, dx \\[6pt]
& = I_1 + I_2 + I_3 + I_4 + I_5.
\end{aligned}$$ Since $\operatorname{supp}(\widehat{\tilde{\theta}^{\epsilon}\tilde{f}^{\epsilon}_n }) \subset {\mathcal{B}}/3{\epsilon}+ {\mathcal{B}}/3{\epsilon}\subset {\mathcal{B}}/{\epsilon}$, the functions $\tilde{\theta}^{\epsilon}\tilde{f}^{\epsilon}_n$ are the envelope functions of $\tilde{\theta}^{\epsilon}\tilde{\psi}^{\epsilon}$, and the Parseval identity can be applied to $\tilde{\psi}^{\epsilon}$ and $\tilde{\theta}^{\epsilon}\tilde{\psi}^{\epsilon}$, which yields $I_3 = 0$.
As far as the terms $I_2$ and $I_5$ are concerned, we have $${{\left\vert {I_2} \right\vert}} \leq {{\| {\theta- \tilde{\theta}^{\epsilon}} \|}}_{L^\infty} {{\| {\tilde{\psi}^{\epsilon}} \|}}_{L^2}^2
= {{\| {\theta- \tilde{\theta}^{\epsilon}} \|}}_{L^\infty} \sum_n {{\| {\tilde{f}^{\epsilon}_n} \|}}_{L^2}^2
\leq {{\| {\theta- \tilde{\theta}^{\epsilon}} \|}}_{L^\infty} {{\| {\psi} \|}}_{L^2}^2$$ and, therefore, $I_2 \to 0$ as ${\epsilon}\to 0$. Similarly we can prove that $I_5 \to 0$.
Finally, if $R$ is the radius of a ball contained in ${\mathcal{B}}$, we have $$\begin{gathered}
{{\left\vert {I_1} \right\vert}} \leq
{{\| {\theta} \|}}_{L^\infty} \Big( {{\| {\psi} \|}}_{L^2}^2 - {{\| {\tilde{\psi}^{\epsilon}} \|}}_{L^2}^2 \Big) =
\\
{{\| {\theta} \|}}_{L^\infty} \sum_n \Big( {{\| {f_n^{\epsilon}} \|}}_{L^2}^2 - {{\| {\tilde{f}^{\epsilon}_n } \|}}_{L^2}^2 \Big)
\leq {{\| {\theta} \|}}_{L^\infty} \sum_n \int_{{{\left\vert {k} \right\vert}} > \frac{R}{3{\epsilon}}} {{\left\vert {\hat {f}^{\epsilon}_n(k)} \right\vert}}^2 dk.\end{gathered}$$ The last integral goes to 0 as ${\epsilon}\to 0$, because $\sum_n \int_{{\mathbb{R}}^d} |\hat {f}^{\epsilon}_n(k)|^2 dk = {{\| {\psi} \|}}_{L^2}^2$ and the dominated convergence theorem applies. Thus $I_1 \to 0$ and, in a similar way, we can also prove that $I_4\to 0$. In conclusion, $$\int_{{\mathbb{R}}^d} \theta(x)\Big[ |\psi(x)|^2 -\sum_{n} |f_n^{\epsilon}(x)|^2\Big] dx \to 0$$ as ${\epsilon}\to 0$, which proves the theorem.
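The support argument behind $I_3=0$ can also be checked numerically: if two (discretized) functions have Fourier transforms supported in ${\mathcal{B}}/3{\epsilon}$, the transform of their product is supported in ${\mathcal{B}}/3{\epsilon}+{\mathcal{B}}/3{\epsilon}\subset{\mathcal{B}}/{\epsilon}$. The following sketch, with illustrative grid and cut-off values, verifies this on random band-limited signals.

```python
# Discrete check of the support argument: if theta^ and f^ are supported in B/(3 eps),
# the transform of the product theta*f is supported in B/(3 eps) + B/(3 eps), hence in
# B/eps, which is what allows the Parseval identity to be applied and gives I_3 = 0.
# Grid and cut-off values are illustrative; the test signals are complex-valued.
import numpy as np

N, dx = 8192, 0.02
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
rng = np.random.default_rng(0)

eps, half_B = 0.1, np.pi                 # B = [-pi, pi), so B/eps = [-pi/eps, pi/eps)
cut = half_B / (3.0 * eps)

def bandlimited(cutoff):
    """Random signal whose Fourier transform is supported in [-cutoff, cutoff]."""
    spec = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * (np.abs(k) <= cutoff)
    return np.fft.ifft(spec)

theta, f = bandlimited(cut), bandlimited(cut)
prod_hat = np.fft.fft(theta * f)

outside = np.abs(k) > half_B / eps
print("max |(theta f)^| outside B/eps :", np.max(np.abs(prod_hat[outside])))   # ~ 0
print("max |(theta f)^| overall       :", np.max(np.abs(prod_hat)))
```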
[**A**cknowledgements.]{} N. Ben Abdallah acknowledges support from the project QUATRAIN (BLAN07-2 212988) funded by the French Agence Nationale de la Recherche and from the Marie Curie Project DEASE: MEST-CT-2005-021122 funded by the European Union. L. Barletti acknowledges support from Italian national research project PRIN 2006 “Mathematical modelling of semiconductor devices, mathematical methods in kinetic theories and applications” (2006012132\_004).
[^1]: In solid state physics the Brillouin zone used has a slightly different definition. However, the two definitions are equivalent for our purposes.
---
author:
- 'E.Rigliaco'
- 'R.Gratton'
- 'D.Mesa'
- 'V.D’Orazi'
- 'M.Bonnefoy'
- 'J.M.Alcalà'
- 'S.Antoniucci'
- 'F.Bacciotti'
- 'M.Dima'
- 'B.Nisini'
- 'L.Podio'
- 'M.Barbieri'
- 'R.Claudi'
- 'S.Desidera'
- 'A.Garufi'
- 'E.Hugot'
- 'M.Janson'
- 'M.Langlois'
- 'E.L.Rickman'
- 'E.Sissa'
- 'M.Ubeira Gabellini'
- 'G.van der Plas'
- 'A.Zurlo'
- 'Y.Magnard'
- 'D.Perret'
- 'R.Roelfsema'
- 'L.Weber'
bibliography:
- 'ref\_RCrA.bib'
date: 'Received: 16 September 2019; accepted: 30 September 2019'
title: 'Investigating the nature of the extended structure around the Herbig star RCrA using integral field and high-resolution spectroscopy'
---
[We present a detailed analysis of the extended structure detected around the young and close-by Herbig Ae/Be star R CrA. This is a young triple system with an intermediate mass central binary whose separation is of the order of a few tens of the radii of the individual components, and an M-star companion at about 30 au.]{} [Our aim is to understand the nature of the extended structure by means of combining integral-field and high-resolution spectroscopy.]{} [We conducted the analysis based on FEROS archival optical spectroscopy data and adaptive optics images and integral-field spectra obtained with SINFONI and SPHERE at the VLT.]{} [The observations reveal a complex extended structure that is composed of at least two components: a non-uniform wide cavity whose walls are detected in continuum emission up to 400 au, and a collimated wiggling jet detected in the emission lines of Helium and Hydrogen. Moreover, the presence of \[Fe [ii]{}\] emission projected close to the cavity walls suggests the presence of a slower-moving wind, most likely a disk wind. The multiple components of the optical forbidden lines also indicate the presence of a high-velocity jet co-existing with a slow wind. We constructed a geometrical model of the collimated jet flowing within the cavity using intensity and velocity maps, finding that its wiggling is consistent with the orbital period of the central binary. The cavity and the jet do not share the same position angle, suggesting that the jet is itself experiencing a precession motion possibly due to the wide M-dwarf companion.]{} [We propose a scenario that closely agrees with the general expectation of a magneto-centrifugally launched jet. These results build upon the extensive studies already conducted on R CrA.]{}
Introduction
============
Herbig Ae/Be stars [@Herbig1960] are pre-main sequence stars of intermediate mass covering the range between low-mass T Tauri stars (TTSs), and the embedded massive young stellar objects. They are considered the high-mass counterparts of pre-main sequence T Tauri stars [@Strom1972; @Cohen1979; @Finkenzeller1984]. These stars, like T Tauri stars, show rich emission-lines spectra, infrared continuum excess and veiled photospheric absorption. The formation of stars in the low and intermediate-mass regimes involves accretion disks, and fast collimated outflows and jets. The accretion activity is established from a spectroscopic point of view through the presence of emission lines in the stellar spectrum, in wavelength ranges that span from the ultraviolet to the infrared (e.g. @Alcala2017 [@Mendigutia2012]). Jets and outflows in TTSs are also spectroscopically revealed by the analysis of emission lines in their spectrum (e.g., @Edwards2007 [@Nisini2018]), while there is instead a paucity of detected jets and outflows around intermediate mass Herbig Ae/Be stars (e.g., @Grady2003 [@Ellerbroek2014]). This is partially due to the shorter time the intermediate mass objects spend in their pre-main sequence phase.
The advent of high-contrast scattered light observations of protoplanetary disks around TTSs and Herbig Ae/Be stars performed with Adaptive Optics techniques using instruments such as the Gemini Planet Imager (GPI: [@Macintosh2014]) or the Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE: [@Beuzit2019]) has allowed for the investigation of the immediate surroundings of these stars, revealing a wealth of extended structures: concentric rings (e.g. [@Ginski2016; @Perrot2016; @Feldt2017]), cavities (e.g. [@Pohl2017; @Ligi2018]), spiral arms (e.g. [@Maire2017; @Benisty2017]), and asymmetries. ALMA, on the other hand, has allowed for the investigation of the immediate surroundings of younger and more embedded stars, revealing in turn non-axisymmetric features (e.g. [@van2013major]), multiple narrow rings (e.g. [@Perez2016; @fedele2017; @fedele2018; @Huang2018]). These features are usually associated with circumstellar/protoplanetary disks, but there are a few cases where the images reveal the presence of extended and elongated structures in the jet directions. The analysis of these elongated structures from Herbig stars is important because it allows us to study if the origins of the observed features are similar to T Tauri stars, that is, linked to interaction between magnetic fields and the circumstellar environment [@Hubrig2018]. Only a few observations of jets around close-by (less than 200 pc from the Sun) Herbig stars exist. A collimated jet around the Herbig Ae star HD 163296 was detected in Ly$\alpha$ by @Devine2000 and @Grady2000. A bipolar jet is also driven by the Herbig Ae stars MWC480 and HD104237 as shown by [@Grady2010; @Grady2004]. Recently, [@Garufi2019] have revealed, in scattered light, a dip along the jet axis around the intermediate mass star RY Tau that is consistent with an outflow cavity carved in the ambient envelope by the jet and the wind outflow. Further out Herbig stars have also been observed driving jets: extended emissions from both components of the Z CMa system have been revealed by [@Antoniucci2016] using the ZIMPOL instrument (Zurich Imaging Polarimeter) of SPHERE at the Very Large Telescope (VLT). A jet and counter-jet were also revealed in LkHa233 using the Hubble Space Telescope data [@Melnikov2008]. The paucity of observations of jets around Herbig stars with respect to the higher number of jets observed around T Tauri stars might be either due to an intrinsic abundance of TTSs, or to a shorter timescale of these structures around Herbig stars. In either case, any new observation of jets around Herbig stars is essential to shedding light on the accretion/ejection mechanisms operating on Herbig stars.
R CrA (HIP 93449) is an ideal target to enlarge the number of Herbig Ae/Be stars where jet-like structures are observed. It is the brightest member of the Coronet Cluster, belonging to the Corona Australis star-forming region, which is one of the nearest and most active regions of ongoing star formation. The Coronet Cluster is highly obscured [@Taylor1984], and characterized by high and spatially-variable extinction with A$_V$ up to 45 mag [@Neuhauser2008]. This star has been extensively studied over the years. @Takami2003 suggested the existence of a companion and of an outflow to explain the positional photo-center displacement observed in spectro-astrometric observations both in the blue- and red-shifted wings of the spectrally resolved H$\alpha$-line. The spectral type of R CrA has also been largely debated. In analyzing IRAS data, @Bibo1992 found spectral type B8, stellar radius of 3.1 R$_{\odot}$, bolometric luminosity between 99-166 L$_\odot$, and mass 3.0 M$_{\odot}$. Spectral type A5 and L$_{bol}$=92 L$_\odot$ were found by @Chen1997, while R CrA was classified as F5 by @Hillenbrand1992 [@Natta1993; @GarciaLopez2006]. R CrA is in a particularly early evolutionary phase [@Malfait1998], because it is still embedded in its dust envelope, whose emission dominates the spectral energy distribution from mid-IR to millimeter wavelengths [@Kraus2009]. At optical wavelengths, the star is known to be highly variable, both on long and short time scales [@Bellingham1980]. @Sissa2019 found A$_V$=5.47$\pm$0.5 mag, slightly larger than the value obtained by @Bibo1992 (A$_V$=4.65 mag). R CrA shows indications of active accretion, and various outflow tracers have been reported. For instance, a compact bipolar molecular outflow with an east-west orientation [@Walker1984; @Levreault1988; @Graham1993], as well as several Herbig-Haro objects (in particular HH 104 A/B), have been associated with R CrA [@Hartigan1987; @Graham1993]. However, more recent studies [@Anderson1997; @Wang2004] convincingly identified the source IRS 7 as the driving source of these outflows, making a physical association with R CrA rather unlikely. The outflow was also detected in infra-red (IR) through CO emission (at 4.6$\mu$m) using spectro-astrometry by @vanderPlas2015. They notice that the CO emission is located entirely on one side of the IR continuum emission, and blueshifted by $\sim$10 km/s with respect to the surrounding Corona Australis molecular cloud. They point out that CO emission in the R CrA spectrum is likely due to an outflow, as also suggested from the bow-shocks seen in shocked H$_2$ emission in the immediate vicinity of R CrA (@Kumar2011). The study of R CrA has gained new momentum in recent years, thanks to high-contrast imaging observations obtained with SPHERE [@Mesa2019] and NAOS-CONICA at the VLT [@Cugno2019]. These studies focused their attention on the detection of a stellar companion as close as 19-28 au, and with a mass in the range 0.1-1.0 M$_{\odot}$. Moreover, in the [@Mesa2019] analysis, the presence of an elongated jet-like structure was pointed out, together with some evidence of a disk seen almost edge-on. Recently, @Sissa2019 analyzed the light curve of the star, and found that the central object of R CrA is a binary with masses of 3.0 and 2.3 M$_{\odot}$ for the two components and with a period of approximately 66 days. Together with the discovery of the M-type companion, this makes R CrA a triple system.
In this paper, we analyze images of R CrA acquired with SPHERE and the SINFONI integral field spectrograph at the VLT, and an archival optical spectrum from the Fibre-fed Extended Range Optical Spectrograph (FEROS). In Sect. 2, we describe the collected data. In Sect. 3, we describe the data analysis. In Sect. 4, we propose a scenario that reconciles all the findings, and in Sect. 5, we summarize and conclude.
Observations and data reduction
===============================
As mentioned in the previous section, several studies at different wavelength ranges have been conducted over the years on R CrA. The use of high-contrast imagers has made it possible to shed new light on this interesting source. In this section, we describe the observations of R CrA acquired in recent years with SPHERE and SINFONI at VLT (summarized in Table \[log\_tab\]), and FEROS at the 2.2 m telescope in La Silla. We focus here on the elongated structure already reported in @Mesa2019, and identified in Figure \[IFS\_total\], analyzing the emission spectrum of the source, and investigating the origin of the emitting lines in light of images of the jet-like structure observed in the SPHERE data.
Date Instrument band FOV Spaxel Spectral resolution Wavelength coverage
------------ ------------ -------- --------------------------------------------------- -------------- --------------------- -----------------------
2018-06-19 IFS Y-H 1.7$^{\prime\prime} \times$1.7$^{\prime\prime}$ 7.49mas$^2$ $\sim$30 0.95–1.65$\mu$m
2018-06-19 IRDIS K1, K2 11.0$^{\prime\prime} \times$11.0$^{\prime\prime}$ 12.25mas$^2$ — 2.11$\mu$m,2.25$\mu$m
2018-09-11 SINFONI H 0.8$^{\prime\prime} \times$0.8$^{\prime\prime}$ 12.5mas$^2$ $\sim$3000 1.45–1.85$\mu$m
SPHERE data
-----------
![Top panel: Median image obtained using monochromatic PCA with 1 principal component for spectral channel on SPHERE-IFS image, showing the continuum emission. The bright source at about 0.3 arcsec south-east from the center, identified with B, is the M-star companion detected by @Mesa2019, which appears elongated as a consequence of the differential imaging adopted. Solid and dashed lines refer to the median PA and aperture of the “elongated structure” as measured in Sect. 3.2, respectively. Bottom panel: Same image as top panel, but only in channels containing Helium I at 1.083 $\mu$m line. The white solid and dashed lines show the PA of the extended structure, as discussed in Sect. 3. The white circle shows the size of the coronagraph used. []{data-label="IFS_total"}](IFS_continuum){width="\linewidth"}
![Top panel: Median image obtained using monochromatic PCA with 1 principal component for spectral channel on SPHERE-IFS image, showing the continuum emission. The bright source at about 0.3 arcsec south-east from the center, identified with B, is the M-star companion detected by @Mesa2019, which appears elongated as a consequence of the differential imaging adopted. Solid and dashed lines refer to the median PA and aperture of the “elongated structure” as measured in Sect. 3.2, respectively. Bottom panel: Same image as top panel, but only in channels containing Helium I at 1.083 $\mu$m line. The white solid and dashed lines show the PA of the extended structure, as discussed in Sect. 3. The white circle shows the size of the coronagraph used. []{data-label="IFS_total"}](IFS_HeI){width="\linewidth"}
R CrA was observed with SPHERE [@Beuzit2019] in four different epochs, in coronagraphic mode, as described in [@Mesa2019]. In this paper, we use the observation of the night 2018-06-19 as part of the SHINE (SpHere INfrared survey of Exoplanets - @Chauvin2017) guaranteed time observations, since they were taken in better weather conditions and reached deeper contrast with respect to the other sets of observations. The instrument was used in IRDIFS$\_$EXT mode, allowing simultaneous observation with the integral-field spectrometer IFS [@Claudi2008] and with the dual band differential imager and spectrograph IRDIS [@Dohlen2008]. In this mode, IFS provides diffraction-limited observations covering the Y and H bands (0.95-1.65 $\mu$m), with a spectral resolution of $\sim$30 inside the 1.7$^{\prime\prime} \times$ 1.7$^{\prime\prime}$ field of view. The IRDIS sub-system, also diffraction-limited, was set in its dual-band imaging mode (DBI, @Vigan2010) to simultaneously observe with the K1 and K2 filters (K1=2.110 $\mu$m and K2=2.251 $\mu$m, width 0.1 $\mu$m) with a 11.0$^{\prime\prime} \times$ 11.0$^{\prime\prime}$ field of view. Details on the observing mode and data reduction are reported in @Mesa2019. Data were acquired in pupil stabilized mode, with a sequence of acquisition, while the field of view (FOV) rotated. This allows application of angular differential imaging (ADI, [@Marois2006]) to reduce the impact of speckle noise: in general, we use an approach based on a principal component analysis (PCA, [@Soummer2012]) for differential imaging. The IFS data allows a further spectral dimension of the hyper-data cube. The IFS image, obtained using a PCA done along temporal and spectral channels (ASDI) simultaneously, with 10 principal components (@Mesa2015) by [@Mesa2019], highlighted the presence of an elongated structure in the north-east direction, as well as the hint of a disk seen almost edge-on in the image. In the following we refer to this “elongated structure”, deferring its interpretation to the next sections.
In the present paper, we prefer to use images obtained with a PCA done separately on the different spectral channels (monochromatic PCA) in order to retrieve all the information coming from every channel; this makes it possible to highlight the different wavelengths sampled in the image. In addition, we also used images that are obtained simply subtracting a radial-profile and that are not affected by the self-subtraction of non-uniform imaged structures typical of differential imaging. The median IFS image showing the continuum emission in Y and H band and the most prominent line emission of the Helium at 1.083$\mu$m is shown in Figure \[IFS\_total\]. Figure \[IRDIS\_total\] shows the continuum emission obtained in K1 and K2 with IRDIS, and the subtraction of the K1-K2 filters.
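For readers unfamiliar with this kind of post-processing, the following minimal sketch outlines a classical PCA/ADI reduction of a pupil-stabilized sequence (build principal components from the temporal sequence, subtract their projection from each frame, de-rotate, and median-combine). It is a simplified stand-in and not the actual SPHERE pipeline used for the images shown here; the synthetic cube, the number of components, and the rotation convention are placeholders.

```python
# Minimal sketch of PCA-based angular differential imaging (ADI) on a pupil-stabilized
# sequence. This is a simplified stand-in for the reduction referred to in the text.
import numpy as np
from scipy.ndimage import rotate

def pca_adi(cube, parallactic_angles, n_comp=1):
    """cube: (n_frames, ny, nx) science frames; parallactic angles in degrees."""
    n_frames, ny, nx = cube.shape
    X = cube.reshape(n_frames, -1).astype(float)
    X -= X.mean(axis=0)                          # remove the temporal mean frame
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_comp]                          # first n_comp eigen-images
    residual = X - (X @ modes.T) @ modes         # project out the principal components
    residual = residual.reshape(n_frames, ny, nx)
    # de-rotate each residual frame to a common sky orientation and combine;
    # the sign convention depends on the instrument orientation
    derot = [rotate(residual[i], -parallactic_angles[i], reshape=False, order=1)
             for i in range(n_frames)]
    return np.median(derot, axis=0)

# toy usage with a synthetic cube (placeholder data)
rng = np.random.default_rng(1)
cube = rng.normal(size=(20, 64, 64))
angles = np.linspace(0.0, 30.0, 20)
final = pca_adi(cube, angles, n_comp=1)
```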
![Top panel: Median image obtained using monochromatic PCA with one principal component per spectral channel on SPHERE-IRDIS image, showing continuum emission. Contours are also shown. Bottom panel: Subtraction of K1-K2 filters. No signal is left. For comparison, the contours used in the top panel are also reported. Solid and dashed lines as in Figure \[IFS\_total\].[]{data-label="IRDIS_total"}](IRDIS_continuum.png){width="\linewidth"}
![Top panel: Median image obtained using monochromatic PCA with one principal component per spectral channel on SPHERE-IRDIS image, showing continuum emission. Contours are also shown. Bottom panel: Subtraction of K1-K2 filters. No signal is left. For comparison, the contours used in the top panel are also reported. Solid and dashed lines as in Figure \[IFS\_total\].[]{data-label="IRDIS_total"}](IRDIS_subtraction.png){width="\linewidth"}
SINFONI data
------------
{width="\hsize"}
A H-band image of R CrA was obtained with the AO-fed integral field spectrograph SINFONI [@Eisenhauer2003; @Bonnet2004] operating between 1.45–1.85 $\mu$m with a resolution R$\sim$3000. The data were collected during the night 2018-09-11 under the program 2101.C-5048(A) (P.I. D.Mesa) with a spatial sampling of 0.0125$^{\prime\prime}$/pxl $\times$ 0.0125$^{\prime\prime}$/pxl for a total field of view of 0.8$^{\prime\prime} \times$ 0.8$^{\prime\prime}$. The data reduction is described in [@Mesa2019]. The phase of the central binary of the SINFONI observation is 0.459 [@Sissa2019], and according to the interpretation of the light curve in that paper, the spectrum of the star should be dominated by the secondary at this epoch.
In Figure \[Fig:composite\_2\], we show the composite line emission images obtained for the H, H$_2$ and \[Fe [ii]{}\] lines detected in the SINFONI data. Green refers to H-emission, blue to H$_2$-emission, and red to \[Fe [ii]{}\]-emission. While H and H$_2$ are almost co-located, the \[Fe [ii]{}\] emission shows a clear offset, being present at the southern edge of the “elongated structure” seen in the continuum image. In the same Figure, we identify two separate regions: one indicated as “jet” along the “elongated structure” direction, and the other one indicated as “cavity”, which extends along the east direction. These regions were chosen arbitrarily, where the bulk of the line emission is observed. We extracted the spectra in these two regions, and they are shown in Figure \[Fig:spectra\_sinfoni\]. The spectra were obtained after dividing the total intensity at each wavelength by a synthetic spectrum of telluric absorption lines, and after subtracting the median continuum emission. The synthetic spectrum of telluric absorption was computed using the synthetic sky modeler, Telfit (@Gullikson2014), adopting the proper parameters for airmass, pressure, humidity, and temperature. The Brackett series Hydrogen recombination lines, as well as H$_2$ emission lines, and \[Fe [ii]{}\] lines, are identified in the spectra. A faint signal may be detected in Figure \[Fig:spectra\_sinfoni\] in the direction of the counter-“elongated structure”. This is most likely not a totally “elongated structure”-related signal, but it is contaminated by the typical butterfly pattern in the direction of the wind sometimes observed in high-contrast images [@Milli2017]. To assess this hypothesis, we checked, on the Paranal Ambient Conditions Archive, the wind direction for the night when the SINFONI observations were taken, which is indeed $\sim$140$^{\circ}$ (counted clockwise from north), in agreement with the direction of the signal from the “elongated structure” and counter-“elongated structure”.
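The extraction described above amounts to a simple operation on the reduced cube, sketched below; the telluric transmission model is assumed to be available as an array (in our case it was generated with TelFit for the conditions of the night), and the spatial box corresponds to the regions marked in Figure \[Fig:composite\_2\].

```python
# Sketch of the spectrum-extraction step described above: average the cube over a
# spatial box, divide by a telluric transmission model, and subtract the median
# continuum. The telluric model is an assumed input array (placeholder here).
import numpy as np

def extract_region_spectrum(cube, box, telluric_model):
    """cube: (n_wave, ny, nx); box: (y0, y1, x0, x1); telluric_model: (n_wave,)."""
    y0, y1, x0, x1 = box
    spec = cube[:, y0:y1, x0:x1].mean(axis=(1, 2))   # mean spectrum in the box
    spec = spec / telluric_model                     # correct the telluric absorption
    return spec - np.median(spec)                    # remove the (median) continuum
```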
![Spectra from SINFONI data along the elongated structure for both the “jet” and the “cavity” components. The locations of the Hydrogen recombination lines from the Brackett series are indicated, together with the \[Fe [ii]{}\] and H$_2$ lines. The spectra are extracted from the two boxes shown in Figure \[Fig:composite\_2\]. []{data-label="Fig:spectra_sinfoni"}](jet_cavity_spectra.png){width="\hsize"}
FEROS data
----------
![FEROS spectrum of R CrA. Top panels: regions around the H$\alpha$ and H$\beta$ lines. Bottom panels: regions around the \[O [i]{}\]$\lambda$6300/6363 and \[S [ii]{}\]$\lambda$4068/6716/6731 forbidden lines. The colored Gaussian profiles in the bottom panels refer to the multiple component deconvolution: in green, the HVC, in blue, the LVC, in red, the sum of the two components. []{data-label="Fig:forbidden_lines"}](RCrA_OISII.png){width="9cm"}
We retrieved an optical spectrum of R CrA from the ESO data-product Archive. It was acquired in 2009 with FEROS as part of the program 083.A-9013(A) and covers the wavelength range from 3500 Å to 9200 Å with a resolution of R$\simeq$48000 ($\simeq$6 km s$^{-1}$). The FEROS fiber diameter projected on the sky is 2.7$^{\prime\prime}$, and covers radii out to $\sim$200 au at the distance of R CrA. The spectrum was reduced by performing flat-fielding, wavelength calibration, and barycentric correction. Assuming the $\sim$66-day period of the central binary system [@Sissa2019], at the epoch of the FEROS observation, the spectrum was dominated by the primary star. We identified several emission lines in the spectrum of R CrA, and we discuss the emission spectrum in the next section.
Data Analysis
=============
The analysis of the images and spectra described in the previous section allows us to investigate the extent, morphology, and physical condition of the gas and dust in the region where the “elongated structure” is observed. The detected lines and continuum probe different components of this complex system and allow us to derive the properties of the jet, the cavity, and the accreting gas, as explained in the following sections.
Gas properties
--------------
----------- ------------ --------------- --------------- ----------------- --------------- --------------- ----------------
Element           Wavelength      LVC v$_c$        LVC FWHM         LVC EW            HVC v$_c$        HVC FWHM         HVC EW
                  (Å)             (km s$^{-1}$)    (km s$^{-1}$)    (Å)               (km s$^{-1}$)    (km s$^{-1}$)    (Å)
$[$O [i]{}$]$     6300            -5.8$\pm$0.2     45.6$\pm$0.7     1.22$\pm$0.01     -71.8$\pm$3.7    94.9$\pm$6.8     0.40$\pm$0.01
$[$O [i]{}$]$     6363            -6.2$\pm$0.4     43.2$\pm$1.4     0.50$\pm$0.01     -50.4$\pm$7.8    94.7$\pm$14.9    0.16$\pm$0.009
$[$S [ii]{}$]$    6731            -5.6$\pm$0.6     24.3$\pm$1.7     0.18$\pm$0.01     ...              ...              ...
$[$S [ii]{}$]$    6716            -7.6$\pm$6.1     28.5$\pm$2.9     0.093$\pm$0.009   ...              ...              ...
$[$S [ii]{}$]$    4068            -6.8$\pm$4.1     36.0$\pm$12.6    0.28$\pm$0.02     ...              ...              ...
$[$Fe [ii]{}$]$   7155            ...              ...              ...               -35.8$\pm$1.4    129.8$\pm$4.5    0.26$\pm$0.01
----------- ------------ --------------- --------------- ----------------- --------------- --------------- ----------------
The FEROS spectrum covers a wavelength range rich in emission lines that are diagnostic of accretion and outflow activity. Strong, double-peaked, self-absorbed H$\alpha$ and H$\beta$ emission lines are observed, together with atomic forbidden lines such as \[O [i]{}\] and \[S [ii]{}\].
The ratio between the intensity of the \[S [ii]{}\] lines at 6716 and 6731 Å has been widely used to derive the electron density $n_e$ of the emitting gas, because the two lines have similar upper-level energies. The intensity ratio is then sensitive to the electron density and almost independent of the temperature [@Osterbrock1989]. We used the revised diagnostic diagrams from [@Proxauf2014], which give $n_e$=7$\times$10$^{3}$cm$^{-3}$ for the \[S [ii]{}\] $\lambda$6716/6731 line ratio, assuming an electron temperature of 10,000 K. This value is in agreement with the value found from the intensity ratio of the \[S [ii]{}\] lines at 4069 and 6731 Å by [@Hamann1994].
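The density estimate quoted above is obtained by inverting a ratio-versus-density relation. The sketch below shows one way to perform such an inversion by interpolation on a tabulated diagnostic curve; the placeholder table only fixes the array shapes and must be replaced by an actual calibration (e.g. the revised diagrams of @Proxauf2014 used here) before the numbers are meaningful.

```python
# Sketch of the inversion used above: the observed [S ii] 6716/6731 intensity ratio is
# compared with a tabulated ratio-versus-density curve and inverted by interpolation.
# The placeholder table below is NOT a real calibration; substitute an actual
# diagnostic curve before interpreting the output.
import numpy as np

def electron_density(observed_ratio, log_ne_grid, ratio_grid):
    """Invert a monotonically decreasing ratio(n_e) curve; returns n_e in cm^-3."""
    # np.interp needs increasing abscissae, so feed the reversed arrays
    log_ne = np.interp(observed_ratio, ratio_grid[::-1], log_ne_grid[::-1])
    return 10.0 ** log_ne

# usage sketch (replace the table with a real [S ii] diagnostic curve):
log_ne_grid = np.linspace(1.0, 6.0, 51)      # log10 n_e
ratio_grid = np.linspace(1.45, 0.45, 51)     # placeholder shape only
n_e = electron_density(1.2, log_ne_grid, ratio_grid)
```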
The high resolution of the FEROS spectrum also allowed us to conduct an analysis of the multiple components of the Oxygen forbidden lines. Emission from \[O [i]{}\] lines at optical wavelengths is a well-established tracer of outflows in T Tauri stars. Several studies of these lines have shown that their emission is often blueshifted and is formed in an outflow whose redshifted part is obscured by the circumstellar disk (e.g. @Edwards1987). Moreover, the \[O [i]{}\] lines observed at medium/high-resolution show a profile composed of multiple components: a high-velocity component (HVC) with line peaks shifted by up to a few hundreds km s$^{-1}$, and a low-velocity component (LVC) with blueshifts between a few and $\sim$30 km s$^{-1}$. The line components are emitted in physically different regions. The HVC is produced in a fast-moving collimated (micro) jet (e.g. @Kwan1988 [@Hartigan1995]), while the LVC has been found to most likely trace disk winds (e.g. @Acke2005 [@Rigliaco2013; @Natta2014; @Banzatti2019]). In Fig. \[Fig:forbidden\_lines\], we show the \[O [i]{}\] 6300 Å and 6363 Å line profiles of R CrA, and in Table \[table:OIparam\], we summarize the forbidden line properties. Both lines clearly show the two components that can be reproduced by Gaussian profiles: in blue the LVC, blueshifted by $\sim$6 km s$^{-1}$, and in green the HVC tracing gas moving at higher velocity. The observed profile is well reproduced by the sum of two Gaussian components (red profile). These features are observed within the 2.7$^{\prime\prime}$ FEROS fiber, meaning that the gas is distributed within 200 au from the central star. This represents the only solid limit we can put on the region where the HVC forms as seen on-sky. More speculatively, if we assume that these lines are emitted from gas in Keplerian orbit around the star, we can use the measured FWHM (see Table \[table:OIparam\]) to constrain the radius at which they are emitted. Using the sum of the masses of the two components of the central binary system, 3.0 M$_{\odot}$ and 2.3 M$_{\odot}$, respectively, we find that under this hypothesis, the \[O [i]{}\] LVC should be emitted at $\sim$2.5 au from the central star. On the other hand, the LVC can be either magnetically or thermally driven, and in both cases, its origin is within a few au from the central star. Following @Hartigan1995, we used the \[O [i]{}\] 6300 Å HVC line luminosity to retrieve an estimate of the mass loss through the jet, to be compared then to the mass accretion rate onto the star. Using equation (A8) from @Hartigan1995, assuming $l_{\bot}\sim$1.0 $^{\prime\prime}$ as the jet aperture on the plane of the sky (as retrieved from the images), and $v_{\bot}$ the projected velocity of the jet in the sky ($\sim$760 km/s, as discussed in the next section), we retrieved $\dot{M}_{loss}\sim$2.8$\times10^{-7}$M$_\odot/yr$. This value is then compared to the accretion rate we retrieved from the H$\alpha$ and H$\beta$ lines. We measured the equivalent width (EW) for these lines, and obtained an estimate of the accretion luminosity L$_{acc}$ of the star using the empirical relationships given by @Fairlamb2017 for Herbig Ae/Be stars. To overcome the uncertainty caused by the variability of the star, we employed the photometry taken the same night as the FEROS spectrum (V$_{mag}$=11.6) with the All Sky Automated Survey (ASAS, [@Pojmanski1997]) for these measurements.
The measured EW$_{H\alpha}\sim$135 Å and EW$_{H\beta}\sim$11 Å yield accretion rate values of 1.6$\times10^{-6}$M$_{\odot}$/yr and 5.1$\times10^{-6}$M$_{\odot}$/yr, respectively. Assuming the average accretion rate is the mean between these two values, we find $\dot{M}_{acc}\sim$3.3$\times10^{-6}$M$_{\odot}$/yr. The ratio of the mass loss rate to the mass accretion rate is $\sim$0.08, which is consistent with values found for TTSs with prominent stellar jets (e.g., HH34, HH47, HH111 @Hartigan1994 [@Hartigan1995]), and is consistent with a jet that is powered by the accretion onto the central star.
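The following short sketch reproduces the order-of-magnitude arithmetic behind the numbers quoted in the last two paragraphs. The Keplerian-radius estimate depends on which velocity is associated with the measured line width, which is an assumption of the sketch rather than a statement of the text, so both the full FWHM and the half width are shown.

```python
# Quick arithmetic cross-checks of the numbers quoted above. The Keplerian-radius
# estimate depends on the adopted line-width convention (an assumption of this
# sketch), so both the full FWHM and the half width are shown.
import numpy as np

G = 6.674e-11                  # m^3 kg^-1 s^-2
M_sun, au = 1.989e30, 1.496e11

M_star = (3.0 + 2.3) * M_sun   # total mass of the central binary (Sissa et al. 2019)
fwhm_lvc = 45.6e3              # [O i] 6300 LVC FWHM in m/s (Table above)

for label, v in [("v = FWHM", fwhm_lvc), ("v = FWHM/2", fwhm_lvc / 2.0)]:
    r = G * M_star / v**2      # radius of a circular Keplerian orbit with speed v
    print(f"{label:11s} -> r = {r / au:.1f} au")
# the ~2.5 au quoted above corresponds roughly to the first convention

mdot_loss, mdot_acc = 2.8e-7, 3.3e-6      # M_sun/yr, values derived above
print("mass-loss / mass-accretion ratio =", round(mdot_loss / mdot_acc, 3))   # ~0.08
```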
As we mentioned in the previous section, the SINFONI spectrum in the “jet” region, which covers the near-IR wavelength range from 1.45 $\mu$m to 1.85 $\mu$m, does show the Brackett series Hydrogen recombination lines, but there is no detection of \[Fe [ii]{}\] lines at 1.53, 1.60 and 1.64 $\mu$m in this region. Studies of jets from Herbig stars (as also recently found for RY Tau, @Garufi2019) in the optical and in the near-IR showed that optical jets also emit in the near-IR \[Fe [ii]{}\] lines, unless the density in the jet beam is very low ($n_e$<10$^{3}$cm$^{-3}$). Indeed, owing to their large critical density, these lines need high electron density to be efficiently excited, thus they cannot be used as diagnostics in low-density jets (e.g., @Podio2006). Besides low-density gas, the non-detection of the \[Fe [ii]{}\] lines might be due either to the low sensitivity of the H-band spectrum, or to the depletion of the iron that is locked into the dust grains (e.g., @Podio2006 [@GarciaLopez2010]).
“Elongated structure” size
--------------------------
![Spectra extracted from SPHERE-IFS data, once a radial profile has been subtracted. Top left panel (a): spectrum (mean contrast versus wavelength) extracted in the area where the “elongated structure” is identified, normalized to the stellar peak. The locations of the Helium and Hydrogen (Pa$\beta$) line emission are labeled. The blue line shows the linear curve used to normalize the spectrum reported in the bottom panel. Bottom left panel (b): spectrum normalized to the linear fit indicated in the top panel. The grey-shaded area shows the telluric band. Top right panel (c): same as panel (a), but for a region closer to the disk identified by @Mesa2019. Bottom right panel (d): same as panel (b), but for the disk.[]{data-label="Fig:IFS_spectrum"}](IFS_spectra.png){width="8cm"}
The SPHERE images of R CrA (IRDIS and IFS) obtained using the monochromatic PCA with 1 principal component are shown in Figures \[IFS\_total\] and \[IRDIS\_total\]. In Figure \[Fig:IFS\_spectrum\], we show the median spectrum from the IFS data cube. It represents the average signal within the area where we identify the “elongated structure” normalized to the stellar peak. The spectrum is dominated by continuum emission: however, there is some evidence of the contribution due to the emission line of Helium at 1.083 $\mu$m, and slight hints of Paschen $\beta$ when the spectrum is normalized to the continuum (Fig. \[Fig:IFS\_spectrum\], left bottom panel). The steepness of the observed spectrum shows that the “elongated structure” continuum emission has a redder color than the stellar emission. For comparison, we also extracted the spectrum in the region where the almost edge-on disk was pointed out by [@Mesa2019] (panels (c, d) in Fig. \[Fig:IFS\_spectrum\]). We notice that while the reddening of the spectrum in the direction of the disk is in agreement with the spectra extracted for other disks around Herbig stars (e.g., HD 100546, @Sissa2018 and SAO 206462, @Maire2017), the spectrum along the “elongated structure” appears steeper. We must consider that the circumstellar environment around RCrA is affected by high extinction (e.g., @Bibo1992 [@Sissa2019]), and a gradient of extinction within the region itself, hence the reddening of the spectrum might be due to this effect. However, the scattering properties of dusty grains of different size might also play a role in this context.
We analyze the aperture of the “elongated structure”. Following on from the analysis of [@Mesa2019], we measured the aperture from the IFS continuum image (Fig. \[IFS\_total\], top panel). The position angle (PA) of the “elongated structure” spans from $\sim$30$^{\circ}$ to $\sim$70$^{\circ}$, with a median PA of $\sim$50$^{\circ}$. The same PA and aperture of the “elongated structure” are shown in the IRDIS image, where the contour plot of the “elongated structure” is also shown. The furthest region of the “elongated structure” marked by the contours appears slightly bent toward the south. We notice, moreover, that the dust “cavity” region, as identified in the SINFONI image, extends outside the “elongated structure” as identified from the IFS and IRDIS continuum and line images. In particular, the axis of the emission in the dust “cavity” region has a PA of $\sim$100$^{\circ}$.
The continuum emission of the “elongated structure” observed in the IFS image, which represents the approaching part to the observer, covers all the IFS field of view, meaning that it extends up to $\sim$120 au from the central system. From the IRDIS image, the radial extent of the approaching part of the observed “elongated structure” extends up to 2.6$^{\prime\prime}$ from the central objects (400 au at the R CrA distance), remaining as wide as $\sim$30$^{\circ}$ up to $\sim$300 au. Optically visible jets from TTSs are known to begin with wide (10-30 degree) opening angles close to the source, and are rapidly collimated to within a few degrees in the innermost 50-100 au [@Ray2007; @Frank2015]. Wider structures, as the one observed in the continuum emission around R CrA, might be consistent with shells of ambient gas swept up by the jet bow-shock and a surrounding slower wider-angle component. This wide-angle wind and the swept-up outflow expand more slowly, carving out a cavity, which widens over time into the envelope and the surrounding cloud [@Frank2015].
Morphology
----------
The overall morphology of the observed “elongated structure” as seen in the continuum emission from IFS and IRDIS images appears non-uniform and discontinuous along the radial extent. Moreover, the SINFONI image shows a wiggling pattern in the Hydrogen lines. We analyzed the wiggling as a function of the distance from the central binary by focusing both on the emission lines and on the continuum emission. We employed the same method used by [@Antoniucci2016]: we considered a set of contiguous slices orthogonal to the jet axis and in each slice we fitted the pixel distribution with a Gaussian function in order to obtain the profile peak positions as a function of the distance from the star. We applied this method to the SPHERE-IFS channels containing the He line at 1.083$\mu$m, to the SINFONI channels containing the Brackett series lines at 1.555$\mu$m, 1.641$\mu$m, 1.681$\mu$m, 1.736$\mu$m, and to the IFS average continuum emission. The results are shown in Fig. \[Fig:wiggle\_lines\], where the plots from the gas components are shown in the top panels, and the continuum emission is shown in the bottom panel. We first notice that the “elongated structure” seen in the gaseous component (both Helium and Hydrogen) is at a median PA of $\sim$65$^{\circ}$, higher than the median PA ($\sim$50$^{\circ}$) found from the continuum emission, pointing to a misalignment between the “elongated structure” axis seen in the continuum and the gas emission. This misalignment can already be clearly noted in Fig. \[IFS\_total\], where the He emission (bottom panel) does not share the same PA as the continuum emission (top panel). The same happens for the Hydrogen, \[Fe [ii]{}\] and H$_2$ lines shown in Figure \[Fig:composite\_2\], where the lines are not emitted at the same PA as the continuum in Y-H and K bands from the IFS and IRDIS images. Both Helium and Hydrogen emissions display a wiggling pattern with a projected half-opening angle of 5$^{\circ}$ – 6$^{\circ}$. It is not surprising to see a wiggle in the jets of young stars: it has been detected in both the other young stars where a jet was observed with SPHERE, namely RY Tau [@Garufi2019] and ZCMa [@Antoniucci2016]. Contrary to these previous studies, where the binarity of the central object was assumed from the wiggle of the observed jet, for R CrA we know that the central system is a binary star [@Sissa2019]. We checked that the rotation period of the binary system (66 days) is in agreement with the phase difference between the two observations, SINFONI and SPHERE-IFS, which are shifted by $\sim$two months in time. If the observed spatial period of the wiggling (190 mas) is linked to the period of the central binary system, we could then measure the velocity of the jet projected on the sky, which we found to be $\sim$760 km s$^{-1}$. This corresponds to a radial component of the velocity with respect to us of $\sim$130 km s$^{-1}$, if we consider that the disk is seen almost edge-on and that the jet is then seen with an inclination of $\sim$10$^{\circ}$ on the plane of the sky (see below).
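The slice-by-slice measurement described above can be sketched as follows: the image is rotated so that the jet runs along one axis, a Gaussian is fitted to each transverse profile, and a sinusoid is fitted to the resulting centroids. The closing comments convert the fitted spatial period into a transverse jet speed assuming the 66-day binary period and a distance of $\sim$154 pc (implied by the 2.6$^{\prime\prime}$ $\simeq$ 400 au scale quoted above); both values are taken from the text.

```python
# Sketch of the wiggle measurement described above: rotate the image so the jet runs
# along x, fit a Gaussian to each transverse slice to get the centroid versus distance,
# then fit a sinusoid to the centroids. Conventions (rotation sign, initial guesses)
# are illustrative choices.
import numpy as np
from scipy.ndimage import rotate
from scipy.optimize import curve_fit

def gaussian(y, amp, y0, sigma):
    return amp * np.exp(-0.5 * ((y - y0) / sigma) ** 2)

def wiggle_centroids(image, jet_pa_deg, pixscale_mas):
    """Peak position (mas) of each transverse profile vs distance (mas) along the jet."""
    rot = rotate(image, jet_pa_deg - 90.0, reshape=False, order=1)  # jet roughly along x
    ny, nx = rot.shape
    y = (np.arange(ny) - ny / 2.0) * pixscale_mas
    dist, cen = [], []
    for i in range(nx):
        prof = rot[:, i]
        if prof.max() <= 0:
            continue
        try:
            popt, _ = curve_fit(gaussian, y, prof,
                                p0=[prof.max(), y[np.argmax(prof)], 3 * pixscale_mas])
            dist.append((i - nx / 2.0) * pixscale_mas)
            cen.append(popt[1])
        except RuntimeError:
            pass
    return np.array(dist), np.array(cen)

def wiggle_model(d, amp, period, phase, offset):
    return offset + amp * np.sin(2.0 * np.pi * d / period + phase)

# once (dist, cen) are measured, fit the sinusoid and convert the period to a speed:
#   popt, _ = curve_fit(wiggle_model, dist, cen, p0=[10., 190., 0., 0.])
#   period_au = popt[1] * 1e-3 * 154.0             # mas -> au at ~154 pc
#   v_kms = period_au * 1.496e8 / (66.0 * 86400.)  # au per 66 d -> km/s  (~760 km/s)
```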
![Peak position of Hydrogen and Helium lines from SINFONI and SPHERE-IFS data, respectively, as a function of distance from central objects. Dots indicate the sigma of the Gaussian distribution used to obtain the peak position within the slice. The solid blue line is the fit of the wiggle if it is produced by the orbital motion of the jet around the binary system at the center. The blue fit in the three panels is the same, however in the panel showing SINFONI data it is adjusted in order to account for the difference in phase between SPHERE and SINFONI observations and for the different instrument resolution. []{data-label="Fig:wiggle_lines"}](plot_H_He_Wiggle.png){width="\hsize"}
We also notice that the wiggling is not visible in the continuum emission. As shown in the bottom panel of Fig. \[Fig:wiggle\_lines\], the model obtained from the emission lines (blue solid line), using the same PA measured from the gas lines, does not reproduce the shape of the observation, pointing to the conclusion that the continuum emission does not have a wiggling pattern, even if it appears non-uniform.
Another point coming from the analysis of the SPHERE-IFS images in the different channels is that the emission by the gas shown by He[i]{} line is not centered on the continuum emission. This is clear comparing the top and bottom panels of Fig. \[IFS\_total\]: the He[i]{} emission is not exactly co-located to the continuum emission, appearing shifted to the south direction, almost at the border of the continuum emission, suggesting it is produced in the external layer of the “elongated structure”. Moreover, the He[i]{} emission is spatially resolved along the direction perpendicular to the jet elongation, yielding a median jet semi-aperture of $\sim$10 au.
Geometrical model of the wiggling jet
-------------------------------------
We constructed a geometrical model of the wiggling jet of R CrA, as observed in the H[i]{} lines in the SINFONI spectrum. We modeled two observed quantities: the luminosity and the radial velocity maps. The radial velocities were derived by cross-correlation, and the maps of the intensities and radial velocities of the H[i]{} lines are shown in the top panels of Figure \[velocity\_model\].
{width="\hsize"}
In order to model these two quantities, we assume that the jet is describing a helix on the surface of a truncated cone, which is seen with an inclination $i$ and position angle $PA$ with respect to the sky plane. Since the inclination is very small, the jet velocity on the sky plane is set by the ratio between the helix pitch and the binary orbital period. Under the assumption that the jet is optically thin, the regions where the spiral crosses the plane of the sky have a greater depth along the line of sight, hence they are seen as blobs in the luminosity image. We identified two consecutive blobs in Figure \[velocity\_model\] as box 1 and box 2. The projected separation between two consecutive blobs on the same side of the jet will then give the helix pitch. As we have seen previously, the jet transverse velocity determined in this way is 760 km s$^{-1}$. The combination of the luminosity and radial velocity maps makes it possible to determine all the parameters of the model (see Figure \[velocity\_model\]). The PA is the mid-angle of the jet on the sky plane (PA=67${^{\circ}}$, in the usual convention starting from north toward east). The ratio between the mean jet radial velocity and the transverse velocity sets the inclination angle at which the helix is seen. We find $i$=3.3${^{\circ}}$, meaning that the jet is seen very close to the sky plane. This suggests that the disk of R CrA is seen almost edge-on. In this model, the radial velocity difference between opposite sides of the helix (e.g. box 1 and box 2 in Figure \[velocity\_model\]) is due to a projection effect, because they are moving with a different inclination with respect to the line of sight, and then sets the cone aperture angle. The observed difference of $\sim$18 km s$^{-1}$ is reproduced by a semi-aperture of the cone of $\sim$1.2${^{\circ}}$, pointing to a very well collimated jet. Once this angle is fixed, the observed distance between the blobs and the jet axis determines the radius of the base of the cone $r_B\sim$5.3 au.
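A compact way to visualize this geometry is sketched below: a helical locus on the surface of a narrow cone is generated and projected on the sky with the inclination quoted above. The parameter values are those given in the text, while the parametrization itself (axis conventions, sense of rotation, ballistic motion along the cone generators) is a choice made for the illustration rather than a constraint of the model.

```python
# Geometric sketch of the model described above: a helical locus on the surface of a
# narrow cone, projected on the sky with the quoted inclination. Parameter values are
# those given in the text; the parametrization is an illustrative choice.
import numpy as np

incl_deg  = 3.3       # inclination of the jet axis with respect to the sky plane
alpha_deg = 1.2       # semi-aperture of the cone
r_base_au = 5.3       # radius of the base of the cone
v_jet     = 760.0     # jet speed, km/s
period_d  = 66.0      # binary (= wiggle) period, days
# PA = 67 deg only orients the pattern on the detector and is not needed below

incl, alpha = np.radians(incl_deg), np.radians(alpha_deg)
t = np.linspace(0.0, 4.0 * period_d, 2000) * 86400.0        # s, four wiggle periods
phi = 2.0 * np.pi * t / (period_d * 86400.0)                # azimuth on the cone
z = v_jet * np.cos(alpha) * t / 1.496e8                     # au travelled along the axis
rho = r_base_au + np.tan(alpha) * z                         # au, local cone radius

# jet frame: z along the axis, y towards the observer when the inclination vanishes
x, y = rho * np.cos(phi), rho * np.sin(phi)
along = z * np.cos(incl) - y * np.sin(incl)    # au, sky-plane coordinate along the PA
across = x                                     # au, sky-plane coordinate across the jet

pitch = v_jet * np.cos(alpha) * np.cos(incl) * period_d * 86400.0 / 1.496e8
print("helix pitch projected on the sky : %.1f au (about 190 mas at ~154 pc)" % pitch)
print("wiggle semi-amplitude            : %.1f au" % (0.5 * (across.max() - across.min())))
print("extent along the jet (4 periods) : %.0f au" % (along.max() - along.min()))
print("mean line-of-sight jet speed     : %.1f km/s" % (v_jet * np.cos(alpha) * np.sin(incl)))
```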
Most interestingly, the model also sets the phase when the jet crossed the binary orbital plane at the epoch of the SINFONI observation (at a distance of 5.3 au from the barycenter of the central binary). Of course this is an idealization of the real jet trajectory because it is unlikely that the jet is launched from this position. However, this can still be useful in understanding the jet geometry with respect to the central binary. We found that this phase is 0.35, where the zero point is when the binary primary (R CrA Aa) is in opposition. For comparison, the binary model constructed in @Sissa2019 tells us that at the same epoch, the R CrA Aa was at a phase of 0.459. The difference between these two phases is quite small, and may be justified by considering the delay of the jet due to the time required to reach a distance of 5.3 au from the star at a constant speed of 700 km s$^{-1}$, 12 days, that corresponds to a phase difference of 0.18. Of course, the jet trajectory is likely shorter than one passing through this ideal point. Even if these values contain some uncertainties (e.g. in the exact jet trajectory and velocity and in the exact value of $r_B$), they suggest that the jet is actually launched along the direction defined by the two components of the central binary, towards the far side of the primary.
[lcc]{} Component & $v_c$ & FWHM\
& (km s$^{-1}$) & (km s$^{-1}$)\
Star & &\
H & -14.8$\pm$4.0 & 435$\pm$14\
Jet & &\
H & -39.9$\pm$2.4 & 393$\pm$ 7\
H (box 1) & -29.7$\pm$3.0 & 383$\pm$ 9\
H (box 2) & -45.3$\pm$2.7 & 410$\pm$ 9\
H$_2$ & +0.7$\pm$4.9 & 174$\pm$12\
Cavity & &\
$[$Fe[ii]{}$]$ & -0.2$\pm$4.5 & 208$\pm$12\
H & -6.6$\pm$8.1 & 423$\pm$26\
\[tab:sinfoni\]
\[Fe [ii]{}\] and H$_2$ lines
-----------------------------
Table \[tab:sinfoni\] collects radial velocities and full width at half maximum (FWHM) of the spectral lines detected in the SINFONI spectra. In particular, we collected data for spectra extracted from different regions of the data cube: close to the star, and in the “jet” and dust “cavity” area, as identified in Fig. \[Fig:composite\_2\] and in Sect. 2.2. The Hydrogen lines are detected in all regions and are usually very broad, consistent, and with a high velocity wind. The \[Fe [ii]{}\] are only detected in the dust “cavity” area, tracing an external layer of the “elongated structure”. They have low velocities, and are narrow and not resolved in the SINFONI spectrum. The ratio between the intensity of the lines at 1.64 and 1.53 $\mu$m is a diagnostic of density [@Nisini2005]. The observed ratio of about four corresponds to a density of about 10$^4$ cm$^{-3}$. This is an order of magnitude larger than the upper limit obtained from the lack of \[Fe [ii]{}\] lines in the “jet” region, but not far from the one estimated using the \[S [ii]{}\] lines and the narrow component of the \[O [i]{}\] lines. This suggests that this dust “cavity” region corresponds to the low-velocity component seen in the FEROS spectra (see Table \[table:OIparam\]).
H$_2$ lines are also clearly detected in the SINFONI spectra, and their emission comes from a region very close to the star, as seen in Fig. \[Fig:composite\_2\]. Even if an exact quantification is difficult, it suggests that it originated within 230 mas (corresponding to $\sim$35 au from the star). This is within the orbit of the M-dwarf companion, where the circumbinary disk should be. The radial velocities of these lines are in agreement with those derived in the Hydrogen lines in the “jet” direction, but the FWHM are much smaller, and the lines are not resolved in the SINFONI spectrum.
The relative intensities of the different H$_2$ lines agree, within 10%, with what is expected from collisional excitation at a temperature of about 15000 K, and are very different from what is expected from fluorescence models [@Black1987]. Namely, mainly low-excitation lines are observed, and the high-excitation lines, which would also be expected to be strong in the case of fluorescence, are absent. The high-temperature gas can be heated by shocks [@Shull1982] that may be located close to the base of the jet or in the regions where the jet interacts with the disk. Radial velocities and FWHM of the Hydrogen and H$_2$ lines in the “jet” area are in agreement with the values found for the \[Fe [ii]{}\] line at 7155 Å in the FEROS spectrum, suggesting that this line (which requires densities as high as $\sim$10$^{5}$cm$^{-3}$, @Nisini2005) might also be produced in the same post-shock region near the star.
We notice that the H$_2$ lines, which should dominate the K1-band in the IRDIS image, are not detected. To stress this point, we performed the subtraction of the K1-K2 IRDIS image, as shown in Fig. \[IRDIS\_total\] (bottom panel). On the other hand, fainter H$_2$ lines are detected in the H-band SINFONI spectrum. This non-detection of H$_2$ in the K1-K2 image is essentially due to the fact that SPHERE has been designed for detecting continuum emission and is not optimized for detecting (extended) line emission. The spectral resolution of SINFONI is two orders of magnitude higher: this makes it possible to detect emission lines with a surface luminosity two orders of magnitude fainter. We verified that the non-detection of the brightest H$_2$ emission line in the K1-band (the 1-0 S(1) line at 2.12 $\mu$m) is compatible with the detection of the brightest H$_2$ emission line in the H-band (the 1-0 S(7) line at 1.74 $\mu$m).
Spectro-astrometry: information on the very central regions
----------------------------------------------------------
Some information about the very central region of the system can be obtained using spectro-astrometry [@Whelan2008] on the SINFONI data. We should remind readers that the bulk of the emission from R CrA in the near-IR is due to the warm disk [@Sissa2019]. Interferometric observations with the instrument AMBER at VLTI (@Kraus2009) indicate that this emission is offset by about four mas with respect to the barycenter of the system, in the direction of the jet, because self-shadowing of the highly inclined disk only makes it possible to see the far side of the disk. Also, the separation between the central binary components projected along the jet axis is very small (about 0.3-0.4 mas) and can thus be neglected here. Since the scope of spectro-astrometry is to explore regions very close to the center of the system, we used the images obtained by subtraction of a radial profile. The images were corrected for the telluric lines and then cross-correlated with digital masks for HI and \[Fe [ii]{}\] (results for H$_2$ lines were less clear). The resulting data cube is then made of cross-correlation functions - that is, the z-coordinate now represents radial velocities. We then rotated these images so that the jet is along the y-axis, and collapsed the images along the jet. In this way we obtained two-dimensional images, where the axes are radial velocity and offset along the jet with respect to the peak of the continuum emission. We then fitted Gaussians in offset as a function of radial velocity and compared the position of the peak with the value we obtained for the continuum (which, we remind readers, represents the position of the warm disk). Since there is no relevant variation of the position of this peak with velocity, we simply considered the average values. The peak obtained for the HI lines is offset by about four mas in the opposite direction to the jet: this indicates that the bulk of the H emission is caused by material very close to the central binary, likely tracing the accretion. This agrees with the large value of the FWHM. On the other hand, the peak for \[Fe [ii]{}\] is offset by about two mas in the direction of the jet, indicating that, in this case, the bulk of the emission comes either from the jet or from regions close to the disk, at about six mas ($\sim 1$ au) from the star projected along the jet axis.
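To make the fitting step concrete, the following is a minimal sketch of the spectro-astrometric measurement described above (not the actual reduction code; the array names and the commented usage are placeholders):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma):
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2)

def spectro_astrometry(cube, offsets):
    """Fit a Gaussian along the jet axis for every velocity channel.

    cube    : 2-D array (n_velocity, n_offset), already collapsed across the jet
    offsets : 1-D array of spatial offsets along the jet [mas]
    Returns the fitted centroid offset per channel [mas].
    """
    centroids = np.full(cube.shape[0], np.nan)
    for i, profile in enumerate(cube):
        if profile.max() <= 0:
            continue
        p0 = [profile.max(), offsets[np.argmax(profile)], 10.0]
        try:
            popt, _ = curve_fit(gaussian, offsets, profile, p0=p0)
            centroids[i] = popt[1]
        except RuntimeError:
            pass  # keep NaN where the fit does not converge
    return centroids

# Illustrative usage (line_cube, continuum_profile, offsets_mas would come from
# the rotated, cross-correlated SINFONI cube; the names are hypothetical):
# line_centroids = spectro_astrometry(line_cube, offsets_mas)
# cont_centroid = spectro_astrometry(continuum_profile[None, :], offsets_mas)[0]
# mean_shift = np.nanmean(line_centroids) - cont_centroid   # e.g. ~4 mas for the H lines
```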
Proposed scenario and discussion
================================
In this section, we discuss a scenario where all the features, structures, and properties seen and analyzed in the “elongated structure” are taken into account. The proposed scenario is supported by at least a similar case, the T Tauri star FS Tau B observed by @Eisloffel1998, who imaged the “elongated structure” as the wide outer edges of windblown cavities and the narrower jet flowing inside the cavity in H$\alpha$.
![Sketch of the scenario proposed for the “elongated structure” of R CrA. The white-filled circles are the three stellar components of the system, the blue region is the circumbinary disk, the green area is the gaseous high-velocity jet, and the red region is the edge of the dust cavity.[]{data-label="Fig:sketch"}](Caff6_red_qual.jpg){width="9cm"}
The schematic picture we consider is obtained from the complementary detailed analysis of the SPHERE (IRDIS and IFS) images and spectra, of the SINFONI data, and of the optical FEROS spectrum. The system appears to be composed of at least two elements: a dusty component seen in scattered light, and a gaseous component detected in emission in atomic lines of forbidden species (\[O [i]{}\], \[He [i]{}\] and \[Fe [ii]{}\]) and in Hydrogen lines. What we have called the “elongated structure” until now, which is seen in scattered light in the IFS and IRDIS images, as well as in the SINFONI image, is dominated by continuum emission and is most likely a cavity carved into the circumstellar environment; the dust on the cavity walls is illuminated by the central binary and scatters light toward us. The jet flowing inside the cavity is detected in the HVC of the optical forbidden lines and in the broad Hydrogen lines. The jet is very collimated and shows a wiggling pattern in the Hydrogen and \[He [i]{}\] lines, which is in agreement with the orbital period of the central binary. The LVC of the optical forbidden lines suggests the presence of gas moving more slowly, most likely a disk wind. The SINFONI data allow us to spatially resolve the \[Fe [ii]{}\] emission, which appears to be located at the edge of the cavity, consistent with a layered disk wind as well. This structure is captured in the schematic picture of the R CrA environment that is shown in Figure \[Fig:sketch\]. The disk, which is illustrated in blue in the Figure, is not well detected in the scattered light images, and it is most likely seen edge on. The central binary system [@Sissa2019] is hidden behind the coronagraph, while the wide M-dwarf companion is clearly visible in all the images. The red shell shows the cavity walls, and the green diffuse area the gaseous jet flowing within the cavity.
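To visualize the kind of geometry invoked here, the following is a minimal parametric sketch of a wiggling jet (not the kinematic model fitted earlier in the paper): ballistic knots launched along a direction that precesses on a narrow cone trace a helix, which appears as a wiggle on the sky when the cone axis is tilted slightly toward the observer. Only the 3.3$^{\circ}$ inclination is taken from the model quoted in this work; the cone half-opening angle and the number of precession periods shown are illustrative placeholders.

```python
import numpy as np

incl = np.deg2rad(3.3)        # tilt of the cone axis toward the line of sight (from the text)
alpha = np.deg2rad(2.0)       # cone half-opening angle -- placeholder value

phase = np.linspace(0.0, 8.0 * np.pi, 2000)   # 4 precession (binary) periods
s = phase / (2.0 * np.pi)     # distance travelled, in units of v_jet * P_precession

# Launch direction on the cone, in a frame where z is the cone axis:
dx = np.sin(alpha) * np.cos(phase)
dy = np.sin(alpha) * np.sin(phase)
dz = np.cos(alpha) * np.ones_like(phase)

# Observer frame: z_obs is the line of sight; the cone axis lies almost in the
# plane of the sky, tilted toward the observer by incl.
x_sky = s * (dz * np.cos(incl) - dx * np.sin(incl))   # offset along the projected jet axis
y_sky = s * dy                                        # transverse "wiggle" on the sky
v_los = dz * np.sin(incl) + dx * np.cos(incl)         # line-of-sight velocity in units of v_jet
```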
Unlike in FS Tau B [@Eisloffel1998], where the narrower jet flowing inside the cavity points toward the center of the cavity, in the case of R CrA the gaseous jet does not currently point toward the center, but rather toward the southern border of the cavity. The misalignment between the axis of the cavity and the jet might be due to a temporal variation of the jet direction. In turn, this might be attributed on short timescales to a wiggling pattern of the jet related to the central binary orbit, and on longer timescales to a precession motion that tends to change the direction of the jet rotation axis. This precession motion may be due to the multiplicity of the system, which includes the central binary [@Sissa2019] and the wide companion (@Mesa2019).
This scenario agrees well with the fact that the environment around R CrA is rich in diffuse ambient gas and dust that can be swept up by powerful jets, as shown by the prominent number of bow shocks, knots, and outflows detected at different wavelengths (e.g. @Kumar2011 [@Anderson1997; @Groppi2004; @Groppi2007]). Moreover, the shock produced by the wind-carved cavity would be responsible, together with the lower-mass wide companion, for the X-ray emission of hot plasma observed with Chandra and XMM-Newton and analyzed by @Forbrich2006. The H$_2$ lines may also form in this shock area.
What can our observations tell us about the launching mechanism for the jet of R CrA? Jets from young stars are usually thought to be launched magneto-centrifugally from the disks around them (see e.g. [@Frank2014]). Favored scenarios consider a disk wind (see [@Pudritz1983]) or the star-disk magnetosphere [@Camenzind1990], the latter in particular in the X-wind scenario [@Shu2000]. The disk-wind scenario can itself be divided into two schemes, cold wind and warm wind [@Konigl2000]. As discussed, for example, by @Ferreira2006, cold winds are generally not considered favorably, because they lead to jets that rotate much faster than usually observed. The main difference between these mechanisms is the launch radius $r_0$, which is related to the specific angular momentum of the jet and to the magnetic leverage $\Lambda$, which is the square of the ratio between the Alfvèn radius $r_A$ and $r_0$: $\Lambda=(r_A/r_0)^2$. The Alfvèn radius is where the magnetic energy density is equal to the kinetic energy density. Beyond the Alfvèn distance, the field lines lag behind the rotation of their footpoints and are coiled into a spiral (see e.g. [@Spruit2010]). We then expect that the radius of the base of the truncated cone representing the jet should be connected to the Alfvèn radius, though not necessarily identical to it, because the jet may have to traverse many Alfvèn radii before being effectively focused, since its collimation depends not simply on the magnetic lever arm, but also on the poloidal field strength at the disk surface (see [@Frank2014]). The magnetic leverage is in turn related to the ratio between the jet velocity $V_{\rm jet}$ and the Keplerian velocity $V_{\rm Kep}$ at $r_0$ by the relation $V_{\rm jet} = V_{\rm Kep}~\sqrt{2 \Lambda -3}$. All these quantities are then related to the launch radius $r_0$. As discussed by @Frank2014, typical parameters for jets around T Tau stars are compatible with $r_0$ of the order of a few stellar radii. The large value of $V_{\rm Kep}$ at such small separations implies a small value for the magnetic leverage factor ($\sim 5-10$) and slow jet rotation ($\sim 10$ km s$^{-1}$). These are compatible with the very few detections of jet rotation (see e.g. the case of HH212, [@Lee2017], or the more debated case of DG Tau: [@Bacciotti2002; @White2014]).
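As a rough numerical illustration of these relations (a sketch with assumed fiducial values, not part of our analysis), the lever arm follows directly from the ratio of jet and Keplerian velocities:

```python
import numpy as np

GM_SUN_AU = 887.0  # G * Msun in units of au (km/s)^2, so v_kep = sqrt(GM_SUN_AU * M / r)

def v_kep(r0_au, mstar_msun):
    """Keplerian velocity [km/s] at launch radius r0 [au] around a mass M [Msun]."""
    return np.sqrt(GM_SUN_AU * mstar_msun / r0_au)

def leverage_from_vjet(v_jet_kms, r0_au, mstar_msun):
    """Magnetic lever arm Lambda, inverting V_jet = V_Kep * sqrt(2*Lambda - 3)."""
    return 0.5 * ((v_jet_kms / v_kep(r0_au, mstar_msun)) ** 2 + 3.0)

def alfven_radius(r0_au, leverage):
    """Alfven radius from Lambda = (r_A / r_0)^2."""
    return r0_au * np.sqrt(leverage)

# Example with assumed T Tau-like values (r0 ~ 0.05 au, M ~ 0.5 Msun, V_jet ~ 300 km/s):
print(leverage_from_vjet(300.0, 0.05, 0.5))   # ~ 6-7, i.e. the modest lever arm quoted above
```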
Since R CrA is a binary with a separation of $a\sim 0.56$ au, which is $\sim 119$ R$_\odot$, it is possible that its jet might be quite peculiar. The geometrical model discussed in the previous section suggests that the jet is launched close to the primary. However, the presence of a massive enough disk around it is not obvious, because we expect such a disk to be truncated at about a third of the Hill radius, which is at about 20 R$_\odot$ from the star. Since it is likely that the magnetic field of the whole binary system is locked to the binary orbit, the jet orientation would be determined by the orbital phase. On the other hand, the jet velocity is much larger than the escape velocity from the binary. As a consequence, we might expect that the jet describes a helix that may be similar to what is observed. If it exists, such a jet would have properties similar to those observed for other very young stars; in particular, it would be slowly rotating (at most a few tens of km s$^{-1}$), meaning below the spectral resolution offered by SINFONI.
As another option, we might consider the case of a jet that is launched at the inner edge of the circumbinary disk, that is, with $r_0$ of the order of $1.5~a\sim 0.84$ au. Such a jet would have peculiar characteristics. Given the high specific angular momentum of material at this location, we would expect a rapidly rotating jet. The relation between $r_0$ and the jet rotational velocity $V_{\rm rot}$ is obtained considering that $r_0=0.05\,[2\,(l_J/10)/(V_{\rm jet}/115)]^2\,(M/0.25)^{1/3}$ au [@Lee2017], where $M$ is the stellar mass and $l_J$ is the jet specific angular momentum (in au$\times$km s$^{-1}$). Assuming typical values appropriate for the case of R CrA (jet size $\sim$25 mas, $\sim$4 au; jet velocity $V_{\rm jet}=700$ km s$^{-1}$; stellar mass $=3.0$ M$_\odot$), we obtain $V_{\rm rot}\sim 180$ km s$^{-1}$. This is much larger than typically observed in T Tau stars, but not incompatible with the value we obtain for the half width at half maximum of the H lines in the direction of the jet (and of blobs 1 and 2), which is $\sim 400$ km s$^{-1}$. This is about twice the spectral resolution of SINFONI, and indeed the H lines are clearly much broader than the \[Fe [ii]{}\] or H$_2$ lines (which are not seen in the jet).
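For reference, the scaling relation quoted above can be transcribed directly (a sketch only; inverting it for $V_{\rm rot}$ additionally requires an adopted jet radius, so the numerical result quoted in the text depends on the values adopted there):

```python
def launch_radius_au(l_jet, v_jet, mstar):
    """Launch radius r0 [au], transcribing the scaling quoted above (after Lee et al. 2017).

    l_jet : jet specific angular momentum [au km/s]
    v_jet : jet velocity [km/s]
    mstar : stellar mass [Msun]
    """
    return 0.05 * (2.0 * (l_jet / 10.0) / (v_jet / 115.0)) ** 2 * (mstar / 0.25) ** (1.0 / 3.0)

# Note: l_jet is itself the product of the jet radius and its rotational velocity,
# so solving for V_rot requires adopting a jet radius; the V_rot ~ 180 km/s quoted
# in the text is obtained with the inputs listed there.
```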
Another peculiar property expected for such a jet is the large leverage factor $\Lambda\sim 44$. This would imply $\xi=1/[2~(\Lambda-1)]\simeq0.01$ and $r_A=5.5$ au. Here $\xi$ is related to the ratio between the mass-loss rate in the jet and the mass accretion rate, $\xi\sim \dot{M}_{\rm jet}/\dot{M}_{\rm acc}$. Hence a low value of $\xi$ implies a low value for $\dot{M}_{\rm jet}$, which in turn might explain the low density of the jet of R CrA. The Alfvèn radius would actually be similar to the radius of the base of the cone, $r_B=5.3$ au, derived in our geometrical model. Such a low value of $\xi$ is compatible with a well-collimated jet (see e.g. [@Garcia2001]). Finally, it should be noted that such a large magnetic leverage factor would be more compatible with a cold-wind jet scenario.
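The two derived quantities follow directly from the quoted lever arm; a minimal check using only the numbers given above:

```python
import math

leverage = 44.0   # magnetic lever arm quoted above
r0_au = 0.84      # launch radius at the inner edge of the circumbinary disk [au]

xi = 1.0 / (2.0 * (leverage - 1.0))        # mass-loading parameter
r_alfven = r0_au * math.sqrt(leverage)     # from Lambda = (r_A / r_0)^2

print(f"xi ~ {xi:.3f}")            # ~0.012, i.e. ~0.01 as quoted
print(f"r_A ~ {r_alfven:.1f} au")  # ~5.6 au, close to the cone base radius r_B = 5.3 au
```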
In summary, the nature of the jet seen in R CrA is still not clear. The observations are compatible with the usual warm disk-wind or X-wind scenarios, but a cold-wind scenario is also possible, related to the peculiar fact that R CrA is a binary with a separation of a few tens of stellar radii. The two cases might be distinguished by a more accurate estimate of the specific angular momentum of the jet, which might be possible using ALMA, for example (see e.g. [@Lee2017]).
Conclusions
===========
Taking advantage of the complementary information provided by optical spectroscopy data (acquired with FEROS) and adaptive optics images in the near infrared obtained with SPHERE and SINFONI, we investigated the extended structure seen around the Herbig Ae/Be star R CrA. R CrA is a very interesting system, not only because it is bright, quite massive, and very young, but also because it is a triple system with a central binary whose separation is of the order of a few tens of the radii of the individual components. This separation might be critical for the survival of the circumstellar disks that are considered in the most popular scenarios of magneto-centrifugally launched jets. The vicinity of this star, together with the spatial resolution offered by complementary instrumentation, allowed us to investigate in great detail the extended structures seen around R CrA. The data reveal a complex overall structure composed of three components: a collimated jet, a wide cavity, and a shock region close to the central system.
A well collimated gaseous jet has been detected in the Hydrogen (with SINFONI) and \[He [i]{}\] (with SPHERE) emission lines. It shows a wiggling pattern that is consistent with the period of the binary system. We implemented a geometrical model to reproduce this wiggling pattern, showing that observations may be reproduced by a high velocity ($\sim 770$ km s$^{-1}$) jet inclined by 3.3$^{\circ}$ toward the observer, which describes a helix on the surface of a cone. The HVC seen in the optical forbidden lines with FEROS is most likely associated with this fast-collimated jet that is flowing inside a cavity carved in the interstellar medium.
The wide cavity is seen in the continuum emission of the IRDIS and IFS SPHERE data. It appears as a non-uniform extended structure that extends up to 400 au in the N-E direction and is as wide as $\sim$30$^{\circ}$. The cavity walls are seen in scattered light. The fast collimated jet appears to be pointing toward the southern side of the cavity, that is, it is not oriented toward the center of the wide cavity, most likely due to a precession motion. \[Fe [ii]{}\] emission is detected along the wall of the cavity. This emission might be attributed to a slower moving wind, most likely a disk wind, which also produces the LVC seen in the optical forbidden lines.
The third component is a shock region, close to the central star, where H$_2$ (observed with SINFONI) and possibly the \[Fe [ii]{}\] (observed with FEROS) at 7155 Å are produced at higher density. The velocities of these lines and their FWHM are in between the velocities of the HVC and LVC.
R CrA represents a very interesting object, because it allows us to study the structure of jets around Herbig stars, which is not yet well understood. The overall scenario agrees closely with general expectations for magneto-centrifugally launched jets: however, the fact that the star is actually a triple system likely makes the scenario more complex. Given the relevance of this particular object in our understanding of jets from very young stars, more observations would be welcomed to confirm our findings. Namely, a spectroscopic follow-up both with high-resolution spectrographs (in the optical and near-infrared) and with diffraction-limited integral-field spectrographs (e.g. SPHERE, MUSE, and ERIS) may better constrain the regions where the different lines emit, and the kinematic model used to interpret the fast collimated jet. Finally, the launch region of the jet might possibly be established using high-spatial and spectral-resolution observations of the jet with ALMA for example, which may be used to determine the jet rotational velocity.
E.R. is supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Sk$\l$odowska-Curie grant agreement No 664931. This work has been supported by the project PRIN INAF 2016 The Cradle of Life - GENESIS-SKA (General Conditions in Early Planetary Systems for the rise of life with SKA) and by the “Progetti Premiali” funding scheme of the Italian Ministry of Education, University, and Research. Programme National de Planétologie (PNP) and the Programme National de Physique Stellaire (PNPS) of CNRS-INSU. This work has also been supported by a grant from the French Labex OSUG2020 (Investissements d’avenir - ANR10 LABX56). The project is supported by CNRS, by the Agence Nationale de la Recherche (ANR-14-CE33-0018). This work has made use of the SPHERE Data Centre, jointly operated by OSUG/IPAG (Grenoble), PYTHEAS/LAM CeSAM (Marseille), OCA/Lagrange (Nice), Observatoire de Paris/LESIA (Paris), and Observatoire de Lyon/CRAL. We thank P. Delorme and E. Lagadec (SPHERE Data Centre) for their efficient help during the data reduction process. SPHERE is an instrument designed and built by a consortium consisting of IPAG (Grenoble, France), MPIA (Heidelberg, Germany), LAM (Marseille, France), LESIA (Paris, France), Laboratoire Lagrange (Nice, France), INAF Osservatorio Astronomico di Padova (Italy), Observatoire de Geneve (Switzerland), ETH Zurich (Switzerland), NOVA (Netherlands), ONERA (France) and ASTRON (Netherlands) in collaboration with ESO. SPHERE was funded by ESO, with additional contributions from CNRS (France), MPIA (Germany), INAF (Italy), FINES (Switzerland) and NOVA (Netherlands). SPHERE also received funding from the European Commission Sixth and Seventh Framework Programmes as part of the Optical Infrared Coordination Network for Astronomy (OPTICON) under grant number RII3-Ct-2004-001566 for FP6 (2004-2008), grant number 226604 for FP7 (2009-2012), and grant number 312430 for FP7 (2013-2016). GvdP acknowledges funding from the ANR of France under contract number ANR-16-CE31-0013 (Planet-Forming-Disks).
---
abstract: '[VLA observations of large-scale HI and OH absorption in the merging galaxy of NGC$\,$6240 are presented with 1 arcsec resolution. HI absorption is found across large areas of the extended radio continuum structure with a strong concentration towards the double-nucleus. The OH absorption is confined to the nuclear region. The HI and OH observations identify fractions of the gas disks of the two galaxies and confirm the presence of central gas accumulation between the nuclei. The data clearly identify the nucleus of the southern galaxy as the origin of the symmetric superwind outflow and also reveal blue-shifted components resulting from a nuclear starburst. Various absorption components are associated with large-scale dynamics of the system including a foreground dust lane crossing the radio structure in the northwest region.]{}'
author:
- 'Willem A. Baan'
- Yoshiaki Hagiwara
- Peter Hofner
title: 'HI AND OH ABSORPTION TOWARD NGC$\,6240$'
---
Introduction
============
The chaotic appearance of NGC$\,6240$ at all wavelengths is due to a forceful galactic collision of two galaxies (Fosbury & Wall 1979). The two individual nuclei of NGC$\,6240$ were first detected in R and I bands at a projected distance of 1.8" or 0.9 kpc (Fried & Schulz 1983). Early H$\alpha$ studies revealed extended emission with two independent and almost perpendicular disk systems (Bland-Hawthorn et al. 1991).
NGC$\,6240$ is a prototypical Luminous Infrared Galaxy with an IR luminosity of L$_{IR}$ = L$_{8-1000 \mu m}$ = 6 x 10$^{11}$ L$_\odot$ (Sanders et al. 1988). The FIR luminosity of these galaxies is powered by extremely high star-formation activity and/or an embedded AGN. For NGC6240 the mid-IR observations are consistent with a dominant starburst power contribution of approximately 75$\%$ within the central 5 kpc (Genzel et al. 1998).
Radio data show two nuclei embedded in a connecting structure that extends into a loop structure to the West (Condon et al. 1982; Colbert et al. 1994) also seen in our continuum data (see Figure 1). MERLIN and the Very Long Baseline Array (VLBA) observations of the two nuclear continuum sources show brightness temperatures of 7 x 10$^6$ K for the northern component and 1.8 x 10$^7$ K for the southern component (Gallimore & Beswick 2004). The inverted spectra at low frequency confirm the AGN nature at each of the nuclei. The loop results most likely from a bubble front swept up by a superwind emanating mostly from the southern nucleus (designated as N1 in Figure 1)(Colbert et al. 1994; Ohyama et al 2000). NGC$\,6240$ exhibits HI and OH absorption against the nuclear continuum (Baan et al. 1985). Recent HI absorption studies at 0.3 arcsecond resolution with MERLIN distinguish the absorption at each of the two nuclear components (Beswick et al. 2001).
ASCA, XMM and Chandra data confirm the presence of two deeply buried AGNs in the NGC6240 system on the basis of a hard X-ray component with neutral Fe K$\alpha$ lines in addition to the soft X-ray components due to the starburst (Iwasawa & Comastri 1998; Boller et al. 2003; Komossa et al. 2003). The most prominent AGN is located at the southern nucleus N1, where the obscuration is the highest. The extended X-ray emission has a close correlation with the well-known (butterfly-shaped) H$\alpha$ emission. H$\alpha$ studies with HST confirm the presence of filamentary structures filling the inner volume of the arc and confirm the presence of confining walls of the outflow at either side of the nucleus (Gerssen et al. 2004). Significant H$\alpha$ structures have also been found in NGC$\,6240$ in the form of a butterfly-shaped structure that partially superposes the radio arc and extended radio structure.
NGC$\,6240$ displays strong H$_2$ [*v*]{} = 1-0 S(1) and \[Fe II\] line emission that peaks between the stellar light of the nuclei but lies closer to the southern nucleus (van der Werf et al. 1993; Joseph & Wright 1985; Ohyama et al. 2000). Spectroscopic studies of H$_2$ at K band allow the separation of the dynamics of the two nuclei (Tecza et al. 2000). The NIR light of each of the nuclei is dominated by red super-giants formed during a short episode of intense star-formation 15-25 million years ago. K band infrared imaging with Keck II and NICMOS on the Hubble Space Telescope has revealed elongated structures at both the North and South nuclei and considerable substructure within each nucleus (Max et al. 2005). Additional point-like regions are found around the two nuclei, which are thought to be young super-star-clusters.
CO(2$-$1) emission studies with the IRAM interferometer show a similar structure as the H$_2$ emission and also peaks between the two nuclei (Tacconi et al 1999). Most of the CO flux is concentrated in a thick and turbulent disk-like structure between the two IR/radio nuclei. Studies with Nobeyama Rainbow interferometer (Hagiwara 1998) indicate that the HCN(1-0) and HCO$^+$ fluxes also peak between the nuclei, and do not coincide with the star-formimg region in the galaxy (Nakanishi et al. 2005). The molecular structure accounts for a large fraction (30-70%) of the dynamic mass. Although the central location of the molecular material in NGC$\,6240$ is not unique, it is notably different from the (more advanced) interaction in Arp 220 where the emission peaks at the two nuclei (Scoville et al 1997; Sakamoto et al 1999).
This paper presents studies of the OH and HI absorption in the NGC6240 system using the NRAO Very Large Array (VLA) in A-Configuration. With NGC6240 at a distance of 104 Mpc the spatial conversion for the VLA data is 504 pc per arcsecond, which complements the resolution of other spectral line and continuum studies. Our data reveal more of the dynamics of the system and provide connections to studies of other atomic and molecular emissions.
Observations
============
Observations of the HI $21\,$cm line and the OH main lines at $1665$ and $1667\,$MHz toward NGC$\,6240$ were made on September 1, 1995 with the NRAO Very Large Array in the A-configuration. Phase tracking was centered on $\alpha(1950) = 16^{h} 50^{m}
28\fs3$, $\delta(1950) = 02^{\circ} 28^{\prime}
53^{\prime\prime}$.
The HI line was observed using a $2\,$IF mode, each IF having a bandwidth of $6.25\,$MHz subdivided into $32$ channels of $195.3\,$kHz in width. This setup resulted in a usable velocity coverage of $1298\,$km$\,$s$^{-1}$ and a velocity resolution of $43.3\,$km$\,$s$^{-1}$. The spatial resolution in the HI spectral data is 1.95“ x 1.79” for natural (NA) weighting and the channel width is 43.3 km$\,$s$^{-1}$. The rms in the individual channel maps is 0.49 mJy$\,$beam$^{-1}$.
Both OH lines at 1665 and 1667 MHz were observed simultaneously using two partially overlapping IFs of width 6.25$\,$MHz with 32 channels of width 195.3$\,$kHz. The synthesized band used for this discussion has a center frequency of 1666.38$\,$MHz (between those of the 1665 and 1667$\,$MHz lines) and has a total velocity coverage of 1326$\,$km$\,$s$^{-1}$. The velocity scale in this presentation is heliocentric in the optical definition and has been re-gridded using the rest frequency of the 1667$\,$MHz line. The resolution in the OH maps is 1.11 x 1.08" and the channel width is 36.7 km$\,$s$^{-1}$. The rms in the channel maps is 0.41 mJy$\,$beam$^{-1}$.
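For reference, the quoted channel widths follow from the 195.3 kHz channel separation and the redshifted line frequencies; a small consistency check (a sketch assuming the optical velocity definition and the systemic velocity adopted below):

```python
C_KMS = 2.998e5   # speed of light [km/s]

def channel_width_kms(dnu_hz, nu_rest_hz, v_sys_kms):
    """Optical-definition velocity width of one channel at the redshifted line frequency."""
    z = v_sys_kms / C_KMS
    nu_obs = nu_rest_hz / (1.0 + z)
    return C_KMS * dnu_hz * nu_rest_hz / nu_obs ** 2

print(channel_width_kms(195.3e3, 1420.406e6, 7275.0))  # ~43 km/s  (HI, cf. the quoted 43.3)
print(channel_width_kms(195.3e3, 1667.359e6, 7275.0))  # ~37 km/s  (OH 1667 MHz, cf. the quoted 36.7)
```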
Continuum maps at 21 and 18-cm have been constructed using line-free channels. The rms and the beam size of the two maps are respectively 0.46 mJybeam$^{-1}$ with 1.965“ x 1.79” and 0.28 mJybeam$^{-1}$ with 1.11“ x 1.08”.
The data were reduced using the NRAO software package AIPS. The flux and bandpass calibrator and phase calibrator were 3C 286 and 1648+015 for both HI and OH line data sets. Image cubes were made with a pixel size of $0\farcs3$, using a variety of weighting schemes. Continuum data sets were constructed by averaging line-free channels. Subtraction of the continuum was done independently in the visibility and in the image domain, resulting in consistent results. In the case of the OH data both IFs were imaged independently and joined after the continuum subtraction, averaging overlapping channels. Due to the uncertainty of the baseline structure at the edges of the spectrum, we estimate a flux uncertainty of 20% for the OH absorption data and 10% for the HI absorption and the continuum data.
Results
=======
The distance of NGC6240 is assumed to be $D$ = 104 Mpc for a systemic optical velocity of 7275 km$\,$s$^{-1}$ using $H_0$ = 70 km$\,$s$^{-1}$ Mpc$^{-1}$. At this distance the spatial conversion is 504 pc per arcsecond. In the discussions below, we have adopted the radio designations of Colbert et al. (1994) for the nuclei N1 and N2 and the components of NGC6240. In addition, a northern component N3 has been suggested, which may be a third nucleus or an enhanced fragment of the northern galaxy. There is a southern extension S and various western components W1 - W4 forming the arc structure of 5.9 kpc in size. A W0 component has been designated in the continuum structure to the west of N3. These designations have been indicated in Figure \[continuum\]b.
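A one-line check of the adopted scale (a sketch neglecting peculiar velocities and relativistic corrections):

```python
import math

H0 = 70.0        # [km/s/Mpc]
v_sys = 7275.0   # systemic optical velocity [km/s]

d_mpc = v_sys / H0                                        # pure Hubble-flow distance
pc_per_arcsec = d_mpc * 1e6 * math.pi / (180.0 * 3600.0)  # small-angle scale
print(f"D ~ {d_mpc:.0f} Mpc, scale ~ {pc_per_arcsec:.0f} pc/arcsec")  # ~104 Mpc, ~504 pc/arcsec
```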
Continuum studies
-----------------
Contour maps of the natural-weighted continuum emission at $1420\,$MHz and the robust-weighted continuum emission at $1666\,$MHz are presented in Figure \[continuum\]. The integrated flux densities in the maps are 466 and 333 mJy with peak values 108 and 58 mJy, respectively. Our A-array L-band images are consistent with previous B-array images except that our peak fluxes are higher by 10$-$20%, when convolved to a resolution of 4.79“ x 4.39” of Colbert et al. (1994), possibly due to slightly different integration boxes. The 21 cm map shown in Figure \[continuum\]a (upper panel) is optimized for the detection of extended low-brightness features and shows that the individual radio components N1 (South) and N2 (+N3) (North) are embedded in a halo of diffuse emission. The higher resolution 18 cm map in Figure \[continuum\]b (lower panel) clearly separates the two nuclei N1 and N2 (+N3). At higher resolution the N1$-$N2 axis is found to be at PA = 20$^o$ with a (projected) separation of the nuclei of 1.575" or 793 pc (Hagiwara et al. 2003; Gallimore & Beswick 2004).
The structure along the western arm (W1-W4) shows diffuse emission without any sharp peaks at this resolution. The new components NW and W0 have been added to indicate the diffuse structure west of N3. Diffuse extensions can be seen in the north-east NE, the south-east SE, and an S extension. We present radio continuum parameters derived from the $1666\,$MHz map in Table 1.
The large-scale continuum structure shows a strong similarity to the butterfly structure found in the optical and X-ray (Komossa et al. 2003). The primary cause of this structure would be a nuclear blowout from the nuclear region of N1 in the southern galaxy. The large-scale H$\alpha$ structure towards the west (see Max et al. 2005) appears to match and complement the loop structure in the radio. In addition, there is an eastern complement to the western loop in H$\alpha$ and soft X-rays (Max et al. 2005; Komossa et al. 2003). The SE and NE radio extensions agree with the X-ray and H$\alpha$ structure and form “the base” of a similar blowout bubble to the east of the nuclei. The S and NW radio extensions in Figure \[continuum\]a,b have counterparts also in the larger scale H$\alpha$, H$_2$ and X-ray emission structures.
The HI Absorption
-----------------
[**The HI line characteristics**]{} - The HI absorption spectra in the extended emission region of NGC6240 are given in Figure \[hispec\]. The profiles of the HI absorption at the two nuclei are very similar, as both spectra have a FWZI width of about 900 km$\,$s$^{-1}$ and a half-power width of about 348 km$\,$s$^{-1}$. However, the absorption at N2 is 1.7 times stronger than the absorption at N1, and the line at N2 is more symmetric than the N1 line, which is skewed due to a higher velocity component. The systemic velocities at N1 and N2 are 7295 and 7339 km$\,$s$^{-1}$, respectively. N1 lies just south of the peak in the HI absorption column density in Figure \[hispec\].
The spectra presented in Figure \[hispec\] indicate that there are large differences in the absorption columns and that absorption is seen over a velocity range of more than 900 km$\,$s$^{-1}$. This velocity width results from the rotation in each of the galaxies, the orbital velocity component of the two galaxies, and the inflow and outflow due to the interaction. The line of sight to each nucleus does not provide an accurate estimate of the systemic velocity of that nucleus. The spatial resolution of the 21-cm data is 980 x 900 pc and our line of sight towards the two nuclei will sample multiple velocity components. The high-resolution HI absorption study with MERLIN at 0.3 arcsec resolution by Beswick et al. (2001) revealed two isolated absorption components at the locations of the N1 and N2 nuclei at velocities 7087 and 7260 km$\,$s$^{-1}$ (radio definition). Using the optical heliocentric definition we find systemic velocities V(N1) = 7258 km$\,$s$^{-1}$ and V(N2) = 7440 km$\,$s$^{-1}$, and we adopt these as a more accurate approximation of the systemic velocities of the two nuclei. It should be noted that these velocities straddle the absorption peak in the two nuclear absorption spectra of Figure \[hispec\]. While the peak of the absorption column density coincides with N1, the centroid absorption velocity at N1 is about 100 km$\,$s$^{-1}$ lower, due to the lower velocity of the structural component between the nuclei.
[**The HI PV diagrams**]{} - Figures \[hipvd\] and \[hipvr\] present velocity-position maps in the two principal E-W and N-S directions. These diagrams cover only the double-nucleus region. The RA$-$velocity diagrams of Figure \[hipvd\] show the velocity structure in the south and the north of the system. Close to N1, where the column density peaks at 7265 km$\,$s$^{-1}$, the rotation is essentially south-to-north with a gradient to be determined from the PV plot along the declination axis. In the northern PV diagram, the extended absorption peaks just south of N2 at 7385 km$\,$s$^{-1}$ and shows a west-to-east velocity gradient of 1.0 km$\,$s$^{-1}$ pc$^{-1}$ close to N2. In the north, an additional (weak) second component appears at 7700 km$\,$s$^{-1}$ with an east-to-west velocity gradient of 0.53 km$\,$s$^{-1}$ pc$^{-1}$. The extended low-declination outflow structure south of N1 reaches 6900 km$\,$s$^{-1}$ (Fig. \[hipvd\]b) and has a southwest-to-northeast gradient of 0.89 km$\,$s$^{-1}$ pc$^{-1}$.
The declination$-$velocity diagrams of Figure \[hipvr\] display three velocity profiles along the east side, close to the center, and along the west side of the central region (along a south-north direction at three RA positions). The diagrams show a changing south-to-north velocity gradient resulting from three distinct components. South of N1 there is a gradient of 1.49 km$\,$s$^{-1}$ pc$^{-1}$; north of N2, a gradient of 1.15 km$\,$s$^{-1}$ pc$^{-1}$. In between N1 and N2 the gradient is 0.26 km$\,$s$^{-1}$ pc$^{-1}$, which is close to the predicted value of 0.24 km$\,$s$^{-1}$ pc$^{-1}$ based on the velocity difference of the two nuclei. This central component is dominated by an accumulation of gas in between the two nuclei. A similar absorption structure is found in the OH data. In accordance with Fig. \[hipvd\], an east-to-west component enters between 7300 and 7750 km$\,$s$^{-1}$, which occurs at high declination on the west side (right frame) of the source, i.e. going towards the NW region and possibly representing streaming gas motions in the northern galaxy. A number of rather marginal but distinctly offset components are found west of N2 (Fig. \[hipvr\]a) and southeast of N2 (Fig. \[hipvd\]a), covering a large velocity range of 7000$-$7750 km$\,$s$^{-1}$.
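The “predicted” central gradient quoted above is simply the adopted velocity difference of the nuclei divided by their projected separation; a quick check (the exact value depends on the numbers adopted):

```python
dv_kms = 182.0   # adopted systemic velocity difference of the nuclei [km/s]
sep_pc = 793.0   # projected separation of N1 and N2 [pc]

print(f"{dv_kms / sep_pc:.2f} km/s per pc")   # ~0.23, close to the ~0.24 quoted above
```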
In our discussion of the OH absorption data below, we will correlate our findings of the HI absorption in the nuclear N1$-$N2 region. However, the OH absorption is solely confined to the nuclear region and does not display any of the extensions found in this section.
The combined PV diagrams show the existence of five independent HI components seen against the nuclear region: (1) a disk-like structure with three components with distinct gradients of 1.49 km$\,$s$^{-1}$ pc$^{-1}$ south of N1, of 1.15 km$\,$s$^{-1}$ pc$^{-1}$ north of N2, and of 0.26 km$\,$s$^{-1}$ pc$^{-1}$ between N1 and N2; a single gradient covering the whole range would be a south-to-north gradient of 0.32 km$\,$s$^{-1}$ pc$^{-1}$; (2) the region north of N2 shows a second (reverse) southwest-to-northeast gradient of 1.0 km$\,$s$^{-1}$ pc$^{-1}$; (3) a high-declination east-to-west structure between 7250 and 7750 km$\,$s$^{-1}$ with a gradient of 0.53 km$\,$s$^{-1}$ pc$^{-1}$, providing a connection with the NW absorption region; (4) an outflow component reaching 6900 km$\,$s$^{-1}$ associated with nucleus N1 with a southwest-to-northeast gradient of 0.89 km$\,$s$^{-1}$ pc$^{-1}$; and (5) some distinct (but marginal) offset components covering 7000$-$7750 km$\,$s$^{-1}$ associated with the disturbed region west-south-west of the nucleus N1.
[**The moment maps**]{} - The first moment HI map in Figure \[himom12\]a confirms the dominant south-to-north (component (1)) velocity gradient along the N1 - N2 axis, suggesting organized rotation. The velocity gradient deduced between N1 and N2 is 0.18 km$\,$s$^{-1}$ pc$^{-1}$, which is smaller than the one obtained above from the PV diagrams.
A large-scale east-to-west rotation (component (3)) in the northern region causes curved iso-velocity lines and continues into the NW region, but is interrupted by a lower-velocity north-south component crossing the region at W0. This interruption has the signature of the foreground dust lane passing just west of the nuclei in optical images. The dust lane absorption at W0 is at 7400 km$\,$s$^{-1}$, which is close to the systemic velocity of N2.
At the locations of the two nuclei, the systemic velocities in Figure \[himom12\]a are V(N1) = 7235 km$\,$s$^{-1}$ and V(N2) = 7305 km$\,$s$^{-1}$. The velocity difference of 70 km$\,$s$^{-1}$ is smaller than the 120 km$\,$s$^{-1}$ found in the PV diagrams of Fig. \[hipvd\], which comes closer to the difference of 182 km$\,$s$^{-1}$ found at higher resolution (Beswick et al. 2002) and in other molecular data. Outside the main structure we can also identify absorption at W2 at V = 7034 km$\,$s$^{-1}$ and at the eastern edge of W4 at 7687 km$\,$s$^{-1}$, which needs confirmation.
The second moment map in Figure \[himom12\]b shows a rather curious structure with a band of large velocity widths (more than 150 km$\,$s$^{-1}$) running from N1 and N2 into the NW region. For comparison, the MERLIN line widths are largest at N2 with 300 km$\,$s$^{-1}$ and narrowest towards the northwest with 60 km$\,$s$^{-1}$ (Beswick et al. 2001).
A distinct low-velocity component (seen also as components 4 and 5 in the PV data) located west-south-west of N1 is likely related to the superwind outflow. This feature at PA = 25$^o$ appears to have a west-to-east velocity gradient and lower line widths. It should be noted that this structure also appears at the same location in the OH data. Furthermore, the position angle of the outflow component also points to the eastern edge of the W4 component that is also found in the moment maps of Fig. \[himom12\] and \[hicolumn\]. In addition, the offset components found at the outflow position in the PV diagrams have a velocity range of 7000$-$7750 km$\,$s$^{-1}$, indicating a highly disturbed region.
[**The HI absorption column density**]{} - The absorption column density presented in Figure \[hicolumn\] and in Table 2 is determined using the absorption line strengths and the associated continuum data at 1420 MHz (Figure \[continuum\]a). The expression used for the hydrogen column density is $N_{\rm H}/T_{\rm S}$ = 1.823 $\times$ 10$^{18}$ $\int \tau(V)\, dV$ cm$^{-2}$ K$^{-1}$ (with $dV$ in km$\,$s$^{-1}$), where $T_{\rm S}$ is the hydrogen spin temperature, and $\tau$ is the absorption optical depth. The map shows the highest column density of $N_H$ = 1.28 x 10$^{22}$ cm$^{-2}$ at nucleus N1 using a spin temperature of 100 K. The optical depth is largest at N1 with 0.15 ($\pm$ 0.001) and at N2 with 0.11 ($\pm$ 0.001).
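As an illustration of how the quoted peak column follows from the optical-depth profile (the profile below is a schematic stand-in, not the measured one):

```python
import numpy as np

def hi_column_cm2(tau, v_kms, t_spin=100.0):
    """N(HI) [cm^-2] from an absorption optical-depth profile tau(v); T_S in K, v in km/s."""
    return 1.823e18 * t_spin * np.trapz(tau, v_kms)

# The quoted peak column of 1.28e22 cm^-2 (T_S = 100 K) corresponds to an
# integrated optical depth of ~70 km/s; e.g. a flat-topped profile with
# tau = 0.15 over ~470 km/s (illustrative shape only):
v = np.linspace(-450.0, 450.0, 901)
tau = np.where(np.abs(v) < 235.0, 0.15, 0.0)
print(f"N_H ~ {hi_column_cm2(tau, v):.2e} cm^-2")   # ~1.3e22
```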
The column density map also shows absorption at W0 with 9.42 x 10$^{21}$ cm$^{-2}$, and at W4 with 7.78 x 10$^{21}$ cm$^{-2}$. If one assumes that the absorption at W0 is composed of a continuation of the NW absorption plus a contribution of the foreground dust lane at a significantly lower velocity, then the dust-lane specific column density is estimated at 3.2 x 10$^{21}$ cm$^{-2}$. The NW structure at PA = $-$60$^o$ is located at some 2 kpc from the nuclei and W0 is at 2.7 kpc. In addition, the NE elongation at PA = 45$^o$ extends almost 2.0 kpc from N2. The absorption spot W4 is 4.2 kpc away from N1.
The OH Lines
------------
[**The OH line characteristics**]{} - The integrated spectrum of the 1667 MHz and 1665 MHz OH absorption is depicted in Figure \[ohspec\] in the rest frame of the 1667 MHz line. The adopted systemic velocities of the two nuclei of 7258 and 7440 km$\,$s$^{-1}$ lie just below and above the absorption peaks of the two lines. The integrated OH spectrum across the whole source shows an overall line ratio of 1.3 for the two absorbing transitions. Channel maps of the OH data cube are not displayed because the spectral information is better presented by other means.
The shallowness and irregularity of the single-dish OH absorption spectrum (Baan et al. 1985) suggested that some of the absorption in the system had been filled in with emission. With this in mind, the OH data have been scrutinized in a search for OH emission, but no clear evidence has been found. The two absorption lines in Fig. \[ohspec\] show asymmetric and dissimilar profiles, which could indeed be explained by a partial infilling with emission on the high-velocity side of the 1665 MHz line and/or on the low-velocity side of the 1667 MHz line.
The OH hyperfine ratio in the nuclear absorption region is depicted in Figure \[ohrat\] using the sums of the channel maps for the 1667 and 1665 MHz lines as depicted in Figure \[ohspec\]. The central region between the nuclei exhibits values below 1.0, going below 0.8 on the west side. Locations outside the central region have values in the optically-thick and optically-thin LTE range of 1.0$-$1.8. The values of the hyperfine ratios at N1 and N2 are respectively 1.6 and 1.2. The occurrence of non-LTE conditions in the velocity range of 7330 km$\,$s$^{-1}$ and below may be explained with emission in the 1667 MHz line. The range around 7300 km$\,$s$^{-1}$ corresponds to a missing (low-velocity) shoulder of the 1667 MHz line profile (see Fig. \[ohspec\]).
[**The OH PV maps**]{} - The OH data show substantial absorption only against the nuclear radio double source, and no large extensions are found outside the nuclear area. A single OH velocity-position map is presented at PA = 20$^o$ along the N1$-$N2 axis (Fig. \[ohpv\]). Different from the HI absorption, the bulk of the OH absorption occurs between the two nuclear sources at 7325 km$\,$s$^{-1}$, and shows a dominant south-to-north velocity gradient in the central region of 0.19 km$\,$s$^{-1}$ pc$^{-1}$, which is somewhat smaller than that of the central HI component. Similar to the HI absorption, there is a changing velocity gradient across the region with two separate components with velocity gradients outside N1 and N2. The gradients in the two shoulders in Fig. \[ohpv\] south of N1 and north of N2 are estimated to be 0.75 km$\,$s$^{-1}$ pc$^{-1}$, which is much steeper than the central part but still lower than the HI estimate. This component also has a counterpart in the 1665 MHz line and is related to the connecting bridge between the 1665 and 1667 MHz lines.
In the northern region close to N2, there is a distinct component with an (estimated) opposite west-to-east velocity gradient of 0.30 km$\,$s$^{-1}$ pc$^{-1}$. A similar structure has been found in the HI data (Fig. \[hipvr\]), which has been associated with rotation due to the northern galaxy along the N2$-$NW line. In the spectrum of Figure \[ohspec\] this translates into the low-velocity shoulder of the 1667 MHz line.
[**The moment maps**]{} - The first moment map of the 1667 MHz line in Figure \[ohmom12\]a shows a smooth velocity gradient that resembles and confirms the HI characteristics in the central region. The velocity gradient starts in the SE region close to N1 as part of the southern galaxy and continues via N2 into the northeast direction. There is some evidence of a superposed east-to-northwest gradient starting at N2 that is associated with the northern galaxy. The velocities derived for N1 and N2 from the first moment map are 7255 and 7370 km$\,$s$^{-1}$.
The line width in the 1667 MHz line displayed in Figure \[ohmom12\]b is largest at a location between the two nuclei, similar to the HI case, but with a value of more than 80 km$\,$s$^{-1}$ it is significantly smaller than the more than 150 km$\,$s$^{-1}$ width found in HI. It should be noted that the highest line widths coincide partially with the region of non-LTE (super optically-thin with ratio $\leq$ 1.0) excitation in Figure \[ohrat\]. The moment maps of the 1665 MHz line are all consistent with those of the 1667 MHz line.
Figure \[ohmom12\] also displays the curious structure southwest of N1 at PA = -25$^o$ that is also present in the HI data, and that represents the direction of a jet or is part of a wider nuclear outflow. At that location the velocity field is confused and the OH line widths become narrower. Further to the west there is an additional (disjoint) region with a very low line width of 20 km$\,$s$^{-1}$ at 7360 km$\,$s$^{-1}$, which may relate to the (streaked) extensions at low declination (57.0") in the PV diagram (Fig. \[ohpv\]).
[**The OH column density**]{} - The OH column density has been presented in Figure \[ohcolumn\] and has been based on the 1667 MHz optical depth using the 18 cm continuum map (Fig. \[continuum\]) and the expression N$_{\rm OH67}$ = 2.35 x 10$^{14}$ T$_{ex}$ $\int\tau(V) dV$, where the excitation temperature T$_{ex}$ has a typical value of 20 K. The region with the highest OH column density of N$_{\rm OH67}$ = 1.08 x 10$^{16}$ cm$^{-2}$ occurs halfway between N1 and N2. The column densities at N1 and N2 are a factor of about 1.8 lower. The peak OH optical depth of 0.063 is a factor of 2 smaller than that of HI.
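The corresponding conversion for OH, in the same illustrative form:

```python
import numpy as np

def oh_column_cm2(tau_1667, v_kms, t_ex=20.0):
    """N(OH) [cm^-2] from the 1667 MHz optical-depth profile; T_ex in K, v in km/s."""
    return 2.35e14 * t_ex * np.trapz(tau_1667, v_kms)

# The quoted peak of 1.08e16 cm^-2 (T_ex = 20 K) corresponds to an integrated
# 1667 MHz optical depth of only ~2.3 km/s, i.e. a much smaller equivalent
# width than that of the HI absorption along the same sight lines:
print(1.08e16 / (2.35e14 * 20.0))   # ~2.3 km/s
```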
Discussion
==========
The central gas concentration
-----------------------------
The central gas concentration is clearly present in the OH data, where it peaks between the nuclei at about 0.9" north of N1 (and closer to N2), and also in the HI data, where the peak occurs close to N1. In a projection scenario with N1 being located behind N2 (see section below), the largest column densities should occur at N1 and the gas distributions of the two galaxies are displaced by only 790 pc. Therefore, the column differences between the absorption peaks and the nuclei of 1.3 for HI and 1.8 for OH could be accommodated by a superposition of two galactic gas distributions. However, the velocity gradient in the central region of 0.19 km$\,$s$^{-1}$ pc$^{-1}$ for OH and 0.26 km$\,$s$^{-1}$ pc$^{-1}$ for HI is close to the predicted value of 0.24 km$\,$s$^{-1}$ pc$^{-1}$, which results from the velocity difference of the two nuclei. The central molecular structure is also not tied simply to the orbital motion of the two nuclei, because the emission peak lies off the N1-N2 connecting line. The central gas structure could thus be a superposition of disk gas, which is combined with gas pulled out of the two galaxies during the interaction and deposited close to the center of mass of the system. The structure appears co-rotating with the two nuclear regions.
The centrally peaked OH absorption shows rough agreement with the findings for other thermally excited molecules such as the CO $J$ = 2$-$1 and H$_2$ emissions (van der Werf et al. 1993; Ohyama et al. 2000; Tacconi et al. 1999). The HI absorbing gas samples a larger volume than the OH, and different structural components may contribute to the HI and molecular absorption component. However, the central OH and HI velocity gradients of about 0.25 km$\,$s$^{-1}$ pc$^{-1}$ are surprisingly different from the velocity gradient of the CO $J$ = 2$-$1 emission of 0.74 km$\,$s$^{-1}$ pc$^{-1}$. Possibly the CO data also samples the higher gradients of the gas in the two disks.
The uniformly large line widths found in the HI and OH data confirm large-scale contributions to the central absorption. The large half-width values for HI and OH of 200 and 75 km$\,$s$^{-1}$ are consistent with the stellar velocity dispersion peaking at 270 km$\,$s$^{-1}$ close to the H$_2$ and CO $J$ = 2$-$1 emission peaks (see Tecza et al. 2000). The large zero-intensity line widths of HI and OH confirm the presence of a disturbed and rather clumpy medium with multiple structures in the central region including the two nuclei (see also Beswick et al. 2001).
The OH$-$HI optical depth ratio in the central region suggests an OH/HI abundance ratio of 8.6 x 10$^{-7}$. This value is relatively high compared to other extragalactic absorption systems (see Baan et al. 1985), and would support the notion that the central region consists of enriched disk gas.
The two nuclei
--------------
Our data do not provide the detailed properties of the gas in the nuclear region and in the foreground. The HI and OH column densities suggest that the more obscured nucleus N1 lies behind the less obscured northern nucleus. The path to N1 would then sample a more complex multiple absorption structure (see also Beswick et al. 2001). The total HI column density at N1 of 1.28 x 10$^{22}$ cm$^{-2}$ agrees with the column density estimated from the X-ray nuclear emission (Komossa et al. 2003). In addition, the lower HI column at N2 of 1.01 x 10$^{22}$ cm$^{-2}$ also agrees with the X-ray column. The ratio of (integrated) OH and HI optical depths ranges from 0.25 at N1 to 0.32 at N2, which suggests lower enrichment at the nuclei relative to the central region.
Both nuclei have AGN characteristics in the radio and X-ray (Gallimore et al.; Beswick et al. 2001; Komossa et al. 2003). However, the extended radio continuum seen in the nuclear region, forming the background for the HI and OH absorption, is associated with intense star formation resulting from the close interaction of the galaxies (Genzel et al. 1998; Beswick et al. 2001). Extended radio structures of at least 1 kpc are commonly found in radio-quiet Seyferts and LINERs that do not follow the morphology of a galactic disk (Gallimore et al. 2006). Such extended structures result from nuclear outflows rather than starbursts and are likely to have a relatively luminous, compact radio source in the nucleus. Although NGC6240 has evidence of a weak nuclear jet in the northern nucleus N2 (Gallimore & Beswick 2004), there is no evidence of strong AGN-related activity that could explain the total emission region covering a projected 7 x 4 kpc region that does not include the arc structure. It is more plausible that the AGNs in N1 and N2 have recently become active as a result of the interaction, and that the extended radio emission is due to distributed star formation and the symmetric outflow triggered by the star formation.
The distinct radio continuum region designated N3 is not necessarily a third nucleus, but is rather a region of enhanced (superposed) star formation in the interaction zone of the two galaxies that is currently embedded in the extended radio structure in the north.
The dynamics of NGC6240
-----------------------
The nuclei N1 and N2 are separated by 1.575" corresponding to 793 pc. If the interacting galaxies were in the plane of the sky, they would be at a very different velocity and the nuclear regions would be coalesced and extremely confused. Since we see apparently distinct nuclear entities, they are more distant from each other and have only a projected distance of 793 pc. Considering that the highest HI and molecular column densities lie in between the nuclei and that the X-ray source in N1 shows the highest column density, it is most plausible that N1 lies behind N2. The small velocity difference suggests that the N1-N2 connecting axis has a small angle with the line-of-sight and the relative values of the velocities at N1 and N2 suggest that the galaxies are just past transit.
The HI/OH systemic velocities at the nuclei can be used for a dynamical/orbital scenario for the nuclei projected on the sky. Given the relatively large beam, we find an HI estimate of 7235 and 7305 km$\,$s$^{-1}$ and an OH estimate of 7255 and 7370 km$\,$s$^{-1}$, which are nominally consistent with the higher resolution HI values of 7258 and 7440 km$\,$s$^{-1}$ (Beswick et al. 2001). These values are larger than the difference of stellar velocities of 50 km$\,$s$^{-1}$ (Tecza et al. 2000), but consistent with the H$_2$ ($\approx$ 150 km$\,$s$^{-1}$), CO(2$-$1) ($\approx$ 100 km$\,$s$^{-1}$), and Brackett $\gamma$ emission data (Lira et al. 2002; Ohyama et al 2000; Tecza et al. 2000). We adopt the high-resolution HI estimate of the velocity difference of the nuclei, $\delta V$ = 182 km$\,$s$^{-1}$. The projected distance $D_{obs}$ between the two nuclei is 1.575", which corresponds to a projected separation of 793 pc.
As an attempt to model the dynamics of the close encounter, we assume a simple [*edge-on circular orbit for two equal masses M*]{} around the center of mass, and a (small) projection angle $\theta$ between our line of sight and the connecting line between the two nuclei. The description of the orbital motion of the system follows from: sin($\theta$)$^3$ = 0.031 $\delta V^2$ $G^{-1}$ $M_n^{-1}$ $D_{obs}$, where M$_n$ is the combined dynamic mass of the nuclear region. The estimate of Tecza et al. (2000) of the stellar mass in each of the nuclei of about 2 x 10$^9$ M$_\odot$ gives a combined dynamic mass of the nuclei of $M_n$ = 1.2 x 10$^{10}$ M$_\odot$ using the smaller velocity difference. The central gas concentration also constitutes a significant fraction of the dynamic mass, such that $M_{gas}(R\leq 470 pc)$ $\approx$ (2$-$4) x 10$^9$ M$_\odot$ $\approx$ (0.3$-$0.7) $M_{dyn}$ (Tacconi et al. 1999). For this reason, we adopt a dynamic mass for the nuclei of $M_n$ = 1.5 x 10$^{10}$ M$_\odot$, which results in $\theta$ = 13.4$^o$ (projection factor = 4.3), an orbital velocity of 392 km$\,$s$^{-1}$, and a distance between the nuclei of 3.42 kpc. The orbital period is about 27 Myr. This scenario gives a sufficiently large separation distance to ensure identifiable nuclear/galaxy characteristics at this well-advanced stage just before coalescence.
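For reference, the projection chain implied by the derived angle can be reproduced in a few lines (the sin$^3\theta$ relation itself folds in unit conversions and is therefore not re-derived here):

```python
import math

theta = math.radians(13.4)   # viewing angle derived above
d_obs_pc = 793.0             # projected separation of N1 and N2 [pc]
dv_kms = 182.0               # line-of-sight velocity difference of the nuclei [km/s]

proj = 1.0 / math.sin(theta)        # projection factor ~4.3
d_true_pc = d_obs_pc * proj         # ~3.4 kpc true separation
v_rel_kms = dv_kms * proj           # relative orbital velocity
v_orb_kms = 0.5 * v_rel_kms         # ~390 km/s per nucleus (equal masses)

KM_PER_PC = 3.086e13
SEC_PER_MYR = 3.156e13
period_myr = math.pi * d_true_pc * KM_PER_PC / (v_orb_kms * SEC_PER_MYR)
print(proj, d_true_pc, v_orb_kms, period_myr)   # ~4.3, ~3420 pc, ~393 km/s, ~27 Myr
```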
The two interacting galaxies
----------------------------
The OH and HI velocity field of the nuclear region is dominated by the large-scale organized motion of the accumulated gas structure. However, the detailed velocity pattern displayed in Figures \[himom12\]a and \[ohmom12\]a suggests the presence of large-scale velocity components associated with the interaction of the two galaxies. The analysis of the stellar velocity field in the southern galaxy suggests a northwest-southeast rotation (i = 60$^o$) at PA = $-$34$^o$ with $V_{rot}$ = 270 $\pm$ 90 km$\,$s$^{-1}$ (Tecza et al. 2000). The northern galaxy (i = 33$^o$) displays a southwest-northeast rotation at PA = 41$^o$ with $V_{rot}$ = 360 $\pm$ 195 km$\,$s$^{-1}$.
The HI and OH absorption in the northeast extension (Figs. \[himom12\]a and \[ohmom12\]a) shows a southwest-northeast rotation for the northern galaxy at PA = 40$-$50$^o$ with an HI gradient of 1.15 (or 0.75 for OH) km$\,$s$^{-1}$ pc$^{-1}$ north of N2, which is consistent with a stellar gradient of 1.2 km$\,$s$^{-1}$ pc$^{-1}$. However, the region south of N1 also shows evidence of a south-north rotation at 1.49 km$\,$s$^{-1}$ pc$^{-1}$ for HI and 0.75 km$\,$s$^{-1}$ pc$^{-1}$ for OH, and is associated with the large gas structure of the interaction. In addition, there is a weak low-velocity HI signature south of N1 down to 6900 km$\,$s$^{-1}$ with a southwest-northeast gradient of 0.89 km$\,$s$^{-1}$ pc$^{-1}$, which could constitute the motion of the southern galaxy. The absence of a clear velocity signature of the southern galaxy could easily result from the column density around N1. Furthermore, the region west-south-west of N1 is very perturbed by the outflow and shows evidence of components with velocities up to 7750 km$\,$s$^{-1}$.
The superwind outflow
---------------------
The continuum structure extends from N1 and N2 in all directions, including to the northwest region and the radio arc. The western radio arc results from the shocked regions forming the boundaries of a symmetric superwind-driven outflow emanating from N1 (see Heckman et al. 1990), that is less prominent towards the east. In addition, there is a considerable extended radio emission resulting from distributed star-formation in the southwest and northeast regions. Besides the presence of two AGNs, the radio properties of the nuclear region suggest dominant starburst activity (Beswick et al. 2001; Gallimore & Beswick 2004). The K-band emission at both nuclei suggests a dominant population of red supergiants (Tecza et al. 2000). Recent X-ray data suggest that the outflow was indeed symmetric and that there are remnants on both sides of the nuclei.
The HI data show a blue-shifted component along the line of sight extending to $-$300 km$\,$s$^{-1}$ with respect to N1 (Fig. \[hipvr\]). Similarly, there is a low-velocity component in the OH data at $-$120 km$\,$s$^{-1}$ (Fig. \[ohpv\]), which produces a wing on the 1667 MHz line (Fig. \[ohspec\]). In addition, Ohyama et al. (2000) note a $-$250 km$\,$s$^{-1}$ component in the H$_2$ emission. These blue-shifted features could represent line-of-sight outflows and shocks, which are driven into the denser nuclear ISM by the nuclear starburst and are associated with the superwind.
The HI and OH velocity and linewidth data west-south-west of N1, as well as the blue-shifted HI components at N1, clearly confirm that N1 is the origin of the outflows. The lifetime of the starburst has been estimated at $\approx$ 10 Myr (Tecza et al. 2000), which is about 40% of the orbital period of 27 Myr derived above. The orbital motion may thus have resulted in smearing out the X-ray and radio emission regions. In addition, the difference of the emission strength of the northern and southern parts of the radio arc may also have resulted from the “piling up” of emission in the forward direction, which confirms that N1 is moving north. The complicated OH and HI structures southwest of N1 are associated with the outflow into the western cavity and cover a large velocity range, with a mean of about 100 km$\,$s$^{-1}$ below the systemic velocity of N1. There is (marginal) evidence of HI absorption components west-south-west of N1 reaching an extreme of 7750 km$\,$s$^{-1}$. Our HI data also display continuous absorption against the base of the northern radio arc and against the arc components W0, W2 and W4.
The extended absorption
-----------------------
The HI absorption is found across much of the extended radio emission, while the OH is found only in the central region of the source. The HI absorption shows complicated structures with a wide range of velocities and line widths, and relatively low column densities. Some of this material is associated with foreground dust lanes and ejected gas resulting from the interaction.
The extended radio emission in NGC$\,$6240 is associated with the remnants of the two galaxies and the radio arcs resulting from the symmetric superwind outflows emanating from N1 and possibly N2. As discussed above, we find significant absorption and an HI velocity gradient of 1.0 km$\,$s$^{-1}$ pc$^{-1}$ in the NE region, which is associated with the remnant of the northern galaxy. In addition, there is an east-to-west gradient towards the NW region (at PA = -50$^o$), where we find the highest HI velocities in the system.
The column density map of Figure \[hicolumn\] displays widespread HI absorption in the large-scale structure of NGC$\,$6240. Discrete absorption components with column densities in the range of 0.4 - 1.0 x 10$^{22}$ cm$^{-2}$ are found at the continuum component in the NW region and at the W0, W2, and W4 components of the arc structure. The velocities at these components, which are not associated with distinct features in the optical and H$\alpha$, increase towards the south.
The dominant absorption at W0 is caused by a north-south dust-lane passing in front of the continuum structure with an estimated column density of 3.2 x 10$^{21}$ cm$^{-2}$. Images with various optical and X-ray instruments (Max et al. 2005; Komossa et al. 2003) clearly show the presence of this north-south dust-lane, which crosses the NW radio structure at W0 and accounts for an added column density. While the distributed HI in the NW region has a velocity of about 7500 km s$^{-1}$, the dust-lane has a systemic velocity of about 7270 km s$^{-1}$, which is close to that of N1. Furthermore, it has no clear velocity gradient because of its distance from the nuclei.
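For reference, the quoted HI column densities follow from the standard conversion between velocity-integrated optical depth and column density, $N_{\rm HI} = 1.823\times10^{18}\,T_{\rm S}\int\tau\,{\rm d}v$ cm$^{-2}$, with the $T_{\rm S}$ = 100 K adopted for the maps. A minimal sketch (in Python; the only assumption is that the integrated HI optical depths are in km s$^{-1}$, as listed in Table 2) recovers the tabulated value for N1:

```python
# Minimal sketch: HI column density from the velocity-integrated optical depth,
# N_HI = 1.823e18 * T_s * int(tau dv)  [cm^-2], with int(tau dv) in km/s.
# Assumes the spin temperature T_s = 100 K adopted for the column density maps.

def n_hi(int_tau_dv_kms, t_spin=100.0):
    """Return the HI column density in cm^-2."""
    return 1.823e18 * t_spin * int_tau_dv_kms

# Example: the integrated optical depth listed for nucleus N1 (70.4 km/s)
print(f"N_HI(N1) = {n_hi(70.4):.2e} cm^-2")   # ~1.28e22 cm^-2, as in Table 2
```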
The radio continuum and X-ray images also display two extended structures S and SW of nucleus N1 (see Fig \[continuum\]b; Komossa et al. 2003). It is found that a dust-lane structure towards the south divides these two structures. Only weak absorption has been seen against the continuum in this southern region.
The OH and H$_2$O emission
--------------------------
A narrow H$_2$O maser line has been detected towards the nuclear region of NGC$\,$6240 at V$_{\rm LSR}$ = 7565 km s$^{-1}$, located within 3 pc of the continuum peak at N1 (Hagiwara et al. 2003). The occurrence of an H$_2$O maser is rather unusual in a FIR-dominated galaxy such as NGC$\,$6240, which is more likely an OH Megamaser candidate. The OH-MM UGC05101 also hosts a weak H$_2$O maser towards its nuclear region (Zhang et al. 2006). The maser in NGC6240 is redshifted by about 300 km s$^{-1}$ relative to the systemic velocity of N1, while high-velocity OH or HI gas has only been found in the northern region of the source (Figs. \[hipvd\], \[hipvr\], and \[ohpv\]). An association of the maser with the AGN, via shocked outflows or jet-molecular cloud interactions, could account for these discrepant velocities. Examples of other redshifted jet-related masers can be found in the elliptical NGC1052 (100-180 km s$^{-1}$; Claussen et al. 1998) and Mkn348 (130 km s$^{-1}$; Peck et al. 2003). Alternatively, the maser could be associated either with foreground gas infalling towards the nuclear region or with an active nucleus.
The shallow and multi-component Arecibo spectrum of OH in NGC$\,$6240 has been interpreted on the basis of partial infilling of the absorption by emission (Baan et al. 1985). The asymmetries in the spectrum of Fig. \[ohspec\] and the PV diagram of Fig. \[ohpv\] could indeed support this notion. Although such asymmetries would suggest emission infilling at the velocity of N2, there is no direct evidence for this in the data. The bulk of the OH absorption shows an LTE line ratio. Only the western side of the central absorption shows non-LTE ratios that suggest infilling with emission on the low-velocity side of the 1667 MHz line at the velocity of N1. Non-LTE conditions could be caused by the FIR radiation field, which is dominant in NGC6240 and has the right infrared colors for FIR pumping as in OH Megamasers (Baan 1989; Henkel and Wilson 1990). While there would be enough background radio continuum for this purpose in the N1 system, there is no discernible line emission.
Summary
=======
The extended HI and OH absorption against the continuum structure has revealed more of the dynamical and evolutionary properties of the interacting system NGC6240 and complements the evidence obtained at other wavelengths. The radio continuum structure and the associated absorption structure of NGC6240 are in part the result of a superposition of the two galaxies and their constituents. In a simple dynamic model using HI systemic velocities, the northern galaxy with nucleus N2 would be located in front of the southern galaxy with nucleus N1, such that the N1-N2 connecting line would be foreshortened by a factor of 4.3. In this picture N1 would be expected to have the largest absorbing column density, while the central disks of the galaxies would be superposed between the two nuclei.
The radio continuum structure of 15$''$ x 17$''$ (7.6 x 8.6 kpc) peaks at the two nuclei of the interacting galaxies with hybrid starburst and AGN emission, and is surrounded by an extended structure associated with star-formation activity triggered by the interaction. A large-scale but incomplete loop structure on the western side of the source has been associated with a nuclear blowout and outflow from nucleus N1 of the southern galaxy, while traces of a similar structure can also be found at the eastern side of the source.
The HI absorption covers a contiguous 5$''$ x 8$''$ (2.7 x 4.2 kpc) region of the continuum structure and provides a large-scale view of the velocity field across this area. The peak of the HI absorption falls close to nucleus N1, and the HI column density at N1 is 1.26 times that of N2, in agreement with the estimates from X-ray observations.
OH absorption has been found only against the nuclear continuum and extends 2.5$''$ x 2.0$''$ (1.25 x 1.0 kpc). The largest column density of the OH absorption falls north of N1 and about halfway between N1 and N2, a fact that agrees with maps of other molecular emissions. The column densities at N1 and N2 are about 60% of that of the central gas structure.
The HI and OH velocity fields reveal parts of the velocity gradients of the two individual galaxies buried in the central region. Velocity gradients at various locations suggest gas motions resulting from the interaction of the two galaxies. In particular, the location of N1 and the region to its west display blue-shifted (l.o.s.) outflow components, as well as structural components related to the sideways outflow into the western bubble. This evidence clearly confirms the nuclear activity at N1 as the origin of the outflows and the cause of the radio arc, which is consistent with the evidence from X-ray and H$\alpha$ data.
Distinct velocity components are found in the northern region and along the radio structure northwest of the nuclei. A foreground dust lane passes across the northwest radio loop structure. Absorption in more distant continuum components does not reveal a coherent velocity pattern. The large width of the HI absorption line across the central part of the source confirms the violent dynamics of the system. The highest OH velocity widths are found at the central gas deposit, but they are significantly lower than those of HI and therefore less affected by the merger dynamics.
The central gas structure may result from a superposition of the disks of the two galaxies. There may also be accumulation of gas in the center of mass of the dynamic system. The central gas accumulation between the nuclei behaves as an independent structure with a velocity gradient proportional to the velocity difference of the two nuclei, and the gas appears locked into the motion of the system. The HI and OH velocity gradients for the central region are much smaller than that of CO(2$-$1), which may suggest that different observations detect distinctly different scale sizes within these structures.
The OH hyperfine ratio in the absorption region suggests mostly LTE conditions across the nuclear region, except in the western part of the central gas accumulation where non-LTE conditions are found. Non-LTE conditions may suggest that radiative far-infrared pumping actively reduces the absorption on the low-velocity side of the 1667 MHz OH line. No further OH maser emission has been found in the system.
WAB would like to thank Aubrey Haschick (formerly of Haystack Observatory) for constructive support during the early stages of this project.
Baan, W.A., Haschick, A.D., Buckley, D. & Schmelz, J.T. 1985, ApJ 293, 394
Baan, W.A. 1989, ApJ 338, 804
Beswick, R.J., Pedlar, A., Mundell, C.G. & Gallimore, J.F. 2001, MNRAS 325, 151
Bland-Hawthorn, J., Wilson, A.S. & Tully, R.B. 1991, ApJ 371, L19
Boller, Th., Keil, R., Hasinger, G., Costantini, E., Fujimoto, R., Anabuki, N., Lehmann, I. & Gallo, L. 2003, A&A 411, 63
Claussen, M., Diamond, P.J., Braatz, J.A., Wilson, A.S. & Henkel, C. 1998, ApJL 500, L129
Condon, J.J., et al. 1982, ApJ 252, 102
Colbert, J.M.E., Wilson, A.S. & Bland-Hawthorn, J. 1994, ApJ 436, 89
Fosbury, R.A.E. & Wall, J.V. 1979, MNRAS 189, 79
Fried, J.W. & Schulz, H. 1983, A&A 118, 166
Gallimore, J.F. & Beswick, R.J. 2004, AJ 127, 239
Gallimore, J.F., Axon, D.J., O’Dea, C.P., Baum, S.A. & Pedlar, A. 2006, AJ 132, 546
Genzel, R. et al. 1998, ApJ 498, 579
Gerssen, J., van der Marel, R.P., Axon, D., Mihos, J.C., Hernquist, L. & Barnes, J.E. 2004, AJ 127, 75
Hagiwara, Y. 1998, PhD thesis, The Graduate University for Advanced Studies (GUfAS), Japan
Hagiwara, Y., Diamond, P.J. & Myoshi, M. 2003, A&A 400, 457
Iwasawa, K. & Comastri, A. 1998, MNRAS 297, 1219
Joseph, R.D. & Wright, G.S. 1985, MNRAS 214, 87
Heckman, T.M., Armus, L. & Miley, G.K. 1990, ApJS 74, 833
Henkel, C. & Wilson, T.L. 1990, A&A 229, 431
Komossa, S., Burwitz, V., Hasinger, G., Predehl, P., Kaastra, J.S. & Ikebe, Y. 2003, ApJ 582, L15
Lira, P., Ward, M., Zezas, A., et al. 2002, MNRAS 333, 709
Max, C.E., Canalizo, G., Macintosh, B.A., Raschke, L., Whysong, D., Antonucci, R. & Schneider, G. 2005, ApJ 621, 738
Nakanishi, K., Okumura, S.K., Hohno, K., Kawabe, R. & Nakagawa, T. 2005, PASJ 57, 575
Ohyama, Y. et al. 2000, PASJ 52, 563
Peck, A.B., Henkel, C., Ulvestad, J.S., Brunthaler, A., Falcke, H., Elitzur, M., Menten, K.M. & Gallimore, J.F. 2003, ApJ 590, 149
Sakamoto, K., Scoville, N.Z., Yun, M.S., Crosas, M., Genzel, R. & Tacconi, L.J. 1999, ApJ 514, 68
Sanders, D.B., Soifer, B.T., Elias, J.T., Madore, B.F., Mathews, K., Neugebauer, G. & Scoville, N.Z. 1988, ApJ 470, 222
Scoville, N.Z., Yun, M.S. & Bryant, P.M. 1997, ApJ 484, 702
Tacconi, L.J., Genzel, R., Tecza, M., Gallimore, J.F., Downes, D. & Scoville, N.Z. 1999, ApJ 524, 732
Tecza, M., Genzel, R., Tacconi, L.J., Anders, S., Tacconi-Garman, L.E. & Thatte, N. 2000, ApJ 537, 690
van der Werf, P.P., Genzel, R., Krabbe, A., Bleitz, M., Lutz, D., Drapetz, S., Ward, M.J. & Forbes, D.A. 1993, ApJ 405, 522
Zhang, J.S., Henkel, C., Kadler, M., Greenhill, L.J., Nagar, N., Wilson, A.S. & Braatz, J.A. 2006, A&A 450, 933
[lcccccc]{} N1 & 16 50 27.84 & 02 28 57.5 & 108.8&225.9 & 55.7& 88.2\
N2 + N3& 16 50 27.84 & 02 28 58.7 & & & 41.4& 74.1\
NW & 16 50 25.70 & 02 29 00.2 & 6.03 & - & 3.3 & 6.3\
W0 & 16 50 27.43 & 02 29 02.3 & 15.7 & 15.5 & 6.1 & 24.0\
W1 & 16 50 27.18 & 02 29 02.6 & 6.65 & - & 3.4 & 6.4\
W2 & 16 50 27.16 & 02 29 00.2 & 13.8& 14.1 & 5.2 & 16.4\
W3 & 16 50 26.92 & 02 28 56.6 & 11.1& - & 4.4 & 14.4\
W4 & 16 50 27.20 & 02 28 55.1 & 9.8 & 9.7 & 4.4 & 12.1\
S & 16 50 27.78 & 02 28 53.3 & 14.6& 13.8 & 6.2 & 17.9\
E1 & 16 50 28.20 & 02 28 57.1 & 1.5 & - & - & -\
NE & 16 50 28.08 & 02 28 59.0 & 1.8 & - & - & -\
SE & 16 50 27.90 & 02 28 55.1 & 4.0 & 5.7 & - & -\
[lcccccccccc]{} Nucleus N1 & 7295& 909& 0.15 & 70.4$\pm 0.7$ & 1.28 (22)& 7243& 295 & 0.038 & 1.39$\pm 0.1$ & 6.55 (15)\
Central Peak & 7295& 913& 0.12 & 70.4$\pm 0.7$ & 1.28 (22)& 7274& 406 & 0.063 & 2.29$\pm 0.2$ & 1.1 (16)\
Nucleus N2 & 7339& 870& 0.11 & 55.5$\pm 0.5$ & 1.01 (22)& 7363& 406 & 0.035 & 1.28$\pm 0.1$ & 6.03 (15)\
NE corner & 7600& 303& 0.52 & 25.6$\pm 1.5$ & 4.67 (21)& - & - & - & & -\
NW comp & 7513& 606& 0.07 & 34.2$\pm 1.3$ & 6.23 (21)& - & - & - & & -\
Dust lane & 7270& 800& 0.05 & 51.7$\pm 1.2$ & 9.42 (21)& - & - & - & & -\
E1 comp & 7530& 380& 0.04 & 46.8$\pm 1.3$ & 8.55 (21)& - & - & - & & -\
W2 comp & 7034& 217& 0.04 & 21.4$\pm 1.5$ & 3.90 (21)& - & - & - & & -\
W4 comp & 7687& 390& 0.07 & 42.7$\pm 1.3$ & 7.80 (21)& - & - & - & & -\
![Continuum structure at L-band towards NGC6240. [*Upper panel*]{}: the 1420MHz continuum emission with contour levels of -1 and 1 to 128 by factors of two times 1.2 mJybeam$^{-1}$. The peak in the map is 108.8 mJybeam$^{-1}$. [*Lower panel*]{}: the 1666MHz continuum emission with contour levels of -1 and 1 to 64 by factors of two times 1.1 mJybeam$^{-1}$. The peak in the map is 55.7 mJybeam$^{-1}$. Labelling of components according to the nomenclature of Colbert et al. (1994).[]{data-label="continuum"}](f1.eps){width="7.5cm"}
![Spectral signatures of the HI absorption in NGC$\,$6240. The zeroth moment map of integrated HI absorption is presented in the central frame and is similar to Figure \[hicolumn\]. The spectra at seven locations have velocity tick marks from 6800 to 7800 km s$^{-1}$. We note that the significance of the spectral profiles is low at some locations in the column density maps.[]{data-label="hispec"}](f2.eps){width="16.5cm"}
![The HI velocity-position diagrams in the nuclear region for two declinations. The systemic velocities of N1 and N2 are 7258 and 7440 km s$^{-1}$. [*Upper diagram:*]{} Close to nucleus N2 at declination 02 28 59.9. [*Lower diagram:*]{} Close to nucleus N1 at declination 02 28 57.2. For an rms of 0.45 mJybeam$^{-1}$, the contour levels are at 0.5, 1, 2, 4, 6, 8, 10, 12, 14, 16 mJybeam$^{-1}$. The declination-velocity locations of the two nuclei have been marked in the diagrams. Features at the lowest contours have a marginal significance. []{data-label="hipvd"}](f3a.eps "fig:"){width="7cm"}\
![](f3b.eps "fig:"){width="7cm"}
![The HI velocity-position diagrams in the nuclear region for three RA values. The systemic velocities of N1 and N2 are 7258 and 7440 km s$^{-1}$. [*Left diagram:*]{} Just east of nucleus N2 at RA = 16 50 27.88. [*Middle diagram:*]{} Just west of nucleus N1 at RA = 16 50 27.82. [*Right diagram:*]{} West of nucleus N1 and N2 at RA = 16 50 27.78. For an rms in these maps of 0.45 mJybeam$^{-1}$, the contour levels are at 0.5, 1, 2, 4, 6, 8, 10, 12, 14, 16 mJybeam$^{-1}$. The declination-velocity locations of the two nuclei have been marked in the diagrams. Features at the lowest contours have a marginal significance. []{data-label="hipvr"}](f4a.eps "fig:"){width="5.2cm"} ![](f4b.eps "fig:"){width="5.2cm"} ![](f4c.eps "fig:"){width="5.2cm"}
![HI velocity moment maps. Positions of N1 and N2 are marked by crosses. [*Upper diagram*]{}: HI 1st moment map. Contours are plotted at 7200, 7250, 7300, 7350, 7400, 7450, 7500, 7550, and 7600 km s$^{-1}$. The grey-scale is from 6900 to 7800 km s$^{-1}$. [*Lower diagram*]{}: HI 2nd moment map, showing HI velocity dispersion. Contours are plotted at 10, 50, and 100 to 500 by 50 km s$^{-1}$. The grey-scale starts at 0 and ends at 500 km s$^{-1}$. The central region has velocity half-widths greater than 150 km s$^{-1}$.[]{data-label="himom12"}](f5a.eps "fig:"){width="8cm"}\
![HI absorption column density map. Contours for N$_{\rm
H}$ are 0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5, and 1.64 and grey scale from 0.1 to 1.64 in units of 7.78 $\times$ 10$^{21}$ cm$^{-2}$ assuming T$_S$ = 100 K.[]{data-label="hicolumn"}](f6.eps){width="8cm"}
![The hyperfine line ratio of the 1667 and 1665 MHz absorption lines across the nuclear region. The location of the two nuclei are indicated in the diagram. The contour levels are 0.8$-$1.8 with intervals of 0.2. The lowest contour of 0.8 is at the west side of the central region and subsequent contours are increasingly further out towards the two nuclei. Values of 1.6 are found at N1 and 1.2 at N2. The highest optically$-$thin ratio of 1.8 is found north of N1, while south of N1 the ratio decreases again to 1.4. []{data-label="ohrat"}](f8.eps){width="7.5cm"}
![Position-velocity map of the 1667/1665 MHz OH absorption at P.A.=20$\degr$ along the N1$-$N2 line. The spectrum has been inverted for presentation. The RA - velocity position of each of the nuclei has been indicated in the diagram. For an rms of 0.29 mJybeam$^{-1}$, the contours are plotted at -1, 1, 2, 3, 4, 5, 6, 7, and 8 $\times$ 0.55 mJybeam$^{-1}$. Features at the lowest contour have marginal significance.[]{data-label="ohpv"}](f9.eps){width="8cm"}
![First and second moment maps of the OH 1667 MHz line. The locations of the twin nuclei are marked by crosses. [*Upper diagram*]{} First moment: The line velocity contours superposed on the grey-scale are spaced linearly by 20 km s$^{-1}$, beginning at 7240 km s$^{-1}$ and ending at 7360 km s$^{-1}$. The peak value is 7393 km s$^{-1}$ and the values at N1 and N2 are 7275 and 7370 km s$^{-1}$. The grey-scale range is from 7150 to 7400 km s$^{-1}$. [*Lower diagram*]{} Second moment: The line width contours are at 20 to 80 km s$^{-1}$ with increments of 10 km s$^{-1}$. The central region has a line width of 80+ km s$^{-1}$. []{data-label="ohmom12"}](f10a.eps "fig:"){width="7.5cm"}\
![](f10b.eps "fig:"){width="7.5cm"}
![OH column density map of 1667 MHz. The integrated column density contours are 1, 2, 4, 6, 8, 10, and 12 times 8.62 x 10$^{14}$ cm$^{-2}$ using an excitation temperature of $T_{ex}$ = 20 K. The grey scale displays the corresponding optical depth on a scale of 0 to 0.06. The peak optical depth is 0.063 with a column density of 1.08 x 10$^{16}$ cm$^{-2}$. []{data-label="ohcolumn"}](f11.eps){width="7.5cm"}
---
abstract: 'We consider the top quark charge asymmetry in the process $pp \to t\bar{t}+\gamma$ at the 13 TeV LHC. The genuine tree level asymmetry in the $q\bar{q}$ channel is large with about $-12\%$. However, the symmetric $gg$ channel, photon radiation off top quark decay products, and higher order corrections wash out the asymmetry and obscure its observability. In this work, we investigate these effects at next-to-leading order QCD and check the robustness of theoretical predictions. We find a sizable perturbative correction and discuss its origins and implications. We also study dedicated cuts for enhancing the asymmetry and show that a measurement is possible with an integrated luminosity of $150\,{\mathrm{fb}}^{-1}$.'
author:
- Jonas Bergner
- Markus Schulze
bibliography:
- 'acbib.bib'
title: 'The top quark charge asymmetry in $t\bar{t}\gamma$ production at the LHC'
---
Introduction {#sec:intro}
============
The charge asymmetry in the fermion annihilation process $f\bar{f} \to f' \bar{f}'$ is a well-studied phenomenon of Quantum Electrodynamics (QED) [@Berends:1973fd; @Berends:1982dy] and Quantum Chromodynamics (QCD) [@Nason:1989zy; @Beenakker:1990maa]. Even though $\mathrm{C}$ (charge) and $\mathrm{P}$ (parity) are good symmetries of QED and QCD, the interference of charge-odd with charge-even amplitudes leads to terms that are asymmetric under $p_{f'} \leftrightarrow p_{\bar{f}'}$. For $2\to2$ kinematics the $\mathrm{C}$-odd interference appears at next-to-leading order (NLO) for the first time, causing a suppression by one power of the coupling constant and therefore yielding a numerically small asymmetry.
At particle colliders where the initial state is not charge symmetric (e.g. at $e^+ e^-$ or $p\bar{p}$ colliders), a non-vanishing charge asymmetry ${A_\mathrm{C}}$ translates into a $\mathrm{P}$-violating forward backward asymmetry ${A_\mathrm{FB}}$. This feature received a lot of attention in the case of top quark pair production at the Tevatron. The NLO QCD theory prediction of ${A_\mathrm{FB}}^{{{t\bar{t}}}} \approx 5\%$ [@Kuhn:1998jr] was in long lasting tension with the experimentally measured values by CDF and DZero, which were about two standard deviations higher [@Peters:2012ji]. The dust settled after more data was collected and NNLO QCD and NLO electroweak corrections [@Hollik:2011ps; @Bernreuther:2012sx; @Czakon:2014xsa] were accounted for in the theory calculations. Now, the best prediction yields ${A_\mathrm{FB}}^{{t\bar{t}}}=9.5\pm 0.7\%$ [@Czakon:2014xsa], which has to be compared to ${A_\mathrm{FB}}^{{t\bar{t}}}=10.6\pm 3\%$ from DZero [@Abazov:2014cca] and ${A_\mathrm{FB}}^{{t\bar{t}}}=16.4\pm 5\%$ from CDF [@Aaltonen:2012it].
Recently, the top quark charge asymmetry enjoyed a revival at the Large Hadron Collider (LHC) because the delicate interference effects can be used as a sensitive probe for new physics searches, see e.g. Refs. [@Rodrigo:2010gm; @Kuhn:2011ri; @Han:2012qu; @Ko:2012ud; @Hagiwara:2012gy] and references in [@Aguilar-Saavedra:2014kpa]. The charge symmetric initial state at the LHC does, however, not produce a forward-backward asymmetry and makes the effect harder to capture. An observable effect can still be obtained thanks to the different parton distributions of valence and sea quarks in the proton, which, in conjunction with a non-zero ${A_\mathrm{C}}$ cause anti-top quarks to be scattered more centrally than top quarks. The canonical definition of the charge asymmetry at the LHC is $$\begin{aligned}
\label{eq:AC}
& {A_\mathrm{C}}& = \frac{\sigma^\mathrm{asymm.}}{\sigma^\mathrm{symm.}}
\quad\mathrm{with}
\\
&\sigma^\mathrm{asymm.} &= \sigma(\Delta y > 0)-\sigma(\Delta y < 0),
\nonumber \\
&\sigma^\mathrm{symm.}&=\sigma(\Delta y > 0)+\sigma(\Delta y < 0),
\nonumber\end{aligned}$$ where $\Delta y = |y_t| - |y_{\bar{t}}|$ is the difference of the absolute top and anti-top quark rapidities. For $pp\to{{t\bar{t}}}$ at $\sqrt{s}=8$ TeV, the best prediction ${A_\mathrm{C}}^{{t\bar{t}}}= 0.9\%$ [@Czakon:2017lgo] is in agreement with current experimental measurements [@Aad:2015noh; @Khachatryan:2015oga] that are, however, also compatible with zero given the smallness of the effect.
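For concreteness, the definition in Eq. (\[eq:AC\]) can be evaluated directly on a sample of reconstructed top and anti-top rapidities by counting (weighted) events with $\Delta y>0$ and $\Delta y<0$. The following sketch is illustrative only; the toy event list is a placeholder, not simulation output:

```python
# Illustrative only: A_C = (N(dy>0) - N(dy<0)) / (N(dy>0) + N(dy<0))
# evaluated from per-event (weighted) top and anti-top rapidities.

def charge_asymmetry(events):
    """events: iterable of (y_t, y_tbar, weight)."""
    num = den = 0.0
    for y_t, y_tbar, w in events:
        dy = abs(y_t) - abs(y_tbar)
        if dy == 0.0:
            continue
        num += w if dy > 0 else -w
        den += w
    return num / den

# toy example (hypothetical numbers)
toy = [(0.3, 1.1, 1.0), (1.8, 0.2, 1.0), (-0.5, 0.9, 1.0)]
print(f"A_C = {charge_asymmetry(toy):+.3f}")
```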
An interesting twist enters the discussion when studying top quark pair production in association with massless gauge bosons. Hadronic production of ${{t\bar{t}}}+\mathrm{jet}$ is one example that has been studied extensively [@Dittmaier:2008uj; @Melnikov:2010iu; @Berge:2012rc; @Alte:2014toa]. $\mathrm{C}$ asymmetric interference terms enter already at leading order causing a sizable negative value of the asymmetry. Somewhat surprisingly, the inclusion of higher order corrections shifts the LO value by more than 100% [@Dittmaier:2008uj] into the positive direction. This feature appears in contrast to ${{t\bar{t}}}$ production where the asymmetry at NLO does not receive large corrections at NNLO [@Czakon:2014xsa]. In Ref. [@Melnikov:2010iu] a reasoning for this pattern was given based on a separation of [*soft*]{} and [*hard*]{} degrees of freedom that enter the asymmetry at different orders of perturbation theory.
In this work, we investigate the charge asymmetry for top quark pair production in association with a photon at the LHC. An experimental measurement of this quantity was not undertaken at the Tevatron and is not yet achieved at the LHC. This is surprising since ${{t\bar{t}}}+\gamma$ production is particularly interesting for probing physics beyond the Standard Model. For example, in a pioneering study the authors of Ref. [@Aguilar-Saavedra:2014vta] investigated the use of ${A_\mathrm{C}}^{{{t\bar{t}}}\gamma}$ to resolve cancellation mechanisms between up-type and down-type initial states arising from possible new physics contamination of the SM signal. Here, we elevate previous studies at LO to NLO QCD precision. This is motivated by a foreseeable measurement in the near future and by the importance higher order corrections played in the similar process $pp \to{{t\bar{t}}}+\mathrm{jet}$. A first NLO QCD calculation for ${A_\mathrm{C}}$ in ${{t\bar{t}}}+\gamma$ production was presented in Ref. [@Maltoni:2015ena] for stable top quarks. In this work we put special attention on a realistic description of the process, accounting for the full decay chain in the lepton+jets final state, $b \bar{b} \ell \nu j j+\gamma$, including all spin correlations, photon emission off all charged particles, and NLO QCD corrections in production and decay. We demonstrate that each of these features is crucial for a reliable description of the charge asymmetry in this process. Moreover, we devise dedicated selection cuts to enhance the asymmetry while simultaneously maintaining the statistical significance of a measurement.
Setup {#sec:setup}
=====
We consider the process $pp \to {{t\bar{t}}}+\gamma \to b \bar{b} \ell \nu j j+\gamma$ at $\sqrt{s}=13$ TeV, summing over $\ell=e^+,e^-,\mu^-$ and $\mu^+$. In this final state, the top quark momenta can be reconstructed unambiguously from the decay products since the neutrino momentum is constrained by momentum conservation. Hence, the top quark rapidities in Eq. (\[eq:AC\]) can be calculated unambiguously. Top quarks are treated in the narrow-width approximation (NWA). We require intermediate on-shell states which result from a $q_t^2$-integration over their (undistorted) Breit-Wigner propagator, $$\int \!\! \mathrm{d} q_t^2
\left| \frac{1}{(q_t^2-m_t^2 + \mathrm{i} \Gamma_t m_t)} \right|^2
\!\to\! \frac{\pi }{m_t \Gamma_t} \int \!\! \mathrm{d} q_t^2
\delta(q_t^2-m_t^2)$$ in the limit $\Gamma_t/m_t \to 0$. It is well known that this treatment leads to a parametric approximation of the cross section up to terms $\mathcal{O}(\Gamma_t \big/ m_t)$. In this case, the amplitude for ${{t\bar{t}}}$ production factorizes according to $$\begin{aligned}
\label{eq:PrDk}
\mathcal{M}^\mathrm{NWA}_{ij\to t\bar{t}\to b \bar{b} f\bar{f}f'\bar{f}'}
\;=\;
\mathcal{P}_{ij \to t\bar{t}} \otimes \mathcal{D}_{t\to b f\bar{f}}
\otimes \mathcal{D}_{\bar{t}\to \bar{b} f'\bar{f}'}~,\end{aligned}$$ where $\mathcal{P}_{ij \to t\bar{t}}$ describes the $t\bar{t}$ production process and $\mathcal{D}_{t\to b f\bar{f}}$ the top quark decay dynamics. The symbol $\otimes$ indicates the inclusion of spin correlations.
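The narrow-width replacement above can also be checked numerically: for $\Gamma_t \ll m_t$ the Breit-Wigner integral of a smooth test function approaches $\pi/(m_t\Gamma_t)$ times its value at $q_t^2=m_t^2$. A minimal sketch (Python with scipy; the test function and integration range are arbitrary choices, the mass and width are those used below in the Setup):

```python
# Numerical check of the narrow-width replacement:
# int dq^2 f(q^2) / |q^2 - m^2 + i*Gamma*m|^2  ->  pi/(m*Gamma) * f(m^2)
# for Gamma/m -> 0, here with an arbitrary smooth test function f.
import numpy as np
from scipy.integrate import quad

m, gamma = 173.0, 1.4                       # GeV (top mass and width, cf. Setup)
f = lambda q2: np.exp(-q2 / (2 * m**2))     # arbitrary smooth test function

bw = lambda q2: f(q2) / ((q2 - m**2)**2 + (gamma * m)**2)
lhs, _ = quad(bw, 0.0, (4 * m)**2, points=[m**2])
rhs = np.pi / (m * gamma) * f(m**2)
print(lhs / rhs)   # -> 1 up to O(Gamma_t/m_t) corrections
```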
The factorization for the ${{t\bar{t}}}+\gamma$ process is obtained from Eq. (\[eq:PrDk\]) by inserting a photon in any of the three terms, unfolding it into a sum of three terms at $\mathcal{O}(\alpha)$. As a consequence, ${{t\bar{t}}}+\gamma$ production is governed by two very different dynamics: (i) photons can be emitted in the hard scattering process of ${{t\bar{t}}}$ production, followed by the top decays; and (ii) photons can be emitted off the top quark decay products, preceded by ${{t\bar{t}}}$ production. We refer to these two mechanisms as [*photon radiation in production*]{} and [*radiative top quark decays*]{}, respectively. An equivalent way of phrasing this circumstance is: the photon can be radiated either before or after the top quarks go on-shell.
\[tab:1\]
cuts Eq. (\[eq:acc\_cuts\]) cuts Eq. (\[eq:acc\_cuts\])+(\[eq:supp\_cuts\]) cuts Eq. (\[eq:acc\_cuts\])+(\[eq:supp\_cuts\])+$|y_\gamma|>1.0$
----------------------------------------------- -------------------------------- ------------------------------------------------- ------------------------------------------------------------------
$\sigma^\mathrm{symm.}_\mathrm{LO}$   $837\,{\mathrm{fb}}\pm 25\%$   $301\,{\mathrm{fb}}\pm 28\%$   $126\,{\mathrm{fb}}\pm 25\%$
$\sigma^\mathrm{asymm.}_\mathrm{LO}$   $-10.6\,{\mathrm{fb}}\pm 21\%$   $-9.2\,{\mathrm{fb}}\pm 22\%$   $-5.6\,{\mathrm{fb}}\pm 21\%$
$\sigma^\mathrm{symm.}_\mathrm{NLO}$   $1708\,{\mathrm{fb}}\pm 19\%$   $647\,{\mathrm{fb}}\pm 21\%$   $287\,{\mathrm{fb}}\pm 22\%$
$\sigma^\mathrm{asymm.}_\mathrm{NLO}$   $-7.8\,{\mathrm{fb}}\pm 6\%$   $-6.4\,{\mathrm{fb}}\pm 6\%$   $-4.8\,{\mathrm{fb}}\pm 2\%$
${A_\mathrm{C}}^\mathrm{NLO}$   $-0.5(1)\%$   $-1.0(2)\%$   $-1.7(4)\%$
$\mathcal{S}_\mathrm{NLO}$   $2.3(3)\sigma$   $3.1(4)\sigma$   $3.5(4)\sigma$
To account for a finite detector volume and resolution we require $$\begin{aligned}
\label{eq:acc_cuts}
&{p_{\mathrm{T}}}^\gamma \ge 20\,{\mathrm{GeV}},
\quad
|y_\gamma| \le 2.5,
\quad
R_{\gamma \ell} \ge 0.2,
\quad
R_{\gamma \mathrm{jet}} \ge 0.2,
\nonumber
\\
&{p_{\mathrm{T}}}^\ell \ge 15\,{\mathrm{GeV}},
\quad
|y_\ell| \le 5.0,
\quad
{p_{\mathrm{T}}}^\mathrm{miss} \ge 20\,{\mathrm{GeV}},
\nonumber
\\
&{p_{\mathrm{T}}}^\mathrm{jet} \ge 15\,{\mathrm{GeV}},
\quad
|y_\mathrm{jet}| \le 5.0.\end{aligned}$$ We define jets by the anti-$k_\mathrm{T}$ jet algorithm [@Cacciari:2008gp] with $R=0.3$ and request at least two $b$-jets. Photons in a hadronic environment are defined through the smooth-cone isolation [@Frixione:1998jh] with $R=0.2$. We perform our calculations within the TOPAZ framework described in Ref. [@Melnikov:2011ta]. The input parameters to our calculation are $$\begin{aligned}
& \alpha=1/137,
\quad
G_\mathrm{F} = 1.16639 \times 10^{-5} \, {\mathrm{GeV}}^{-2},
\nonumber
\\
&m_t = 173\, {\mathrm{GeV}},
\quad
M_W = 80.419\, {\mathrm{GeV}},\end{aligned}$$ from which follows $$\begin{aligned}
&\Gamma_t^\mathrm{LO}=1.495\,{\mathrm{GeV}},
\quad
\Gamma_t^\mathrm{NLO}=1.367\, {\mathrm{GeV}},
\nonumber
\\
&\Gamma_W^\mathrm{LO}=2.048\, {\mathrm{GeV}},
\quad
\Gamma_W^\mathrm{NLO}=2.118\, {\mathrm{GeV}}\end{aligned}$$ at $\mu_\mathrm{R}=m_t$. We use the parton distribution functions NNPDF31\_nlo\_as\_0118\_luxqed [@Bertone:2017bme], with the corresponding running of the strong coupling constant $\alpha_s$.
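The quoted value of $\Gamma_t^\mathrm{LO}$ can be cross-checked from the standard Born-level formula $\Gamma_t = G_F m_t^3/(8\sqrt{2}\pi)\,(1-M_W^2/m_t^2)^2(1+2M_W^2/m_t^2)$, neglecting the $b$-quark mass. A quick evaluation with the inputs above (a cross-check only, not part of the TOPAZ setup) reproduces 1.495 GeV:

```python
# Cross-check of the LO top-quark width from the Born formula
# Gamma_t = G_F m_t^3 / (8*sqrt(2)*pi) * (1 - x)^2 * (1 + 2x),  x = (M_W/m_t)^2,
# neglecting the b-quark mass and QCD corrections.
import math

G_F, m_t, M_W = 1.16639e-5, 173.0, 80.419   # GeV units as in the Setup
x = (M_W / m_t)**2
gamma_lo = G_F * m_t**3 / (8 * math.sqrt(2) * math.pi) * (1 - x)**2 * (1 + 2 * x)
print(f"Gamma_t^LO = {gamma_lo:.3f} GeV")   # ~1.495 GeV, matching the value quoted above
```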
Higher Order Corrections to ${A_\mathrm{C}}$ {#sec:corrections}
============================================
In the following we discuss the impact of higher order QCD corrections to the charge asymmetry and the cross section of $pp \to {{t\bar{t}}}+\gamma$. Applying the cuts in Eq. (\[eq:acc\_cuts\]) and setting renormalization and factorization scales to $\mu_0=m_t$, we find $$\begin{aligned}
{A_\mathrm{C}}^\mathrm{LO} = -1.3\%,
\quad
{A_\mathrm{C}}^\mathrm{NLO} = -0.5\%.\end{aligned}$$ The leading order value of $-1.3\%$ arises from the asymmetric contribution in $q\bar{q} \to {{t\bar{t}}}+\gamma$, which has a genuine asymmetry of $-12\%$ that is diluted by the symmetric $gg$ channel and radiative decays. Comparing LO and NLO asymmetries, the relative shift by more than $-60\%$ is striking and might raise questions about the perturbative convergence of this quantity. We therefore investigate the different perturbative corrections in greater detail and note that ${A_\mathrm{C}}$ by itself is not an observable. Only $\sigma^\mathrm{asymm.}$ and $\sigma^\mathrm{symm.}$, i.e. the numerator and denominator of ${A_\mathrm{C}}$ are experimentally accessible quantities. The term $\sigma^\mathrm{symm.}$ is just the total cross section, which is known to receive large corrections (see e.g. Refs. [@Melnikov:2011ta; @PengFei:2009ph]). We confirm this feature within our setup and find the leading and next-to-leading order cross sections $$\begin{aligned}
\label{eq:sigmasymm}
\sigma_\mathrm{LO} &=& 837\,{\mathrm{fb}}\pm 25\%,
\nonumber \\
\sigma_\mathrm{NLO} &=& 1708\,{\mathrm{fb}}\pm 19\%.\end{aligned}$$ Renormalization and factorization scales are varied by a factor of two around the central scale $\mu_0$ and the respective cross sections are symmetrized. The large perturbative correction of $104\%$ and the marginal reduction of scale uncertainty is a combination of various effects: Firstly, the dominant $gg$ channel receives a sizable ($\approx \! +80\%$) perturbative correction. The main contribution arises from tree level type ${{t\bar{t}}}\gamma+g$ configurations, where the gluon constitutes a hard resolved jet (similar features were observed in Ref.[@Melnikov:2011ta]). Secondly, the kinematics of the light jets from $W\to jj$ are significantly restricted at leading order because of jet cuts and the jet algorithm. This restriction is lifted when an additional jet is allowed at next-to-leading order. It affects all partonic channels and leads to yet another increase of the NLO cross section by about $+20\%$. While the size of this kinematic effect cannot be estimated with scale variation, we believe it is sufficiently saturated at NLO, yielding a realistic and reliable prediction. Lastly, the $qg$ initial state enters at NLO for the first time and is responsible for the sizable residual scale dependence of the NLO cross section in Eq. (\[eq:sigmasymm\]).
{width="50.00000%"} {width="50.00000%"}
Let us now discuss the perturbative correction of the numerator of the asymmetry. We find a very different behavior $$\begin{aligned}
\label{eq:sigasymsym}
\sigma^\mathrm{asymm.}_\mathrm{LO} &=& -10.7\,{\mathrm{fb}}\pm 20\%,
\nonumber \\
\sigma^\mathrm{asymm.}_\mathrm{NLO} &=& -8.0\,{\mathrm{fb}}\pm 7\%.\end{aligned}$$ In contrast to $\sigma^\mathrm{symm.}$ in Eq. (\[eq:sigmasymm\]), the asymmetric piece receives a moderate $-25\%$ correction and enjoys a significantly reduced scale dependence. Hence, the perturbative convergence seems under good control. This conclusion is further supported by another observation. Adopting the reasoning of Ref. [@Melnikov:2010iu] for ${{t\bar{t}}}+\mathrm{jet}$ production, the asymmetry is governed by [*soft*]{} and [*hard*]{} degrees of freedom, which enter at different stages of perturbation theory. In the limit where the cross section is dominated by logarithms of $p_\mathrm{T,cut}^{\gamma} \big/ m_t$ one finds [@Melnikov:2010iu] $$\begin{aligned}
\label{eq:Amechanism}
A_{q\bar{q}\to{{t\bar{t}}}\gamma}^\mathrm{NLO} \approx A_{q\bar{q}\to{{t\bar{t}}}\gamma}^\mathrm{LO} + A_{q\bar{q}\to{{t\bar{t}}}}^\mathrm{NLO}.\end{aligned}$$ The [*soft*]{} degrees of freedom are contained in $A_{q\bar{q}\to{{t\bar{t}}}\gamma}^\mathrm{LO}$ because it is generated dominantly by a soft photon exchange. Beyond LO, new asymmetric contributions appear from [*hard*]{} exchanges that are related to the asymmetry in ${{t\bar{t}}}$ production $A_{q\bar{q}\to{{t\bar{t}}}}^\mathrm{NLO}$. To study these dynamics for our case, we perform an independent NLO QCD calculation for $pp\to{{t\bar{t}}}\to b \bar{b} \ell \nu j j$ at $\sqrt{s}=13$ TeV, using the same cuts as in Eq. (\[eq:acc\_cuts\]). We find $A_{q\bar{q}\to{{t\bar{t}}}}^\mathrm{NLO}=+2.9\%$. Together with $A_{q\bar{q}\to{{t\bar{t}}}\gamma}^\mathrm{LO}=-12.0\%$ and $A_{q\bar{q}\to{{t\bar{t}}}\gamma}^\mathrm{NLO}=-8.9\%$, this nicely supports the prediction in Eq. (\[eq:Amechanism\])[^1]. Consequently, we follow the arguments presented in Ref. [@Melnikov:2010iu] and suggest that even higher order corrections (i.e. beyond NLO QCD) should stabilize the prediction of\
$\sigma^\mathrm{asymm.}$ and will not drastically shift its value.
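Spelling out the numerical check of Eq. (\[eq:Amechanism\]) with the values just quoted: $$A_{q\bar{q}\to{{t\bar{t}}}\gamma}^\mathrm{LO} + A_{q\bar{q}\to{{t\bar{t}}}}^\mathrm{NLO}
= -12.0\% + 2.9\% = -9.1\% \;\approx\; -8.9\% = A_{q\bar{q}\to{{t\bar{t}}}\gamma}^\mathrm{NLO}\,.$$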
From these studies we conclude that the genuinely asymmetric cross section of ${{t\bar{t}}}+\gamma$ production is perturbatively under good control, whereas symmetric contributions from $gg$ and $qg$ initial states are converging slower with sizable scale dependence. The resulting relative uncertainty for the asymmetry $$\label{eq:deltaAC}
\frac{\delta{A_\mathrm{C}}}{{A_\mathrm{C}}} = \sqrt{ \left(\frac{\delta\sigma^\mathrm{asymm.}}{\sigma^\mathrm{asymm.}}\right)^2
+ \left(\frac{\delta\sigma^\mathrm{ symm.}}{\sigma^\mathrm{ symm.}}\right)^2 }$$ is therefore dominated by $\delta\sigma^\mathrm{symm.}\big/\sigma^\mathrm{symm.}=\pm 19\%$. The significance of a measurement (assuming statistical uncertainties only) is $$\begin{aligned}
\mathcal{S} &=& |{A_\mathrm{C}}| \big/ \delta N
\quad \mathrm{with} \quad
\delta N = 1 \big/ \sqrt{N}
\nonumber \\
&=& |{A_\mathrm{C}}| \, \sqrt{\mathcal{L} \times \sigma_\mathrm{NLO}}.\end{aligned}$$ The corresponding numerical values can be found in the first column of Table \[tab:1\] and are illustrated in the first column of Fig. \[fig2\] (left) for an integrated luminosity of $150\,{\mathrm{fb}}^{-1}$.
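The entries of Table \[tab:1\] follow from these relations up to rounding. A minimal sketch (Python), using $\sigma^\mathrm{asymm.}$, $\sigma^\mathrm{symm.}$ and their relative scale uncertainties from the first column together with $\mathcal{L}=150\,{\mathrm{fb}}^{-1}$:

```python
# Sketch: statistical significance S = |A_C| * sqrt(L * sigma_NLO) and the
# relative uncertainty of A_C propagated in quadrature, Eq. (eq:deltaAC),
# using the NLO numbers from the first column of Table 1.
import math

lumi = 150.0                    # fb^-1
sig_sym, dsym = 1708.0, 0.19    # fb, relative scale uncertainty
sig_asym, dasym = -7.8, 0.06    # fb, relative scale uncertainty

a_c = sig_asym / sig_sym
d_ac_rel = math.hypot(dasym, dsym)
signif = abs(a_c) * math.sqrt(lumi * sig_sym)

print(f"A_C = {100*a_c:.2f}% +- {100*abs(a_c)*d_ac_rel:.2f}%")   # ~ -0.46% +- 0.09%
print(f"S   = {signif:.1f} sigma")                               # ~ 2.3 sigma
```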
{width="40.00000%"} {width="59.00000%"}
Analysis and Results {#sec:results}
====================
We proceed with a study of dedicated phase space cuts to enhance the charge asymmetry. The basic idea is to isolate asymmetric contributions while suppressing symmetric ones. The dominant asymmetric contribution originates from quark anti-quark annihilation with photon radiation in the production, $q\bar{q} \to {{t\bar{t}}}+ \gamma$. Large symmetric contributions arise from $gg$ scattering and photon emission in the top quark decay stage.
A suppression of the $gg$ channel relative to the $q\bar{q}$ channel is notoriously difficult to achieve. However, we find that one can use shape differences in the photon rapidity distributions to separate the two channels. Fig. \[fig1\] (left, upper pane) illustrates this feature for the normalized distributions. The lower pane shows the relative percentage of $q\bar{q}$ versus $gg$ as a function of the photon rapidity at NLO QCD. It is evident that an increasing lower cut value $$\label{eq:yphcut}
|y_\gamma| \ge y_\gamma^\mathrm{cut}$$ is enhancing this relative percentage, while, at the same time, reducing the overall cross section.
The second source of large symmetric contributions is radiative top quark decays. Splitting the cross section into photon radiation in production (prod) and radiative decays (dec), we find for the cuts in Eq. (\[eq:acc\_cuts\]) $$\sigma_\mathrm{NLO}^{{{t\bar{t}}}+\gamma}= 526\,{\mathrm{fb}}\,(\mathrm{prod}) + 1182\,{\mathrm{fb}}\,(\mathrm{dec}) = 1708\,{\mathrm{fb}}.$$ Almost $70\,\%$ of the total rate is due to ${{t\bar{t}}}$ production followed by a radiative top quark decay. This is a somewhat counter-intuitive picture, as one typically imagines the ${{t\bar{t}}}+\gamma$ final state as being produced altogether in the hard collision. We suppress radiative top quark decays using invariant masses of the decay products. To start, we associate the two $b$-jets with the [*correct*]{} side of the decay chain ($b$-jets belong to the $t$ decay chain, $\bar{b}$-jets belong to the $\bar{t}$ decay chain). This is achieved by pairing the lepton with the $b$-jet that minimizes $\{ m_{\ell b_1},m_{\ell b_2}\}$. The other $b$-jet is associated with the hadronic decay chain. Subsequently, we consider the minima $$\begin{aligned}
\label{eq:supp_cuts}
\min_{x \in D_i \cup D_{i\gamma}} \bigg\{ m_x^2 - m_t^2 \bigg\}, \quad i=\mathrm{\ell,h}\end{aligned}$$ for $D_\ell=\{ b\ell\nu, b\ell\nu j \}$, $D_{\ell \gamma}=\{ b\ell\nu\gamma, b\ell\nu j\gamma \}$, $D_\mathrm{h}=\{ b j j, b j j j \}$, and $D_{\mathrm{h} \gamma}=\{ b j j\gamma, b j j j \gamma\}$. If the kinematics is such that the minimum is attained for $x \in D_{\ell\gamma} $ or $x \in D_{\mathrm{h}\gamma} $, we consider it a radiative top quark decay event and reject it. All other events are kept. We find that these selection criteria are robust under QCD corrections, and we believe that the impact of off-shell effects is small because a smearing of the invariant masses around the top quark Breit-Wigner peak will not significantly change the minimization procedure. This assertion can, in principle, be checked thanks to the off-shell calculation presented in Ref. [@Bevilacqua:2018woc]. Fig. \[fig1\] (right) shows the relative contribution of photon emission in the production and radiative top quark decays. The upper pane shows the two contributions without the cuts of Eq. (\[eq:supp\_cuts\]), the lower pane shows the contributions when the cuts are included. It is evident that this procedure works very efficiently in selecting photon emission in production. Moreover, the rapidity distribution is flat and remains flat after the cuts. Hence, the cuts for $gg$ suppression (Eq. (\[eq:yphcut\])) and for radiative decay suppression do not interfere with each other.\
In the following, we study the charge asymmetry as a function of the cuts in Eq. (\[eq:yphcut\]) and Eq. (\[eq:supp\_cuts\]) including NLO QCD corrections. It is evident that applying the cuts on the one hand increases the asymmetry, and on the other hand reduces the cross section, therefore lowering the statistical significance of a measurement. Hence, we try to optimize the cuts such that the two competing effects are balanced. We vary the lower photon rapidity cut $y_\gamma^\mathrm{cut}$ from $0.0$ to $1.4$ in steps of $0.2$. The results are given in the second and third column of Table \[tab:1\] and displayed in Fig. \[fig2\]. These are the main results of this work. We find that the perturbative pattern that we discussed in Sect. \[sec:corrections\] persists if the cuts of Eq. (\[eq:yphcut\]) and Eq. (\[eq:supp\_cuts\]) are added. The asymmetric contribution receives a moderate NLO correction with small scale dependence. In contrast, the symmetric cross section gets large corrections and exhibits $\approx 20\%$ scale dependence. This uncertainty feeds into the uncertainty of the asymmetry ${A_\mathrm{C}}$ as the dominant contribution.
From Table \[tab:1\] it is evident that the additional cuts significantly enhance the asymmetry. The initial value of $-0.5\%$ is doubled when the radiative decay suppression cuts in Eq. (\[eq:supp\_cuts\]) are applied. Further, it is more than tripled when $|y_\gamma|>1$ is required in addition. The relative uncertainties remain roughly constant at about $20\%$. The statistical significance is boosted to values above $3\sigma$ for an integrated luminosity of $\mathcal{L}=150\,{\mathrm{fb}}^{-1}$.
Fig. \[fig2\] (right) illustrates the dependence on the cuts in more detail and allows us to find the optimal cut values. We plot the significance $\mathcal{S}$ over the negative asymmetry ${A_\mathrm{C}}$. The connected dots show the dependence on the monotonically increasing value $y_\gamma^\mathrm{cut}$, with cuts in Eq. (\[eq:acc\_cuts\]) (lower dotted line) and cuts in Eqs. (\[eq:acc\_cuts\])+(\[eq:supp\_cuts\]) (upper dotted line). The colored bands indicate the corresponding uncertainties obtained from Eq. (\[eq:deltaAC\]) for ${A_\mathrm{C}}$ and similarly for $\mathcal{S}$. Comparing the pink and yellow bands, it is obvious that the radiative decay suppression is very effective in enhancing the asymmetry and the statistical significance. Yet the uncertainties are somewhat inflated. Following the dotted lines of increasing $y_\gamma^\mathrm{cut}$, we observe that the asymmetry can be strongly enhanced while the significance receives a mild increase and later deteriorates for too large values. The optimal point appears at $y_\gamma^\mathrm{cut} \approx 1.0$.
Summary {#sec:summary}
=======
We study the top quark charge asymmetry in the lepton+jet final state of ${{t\bar{t}}}+\gamma$ production at the 13 TeV LHC. The asymmetry is an Abelian effect of interference between diagrams of even and odd charge-parity, a phenomenon that is well-studied for ${{t\bar{t}}}$ production at the Tevatron and the LHC. The $pp \to {{t\bar{t}}}+\gamma$ process is interesting because it exhibits an asymmetry already at leading order, which is significantly larger than in ${{t\bar{t}}}$ production. We present perturbative corrections to this observable, including top quark decays, and discuss uncertainties and enhancement strategies. We find that the asymmetric cross section is converging well and is under good theoretical control. In contrast, the symmetric cross section receives sizable corrections. As a result, leading order predictions turn out to be unreliable and next-to-leading order predictions carry sizable uncertainties. Yet, we find arguments to support the reliability of our NLO results within their uncertainties. In addition, we present a set of tailored cuts for enhancing the asymmetry by more than a factor of three such that a measurement with $150\,{\mathrm{fb}}^{-1}$ should be possible at the LHC.
We thank Ivor Fleck, Manfred Kraus, Till Martini, and Peter Uwer for fruitful feedback and discussions. We are grateful for computing resources provided by AG PEP.
[^1]: Note that in contrast to Eq. (\[eq:sigasymsym\]), only photon radiation in the production is considered here.
---
abstract: 'Production of large transverse momentum $\rho^{0}$ mesons in high-energy nuclear collisions is investigated for the first time at next-to-leading order in the QCD improved parton model. The $\rho^0$ fragmentation functions (FFs) in vacuum at any scale $Q$ are obtained by evolving a newly developed initial parametrization of $\rho^0$ FFs at a scale $\rm Q_{0}^2=1.5\ GeV^2$ from a broken SU(3) model through NLO DGLAP equations. The numerical simulations of $p_{\rm T}$ spectra of $\rho^{0}$ meson in the elementary $\rm p+p$ collisions at NLO give a decent description of STAR $\rm p+p$ data. In $\rm A+A$ reactions the jet quenching effect is taken into account with the higher-twist approach by the medium-modified parton FFs due to gluon radiation in the quark-gluon plasma, whose space-time evolution is described by a (3+1D) hydrodynamical model. The nuclear modification factors for $\rho^{0}$ meson and its double ratio with $\pi^\pm$ nuclear modification in central $\rm Au+Au$ collisions at the RHIC are calculated and found to be in good agreement with STAR measurement. Predictions of $\rho^{0}$ nuclear modification and the yield ratio $\rho^0/\pi^0$ in central Pb+Pb at the LHC are also presented. It is shown that the ratio $\rho^0/\pi^0$ in central Pb+Pb will approach that in p+p reactions when $p_{\rm T}>12$ GeV.'
author:
- Wei Dai
- 'Ben-Wei Zhang[^1]'
- Enke Wang
title: ' Production of $\rho^{0}$ meson with large $p_{\rm T}$ at NLO in heavy-ion collisions'
---
A new state of matter of deconfined quarks and gluons, the so-called quark-gluon plasma (QGP), is expected to be created in heavy ion collisions (HIC) at very high colliding energies. To study the creation and properties of the QGP, jet quenching has been proposed: when an energetic parton travels through the hot/dense QCD medium, it loses a substantial fraction of its energy, which can in turn be used to obtain information on the temperature and density of the QGP [@Wang:1991xy; @Gyulassy:2003mc]. Even though new jet quenching observables, such as di-hadron [@Aamodt:2011vg; @Adler:2002tq], photon-triggered hadron [@Adare:2009vd; @Abelev:2009gu] and full jet observables [@Vitev:2008rz; @Vitev:2009rd; @Dai:2012am; @Aad:2010bu; @Chatrchyan:2011sx; @Kang:2014xsa], have seen rapid experimental and theoretical developments in the last decade, the suppression of inclusive hadron production, as the most intensively studied jet quenching observable, is still indispensable for unraveling the properties of the QCD medium. Recently, by comparing theoretical calculations with measurements of the production spectra and the suppression of $\pi$ mesons, which are the most commonly observed hadrons, the jet transport coefficient $\hat{q}$ has been extracted to characterize the local properties of the QCD medium probed by energetic parton jets [@Burke:2013yra]. The higher-twist multiple scattering approach to jet quenching, incorporated into the perturbative QCD (pQCD) improved parton model, has been developed and successfully describes $\pi^0$ and $\eta$ production and suppression in $\rm A+A$ collisions [@Chen:2010te; @Chen:2011vt; @Dai:2015dxa; @Dai:2016zjy].
The study of identified hadron spectra at high $p_{\rm T}$ other than $\pi^0$ and $\eta$ in HIC can further constrain and cast insight into the hadron suppression pattern. Whereas a relatively large amount of data on the yields of identified hadrons at large $p_{\rm T}$ has been accumulated at the RHIC and the LHC [@Agakishiev:2011dc; @Adare:2010pt; @Bala:2016hlf], there are still very few theoretical studies of hadrons of different types. An interesting identified hadron with available data is the $\rho^0$ meson, which is heavier than $\pi^{0}$ and $\eta$ but consists of similar constituent quarks. We notice that even theoretical calculations of $\rho^{0}$ production at large $p_{\rm T}$ in p+p collisions at the RHIC and the LHC are absent, due to the lack of knowledge of the parton fragmentation functions (FFs) for $\rho^0$ in vacuum. In a previous study [@Dai:2015dxa] we paved the way towards understanding the identified hadron suppression pattern by calculating $\eta$ meson production and investigating hadron yield ratios [@Dai:2015dxa]. In this manuscript, we extend this study to $\rho^{0}$ meson production and the yield ratios of $\rho^{0}$ and $\pi$ in A+A collisions at the RHIC and the LHC. It is of great interest to see how the alteration of the jet chemistry brought about by jet quenching will eventually affect the $\rho^{0}$ production spectrum and the ratio of hadron yields [@Liu:2006sf; @Brodsky:2008qp; @Chen:2008vha].
In this paper, we first employ a newly developed initial parametrization of $\rho^0$ FFs in vacuum at a starting scale $\rm Q_{0}^2=1.5\ GeV^2$, which is provided by the $SU(3)$ model of FFs of vector mesons [@Saveetha:2013jda; @Indumathi:2011vn]. By evolving them through the DGLAP evolution equations at NLO [@Hirai:2011si], we obtain parton FFs of the $\rho^{0}$ meson at any hard scale $Q$. The theoretical results for $\rho^{0}$ production in $\rm p+p$ collisions are provided up to next-to-leading order (NLO) in the pQCD improved parton model, and we find that they describe the experimental data rather well. Then we study $\rho^{0}$ production in $\rm A+A$ collisions at both RHIC and LHC by including parton energy loss in the hot/dense QCD medium in the framework of the higher-twist approach of jet quenching [@Guo:2000nz; @Zhang:2003yn; @Zhang:2003wk]. In this approach, the energy loss due to the multiple scattering suffered by an energetic parton traversing the medium is taken into account through twist-4 processes, and the vacuum fragmentation functions are modified effectively in high-energy nuclear collisions. Therefore, we can compute numerically for the first time $\rho^{0}$ meson yields in $\rm A+A$ collisions. We give a description of the $\rho^{0}$ nuclear modification factor $R_{AA}(\rho^0)$ at large $p_{\rm T}$ in $\rm Au+Au$ collisions at the RHIC to confront the experimental data from the STAR Collaboration, and of $R_{AA}(\rho^0)$ in $\rm Pb+Pb$ collisions at the LHC as a theoretical prediction. The double ratio $R_{AA}(\rho^0)/R_{AA}(\pi^\pm)$ is calculated and found to be in good agreement with the experimental data. Lastly we explore the features of the $\rho^0/\pi^0$ ratios in both p+p and A+A collisions.
In the NLO pQCD calculation, single hadron production can be factorized as the convolution of elementary partonic scattering cross sections up to $\alpha_s^3$, parton distribution functions (PDFs) inside the incoming particles, and parton FFs to the final state hadrons [@Kidonakis:2000gi]. We can express the formula symbolically as: $$\begin{aligned}
\frac{1}{p_{T}}\frac{d\sigma_{h}}{dp_{T}}=\int F_{q}(\frac{p_{T}}{z_{h}})\cdot D_{q\to h}(z_{h}, p_{T})\frac{dz_{h}}{z_{h}^2} \nonumber \\
+ \int F_{g}(\frac{p_{T}}{z_{h}})\cdot D_{g\to h}(z_{h}, p_{T})\frac{dz_{h}}{z_{h}^2} \,\,\, .
\label{eq:ptspec}\end{aligned}$$ The above equation implies that the hadron yield in $\rm p+p$ collisions is determined by two factors: the initial (parton-)jet spectrum $F_{q,g}(p_T)$ and the parton fragmentation functions $D_{q,g\to h}(z_{h}, p_{T})$. In the following calculations, we utilize the CTEQ6M parametrization for the proton PDFs [@Lai:1999wy], which has been convoluted with the elementary partonic scattering cross sections up to $\alpha_s^3$ to obtain $F_{q,g}(\frac{p_{T}}{z_{h}})$. Here $D_{q,g\to h}(z_{h}, p_{T})$ represents the vacuum parton FFs, which denote the probabilities of a scattered quark or gluon fragmenting into hadron $h$ with momentum fraction $z_h$. They are given by the corresponding parametrizations for the different final-state hadrons. So, potentially, we could predict all identified hadron productions in $\rm p+p$ collisions as long as the fragmentation functions are available. Note that the factorization scale, renormalization scale and fragmentation scale are usually chosen to be the same and proportional to $p_{\rm T}$ of the leading hadron in the final state.
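To illustrate the structure of Eq. (\[eq:ptspec\]), a purely schematic numerical convolution is sketched below; both the parton-level spectrum $F$ and the fragmentation function $D$ used here are hypothetical toy forms, not the CTEQ6M-based spectrum or the $\rho^0$ FFs used in the actual calculation:

```python
# Illustrative numerical sketch of Eq. (eq:ptspec): the hadron spectrum as a
# convolution of a parton-level spectrum F(p_T/z) with a fragmentation function D(z).
# Both F and D below are toy parametrizations for illustration only.
import numpy as np
from scipy.integrate import quad

F = lambda pt: pt**(-6.0)                    # toy power-law parton spectrum
D = lambda z: 0.5 * z**(-0.5) * (1 - z)**2   # toy fragmentation function

def hadron_spectrum(pt, pt_max=100.0):
    """(1/p_T) dsigma/dp_T ~ int_{z_min}^{1} dz/z^2 F(p_T/z) D(z)."""
    z_min = pt / pt_max                      # parton momentum bounded by pt_max
    integrand = lambda z: F(pt / z) * D(z) / z**2
    val, _ = quad(integrand, z_min, 1.0)
    return val

for pt in (5.0, 10.0, 15.0):
    print(pt, hadron_spectrum(pt))
```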
To accurately determine the $\rm p+p$ reference, parton FFs in vacuum, as a non-perturbative input, should be available. So far it is still impossible to derive parton FFs from first principles of QCD, and a common practice is to make phenomenological parametrizations by comparing perturbative QCD calculations with the data. Unlike for $\pi$ and charged hadrons, until now there are very few satisfactory parametrizations of parton FFs for the vector mesons due to the paucity of the relevant data. Fortunately, a broken $SU(3)$ model has recently been proposed to provide a systematic description of vector meson production [@Saveetha:2013jda; @Indumathi:2011vn]. To reduce the complexity of the meson octet fragmentation functions, $SU(3)$ flavor symmetry is introduced with a symmetry breaking parameter. In addition, isospin and charge conjugation invariance of the vector mesons $\rho(\rho^+,\rho^-,\rho^{0})$ are assumed to further reduce the independent unknown quark FFs to functions named valence (V) and sea ($\gamma$). The inputs of the valence $V(x, Q_0^2)$, sea $\gamma(x, Q_0^2)$ and gluon $D_g(x,Q_0^2)$ FFs are parameterized in a standard polynomial form at a starting low energy scale of $Q_0^2=1.5$ $\rm GeV^2$: $$\begin{aligned}
F_i(x)=a_ix^{b_i}(1-x)^{c_i}(1+d_ix+e_ix^2)\end{aligned}$$ These parameters are systematically fixed by fitting the cross section at NLO to the measurements of LEP ($\rho$,$\omega$) and SLD ($\phi$,$K^\star$) at $\sqrt{s}=91.2$ GeV. In Refs. [@Saveetha:2013jda; @Indumathi:2011vn] the parameters of the $\rho^{0}$ FFs in vacuum at $Q^2=1.5$ $\rm GeV^2$ are listed, and we obtain the $\rho^{0}$ FFs $D_{q,g}(x,Q^2)$ at any hard scale $Q>2$ GeV by evolving them through the DGLAP evolution equations at NLO with the computer code developed in Ref. [@Hirai:2011si]; these $\rho^0$ FFs $D_{q,g}(x,Q^2)$ are then used in our numerical simulations.
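For illustration, the input functional form above is trivial to code; the numerical coefficients in the example call below are placeholders only (the fitted values are those of the broken-$SU(3)$ analysis), and the DGLAP evolution to higher scales is not reproduced here:

```python
# The polynomial input form used for the FF parametrization at Q_0^2 = 1.5 GeV^2:
# F_i(x) = a_i * x^b_i * (1-x)^c_i * (1 + d_i*x + e_i*x^2).
# The coefficients passed below are placeholders for illustration only;
# the actual values are fixed by the broken-SU(3) fit to LEP/SLD data.

def ff_input(x, a, b, c, d=0.0, e=0.0):
    return a * x**b * (1 - x)**c * (1 + d * x + e * x**2)

# hypothetical example call
print(ff_input(0.3, a=1.0, b=-0.5, c=2.0, d=0.1))
```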
We have plotted the parton FFs as functions of the fragmenting fraction $z_h$ at the fixed scale $Q^2=100$ $\rm GeV^2$ in the left panel of Fig. \[fig:fragfunc\], and the parton FFs as functions of the final state $p_{\rm T}$ at fixed fragmenting fraction $z_h=0.6$ in the right panel of Fig. \[fig:fragfunc\]. It is observed that at fixed scale the $\rho^{0}$ FFs decrease with $z_h$, and the FF of the up quark is much larger than that of the strange quark, especially in the large $z_h$ region. At a typical value of $z_h=0.6$, we notice that the $\rho^{0}$ FFs show a rather weak $p_{\rm T}$ dependence.
![ Left: parton FFs as functions of $z_h$ at fixed scale $Q^2=100$ $\rm GeV^2$; Right: parton FFs as functions of $p_{\rm T}$ at fixed $z_h=0.6$.[]{data-label="fig:fragfunc"}](ffq.eps "fig:"){width="1.7in" height="1.7in"} ![ Left: parton FFs as functions of $z_h$ at fixed scale $Q^2=100$ $\rm GeV^2$; Right: parton FFs as functions of $p_{\rm T}$ at fixed $z_h=0.6$.[]{data-label="fig:fragfunc"}](ffz.eps "fig:"){width="1.7in" height="1.7in"}
The availability of the $\rho^0$ meson FFs allows us to calculate inclusive vector meson production as a function of the final state hadron $p_{\rm T}$ in pQCD at NLO accuracy. Fig. \[fig:illustrhopp\] shows the confrontation of the theoretical calculation with the STAR data [@Agakishiev:2011dc]. We see that the results at the scale $Q=0.5$ $p_{\rm T}$ agree well with the data on the $\rho^0$ yield. In the following calculations we will fix $Q=0.5$ $p_{\rm T}$ to provide a good p+p baseline.
![ Numerical calculation of the $\rho^{0}$ production in $\rm p+p$ collisions at RHIC $200$ GeV comparing with STAR [@Agakishiev:2011dc] data.[]{data-label="fig:illustrhopp"}](rhopp.eps){width="3.4in" height="2.8in"}
Hot and dense QCD matter is created shortly after a high-energy central nucleus-nucleus collision. Before a fast parton fragments into identified hadrons in the vacuum, it suffers energy loss due to multiple scattering with other partons in the QCD medium. In the higher-twist approach, the multiple scattering is described by twist-4 hard scattering processes and leads to an effective medium modification of the vacuum FFs [@Guo:2000nz; @Zhang:2003yn; @Zhang:2003wk; @Chen:2010te; @Chen:2011vt; @Dai:2015dxa; @Dai:2017piq]: $$\begin{aligned}
\tilde{D}_{q}^{h}(z_h,Q^2) &=&
D_{q}^{h}(z_h,Q^2)+\frac{\alpha_s(Q^2)}{2\pi}
\int_0^{Q^2}\frac{d\ell_T^2}{\ell_T^2} \nonumber\\
&&\hspace{-0.7in}\times \int_{z_h}^{1}\frac{dz}{z} \left[ \Delta\gamma_{q\rightarrow qg}(z,x,x_L,\ell_T^2)D_{q}^h(\frac{z_h}{z}, Q^2)\right.
\nonumber\\
&&\hspace{-0.2 in}+ \left. \Delta\gamma_{q\rightarrow
gq}(z,x,x_L,\ell_T^2)D_{g}^h(\frac{z_h}{z}, Q^2) \right] ,
\label{eq:mo-fragment}\end{aligned}$$ where $\Delta\gamma_{q\rightarrow qg}(z,x,x_L,\ell_T^2)$ and $\Delta\gamma_{q\rightarrow gq}(z,x,x_L,\ell_T^2)=\Delta\gamma_{q \rightarrow qg}(1-z,x,x_L,\ell_T^2)$ are the medium-modified splitting functions [@Guo:2000nz; @Zhang:2003yn; @Zhang:2003wk]. Though the medium-modified FFs include a contribution from gluon radiation in the QCD medium, they obey QCD evolution equations similar to the DGLAP equations for FFs in vacuum. In this formalism, we convolute the medium-induced kernels $\Delta\gamma_{q\rightarrow qg}(z,x,x_L,\ell_T^2)$ and $\Delta\gamma_{q\rightarrow gq}$ (instead of the vacuum splitting functions) with the DGLAP-evolved FFs at scale $Q^2$. We average the above medium-modified fragmentation functions over the initial production position and jet propagation direction, weighted by the number of binary nucleon-nucleon collisions at impact parameter $b$ in $\rm A+A$ collisions, and use them to replace the vacuum fragmentation functions in Eq. (\[eq:ptspec\]). In the medium-modified splitting functions $\Delta\gamma_{q\rightarrow qg,gq}$, the dependence on the properties of the medium is absorbed into the jet transport parameter $\hat{q}$, which is defined as the average squared transverse momentum broadening per unit length. In the higher-twist approach, the jet transport parameter $\hat{q}$ is related to the gluon distribution density of the medium. Phenomenologically, the jet transport parameter can be assumed to be proportional to the local parton density in the QGP phase and to the hadron density in the hadronic gas phase [@Chen:2010te]: $$\label{q-hat-qgph}
\hat{q} (\tau,r)= \left[\hat{q}_0\frac{\rho_{QGP}(\tau,r)}{\rho_{QGP}(\tau_{0},0)}
(1-f) + \hat q_{h}(\tau,r) f \right]\cdot \frac{p^\mu u_\mu}{p_0}\,,$$ where $\rho_{QGP}$ is the parton (quark and gluon) density of an ideal gas at a given temperature, $f(\tau,r)$ is the fraction of the hadronic phase as a function of space and time, $\hat q_{0}$ is the jet transport parameter at the center of the bulk medium in the QGP phase at the initial time $\tau_{0}$, $p^\mu$ is the four momentum of the jet and $u^\mu$ is the four flow velocity in the collision frame.
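As an illustration of how Eq. (\[q-hat-qgph\]) is evaluated along a jet path, the sketch below implements the formula with hypothetical medium profiles; in the actual calculation $\rho_{QGP}$, $f$ and the flow velocity $u^\mu$ are supplied at every space-time point by the hydrodynamical evolution described next.

``` python
import numpy as np

# Local jet transport parameter, Eq. (q-hat-qgph); all profiles below are placeholders.

def minkowski_dot(p, u):                       # p^mu u_mu with signature (+,-,-,-)
    return p[0] * u[0] - np.dot(p[1:], u[1:])

def qhat_local(tau, r, p, u, rho_qgp, frac_hadronic, qhat_hadronic,
               qhat0=1.2, tau0=0.6):
    rho_ratio = rho_qgp(tau, r) / rho_qgp(tau0, 0.0)
    f = frac_hadronic(tau, r)
    flow = minkowski_dot(p, u) / p[0]
    return (qhat0 * rho_ratio * (1.0 - f) + qhat_hadronic(tau, r) * f) * flow

# toy profiles (illustration only): Bjorken-like dilution, late hadronic stage, static cell
rho = lambda tau, r: np.exp(-r ** 2 / 25.0) / tau
f_h = lambda tau, r: 0.0 if tau < 6.0 else 1.0
q_h = lambda tau, r: 0.15
p_jet = np.array([10.0, 10.0, 0.0, 0.0])       # massless jet moving along x
u_cell = np.array([1.0, 0.0, 0.0, 0.0])        # fluid cell at rest
print(qhat_local(1.0, 2.0, p_jet, u_cell, rho, f_h, q_h))
```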
The space-time evolution of the QCD medium is given by a full three-dimensional (3+1D) ideal hydrodynamics description [@Hirano2001; @HT2002]. The parton density, temperature, fraction of the hadronic phase and four flow velocity at every space-time point are provided by the hydrodynamical model. The only free parameter is $\hat{q}_0\tau_0$, the product of the initial value of the jet transport parameter $\hat{q}_0$ and the time $\tau_0$ when the QCD medium is initially formed. This parameter controls the strength of the jet-medium interaction, and thus the amount of energy loss of the energetic jets. In the calculations, we use the values of $\hat{q}_0\tau_0$ extracted in previous studies [@Chen:2010te; @Chen:2011vt; @Dai:2015dxa], which give very good descriptions of single $\pi^0$ and $\eta$ production in HIC. Moreover, we have used the EPS09 parametrization sets of nuclear PDFs $f_{a/A}(x_a,\mu^2)$ to account for the initial-state cold nuclear matter effects [@Eskola:2009uj].
![Top panel: Numerical calculation of the $\rho^{0}$ and $\pi^0$ production suppression factors in $0-10\%$ $\rm Au+Au$ collisions at RHIC $200$ GeV at NLO as functions of $p_{\rm T}$, comparing with STAR [@Agakishiev:2011dc] and PHENIX [@Adler:2003qi] data; Bottom panel: double ratio calculation of $R_{\rm AA}^{\rho^0}/R_{\rm AA}^{\pi^\pm}$ both at NLO, also comparing with STAR data. []{data-label="fig:illustrhorhic"}](doubleratio.eps){width="3.0in" height="3.2in"}
![Gluon and quark contribution fraction of the total yield both in p+p and Au+Au at RHIC []{data-label="fig:rhofracrhic"}](rhofracrhic.eps){width="3.0in" height="2.2in"}
Now we are ready to calculate single $\rho^{0}$ production in heavy-ion collisions at NLO. The nuclear modification factor $R_{\rm AA}$ as a function of $p_{\rm T}$ is calculated to quantify the suppression of the production spectrum in $\rm A+A$ collisions relative to that in $\rm p+p$ collisions: $$\begin{aligned}
R_{AB}(b)=\frac{d\sigma_{AB}^h/dyd^2p_T}{N_{bin}^{AB}(b)d\sigma_{pp}^h/dyd^2p_T}
\label{eq:eloss}\end{aligned}$$
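In practice this factor is obtained bin by bin from the two spectra; a minimal sketch (ours, with synthetic placeholder inputs) is:

``` python
import numpy as np

# R_AA per p_T bin, Eq. (eq:eloss); n_binary comes from a Glauber calculation in practice.
def nuclear_modification(spec_AA, spec_pp, n_binary):
    return np.asarray(spec_AA) / (n_binary * np.asarray(spec_pp))

print(nuclear_modification(spec_AA=[4.8e-4, 7.5e-6], spec_pp=[2.5e-6, 4.0e-8],
                           n_binary=955.0))   # synthetic inputs, for illustration only
```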
In the $0-10\%$ most central $\rm Au+Au$ collisions at RHIC $200$ GeV, we calculate $\rho^{0}$ production with the typical values $\hat q_{0}=1.2$ GeV$^2$/fm and $\tau_{0}=0.6$ fm/c at RHIC [@Dai:2015dxa]. The theoretical calculation describes the $\rho^{0}$ meson data well in the large $p_{\rm T}$ region (see the top panel of Fig. \[fig:illustrhorhic\]). The theoretical calculation and the experimental data for the $\pi^0$ nuclear suppression factor are also presented for comparison. We note that the nuclear suppression factor of $\rho^0$ is similar to that of $\pi^{0}$, as demonstrated by the double ratio $R_{\rm AA}^{\rho^0}/R_{\rm AA}^{\pi^\pm}$ in the bottom panel of Fig. \[fig:illustrhorhic\], which is around unity when calculated at NLO accuracy. We also find that the theoretical curve undershoots the experimental $R_{\rm AA}$ data, as in the $\pi^0$ case; the uncertainty caused by this undershooting is cancelled out to a large extent when we discuss the double ratio of $\rho^0$ and charged $\pi$. Here the $\pi^{\pm}$ FFs in vacuum are given by AKK08 [@Albino:2008fy].
To better understand the nature of the $\rho^0$ suppression pattern, we calculate the gluon (quark) contribution fractions of the total yield in both $\rm p+p$ and $\rm Au+Au$ collisions in Fig. \[fig:rhofracrhic\]. As for $\eta$ and $\pi^0$ production, quark fragmentation dominates in the high $p_{\rm T}$ region both in $\rm p+p$ and in $\rm A+A$ collisions, and the jet quenching effect suppresses the gluon fragmentation contribution while enhancing the quark contribution. Therefore the crossing point, where the fractional contributions of quark and gluon fragmentation are equal, moves toward lower $p_{\rm T}$ in $\rm Au+Au$ collisions, as one observes in Fig. \[fig:rhofracrhic\].
![ Numerical calculation of the $\rho^{0}$ production in $0-10\%$ $\rm Pb+Pb$ collisions at the LHC at $2.76$ TeV in the top panel; theoretical calculation results for the nuclear suppression factors of $\rho^{0}$ and $\pi^0$ are compared with the experimental data for charged hadrons [@Aamodt:2010jd] in $0-10\%$ $\rm Pb+Pb$ collisions at the LHC at $2.76$ TeV in the bottom panel.[]{data-label="fig:illustrholhc"}](LHCrho.eps){width="3.6in" height="4.4in"}
We also predict the $\rho^{0}$ production in the $0-10\%$ most central $\rm Pb+Pb$ collisions at the LHC with $\sqrt{s_{NN}}=2.76$ TeV in the top panel of Fig. \[fig:illustrholhc\]. The value of $\hat q_{0}$ is set to the same typical value that has been used to describe the production suppression of both single $\pi^0$ and $\eta$ mesons at the LHC [@Chen:2010te; @Chen:2011vt; @Dai:2015dxa]. We can see that, with the increase of $p_{\rm T}$, the nuclear modification factor of the $\rho^0$ meson goes up slowly. In the calculation, the best fit to the PHENIX data on the $\pi^0$ nuclear suppression factor as a function of $p_{\rm T}$ in $0-5\%$ Au+Au collisions at $\sqrt{s}=200$ $ \rm GeV$ gives $\hat{q}_0=1.20\pm0.30$ $\rm GeV^2/fm$. Similarly, the best fit to the CMS data on the charged hadron nuclear suppression factor in $0-5\%$ Pb+Pb collisions at $\sqrt{s}=2.76$ $ \rm TeV$ as a function of $p_{\rm T}$ gives $\hat{q}_0=2.2\pm0.4 ~\rm GeV^2/fm$ at $\tau_0=0.6 ~\rm fm/c$ [@Burke:2013yra]. The same values of $\hat{q}_0\tau_0$ are employed here and give a very good description of $\rho^0$ production at the LHC, as shown in the bottom panel of Fig. \[fig:illustrholhc\].
To compare the different trends of the $\pi^0$ and $\rho^0$ spectra, we plot the ratio $\rho^0/\pi^0$ as a function of the transverse momentum $p_{\rm T}$ in Fig. \[fig:ratio\]. As mentioned above, in this study the $\pi^0$ FFs are given by AKK08 [@Albino:2008fy]. We note that the validity of the $\pi^0$ (charged hadron) FFs has been challenged by the over-prediction of their production at the LHC and the Tevatron, due to the too-hard gluon-to-hadron FFs in the parameterizations [@dEnterria:2013sgr]. A recent attempt to address this problem with a global refit is performed in Ref. [@deFlorian:2014xna]. Fortunately, the uncertainty introduced by the use of AKK08 does not affect the nuclear modification factor $R_{\rm AA}$ much, owing to the cancellation when taking the ratio of the A+A production to the p+p reference. Therefore one expects that the extraction of the jet transport parameter $\hat{q}_0$ from the comparison between the calculated $R_{\rm AA}$ and the experimental data will not be affected much by such FF uncertainties. In studies of particle ratios, the $\pi^0$ fragmentation functions and jet chemistry are used as a reference to understand other mesons such as $\eta$, so their uncertainties are certainly expected to affect particle ratios like $\eta/\pi^0$. However, since light mesons such as $\pi^0$ and $\eta$ are dominated by the quark fragmentation contribution at high $p_{\rm T}$, this effect is minimized.
Fig. \[fig:ratio\] shows that the ratio $\rho^0/\pi^0$ increases with $p_{\rm T}$ in $\rm p+p$ collisions at both RHIC and LHC energies. Though the jet quenching effect alters the ratio slightly in $\rm A+A$ at lower $p_{\rm T}$, as $p_{\rm T}$ becomes larger the ratio in $\rm A+A$ comes very close to that in $\rm p+p$, especially at the LHC at higher $p_{\rm T}$. We note that flat curves are observed for the $\eta/\pi^0$ ratio as a function of $p_{\rm T}$ at both RHIC and the LHC, whereas an increasing $\rho^0/\pi^0$ with respect to $p_{\rm T}$ is shown in Fig. \[fig:ratio\]; the $\rho^0/\pi^0$ ratio at RHIC increases more rapidly with $p_{\rm T}$. A flat $p_{\rm T}$ dependence of particle ratios is therefore not a universal trend; the shape of a particle ratio depends on the relative slopes of the spectra in p+p, on the different flavor contributions to the FFs, as well as on the flavor dependence of parton energy loss in the QGP.
![ $\rho^0/\pi^0$ production ratio as a function of final state $p_{\rm T}$ calculated both in p+p and A+A collisions at RHIC and LHC []{data-label="fig:ratio"}](ratiorho.eps){width="3.0in" height="2.4in"}
We note that in the high $p_{\rm T}$ region the production of both $\rho^0$ and $\pi^0$ is dominated by the quark contribution (see, for example, Fig. \[fig:rhofracrhic\]). If at high $p_{\rm T}$ the quark FFs of $\rho^0$ and $\pi^0$ depend only weakly on $z_h$ and $p_{\rm T}$, then we have: $$\begin{aligned}
& &\text{Ratio}(\rho^0/\pi^0)=\frac{d\sigma_{ \rho^0}}{dp_{T}}/\frac{d\sigma_{ \pi^0}}{dp_{T}} \nonumber \\
&\approx&
\frac{\int F_{q}(\frac{p_{T}}{z_{h}})\ D_{q\to \rho^0}(z_{h}, p_{T})\frac{dz_{h}}{z_{h}^2}}
{\int F_{q}(\frac{p_{T}}{z_{h}})\ D_{q\to \pi^{0}}(z_{h}, p_{T})\frac{dz_{h}}{z_{h}^2}} \approx \frac{\Sigma_{q} D_{q\to \rho^0}(\left<z_{h}\right>, p_{T})}
{\Sigma_q D_{q\to \pi^{0}}(\left<z_{h}\right>, p_{T}) } \, .\nonumber
\end{aligned}$$ Therefore, although quarks and gluons may lose different fractions of their energies, in the very high $p_{\rm T}$ region the ratio $\rho^0/\pi^0$ in $\rm A+A$ collisions should be determined approximately by the quark FFs in vacuum alone, up to the $p_{\rm T}$ shift caused by parton energy loss. As we can see in Fig. \[fig:fragfunc\], the quark FFs at large scale $Q$ ($=p_{\rm T}$) change slowly with both $z_h$ and $p_{\rm T}$, so the ratios $\rho^0/\pi^0$ in $\rm A+A$ and in $\rm p+p$ should approach each other at larger $p_{\rm T}$. This is just what we observed for the yield ratio $\eta/\pi^0$ [@Dai:2015dxa].
[**Acknowledgments:**]{} This research is supported by the MOST in China under Project No. 2014CB845404, NSFC of China with Project Nos. 11435004, 11322546, 11521064, and partly supported by the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (No. 162301182691)
[99]{}
X. N. Wang and M. Gyulassy, Phys. Rev. Lett. [**68**]{}, 1480 (1992). M. Gyulassy, I. Vitev, X. N. Wang and B. W. Zhang, In \*Hwa, R.C. (ed.) et al.: Quark gluon plasma\* 123-191 \[nucl-th/0302077\]. K. Aamodt [*et al.*]{} \[ALICE Collaboration\], Phys. Rev. Lett. [**108**]{}, 092301 (2012) \[arXiv:1110.0121 \[nucl-ex\]\]. C. Adler [*et al.*]{} \[STAR Collaboration\], Phys. Rev. Lett. [**90**]{}, 082302 (2003) \[nucl-ex/0210033\]. A. Adare [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. C [**80**]{}, 024908 (2009) \[arXiv:0903.3399 \[nucl-ex\]\]. B. I. Abelev [*et al.*]{} \[STAR Collaboration\], Phys. Rev. C [**82**]{}, 034909 (2010) \[arXiv:0912.1871 \[nucl-ex\]\].
I. Vitev, S. Wicks and B. W. Zhang, JHEP [**0811**]{}, 093 (2008) \[arXiv:0810.2807 \[hep-ph\]\]. I. Vitev and B. W. Zhang, Phys. Rev. Lett. [**104**]{}, 132001 (2010) \[arXiv:0910.1090 \[hep-ph\]\]. W. Dai, I. Vitev and B. W. Zhang, Phys. Rev. Lett. [**110**]{}, no. 14, 142001 (2013) \[arXiv:1207.5177 \[hep-ph\]\]. G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Rev. Lett. [**105**]{}, 252303 (2010) \[arXiv:1011.6182 \[hep-ex\]\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Rev. C [**84**]{}, 024906 (2011) \[arXiv:1102.1957 \[nucl-ex\]\]. Z. B. Kang, R. Lashof-Regas, G. Ovanesyan, P. Saad and I. Vitev, Phys. Rev. Lett. [**114**]{}, no. 9, 092002 (2015) \[arXiv:1405.2612 \[hep-ph\]\].
K. M. Burke [*et al.*]{} \[JET Collaboration\], Phys. Rev. C [**90**]{}, no. 1, 014909 (2014) \[arXiv:1312.5003 \[nucl-th\]\]; Z. Q. Liu, H. Zhang, B. W. Zhang and E. Wang, Eur. Phys. J. C [**76**]{}, no. 1, 20 (2016) \[arXiv:1506.02840 \[nucl-th\]\]. X. F. Chen, C. Greiner, E. Wang, X. N. Wang and Z. Xu, Phys. Rev. C [**81**]{}, 064908 (2010) \[arXiv:1002.1165 \[nucl-th\]\]. X. F. Chen, T. Hirano, E. Wang, X. N. Wang and H. Zhang, Phys. Rev. C [**84**]{}, 034902 (2011) \[arXiv:1102.5614 \[nucl-th\]\]. W. Dai, X. F. Chen, B. W. Zhang and E. Wang, Phys. Lett. B [**750**]{}, 390 (2015) \[arXiv:1506.00838 \[nucl-th\]\]. W. Dai and B. W. Zhang, arXiv:1612.05848 \[hep-ph\].
G. Agakishiev [*et al.*]{} \[STAR Collaboration\], Phys. Rev. Lett. [**108**]{}, 072302 (2012) \[arXiv:1110.0579 \[nucl-ex\]\]. A. Adare [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. C [**83**]{}, 024909 (2011) \[arXiv:1004.3532 \[nucl-ex\]\]. R. Bala, I. Bautista, J. Bielcikova and A. Ortiz, Int. J. Mod. Phys. E [**25**]{}, no. 07, 1642006 (2016) \[arXiv:1605.03939 \[hep-ex\]\]. W. Liu, C. M. Ko and B. W. Zhang, Phys. Rev. C [**75**]{}, 051901 (2007) \[nucl-th/0607047\].
S. J. Brodsky and A. Sickles, Phys. Lett. B [**668**]{}, 111 (2008) \[arXiv:0804.4608 \[hep-ph\]\]. X. Chen, H. Zhang, B. W. Zhang and E. Wang, J. Phys. [**37**]{}, 015004 (2010) \[arXiv:0806.0556 \[hep-ph\]\]. H. Saveetha, D. Indumathi and S. Mitra, Int. J. Mod. Phys. A [**29**]{}, no. 07, 1450049 (2014) \[arXiv:1309.2134 \[hep-ph\]\]. D. Indumathi and H. Saveetha, Int. J. Mod. Phys. A [**27**]{}, 1250103 (2012) \[arXiv:1102.5594 \[hep-ph\]\]. M. Hirai and S. Kumano, Comput. Phys. Commun. [**183**]{}, 1002 (2012) \[arXiv:1106.1553 \[hep-ph\]\].
X. f. Guo and X. N. Wang, Phys. Rev. Lett. [**85**]{}, 3591 (2000) \[hep-ph/0005044\]. B. W. Zhang and X. N. Wang, Nucl. Phys. A [**720**]{}, 429 (2003) \[arXiv:hep-ph/0301195\]. B. W. Zhang, E. Wang and X. N. Wang, Phys. Rev. Lett. [**93**]{}, 072301 (2004) \[nucl-th/0309040\]; A. Schafer, X. N. Wang and B. W. Zhang, Nucl. Phys. A [**793**]{}, 128 (2007) \[arXiv:0704.0106 \[hep-ph\]\]. N. Kidonakis and J. F. Owens, Phys. Rev. D [**63**]{}, 054019 (2001) doi:10.1103/PhysRevD.63.054019 \[hep-ph/0007268\].
H. L. Lai [*et al.*]{} \[CTEQ Collaboration\], Eur. Phys. J. C [**12**]{}, 375 (2000) \[hep-ph/9903282\]. W. Dai, B. W. Zhang, H. Z. Zhang, E. Wang and X. F. Chen, Eur. Phys. J. C [**77**]{}, no. 8, 571 (2017) \[arXiv:1702.01614 \[nucl-th\]\].
T. Hirano, Phys. Rev. C [**65**]{}, 011901 (2002). T. Hirano and K. Tsuda, Phys. Rev. C [**66**]{}, 054905 (2002).
K. J. Eskola, H. Paukkunen and C. A. Salgado, JHEP [**0904**]{}, 065 (2009) \[arXiv:0902.4154 \[hep-ph\]\]. S. S. Adler [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. Lett. [**91**]{}, 072301 (2003) doi:10.1103/PhysRevLett.91.072301 \[nucl-ex/0304022\].
K. Aamodt [*et al.*]{} \[ALICE Collaboration\], Phys. Lett. B [**696**]{}, 30 (2011) doi:10.1016/j.physletb.2010.12.020 \[arXiv:1012.1004 \[nucl-ex\]\]. S. Albino, B. A. Kniehl and G. Kramer, Nucl. Phys. B [**803**]{}, 42 (2008) doi:10.1016/j.nuclphysb.2008.05.017 \[arXiv:0803.2768 \[hep-ph\]\]. D. d’Enterria, K. J. Eskola, I. Helenius and H. Paukkunen, Nucl. Phys. B [**883**]{}, 615 (2014) doi:10.1016/j.nuclphysb.2014.04.006 \[arXiv:1311.1415 \[hep-ph\]\]. D. de Florian, R. Sassot, M. Epele, R. J. Hernández-Pinto and M. Stratmann, Phys. Rev. D [**91**]{}, no. 1, 014035 (2015) doi:10.1103/PhysRevD.91.014035 \[arXiv:1410.6027 \[hep-ph\]\].
[^1]: [email protected]
---
abstract: 'In this paper we present a-posteriori KAM results for the existence of $d$-dimensional isotropic invariant tori for $n$-DOF Hamiltonian systems with $n-d$ additional independent first integrals in involution. We carry out a covariant formulation that does not require the use of action-angle variables nor symplectic reduction techniques. The main advantage is that we overcome the curse of dimensionality, avoiding the practical shortcomings produced by the use of reduced coordinates, which may cause difficulties and underperformance when quantifying the hypotheses of the KAM theorem. The results include ordinary and (generalized) iso-energetic KAM theorems. The approach is suitable to perform numerical computations and computer assisted proofs.'
address:
- 'Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Gran Via 585, 08007 Barcelona, Spain.'
- 'Department of Mathematics, Uppsala University, Box 480, 751 06 Uppsala, Sweden'
author:
- 'Alex Haro$^{\mbox{\textdagger}}$'
- 'Alejandro Luque$^{\mbox{\textdaggerdbl}}$'
bibliography:
- 'references.bib'
title: ' A-posteriori KAM theory with optimal estimates for partially integrable systems'
---
Introduction
============
Persistence under perturbations of regular (quasi-periodic) motion is one of the most important problems in Mechanics and Mathematical Physics, and has deep implications in Celestial and Statistical Mechanics. Classical perturbation theory experienced a breakthrough around sixty years ago, with the work of Kolmogorov [@Kolmogorov54], Arnold [@Arnold63a] and Moser [@Moser62], the founders of what is nowadays known as KAM theory. They overcame the so called small divisors problem, that might prevent the convergence of the series appearing in perturbative methods. Since then, KAM theory has become a full body of knowledge that connects fundamental mathematical ideas, and the literature contains eminent contributions and applications in different contexts (e.g. unfoldings and bifurcations of invariant tori [@BroerHTB90; @BroerW00], quasi-periodic solutions in partial differential equations [@BertiBP14; @EliassonFK15; @Poschel96], hard implicit function theorems [@Moser66b; @Moser66a; @Zehnder75; @Zehnder76], stability of perturbed planetary problems [@Arnold63b; @ChierchiaP11; @Fejoz04], conformally symplectic systems and the spin-orbit problem [@CallejaCLa; @CellettiC09], reversible systems [@Sevryuk07], or existence of vortex tubes in the Euler equation [@EncisoP15], just to mention a few).
The importance and significance of KAM theory in Mathematics, Physics and Science in general are recounted in the popular book [@Dumas14]. But, although KAM theory holds for general dynamical systems under very mild technical assumptions, its application to concrete systems becomes a challenging problem. Moreover, the “threshold of validity” of the theory (the size of the perturbation strength for which KAM theorems can be applied) seemed to be absurdly small in applications to physical systems[^1]. With the advent of computers and newly developed methodologies, the distance between theory and practice has been shortened (see e.g. [@CellettiC07] for an illuminating historical introduction and [@CellettiC88; @LlaveR90] for pioneering computer-assisted applications). But new efforts have to be made in order to make KAM theory fully applicable to realistic physical systems. In this respect, recent years have witnessed a revival of this theory. One direction that has experienced a lot of progress is the a-posteriori approach based on the parameterization method [@Llave01; @GonzalezJLV05; @HaroCFLM16; @Sevryuk14].
The main signatures of the parameterization method are the application of the a-posteriori approach and a covariant formulation, free of the use of any particular system of coordinates. In fact, the method was baptized as KAM theory *without action-angle variables* in [@GonzalezJLV05]. Instead of performing canonical transformations, the strategy consists in solving the invariance equation for an invariant torus by correcting iteratively an approximately invariant one. If the initial approximation is good enough (relative to some non-degeneracy conditions), then there is a true invariant torus nearby [@CellettiC07; @GonzalezJLV05] (see also [@CellettiC95; @Llave01; @Moser66b; @Russmann76b] for precedents). Hence one can consider non-perturbative problems in a natural way. The approach itself implies the traditional approach (of perturbative nature) and has been extended to different theoretical settings [@CanadellH17a; @FontichLS09; @GonzalezHL13; @LuqueV11]. A remarkable feature of the parameterization method is that it leads to very fast and efficient numerical methods for the approximation of quasi-periodic invariant tori (e.g. [@CallejaL10; @CanadellH17b; @FoxM14; @HuguetLS12]). We refer the reader to the recent monograph [@HaroCFLM16] for detailed discussions (beyond the KAM context) on the numerical implementations of the method, examples, and a more complete bibliography.
A recently reported success in KAM theory is the design of a general methodology to perform computer assisted proofs of existence of Lagrangian invariant tori in a non-perturbative setting [@FiguerasHL17]. The methodology has been applied to several low dimensional problems, obtaining almost optimal results[^2]. The program is founded on an a-posteriori theorem with explicit estimates, whose hypotheses are checked using the Fast Fourier Transform (with interval arithmetic) in combination with a sharp control of the discretization error. This fine control is crucial to estimate the norm of compositions and inverses of functions, outperforming the use of symbolic manipulations. An important consequence is that the rigorous computations are executed in a very fast way, thus allowing to manipulate millions of Fourier modes comfortably.
One of the typical obstacles to directly apply the above methodology (and KAM theory in general) to realistic problems in Mechanics is the presence of additional first integrals in involution. This degeneracy implies that quasi-periodic invariant tori appear in smooth families of lower dimensional tori. The constraints linked to these conserved quantities can be removed by using classical symplectic reduction techniques [@Cannas01; @MarsdenW74], thus obtaining a lower dimensional Hamiltonian system in a quotient manifold, where Lagrangian tori can be computed. This approach has an undeniable theoretical importance (e.g. the moment map is an object of remarkable relevance [@MarsdenMOPR07]) and has been successfully used to obtain perturbative KAM results in significant examples [@ChierchiaP11; @Fejoz04; @PalacianSY14]. However, the use of symplectic reduction presents serious difficulties when applying the a-posteriori KAM approach to the reduced system. This is obvious because, in the new set of coordinates, it may be difficult to quantify the required control of the norms for both global objects (e.g. Hamiltonian system or symplectic structure) and local objects (e.g. parameterization of the invariant torus, torsion matrix), or also, these estimates may become sub-optimal to apply a quantitative KAM theorem. The goal of this paper is overcoming these drawbacks. Instead of reducing the system, we will characterize a target lower dimensional torus, using the original coordinates of the problem, by constructing a geometrically adapted frame to suitably display the linearized dynamics on the full family of invariant tori.
In this paper we present two KAM theorems in a-posteriori format for existence of (families of) isotropic[^3] tori in Hamiltonian systems with first integrals in involution, avoiding the use of action-angle coordinates (in the spirit of [@GonzalezJLV05]) and symplectic reduction. The first theorem is an ordinary KAM theorem (also known as *à la Kolmogorov*) on existence of an invariant torus with a fixed frequency vector. Of course, if no additional first integrals are present in the system, then we recover the results in [@GonzalezJLV05], with the extra bonus of providing a geometrically improved scheme with explicit estimates. Our second theorem is a version of a non-perturbative iso-energetic KAM theorem (see [@BroerH91; @DelshamsG96] for perturbative formulations), in which one fixes either the energy or one of the first integrals but modulates the frequency vector. Actually, we present a general version that allows us to consider any conserved quantity in involution. Even if there are no additional first integrals, the corollary of this result is an iso-energetic theorem for Lagrangian tori which is a novelty in this covariant formulation.
The statements of the results are written with an eye on applications. Hence, we provide explicit estimates in the conditions of our theorems, so that the hypotheses can be checked using the computer assisted methodology in [@FiguerasHL17]. In particular, another novelty of this paper is that the estimates are detailed taking advantage of the presence of additional geometric structures in phase space other than the symplectic structure, such as a Riemannian metric or a compatible triple, covering a gap in the literature [@HaroCFLM16]. We think researchers interested in applying these techniques to specific problems can benefit from these facts. It is worth mentioning that the required control of the first integrals is limited to estimating the norms of objects that depend only on elementary algebraic expressions and derivatives. Quantitative application of these theorems using computer assisted methods will be presented in a forthcoming work.
The paper is organized as follows. Section 2 introduces the background and the geometric constructions. We present the two main theorems of this paper in Section 3: the ordinary KAM theorem and the generalized iso-energetic KAM theorem (in the presence of first integrals). Some common lemmas, which control the approximation of different geometric properties, are given in Section 4. The proof of the ordinary KAM theorem is given in Section 5, while the proof of the generalized iso-energetic KAM theorem is done in Section 6. In order to collect the long list of expressions leading to the explicit estimates and conditions of the theorems, we include separate tables in the appendix.
Background and elementary constructions
=======================================
Basic notation {#ssec:basic:notation}
--------------
We denote by ${{\mathbb R}}^m$ and ${{\mathbb C}}^m$ the vector spaces of $m$-dimensional vectors with components in ${{\mathbb R}}$ and ${{\mathbb C}}$, respectively, endowed with the norm $$|v|= \max_{i=1,\dots,m} |v_i|.$$ We consider the real and imaginary projections $\re,\im: {{\mathbb C}}^m\to {{\mathbb R}}^m$, and identify ${{\mathbb R}}^m \simeq \im^{-1}\{0\} \subset {{\mathbb C}}^m$. Given $U\subset {{\mathbb R}}^m$ and $\rho>0$, the complex strip of size $\rho$ is $U_\rho= \{ \theta \in {{\mathbb C}}^m \,:\, \re\, \theta \in U \, , \, |\im\, \theta|<\rho \}$. Given two sets $X,Y \subset {{\mathbb C}}^{m}$, $\dist(X,Y)$ is defined as $\inf\{|x-y| \,:\, x\in X \, , \, y\in Y\}$.
We denote ${{\mathbb R}}^{n_1 \times n_2}$ and ${{\mathbb C}}^{n_1\times n_2}$ the spaces of $n_1 \times n_2$ matrices with components in ${{\mathbb R}}$ and ${{\mathbb C}}$, respectively. We will consider the identifications ${{\mathbb R}}^m\simeq {{\mathbb R}}^{m\times 1}$ and ${{\mathbb C}}^m\simeq {{\mathbb C}}^{m\times 1}$. We denote $I_n$ and $O_n$ the $n\times n$ identity and zero matrices, respectively. The $n_1\times n_2$ zero matrix is represented by $O_{n_1\times n_2}$. Finally, we will use the notation $0_n$ to represent the column vector $O_{n \times 1}$. Matrix norms in both ${{\mathbb R}}^{n_1 \times n_2}$ and ${{\mathbb C}}^{n_1\times n_2}$ are the ones induced from the corresponding vector norms. That is to say, for a $n_1 \times n_2$ matrix $M$, we have $$|M| = \max_{i= 1,\dots,n_1} \sum_{j= 1,\dots, n_2} |M_{i,j}|.$$ In particular, if $v$ is a $n_2$-dimensional vector, $|M v|\leq |M| |v|$. Moreover, $M^\top$ denotes the transpose of the matrix $M$, so that $$|M^\top|= \max_{j= 1,\dots, n_2} \sum_{i= 1,\dots,n_1} |M_{i,j}|.$$
Given an analytic function $f: \U\subset {{\mathbb C}}^m \to {{\mathbb C}}$, defined in an open set $\U$, the action of the $r$-order derivative of $f$ at a point $x\in \U$ on a collection of (column) vectors $v_1,\dots, v_r\in {{\mathbb C}}^m$, with $v_k= (v_{1k}, \dots, v_{mk})$, is $$\Dif^r f(x) [v_1,\dots,v_r] = \sum_{\ell_1,\dots,\ell_r} \frac{\partial^r f}{\partial x_{\ell_1}\dots\partial x_{\ell_r}}(x)\ v_{\ell_1 1} \cdots v_{\ell_r r},$$ where the indices $\ell_1,\dots,\ell_r$ run from $1$ to $m$.
The construction is extended to vector and matrix valued functions as follows: given a matrix valued function $M:\U\subset{{\mathbb C}}^m\to {{\mathbb C}}^{n_1\times n_2}$ (whose components $M_{i,j}$ are analytic functions), a point $x\in \U$, and a collection of (column) vectors $v_1,\dots, v_r\in {{\mathbb C}}^m$, we obtain a $n_1\times n_2$ matrix $\Dif^r M(x) [v_1,\dots,v_r]$ such that $$\left(\Dif^r M(x) [v_1,\dots,v_r]\right)_{i,j} = \Dif^r M_{i,j}(x) [v_1,\dots,v_r].$$ For $r=1$, we will often write $\Dif M(x) [v]= \Dif^1 M(x) [v]$ for $v\in {{\mathbb C}}^m$.
Notice that, given a function $f: \U \subset {{\mathbb C}}^m \to {{\mathbb C}}^n\simeq {{\mathbb C}}^{n\times 1}$ we can think of $\Dif f$ as a matrix function $\Dif f: \U \to {{\mathbb C}}^{n\times m}$. Hence, $\Dif^1 f(x) [v]= \Dif f(x) v$ for $v\in{{\mathbb C}}^m$. Therefore, we can apply the transpose to obtain a matrix function $(\Dif f)^\top$, which acts on $n$-dimensional vectors, while $\Dif f^\top=\Dif (f^\top)$ acts on $m$-dimensional vectors. Hence, according to the above notation, the operators $\Dif$ and $(\cdot)^\top$ do not commute. Therefore, in order to avoid confusion, we must pay attention to the use of parenthesis.
A function $u : {{\mathbb R}}^d \to {{\mathbb R}}$ is 1-periodic if $u(\theta+e)=u(\theta)$ for all $\theta \in {{\mathbb R}}^d$ and $e \in {{\mathbb Z}}^d$. Abusing notation, we write $u:{{\mathbb T}}^d \to {{\mathbb R}}$, where ${{\mathbb T}}^d = {{\mathbb R}}^d/{{\mathbb Z}}^d$ is the $d$-dimensional standard torus. Analogously, for $\rho>0$, a function $u:{{\mathbb R}}^d_\rho\to {{\mathbb C}}$ is 1-periodic if $u(\theta+e)=u(\theta)$ for all $\theta \in {{\mathbb R}}^d_\rho$ and $e \in {{\mathbb Z}}^d$. We also abuse notation and write $u:{{\mathbb T}}^d_\rho \to {{\mathbb R}}$, where ${{\mathbb T}}^{d}_{\rho}= \{\theta\in {{\mathbb C}}^d / {{\mathbb Z}}^d : |\im\ \theta| < \rho\}$ is the complex strip of ${{\mathbb T}}^d$ of width $\rho>0$. We will also write the Fourier expansion of a periodic function as $$u(\theta)=\sum_{k \in {{\mathbb Z}}^d} \hat u_k \ee^{2\pi \ii k \cdot \theta}, \qquad \hat u_k =
\int_{{{\mathbb T}}^d} u(\theta) \ee^{-2 \pi \ii k \cdot \theta} \dif \theta\,,$$ and introduce the notation ${\langle{u}\rangle}:=\hat u_0$ for the average. The notation in the paragraph is extended to $n_1 \times n_2$ matrix valued periodic functions $M: {{\mathbb T}}^{d}_{\rho}\to{{\mathbb C}}^{n_1\times n_2}$, for which $\hat M_k \in {{\mathbb C}}^{n_1 \times n_2}$ denotes the Fourier coefficient of index $k\in {{\mathbb Z}}^d$.
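As a small practical aside (our own illustration, not part of the paper's estimates), the Fourier coefficients and the average ${\langle{u}\rangle}$ of a sampled periodic function can be approximated with the Fast Fourier Transform, which is how the hypotheses of such theorems are typically checked in the computer assisted framework mentioned in the introduction; a minimal sketch for $d=1$ on a regular grid is:

``` python
import numpy as np

# Fourier coefficients and the average <u> of a 1-periodic function from grid samples.
N = 64
theta = np.arange(N) / N
u = np.cos(2 * np.pi * theta) + 0.3 * np.sin(6 * np.pi * theta) + 0.7

u_hat = np.fft.fft(u) / N          # approximates hat{u}_k for k = 0, 1, ..., N-1 (mod N)
print("average <u> =", u_hat[0].real)    # ~ 0.7
print("|hat{u}_1|  =", abs(u_hat[1]))    # ~ 0.5 (the cosine splits into k = +1 and -1)
```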
Hamiltonian systems and invariant tori {#ssec:inv:tor}
--------------------------------------
In this paper we consider an open set $\mani$ of ${{\mathbb R}}^{2n}$ endowed with a symplectic form $\sform$, that is, a closed ($\dif \sform =0$) non-degenerate differential 2-form on $\mani$. We will assume that $\sform$ is exact ($\sform=\dif \aform$ for certain 1-form $\aform$ called *action form*), so $\mani$ is endowed with an exact symplectic structure. The matrix representations of $\aform$ and $\sform$ are given by the matrix valued functions $$\begin{array}{rcl}
a: \mani & \longrightarrow & {{\mathbb R}}^{2n}\\
z & \longmapsto & a(z)
\,,
\end{array}$$ and $$\begin{array}{rcl}
\Omega: \mani & \longrightarrow & {{\mathbb R}}^{2n \times 2n}\\
z & \longmapsto & \Omega(z)
=(\Dif a(z))^\top-\Dif a(z)
\,,
\end{array}$$ respectively. The non-degeneracy of $\sform$ is equivalent to $\det \Omega(z) \neq 0$ for all $z\in\mani$.
\[rem:proto\] The prototype example of symplectic structure is the *standard symplectic structure* on $\mani \subset {{\mathbb R}}^{2n}$: $\sform_0= \sum_{i= 1}^n {\rm d} z_{n+i}\wedge {\rm d}z_i$. An action form for $\sform_0$ is ${\aform_0}= \sum_{i= 1}^n z_{n+i}\
{\rm d} z_i$. The matrix representations of $\aform_0$ and $\sform_0$ are, respectively, $$a_0(z)= \begin{pmatrix} O_n & I_n \\ O_n & O_n\end{pmatrix} z\,, \qquad
\Omega_0= \begin{pmatrix} O_n & -I_n \\ I_n & O_n \end{pmatrix}\,.$$ Another usual action form on $\mani\subset {{\mathbb R}}^{2n}$ is ${\aform_0}=\tfrac{1}{2}\sum_{i= 1}^n (z_{n+i}\
{\rm d} z_i-z_i \ \dif z_{n+i})$, which is represented as $$a_0(z)= \frac{1}{2} \begin{pmatrix} O_n & I_n \\ -I_n & O_n\end{pmatrix} z\,,$$ in coordinates.
We say that a vector field $X : \mani\to {{\mathbb R}}^{2n}$ is *symplectic* (or *locally-Hamiltonian*) if $L_X \sform=0$ where $L_X$ stands for the Lie derivative with respect to $X$. Using Cartan’s magic formula, and the fact that $\dif \sform=0$, it turns out that $X$ is symplectic if and only if $i_X \sform$ is closed. We say that $X$ is *exact symplectic* (or *Hamiltonian*) if the contraction $i_X \sform$ is exact, i.e., if there exists a function $\H : \mani\to {{\mathbb R}}$ (globally defined) such that $i_X \sform =
-\dif \H$. In coordinates, an exact symplectic vector field satisfies $$\Omega(z) X(z)= (\Dif \H(z))^\top\,,
\qquad
\mbox{i.e.,}
\qquad
X(z)= \Omega(z)^{-1} (\Dif \H(z))^\top\,.$$ Hence, we will use the notation $X=X_\H$.
The Poisson bracket of two functions $f$, $g$ is given by $\{f,g\} =
-\sform(X_f,X_g)$. In coordinates, $$\{f,g\} (z)
= \Dif f(z) \Omega(z)^{-1} (\Dif g(z))^\top.$$ Then, if $\varphi_t$ is the flow of $X_\H$, it follows that $$\frac{\dif}{\dif t} (f \circ \varphi_t) = \{f,\H\} \circ \varphi_t,$$ and so, $f$ is a conserved quantity if $\{f,\H\}=0$.
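As a quick check of signs and conventions (a standard computation, added here for the reader's convenience), for the standard structure of Remark \[rem:proto\] one has $\Omega_0^{-1}=-\Omega_0$, and the coordinate formula above reduces to the familiar expression $$\{f,g\}(z)= \sum_{i=1}^{n}\left(
\frac{\partial f}{\partial z_{i}}\frac{\partial g}{\partial z_{n+i}}
-\frac{\partial f}{\partial z_{n+i}}\frac{\partial g}{\partial z_{i}}
\right)\,,$$ so that $\{z_i,z_{n+j}\}=\delta_{ij}$, with $z_1,\dots,z_n$ playing the role of positions and $z_{n+1},\dots,z_{2n}$ of momenta.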
In this context, given a Hamiltonian vector field $X_\H$ on $\mani$ and a frequency vector $\omega \in {{\mathbb R}}^d$, with $2\leq d\leq n$, we are interested in finding a parameterization $K : {{\mathbb T}}^d \rightarrow \mani$ satisfying $$\label{eq:inv:fv}
X_\H (K(\theta)) = \Dif K(\theta) \omega \,.$$ This means that the $d$-dimensional manifold $\torus=K({{\mathbb T}}^d)$ is invariant and the internal dynamics is given by the constant vector field $\omega$. For obvious reasons, equation (\[eq:inv:fv\]) is called the *invariance equation for $K$*. Therefore, given a parameterization $K : {{\mathbb T}}^d \rightarrow \mani$ and a frequency vector $\omega\in{{\mathbb R}}^d$, the *error of invariance* is the periodic function $E:{{\mathbb T}}^d \rightarrow {{\mathbb R}}^{2n}$ given by $$\label{eq:inv:err}
E(\theta) := X_\H (K(\theta)) - \Dif K(\theta) \omega \,.$$ Roughly speaking, if we have a good enough approximation of a $d$-dimensional invariant torus $\torus$ (that is, if the error is small enough in a certain norm), then one is interested in obtaining a true invariant torus, close to $\torus$, satisfying (\[eq:inv:fv\]).
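To fix ideas, the following small numerical sketch (our illustration; the example and all choices in it are ours and are not taken from the paper) evaluates the invariance error $E(\theta)$ of (\[eq:inv:err\]) on a grid for two uncoupled harmonic oscillators with the standard structure of Remark \[rem:proto\], $z=(q_1,q_2,p_1,p_2)$ and $\H= \omega_1(q_1^2+p_1^2)/2+\omega_2(q_2^2+p_2^2)/2$, for an exact invariant $2$-torus, for which the error vanishes up to rounding errors.

``` python
import numpy as np

w1, w2, r1, r2 = 1.0, np.sqrt(2.0), 0.7, 0.4
omega = np.array([w1, w2]) / (2 * np.pi)        # frequency vector in the angle variables

def X_H(z):                                     # X_H = Omega_0^{-1} (DH)^T
    q1, q2, p1, p2 = z
    return np.array([w1 * p1, w2 * p2, -w1 * q1, -w2 * q2])

def K(theta):                                   # exact invariant 2-torus
    t1, t2 = 2 * np.pi * theta
    return np.array([r1 * np.cos(t1), r2 * np.cos(t2),
                     -r1 * np.sin(t1), -r2 * np.sin(t2)])

def DK(theta):
    t1, t2 = 2 * np.pi * theta
    return 2 * np.pi * np.array([[-r1 * np.sin(t1), 0.0],
                                 [0.0, -r2 * np.sin(t2)],
                                 [-r1 * np.cos(t1), 0.0],
                                 [0.0, -r2 * np.cos(t2)]])

grid = [np.array([i, j]) / 16 for i in range(16) for j in range(16)]
err = max(np.max(np.abs(X_H(K(th)) - DK(th) @ omega)) for th in grid)
print("max |E(theta)| on the grid:", err)       # ~ 1e-16
```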
\[rem:topo\] Equation (\[eq:inv:fv\]) is the infinitesimal version of the equation $$\varphi_t(K(\theta)) = K(\theta+\omega t)\,,$$ where $\varphi_t$ is the flow of $X_\H$. Accordingly, we can study the invariance of $\torus$ using a discrete version of a KAM theorem for symplectic maps. We refer the reader to [@GonzalezJLV05; @FiguerasHL17; @HaroCFLM16] for such a-posteriori theorems, quantitative estimates, and applications. However, notice that obtaining the required estimates for the flow $\varphi_t$ demands to integrate the equations of motion up to second order variational equations.
If $\omega \in {{\mathbb R}}^d$ is *nonresonant* (i.e. if $\omega\cdot k\neq 0$ for every $k \in {{\mathbb Z}}^d\backslash\{0\}$) then $z(t)=K({\alpha}+\omega t)$ is a quasi-periodic solution of $X_\H$ for every ${\alpha}\in {{\mathbb T}}^d$, and ${\alpha}$ is called the initial phase of the parameterization. It is well known that quasi-periodicity implies additional geometric properties of the torus (see [@BroerHS96; @Moser66c]). In particular, that the torus is *isotropic*. This means that the pullback $K^*\sform$ of the symplectic form on the torus $\torus$ vanishes. In matrix notation, the representation of $K^*\sform$ at a point $\theta \in {{\mathbb T}}^d$ is $$\label{def-OK}
\Omega_K(\theta)
= (\Dif K(\theta))^\top \Omega(K(\theta)) \ \Dif K(\theta)\,,$$ and so, $\torus$ is isotropic if $\Omega_K(\theta)=O_{d}$, $\forall \theta \in {{\mathbb T}}^d$. If $d=n$ then $\torus$ is *Lagrangian*. Moreover, quasi-periodicity implies that $$\H(K(\theta)) = {\langle{\H \circ K}\rangle}\,, \qquad \forall \theta \in {{\mathbb T}}^d\,,$$ which means that the torus is contained in an energy level of the Hamiltonian.
Other topologies can be considered for the ambient manifold. For example, we may have some information (e.g. from normal form analysis) that allows us to construct tubular coordinates around a torus. In this situation, it is interesting to look for an invariant torus inside an ambient manifold of the form $\mani \subset {{\mathbb T}}^d \times U$, with $U \subset {{\mathbb R}}^{2n-d}$. Notice that both $a(z)$ and $\H(z)$ are 1-periodic in the first $d$-variables, and, since the Poisson bracket preserves this property, also is $X_{\H}$. Assume that we are interested in obtaining an invariant torus that preserves the topology of the manifold (typically called *primary torus*). Then we must choose a parameterization $K : {{\mathbb T}}^d \rightarrow \mani$ such that $K(\theta)-(\theta,0)$ is $1$-periodic (that is, $K$ is *homotopic to the zero-section*). Therefore, it turns out that the error function is also 1-periodic, so the construction makes sense. All expressions and formulas presented in this paper remain valid, and one only needs to take into account that the elements of $K$ and $\Dif K$ contain an additional term that comes from the topology. We refer to [@HaroCFLM16] for a detailed discussion of this case. The case of intermediate topologies $\mani \subset {{\mathbb T}}^m \times U$ with $U \subset {{\mathbb R}}^{2n-m}$ is similar.
Conserved quantities and families of invariant tori {#ssec:conserved}
---------------------------------------------------
We will assume that the vector field $X_\H$ has $n-d$ first integrals in involution $p_1,\dots,p_{n-d}:\mani\to{{\mathbb R}}$, that is to say: $$\label{eq:Poisson:H:p}
\{\H,p_j\}= 0\,,
\quad 1\leq j\leq n-d
\,,$$ and $$\label{eq:Poisson:p:p}
\{p_i,p_j\}= 0\,,
\quad 1\leq i,j\leq n-d
\,.$$ Consequently, the Lie brackets of the corresponding Hamiltonian vector fields vanish and we have: $$\label{eq:comm:H:p}
\Dif X_\H(z) X_{p_j}(z)= \Dif X_{p_j}(z) X_\H(z)\,,
\quad 1\leq j\leq n-d
\,,$$ and $$\label{eq:comm:p:p}
\Dif X_{p_i}(z) X_{p_j}(z)= \Dif X_{p_j}(z) X_{p_i}(z)\,,
\quad 1\leq i,j\leq n-d
\,.$$ We will encode the $n-d$ first integrals in an only function $p:\mani\to {{\mathbb R}}^{n-d}$, so that the involution conditions are rephrased as $$\Dif \H(z) \ \Omega(z)^{-1} (\Dif p(z))^\top = 0_{n-d}^\top \,,$$ and $$\Dif p(z) \ \Omega(z)^{-1} (\Dif p(z))^\top = O_{n-d}\,.$$ Moreover, the corresponding $n-d$ Hamiltonian vector fields are the columns of the matrix function $X_p:\mani \to{{\mathbb R}}^{2n \times (n-d)}$, with $(X_p)_{i,j}= (X_{p_j})_i$, and $$X_p(z)= \Omega(z)^{-1} (\Dif p(z))^\top\,.$$ The commuting conditions are $$\label{eq:comm:H:p:bis}
\Dif X_\H(z) X_{p}(z)= \Dif X_{p}(z) [X_\H(z)]
\,,$$ and $$\label{eq:comm:p:p:bis}
\Dif X_{p_i}(z) X_{p}(z)= \Dif X_{p}(z) [X_{p_i}(z)]\,,
\quad 1\leq i\leq n-d
\,.$$
The above setting implies that $p$ generates a $(n-d)$-parameter family of local symplectomorphisms. In particular, we introduce $$\label{eq:loc:action}
\Phi_s = \varphi_{s_1}^{1} \circ \cdots \circ \varphi_{s_{n-d}}^{n-d}$$ where $s = (s_1,\ldots,s_{n-d})$ belongs to an open neighborhood of $0$ in ${{\mathbb R}}^{n-d}$, and $\varphi_{s_i}^i$ is the flow of the Hamiltonian vector field $X_{p_i}$. Notice that this is a local group action of ${{\mathbb R}}^{n-d}$ and, by the commutativity of the flows in (\[eq:comm:p:p\]), we have $$\frac{\partial \Phi_s}{\partial s_i} = X_{p_i} \circ \Phi_s\,.$$ If the vector fields are linearly independent, then the local group action defines a family of local diffeomorphisms $s \mapsto \Phi_s$ which commutes with the flow of $X_\H$: $$\Phi_s \circ \varphi_t = \varphi_t \circ \Phi_s\,.$$ The map $\Phi_s$ is usually called the *continuous family of symmetries* of $X_\H$. A consequence of this is the following: if $\torus=K({{\mathbb T}}^d)$ is invariant for $X_\H$, with frequency vector $\omega$, then $\torus_s = \Phi_s(\torus)$ is also invariant for $X_\H$ with the same frequency vector: $$\varphi_t(\Phi_s(K(\theta))) = \Phi_s (\varphi_t(K(\theta))) =
\Phi_s(K(\theta+\omega t))\,,$$ and so, $\Phi_s \circ K$ is a parameterization of $\torus_s$.
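As a simple illustration (a standard example, included here by us and not taken from the paper), consider ${{\mathbb R}}^4$ with the standard structure of Remark \[rem:proto\], $z=(q_1,q_2,p_1,p_2)$, and the angular momentum $p(z)=q_1p_2-q_2p_1$ as a first integral. Then $$X_p(z)=\begin{pmatrix}-q_2\\ q_1\\ -p_2\\ p_1\end{pmatrix},\qquad
\Phi_s(z)=\begin{pmatrix} R_s & O_2 \\ O_2 & R_s\end{pmatrix} z\,,\qquad
R_s=\begin{pmatrix}\cos s & -\sin s\\ \sin s & \cos s\end{pmatrix},$$ so the family of symmetries is the simultaneous rotation by angle $s$ in the $(q_1,q_2)$ and $(p_1,p_2)$ planes. Any Hamiltonian invariant under these rotations commutes with $p$, and $\Phi_s$ maps each invariant torus of $X_\H$ to another invariant torus with the same frequency vector.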
An important observation is that all invariant tori of the family are contained in the submanifold $$\{z \in \mani \,:\, \H(z)=\H_0\,,\, p(z)=p_0\}\,.$$ Hence, once the frequency $\omega$ of the torus has been fixed, we cannot fix the values of $\H_0$ or $p_0$. This case is considered in Section \[ssec:theo\], referred to as the ordinary case. If we are interested in obtaining an invariant torus on a target energy level $\H_0$, then we fix the direction of the frequency vector $\omega$ but adjust its modulus (iso-energetic case). Similarly, the same kind of adjustment of $\omega$ can be made if we are interested in obtaining an invariant torus having a prefixed value of one of the components of $p$ (iso-momentum case). The previous scenarios can be generalized by considering any first integral $c : \mani \rightarrow {{\mathbb R}}$ that commutes with $\H$ and $p=(p_1,\ldots,p_{n-d})$, that is, $$\Dif c(z) X_\H(z)=0\,,
\qquad
\Dif c(z) X_p(z)= 0_{n-d}^\top\,.$$ For example, we can think of $c$ as a function of $\H$ and $p$ given by $c(z)=f_{c}(\H(z),p(z))$, where $f_{c}:{{\mathbb R}}\times {{\mathbb R}}^{n-d} \rightarrow {{\mathbb R}}$ is known (we have the particular cases $c(z)=\H(z)$ and $c(z)=p_j(z)$ with $j \in \{1,\ldots,n-d\}$). In Section \[ssec:theo:iso\] we will establish sufficient conditions to obtain an invariant torus having a prefixed value of the target conserved quantity $c$ (generalized iso-energetic case). We emphasize that selecting simultaneously the values of several conserved quantities is not possible in general, but makes sense for Cantor sets of frequencies.
Linearized dynamics and reducibility {#ssec:red:lin:eq}
------------------------------------
In this section we describe the geometric construction of a suitable symplectic frame attached to an invariant torus $\torus$ of a Hamiltonian system $X_\H$ with conserved quantities $p : \mani \rightarrow {{\mathbb R}}^{n-d}$. Indeed, given a parameterization $K:{{\mathbb T}}^d \rightarrow \mani$ satisfying $$\label{eq:inv0}
X_\H(K(\theta))=\Dif K(\theta) \omega\,,$$ and given any $m$-dimensional vector subbundle parameterized by $V : {{\mathbb T}}^d \rightarrow {{\mathbb R}}^{2n \times m}$, with $1\leq m \leq 2n$, we introduce the operator $$\label{eq:Loper}
{{{\mathcal X}}_{V}}(\theta) := \Dif X_\H(K(\theta)) V(\theta) -
\Dif V(\theta) [\omega]\,,$$ which corresponds to the infinitesimal displacement of $V$, and we say that a bundle is invariant under the linearized equations if ${{{\mathcal X}}_{V}}(\theta)=O_{2n \times m}$ for every $\theta \in {{\mathbb T}}^d$.
We consider the map $L:{{\mathbb T}}^d \rightarrow {{\mathbb R}}^{2 n\times n}$ given by $$L(\theta) =
\begin{pmatrix} \Dif K(\theta) & X_p(K(\theta)) \end{pmatrix}\,,$$ and we will assume that $\mathrm{rank}\, L(\theta)=n$ for every $\theta \in {{\mathbb T}}^d$. Then, it turns out that $L(\theta)$ satisfies $$\label{eq:inv1}
{{{\mathcal X}}_{L}}(\theta) = O_{2n \times n}\,, \qquad \forall \theta \in {{\mathbb T}}^d\,.$$ This invariance follows from two observations. Firstly, taking derivatives at both sides of (\[eq:inv0\]), we have $$\Dif X_\H(K(\theta)) \Dif K(\theta) = \Dif (\Dif K(\theta)) [\omega]
\,,$$ and secondly, from the commutation rule (\[eq:comm:H:p:bis\]), we have: $$\begin{aligned}
\Dif (X_p(K(\theta))) [\omega]
= {} & \Dif X_p(K(\theta)) [\Dif K(\theta) \omega] \\
= {} & \Dif X_p(K(\theta)) [X_\H(K(\theta))] \\
= {} & \Dif X_\H(K(\theta)) X_p(K(\theta))\,.\end{aligned}$$ Then, the property in (\[eq:inv1\]) follows by putting together both expressions.
By similar geometric properties (detailed computations will be presented in Section \[ssec:symp\]), it turns out that the subspace $L(\theta)$ is *Lagrangian*, i.e., we have $$L(\theta)^\top \Omega(K(\theta)) L(\theta) = O_{n}$$ for every $\theta \in {{\mathbb T}}^d$. Then one can use the geometric structure of the problem to complement the above frame, thus obtaining linear coordinates on the full tangent bundle $\Tan \mani$ that express $\Dif X_\H \circ K$ in a simple way. This is one of the main ingredients of recent KAM theorems presented in different contexts and structures (see e.g. [@CallejaCLa; @GonzalezJLV05; @FontichLS09; @GonzalezHL13; @LuqueV11]). The constructions have been summarized in [@HaroCFLM16] using a common framework that unifies the previous works and emphasizes the role of the symplectic properties. We briefly summarize this framework in Section \[ssec:sym:frame\].
Hence, we will obtain a map $N: {{\mathbb T}}^d \rightarrow {{\mathbb R}}^{2n\times n}$ such that the juxtaposed matrix $$P(\theta) =
\begin{pmatrix} L(\theta) & N(\theta) \end{pmatrix}\,,$$ satisfies $\mathrm{rank}\, P(\theta)=2n$ for every $\theta \in {{\mathbb T}}^d$ and also $$\label{eq:Psym}
P(\theta)^\top \Omega(K(\theta)) P(\theta) = \Omega_0\,.$$ In this case, we say that $P:{{\mathbb T}}^d \rightarrow {{\mathbb R}}^{2n\times 2n}$ is a symplectic frame. The use of these linear coordinates on $\Tan_{\torus} \mani$ has several advantages. Among them, it produces a natural and geometrically meaningful non-degeneracy condition (twist condition) in the KAM theorem, it simplifies certain computations substantially ($P^{-1}$ can be computed directly), but most importantly, it reduces the linearized equation to triangular form as follows: $$\Dif X_\H(K(\theta)) P(\theta)
-\Dif P(\theta) [\omega]
= P(\theta) \Lambda(\theta)\,,$$ with $$\Lambda(\theta)
= \begin{pmatrix}
O_n & T(\theta) \\
O_n & O_n
\end{pmatrix}$$ and $$\begin{aligned}
T(\theta) = {} & N(\theta)^\top \Omega(K(\theta))
\left(
\Dif X_\H(K(\theta))
N(\theta)
-\Dif N(\theta) [\omega]
\right) \\
= {} & N(\theta)^\top \Omega(K(\theta)) {{{\mathcal X}}_{N}}(\theta)\,.\end{aligned}$$ The matrix $T$ is usually called the *torsion matrix* and plays the role of Kolmogorov’s non-degeneracy condition.
The torsion measures the symplectic area determined by the normal bundle and its infinitesimal displacement. Notice that, in the present paper, the torsion involves geometrical and dynamical properties of both the torus and the first integrals, and in fact, of the family of $d$-dimensional invariant tori.
The above setting allows us to approximate the solutions of the linearized equations around the torus by the solutions of a triangular system which is easier to handle. The fundamental idea is the following fact: if $\torus$ is approximately invariant, the above geometrical properties are still satisfied, modulo some error functions which can be controlled in terms of the error of invariance. The main ingredient is the fact that (under certain assumptions) the frame $\theta \mapsto L(\theta)$, associated to an $(n-d)$-parameter family of approximately invariant tori, is also approximately Lagrangian. Hence, the linear dynamics around the torus is approximately reducible. This is enough to perform a quadratic scheme to correct the initial approximation. This will be discussed in Section \[sec:lemmas\].
Construction of a geometrically adapted frame {#ssec:sym:frame}
---------------------------------------------
In this section we deal with the construction of a symplectic frame on the bundle $\Tan_\torus\mani$ by complementing the column vectors of a map $L:{{\mathbb T}}^d \rightarrow {{\mathbb R}}^{2n \times n}$ that parameterizes a Lagrangian subbundle. To this end, we assume that we have a map $N^0:{{\mathbb T}}^d \rightarrow {{\mathbb R}}^{2n\times n}$ such that $$\label{eq:cond:CaseI}
\mathrm{rank} \begin{pmatrix} L(\theta) & N^0(\theta) \end{pmatrix} = 2n
\quad
\Leftrightarrow
\quad
\det (L(\theta)^\top \Omega(K(\theta)) N^0(\theta)) \neq 0\,,$$ for every $\theta \in {{\mathbb T}}^d$. Then, we complement the Lagrangian subspace generated by $L(\theta)$ by means of a map $N:{{\mathbb T}}^d \rightarrow {{\mathbb R}}^{2n \times n}$ given by $$N(\theta) = L(\theta)A(\theta) + N^0(\theta) B(\theta)\,,$$ where $$B(\theta)=-(L(\theta)^\top \Omega(K(\theta)) N^0(\theta))^{-1}$$ and $A(\theta)$ is a solution of $$A(\theta)-A(\theta)^\top = -B(\theta)^\top N^0(\theta)^\top \Omega(K(\theta)) N^0(\theta) B(\theta).$$ The solution of this equation is given by $$A(\theta)=-\frac{1}{2}(B(\theta)^\top N^0(\theta)^\top \Omega(K(\theta)) N^0(\theta) B(\theta))$$ modulo the addition of any symmetric matrix. A direct computation shows that the juxtaposed matrix $P(\theta)=(L(\theta)~N(\theta))$ satisfies .
The map $N^0$ can be obtained by directly complementing the tangent vectors of the initial (approximately invariant) torus, and after that, be fixed along the iterative procedure. This is called **Case I** in chapter 4 of [@HaroCFLM16]. It has the advantage of being more general and flexible, but it requires some extra work to obtain optimal quantitative estimates (we refer the reader to [@FiguerasHL17], where additional geometric properties of $N^0$ are controlled).
A natural way to construct a map $N^0$ systematically is by using additional geometric information. In this paper, we assume that $\mani$ is also endowed with a Riemannian metric $\gform$, represented in coordinates as the positive-definite symmetric matrix valued function $G:\mani\to{{\mathbb R}}^{2n\times 2n}$. Then, we define the linear isomorphism $\J: \Tan \mani
\to \Tan\mani$ such that $\sform_z(\J_z u,v)=\gform_z(u,v)$, $\forall u,v \in T_z \mani$. Observe also that $\J$ is antisymmetric with respect to $\gform$, that is, $\gform_z(u,\J_z v)=-\gform_z(\J_zu,v)$, $\forall u,v \in \Tan_z \mani$. The matrix representation of $\J$ is given by the matrix valued function $J:\mani\to {{\mathbb R}}^{2n\times 2n}$. Then, we have $$\label{eq:prop:struc1}
\Omega^\top = -\Omega\,, \qquad G^\top = G\,, \qquad J^\top \Omega=G\,,$$ and we introduce the matrix valued function $\tilde \Omega:\mani\to{{\mathbb R}}^{2n\times 2n}$, as $$\tilde \Omega := J^{\top} \Omega J = G J\,,$$ for the representation of the symplectic form in the frame given by $J$.
Then, we choose the map $N^0$ as follows $$\label{eq:map:N0:CII}
N^0(\theta):= J(K(\theta)) L(\theta)$$ and condition (\[eq:cond:CaseI\]) is equivalent to $$\det (L(\theta)^\top G(K(\theta)) L(\theta)) \neq 0\,,\qquad \forall \theta \in {{\mathbb T}}^d\,.$$ Moreover, the matrices $A(\theta)$ and $B(\theta)$ are expressed as follows $$\begin{aligned}
A(\theta)={}& -\frac{1}{2} (B(\theta)^\top L(\theta)^\top \tilde \Omega(K(\theta)) L(\theta) B(\theta)) \,, \\
B(\theta)={}& (L(\theta)^\top G(K(\theta)) L(\theta))^{-1}\,.\end{aligned}$$ This is called **Case II** in [@HaroCFLM16].
We want to point out that the above construction differs slightly from the discussion in chapter 4 of [@HaroCFLM16], where the linear isomorphism $\J$ is defined as $\sform_z(u,v)=\gform_z(u,\J_zv)$. This choice results in the map $$N^0(\theta)= - J(K(\theta))^{-1} L(\theta)$$ rather than (\[eq:map:N0:CII\]). Both constructions are equivalent, but the construction described here is geometrically more natural and produces better quantitative estimates.
There is also the special case where the isomorphism $\J$ is anti-involutive, that is, $\J^2 = -\I$. Then, we say that the triple $(\sform,\gform,\J)$ is compatible and that $\J$ endows $\mani$ with a complex structure. This is called **Case III** in chapter 4 of [@HaroCFLM16]. In coordinates, we have the following properties $$J^2= -I_{2n}, \qquad
\Omega= J^\top \Omega J, \qquad G= J^\top G J.$$ In this situation, we have $$N^0(\theta) = J(K(\theta)) L(\theta)\,,
\qquad
A(\theta) = O_n\,,
\qquad
B(\theta) = (L(\theta)^\top G(K(\theta)) L(\theta))^{-1}\,.$$
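As a pointwise sanity check of this construction (a minimal numerical sketch of ours, assuming the standard structure $\Omega_0$, the Euclidean metric $G=I_{2n}$ and $J=\Omega_0$, which form a compatible triple; it is not part of the quantitative estimates of the paper), one can verify that the frame built from any Lagrangian matrix $L$ is indeed symplectic:

``` python
import numpy as np

# Case III frame at one point: N = J L (L^T L)^{-1}, P = (L N), check P^T Omega_0 P = Omega_0.
rng = np.random.default_rng(0)
n = 3
Omega0 = np.block([[np.zeros((n, n)), -np.eye(n)],
                   [np.eye(n), np.zeros((n, n))]])
J = Omega0                                       # J^T Omega_0 = I_{2n} = G and J^2 = -I_{2n}

S = rng.standard_normal((n, n)); S = S + S.T     # random symmetric matrix
L = np.vstack([np.eye(n), S])                    # graph of S: columns span a Lagrangian subspace

B = np.linalg.inv(L.T @ L)                       # B = (L^T G L)^{-1} with G = I
N = J @ L @ B                                    # normal frame (A = O_n in Case III)
P = np.hstack([L, N])

print(np.allclose(L.T @ Omega0 @ L, 0))          # True: L is Lagrangian
print(np.allclose(P.T @ Omega0 @ P, Omega0))     # True: P is a symplectic frame
```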
It is important to notice that the above constructions lead to different quantitative estimates in the KAM theorem. Selecting the best option depends on the particular problem under consideration. Since **Case I** has been fully reported in the references [@FiguerasHL17; @HaroCFLM16], in this paper we will focus on obtaining sharp quantitative estimates for **Case II** and **Case III**. Hence, we cover a gap in the literature that could be valuable in future studies.
Univocal determination of an invariant torus of the family {#ssec:univocal}
----------------------------------------------------------
In this section we describe suitable strategies to avoid the indeterminations observed in Sections \[ssec:inv:tor\] and \[ssec:conserved\]. Let us recapitulate them:
- If $\torus = K({{\mathbb T}}^d)$ is a $d$-dimensional invariant torus of $X_\H$ of frequency $\omega$, then $K^{\alpha}(\theta)=K(\theta+{\alpha})$ also parameterizes $\torus$ for every ${\alpha}\in{{\mathbb T}}^d$.
- If $\torus = K({{\mathbb T}}^d)$ is a $d$-dimensional invariant torus of $X_\H$ of frequency $\omega$, and we introduce $K_s = \Phi_s \circ K$ using the family of symmetries, then $\torus_s= K_s({{\mathbb T}}^d)$ is also an invariant torus of frequency $\omega$ for every $s \in {{\mathbb R}}^{n-d}$ in the domain of definition.
The first indetermination corresponds to the choice of the parameterization of the invariant object, and it can be avoided simply by fixing an initial phase of the torus. To this end, we consider a $(2n-d)$-dimensional manifold given by the preimage of a map $Z: {{\mathbb R}}^{2n} \rightarrow {{\mathbb R}}^d$ and select the value of ${\alpha}$ such that $$Z(K^{\alpha}(0))=Z_0\,,$$ for some $Z_0 \in {{\mathbb R}}^d$. Notice that $Z$ must be selected in such a way that the transversality condition $$\det (\Dif Z(K^{\alpha}(0)) \Dif K^{\alpha}(0)) \neq 0\,,$$ holds in an open set of values ${\alpha}\in {{\mathbb R}}^d$.
\[eq:fix:top\] If $\mani = {{\mathbb T}}^d \times U$ and we are considering a non-contractible invariant torus of the form $K(\theta)=(K^x(\theta),K^y(\theta)) \in \mani$, then a typical way to determine the phase univocally is to impose the following average condition $${\langle{K^x-\id}\rangle} = 0_d\,.$$ In this case, we select ${\alpha}=-{\langle{K^x-\id}\rangle}$. Another possibility, in the spirit described above, is to select the transversal plane given by $Z(z)=(z_1,\ldots,z_d)$.
The second indetermination corresponds to a choice of a given invariant torus inside the $(n-d)$-parameter family described in Section \[ssec:conserved\]. In this case, we need to fix additional $(n-d)$ conditions in order to define univocally a single torus of the family. For example, we may assume that there is a map $q : \mani \rightarrow {{\mathbb R}}^{n-d}$ satisfying $$\label{eq:cond:pq}
\Dif q(z) X_p(z)
= \Dif q(z) \ \Omega(z)^{-1} (\Dif p(z))^\top = I_{n-d}\,,$$ which means that $\{q_i,p_j\}=\delta_{ij}$. For obvious reasons, $p$ and $q$ are referred to as the *generalized momentum* and the *generalized conjugated position*, respectively. Then, we can determine univocally a torus in the family by asking for the extra equations $$q \circ K (0) = q_0 \in {{\mathbb R}}^{n-d}\,.$$ We can recover the full family by considering the map $$s \longmapsto q \circ \Phi_s \circ K = q \circ K + s\,,$$ where we used that $$\begin{aligned}
\frac{\pd}{\pd s_i} (q\circ \Phi_s \circ K)
= {} &
(\Dif q \circ \Phi_s \circ K)
\pd_{s_i} (\Phi_s \circ K)
\\
= {} &
(\Dif q \circ \Phi_s \circ K)
(X_{p_i} \circ \Phi_s \circ K)
= e_i\,.\end{aligned}$$
The above construction can be readily generalized by asking the map $q$ to satisfy $$\det (\Dif q(z) X_p(z))\neq 0$$ instead of (\[eq:cond:pq\]).
It is clear that both indeterminations can be addressed simultaneously by fixing $n$ conditions. This can be done for example by asking for a transversality condition on the Lagrangian frame $\theta \mapsto L(\theta)$ described in Section \[ssec:red:lin:eq\] at a given point. To this end, we denote $$K_{{\alpha},s}(\theta)=\Phi_s(K(\theta+{\alpha}))\,,
\qquad {\alpha}\in {{\mathbb R}}^d\,,
\qquad s \in {{\mathbb R}}^{n-d}\,,$$ we consider a map $Q:\mani \rightarrow {{\mathbb R}}^n$, and we ask for the condition $$Q(K_{{\alpha},s}(0)) = Q_0$$ for a given point $Q_0\in {{\mathbb R}}^n$. In this situation, the transversality condition reads $$\det \left(\Dif Q(K_{{\alpha},s}(0))L_{{\alpha},s}(0)
\right)
\neq 0$$ where $$\theta \mapsto L_{{\alpha},s}(\theta) =
\begin{pmatrix}
\Dif K_{{\alpha},s}(\theta) & X_p(K_{{\alpha},s}(\theta))
\end{pmatrix}$$ is the Lagrangian frame associated with the torus $\torus_{{\alpha},s}$. For example, a natural choice would be $$Q(z)=\begin{pmatrix}
Z(z) \\
q(z)
\end{pmatrix}\,,$$ where $Z$ is selected to fix the phase of the parameterizations and $q$ are generalized positions associated with $p$. Depending on the topology of the ambient space, we may consider other choices (see Remark \[eq:fix:top\]).
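In practice, the $n$ scalar conditions above are imposed by a standard Newton iteration on $({\alpha},s)$, whose Jacobian is exactly the matrix $\Dif Q(K_{{\alpha},s}(0))\,L_{{\alpha},s}(0)$ appearing in the transversality condition. The following sketch is a minimal illustration under the assumption that the user provides routines evaluating $Q$, $\Dif Q$, $K_{{\alpha},s}(0)$ and $L_{{\alpha},s}(0)$; all names are ours.

```python
import numpy as np

def fix_torus_and_phase(Q, DQ, K_at_zero, L_at_zero, alpha0, s0, Q0,
                        tol=1e-12, max_iter=20):
    """Newton iteration for Q(K_{alpha,s}(0)) = Q_0.

    Q, DQ      : callables z -> Q(z) in R^n and its n x 2n Jacobian DQ(z).
    K_at_zero  : callable (alpha, s) -> K_{alpha,s}(0) in R^{2n}.
    L_at_zero  : callable (alpha, s) -> 2n x n Lagrangian frame L_{alpha,s}(0).
    The Jacobian of (alpha, s) -> Q(K_{alpha,s}(0)) is DQ(z) L_{alpha,s}(0),
    invertible by the transversality condition."""
    d = np.atleast_1d(alpha0).size
    x = np.concatenate([np.atleast_1d(alpha0), np.atleast_1d(s0)])
    for _ in range(max_iter):
        alpha, s = x[:d], x[d:]
        z = K_at_zero(alpha, s)
        res = Q(z) - Q0
        if np.linalg.norm(res) < tol:
            break
        x = x - np.linalg.solve(DQ(z) @ L_at_zero(alpha, s), res)
    return x[:d], x[d:]
```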
A-posteriori KAM theory for partially integrable Hamiltonian systems
====================================================================
In this section, we present two a-posteriori KAM theorems for $d$-dimensional quasi-periodic invariant tori in Hamiltonian systems with $n$ degrees-of-freedom that have $n-d$ additional first integrals in involution. To this end, we will assume that the frequency vector $\omega$ satisfies Diophantine conditions. Specifically, we denote the set of Diophantine vectors as $$\label{eq:def:Dioph}
{{{\mathcal D}}_{\gamma,\tau}} =
\left\{
\omega \in {{\mathbb R}}^d \, : \,
{|{k \cdot \omega}|} \geq \frac{\gamma}{|k|_1^{\tau}}
\,,
\forall k\in{{\mathbb Z}}^d\backslash\{0\}
\,,
|k|_1 = \sum_{i= 1}^d |k_i|
\right\}\,,$$ for certain $\gamma >0$ and $\tau \geq d-1$.
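When applying the theorems to a concrete frequency vector, one typically checks the Diophantine inequality for all resonances up to a finite order and controls the remaining modes by separate arguments. The sketch below (an illustrative brute-force helper of our own, not a rigorous verification) returns the smallest value of ${|{k\cdot\omega}|}\,|k|_1^{\tau}$ over $0<|k|_1\leq K_{\max}$, which bounds the admissible $\gamma$ for those modes.

```python
import numpy as np
from itertools import product

def diophantine_constant_up_to(omega, tau, k_max):
    """Smallest value of |k . omega| * |k|_1^tau over the nonzero integer
    vectors k with |k|_1 <= k_max.  A Diophantine pair (gamma, tau) must
    satisfy gamma <= this value; modes with |k|_1 > k_max need a separate
    (e.g. analytical) argument."""
    d = len(omega)
    best = np.inf
    for k in product(range(-k_max, k_max + 1), repeat=d):
        norm1 = sum(abs(ki) for ki in k)
        if norm1 == 0 or norm1 > k_max:
            continue
        best = min(best, abs(float(np.dot(k, omega))) * norm1 ** tau)
    return best
```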
In Section \[ssec-anal-prelims\] we set some basic notation regarding Banach spaces and norms of analytic functions. In Section \[ssec:theo\] we present the statement of a KAM theorem for existence (and persistence) of $d$-dimensional invariant tori having fixed frequency vector $\omega \in {{{\mathcal D}}_{\gamma,\tau}}$. This corresponds to the so-called ordinary (à la Kolmogorov) KAM theorem. In Section \[ssec:theo:iso\] we present an adapted version of the theorem that generalizes the iso-energetic approach. Section \[sec:lemmas\] is devoted to the control of approximate geometric properties, which paves the way for the proofs of the main theorems in Sections \[sec:proof:KAM\] and \[sec:proof:KAM:iso\]. We will pay special attention to providing explicit and rather optimal bounds, with an eye on the application of the theorems and on computer assisted proofs. The constants have been collected in a series of tables in Appendix \[ssec:consts\].
Analytic functions and norms {#ssec-anal-prelims}
----------------------------
In this paper we work with real analytic functions defined in complex neighborhoods of real domains. We will consider the sup-norms of (matrix valued) analytic functions and their derivatives (see the notation in Section \[ssec:basic:notation\]). That is, for $f: \U\subset {{\mathbb C}}^m \to {{\mathbb C}}$, we consider $${\|{f}\|}_\U= \sup_{x\in \U} |f(x)|,$$ and $${\|{\Dif^r f}\|}_\U= \sum_{\ell_1,\dots,\ell_r} {\left\|{\frac{\partial^r f}{\partial x_{\ell_1}\dots\partial x_{\ell_r}}}\right\|}_\U,$$ that could be infinite. For $M:\U \subset {{\mathbb C}}^m\to {{\mathbb C}}^{n_1\times n_2}$, we consider the norms $${\|{M}\|}_\U= \max_{i= 1,\dots,n_1} \sum_{j= 1,\dots,n_2} {\|{M_{i,j}}\|}_\U \,,$$ $${\|{\Dif^r M}\|}_\U= \max_{i= 1,\dots,n_1} \sum_{j= 1,\dots,n_2} {\left\|{\Dif^r M_{i,j}}\right\|}_\U\,,$$ and we notice, of course, that the norms ${\|{M^\top}\|}_\U$ and ${\|{\Dif^r M^\top}\|}_\U$ are obtained simply by interchanging the role of the indices $i$ and $j$.
Let us remark that the above norms present Banach algebra-like properties. For example, given $r$ analytic functions $v_1,\dots, v_r: \U\to {{\mathbb C}}^m\simeq {{\mathbb C}}^{m\times 1}$, then the function $\Dif^r M [v_1,\dots, v_r]: \U\subset {{\mathbb C}}^m\to {{\mathbb C}}^{n_1\times n_2}$ defined as $$\Dif^r M [v_1,\dots, v_r](x)= \Dif^r M(x) [v_1(x),\dots, v_r(x)]$$ is also analytic, and we have $$\begin{aligned}
&{\|{\Dif^r M [v_1,\dots, v_r]}\|}_\U
\leq
\max_{i=1,\ldots,n_1} \sum_{j=1}^{n_2} {\|{\Dif^r M_{i,j} [v_1,\ldots,v_r]}\|}_\U \\
&\qquad \leq
\max_{i=1,\ldots,n_1} \sum_{j=1}^{n_2} {\left\|{\sum_{\ell_1,\ldots,\ell_r} \frac{\partial^r M_{i,j}}{\partial x_{\ell_1} \cdots \partial x_{\ell_r}} v_{\ell_1 1} \cdots v_{\ell_r r} }\right\|}_\U \\
&\qquad \leq
\max_{i=1,\ldots,n_1} \sum_{j=1}^{n_2} \sum_{\ell_1,\ldots,\ell_r} {\left\|{\frac{\partial^r M_{i,j}}{\partial x_{\ell_1} \cdots \partial x_{\ell_r}} }\right\|}_\U \max_{\ell=1,\ldots,m} {\left\|{v_{\ell,1}}\right\|}_\U
\cdots \max_{\ell=1,\ldots,m} {\left\|{v_{\ell,r}}\right\|}_\U \\
&\qquad = {\left\|{\Dif^r M}\right\|}_\U \ {\|{v_1}\|}_\U\cdots {\|{v_r}\|}_\U\,.\end{aligned}$$ There is also a similar bound for the action of the transpose: $$\begin{aligned}
&{\|{(\Dif^r M [v_1,\dots, v_r])^\top}\|}_\U
\leq
\max_{j=1,\ldots,n_2} \sum_{i=1}^{n_1} {\|{\Dif^r M_{i,j} [v_1,\ldots,v_r]}\|}_\U \\
&\qquad \leq {\left\|{\Dif^r M^\top}\right\|}_\U \ {\|{v_1}\|}_\U\cdots {\|{v_r}\|}_\U\,.\end{aligned}$$ In addition, given $M_1: \U \subset {{\mathbb C}}^m\to {{\mathbb C}}^{n_1\times n_3}$ and $M_2: \U \subset {{\mathbb C}}^m\to {{\mathbb C}}^{n_3\times n_2}$, we have $${\|{M_1 M_2}\|}_\U \leq
{\|{M_1}\|}_\U {\|{M_2}\|}_\U \, ,$$ and $${\|{\Dif(M_1 M_2)}\|}_\U \leq
{\|{\Dif M_1}\|}_\U {\|{M_2}\|}_\U + {\|{M_1}\|}_\U {\|{\Dif M_2}\|}_\U \, .$$
The particular case of real-analytic periodic functions deserves some additional comments. We denote by $\Anal({{\mathbb T}}^d_\rho)$ the Banach space of holomorphic functions $u:{{\mathbb T}}^d_\rho \to {{\mathbb C}}$, that can be continuously extended to $\bar{{\mathbb T}}^d_\rho$, and such that $u({{\mathbb T}}^d) \subset {{\mathbb R}}$ (real-analytic), endowed with the norm $${\|{u}\|}_\rho = {\|{u}\|}_{{{\mathbb T}}^d_\rho}= \max_{|\im\theta|\leq \rho} |u(\theta)|\,.$$ As usual in the analytic setting, we will use Cauchy estimates to control the derivatives of a function. Given $u \in \Anal({{\mathbb T}}^d_\rho)$, with $\rho>0$, then for any $0<\delta<\rho$ the partial derivative $\pd u/\pd {x_\ell}$ belongs to $\Anal({{\mathbb T}}^d_{\rho-\delta})$ and we have the estimates $${\left\|{
\frac{\pd u}{\pd x_{\ell}}}\right\|}_{\rho-\delta} \leq \frac{1}{\delta}{\|{u}\|}_\rho,
\qquad
{\left\|{\Dif u}\right\|}_{\rho-\delta} \leq \frac{d}{\delta}{\|{u}\|}_\rho,
\qquad
{\left\|{(\Dif u)^\top}\right\|}_{\rho-\delta} \leq \frac{1}{\delta}{\|{u}\|}_\rho.$$ The above definitions and estimates extend naturally to matrix-valued functions, that is, given $M: {{\mathbb T}}^d_\rho \to {{\mathbb C}}^{n_1\times n_2}$, with components in $ \Anal({{\mathbb T}}^d_\rho)$, we have $${\|{\Dif M}\|}_{\rho-\delta}
= \max_{i=1,\ldots,n_1} \sum_{j= 1,\dots, n_2} {\left\|{\Dif M_{i,j}}\right\|}_{\rho-\delta}
\leq \frac{d}{\delta} {\|{M}\|}_{\rho}.$$ A direct consequence is that ${\|{\Dif M^\top}\|}_{\rho-\delta} \leq \frac{d}{\delta} {\|{M^\top}\|}_{\rho}$.
As it was mentioned in Section \[ssec:basic:notation\], the operators $\Dif$ and $(\cdot)^\top$ do not commute. In particular, given a real analytic vector function $w: {{\mathbb T}}^d_\rho \to {{\mathbb C}}^n\simeq {{\mathbb C}}^{n\times 1}$, we have: $${\|{\Dif w}\|}_{\rho-\delta} \leq \frac{d}{\delta} {\|{w}\|}_\rho,\quad
{\|{\Dif w^\top}\|}_{\rho-\delta} \leq \frac{d}{\delta} {\|{w^\top}\|}_\rho \leq \frac{n d}{\delta} {\|{w}\|}_\rho,\quad
{\|{(\Dif w)^\top}\|}_{\rho-\delta} \leq \frac{n}{\delta}{\|{w}\|}_\rho.$$
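When a periodic function is represented by a (finite) Fourier sum, a convenient upper bound for the norm ${\|{u}\|}_\rho$ is the weighted $\ell^1$ norm of the coefficients, ${\|{u}\|}_\rho \leq \sum_{k} |\hat u_k|\, \ee^{2\pi |k|_1 \rho}$, since $|\ee^{2\pi \ii k\cdot\theta}|\leq \ee^{2\pi |k|_1\rho}$ on ${{\mathbb T}}^d_\rho$. The following sketch implements this elementary bound in floating point (our own illustrative helper; a computer-assisted proof would use interval arithmetic instead).

```python
import numpy as np

def sup_norm_bound(u_hat, k_vectors, rho):
    """Upper bound for sup_{theta in T^d_rho} |u(theta)| of the trigonometric
    polynomial u(theta) = sum_j u_hat[j] e^{2 pi i k_j . theta}:

        ||u||_rho <= sum_j |u_hat[j]| * exp(2 pi |k_j|_1 rho).

    u_hat     : complex coefficients, shape (M,).
    k_vectors : integer wave vectors, shape (M, d)."""
    k1 = np.abs(np.asarray(k_vectors)).sum(axis=1)
    return float(np.sum(np.abs(u_hat) * np.exp(2.0 * np.pi * k1 * rho)))
```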
Ordinary KAM theorem {#ssec:theo}
--------------------
At this point, we are ready to state sufficient conditions to guarantee the existence of a $d$-dimensional invariant torus with fixed frequency close to an approximately invariant one. Notice that the hypotheses in Theorem \[theo:KAM\] are tailored to be verified with a finite amount of computations.
The result is written simultaneously for **Case II** and **Case III** (see Section \[ssec:sym:frame\]). Estimates corresponding to **Case I** can be obtained without any particular difficulty (see e.g. [@FiguerasHL17] for details). Given a parameterization of a torus $\torus=K({{\mathbb T}}^d)$ (not necessarily invariant) and a tangent frame $L:{{\mathbb T}}^d \rightarrow {{\mathbb R}}^{2n \times n}$, the normal frame $N:{{\mathbb T}}^d \rightarrow {{\mathbb R}}^{2n \times n}$ is constructed as follows: $$\label{eq:N}
N(\theta):= L(\theta) A(\theta) + N^0(\theta) B(\theta)\,,$$ where $$\begin{aligned}
N^0(\theta)={} & J(K(\theta)) L(\theta) \,, \label{eq:N0} \\
B(\theta)={}& (L(\theta)^\top G(K(\theta)) L(\theta))^{-1}\,, \label{eq:B} \\
A(\theta)={}&
\begin{cases} -\displaystyle\frac{1}{2} (B(\theta)^\top L(\theta)^\top \tilde \Omega(K(\theta)) L(\theta) B(\theta)), &
\text{if \textbf{Case II;}} \\
0, & \text{if \textbf{Case III.}}
\end{cases}
\, \label{eq:A} \end{aligned}$$ The torsion matrix $T:{{\mathbb T}}^d\to{{\mathbb R}}^{n\times n}$, given by $$\label{eq:T}
T(\theta) = N(\theta)^\top \Omega(K(\theta)) {{{\mathcal X}}_{N}}(\theta)\,,$$ where $${{{\mathcal X}}_{N}}(\theta)= \Dif X_\H (K(\theta)) N(\theta) -\Dif N (\theta)[\omega]\,,$$ measures the infinitesimal twist of the normal bundle. With this geometric ingredients we are ready to state our main theorem, in the ordinary case.
\[theo:KAM\] Let us consider an exact symplectic structure $\sform=\dif \aform$ and a Riemannian metric $\gform$ on the open set $\mani\subset{{\mathbb R}}^{2n}$. Let $\H$ be a Hamiltonian function, having $n-d$ first integrals in involution $p=(p_1,\ldots,p_{n-d})$, with $2\leq d \leq n$, and let $c$ be any first integral in involution with $(h,p)$. Let $K:{{\mathbb T}}^d\to \mani$ be a parameterization of an approximately invariant torus with frequency vector $\omega\in {{\mathbb R}}^d$, and consider the tangent frame $L:{{\mathbb T}}^d\to{{\mathbb R}}^{2n\times n}$ given by $$\label{eq:L:theo}
L(\theta)=
\begin{pmatrix} \Dif K(\theta) & X_p(K(\theta)) \end{pmatrix}\,.$$ Then, we assume that the following hypotheses hold.
- The global objects can be analytically extended to the complex domain $\B \subset {{\mathbb C}}^{2n}$, and there are constants that quantify the control of their analytic norms.
For the geometric structures $\sform, \gform, \J, \tilde{\sform}= \J^*\sform$ in $\B$, the matrix functions $\Omega, G, J, \tilde \Omega: \B \rightarrow {{\mathbb C}}^{2n \times 2n}$ satisfy: $$\begin{aligned}
&{\|{\Omega}\|}_{\B} \leq \cteOmega\,,
&& {\|{\Dif \Omega}\|}_{\B} \leq \cteDOmega\,,
&& \\
& {\|{\tilde \Omega}\|}_{\B} \leq \ctetOmega\,,
&& {\|{\Dif \tilde\Omega}\|}_{\B} \leq \cteDtOmega\,,
&& {\|{\Dif^2 \tilde \Omega}\|}_{\B} \leq \cteDDtOmega\,, \\
&{\|{G}\|}_{\B} \leq \cteG\,,
&& {\|{\Dif G}\|}_{\B} \leq \cteDG\,,
&& {\|{\Dif^2 G}\|}_{\B} \leq \cteDDG\,, \\
& {\|{J}\|}_{\B} \leq \cteJ\,,
&& {\|{\Dif J}\|}_{\B} \leq \cteDJ\,,
&& {\|{\Dif^2 J}\|}_{\B} \leq \cteDDJ\,, \\
& {\|{J^{\top}}\|}_{\B} \leq \cteJT\,, && {\|{\Dif J^\top}\|}_{\B} \leq \cteDJT\,.
&& \end{aligned}$$
For the Hamiltonian $\H: \B \rightarrow {{\mathbb C}}$ and its corresponding vector field $X_\H: \B \rightarrow {{\mathbb C}}^{2n}$, we have: $$\begin{aligned}
& {\|{\Dif \H}\|}_{\B} \leq \cteDH\,,
&& \\
& {\|{X_\H}\|}_{\B} \leq \cteXH\,,
&& {\|{\Dif X_\H}\|}_{\B} \leq \cteDXH\,, \\
& {\|{\Dif^2 X_\H}\|}_{\B} \leq \cteDDXH \,,
&& {\|{\Dif X_\H^\top}\|}_{\B} \leq \cteDXHT\,. \end{aligned}$$
For the first integrals $p:\B\to {{\mathbb C}}^{n-d}$ and the corresponding vector fields $X_p: \B \rightarrow {{\mathbb C}}^{2n \times (n-d)}$, we have: $$\begin{aligned}
&{\|{\Dif p}\|}_{\B} \leq \cteDp \,,
&& {\|{\Dif p^\top}\|}_{\B} \leq \cteDpT \,,
&& \\
& {\|{X_p}\|}_{\B} \leq \cteXp\,,
&& {\|{\Dif X_p}\|}_{\B} \leq \cteDXp\,,
&& {\|{\Dif^2 X_p}\|}_{\B} \leq \cteDDXp\,, \\
& {\|{X_p^\top}\|}_{\B} \leq \cteXpT\,,
&& {\|{\Dif X_p^\top}\|}_{\B} \leq \cteDXpT\,,
&& {\|{\Dif^2 X_p^\top}\|}_{\B} \leq \cteDDXpT\,.\end{aligned}$$
For the first integral $c: \B \to {{\mathbb C}}$ we have $${\|{\Dif c}\|}_{\B} \leq \cteDc \,.$$
- The parameterization $K$ is real analytic in a complex strip ${{\mathbb T}}^d_\rho$, with $\rho>0$, which is contained in the global domain: $$\dist (K({{\mathbb T}}^d_\rho),\pd \B)>0.$$ Moreover, the components of $K$ and $\Dif K$ belong to $\Anal({{\mathbb T}}^d_\rho)$, and there are constants $\sigmaDK$ and $\sigmaDKT$ such that $${\|{\Dif K}\|}_{\rho} < \sigmaDK\,, \qquad
{\|{(\Dif K)^\top}\|}_{\rho} < \sigmaDKT\,.$$
- We assume that $L(\theta)$ given by (\[eq:L:theo\]) has maximum rank for every $\theta \in \bar{{\mathbb T}}_\rho^d$. Moreover, there exists a constant $\sigmaB$ such that $${\|{B}\|}_\rho < \sigmaB,$$ where $B(\theta)$ is given by (\[eq:B\]).
- There exists a constant $\sigmaT$ such that $${|{{\langle{T}\rangle}^{-1}}|} < \sigmaT,$$ where $T(\theta)$ is given by (\[eq:T\]).
- The frequency $\omega$ belongs to ${{{\mathcal D}}_{\gamma,\tau}}$, given by (\[eq:def:Dioph\]), for certain $\gamma >0$ and $\tau \geq d-1$.
Under the above hypotheses, for each $0<\rho_\infty<\rho$ there exists a constant $\mathfrak{C}_1$ such that, if the error of invariance $$\label{eq:invE}
E(\theta)=X_\H(K(\theta))-\Dif K(\theta) \omega,$$ satisfies $$\label{eq:KAM:HYP}
\frac{\mathfrak{C}_1 {\|{E}\|}_\rho}{\gamma^4 \rho^{4 \tau}} < 1\,,$$ then there exists an invariant torus $\torus_\infty = K_\infty({{\mathbb T}}^d)$ with frequency $\omega$, satisfying $K_\infty \in \Anal({{\mathbb T}}^{d}_{\rho_\infty})$ and $${\|{\Dif K_\infty}\|}_{\rho_\infty} < \sigmaDK\,,
\qquad
{\|{(\Dif K_\infty)^\top}\|}_{\rho_\infty} < \sigmaDKT\,,
\qquad
\dist(K_\infty({{\mathbb T}}^d_{\rho_\infty}),\pd \B) > 0\,.$$ Furthermore, the objects are close to the original ones: there exist constants $\mathfrak{C}_2$ and $\mathfrak{C}_3$ such that $$\label{eq:close}
{\|{K_\infty - K}\|}_{\rho_\infty} < \frac{\mathfrak{C}_2 {\|{E}\|}_\rho}{\gamma^2 \rho^{2\tau}}\,,
\qquad
{|{{\langle{c \circ K_\infty}\rangle} - {\langle{c \circ K}\rangle}}|} < \frac{\mathfrak{C}_3 {\|{E}\|}_\rho}{\gamma^2 \rho^{2\tau}}\,.$$ The constants $\mathfrak{C}_1$, $\mathfrak{C}_2$ and $\mathfrak{C}_3$ are given explicitly in Appendix \[ssec:consts\].
If $d=n$ then there are no additional first integrals and we recover the classical KAM theorem for Lagrangian tori. The corresponding estimates follow by setting the constants $\cteDp$, $\cteDpT$, $\cteXp$, $\cteXpT$, $\cteDXp$, $\cteDXpT$, $\cteDDXp$, and $\cteDDXpT$ to zero. Thus, as a by-product, we obtain optimal quantitative estimates for the KAM theorem for flows stated in [@GonzalezJLV05].
In the *canonical case*, we have $\Omega=\tilde \Omega=\Omega_0$, $G = I_{2n}$, and $J=\Omega_0$. Hence, we have $\cteOmega=1$, $\cteDOmega=0$, $\ctetOmega=1$, $\cteDtOmega=0$, $\cteDDtOmega=0$, $\cteG=1$, $\cteDG=0$, $\cteDDG=0$, $\cteJ=1$, $\cteDJ=0$, $\cteDDJ=0$, $\cteJT=1$, and $\cteDJT=0$.
Notice that the condition $d\geq 2$ is optimal. For $d=1$, not only does the torus become a periodic orbit (so the result would follow from a standard implicit function theorem without small divisors), but also $X_\H$ is completely integrable.
\[rem:unicity\] The existence of a $d$-dimensional invariant torus with frequency $\omega$ implies the existence of a $(n-d)$-parameter family of invariant tori with frequency $\omega$. The family is locally unique, meaning that if there is an invariant torus with frequency $\omega$ close enough to the family, then it is a member of the family. Notice also that Theorem \[theo:KAM\] states the existence of the parameterization of an invariant torus, but that we can also change the phase to obtain a new parameterization. As mentioned in Section \[ssec:univocal\], both indeterminacies (the phase and the element of the family) could be fixed by adding $n$ extra scalar equations to the invariance equation.
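In applications, hypothesis (\[eq:KAM:HYP\]) is checked by evaluating the error of invariance (\[eq:invE\]) on a Fourier grid and then bounding its analytic norm (for instance through its Fourier coefficients, as in Section \[ssec-anal-prelims\]). The sketch below is a minimal spectral evaluation of $E(\theta)=X_\H(K(\theta))-\Dif K(\theta)\,\omega$ on a regular grid; the interface (a user-supplied vector field `X_H` acting on arrays of points) is our own assumption.

```python
import numpy as np

def invariance_error(K_samples, X_H, omega):
    """E(theta) = X_H(K(theta)) - DK(theta) omega on a regular grid of T^d.

    K_samples : array of shape (N_1, ..., N_d, 2n) with grid samples of K.
    X_H       : callable mapping such an array of points to the vector field
                evaluated at those points (same shape).
    omega     : frequency vector of length d.
    The directional derivative DK(theta) omega is computed spectrally."""
    grid_shape = K_samples.shape[:-1]
    d = len(grid_shape)
    K_hat = np.fft.fftn(K_samples, axes=tuple(range(d)))
    DK_omega = np.zeros_like(K_samples)
    for i, N_i in enumerate(grid_shape):
        k_i = np.fft.fftfreq(N_i, d=1.0 / N_i)        # integer wave numbers
        shape = [1] * (d + 1)
        shape[i] = N_i
        factor = (2j * np.pi * omega[i]) * k_i.reshape(shape)
        DK_omega += np.real(np.fft.ifftn(factor * K_hat, axes=tuple(range(d))))
    return X_H(K_samples) - DK_omega
```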
Generalized iso-energetic KAM theorem {#ssec:theo:iso}
-------------------------------------
Let us consider the setting presented in Section \[ssec:theo\], and let us focus on the first integral $c : \mani \rightarrow {{\mathbb R}}$ that commutes with $\H$ and $p=(p_1,\ldots,p_{n-d})$. It is clear that if $\torus=K({{\mathbb T}}^d)$ is invariant under $X_h$ then $\torus$ is contained in the hypersurface $c(z)=c_0$, for some $c_0\in {{\mathbb R}}$. In this section we are interested in finding an invariant torus by fixing such a hypersurface in advance, that is, our aim is to obtain a parameterization satisfying $$\label{eq:inv:iso}
X_\H(K(\theta))={\mathfrak{L}_{\omega}}K(\theta)\,,
\qquad
{\langle{c \circ K}\rangle} = c_0 \,,$$ where $c_0\in {{\mathbb R}}$ is fixed and we think in $\omega \in {{\mathbb P}}{{\mathbb R}}^d$.
The following result, which is an extension of Theorem \[theo:KAM\] to this generalized iso-energetic context, establishes quantitative sufficient conditions for the existence of a solution of (\[eq:inv:iso\]) close to an approximate one. For this reason, we refer the reader to Section \[ssec:theo\] for a compendium of the objects involved in the result and we do not restate the common hypotheses.
\[theo:KAM:iso\] Let us consider the setting of Theorem \[theo:KAM\], assume that the hypotheses $H_2$ and $H_3$ hold, and replace $H_1$, $H_4$ and $H_5$ by
- Assume that all estimates in $H_1$ hold. In addition, for the first integral $c: \B \to {{\mathbb C}}$ we have $${\|{\Dif^2 c}\|}_{\B} \leq \cteDDc \,.$$
- There exists a constant $\sigmaTc$ such that $${|{{\langle{T_c}\rangle}^{-1}}|} < \sigmaTc\,,$$ where $T_c:{{\mathbb T}}^d_\rho \to
{{\mathbb C}}^{(n+1)\times(n+1)}$ is the extended torsion matrix $$\label{eq:Tc}
T_c(\theta) :=
\begin{pmatrix}
T(\theta) & \homega \\
\Dif c(K(\theta)) N(\theta) & 0
\end{pmatrix},
\qquad
\homega :=
\begin{pmatrix} \omega \\ 0_{n-d} \end{pmatrix} \,.$$
- Let us consider a constant $\sigmao>1$ and a frequency vector $\omega_*$ in the set ${{{\mathcal D}}_{\gamma,\tau}}$, given by (\[eq:def:Dioph\]), for certain $\gamma >0$ and $\tau \geq d-1$. Then, we assume that $\omega \in {{\mathbb R}}^d$ is contained in the ray $$\Theta=\Theta(\omega_*,\sigmao)=\{ s \omega_* \in {{\mathbb R}}^d\,:\, 1<s<\sigmao\} \subset
{{{\mathcal D}}_{\gamma,\tau}} \,.$$ Notice that, by definition, we have $\dist (\omega,\pd \Theta)>0$.
Under the above hypotheses, for each $0<\rho_\infty<\rho$ there exists a constant $\mathfrak{C}_1$ such that, if the total error $$\label{eq:invE:iso}
E_c(\theta)=
\begin{pmatrix}
E(\theta) \\
E^\omega
\end{pmatrix}
=
\begin{pmatrix}
X_\H(K(\theta))-\Dif K(\theta) \omega \\
{\langle{c \circ K}\rangle} - c_0
\end{pmatrix}$$ satisfies $$\label{eq:KAM:HYP:iso}
\frac{\mathfrak{C}_1 {\|{E_c}\|}_\rho}{\gamma^4 \rho^{4 \tau}} < 1\,,$$ then there exists an invariant torus $\torus_\infty = K_\infty({{\mathbb T}}^d)$ with frequency $\omega_\infty \in \Theta$, satisfying $K_\infty \in \Anal({{\mathbb T}}^{d}_{\rho_\infty})$ and $${\|{\Dif K_\infty}\|}_{\rho_\infty} < \sigmaDK\,,
\qquad
{\|{(\Dif K_\infty)^\top}\|}_{\rho_\infty} < \sigmaDKT\,,
\qquad
\dist(K_\infty({{\mathbb T}}^d_{\rho_\infty}),\pd \B) > 0\,.$$ Furthermore, the objects are close to the initial ones: there exist constants $\mathfrak{C}_2$ and $\mathfrak{C}_3$ such that $$\label{eq:close:iso}
{\|{K_\infty - K}\|}_{\rho_\infty} < \frac{\mathfrak{C}_2 {\|{E_c}\|}_\rho}{\gamma^2 \rho^{2\tau}}\,,
\qquad
{|{\omega_\infty - \omega}|} < \frac{\mathfrak{C}_3 {\|{E_c}\|}_\rho}{\gamma^2 \rho^{2\tau}}\,.$$ The constants $\mathfrak{C}_1$, $\mathfrak{C}_2$ and $\mathfrak{C}_3$ are given explicitly in Appendix \[ssec:consts\].
If we consider the case $c(z)=h(z)$ then we recover the classical iso-energetic situation. Notice that, if $\torus$ is invariant with frequency $\omega$, then the frame $P(\theta)$ is symplectic, and $$\begin{split}
\Dif h(K(\theta)) N(\theta) & = -X_h(K(\theta))^\top \Omega(K(\theta)) N(\theta)=
-\omega^\top \Dif K(\theta)^\top \Omega(K(\theta)) N(\theta) \\ & =
\begin{pmatrix} \omega^\top & 0_{n-d}^\top \end{pmatrix}= {\hat \omega}^\top.
\end{split}$$ Hence, the extended torsion matrix for an invariant torus is $$\label{def:T_isoenergetic}
T_h(\theta) :=
\begin{pmatrix}
T(\theta) & \homega \\
\homega^\top & 0
\end{pmatrix}\,.$$
If we consider the case $c(z)=p_j(z)$ then we have an iso-momentum situation. In this case, if $\torus$ is invariant with frequency $\omega$, then $$\Dif p_j(K(\theta)) N(\theta)= -X_{p_j}(K(\theta))^\top \Omega(K(\theta)) N(\theta)
= e_{d+j}^\top\, ,$$ where $e_{d+j}$ is the $(d+j)$-th canonic vector of ${{\mathbb R}}^n$ (it has $1$ in the $(d+j)$-th component and $0$ elsewhere). Hence, the extended torsion matrix for an invariant torus is $$\label{def:T_isomomentum}
T_{p_j}(\theta) :=
\begin{pmatrix}
T(\theta) & \homega \\
e_{d+j}^\top & 0
\end{pmatrix} \,.$$
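For completeness, we illustrate how the extended torsion matrix (\[eq:Tc\]) is assembled at a grid point once $T(\theta)$, $\omega$ and the row $\Dif c(K(\theta))N(\theta)$ are available; in the two particular cases above, the last row reduces to $\homega^\top$ or $e_{d+j}^\top$. The helper below is a minimal sketch with names of our own choosing.

```python
import numpy as np

def extended_torsion(T, omega, Dc_N):
    """Extended torsion matrix T_c(theta) at a fixed theta.

    T     : n x n torsion matrix.
    omega : frequency vector of length d (completed with zeros to hat omega).
    Dc_N  : row vector Dc(K(theta)) N(theta) of length n."""
    n = T.shape[0]
    d = len(omega)
    Tc = np.zeros((n + 1, n + 1))
    Tc[:n, :n] = T
    Tc[:d, n] = omega              # hat omega = (omega, 0_{n-d})
    Tc[n, :n] = np.asarray(Dc_N).reshape(n)
    return Tc
```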
Some lemmas to control approximate geometric properties {#sec:lemmas}
=======================================================
In this section we present estimates that control certain geometric properties of an approximately invariant torus. For the sake of clarity, we reduce the repetition of hypotheses and present a unique setting for the whole section, consisting of the assumptions of the KAM Theorems in Sections \[ssec:theo\] and \[ssec:theo:iso\].
Estimates for cohomological equations
-------------------------------------
Let us first introduce some useful notation regarding the so-called *cohomological equations* that play an important role in KAM theory. Given $\omega \in {{\mathbb R}}^d$ and a periodic function $v$, we consider the cohomological equation $$\label{eq:calL}
{\mathfrak{L}_{\omega}} u = v- {\langle{v}\rangle}\,, \qquad
{\mathfrak{L}_{\omega}} := -\sum_{i=1}^d \omega_i \frac{\pd}{\pd \theta_i}.$$ The notation ${\mathfrak{L}_{}}$ comes from “left-operator”.
Let us assume that $v$ is continuous and $\omega$ is rationally independent (this implies that the flow $t \mapsto \omega t$ is quasi-periodic). If there exists a continuous zero-average solution of equation , then it is unique and will be denoted by $u = {\mathfrak{R}_{\omega}} v$. The notation ${\mathfrak{R}_{}}$ comes from “right-operator”.
Note that the formal solution of equation (\[eq:calL\]) is immediate. Actually, if $v$ has the Fourier expansion $v(\theta)=\sum_{k \in {{\mathbb Z}}^d} \hat v_k \ee^{2\pi
\ii k \cdot \theta }$ and the dynamics is quasi-periodic, then $$\label{eq:small:formal}
{\mathfrak{R}_{\omega}} v(\theta) = \sum_{k \in {{\mathbb Z}}^d \backslash \{0\} } \hat u_k \ee^{2\pi
\ii k \cdot \theta}, \qquad \hat u_k = \frac{-\hat
v_k}{2\pi \ii k \cdot \omega}.$$ In particular, this implies that ${\mathfrak{R}_{\omega}} v =0$ if $v=0$. The solutions of equation (\[eq:calL\]) differ by their average.
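Numerically, the formal solution (\[eq:small:formal\]) is evaluated directly in Fourier space. The following sketch (our own illustrative helper; floating-point divisors replace the rigorous control of the small divisors discussed below) returns the zero-average solution on a regular grid of ${{\mathbb T}}^d$.

```python
import numpy as np

def solve_cohomological(v_samples, omega):
    """Zero-average solution u = R_omega v of  L_omega u = v - <v>  via
    u_hat_k = -v_hat_k / (2 pi i k . omega),  u_hat_0 = 0.

    v_samples : real array of shape (N_1, ..., N_d) sampling v on a grid.
    omega     : rationally independent frequency vector of length d."""
    grid_shape = v_samples.shape
    d = len(grid_shape)
    v_hat = np.fft.fftn(v_samples)
    k_dot_omega = np.zeros(grid_shape)
    for i, N_i in enumerate(grid_shape):
        k_i = np.fft.fftfreq(N_i, d=1.0 / N_i)        # integer wave numbers
        shape = [1] * d
        shape[i] = N_i
        k_dot_omega = k_dot_omega + omega[i] * k_i.reshape(shape)
    u_hat = np.zeros_like(v_hat)
    nonzero = k_dot_omega != 0.0
    u_hat[nonzero] = -v_hat[nonzero] / (2j * np.pi * k_dot_omega[nonzero])
    return np.real(np.fft.ifftn(u_hat))
```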
We point out that quasi-periodicity is not enough to ensure regularity of the solutions of cohomological equations. This is related to the effect of the small divisors $k \cdot \omega$ in equation (\[eq:small:formal\]). To deal with regularity, we require stronger non-resonance conditions on the vector of frequencies. In this paper, we consider the classic Diophantine conditions in $H_5$ and $H_5'$.
\[lem:Russmann\] Let $\omega \in {{{\mathcal D}}_{\gamma,\tau}}$ for some $\gamma>0$ and $\tau \geq d-1$. Then, for any $v \in \Anal({{\mathbb T}}^d_\rho)$, with $\rho>0$, there exists a unique zero-average solution of ${\mathfrak{L}_{\omega}} u = v -{\langle{v}\rangle}$, denoted by $u={\mathfrak{R}_{\omega}}v$. Moreover, for any $0<\delta<\rho$ we have that $u \in \Anal({{\mathbb T}}^d_{\rho-\delta})$ and the estimate $${\|{u}\|}_{\rho-\delta} \leq \frac{c_R}{\gamma \delta^\tau}
{\|{v}\|}_\rho\,,$$ where $c_R$ is a constant that depends on $d$, $\tau$ and possibly on $\delta$.
There is no need to reproduce here this classical result, and we refer the reader to the original references [@Russmann75; @Russmann76a], where a uniform bound (independent of $\delta$) is obtained. We refer to [@FiguerasHL17] for sharp non-uniform computer-assisted estimates (in the discrete case) of the form $c_R=c_R(\delta)$, which represent a substantial advantage when applying the result to particular problems. Adapting these estimates to the continuous case is straightforward. Also, we refer to [@FiguerasHL18] for a numerical quantification of these estimates and for an analysis of the different sources of overestimation.
Approximate conserved quantities {#ssec:app:conserved}
--------------------------------
If $\torus=K({{\mathbb T}}^d)$ is not an invariant torus with frequency $\omega$, it is clear that a first integral in involution $c$ (such as the energy $\H$ or any of the components of $p$) is not necessarily preserved along $z(t) = K(\theta_0 + \omega t)$, since this is not a true trajectory. However, we can “shadow” its evolution in terms of the error of invariance.
\[lem:cons:H:p\] Let us consider the setting of Theorem \[theo:KAM\] or Theorem \[theo:KAM:iso\]. Then, for a conserved quantity $c$ the following estimates hold: $$\begin{aligned}
&
{\|{c \circ K - {\langle{c \circ K}\rangle}}\|}_{\rho-\delta} \leq \frac{c_R \cteDc}{\gamma \delta^\tau}{\|{E}\|}_\rho \,,
&&
\label{eq:c-avgc} \\
&
{\|{{\mathfrak{L}_{\omega}}(\Dif (c \circ K))}\|}_{\rho-\delta} \leq \frac{ d \cteDc}{\delta}{\|{E}\|}_\rho\,,
&&
{\|{{\mathfrak{L}_{\omega}}(\Dif (c \circ K))^\top}\|}_{\rho-\delta} \leq \frac{ \cteDc}{\delta}{\|{E}\|}_\rho\,,
\label{eq:LDc} \\
&
{\|{\Dif(c \circ K) }\|}_{\rho-2\delta} \leq \frac{c_R d \cteDc }{\gamma \delta^{\tau+1}}{\|{E}\|}_\rho\,,
&&
{\|{(\Dif(c \circ K))^\top }\|}_{\rho-2\delta} \leq \frac{c_R \cteDc }{\gamma \delta^{\tau+1}}{\|{E}\|}_\rho\,.
\label{eq:Dc}\end{aligned}$$ In particular, $$\begin{aligned}
&
{\|{p \circ K - {\langle{p \circ K}\rangle}}\|}_{\rho-\delta} \leq \frac{c_R \cteDp}{\gamma \delta^\tau}{\|{E}\|}_\rho\,,
&&
\label{eq:p-avgp} \\
&
{\|{{\mathfrak{L}_{\omega}}(\Dif (p \circ K))}\|}_{\rho-\delta} \leq \frac{ d \cteDp}{\delta}{\|{E}\|}_\rho \,,
&&
{\|{{\mathfrak{L}_{\omega}}(\Dif (p \circ K))^\top}\|}_{\rho-\delta} \leq \frac{ \cteDpT}{\delta}{\|{E}\|}_\rho \,,
\label{eq:LDp} \\
&
{\|{\Dif(p \circ K) }\|}_{\rho-2\delta} \leq \frac{ c_R d \cteDp }{\gamma \delta^{\tau+1}}{\|{E}\|}_\rho\,,
&&
{\|{(\Dif(p \circ K))^\top }\|}_{\rho-2\delta} \leq \frac{c_R \cteDpT }{\gamma \delta^{\tau+1}}{\|{E}\|}_\rho\,.
\label{eq:Dp}\end{aligned}$$
We will prove the result first for the conserved quantity $c$. The same argument yields analogous estimates for each of the first integrals $h$ and $p_i$, with $i= 1,\dots, n-d$. We apply ${\mathfrak{L}_{\omega}}(\cdot)=-\Dif (\cdot) \omega$ to the expression $c\circ K$, thus obtaining $$\label{eq:LcK}
\begin{split}
{\mathfrak{L}_{\omega}}(c(K(\theta))) = {} & \Dif c(K(\theta)) {\mathfrak{L}_{\omega}} K(\theta) \\
= {} &
\Dif c(K(\theta))
\left(
E(\theta)
-X_\H(K(\theta))
\right) \\
= {} & \Dif c(K(\theta)) E(\theta)\,.
\end{split}$$ In the second line we used the expression (\[eq:invE\]) of the error of invariance, and in the third line we used that $c$ is in involution with $\H$. In particular, $${\left\|{{\mathfrak{L}_{\omega}}(c\circ K)}\right\|}_\rho \leq \cteDc {\|{E}\|}_\rho,$$ where we use that ${\|{\Dif c}\|}_\B\leq \cteDc$. Thus, we end up with $$c(K(\theta))-{\langle{c \circ K}\rangle} = {\mathfrak{R}_{\omega}}(\Dif c(K(\theta)) E(\theta))\, ,$$ and the estimate (\[eq:c-avgc\]) follows by applying Lemma \[lem:Russmann\].
In order to prove (\[eq:LDc\]) and (\[eq:Dc\]) we just differentiate with respect to $\theta_\ell$, for $\ell= 1,\dots, d$, both formulae (\[eq:LcK\]) and (\[eq:c-avgc\]) and apply Cauchy estimates. Firstly, $${\left\|{\frac{\partial}{\partial \theta_\ell}{\mathfrak{L}_{\omega}}(c \circ K) }\right\|}_{\rho-\delta} \leq {}
\frac{\cteDc }{\delta}{\|{E}\|}_\rho,$$ so the estimates in (\[eq:LDc\]) follow immediately. Secondly, $${\left\|{\frac{\partial}{\partial \theta_\ell}(c \circ K) }\right\|}_{\rho-2\delta} \leq {}
\frac{c_R \cteDc }{\gamma \delta^{\tau+1}}{\|{E}\|}_\rho,$$ and then the estimates in (\[eq:Dc\]) follow.
In order to prove (\[eq:p-avgp\]), (\[eq:LDp\]) and (\[eq:Dp\]) we just notice that the above computations work for any of the first integrals $p_i$, for $i= 1,\dots, n-d$. Then, we change the occurrences of $c$ by $p_i$ in the formulae, with ${\|{\Dif p_i}\|}_\B\leq c_{p_i,1}$, and use that $$\cteDp= \max_{i= 1,\dots,n-d} c_{p_i,1}, \qquad \cteDpT= \sum_{i= 1,\dots,n-d} c_{p_i,1}$$ to obtain the bounds.
Approximate isotropicity of tangent vectors
-------------------------------------------
In this section we prove that if $\torus$ is approximately invariant, then $K^*\sform$ is small and can be controlled by the error of invariance. We refer the reader to [@FontichLS09; @GonzalezHL13] for similar computations, using generic constants in the estimates.
\[lem:isotrop\] Let us consider the setting of Theorem \[theo:KAM\] or Theorem \[theo:KAM:iso\]. Let us consider $\Omega_K:{{\mathbb T}}^d \to {{\mathbb R}}^{d \times d}$, the matrix representation of the pull-back on ${{\mathbb T}}^d$ of the symplectic form. We have $$\label{eq:OmegaK:aver}
{\langle{\Omega_K}\rangle} = O_d\,,$$ and the following estimate holds: $$\label{eq:estOmegaK}
{\|{\Omega_K}\|}_{\rho-2\delta} \leq \frac{\COmegaK }{\gamma \delta^{\tau+1}}{\|{E}\|}_\rho\,,$$ where the constant $\COmegaK$ is provided in Table \[tab:constants:all\].
Property (\[eq:OmegaK:aver\]) follows directly from the exact symplectic structure, since $K^*\sform={\rm d}( K^{*}\aform)$. In more algebraic terms, we have $$\begin{aligned}
\Omega_K(\theta) = {} & (\Dif K(\theta))^\top
\left(
(\Dif a(K(\theta)))^\top-
\Dif a(K(\theta))
\right)
\Dif K(\theta) \\
= {} & (\Dif (a(K(\theta))))^\top \Dif K(\theta)
- (\Dif K(\theta))^\top \Dif(a(K(\theta)))\,.\end{aligned}$$ and so, the components of $\Omega_K(\theta)$ are $$\begin{aligned}
(\Omega_K)_{i,j} (\theta)
= {} &
\sum_{m=1}^{2n}
\left(
\frac{\partial(a_m(K(\theta))) }{\partial \theta_i}
\frac{\partial K_m(\theta)}{\partial {\theta_j}}
-
\frac{\partial(a_m(K(\theta))) }{\partial {\theta_j}}
\frac{\partial K_m(\theta)}{\partial {\theta_i}}
\right) \\
= {} &
\sum_{m=1}^{2n}
\left(
\frac{\partial}{\partial {\theta_i}}
\left(
a_m(K(\theta))
\frac{\partial(
K_m(\theta))}{\partial \theta_j}
\right)
-
\frac{\partial}{\partial {\theta_j}}
\left(
a_m(K(\theta))
\frac{\partial(K_m(\theta))}{\partial \theta_i}
\right)\right)
\,.\end{aligned}$$ Hence, the components of $\Omega_K$ are sums of derivatives of periodic functions, and we obtain ${\langle{\Omega_K}\rangle}=O_d$.
Now we will use two crucial geometric properties (following Appendix A in [@GonzalezHL13]). Using the fact that $\sform$ is closed, we first obtain the expression $$\frac{\partial \Omega_{r,s}(z)}{\partial z_t}
+\frac{\partial \Omega_{s,t}(z)}{\partial z_r}
+\frac{\partial \Omega_{t,r}(z)}{\partial z_s} = 0,$$ for any triplet $(r,s,t)$. The second property is obtained by taking derivatives at both sides of $\Omega(z) X_\H(z)= (\Dif \H(z))^\top$, obtaining $$\frac{\partial^2\H}{\partial z_i \partial z_j}(z)= \sum_{m=1}^{2n}
\left(\frac{\partial \Omega_{j,m}(z)}{\partial z_i} X_m(z)
+ \Omega_{j,m}(z) \frac{\partial X_m (z)}{\partial z_i} \right),$$ for any $i, j$, where we use the notation $X_m= (X_\H)_m$ for the components of $X_\H$. Hence, $$\begin{split}
0 = {} & \displaystyle \frac{\partial^2\H}{\partial z_j \partial z_i}(z) - \frac{\partial^2\H}{\partial z_i \partial z_j}(z) \\
= {} &
\displaystyle
\sum_{m=1}^{2n}
\left(\frac{\partial \Omega_{i,m}(z)}{\partial z_j} X_m(z)
+ \Omega_{i,m}(z) \frac{\partial X_m (z)}{\partial z_j} \right) \\
& -
\sum_{m=1}^{2n} \left(
\frac{\partial \Omega_{j,m}(z)}{\partial z_i} X_m(z)
+ \Omega_{j,m}(z) \frac{\partial X_m (z)}{\partial z_i} \right) \\
= {} & \displaystyle
\sum_{m=1}^{2n} \left(\frac{\partial \Omega_{i,j}(z)}{\partial z_m} X_m(z)
+ \Omega_{i,m}(z) \frac{\partial X_m (z)}{\partial z_j}
+ \Omega_{m,j}(z) \frac{\partial X_m (z)}{\partial z_i} \right)
\end{split}$$ The above expressions yield the formula $$\label{eq:prop2killSK}
\Dif \Omega(z) [ X_\H(z) ]
+ (\Dif X_\H(z))^\top \Omega(z)
+ \Omega(z) \Dif X_\H(z)=
O_{2n}.$$
Then, we compute the action of ${\mathfrak{L}_{\omega}}$ on $\Omega_K$, thus obtaining $$\label{eq:LieOK}
\begin{split}
{\mathfrak{L}_{\omega}} \Omega_K(\theta) = {} &
{\mathfrak{L}_{\omega}} (\Dif K(\theta))^\top \Omega(K(\theta)) \Dif K(\theta)
+
(\Dif K(\theta))^\top {\mathfrak{L}_{\omega}} (\Omega(K(\theta))) \Dif K(\theta) \\
& + (\Dif K(\theta))^\top \Omega(K(\theta)) {\mathfrak{L}_{\omega}} \Dif K(\theta)\,,
\end{split}$$ and we use the properties (obtained from the invariance equation (\[eq:invE\])) $$\begin{aligned}
& {\mathfrak{L}_{\omega}} (\Dif K(\theta)) = \Dif E(\theta) - \Dif X_\H(K(\theta)) \Dif K(\theta) \,,\\
& {\mathfrak{L}_{\omega}} (\Omega( K(\theta))) = \Dif \Omega(K(\theta))
[{\mathfrak{L}_{\omega}}K(\theta)] = \Dif \Omega(K(\theta))
\left[
E(\theta)
-
X_\H(K(\theta))
\right]\,,\end{aligned}$$ in combination with , thus ending up with $$\begin{aligned}
{\mathfrak{L}_{\omega}} \Omega_K(\theta) = {} &
(\Dif E(\theta))^\top \Omega(K(\theta)) \Dif K(\theta)
+
(\Dif K(\theta))^\top (\Dif \Omega(K(\theta)) [E(\theta)]) \Dif K(\theta) \\
& + (\Dif K(\theta))^\top \Omega(K(\theta)) \Dif E(\theta)\,.\end{aligned}$$ The expression ${\mathfrak{L}_{\omega}} \Omega_K(\theta)$ is controlled using $H_1$, $H_2$, the Banach algebra properties and Cauchy estimates as follows $$\begin{aligned}
{\|{{\mathfrak{L}_{\omega}} \Omega_K}\|}_{\rho-\delta} \leq {} &
{\|{(\Dif E)^\top}\|}_{\rho-\delta} {\|{\Omega}\|}_{\B} {\|{\Dif K}\|}_\rho
+
{\|{(\Dif K)^\top}\|}_{\rho} {\|{\Dif \Omega}\|}_\B {\|{E}\|}_\rho {\|{\Dif K}\|}_\rho \nonumber \\
& + {\|{(\Dif K)^\top}\|}_\rho {\|{\Omega}\|}_\B {\|{\Dif E}\|}_{\rho-\delta} \nonumber \\
\leq {} & \frac{2n \cteOmega \sigmaDK
+ \sigmaDKT \cteDOmega \sigmaDK \delta
+ d \sigmaDKT \cteOmega}{\delta} {\|{E}\|}_\rho
=: \frac{\CLieOmegaK}{\delta} {\|{E}\|}_\rho
\,. \label{eq:CLieOK}\end{aligned}$$ In particular, we used that ${\|{(\Dif \Omega \circ K ) [E]}\|}_\rho \leq {\|{\Dif \Omega }\|}_\B {\|{E}\|}_\rho$ (see Section \[ssec-anal-prelims\]). The estimate in (\[eq:estOmegaK\]) is obtained as follows $$\label{eq:COK}
{\|{\Omega_K}\|}_{\rho-2 \delta} \leq \frac{c_R}{\gamma \delta^\tau} {\|{{\mathfrak{L}_{\omega}}\Omega_K}\|}_{\rho-\delta}
\leq \frac{c_R \CLieOmegaK}{\gamma \delta^{\tau+1}} {\|{E}\|}_\rho =:
\frac{\COmegaK}{\gamma \delta^{\tau+1}} {\|{E}\|}_\rho\,,$$ where we used Lemma \[lem:Russmann\] and the estimate in (\[eq:CLieOK\]).
Notice that, even though the KAM Theorems \[theo:KAM\] and \[theo:KAM:iso\] do not require a quantitative control on the 1-form $\aform$, the facts that the symplectic structure $\sform$ is exact and the vector field $X_\H$ is globally Hamiltonian are crucial to obtain the above result.
Approximate symplectic frame {#ssec:symp}
----------------------------
In this section we prove that if $\torus$ is approximately invariant, then we can construct an adapted frame that is approximately symplectic. The first step is to find an adapted approximately Lagrangian bundle that contains the tangent bundle of the torus.
\[lem:Lang\] Let us consider the setting of Theorem \[theo:KAM\] or Theorem \[theo:KAM:iso\]. Then, the map $L(\theta)$ given by (\[eq:L:theo\]) satisfies $$\label{eq:propL}
{\|{L}\|}_\rho \leq \CL\,,
\qquad
{\|{L^\top}\|}_\rho \leq \CLT\,,
\qquad
{\langle{L^\top (\Omega \circ K) E}\rangle}=0_n\,,$$ and defines an approximately Lagrangian bundle, i.e. the error map $$\label{eq:Elag}
\Elag(\theta):=L(\theta)^\top \Omega(K(\theta)) L(\theta)$$ is small in the following sense: $$\label{eq:normElag}
{\|{\Elag}\|}_{\rho-2\delta} \leq
\frac{\Clag}{\gamma \delta^{\tau+1}} {\|{E}\|}_\rho \,.$$ Furthermore, the objects $$\begin{aligned}
G_L(\theta) := L(\theta)^\top G(K(\theta)) L(\theta) \,, \label{eq:def:GL} \\
\tilde \Omega_L(\theta) := L(\theta)^\top \tilde \Omega(K(\theta)) L(\theta) \,, \label{eq:def:tOmegaL}\end{aligned}$$ are controlled as $$\label{eq:est:ObjL}
{\|{G_L}\|}_\rho \leq \CGL\,, \qquad
{\|{\tilde \Omega_L}\|}_\rho \leq \CtOmegaL\,.$$ The above constants are given explicitly in Table \[tab:constants:all\].
We first obtain the average property in (\[eq:propL\]) by computing $$L(\theta)^\top \Omega(K(\theta)) E(\theta)=
\begin{pmatrix}
(\Dif K(\theta))^\top \Omega(K(\theta)) E(\theta) \\
X_p(K(\theta))^\top \Omega(K(\theta)) E(\theta)
\end{pmatrix}\,.$$ The upper term satisfies $$\begin{aligned}
(\Dif K(\theta))^\top \Omega(K(\theta)) E(\theta)
= {} &
(\Dif K(\theta))^\top \Omega(K(\theta)) X_\H(K(\theta)) - \Omega_K(\theta) \omega \\
= {} &
(\Dif K(\theta))^\top (\Dif \H(K(\theta)))^\top - \Omega_K(\theta) \omega \\
= {} &
(\Dif (\H(K(\theta))))^\top - \Omega_K(\theta) \omega \,,\end{aligned}$$ which has zero average, since the first term is the derivative of a periodic function and $\Omega_K$ has zero average (see Lemma \[lem:isotrop\]). Moreover, the lower term satisfies $$\label{eq:aux:comp}
X_p(K(\theta))^\top \Omega(K(\theta)) E(\theta)
=
-\Dif p(K(\theta)) E(\theta) =
\Dif (p(K(\theta))) \omega \,,$$ which is again the derivative of a periodic function. Hence, it is clear that ${\langle{L^\top (\Omega \circ K) E}\rangle}=0_n$.
We control the norm of the frame $L(\theta)$, using $H_1$ and $H_2$, as follows $$\begin{aligned}
{\|{L}\|}_\rho \leq {} & {\|{\Dif K}\|}_\rho +
{\|{X_p \circ K}\|}_\rho \leq \sigmaDK+\cteXp =: \CL\,, \label{eq:CL}\\
{\|{L^\top}\|}_\rho \leq {} &
\max\{
{\|{(\Dif K)^\top}\|}_\rho \, , \,
{\|{X_p^\top \circ K}\|}_\rho\} \leq \max\{\sigmaDKT \, , \, \cteXpT\} =: \CLT\,. \label{eq:CLT}\end{aligned}$$ We have obtained the estimates in (\[eq:propL\]). Using again the expression of $L(\theta)$ we obtain that the anti-symmetric matrix (\[eq:Elag\]) is written as $$\begin{aligned}
\Elag (\theta) = {} &
\begin{pmatrix}
(\Dif K(\theta))^\top
\Omega(K(\theta)) \Dif K(\theta) &
(\Dif K(\theta))^\top
\Omega(K(\theta)) X_p(K(\theta))
\\
X_p(K(\theta))^\top
\Omega(K(\theta)) \Dif K(\theta) &
X_p(K(\theta))^\top
\Omega(K(\theta)) X_p(K(\theta))
\end{pmatrix} \,.\end{aligned}$$ Using the expression $\Omega_K(\theta)$ in , performing similar computations as in , and using the involution of the first integrals, we end up with $$\Elag (\theta) =
\begin{pmatrix}
\Omega_K(\theta) &
(\Dif (p(K(\theta))))^\top
\\
- \Dif (p(K(\theta))) &
O_{n-d}
\end{pmatrix}\,.$$
Then, we have $$\begin{aligned}
{\|{\Elag}\|}_{\rho-2\delta} = {} & \max\{
{\|{\Omega_K}\|}_{\rho-2\delta}+
{\|{(\Dif(p \circ K))^\top}\|}_{\rho-2\delta} \, , \,
{\|{\Dif(p \circ K)}\|}_{\rho-2\delta} \}\,, \nonumber \\
\leq {} & \frac{c_R \max\{\CLieOmegaK + \cteDpT \, , \, d \cteDp \}}{\gamma \delta^{\tau+1}}
{\|{E}\|}_\rho
=:
\frac{\Clag}{\gamma \delta^{\tau+1}}
{\|{E}\|}_\rho \,, \label{eq:Clag}\end{aligned}$$ where we use Lemmas \[lem:cons:H:p\] and \[lem:isotrop\]. Thus, we have obtained the estimate in (\[eq:normElag\]). Finally, the estimates in (\[eq:est:ObjL\]) with $$\begin{aligned}
{\|{G_L}\|}_\rho \leq {} & \CLT \cteG \CL
=: \CGL \,, \label{eq:CGL} \\
{\|{\tilde \Omega_L}\|}_\rho \leq {} & \CLT \ctetOmega \CL
=: \CtOmegaL \,. \label{eq:CtOmegaL} \end{aligned}$$ follow directly.
In the following lemma, we will see that the geometric constructions detailed in Section \[ssec:sym:frame\] lead, for an approximately invariant torus, to an approximately symplectic frame attached to the torus.
\[lem:sympl\] Let us consider the setting of Theorem \[theo:KAM\] or Theorem \[theo:KAM:iso\]. Then, the map $N : {{\mathbb T}}^d \to {{\mathbb R}}^{2n \times n}$ given by (\[eq:N\]) satisfies $$\label{eq:propN}
{\|{N}\|}_\rho \leq \CN\,,
\qquad
{\|{N^\top}\|}_\rho \leq \CNT\,,$$ and the map $P: {{\mathbb T}}^d \to {{\mathbb R}}^{2n \times 2n}$ given by $$P(\theta)=
\begin{pmatrix}
L(\theta) & N(\theta)
\end{pmatrix}\,,$$ induces an approximately symplectic vector bundle isomorphism, i.e., the error map $$\label{eq:Esym}
\Esym(\theta) := P(\theta)^\top \Omega(K(\theta)) P(\theta)-\Omega_0\,,
\qquad
\Omega_0 =
\begin{pmatrix}
O_n & -I_n \\
I_n & O_n
\end{pmatrix}\,,$$ is small in the following sense: $$\label{eq:normEsym}
{\|{\Esym}\|}_{\rho-2 \delta} \leq \frac{\Csym}{\gamma \delta^{\tau+1}}
{\|{E}\|}_\rho\,.$$ The above constants are given explicitly in Table \[tab:constants:all\].
First we control the norm of $N^0(\theta)$ in (\[eq:N0\]), using $H_1$ and (\[eq:propL\]), as $$\begin{aligned}
{\|{N^0}\|}_\rho \leq {} & {\|{J\circ K}\|}_\rho {\|{L}\|}_\rho
\leq {\|{J}\|}_{\B} {\|{L}\|}_\rho \leq \cteJ \CL =: \CNO \,, \label{eq:CNO} \\
{\|{(N^0)^\top}\|}_\rho \leq {} & {\|{L^\top}\|}_\rho {\|{(J\circ K)^\top}\|}_\rho
\leq {\|{L^\top}\|}_\rho {\|{J^\top}\|}_{\B} \leq \CLT \cteJT =: \CNOT \,, \label{eq:CNOT}\end{aligned}$$ and the norm of $A(\theta)$ in (\[eq:A\]) as $$\label{eq:cA}
{\|{A}\|}_\rho \leq
\frac{1}{2} {\|{B^\top}\|}_\rho {\|{L^\top (\tilde \Omega \circ K)
L}\|}_\rho {\|{B}\|}_\rho
\leq \frac{1}{2} \sigmaB \CtOmegaL \sigmaB =: \CA\,,$$ where we also used $H_3$, the second estimate in (\[eq:est:ObjL\]), and that $B(\theta)$ is symmetric. We also control the complementary normal vectors as $$\begin{aligned}
{\|{N}\|}_\rho \leq {} &
{\|{L}\|}_\rho {\|{A}\|}_\rho +
{\|{N^0}\|}_\rho {\|{B}\|}_\rho
\leq {} \CL \CA + \CNO \sigmaB =: \CN \,, \label{eq:CN} \\
{\|{N^\top}\|}_\rho \leq {} &
{\|{A^\top}\|}_\rho {\|{L^\top}\|}_\rho + {\|{B^\top}\|}_\rho {\|{(N^0)^\top}\|}_\rho
\leq {} \CA \CLT + \sigmaB \CNOT =: \CNT \,, \label{eq:CNT}\end{aligned}$$ where we used $H_1$, the estimates (\[eq:propL\]), (\[eq:CNO\]), (\[eq:CNOT\]) and (\[eq:cA\]), and the fact that $A(\theta)$ is anti-symmetric. Thus, we have obtained the estimates in (\[eq:propN\]).
To characterize the error in the symplectic character of the frame, we compute $$\label{eq:neqEsym}
\Esym(\theta)
=
\begin{pmatrix}
L(\theta)^\top
\Omega(K(\theta)) L(\theta) &
L(\theta)^\top
\Omega(K(\theta)) N(\theta) + I_n
\\
N(\theta)^\top
\Omega(K(\theta)) L(\theta) - I_n &
N(\theta)^\top
\Omega(K(\theta)) N(\theta)
\end{pmatrix} \,,$$ and we expand the components of this block matrix using , , and . For example, we have $$\begin{aligned}
L(\theta)^\top \Omega(K(\theta)) & N(\theta) \nonumber \\
= {} &
L(\theta)^\top \Omega(K(\theta)) L(\theta) A(\theta)+
L(\theta)^\top \Omega(K(\theta)) N^0(\theta) B(\theta) \nonumber \\
= {} & \Elag(\theta) A(\theta) - L(\theta)^\top G(K(\theta)) L(\theta)B(\theta)
\nonumber \\
= {} & \Elag(\theta) A(\theta) - I_n\,, \label{eq:LON}\end{aligned}$$ where we used that $J^\top \Omega=G$ and the definition of $B(\theta)$. We also have $$\begin{aligned}
N(\theta)^\top & \Omega(K(\theta)) N(\theta) \nonumber \\
= {} &
B(\theta)^\top N^0(\theta)^\top \Omega(K(\theta)) L(\theta)
A(\theta)
+ B(\theta)^\top N^0(\theta)^\top \Omega(K(\theta))
N^0(\theta) B(\theta) \nonumber \\
& + A(\theta)^\top L(\theta)^\top \Omega(K(\theta)) L(\theta) A(\theta)
+A(\theta)^\top L(\theta)^\top \Omega(K(\theta)) N^0(\theta)
B(\theta) \nonumber \\
= {} & A(\theta)^\top \Elag(\theta) A(\theta)+A(\theta)-A(\theta)^\top
+ B(\theta)^\top L(\theta)^\top \tilde \Omega(K(\theta))
L(\theta) B(\theta) \nonumber \\
= {} & A(\theta)^\top \Elag(\theta) A(\theta) \,. \label{eq:NON}\end{aligned}$$
Then, introducing the expressions (\[eq:LON\]) and (\[eq:NON\]) into (\[eq:neqEsym\]), we get $$\Esym(\theta)
=
\begin{pmatrix}
\Elag(\theta) & \Elag(\theta) A(\theta) \\
A(\theta)^\top \Elag(\theta) &
A(\theta)^\top \Elag(\theta) A(\theta)
\end{pmatrix}\,,$$ which is controlled as $$\label{eq:Csym}
{\|{\Esym}\|}_{\rho-2\delta} \leq \frac{(1+\CA) \max\{1\, , \, \CA\} \Clag}{\gamma \delta^{\tau+1}}
{\|{E}\|}_\rho =: \frac{\Csym}{\gamma \delta^{\tau+1}}
{\|{E}\|}_\rho \,,$$ thus obtaining the estimate (\[eq:normEsym\]).
The above estimates can be readily adapted to **Case III**, for which $A=0$. In this case we have (computations are left as an exercise to the reader) $$N(\theta) = N^0(\theta) B(\theta)
\,,
\qquad
\Esym(\theta)
=
\begin{pmatrix}
\Elag(\theta) & O_n \\
O_n &
B(\theta)^\top \Elag(\theta) B(\theta)
\end{pmatrix}\,.$$ The corresponding estimates are given explicitly in Table \[tab:constants:all\].
Sharp control of the torsion matrix
-----------------------------------
In this section we will control the torsion matrix $T(\theta)$, given in (\[eq:T\]). To do so, we could directly use Cauchy estimates to control ${\mathfrak{L}_{\omega}} N(\theta)$, resulting in an additional bite of the domain and an additional factor $\delta$ in the denominator (among other overestimations). Hence, in order to improve this estimate, thus enhancing the threshold of validity of the result, we perform a finer analysis of the expression for $T(\theta)$. To this end, it is convenient to include here an additional smallness condition on the error of invariance (see below), which later on turns out to be rather irrelevant.
\[lem:twist\] Let us consider the setting of Theorem \[theo:KAM\] or Theorem \[theo:KAM:iso\], and let us assume that $$\label{eq:fake:cond}
\frac{{\|{E}\|}_\rho}{\delta}<\cauxT\,,$$ where $\cauxT$ is an independent constant. Then, the torsion matrix $T(\theta)$, given by , has components in $\Anal({{\mathbb T}}^d_{\rho-\delta})$ and satisfies the estimate $${\|{T}\|}_{\rho-\delta} \leq \CT\,,$$ where the constant $\CT$ is provided in Table \[tab:constants:all\].
Recalling that ${\mathfrak{L}_{\omega}}(\cdot) = - \Dif (\cdot) \omega$ and the definition of ${{{\mathcal X}}_{N}}(\theta)$, we have $$\label{eq:line1}
{{{\mathcal X}}_{N}}(\theta) =
\Dif X_\H (K(\theta)) N(\theta) + {\mathfrak{L}_{\omega}}N (\theta)\,,$$ where $$\label{eq:LieN:expanded}
{\mathfrak{L}_{\omega}}N(\theta) =
{\mathfrak{L}_{\omega}} L(\theta) A(\theta)
+ L(\theta) {\mathfrak{L}_{\omega}} A(\theta)
+ {\mathfrak{L}_{\omega}} N^0(\theta) B(\theta)
+ N^0(\theta) {\mathfrak{L}_{\omega}}B(\theta) \,.$$ Then, we must estimate the terms ${\mathfrak{L}_{\omega}}L(\theta)$, ${\mathfrak{L}_{\omega}} A(\theta)$, ${\mathfrak{L}_{\omega}} N^0(\theta)$, and ${\mathfrak{L}_{\omega}}B(\theta)$, that appear above.
We start by considering $$\label{eq:LieK}
{\mathfrak{L}_{\omega}}K(\theta)=E(\theta) - X_{\H}(K(\theta))$$ which is controlled as $$\label{eq:CLieK}
{\|{{\mathfrak{L}_{\omega}}K}\|}_{\rho} \leq {\|{E}\|}_\rho+ {\|{X_\H \circ K}\|}_\rho \leq \delta \cauxT + \cteXH =: \CLieK\,,$$ where we used $H_1$ and the assumption (\[eq:fake:cond\]). Then we consider the object ${\mathfrak{L}_{\omega}}L(\theta)$, given by $$\begin{aligned}
{\mathfrak{L}_{\omega}} L(\theta) = {} &
\begin{pmatrix}
{\mathfrak{L}_{\omega}} (\Dif K(\theta)) & {\mathfrak{L}_{\omega}} (X_p(K(\theta)))
\end{pmatrix} \,.\end{aligned}$$ Notice that the left block in the above expression follows by taking derivatives at both sides of (\[eq:LieK\]), i.e. $${\mathfrak{L}_{\omega}} (\Dif K(\theta)) = \Dif E(\theta) -
\Dif X_\H(K(\theta)) [\Dif K(\theta)]\,,$$ and the right block follows from $${\mathfrak{L}_{\omega}}(X_p(K(\theta)))
=\Dif X_p(K(\theta)) [{\mathfrak{L}_{\omega}}K(\theta)] \,.$$ Then, using Cauchy estimates, Hypotheses $H_1$ and $H_2$, the assumption (\[eq:fake:cond\]) and the estimate (\[eq:CLieK\]), we have $$\label{eq:CLieL}
{\|{{\mathfrak{L}_{\omega}}L}\|}_{\rho-\delta} \leq d \cauxT + \cteDXH \sigmaDK + \cteDXp \CLieK
=: \CLieL \,.$$ Similarly, we obtain the estimates $$\label{eq:CLieLT}
{\|{{\mathfrak{L}_{\omega}}L^\top}\|}_{\rho-\delta} \leq
\max \left\{2n \cauxT + \cteDXHT \sigmaDK \, , \, \cteDXpT \CLieK \right\}
=: \CLieLT \,.$$
The term ${\mathfrak{L}_{\omega}}N^0(\theta)$ is controlled using that $$\label{eq:LieNO}
{\mathfrak{L}_{\omega}}N^0(\theta) = {\mathfrak{L}_{\omega}} (J(K(\theta))) L(\theta)+J(K(\theta)) {\mathfrak{L}_{\omega}}L(\theta)$$ and the chain rule. Actually, we have $$\begin{aligned}
{\|{{\mathfrak{L}_{\omega}}(J\circ K)}\|}_\rho \leq {} &
\cteDJ \CLieK =: \CLieJ \,, \label{eq:CLieJ} \\
{\|{{\mathfrak{L}_{\omega}}(G\circ K)}\|}_\rho \leq {} &
\cteDG \CLieK =: \CLieG \,, \label{eq:CLieG} \\
{\|{{\mathfrak{L}_{\omega}}(\tilde \Omega \circ K)}\|}_\rho \leq {} &
\cteDtOmega \CLieK =: \CLietOmega \,, \label{eq:CLietOmega}\end{aligned}$$ and then, using , the expression is controlled as $$\label{eq:CLieNO}
{\|{{\mathfrak{L}_{\omega}}N^0}\|}_{\rho-\delta} \leq \CLieJ \CL + \cteJ \CLieL =: \CLieNO\,.$$
Before controlling ${\mathfrak{L}_{\omega}}B(\theta)$ and ${\mathfrak{L}_{\omega}}A(\theta)$, we recall the notation for $G_L(\theta)$ and $\tilde \Omega_L(\theta)$, given by (\[eq:def:GL\]) and (\[eq:def:tOmegaL\]) respectively, and we control the action of ${\mathfrak{L}_{\omega}}$ on these objects. For example, we have $$\begin{aligned}
{\mathfrak{L}_{\omega}} G_L(\theta) = {} & {\mathfrak{L}_{\omega}}L(\theta)^\top G(K(\theta)) L(\theta) \\
& + L(\theta)^\top {\mathfrak{L}_{\omega}}G(K(\theta)) L(\theta)
+ L(\theta)^\top G(K(\theta)) {\mathfrak{L}_{\omega}}L(\theta)\end{aligned}$$ and, using (\[eq:CLieG\]), we obtain the estimate $$\label{eq:CLieGL}
{\|{{\mathfrak{L}_{\omega}} G_L}\|}_{\rho-\delta} \leq
\CLieLT \cteG \CL + \CLT \CLieG \CL + \CLT \cteG \CLieL =:
\CLieGL \,.$$ Analogously, using , we obtain the estimate $$\label{eq:CLietOmegaL}
{\|{{\mathfrak{L}_{\omega}} \tilde \Omega_L}\|}_{\rho-\delta} \leq
\CLieLT \ctetOmega \CL + \CLT \CLietOmega \CL + \CLT \ctetOmega \CLieL =:
\CLietOmegaL\,.$$
Now we obtain a suitable expression for ${\mathfrak{L}_{\omega}}B(\theta)$. To this end, we compute $$O_n = {\mathfrak{L}_{\omega}}(B(\theta)^{-1} B(\theta)) = {\mathfrak{L}_{\omega}}(B(\theta)^{-1}) B(\theta) + B(\theta)^{-1} {\mathfrak{L}_{\omega}}(B(\theta))\,,$$ which, recalling (\[eq:B\]), yields the following expression $${\mathfrak{L}_{\omega}}B(\theta)
= -B(\theta) {\mathfrak{L}_{\omega}}(L(\theta)^\top G(K(\theta)) L(\theta)) B(\theta)
= -B(\theta) {\mathfrak{L}_{\omega}} G_L(\theta) B(\theta) \,.$$ Then, using (\[eq:CLieGL\]), we get the estimate $$\label{eq:CLieB}
{\|{{\mathfrak{L}_{\omega}} B}\|}_{\rho-\delta} \leq (\sigmaB)^2 \CLieGL = :\CLieB \,.$$ Since $B(\theta)$ is symmetric, we also have ${\|{{\mathfrak{L}_{\omega}} B^\top}\|}_{\rho-\delta} \leq \CLieB$.
Finally, using the above notation, we expand ${\mathfrak{L}_{\omega}}A(\theta)$ as $$\begin{aligned}
{\mathfrak{L}_{\omega}}A(\theta)
= {} & -\frac12{\mathfrak{L}_{\omega}} B(\theta)^\top \tilde \Omega_L(\theta) B(\theta) -\frac12 B(\theta)^\top{\mathfrak{L}_{\omega}} \tilde \Omega_L(\theta) B(\theta) \\
& -\frac12 B(\theta)^\top \tilde \Omega_L(\theta) {\mathfrak{L}_{\omega}}B(\theta) \,,\end{aligned}$$ which, using the constants in (\[eq:CLietOmegaL\]) and (\[eq:CLieB\]), yields the following estimate $$\label{eq:CLieA}
{\|{{\mathfrak{L}_{\omega}} A}\|}_{\rho-\delta} \leq
\CLieB \CtOmegaL \sigmaB + \frac{1}{2} (\sigmaB)^2 \CLietOmegaL =:
\CLieA\,.$$
With the above objects, we can control (\[eq:LieN:expanded\]) as follows $$\label{eq:CLieN}
{\|{{\mathfrak{L}_{\omega}} N}\|}_{\rho-\delta} \leq \CLieL \CA + \CL \CLieA +
\CLieNO \sigmaB + \CNO \CLieB =: \CLieN\,.$$ This estimate will be used later in the proof of Lemma \[lem:KAM:inter:integral\]. Now, we could use (\[eq:CLieN\]) in equation (\[eq:line1\]) to obtain $${\|{{{{\mathcal X}}_{N}}}\|}_{\rho-\delta} \leq \cteDXH \CN + \CLieN\,.$$ However, we obtain a sharper estimate by observing that $$\begin{aligned}
{{{\mathcal X}}_{N}}(\theta) = {} &
{{{\mathcal X}}_{L}}(\theta) A(\theta)
+ \Dif X_\H (K(\theta)) N^0(\theta) B(\theta) \\
&
+ L(\theta) {\mathfrak{L}_{\omega}}A(\theta)
+ {\mathfrak{L}_{\omega}} N^0(\theta) B(\theta) + N^0(\theta) {\mathfrak{L}_{\omega}}B(\theta)\,, \end{aligned}$$ where $$\begin{aligned}
{{{\mathcal X}}_{L}}(\theta) = {} &
\Dif X_\H (K(\theta)) L(\theta) + {\mathfrak{L}_{\omega}}L (\theta) \\
= {} &
\begin{pmatrix}
\Dif E(\theta) & \Dif X_p(K(\theta)) [E(\theta)]
\end{pmatrix} \,.\end{aligned}$$ This last expression follows using the previous formula for ${\mathfrak{L}_{\omega}}L(\theta)$ and the fact that the vector field $X_{\H}$ commutes with the fields $X_{p_i}$ for $1\leq i \leq n-d$.
Then, the objects ${{{\mathcal X}}_{L}}(\theta)$ and ${{{\mathcal X}}_{L}}(\theta)^\top$ are controlled as follows: $$\begin{aligned}
{\|{{{{\mathcal X}}_{L}}}\|}_{\rho-\delta} \leq {} & \frac{d + \cteDXp \delta}{\delta} {\|{E}\|}_\rho =: \frac{\CLoperL}{\delta} {\|{E}\|}_\rho \,, \label{eq:CLoperL} \\
{\|{{{{\mathcal X}}_{L}}^\top}\|}_{\rho-\delta} \leq {} & \frac{\max\{2n\, , \, \cteDXpT \delta\}}{\delta} {\|{E}\|}_\rho =: \frac{\CLoperLT}{\delta} {\|{E}\|}_\rho \,, \label{eq:CLoperLT}\end{aligned}$$ and, using again the smallness condition (\[eq:fake:cond\]), we obtain $$\begin{aligned}
{\|{{{{\mathcal X}}_{N}}}\|}_{\rho-\delta} \leq {} &
\CLoperL \cauxT \CA+\cteDXH \CNO \sigmaB + \CL \CLieA +
\CLieNO \sigmaB + \CNO \CLieB \nonumber \\
=: {} & \CLoperN \,. \label{eq:CLoperN} \end{aligned}$$
Finally, the torsion matrix satisfies $$\label{eq:CT}
{\|{T}\|}_{\rho-\delta} \leq \CNT \cteOmega \CLoperN =: \CT \,,$$ which completes the proof.
The bound $\CT$ of Lemma \[lem:twist\] could be improved for the particular problem at hand, since the expression for $T(\theta)$ can in many cases be obtained explicitly and may have cancellations.
Approximate reducibility
------------------------
A crucial step in the proofs of the KAM theorems is the resolution of the linearized equation arising from the application of the Newton method. The resolution is based on the (approximate) reduction of such a linear system to a simpler, block-triangular, form. This is the content of the following lemma.
\[lem:reduc\] Let us consider the setting of Theorem \[theo:KAM\] or Theorem \[theo:KAM:iso\]. Then, the map $P:{{\mathbb T}}^d \to {{\mathbb R}}^{2n \times 2n}$, characterized in Lemma \[lem:sympl\], approximately reduces the linearized equation associated with the vector field $\Dif X_\H \circ K$ to a block-triangular matrix, i.e. the error map $$\label{eq:Ered}
\Ered (\theta) :=
-\Omega_0 P(\theta)^\top \Omega(K(\theta))
\left(
\Dif X_\H(K(\theta)) P(\theta)
+{\mathfrak{L}_{\omega}}P(\theta)
\right) - \Lambda(\theta)\,,$$ with $$\label{eq:Lambda}
\Lambda(\theta)
= \begin{pmatrix}
O_n & T(\theta) \\
O_n & O_n
\end{pmatrix}$$ and $T(\theta)$ is given by , is small in the following sense: $${\|{\Ered}\|}_{\rho-2\delta} \leq \frac{\Cred}{\gamma \delta^{\tau+1}}
{\|{E}\|}_\rho\,,$$ where the constant $\Cred$ is provided in Table \[tab:constants:all\].
Using the above notation, we write the block components of (\[eq:Ered\]), denoted as $\Ered^{i,j}(\theta)$, as follows: $$\begin{aligned}
\Ered^{1,1}(\theta) = {} &
N(\theta)^\top \Omega(K(\theta)) {{{\mathcal X}}_{L}}(\theta) \,, \label{eq:Ered11} \\
\Ered^{1,2}(\theta) = {} &
N(\theta)^\top \Omega(K(\theta)) {{{\mathcal X}}_{N}}(\theta) - T(\theta) = O_n \,,
\label{eq:Ered12} \\
\Ered^{2,1}(\theta) = {} &
-L(\theta)^\top \Omega(K(\theta)) {{{\mathcal X}}_{L}}(\theta) \,,
\label{eq:Ered21} \\
\Ered^{2,2}(\theta) = {} &
-L(\theta)^\top \Omega(K(\theta)) {{{\mathcal X}}_{N}}(\theta) \,.
\nonumber\end{aligned}$$ Notice that we have used the definition of $T(\theta)$ to see that (\[eq:Ered12\]) vanishes. To obtain a suitable expression for $\Ered^{2,2}(\theta)$, we apply ${\mathfrak{L}_{\omega}}$ at both sides of the expression obtained in (\[eq:LON\]): $${\mathfrak{L}_{\omega}}(L(\theta)^\top \Omega(K(\theta))) N(\theta)+
L(\theta)^\top \Omega(K(\theta)) {\mathfrak{L}_{\omega}}N(\theta) =
{\mathfrak{L}_{\omega}} (\Elag(\theta) A(\theta))\,.$$ Then, introducing this expression into $\Ered^{2,2}(\theta)$, using (\[eq:line1\]) and the geometric property (\[eq:prop2killSK\]), we obtain $$\begin{aligned}
\Ered^{2,2}(\theta) = {} & -L(\theta)^\top \Omega(K(\theta)) \Dif X_\H(K(\theta))N(\theta)
+ {\mathfrak{L}_{\omega}}(L(\theta)^\top \Omega(K(\theta))) N(\theta) \nonumber \\
& - {\mathfrak{L}_{\omega}} (\Elag(\theta) A(\theta)) \nonumber \\
= {} &
L(\theta)^\top (\Dif \Omega(K(\theta)) [E(\theta)]) N(\theta)
+{{{\mathcal X}}_{L}}(\theta)^\top \Omega(K(\theta))N(\theta)
- {\mathfrak{L}_{\omega}} (\Elag(\theta) A(\theta)) \,.
\label{eq:Ered22}\end{aligned}$$
At this point, we could use Cauchy estimates in the expression $${\mathfrak{L}_{\omega}} (\Elag(\theta) A(\theta)) = - \Dif (\Elag(\theta) A(\theta)) [\omega]$$ and obtain an estimate controlled by ${\|{E}\|}_\rho$. However, this would give a control of the form ${\|{\Ered}\|}_{\rho-3\delta}$, and we are interested in keeping the strip of analyticity $\rho-2\delta$. For this reason, we compute $$\label{eq:LieElagA}
{\mathfrak{L}_{\omega}} (\Elag(\theta) A(\theta)) =
{\mathfrak{L}_{\omega}} \Elag(\theta) \ A(\theta)
+
\Elag(\theta) \ {\mathfrak{L}_{\omega}} A(\theta)\,,$$ and consider the block components of $$\label{eq:LieElag}
{\mathfrak{L}_{\omega}} \Elag (\theta)
=
\begin{pmatrix}
{\mathfrak{L}_{\omega}}\Omega_K(\theta) &
{\mathfrak{L}_{\omega}}(\Dif (p(K(\theta))))^\top
\\
-{\mathfrak{L}_{\omega}}(\Dif (p(K(\theta)))) &
O_{n-d}
\end{pmatrix}\,.$$
Then, we control as follows $$\begin{aligned}
{\|{{\mathfrak{L}_{\omega}} \Elag}\|}_{\rho-\delta} \leq {} &
\max\{
{\|{{\mathfrak{L}_{\omega}} \Omega_K}\|}_{\rho-\delta}+
{\|{{\mathfrak{L}_{\omega}}(\Dif (p \circ K))^\top}\|}_{\rho-\delta}
\, , \,
{\|{{\mathfrak{L}_{\omega}}(\Dif (p \circ K))}\|}_{\rho-\delta}
\} \nonumber \\
\leq {} &
\frac{
\max \{
\CLieOmegaK + \cteDpT \, , \, d \cteDp
\}
}{\delta}
{\|{E}\|}_\rho =: \frac{\CLieOmegaL}{\delta}
{\|{E}\|}_\rho
\label{eq:CLieOmegaL}\end{aligned}$$ where we used estimates and from Lemma \[lem:cons:H:p\].
Finally, we estimate the norms of the block components of $\Ered$ using the expressions , , and , and the previous estimates: $$\begin{aligned}
{\|{\Ered^{1,1}}\|}_{\rho-2\delta} \leq {} &
{\|{N^\top}\|}_\rho {\|{\Omega}\|}_\B {\|{{{{\mathcal X}}_{L}}}\|}_{\rho-\delta}
\leq \frac{\CNT \cteOmega \CLoperL}{\delta} {\|{E}\|}_\rho =:
\frac{\Creduu}{\delta}{\|{E}\|}_\rho
\,, \label{eq:Creduu} \\
{\|{\Ered^{1,2}}\|}_{\rho-2\delta} = {} & 0 \,, \nonumber \\
{\|{\Ered^{2,1}}\|}_{\rho-2\delta} \leq {} &
{\|{L^\top}\|}_\rho {\|{\Omega}\|}_\B {\|{{{{\mathcal X}}_{L}}}\|}_{\rho-\delta}
\leq \frac{\CLT \cteOmega \CLoperL}{\delta} {\|{E}\|}_\rho
=:
\frac{\Creddu}{\delta}{\|{E}\|}_\rho
\,, \label{eq:Creddu} \\
{\|{\Ered^{2,2}}\|}_{\rho-2\delta} \leq {} &
\left( \CLT \cteDOmega \CN + \frac{\CLoperLT \cteOmega \CN }{\delta}
+ \frac{\CLieOmegaL \CA}{\delta} +\frac{\Clag \CLieA}{\gamma \delta^{\tau+1}} \right){\|{E}\|}_\rho \nonumber \\
=: {} & \frac{\Creddd}{\gamma \delta^{\tau+1}} {\|{E}\|}_\rho
\,.
\label{eq:Creddd}\end{aligned}$$ Then, we end up with $$\label{eq:CEred}
{\|{\Ered}\|}_{\rho-2\delta} \leq
\frac{\max \{
\Creduu \gamma \delta^\tau
\, , \,
\Creddu \gamma \delta^\tau + \Creddd \} }{\gamma \delta^{\tau+1}}
{\|{E}\|}_\rho
=: \frac{\Cred}{\gamma \delta^{\tau+1}} {\|{E}\|}_\rho\,,$$ thus completing the proof.
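For the reader interested in a numerical implementation, the error map $\Ered(\theta)$ of Lemma \[lem:reduc\] can be evaluated directly on a grid of ${{\mathbb T}}^d$, which provides a convenient diagnostic of the approximate reducibility. The following Python sketch is purely illustrative: the array layout, the choice of $\Omega_0$ with blocks $-I_n$ (upper right) and $I_n$ (lower left), and the assumption that ${\mathfrak{L}_{\omega}}P$ has been precomputed are conventions of the illustration and must be matched with those fixed in Section \[sec:lemmas\].

```python
import numpy as np

def reducibility_error(P, LP, Omega_K, DXH_K, T):
    """Evaluate E_red = -Omega_0 P^T Omega(K) (DX_H(K) P + L_omega P) - Lambda on a grid.

    P, LP, Omega_K, DXH_K : arrays of shape (2n, 2n) + grid_shape
    T                     : array  of shape (n, n)   + grid_shape
    Lambda is the block-triangular matrix with T in its upper-right block.
    """
    n2 = P.shape[0]
    n = n2 // 2
    Omega0 = np.zeros((n2, n2))
    Omega0[:n, n:] = -np.eye(n)   # convention to be matched with the paper's Omega_0
    Omega0[n:, :n] = np.eye(n)
    # DX_H(K(theta)) P(theta) + L_omega P(theta)
    M = np.einsum("ab...,bc...->ac...", DXH_K, P) + LP
    # -Omega_0 P^T Omega(K) M, evaluated pointwise on the grid
    red = -np.einsum("ab,cb...,cd...,de...->ae...", Omega0, P, Omega_K, M)
    Lam = np.zeros_like(red)
    Lam[:n, n:] = T
    return red - Lam
```

The sup-norm of the output (on a suitable complex strip) is then expected to be of the order of ${\|{E}\|}_\rho$, in agreement with the estimate of the lemma.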
Proof of the ordinary KAM theorem {#sec:proof:KAM}
=================================
In this section we present a fully detailed proof of Theorem \[theo:KAM\]. For convenience, we will start by outlining the scheme used to correct the parameterization of the torus. That is, in Section \[ssec:qNewton\] we discuss the approximate solution of linearized equations in the symplectic frame constructed in Section \[ssec:symp\]. This establishes a quasi-Newton method to obtain a solution of the invariance equation. In Section \[ssec:iter:lemmas\] we produce quantitative estimates for the objects obtained when performing one iteration of the previous procedure. Finally, in Section \[ssec:proof:KAM\] we discuss the convergence of the quasi-Newton method.
The quasi-Newton method {#ssec:qNewton}
-----------------------
As is usual in the a-posteriori approach to KAM theory, the argument consists in refining $K(\theta)$ by means of a quasi-Newton method. Let us consider the invariance error $$E(\theta) = X_\H(K(\theta)) + {\mathfrak{L}_{\omega}}K(\theta)\,.$$ Then, we obtain the new parameterization $\bar K(\theta)= K(\theta)+\DeltaK(\theta)$ by considering the linearized equation $$\label{eq:lin1}
\Dif X_\H (K(\theta)) \DeltaK(\theta) + {\mathfrak{L}_{\omega}}\DeltaK(\theta)
= - E(\theta) \,.$$ If we obtain a good enough approximation of the solution $\DeltaK(\theta)$ of , then $\bar K(\theta)$ provides a parameterization of an approximately invariant torus of frequency $\omega$, with a quadratic error in terms of $E(\theta)$.
To face the linearized equation , we resort to the approximately symplectic frame $P(\theta)$, defined on the full tangent space, which has been characterized in Section \[sec:lemmas\] (see Lemma \[lem:sympl\]). In particular, we introduce the linear change $$\label{eq:choice:DK}
\DeltaK (\theta) = P(\theta) \xi(\theta) \,,$$ where $\xi(\theta)$ is the new unknown. Taking into account this expression, the linearized equation becomes $$\label{eq:lin:1E}
\left( \Dif X_\H (K(\theta)) P(\theta) + {\mathfrak{L}_{\omega}}P(\theta) \right) \xi(\theta)
+P(\theta) {\mathfrak{L}_{\omega}} \xi(\theta)
= - E(\theta) \,.$$ We now multiply both sides of by $-\Omega_0 P(\theta)^\top
\Omega(K(\theta))$, and we use the geometric properties in Lemma \[lem:sympl\] and Lemma \[lem:reduc\], thus obtaining the equivalent equations: $$\label{eq:lin:1Enew}
\begin{split}
\left(\Lambda(\theta)+\Ered(\theta) \right) \xi(\theta)
+ {} & \left( I_{2n} - \Omega_0 \Esym(\theta) \right) {\mathfrak{L}_{\omega}} \xi(\theta) \\
= {} &
\Omega_0 P(\theta)^\top \Omega(K(\theta)) E(\theta)
\,,
\end{split}$$ where $\Lambda(\theta)$ is the triangular matrix-valued map given in .
Then, it turns out that the solutions of are approximated by the solutions of a triangular system, which requires solving two cohomological equations of the form consecutively. Quantitative estimates for the solutions of such equations are obtained by applying Rüssmann estimates. This is summarized in the following standard statement.
\[lem:upperT\] Let $\omega \in {{{\mathcal D}}_{\gamma,\tau}}$ and let us consider a map $\eta= (\eta^L,\eta^N) : {{\mathbb T}}^d \to
{{\mathbb R}}^{2n} \simeq {{\mathbb R}}^n\times{{\mathbb R}}^n$, with components in $\Anal({{\mathbb T}}^d_\rho)$, and a map $T : {{\mathbb T}}^d \rightarrow {{\mathbb R}}^{n\times n}$, with components in $\Anal({{\mathbb T}}^d_{\rho-\delta})$. Assume that $T$ satisfies the non-degeneracy condition $\det
{\langle{T}\rangle} \neq 0$ and $\eta$ satisfies the compatibility condition ${\langle{\eta^N}\rangle}=0_n$. Then, for any $\xi^L_0\in {{\mathbb R}}^n$, the system of equations $$\label{eq:lin:last1}
\begin{pmatrix}
O_n & T(\theta) \\
O_n & O_n
\end{pmatrix}
\begin{pmatrix}
\xi^L(\theta) \\
\xi^N(\theta)
\end{pmatrix}
+
\begin{pmatrix}
{\mathfrak{L}_{\omega}} \xi^L(\theta) \\
{\mathfrak{L}_{\omega}} \xi^N(\theta)
\end{pmatrix}
=
\begin{pmatrix}
\eta^L(\theta) \\
\eta^N(\theta)
\end{pmatrix}$$ has a solution of the form $$\begin{aligned}
\xi^N(\theta)={} & \xi^N_0 + {\mathfrak{R}_{\omega}}(\eta^N(\theta)) \,, \\
\xi^L(\theta)={} & \xi^L_0 +{\mathfrak{R}_{\omega}}(\eta^L(\theta) - T(\theta)
\xi^N(\theta)) \,, \end{aligned}$$ where $$\xi^N_0= {\langle{T}\rangle}^{-1} {\langle{\eta^L-T {\mathfrak{R}_{\omega}}(\eta^N)}\rangle}$$ and ${\mathfrak{R}_{\omega}}$ is given by . Moreover, we have the estimates $$\begin{aligned}
& |\xi_0^N| \leq {\left|{{\langle{T}\rangle}^{-1}}\right|}
\Big(
{\|{\eta^L}\|}_\rho
+ \frac{c_R}{\gamma \delta^\tau} {\|{T}\|}_{\rho-\delta} {\|{\eta^N}\|}_\rho
\Big)\, , \\
& {\|{\xi^N}\|}_{\rho-\delta} \leq |\xi_0^N| + \frac{c_R}{\gamma \delta^\tau}
{\|{\eta^N}\|}_\rho\, , \\
& {\|{\xi^L}\|}_{\rho-2\delta} \leq |\xi_0^L| + \frac{c_R}{\gamma \delta^\tau}
\Big(
{\|{\eta^L}\|}_{\rho-\delta} +
{\|{T}\|}_{\rho-\delta} {\|{\xi^N}\|}_{\rho-\delta}
\Big)\, .\end{aligned}$$
This triangular structure is classic in KAM theory and appears in any Kolmogorov scheme (see e.g. [@BennettinGGS84; @Llave01; @Kolmogorov54]). The lemma is directly adapted from Lemma 4.14 in [@HaroCFLM16], and the estimates follow from Lemma \[lem:Russmann\].
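For implementation purposes, the solution formulas in Lemma \[lem:upperT\] translate directly into a Fourier-based solver. The following Python sketch is only meant to fix ideas: it assumes the sign convention ${\mathfrak{L}_{\omega}} u(\theta)=-\Dif u(\theta)\,\omega$ (consistent with the identity used above for ${\mathfrak{L}_{\omega}}(\Elag A)$), Fourier phases $e^{2\pi \mathrm{i}\, k\cdot\theta}$, averages approximated by grid means, and a naive treatment of the small divisors; all function names are ad-hoc and are not part of the paper.

```python
import numpy as np

def R_omega(eta, omega):
    """Zero-average solution u of the cohomological equation L_omega u = eta - <eta>.

    `eta` is sampled on a regular grid of T^d; the conventions
    L_omega u = -Du[omega] and phases exp(2*pi*i*k.theta) are assumed.
    No truncation or Diophantine check is performed in this sketch.
    """
    d = len(omega)
    eta_hat = np.fft.fftn(eta)
    freqs = np.meshgrid(*[np.fft.fftfreq(m, d=1.0 / m) for m in eta.shape],
                        indexing="ij")
    k_dot_omega = sum(freqs[j] * omega[j] for j in range(d))
    u_hat = np.zeros_like(eta_hat)
    nonzero = np.abs(k_dot_omega) > 1e-12       # skip the average (k = 0)
    # -2*pi*i*(k.omega) * u_hat_k = eta_hat_k
    u_hat[nonzero] = -eta_hat[nonzero] / (2j * np.pi * k_dot_omega[nonzero])
    return np.real(np.fft.ifftn(u_hat))

def solve_triangular_cohomological(eta_L, eta_N, T, omega, xi_L0=None):
    """Sketch of the triangular solve in Lemma [lem:upperT].

    eta_L, eta_N : arrays of shape (n,)   + grid_shape
    T            : array  of shape (n, n) + grid_shape
    Averages <.> are approximated by grid means.
    """
    n = eta_L.shape[0]
    grid_axes = tuple(range(1, eta_L.ndim))
    # xi^N = xi0^N + R_omega(eta^N),  with  <T> xi0^N = <eta^L - T R_omega(eta^N)>
    R_eta_N = np.stack([R_omega(eta_N[i], omega) for i in range(n)])
    T_R = np.einsum("ij...,j...->i...", T, R_eta_N)
    T_avg = T.mean(axis=tuple(range(2, T.ndim)))
    xi_N0 = np.linalg.solve(T_avg, (eta_L - T_R).mean(axis=grid_axes))
    xi_N = R_eta_N + xi_N0[(...,) + (None,) * (eta_L.ndim - 1)]
    # xi^L = xi0^L + R_omega(eta^L - T xi^N)   (the paper later takes xi0^L = 0)
    if xi_L0 is None:
        xi_L0 = np.zeros(n)
    rhs_L = eta_L - np.einsum("ij...,j...->i...", T, xi_N)
    xi_L = np.stack([R_omega(rhs_L[i], omega) for i in range(n)])
    xi_L = xi_L + xi_L0[(...,) + (None,) * (eta_L.ndim - 1)]
    return xi_L, xi_N
```

In a rigorous (computer-assisted) application, the Fourier series are of course truncated and the tails are controlled with the Rüssmann estimates of Lemma \[lem:Russmann\]; the sketch above ignores these issues.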
To approximate the solutions of we will invoke Lemma \[lem:upperT\] taking $$\label{eq:eta:corr}
\eta^L(\theta) = - N(\theta)^\top \Omega(K(\theta)) E(\theta)\,,
\qquad
\eta^N(\theta) = L(\theta)^\top \Omega(K(\theta)) E(\theta)\,,$$ and $T(\theta)$ given by . We recall from Lemma \[lem:sympl\] that the compatibility condition ${\langle{\eta^N}\rangle}=0_n$ is satisfied. Note that ${\langle{\xi^N}\rangle}=\xi^N_0$ and we have the freedom of choosing any value for ${\langle{\xi^L}\rangle}= \xi^L_0 \in {{\mathbb R}}^n$. For convenience, we will select later the solution with $\xi_0^L=0_n$, even though other choices can be selected according to the context (see Remark \[rem:unicity\]).
From Lemma \[lem:upperT\] we read that ${\|{\xi}\|}_{\rho-2\delta}={{\mathcal O}}({\|{E}\|}_\rho)$ and, using the geometric properties characterized in Section \[sec:lemmas\], we have ${\|{\Ered}\|}_{\rho-2\delta}={{\mathcal O}}({\|{E}\|}_\rho)$ and ${\|{\Esym}\|}_{\rho-2\delta}={{\mathcal O}}({\|{E}\|}_\rho)$. From these estimates, we conclude that the solution of equation is approximated by the solution of the cohomological equation $$\label{eq:mycohoxi}
\Lambda(\theta) \xi(\theta) + {\mathfrak{L}_{\omega}} \xi(\theta)=\eta(\theta)\,.$$ This, together with other estimates, will be suitably quantified in the next section.
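Schematically, one correction of the quasi-Newton method therefore amounts to assembling $\eta^L=-N^\top\,\Omega(K)\,E$ and $\eta^N=L^\top\,\Omega(K)\,E$, solving the reduced block-triangular system, and updating $\bar K = K + L\,\xi^L + N\,\xi^N$. The following sketch (which reuses the routine `solve_triangular_cohomological` above and takes the frames, the matrix $\Omega\circ K$ and the torsion matrix as precomputed grid data) is again purely illustrative.

```python
import numpy as np

def quasi_newton_step(K, E, L, N, Omega_K, T, omega):
    """One correction of the quasi-Newton method (illustrative sketch).

    K, E    : arrays of shape (2n,)    + grid_shape
    L, N    : arrays of shape (2n, n)  + grid_shape
    Omega_K : array  of shape (2n, 2n) + grid_shape
    T       : array  of shape (n, n)   + grid_shape
    Returns K + L xi^L + N xi^N, with the choice xi0^L = 0.
    """
    Omega_E = np.einsum("ab...,b...->a...", Omega_K, E)
    eta_L = -np.einsum("ai...,a...->i...", N, Omega_E)   # eta^L = -N^T Omega(K) E
    eta_N = np.einsum("ai...,a...->i...", L, Omega_E)    # eta^N =  L^T Omega(K) E
    # solve_triangular_cohomological: sketch given after Lemma [lem:upperT]
    xi_L, xi_N = solve_triangular_cohomological(eta_L, eta_N, T, omega)
    Delta_K = (np.einsum("ai...,i...->a...", L, xi_L)
               + np.einsum("ai...,i...->a...", N, xi_N))
    return K + Delta_K
```

Iterating this step, together with the recomputation of the frames and of the torsion matrix described in the next section, gives the quadratic scheme whose convergence is proved below.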
One step of the iterative procedure {#ssec:iter:lemmas}
-----------------------------------
In this section we apply one correction of the quasi-Newton method described in Section \[ssec:qNewton\] and we obtain sharp quantitative estimates for the new approximately invariant torus and related objects. We set sufficient conditions to preserve the control of the previous estimates.
\[The Iterative Lemma in the ordinary case\] \[lem:KAM:inter:integral\] Let us consider the same setting and hypotheses of Theorem \[theo:KAM\], and a constant $\cauxT>0$. Then, there exist constants $\CDeltaK$, $\CDeltaB$, $\CDeltaTOI$ and $\CE$ such that if the inequalities $$\label{eq:cond1:K:iter}
\frac{\hCDelta {\|{E}\|}_\rho}{\gamma^2 \delta^{2\tau+1}} < 1
\qquad\qquad
\frac{\CE {\|{E}\|}_\rho}{\gamma^4 \delta^{4\tau}} < 1$$ hold for some $0<\delta< \rho$, where $$\label{eq:mathfrak1}
\begin{split}
\hCDelta := \max \bigg\{ & \frac{\gamma^2 \delta^{2\tau}}{\cauxT}
\, , \,
2 \Csym \gamma \delta^{\tau}
\, , \,
\frac{ d \CDeltaK }{\sigmaDK - {\|{\Dif K}\|}_\rho}
\, , \,
\frac{ 2n \CDeltaK }{\sigmaDKT - {\|{(\Dif K)^\top}\|}_\rho}
\, , \,
\\
& \frac{\CDeltaB}{\sigmaB - {\|{B}\|}_\rho}
\, , \,
\frac{\CDeltaTOI }{\sigmaT - {|{{\langle{T}\rangle}^{-1}}|}}
\, , \,
\frac{\CDeltaK \delta}{\dist (K({{\mathbb T}}^d_\rho),\partial B)}
\bigg\} \,,
\end{split}$$ then we have an approximately invariant torus with the same frequency $\omega$, given by $\bar K=K+\DeltaK$ with components in $\Anal({{\mathbb T}}^d_{\rho-2\delta})$, which defines new objects $\bar B$ and $\bar T$ (obtained by replacing $K$ with $\bar K$) satisfying $$\begin{aligned}
& {\|{\Dif \bar K}\|}_{\rho-3\delta} < \sigmaDK \,, \label{eq:DK:iter1} \\
& {\|{(\Dif \bar K)^\top}\|}_{\rho-3\delta} < \sigmaDKT \,, \label{eq:DKT:iter1} \\
& {\|{\bar B}\|}_{\rho-3\delta} < \sigmaB \,, \label{eq:B:iter1} \\
& {|{{\langle{\bar T}\rangle}^{-1}}|} < \sigmaT \,, \label{eq:T:iter1} \\
& \dist(\bar K({{\mathbb T}}^d_{\rho-2\delta}),\partial \B) >0 \,, \label{eq:distB:iter1} \end{aligned}$$ and $$\begin{aligned}
& {\|{\bar K-K}\|}_{\rho-2 \delta} < \frac{\CDeltaK}{\gamma^2 \delta^{2\tau}}
{\|{E}\|}_\rho\,, \label{eq:est:DeltaK} \\
& {\|{\bar B-B}\|}_{\rho-3\delta} < \frac{\CDeltaB}{\gamma^2 \delta^{2 \tau+1}}
{\|{E}\|}_\rho\,, \label{eq:est:DeltaB} \\
& {|{{\langle{\bar T}\rangle}^{-1}-{\langle{T}\rangle}^{-1} }|} < \frac{\CDeltaTOI}{\gamma^2 \delta^{2 \tau+1}}
{\|{E}\|}_\rho\,. \label{eq:est:DeltaT}\end{aligned}$$ The new error of invariance is given by $$\bar E(\theta) = X_\H (\bar K(\theta)) + {\mathfrak{L}_{\omega}} \bar K(\theta) \,,$$ and satisfies $$\label{eq:E:iter1}
{\|{\bar E}\|}_{\rho-2\delta} < \frac{\CE}{\gamma^4
\delta^{4\tau}}{\|{E}\|}_\rho^2\,.$$ The above constants are collected in Table \[tab:constants:all:2\].
This result requires rather cumbersome computations, so we divide the proof into several steps.
#### *Step 1: Control of the new parameterization*.
We start by considering the new parameterization $\bar K (\theta)= K(\theta) + \DeltaK(\theta)$ obtained from the system , with $\eta(\theta)$ given by . We choose the solution that satisfies $\xi_0^L=0$. Using the estimates obtained in Section \[sec:lemmas\] we have $${\|{\eta^L}\|}_\rho \leq \CNT \cteOmega {\|{E}\|}_\rho\,,
\qquad
{\|{\eta^N}\|}_\rho \leq \CLT \cteOmega {\|{E}\|}_\rho\,.$$ In order to invoke Lemma \[lem:twist\] (we must fulfill condition ) we have included the inequality $$\label{eq:ingredient:iter:1}
\frac{{\|{E}\|}_\rho}{\delta} < \cauxT$$ into Hypothesis (this corresponds to the first term in ). Hence, combining Lemma \[lem:twist\] and Lemma \[lem:upperT\], we obtain estimates for the solution of the cohomological equations (we recall that $\xi_0^L=0_n$) $$\begin{aligned}
& {|{\xi^N_0}|} \leq {|{{\langle{T}\rangle}^{-1}}|}
\Big(
{\|{\eta^L}\|}_{\rho} + \frac{c_R}{\gamma \delta^\tau} {\|{T}\|}_{\rho-\delta} {\|{\eta^N}\|}_\rho
\Big) \nonumber \\
& \qquad \leq \sigmaT \Big(
\CNT \cteOmega + \frac{c_R}{\gamma \delta^\tau} \CT \CLT \cteOmega
\Big)
{\|{E}\|}_\rho =: \frac{\CxiNO}{\gamma \delta^\tau} {\|{E}\|}_\rho\,, \label{eq:CxiNO} \\
& {\|{\xi^N}\|}_{\rho-\delta} \leq {|{\xi^N_0}|}+ \frac{c_R}{\gamma \delta^\tau} {\|{\eta^N}\|}_\rho \nonumber \\
& \qquad \leq
\frac{\CxiNO}{\gamma \delta^\tau} {\|{E}\|}_\rho
+ \frac{c_R}{\gamma \delta^\tau}\CLT \cteOmega {\|{E}\|}_\rho
=:
\frac{\CxiN}{\gamma \delta^\tau} {\|{E}\|}_\rho \,, \label{eq:CxiN} \\
& {\|{\xi^L}\|}_{\rho-2\delta} \leq {|{\xi_0^L}|} + \frac{c_R}{\gamma \delta^\tau}
\Big(
{\|{\eta^L}\|}_{\rho} + {\|{T}\|}_{\rho-\delta} {\|{\xi^N}\|}_{\rho-\delta}
\Big) \nonumber \\
& \qquad \leq \frac{c_R}{\gamma \delta^\tau} \Big(
\CNT \cteOmega + \CT \frac{\CxiN}{\gamma \delta^\tau}
\Big)
{\|{E}\|}_\rho =: \frac{\CxiL}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho\,. \label{eq:CxiL} \end{aligned}$$ The norm of the full vector $\xi(\theta)$, which satisfies , is controlled as $$\begin{aligned}
{\|{\xi}\|}_{\rho-2\delta} \leq {} &
\max\{ {\|{\xi^L}\|}_{\rho-2\delta}\, , \, {\|{\xi^N}\|}_{\rho-\delta}\} \nonumber \\
\leq {} &
\frac{\max\{\CxiL \, , \, \CxiN \gamma \delta^\tau\}}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho
=: \frac{\Cxi}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho\,. \label{eq:Cxi}\end{aligned}$$
The new parameterization $\bar K(\theta)$ and the related objects are controlled using standard computations. Estimate follows directly from $$\bar K(\theta)-K(\theta) = \DeltaK(\theta) = P(\theta) \xi(\theta) = L(\theta) \xi^L(\theta) + N (\theta) \xi^N(\theta)\,,$$ that is, using estimates in and , we obtain $$\label{eq:CDeltaK}
{\|{\bar K - K}\|}_{\rho-2\delta}
\leq \frac{\CL \CxiL + \CN \CxiN \gamma \delta^\tau}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho =:
\frac{\CDeltaK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho\,.$$
To complete this step, we check that $\bar K(\theta)$ remains inside the domain $\B$ where the global objects are defined. This is important because we need to estimate the new error $\bar E(\theta)$ before controlling the remaining geometrical objects. For this, we observe that $$\begin{aligned}
\dist(\bar K({{\mathbb T}}^d_{\rho-2\delta}),\partial \B) \geq {} & \dist(K({{\mathbb T}}^d_\rho),\partial \B) - {\|{\DeltaK}\|}_{\rho-2\delta} \nonumber \\
\geq {} & \dist(K({{\mathbb T}}^d_\rho),\partial \B) - \frac{\CDeltaK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho>0\,, \label{eq:ingredient:iter:2}\end{aligned}$$ where the last inequality follows from Hypothesis (this corresponds to the seventh term in ). We have obtained the control in .
#### *Step 2: Control of the new error of invariance*.
To control the error of invariance of the corrected parameterization $\bar K$, we first consider the error in the solution of the linearized equation , that is, we control the quadratic terms that are neglected when considering the equation : $$\Elin (\theta) = \Ered(\theta) \xi(\theta) - \Omega_0 \Esym(\theta) {\mathfrak{L}_{\omega}}\xi(\theta)\,.$$ The term ${\mathfrak{L}_{\omega}} \xi(\theta)$ is controlled using that $\xi(\theta)$ is precisely the solution of the cohomological equation : $$\begin{aligned}
{\|{{\mathfrak{L}_{\omega}} \xi^N}\|}_{\rho}
= {} &
{\|{\eta^N}\|}_{\rho}
\leq \CLT \cteOmega {\|{E}\|}_\rho =: \CLiexiN {\|{E}\|}_\rho \,,
\label{eq:CLiexiN}
\\
{\|{{\mathfrak{L}_{\omega}} \xi^L}\|}_{\rho-\delta}
={} &
{\|{\eta^L - T \xi^N}\|}_{\rho-\delta}
\leq \left(\CNT \cteOmega + \CT \frac{\CxiN}{\gamma \delta^\tau}\right){\|{E}\|}_\rho \nonumber \\
=: {} & \frac{\CLiexiL}{\gamma \delta^{\tau}} {\|{E}\|}_\rho \,,
\label{eq:CLiexiL} \\
{\|{{\mathfrak{L}_{\omega}} \xi}\|}_{\rho-\delta}
\leq {} & \max\left(\frac{\CLiexiL}{\gamma \delta^{\tau}}, \CLiexiN \right) {\|{E}\|}_\rho =: \frac{\CLiexi}{\gamma \delta^{\tau}} {\|{E}\|}_\rho\,.
\label{eq:CLiexi}\end{aligned}$$
Hence, we control $\Elin(\theta)$ by $$\begin{aligned}
{\|{\Elin}\|}_{\rho-2\delta} \leq {} &
{\|{\Ered}\|}_{\rho-2\delta} {\|{\xi}\|}_{\rho-2\delta} +
\cteOmega {\|{\Esym}\|}_{\rho-2\delta} {\|{{\mathfrak{L}_{\omega}}\xi}\|}_{\rho-2 \delta} \nonumber \\
\leq {} &
\frac{\Cred}{\gamma \delta^{\tau+1}}
\frac{\Cxi}{\gamma^2 \delta^{2 \tau}}
{\|{E}\|}_\rho^2
+
\cteOmega
\frac{\Csym}{\gamma \delta^{\tau+1}}
\frac{\CLiexi}{\gamma \delta^{\tau}}
{\|{E}\|}_\rho^2
=: \frac{\Clin}{\gamma^3 \delta^{3\tau+1}} {\|{E}\|}_\rho^2\,.
\label{eq:Clin}\end{aligned}$$ We remark that this last estimate can be improved by considering the components of $\xi(\theta)=(\xi^L(\theta),\xi^N(\theta))$ separately, thus obtaining a divisor $\gamma^2 \delta^{2\tau+1}$ in . Nevertheless, this improvement is irrelevant for practical purposes.
After performing the correction, the error of invariance associated with the new parameterization is given by $$\label{eq:new:E:comp}
\begin{split}
\bar E(\theta) = {} & X_\H(K(\theta)+\DeltaK(\theta)) + {\mathfrak{L}_{\omega}}K(\theta) + {\mathfrak{L}_{\omega}}\DeltaK(\theta) \\
= {} & X_{\H}(K(\theta))+\Dif X_\H (K(\theta)) \DeltaK(\theta) + {\mathfrak{L}_{\omega}}K(\theta) + {\mathfrak{L}_{\omega}}\DeltaK(\theta)
+\Delta^2 X(\theta) \\
= {} & \Dif X_\H (K(\theta)) \DeltaK(\theta) + {\mathfrak{L}_{\omega}}\DeltaK(\theta) + E(\theta)
+\Delta^2 X(\theta) \\
= {} & \left( \Dif X_\H (K(\theta)) P(\theta) + {\mathfrak{L}_{\omega}}P(\theta) \right) \xi(\theta)
+P(\theta) {\mathfrak{L}_{\omega}} \xi(\theta) + E(\theta) + \Delta^2X(\theta) \\
= {} & (-\Omega_0 P(\theta)^\top \Omega(K(\theta)))^{-1} \Elin(\theta) + \Delta^2X(\theta) \\
= {} & P(\theta) (I_{2n}-\Omega_0 \Esym(\theta))^{-1} \Elin(\theta)+\Delta^2 X(\theta)\,,
\end{split}$$ where $$\label{eq:Delta2X}
\begin{split}
\Delta^2 X(\theta) = {} & X_\H(K(\theta)+\DeltaK(\theta))-X_\H(K(\theta))-\Dif X_\H(K(\theta)) \DeltaK(\theta) \\
= {} & \int_0^1 (1-t) \Dif^2 X_\H(K(\theta)+t \DeltaK(\theta)) [\DeltaK(\theta),\DeltaK(\theta)] \dif t\,,
\end{split}$$ and we used . Notice that the above error function is well defined, due to the computations in , and we estimate its norm as follows $${\|{\bar E}\|}_{\rho-2\delta} \leq {\|{P}\|}_{\rho-2\delta} {\|{(I-\Omega_0 \Esym)^{-1}}\|}_{\rho-2\delta} {\|{\Elin}\|}_{\rho-2\delta}+{\|{\Delta^2 X}\|}_{\rho-2\delta}\,.$$ Then, using a Neumann series argument, we obtain $${\|{(I-\Omega_0 \Esym)^{-1}}\|}_{\rho-2\delta} \leq \frac{1}{1-{\|{\Omega_0 \Esym}\|}_{\rho-2\delta}} <2\,,$$ where we used the inequality $$\frac{\Csym}{\gamma \delta^{\tau+1}}{\|{E}\|}_\rho \leq \frac{1}{2}\ ,$$ that corresponds to the second term in (Hypothesis ). Putting together the above estimates, and applying the mean value theorem to control $\Delta^2 X(\theta)$, we obtain $$\label{eq:CE}
{\|{\bar E}\|}_{\rho-2\delta} \leq \left(\frac{ 2(\CL+\CN)\Clin}{\gamma^3 \delta^{3\tau+1}} + \frac{1}{2} \cteDDXH \frac{(\CDeltaK)^2}{\gamma^4 \delta^{4\tau}} \right)
{\|{E}\|}_\rho^2 =:
\frac{\CE}{\gamma^4 \delta^{4\tau}} {\|{E}\|}_\rho^2\,.$$ We have obtained the estimate . Notice that the second assumption in and imply that $$\label{eq:barEvsE}
{\|{\bar E}\|}_{\rho-2\delta} < {\|{E}\|}_\rho< \delta \cauxT\,.$$ This will be used in Step 6.
#### *Step 3: Control of the new frame $L(\theta)$*.
Combining with Cauchy estimates, we obtain the control : $$\label{eq:ingredient:iter:4}
{\|{\Dif \bar K}\|}_{\rho-3\delta} \leq
{\|{\Dif K}\|}_{\rho} +
{\|{\Dif \DeltaK}\|}_{\rho-3\delta} \leq
{\|{\Dif K}\|}_{\rho} +
\frac{d \CDeltaK}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho < \sigmaDK \,,$$ where the last inequality follows from Hypothesis (this corresponds to the third term in ). The control on the transposed object is analogous $$\label{eq:ingredient:iter:5}
{\|{(\Dif \bar K)^\top}\|}_{\rho-3\delta} \leq
{\|{(\Dif K)^\top}\|}_{\rho} +
\frac{2n \CDeltaK}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho < \sigmaDKT \,,$$ where the last inequality follows from Hypothesis (this corresponds to the fourth term in ).
After obtaining the estimates and , it is clear that $${\|{\bar L}\|}_{\rho-3\delta} \leq \CL\,,
\qquad
{\|{\bar L^\top}\|}_{\rho-3\delta} \leq \CLT\,.$$ Indeed, we can control the norm of the corresponding corrections using Cauchy estimates, the mean value theorem and estimate : $$\begin{aligned}
{\|{\bar L-L}\|}_{\rho-3 \delta} \leq {} &
\frac{d \CDeltaK}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho
+
\frac{\cteDXp \CDeltaK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho =: \frac{\CDeltaL}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho\,, \label{eq:CDeltaL} \\
{\|{\bar L^\top-L^\top}\|}_{\rho-3 \delta} \leq {} &
\frac{\CDeltaK \max\{ 2n \, , \, \cteDXpT \delta\}}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho
=: \frac{\CDeltaLT}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho\,. \label{eq:CDeltaLT}\end{aligned}$$
#### *Step 4: Control of the new transversality condition*.
To control $\bar B$ we use Lemma \[lem:aux\] taking $$\begin{aligned}
M(\theta) = {} & G_L(\theta) = L(\theta)^\top G(K(\theta)) L(\theta) \,,\\
\bar M(\theta) = {} & G_{\bar L}(\theta) = \bar L(\theta)^\top G(\bar K(\theta)) \bar L(\theta) \,,\end{aligned}$$ where we have used the notation introduced in . First, we compute $$\begin{aligned}
{\|{G \circ \bar K-G \circ K}\|}_{\rho-2\delta} \leq {} & {\|{\Dif G}\|}_{\B} {\|{\bar K - K}\|}_{\rho-2\delta}
\leq \frac{\cteDG \CDeltaK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho \nonumber \\
=: {} & \frac{\CDeltaG}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho\,, \label{eq:CDeltaG}\end{aligned}$$ and $$\begin{aligned}
{\|{G_{\bar L}-G_L}\|}_{\rho-3\delta}
\leq {} & {\|{\bar L^\top (G \circ \bar K) \bar L - \bar L^\top (G \circ \bar K) L}\|}_{\rho-3\delta} \nonumber\\
& +{\|{\bar L^\top (G \circ \bar K) L - \bar L^\top (G \circ K) L}\|}_{\rho-3\delta} \nonumber \\
&+{\|{\bar L^\top (G \circ K) L - L^\top (G \circ K) L}\|}_{\rho-3\delta} \nonumber \\
\leq {} & {\|{\bar L^\top}\|}_\rho {\|{G}\|}_\B {\|{\bar L- L}\|}_{\rho-3\delta} \nonumber\\
& + {\|{\bar L^\top}\|}_\rho {\|{G \circ \bar K-G \circ K}\|}_{\rho-3\delta} {\|{L}\|}_\rho \nonumber \\
&+{\|{\bar L^\top - L^\top}\|}_{\rho-3\delta}{\|{G}\|}_\B {\|{L}\|}_\rho \nonumber \\
\leq {} & \frac{\CLT \cteG \CDeltaL + \CLT \CDeltaG \CL \delta + \CDeltaLT \cteG \CL }{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \nonumber \\
=: {} & \frac{\CDeltaGL}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \,. \label{eq:CDeltaGL} \end{aligned}$$ Then, we introduce the constant $$\CDeltaB := 2(\sigmaB)^2 \CDeltaGL$$ and check the condition in Lemma \[lem:aux\]: $$\begin{aligned}
\frac{2 (\sigmaB)^2 {\|{G_{\bar L}-G_L}\|}_{\rho-3\delta}}{\sigmaB-{\|{B}\|}_\rho} \leq {} &
\frac{2 (\sigmaB)^2 \CDeltaGL}{\sigmaB-{\|{B}\|}_\rho} \frac{{\|{E}\|}_\rho}{\gamma^2 \delta^{2\tau+1}}
\nonumber \\
= {} &
\frac{\CDeltaB}{\sigmaB-{\|{B}\|}_\rho} \frac{{\|{E}\|}_\rho}{\gamma^2 \delta^{2\tau+1}}
< 1\,,
\label{eq:ingredient:iter:6}\end{aligned}$$ where the last inequality follows from Hypothesis (this corresponds to the fifth term in ). Hence, by invoking Lemma \[lem:aux\], we conclude that $$\label{eq:CDeltaB}
{\|{\bar B}\|}_{\rho-3\delta} < \sigmaB\,, \qquad
{\|{\bar B-B}\|}_{\rho-3\delta} \leq \frac{2 (\sigmaB)^2 \CDeltaGL}{\gamma^2 \delta^{2\tau+1}}
{\|{E}\|}_\rho
= \frac{\CDeltaB}{\gamma^2 \delta^{2\tau+1}}
{\|{E}\|}_\rho\,,$$ and so, we obtain the estimates and on the new object.
#### *Step 5: Control of the new frame $N(\theta)$*.
To control the new adapted normal frame $\bar N(\theta)$, it is convenient to recall the following notation: $$\begin{aligned}
N(\theta) = {} & L(\theta) A(\theta) + N^0(\theta) B(\theta)\,, \\
N^0(\theta) = {} & J(K(\theta)) L(\theta) \,,\\
A(\theta) = {} & -\tfrac{1}{2} (B(\theta)^\top L(\theta)^\top \tilde \Omega(K(\theta))
L(\theta) B(\theta)) \,, \\
B(\theta) = {} & (L(\theta)^\top G(K(\theta)) L(\theta))^{-1}\,,\end{aligned}$$ where, as usual, the new objects $\bar N(\theta)$, $\bar A(\theta)$, $\bar N^0(\theta)$ and $\bar B(\theta)$ are obtained by replacing $K(\theta)$ by $\bar K(\theta)$. Note that the object $\bar B(\theta)$ has been controlled in Step 4.
Now, we recall the notation introduced in and reproduce the computations in and for the matrix functions $$\begin{aligned}
\tilde \Omega_L(\theta) = {} & L(\theta)^\top \tilde \Omega(K(\theta)) L(\theta) \,,\\
\tilde \Omega_{\bar L}(\theta) = {} & \bar L(\theta)^\top \tilde \Omega(\bar K(\theta)) \bar L(\theta) \,,\end{aligned}$$ thus obtaining $$\label{eq:CDeltatOmega}
{\|{\tilde \Omega \circ \bar K-\tilde \Omega \circ K}\|}_{\rho -2\delta} \leq \frac{\cteDtOmega \CDeltaK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho =:\frac{\CDeltatOmega}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho \,,$$ and $$\begin{aligned}
{\|{\tilde \Omega_{\bar L}-\tilde \Omega_L}\|}_{\rho-3\delta}
\leq {} & \frac{\CLT \ctetOmega \CDeltaL + \CLT \CDeltatOmega \CL \delta + \CDeltaLT \ctetOmega \CL }{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \nonumber \\
=: {} & \frac{\CDeltatOmegaL}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \,. \label{eq:CDeltatOmegaL}\end{aligned}$$
Now, we control the matrix $\bar A(\theta)$ as follows $$\begin{aligned}
{\|{\bar A-A}\|}_{\rho-3\delta} \leq {} &
\frac{1}{2}
{\|{
\bar B^\top \tilde \Omega_{\bar L} \bar B
- \bar B^\top \tilde \Omega_{\bar L} B
}\|}_{\rho-3\delta} \nonumber \\
& + \frac{1}{2}
{\|{
\bar B^\top \tilde \Omega_{\bar L} B
- \bar B^\top \tilde \Omega_{L} B
}\|}_{\rho-3\delta} + \frac{1}{2}
{\|{
\bar B^\top \tilde \Omega_{L} B
- B^\top \tilde \Omega_{L} B
}\|}_{\rho-3\delta} \nonumber \\
\leq {} &
\sigmaB \CtOmegaL {\|{\bar B-B}\|}_{\rho-3 \delta}
+\frac{1}{2} (\sigmaB)^2 {\|{\tilde \Omega_{\bar L}-\tilde \Omega_L}\|}_{\rho-3 \delta} \nonumber \\
\leq {} & \frac{\sigmaB \CtOmegaL \CDeltaB + \frac{1}{2} (\sigmaB)^2 \CDeltatOmegaL}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho
=: \frac{\CDeltaA}{\gamma^2 \delta^{2\tau+1}}{\|{E}\|}_\rho \,, \label{eq:CDeltaA}\end{aligned}$$ where we used that $B(\theta)^\top=B(\theta)$, and the constants , and . The same control holds for ${\|{\bar A^\top-A^\top}\|}_{\rho-3\delta}$, since we have that $A(\theta)^\top=-A(\theta)$. We notice that $\CDeltaA=0$ in **Case III**.
Analogous computations yield $$\begin{aligned}
{\|{J \circ \bar K-J \circ K}\|}_{\rho -2\delta} \leq {} & \frac{\cteDJ \CDeltaK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho =:\frac{\CDeltaJ}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho \,, \label{eq:CDeltaJ} \\
{\|{(J \circ \bar K-J \circ K)^\top}\|}_{\rho -2\delta} \leq {} & \frac{\cteDJT \CDeltaK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho =:\frac{\CDeltaJT}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho \,, \label{eq:CDeltaJT} \\
{\|{\bar N^0-N^0}\|}_{\rho-3\delta} \leq {} &
{\|{J}\|}_\B {\|{\bar L-L}\|}_{\rho-3\delta} + {\|{J \circ \bar K- J \circ K}\|}_{\rho-2 \delta} {\|{L}\|}_\rho \nonumber \\
\leq {} & \frac{\cteJ \CDeltaL + \CDeltaJ \CL \delta}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \nonumber \\
=: {} & \frac{\CDeltaNO}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \,, \label{eq:CDeltaNO} \\
{\|{(\bar N^0)^\top-(N^0)^\top}\|}_{\rho-3\delta}
\leq {} & \frac{\CDeltaLT \cteJT + \CLT \CDeltaJT \delta }{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \nonumber \\
=: {} & \frac{\CDeltaNOT}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \,. \label{eq:CDeltaNOT}\end{aligned}$$
Finally, we control the correction of the adapted normal frame: $$\begin{aligned}
\|\bar N-N &\|_{\rho-3\delta} \nonumber \\
& \leq {\|{\bar L \bar A-\bar L A}\|}_{\rho-3 \delta}+
{\|{\bar L A- L A}\|}_{\rho-3 \delta} \nonumber \\
& \hphantom{\leq}+ {\|{\bar N^0 \bar B-\bar N^0 B}\|}_{\rho-3 \delta}+
{\|{\bar N^0 B- N^0 B}\|}_{\rho-3 \delta} \nonumber \\
& \leq \frac{\CL \CDeltaA + \CDeltaL \CA + \CNO \CDeltaB + \CDeltaNO \sigmaB}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho
\nonumber \\
& =: \frac{\CDeltaN}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \,, \label{eq:CDeltaN}
\\
\|\bar N^\top-N^\top&\|_{\rho-3\delta} \nonumber \\
& \leq \frac{\CA \CDeltaLT + \CDeltaA \CLT + \CDeltaB \CNOT + \sigmaB \CDeltaNOT}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \nonumber \\
& =: \frac{\CDeltaNT}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \,. \label{eq:CDeltaNT}\end{aligned}$$
#### *Step 6: Control of the action of the left operator*.
It is worth mentioning that a significant effort is made to obtain optimal estimates for the twist condition. As illustrated in the proof of Lemma \[lem:twist\], improved estimates are obtained by avoiding the use of Cauchy estimates when controlling the action of ${\mathfrak{L}_{\omega}}$ on the different objects and their corrections.
Using the assumption and the control of the new error of invariance, we preserve the control for the new objects ${\mathfrak{L}_{\omega}}\bar K(\theta)$, ${\mathfrak{L}_{\omega}}\bar L(\theta)$ and ${\mathfrak{L}_{\omega}}\bar L(\theta)^\top$. Indeed, using , we have $$\begin{aligned}
{\|{{\mathfrak{L}_{\omega}} \bar K}\|}_{\rho-2\delta} \leq {} & {\|{\bar E}\|}_{\rho-2\delta}+ {\|{X_\H \circ \bar K}\|}_{\rho-2\delta} \leq \delta \cauxT + \cteXH = \CLieK\,, \\
{\|{{\mathfrak{L}_{\omega}} \bar L}\|}_{\rho-3\delta} \leq {} &
d \cauxT + \cteDXH \sigmaDK + \cteDXp \CLieK
= \CLieL \,, \\
{\|{{\mathfrak{L}_{\omega}} \bar L^\top}\|}_{\rho-3\delta} \leq {} &
\max \left\{2n \cauxT + \cteDXHT \sigmaDK \, , \, \cteDXpT \CLieK \right\}
= \CLieLT \,.\end{aligned}$$
Then, we also control the action of ${\mathfrak{L}_{\omega}}$ on the correction of the torus, using that $$\begin{aligned}
& {\mathfrak{L}_{\omega}} \bar K(\theta) - {\mathfrak{L}_{\omega}} K(\theta) =
{\mathfrak{L}_{\omega}} \DeltaK(\theta) \\
& \qquad =
{\mathfrak{L}_{\omega}} L(\theta) \xi^L(\theta)
+L(\theta) {\mathfrak{L}_{\omega}} \xi^L(\theta)
+{\mathfrak{L}_{\omega}} N(\theta) \xi^N(\theta)
+N(\theta) {\mathfrak{L}_{\omega}}\xi^N(\theta)\end{aligned}$$ and, recalling previous estimates, we obtain: $$\begin{aligned}
{\|{{\mathfrak{L}_{\omega}} \bar K - {\mathfrak{L}_{\omega}} K}\|}_{\rho-2\delta} \leq {} &
\left(
\frac{\CLieL \CxiL}{\gamma^2 \delta^{2\tau}} +
\frac{\CL \CLiexiL}{\gamma \delta^\tau} + \frac{\CLieN \CxiN}{\gamma \delta^{\tau}}
+\CN \CLiexiN
\right){\|{E}\|}_\rho
\nonumber \\
=:{} & \frac{\CDeltaLieK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho\,.
\label{eq:CDeltaLieK}\end{aligned}$$
The action of ${\mathfrak{L}_{\omega}}$ on the correction of $L(\theta)$ is similar. On the one hand, we have $$\begin{aligned}
{\|{{\mathfrak{L}_{\omega}} \bar L-{\mathfrak{L}_{\omega}} L}\|}_{\rho -3\delta}
\leq {} &
{\|{{\mathfrak{L}_{\omega}}( \Dif \bar K - \Dif K)}\|}_{\rho-3\delta}
+{\|{{\mathfrak{L}_{\omega}} (X_p \circ \bar K - X_p \circ K)}\|}_{\rho-3\delta}\nonumber \\
\leq {} &
{\|{ \Dif ({\mathfrak{L}_{\omega}}\bar K - {\mathfrak{L}_{\omega}} K)}\|}_{\rho-3\delta} \nonumber \\
& +{\|{(\Dif X_p \circ \bar K) [{\mathfrak{L}_{\omega}} \bar K] -
(\Dif X_p \circ \bar K) [{\mathfrak{L}_{\omega}}K]}\|}_{\rho-3\delta} \nonumber \\
& +{\|{(\Dif X_p \circ \bar K) [{\mathfrak{L}_{\omega}} K] -
(\Dif X_p \circ K) [{\mathfrak{L}_{\omega}}K]}\|}_{\rho-3\delta} \nonumber \\
\leq {} & \frac{d \CDeltaLieK}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho
+ \frac{\cteDXp \CDeltaLieK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho
+ \frac{\cteDDXp \CDeltaK \CLieK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho \nonumber \\
=: {} & \frac{\CDeltaLieL}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho\,, \label{eq:CDeltaLieL}\end{aligned}$$ and on the other hand, we have $$\begin{aligned}
&{\|{{\mathfrak{L}_{\omega}} \bar L^\top-{\mathfrak{L}_{\omega}} L^\top}\|}_{\rho -3\delta} \nonumber \\
& \qquad\qquad \leq
\max\{{\|{{\mathfrak{L}_{\omega}}( \Dif \bar K - \Dif K)^\top}\|}_{\rho-3\delta}
\, , \,
{\|{{\mathfrak{L}_{\omega}} (X_p \circ \bar K - X_p \circ K)^\top}\|}_{\rho-3\delta}\} \nonumber \\
& \qquad\qquad \leq
\max
\left \{\frac{2n \CDeltaLieK}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho
\, , \,
\frac{\cteDXpT \CDeltaLieK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho
+ \frac{\cteDDXpT \CDeltaK \CLieK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho \right\} \nonumber \\
& \qquad\qquad =: \frac{\CDeltaLieLT}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho\,. \label{eq:CDeltaLieLT}\end{aligned}$$
To control the action of ${\mathfrak{L}_{\omega}}$ on the correction of the matrix $B(\theta)$ that provides the transversality condition, we first consider the correction $$\begin{aligned}
\|{\mathfrak{L}_{\omega}}(G \circ \bar K) - & {\mathfrak{L}_{\omega}}(G\circ K)\|_{\rho-2\delta}
\leq {\|{\Dif G}\|}_{\B} {\|{{\mathfrak{L}_{\omega}}\bar K -{\mathfrak{L}_{\omega}}K}\|}_{\rho-2\delta} \nonumber \\
&~+ {\|{\Dif^2 G}\|}_{\B} {\|{\bar K-K}\|}_{\rho-2\delta} {\|{{\mathfrak{L}_{\omega}} K}\|}_{\rho-\delta} \nonumber \\
&\leq \frac{\cteDG \CDeltaLieK + \cteDDG \CDeltaK \CLieK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho =: \frac{\CDeltaLieG}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho\,, \label{eq:CDeltaLieG}\end{aligned}$$ and we control the correction of the adapted metric $$\label{eq:eqCDeltaLieGL}
\begin{split}
{\mathfrak{L}_{\omega}} G_{\bar L}(\theta) - {\mathfrak{L}_{\omega}} G_L(\theta)
= {} & {\mathfrak{L}_{\omega}} \bar L^\top(\theta) G(\bar K(\theta)) \bar L(\theta)
- {\mathfrak{L}_{\omega}}L^\top(\theta) G (K(\theta)) L(\theta) \\
& + \bar L^\top(\theta) {\mathfrak{L}_{\omega}} G (\bar K(\theta)) \bar L (\theta)
- L^\top(\theta) {\mathfrak{L}_{\omega}} G(K(\theta)) L(\theta) \\
& +\bar L^\top(\theta) G(\bar K(\theta)) {\mathfrak{L}_{\omega}}\bar L(\theta)
- L^\top(\theta) G (K(\theta)) {\mathfrak{L}_{\omega}}L(\theta)
\end{split}$$ as follows $$\begin{aligned}
\|{\mathfrak{L}_{\omega}} G_{\bar L} & - {\mathfrak{L}_{\omega}} G_L\|_{\rho-3\delta} \nonumber \\
& \leq
\frac{\CLieLT \cteG \CDeltaL + \CLieLT \cteDG \CDeltaK \CL \delta + \CDeltaLieLT \cteG \CL}{\gamma^2 \delta^{2\tau+1}}{\|{E}\|}_\rho \nonumber \\
& \hphantom{\leq} +
\frac{\CLT \cteDG \CLieK \CDeltaL + \CLT \CDeltaLieG \CL \delta + \CDeltaLT \cteDG \CLieK \CL}{\gamma^2 \delta^{2\tau+1}}{\|{E}\|}_\rho \nonumber \\
& \hphantom{\leq} +
\frac{\CLT \cteG \CDeltaLieL + \CLT \cteDG \CDeltaK \CLieL \delta + \CDeltaLT \cteG \CLieL}{\gamma^2 \delta^{2\tau+1}}{\|{E}\|}_\rho \nonumber \\
& =: \frac{\CDeltaLieGL}{\gamma^2 \delta^{2\tau+1}}{\|{E}\|}_\rho\,. \label{eq:CDeltaLieGL}\end{aligned}$$ Moreover, the following estimates (borrowed from Lemma \[lem:twist\]) will also be useful $${\|{{\mathfrak{L}_{\omega}} G_L}\|}_{\rho-\delta}
\leq
\CLieGL\,,
\qquad
{\|{{\mathfrak{L}_{\omega}} G_{\bar L}}\|}_{\rho-3\delta}
\leq
\CLieGL\,.$$ Again, it is clear that this control is preserved for the corrected objects. Using the previous estimates, we have $$\begin{aligned}
& {\|{{\mathfrak{L}_{\omega}}\bar B -{\mathfrak{L}_{\omega}} B}\|}_{\rho-3\delta} \leq
{\|{\bar B {\mathfrak{L}_{\omega}} G_{\bar L} \bar B - \bar B {\mathfrak{L}_{\omega}} G_{\bar L} B}\|}_{\rho-3 \delta} \nonumber \\
& \qquad + {\|{\bar B {\mathfrak{L}_{\omega}} G_{\bar L} B - \bar B {\mathfrak{L}_{\omega}} G_L B}\|}_{\rho-3 \delta}
+ {\|{\bar B {\mathfrak{L}_{\omega}} G_{L} B - B {\mathfrak{L}_{\omega}} G_L B}\|}_{\rho-3 \delta} \nonumber \\
& \qquad \leq
\frac{
2 \sigmaB \CLieGL \CDeltaB + (\sigmaB)^2 \CDeltaLieGL
}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho =:
\frac{
\CDeltaLieB
}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho\,. \label{eq:CDeltaLieB}\end{aligned}$$
By repeating the computations , and , mutatis mutandis, we obtain the estimates $$\begin{aligned}
\|{\mathfrak{L}_{\omega}}(\tilde \Omega \circ \bar K) - & {\mathfrak{L}_{\omega}}(\tilde \Omega \circ K)\|_{\rho-2\delta}
\nonumber \\
&\leq \frac{\cteDtOmega \CDeltaLieK + \cteDDtOmega \CDeltaK \CLieK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho =: \frac{\CDeltaLietOmega}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho\,, \label{eq:CDeltaLietOmega}\end{aligned}$$ and $$\begin{aligned}
\|{\mathfrak{L}_{\omega}} \tilde \Omega_{\bar L} & -{\mathfrak{L}_{\omega}} \tilde\Omega_L\|_{\rho-3\delta} \nonumber \\
& \leq
\frac{\CLieLT \ctetOmega \CDeltaL + \CLieLT \cteDtOmega \CDeltaK \CL \delta + \CDeltaLieLT \ctetOmega \CL}{\gamma^2 \delta^{2\tau+1}}{\|{E}\|}_\rho \nonumber \\
& \hphantom{\leq} +
\frac{\CLT \cteDtOmega \CLieK \CDeltaL + \CLT \CDeltaLietOmega \CL \delta + \CDeltaLT \cteDtOmega \CLieK \CL}{\gamma^2 \delta^{2\tau+1}}{\|{E}\|}_\rho \nonumber \\
& \hphantom{\leq} +
\frac{\CLT \ctetOmega \CDeltaLieL + \CLT \cteDtOmega \CDeltaK \CLieL \delta + \CDeltaLT \ctetOmega \CLieL}{\gamma^2 \delta^{2\tau+1}}{\|{E}\|}_\rho \nonumber \\
& =: \frac{\CDeltaLietOmegaL}{\gamma^2 \delta^{2\tau+1}}{\|{E}\|}_\rho\,. \label{eq:CDeltaLietOmegaL}\end{aligned}$$ We also recall the following controls $${\|{{\mathfrak{L}_{\omega}} \tilde \Omega_L}\|}_{\rho-\delta}
\leq
\CLietOmegaL\,,
\qquad
{\|{{\mathfrak{L}_{\omega}} \tilde \Omega_{\bar L}}\|}_{\rho-3\delta}
\leq
\CLietOmegaL\,.$$ Finally, we obtain $$\begin{aligned}
\|{\mathfrak{L}_{\omega}}\bar A & -{\mathfrak{L}_{\omega}} A\|_{\rho-3\delta} \nonumber \\
& \leq \frac{\CLieB \CtOmegaL \CDeltaB + \CLieB \CDeltatOmegaL \sigmaB + \CDeltaLieB \CtOmegaL \sigmaB}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \nonumber \\
& \hphantom{\leq} + \frac{1}{2}
\frac{\sigmaB \CLietOmegaL \CDeltaB + (\sigmaB)^2 \CDeltaLietOmegaL + \CDeltaB \CLietOmegaL \sigmaB}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \nonumber \\
& =:
\frac{
\CDeltaLieA
}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho\,, \label{eq:CDeltaLieA} \\
\|{\mathfrak{L}_{\omega}}(J \circ \bar K) & -{\mathfrak{L}_{\omega}} (J\circ K)\|_{\rho-2\delta} \nonumber \\
& \leq \frac{\cteDJ \CDeltaLieL + \cteDDJ \CDeltaK \CLieK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho =: \frac{
\CDeltaLieJ
}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho\,,
\label{eq:CDeltaLieJ} \\
\|{\mathfrak{L}_{\omega}}\bar N^0 & -{\mathfrak{L}_{\omega}} N^0\|_{\rho-3\delta} \nonumber \\
& \leq \frac{\CLieJ \CDeltaL + \CDeltaLieJ \CL\delta + \cteJ \CDeltaLieL + \CDeltaJ \CLieL \delta}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \nonumber \\
& =: \frac{
\CDeltaLieNO
}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho\,,
\label{eq:CDeltaLieNO} \\
\|{\mathfrak{L}_{\omega}}\bar N & -{\mathfrak{L}_{\omega}} N\|_{\rho-3\delta} \nonumber \\
& \leq \frac{\CLieL \CDeltaA + \CDeltaLieL \CA + \CL \CDeltaLieA + \CDeltaL \CLieA}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \nonumber \\
& \hphantom{\leq} + \frac{\CLieNO \CDeltaB + \CDeltaLieNO \sigmaB + \CNO \CDeltaLieB + \CDeltaNO \CLieB}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \nonumber \\
& =: \frac{
\CDeltaLieN
}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho\,,
\label{eq:CDeltaLieN}\end{aligned}$$ where we used the constant .
#### *Step 7: Control of the new torsion condition*.
Notice that this step could be replaced by the use of Cauchy estimates, thus obtaining a much more pessimistic control. Now we control the non-degeneracy (twist) condition associated with the new torsion matrix $$\bar T(\theta) = \bar N(\theta)^\top \Omega(\bar K(\theta)) {{{\mathcal X}}_{\bar N}}(\theta)\,,$$ where $${{{\mathcal X}}_{\bar N}}(\theta)=\Dif X_\H (\bar K(\theta)) \bar N(\theta) + {\mathfrak{L}_{\omega}} \bar N(\theta)$$ is the infinitesimal displacement of the normal subbundle for the linearized dynamics. At this point, intermediate computations will be skipped for convenience. Such details are left to the reader (they are analogous to previous computations).
We start by controlling the correction of the displacement. To this end, we observe that $$\begin{aligned}
{{{\mathcal X}}_{\bar N}}(\theta) - {{{\mathcal X}}_{N}}(\theta) = {} &
\Dif X_{\H}(\bar K(\theta)) \bar N(\theta) -
\Dif X_{\H}(K(\theta)) N(\theta)
\\
& + {\mathfrak{L}_{\omega}} \bar N(\theta) - {\mathfrak{L}_{\omega}} N(\theta) \,,\end{aligned}$$ and we readily obtain $$\begin{aligned}
{\|{{{{\mathcal X}}_{\bar N}} - {{{\mathcal X}}_{N}}}\|}_{\rho-3\delta} \leq {} &
\frac{\cteDXH \CDeltaN + \cteDDXH \CDeltaK \CN \delta + \CDeltaLieN}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \nonumber \\
=: {} & \frac{\CDeltaLoperN}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \,.
\label{eq:CDeltaLoperN}\end{aligned}$$ Now, we can control the correction of the torsion matrix $$\begin{aligned}
& {\|{\bar T -T}\|}_{\rho-3\delta} \leq
{\|{\bar N^\top}\|}_{\rho-3\delta} {\|{\Omega \circ \bar K}\|}_{\rho-2\delta}
{\|{
{{{\mathcal X}}_{\bar N}} - {{{\mathcal X}}_{N}}
}\|}_{\rho-3\delta} \nonumber \\
& \qquad \hphantom{\leq} +
{\|{\bar N^\top}\|}_{\rho-3\delta} {\|{\Omega \circ \bar K - \Omega \circ K }\|}_{\rho-2\delta}
{\|{{{{\mathcal X}}_{N}}}\|}_{\rho-\delta} \nonumber \\
& \qquad \hphantom{\leq}+
{\|{\bar N^\top-N^\top}\|}_{\rho-3\delta} {\|{\Omega \circ K}\|}_{\rho}
{\|{{{{\mathcal X}}_{N}}}\|}_{\rho-\delta} \nonumber \\
& \qquad \leq
\frac{\CNT \cteOmega \CDeltaLoperN + \CNT \cteDOmega \CDeltaK \CLoperN \delta + \CDeltaNT \cteOmega \CLoperN}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho \nonumber \\
& \qquad
=: \frac{\CDeltaT}{\gamma^2 \delta^{2\tau+1}} {\|{E}\|}_\rho\,. \label{eq:CDeltaT}\end{aligned}$$ Then, we introduce the constant $$\CDeltaTOI := 2 (\sigmaT)^2 \CDeltaT$$ and check the condition in Lemma \[lem:aux\]: $$\begin{aligned}
\frac{2 (\sigmaT)^2 |{\langle{\bar T}\rangle} - {\langle{T}\rangle}|}{\sigmaT-|{\langle{T}\rangle}^{-1}|} \leq {} &
\frac{2 (\sigmaT)^2 {\|{\bar T - T}\|}_{\rho-3\delta}}{\sigmaT-|{\langle{T}\rangle}^{-1}|} \leq
\frac{2 (\sigmaT)^2 \CDeltaT}{\sigmaT-|{\langle{T}\rangle}^{-1}|} \frac{{\|{E}\|}_\rho}{\gamma^2 \delta^{2\tau+1}}
\nonumber \\
= {} &
\frac{\CDeltaTOI}{\sigmaT-{|{{\langle{T}\rangle}^{-1}}|}} \frac{{\|{E}\|}_\rho}{\gamma^2 \delta^{2\tau+1}}
< 1\,,
\label{eq:ingredient:iter:8}\end{aligned}$$ where the last inequality follows from Hypothesis (this corresponds to the sixth term in ). Hence, by invoking Lemma \[lem:aux\], we conclude that $$\label{eq:CDeltaTOI}
|{\langle{\bar T}\rangle}^{-1}| <\sigmaT\,, \qquad
|{\langle{\bar T}\rangle}^{-1} - {\langle{T}\rangle}^{-1}| \leq \frac{2 (\sigmaT)^2 \CDeltaT}{\gamma^2 \delta^{2\tau+1}}
{\|{E}\|}_\rho = \frac{\CDeltaTOI}{\gamma^2 \delta^{2\tau+1}}
{\|{E}\|}_\rho\,,$$ and so, we obtain the estimates and on the new object.
Convergence of the iterative process {#ssec:proof:KAM}
------------------------------------
Now we are ready to prove our first KAM theorem with conserved quantities. Once the quadratic procedure has been established in Section \[ssec:iter:lemmas\], proving the convergence of the scheme follows standard arguments. Nevertheless, the required computations will be carefully detailed, since we are interested in providing explicit conditions for the KAM theorem.
\[Proof of Theorem \[theo:KAM\]\] Let us consider the approximate torus $K_0:=K$ with initial error $E_0:=E$. We also introduce $B_0:=B$ and $T_0:=T$ associated with the initial approximation. By applying Lemma \[lem:KAM:inter:integral\] recursively, we obtain new objects $K_s=\bar K_{s-1}$, $E_s=\bar E_{s-1}$, $B_s=\bar B_{s-1}$ and $T_s=\bar T_{s-1}$. The domain of analyticity of these objects is reduced at every step. To characterize this fact, we introduce parameters $a_1>1$, $a_2>1$, $a_3=3 \frac{a_1}{a_1-1} \frac{a_2}{a_2-1}$ and define $$\rho_0=\rho, \quad
\delta_0=\frac{\rho_0}{a_3}, \quad
\rho_s=\rho_{s-1} - 3 \delta_{s-1}, \quad
\delta_s= \frac{\delta_0}{a_1^s}, \quad
\rho_\infty = \lim_{s\to \infty} \rho_s = \frac{\rho_0}{a_2}\,.$$ We can select the above parameters (together with the parameter $\cauxT$) to optimize the convergence of the KAM process for a particular problem (see [@FiguerasHL17]).
Let us assume that we have successfully applied $s$ times Lemma \[lem:KAM:inter:integral\] (the Iterative Lemma), and let $K_s$, $E_s$, $B_s$ and $T_s$ be the objects at the $s$-step of the quasi-Newton method. We observe that condition is required at every step, but the construction has been performed in such a way that we control ${\|{\Dif K_s}\|}_{\rho_s}$, ${\|{(\Dif K_s)^\top}\|}_{\rho_s}$, ${\|{B_s}\|}_{\rho_s}$, $\dist(K_s({{\mathbb T}}^d_{\rho_s}),\partial \B)$, and ${|{{\langle{T_s}\rangle}^{-1}}|}$ uniformly with respect to $s$, so the constants that appear in Lemma \[lem:KAM:inter:integral\] (which are obtained in Table \[tab:constants:all\] and Table \[tab:constants:all:2\]) are taken to be the same for all steps by considering the worst value of $\delta_s$, that is, $\delta_0 = \rho_0/a_3$.
The first computation is tracking the sequence $E_s$ of invariance errors: $$\label{eq:conv:geom}
\begin{split}
{\|{E_s}\|}_{\rho_s} < {} &
\frac{\CE}{\gamma^4 \delta_{s-1}^{4\tau}} {\|{E_{s-1}}\|}_{\rho_{s-1}}^2 =
\frac{\CE a_1^{4\tau(s-1)}}{\gamma^4 \delta_0^{4\tau}} {\|{E_{s-1}}\|}_{\rho_{s-1}}^2 \\
< {} & \left( \frac{a_1^{4 \tau} \CE {\|{E_0}\|}_{\rho_0}}{\gamma^4 \delta_0^{4\tau}}
\right)^{2^s-1} a_1^{-4\tau s} {\|{E_0}\|}_{\rho_0}
< a_1^{-4\tau s} {\|{E_0}\|}_{\rho_0}
\,,
\end{split}$$ where we used the sums $1+2+\ldots+2^{s-1}=2^s-1$ and $1\cdot(s-1)+2\cdot(s-2)+2^2\cdot(s-3)+\ldots+2^{s-2}\cdot 1 = 2^s-s-1$. Notice that we also used the inequality $$\label{eq:theo:conv:1}
\frac{a_1^{4 \tau} \CE {\|{E_0}\|}_{\rho_0}}{\gamma^4 \delta_0^{4\tau}} <1\,,$$ which is included in . Now, using expression , we check the Hypothesis of the iterative Lemma, so that we can perform the step $s+1$. The required sufficient condition will be included in the hypothesis of the KAM theorem.
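For completeness, we note that the bound ${\|{E_s}\|}_{\rho_s} < \big(a_1^{4\tau} c\, {\|{E_0}\|}_{\rho_0}\big)^{2^s-1} a_1^{-4\tau s} {\|{E_0}\|}_{\rho_0}$, with $c:=\CE/(\gamma^4 \delta_0^{4\tau})$, can also be verified by induction on $s$: assuming it at step $s-1$, the recursive estimate gives $$\begin{aligned}
{\|{E_s}\|}_{\rho_s} < {} & c\, a_1^{4\tau(s-1)} \Big( \big(a_1^{4 \tau} c\, {\|{E_0}\|}_{\rho_0}\big)^{2^{s-1}-1} a_1^{-4\tau(s-1)} {\|{E_0}\|}_{\rho_0} \Big)^2 \\
= {} & \big(a_1^{4 \tau} c\, {\|{E_0}\|}_{\rho_0}\big)^{2^{s}-2}\, c\, {\|{E_0}\|}_{\rho_0}\, a_1^{-4\tau(s-1)}\, {\|{E_0}\|}_{\rho_0} \\
= {} & \big(a_1^{4 \tau} c\, {\|{E_0}\|}_{\rho_0}\big)^{2^{s}-1} a_1^{-4\tau s}\, {\|{E_0}\|}_{\rho_0}\,,\end{aligned}$$ which is precisely the claimed estimate at step $s$.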
The right inequality in reads: $$\frac{\CE {\|{E_s}\|}_{\rho_s}}{\gamma^4 \delta_s^{4\tau}} \leq
\frac{\CE a_1^{-4\tau s} {\|{E_0}\|}_{\rho_0}}{\gamma^4 \delta_s^{4\tau}}
=
\frac{\CE {\|{E_0}\|}_{\rho_0}}{\gamma^4 \delta_0^{4\tau}}
\leq a_1^{-4\tau}
< 1 \,,$$ where we used and .
The left inequality in has several terms (which correspond to the different components in ). The first of them, using again , is given by $$\label{eq:theo:conv:2}
\frac{{\|{E_s}\|}_{\rho_s}}{\delta_s} \leq \frac{a_1^s a_1^{-4\tau s} {\|{E_0}\|}_{\rho_0}}{\delta_0}
\leq \frac{{\|{E_0}\|}_{\rho_0}}{\delta_0} < \cauxT\,.$$ We used that $\tau \geq d-1 \geq 1$, so that $1-4\tau<0$. The last inequality in is included in . The second term is guaranteed by performing the following computation $$\label{eq:theo:conv:3}
\frac{2 \Csym {\|{E_s}\|}_{\rho_s}}{\gamma \delta_s^{\tau+1}}
\leq
\frac{2 \Csym a_1^{s(\tau+1)} a_1^{-4\tau s} {\|{E_0}\|}_{\rho_0}}{\gamma \delta_0^{\tau+1}}
\leq
\frac{2 \Csym {\|{E_0}\|}_{\rho_0}}{\gamma \delta_0^{\tau+1}}
< 1 \,,$$ where we used , the fact that $1-3\tau<0$, and we have included the last inequality in . The remaining conditions are similar. We only need to pay attention to the fact that they involve the objects $\Dif K_s$, $(\Dif K_s)^\top$, $B_s$ and ${\langle{T_s}\rangle}^{-1}$, at the $s^{\mathrm{th}}$ step. Hence, we have to relate these conditions to the corresponding initial objects $\Dif K_0$, $(\Dif K_0)^\top$, $B_0$ and ${\langle{T_0}\rangle}^{-1}$. For example, the third term in reads $$\left(\frac{ d\CDeltaK}{\sigmaDK-{\|{\Dif K_s}\|}_{\rho_s}}\right) \frac{{\|{E_s}\|}_{\rho_s}}{\gamma^2 \delta_s^{2\tau+1}}< 1\,,$$ and it is checked by performing the following computations $$\begin{aligned}
{\|{\Dif K_s}\|}_{\rho_s} + & \frac{d \CDeltaK{\|{E_s}\|}_{\rho_s}}{\gamma^2 \delta_s^{2\tau+1}}
<
{\|{\Dif K_0}\|}_{\rho_0} + \sum_{j=0}^s \frac{d \CDeltaK {\|{E_j}\|}_{\rho_j}}{\gamma^2 \delta_j^{2\tau+1}} \nonumber \\
< {} &
{\|{\Dif K_0}\|}_{\rho_0} + \sum_{j=0}^\infty \frac{d \CDeltaK
a_1^{(1-2\tau)j}}{\gamma^2 \delta_0^{2\tau+1}}
{\|{E_0}\|}_{\rho_0} \nonumber
\\
= {} &
{\|{\Dif K_0}\|}_{\rho_0} + \frac{d
\CDeltaK {\|{E_0}\|}_{\rho_0}}{\gamma^2 \delta_0^{2\tau+1}}
\left(
\frac{1}{1-a_1^{1-2\tau}}
\right) < \sigmaDK\,. \label{eq:theo:conv:4}\end{aligned}$$ Again, the last inequality is included in . The fourth, fifth and sixth terms in (associated with $(\Dif K_s)^\top$, $B_{s}$ and ${\langle{T_s}\rangle}^{-1}$, respectively) follow by reproducing the same computations. Finally, the seventh term in is checked as follows $$\begin{aligned}
\dist (K_s({{\mathbb T}}^d_{\rho_s}),\partial \B) - & \frac{\CDeltaK}{\gamma^2 \delta_s^{2\tau}} {\|{E_s}\|}_{\rho_s}>
\dist (K_0({{\mathbb T}}^d_{\rho_0}),\partial \B) - \sum_{j=0}^\infty \frac{\CDeltaK a_1^{-2\tau j} {\|{E_0}\|}_{\rho_0}}{\gamma^2 \delta_0^{2\tau}}
\nonumber \\
& = \dist (K_0({{\mathbb T}}^d_{\rho_0}),\partial \B) - \frac{\CDeltaK {\|{E_0}\|}_{\rho_0}}{\gamma^2 \delta_0^{2\tau}}
\frac{1}{1-a_1^{-2\tau}} > 0\,,
\label{eq:check:dist}\end{aligned}$$ where the last inequality is included into .
Having guaranteed all hypotheses of Lemma \[lem:KAM:inter:integral\], we collect the inequalities , , , and that are included in hypothesis . This follows by introducing the constant $\mathfrak{C}_1$ as $$\label{eq:cte:mathfrakC1}
\mathfrak{C}_1:=\max
\left\{
(a_1 a_3)^{4\tau} \CE \, , \, (a_3)^{2\tau+1} \gamma^2 \rho^{2\tau-1} \CDeltatot
\right\}$$ where $$\begin{aligned}
\CDeltaI := {} &
\max
\Bigg\{
\frac{d \CDeltaK}{\sigmaDK-{\|{\Dif K}\|}_{\rho}} \, , \,
\frac{2n \CDeltaK}{\sigmaDKT-{\|{(\Dif K)^\top}\|}_{\rho}} \, , \label{eq:CDeltaI}
\\
& \qquad\qquad\qquad\qquad \frac{\CDeltaB}{\sigmaB-{\|{B}\|}_{\rho}} \, , \,
\frac{\CDeltaTOI}{\sigmaT-{|{{\langle{T}\rangle}^{-1}}|}}
\Bigg\} \,, \nonumber \\
\CDeltaII := {} & \frac{\CDeltaK \delta}{\dist(K({{\mathbb T}}^d_{\rho}),\partial \B)}\,,
\label{eq:CDeltaII} \\
\CDeltatot := {} & \max
\Bigg\{
\frac{\gamma^2 \delta^{2\tau}}{\cauxT} \, , \,
2 \Csym \gamma \delta^\tau \, , \,
\frac{\CDeltaI}{1-a_1^{1-2\tau}} \, , \,
\frac{\CDeltaII}{1-a_1^{-2\tau}}
\Bigg\}
\label{eq:CDeltatot} \,.\end{aligned}$$ Note that we recovered the original notation $K=K_0$, $B=B_0$, $T=T_0$, $\rho=\rho_0$ and $\delta=\delta_0$ for the original objects.
Therefore, by induction, we can apply the iterative process infinitely many times. Indeed, we have $${\|{E_s}\|}_{\rho_s} < a_1^{-4 \tau s} {\|{E}\|}_{\rho} \longrightarrow 0\
\quad \mbox{when} \quad s\rightarrow \infty\,$$ so the iterative scheme converges to a true quasi-periodic torus $K_\infty$. As a result of the output of Lemma \[lem:KAM:inter:integral\], this object satisfies $K_\infty \in \Anal({{\mathbb T}}^{d}_{\rho_\infty})$ and $${\|{\Dif K_\infty}\|}_{\rho_\infty} < \sigmaDK\,,
\qquad
{\|{(\Dif K_\infty)^\top}\|}_{\rho_\infty} < \sigmaDKT\,,
\qquad
\dist(K_\infty({{\mathbb T}}^d),\pd \B) > 0\,.$$ Furthermore, we control the limit objects by repeating the computations in as follows $$\begin{aligned}
&{\|{K_\infty - K}\|}_{\rho_\infty} \leq {} \sum_{j=0}^\infty
{\|{K_{j+1} - K_j}\|}_{\rho_{j+1}} < \frac{\CDeltaK {\|{E}\|}_{\rho}}{\gamma^2 \delta^{2\tau}}
\frac{1}{1-a_1^{-2\tau}} =:
\frac{\mathfrak{C}_2 {\|{E}\|}_{\rho}}{\gamma^2 \rho^{2\tau}} \label{eq:Cmathfrak2}\,, \\
&{|{{\langle{c \circ K_\infty}\rangle} - {\langle{c \circ K}\rangle}}|} \leq {}
{\|{\Dif c}\|}_\B {\|{K_\infty - K}\|}_{\rho_\infty} <
\frac{\cteDc \mathfrak{C}_2 {\|{E}\|}_{\rho}}{\gamma^2 \rho^{2\tau}} =:
\frac{\mathfrak{C}_3 {\|{E}\|}_{\rho}}{\gamma^2 \rho^{2\tau}} \label{eq:Cmathfrak3}\,, \end{aligned}$$ thus obtaining the estimates in . This completes the proof of the ordinary KAM theorem.
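In a computer-assisted application of the theorem, the constant $\mathfrak{C}_1$ is assembled directly from the ingredient constants of Tables \[tab:constants:all\] and \[tab:constants:all:2\] and from the non-degeneracy thresholds. The following Python sketch mirrors the definitions of $\CDeltaI$, $\CDeltaII$, $\CDeltatot$ and $\mathfrak{C}_1$ in the proof above; the dictionary keys are ad-hoc names (not the paper's notation), and the final inequality is written in the form $\mathfrak{C}_1 {\|{E}\|}_\rho/(\gamma^4\rho^{4\tau})<1$, which is how the collected inequalities combine once $\delta=\rho/a_3$ is substituted (this packaging is an assumption of the illustration and should be matched against the statement of Theorem \[theo:KAM\]).

```python
def check_kam_condition(norm_E, gamma, rho, tau, a1, a2, c):
    """Assemble C_1 and check the smallness condition (illustrative sketch).

    `c` is a dict with ad-hoc keys for the ingredient constants and bounds:
      'CE', 'Csym', 'CDeltaK', 'CDeltaB', 'CDeltaTOI', 'cauxT', 'd', 'n',
      'sigmaDK', 'normDK', 'sigmaDKT', 'normDKT', 'sigmaB', 'normB',
      'sigmaT', 'normTinv', 'distKB'.
    """
    a3 = 3.0 * a1 / (a1 - 1.0) * a2 / (a2 - 1.0)
    delta = rho / a3                                   # delta_0
    C_I = max(c['d'] * c['CDeltaK'] / (c['sigmaDK'] - c['normDK']),
              2 * c['n'] * c['CDeltaK'] / (c['sigmaDKT'] - c['normDKT']),
              c['CDeltaB'] / (c['sigmaB'] - c['normB']),
              c['CDeltaTOI'] / (c['sigmaT'] - c['normTinv']))
    C_II = c['CDeltaK'] * delta / c['distKB']
    C_tot = max(gamma**2 * delta**(2 * tau) / c['cauxT'],
                2 * c['Csym'] * gamma * delta**tau,
                C_I / (1 - a1**(1 - 2 * tau)),
                C_II / (1 - a1**(-2 * tau)))
    C_1 = max((a1 * a3)**(4 * tau) * c['CE'],
              a3**(2 * tau + 1) * gamma**2 * rho**(2 * tau - 1) * C_tot)
    return C_1 * norm_E / (gamma**4 * rho**(4 * tau)) < 1
```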
Proof of the generalized iso-energetic KAM theorem {#sec:proof:KAM:iso}
==================================================
In this section we present a proof of Theorem \[theo:KAM:iso\], following the same structure as the proof of Theorem \[theo:KAM\] in Section \[sec:proof:KAM\]. We will emphasize the differences between the two proofs, and analogous computations will be omitted for the sake of brevity. In Section \[ssec:qNewton:iso\] we discuss the approximate solution of the linearized equations in the symplectic frame constructed in Section \[sec:lemmas\]. In Section \[ssec:iter:lemmas:iso\] we produce quantitative estimates for the objects obtained when performing one iteration of the previous procedure. Finally, in Section \[ssec:proof:KAM:iso\] we discuss the convergence of the quasi-Newton method.
The quasi-Newton method {#ssec:qNewton:iso}
-----------------------
The proof of Theorem \[theo:KAM:iso\] consists again in refining $K(\theta)$ and $\omega$ by means of a quasi-Newton method. In this case, the total error is associated with the invariance error and the target energy level: $$E_c(\theta)=
\begin{pmatrix}
E(\theta) \\
E^\omega
\end{pmatrix}
=
\begin{pmatrix}
X_\H(K(\theta)) + {\mathfrak{L}_{\omega}} K(\theta) \\
{\langle{c \circ K}\rangle} - c_0
\end{pmatrix}\, .$$ Then, we look for a corrected parameterization $\bar K(\theta)= K(\theta)+\DeltaK(\theta)$ and a corrected frequency $\bar \omega= \omega + \Deltao$ by considering the linearized system $$\label{eq:lin1:iso}
\begin{split}
& \Dif X_\H (K(\theta)) \DeltaK(\theta) + {\mathfrak{L}_{\omega}}\DeltaK(\theta) +
{\mathfrak{L}_{\Deltao}} K(\theta)
= - E(\theta) \,, \\
& {\langle{(\Dif c \circ K) \DeltaK }\rangle} = - E^\omega \,.
\end{split}$$ If we obtain a good enough approximation of the solution $(\DeltaK(\theta),\Deltao)$ of , then $\bar K(\theta)$ provides a parameterization of an approximately invariant torus of frequency $\bar \omega$, with a quadratic error in terms of $E_c(\theta)=(E(\theta),E^\omega)$.
To face the linearized equations , we introduce again the linear change $$\label{eq:choice:DK:iso}
\DeltaK (\theta) = P(\theta) \xi(\theta) \,,$$ where $P(\theta)$ is the approximately symplectic frame characterized in Lemma \[lem:sympl\]. In addition, to ensure Diophantine properties for $\bar
\omega$, we select a parallel correction of the frequency: $$\label{eq:choice:Do:iso}
\Deltao = - \omega \, \xio\,,$$ where $\xio$ is a real number. This guarantees the solvability of system along the iterative procedure. The following notation for the new unknowns will be useful $$\xi_c(\theta)=
\begin{pmatrix}
\xi(\theta) \\
\xi^\omega
\end{pmatrix}
\,,
\qquad
\xi(\theta)=
\begin{pmatrix}
\xi^L(\theta) \\
\xi^N(\theta)
\end{pmatrix}
\,.$$
Then, taking into account the expressions and , the system of equations in becomes $$\begin{aligned}
& \left( \Dif X_\H (K(\theta)) P(\theta) + {\mathfrak{L}_{\omega}}P(\theta) \right) \xi(\theta)
+P(\theta) {\mathfrak{L}_{\omega}} \xi(\theta)
- {\mathfrak{L}_{\omega}} K(\theta) \xio
= - E(\theta) \,,
\label{eq:lin:1E:iso} \\
& {\langle{(\Dif c \circ K) P \xi}\rangle} = - E^\omega \,.
\nonumber\end{aligned}$$ We now multiply both sides of by $-\Omega_0 P(\theta)^\top
\Omega(K(\theta))$, and we use the geometric properties in Lemma \[lem:sympl\] and Lemma \[lem:reduc\], thus obtaining the equivalent equations: $$\begin{aligned}
& \left(\Lambda(\theta)+\Ered(\theta) \right) \xi(\theta)
+ \left( I_{2n} - \Omega_0 \Esym(\theta) \right) {\mathfrak{L}_{\omega}} \xi(\theta)
\label{eq:lin:1Enew:iso} \\
\
&\qquad\qquad + \Omega_0 P(\theta)^\top \Omega(K(\theta)) {\mathfrak{L}_{\omega}} K(\theta) \xio
=
\Omega_0 P(\theta)^\top \Omega(K(\theta)) E(\theta) \,, \nonumber \\
& {\langle{(\Dif c \circ K) N \xi^N }\rangle} + {\langle{(\Dif c \circ K)L \xi^L }\rangle}
= - E^\omega \,.\label{eq:lin:1Eh:new:iso} \end{aligned}$$
We observe that $$\begin{split}
\Omega_0 P(\theta)^\top \Omega(K(\theta)) {\mathfrak{L}_{\omega}} K(\theta)
& = -\Omega_0 P(\theta)^\top \Omega(K(\theta)) L(\theta) \homega \\
& =
\begin{pmatrix}
\homega \\
0_n
\end{pmatrix}
+
\begin{pmatrix}
A(\theta)^\top \Elag(\theta)\homega \\
-\Elag(\theta)\homega
\end{pmatrix}\, ,
\end{split}$$ where we used the notation for $\homega$ introduced in the statement of the theorem (e.g. equation ). Moreover, recalling the computations in Lemma \[lem:cons:H:p\], we obtain $$\begin{aligned}
\Dif c(K(\theta)) L(\theta) = {} &
\begin{pmatrix}
\Dif c(K(\theta)) \Dif K(\theta) & \Dif c(K(\theta)) X_p(K(\theta))
\end{pmatrix} \\
= {} &
\begin{pmatrix}
\Dif (c(K(\theta))) & 0_{n-d}^\top
\end{pmatrix} \\
= {} &
\begin{pmatrix}
\Dif ({\mathfrak{R}_{\omega}}(\Dif c(K(\theta))E(\theta))) & 0_{n-d}^\top
\end{pmatrix} \,.\end{aligned}$$
From the above expressions, we conclude that equations and are approximated by a triangular system, which requires solving two cohomological equations of the form consecutively. Quantitative estimates for the solutions of such equations are provided in the following statement.
\[lem:upperT:iso\] Let $\omega \in {{{\mathcal D}}_{\gamma,\tau}}$, $\eta^\omega \in {{\mathbb R}}$, and let us consider the map $\eta= (\eta^L,\eta^N) : {{\mathbb T}}^d \to
{{\mathbb R}}^{2n}\simeq {{\mathbb R}}^n\times{{\mathbb R}}^n$, with components in $\Anal({{\mathbb T}}^d_\rho)$, and the maps $T : {{\mathbb T}}^d \rightarrow {{\mathbb R}}^{n\times n}$ and $\Tdown : {{\mathbb T}}^d \to {{\mathbb R}}^{1\times n}$, with components in $\Anal({{\mathbb T}}^d_{\rho-\delta})$. Let us introduce the notation $$T_c(\theta) =
\begin{pmatrix}
T(\theta) & \homega \\
\Tdown(\theta) & 0
\end{pmatrix},
\qquad
\homega =
\begin{pmatrix} \omega \\ 0_{n-d} \end{pmatrix}\,.$$ Let us assume that $T_c$ satisfies the non-degeneracy condition $\det
{\langle{T_c}\rangle} \neq 0$, and $\eta$ satisfies the compatibility condition ${\langle{\eta^N}\rangle}=0_n$. Then, for any $\xi^L_0\in {{\mathbb R}}^n$, the system of equations $$\label{eq:syst:iso:peich}
\begin{split}
\begin{pmatrix}
O_n & T(\theta) \\
O_n & O_n
\end{pmatrix}
\begin{pmatrix}
\xi^L(\theta) \\
\xi^N(\theta)
\end{pmatrix}
+
\begin{pmatrix}
{\mathfrak{L}_{\omega}} \xi^L(\theta) \\
{\mathfrak{L}_{\omega}} \xi^N(\theta)
\end{pmatrix}
+
\begin{pmatrix}
\homega \xio \\
0_n
\end{pmatrix}
= {} &
\begin{pmatrix}
\eta^L(\theta) \\
\eta^N(\theta)
\end{pmatrix} \\
{\langle{\Tdown \xi^N}\rangle} = {} & \eta^\omega\,,
\end{split}$$ has a solution given by $$\begin{aligned}
\xi^N(\theta)={} & \xi^N_0 + {\mathfrak{R}_{\omega}}(\eta^N(\theta)) \,, \label{eq:xiy:iso} \\
\xi^L(\theta)={} & \xi^L_0 + {\mathfrak{R}_{\omega}}(\eta^L(\theta) - T(\theta)
\xi^N(\theta)) \,,\label{eq:xix:iso}\end{aligned}$$ where $$\label{eq:averxiy:iso}
\begin{pmatrix}
\xi^N_0 \\
\xio
\end{pmatrix}
=
{\langle{T_c}\rangle}^{-1}
\begin{pmatrix}
{\langle{\eta^L-T {\mathfrak{R}_{\omega}}(\eta^N)}\rangle} \\
{\langle{\eta^\omega- \Tdown {\mathfrak{R}_{\omega}}(\eta^N)}\rangle}
\end{pmatrix}$$ and ${\mathfrak{R}_{\omega}}$ is given by .
Moreover, we have the estimates $$\begin{aligned}
& |\xi_0^N|, |\xio| \leq {\left|{{\langle{T_c}\rangle}^{-1}}\right|} \max \left\{
{\|{\eta^L}\|}_\rho
+ \frac{c_R}{\gamma \delta^\tau}
{\|{T}\|}_{\rho-\delta}
{\|{\eta^N}\|}_\rho
\, , \, \right.\\
& \qquad\qquad\qquad\qquad\qquad\qquad \left.
|\eta^\omega| + \frac{c_R }{\gamma \delta^\tau} {\|{\Tdown}\|}_{\rho-\delta} {\|{\eta^N}\|}_\rho\right\}\,, \\
& {\|{\xi^N}\|}_{\rho-\delta} \leq |\xi_0^N| + \frac{c_R}{\gamma \delta^\tau}
{\|{\eta^N}\|}_\rho \,, \\
& {\|{\xi^L}\|}_{\rho-2\delta} \leq |\xi_0^L| + \frac{c_R}{\gamma \delta^\tau}
\left(
{\|{\eta^L}\|}_{\rho-\delta}+{\|{T}\|}_{\rho-\delta}
{\|{\xi^N}\|}_{\rho-\delta}
\right) \,.\end{aligned}$$
The proof is analogous to that of Lemma \[lem:upperT\]. After solving $\xi^N = \xi_0^N + {\mathfrak{R}_{\omega}}(\eta^N)$, $\xi_0^N \in {{\mathbb R}}^n$ from the triangular system, we observe that $${\langle{\Tdown \xi^N}\rangle} =
{\langle{\Tdown}\rangle} \xi_0^N +
{\langle{\Tdown {\mathfrak{R}_{\omega}} (\eta^N)}\rangle} \,,$$ so that the last equation in becomes $${\langle{\Tdown}\rangle} \xi_0^N =
\eta^\omega-{\langle{\Tdown {\mathfrak{R}_{\omega}}(\eta^N)}\rangle}
={\langle{\eta^\omega-\Tdown {\mathfrak{R}_{\omega}}(\eta^N)}\rangle}
\,.$$ This equation, together with the compatibility condition required to solve the equation for $\xi^L$, yields a linear system which can be solved using that $\det {\langle{T_c}\rangle} \neq 0$, thus obtaining . The estimates are obtained using Lemma \[lem:Russmann\].
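For illustration purposes only, the triangular solve of Lemma \[lem:upperT:iso\] can be sketched in a few lines of Python. The sketch below is not the implementation used in this work: it assumes $d=n=1$, that ${\mathfrak{L}_{\omega}}$ acts on periodic functions as the derivative $\omega\,\partial_\theta$, and hence that ${\mathfrak{R}_{\omega}}$ amounts to dividing the non-zero Fourier coefficients by $2\pi\mathrm{i}\,k\omega$; all function names are illustrative.

```python
import numpy as np

def R_omega(eta, omega):
    """Zero-average solution of the cohomological equation L_omega u = eta - <eta>,
    obtained by dividing the non-zero Fourier modes by the small divisors."""
    eta_hat = np.fft.fft(eta)
    k = np.fft.fftfreq(eta.size, d=1.0 / eta.size)   # integer wave numbers
    u_hat = np.zeros_like(eta_hat)
    nonzero = np.abs(k) > 0
    u_hat[nonzero] = eta_hat[nonzero] / (2.0j * np.pi * k[nonzero] * omega)
    return np.fft.ifft(u_hat).real

def solve_triangular(eta_L, eta_N, eta_om, T, Tdown, omega):
    """Toy version of the solution formulas of the lemma for d = n = 1,
    with the normalization xi^L_0 = 0.  Requires <eta^N> = 0."""
    RN = R_omega(eta_N, omega)
    # averaged system <T_c> (xi^N_0, xi^omega)^T = right-hand side
    Tc_avg = np.array([[T.mean(), omega],
                       [Tdown.mean(), 0.0]])
    rhs = np.array([(eta_L - T * RN).mean(),
                    eta_om - (Tdown * RN).mean()])
    xiN0, xi_om = np.linalg.solve(Tc_avg, rhs)
    xiN = xiN0 + RN
    xiL = R_omega(eta_L - T * xiN, omega)
    return xiL, xiN, xi_om
```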
To approximate the solutions of –, we will invoke Lemma \[lem:upperT:iso\] taking $$\begin{aligned}
& \eta^L(\theta) = - N(\theta)^\top \Omega(K(\theta)) E(\theta)\,,
\label{eq:etaL:corr:iso} \\
& \eta^N(\theta) = L(\theta)^\top \Omega(K(\theta)) E(\theta)\,,
\label{eq:etaN:corr:iso} \\
& \eta^\omega = -E^\omega\,,
\label{eq:etao:corr:iso} \\
& \Tdown(\theta) = \Dif c(K(\theta)) N(\theta)\,,
\label{eq:Tdown:corr:iso}\end{aligned}$$ and $T(\theta)$ given by . We recall from Lemma \[lem:sympl\] that the compatibility condition ${\langle{\eta^N}\rangle}=0_n$ is satisfied. We will select the solution that satisfies $\xi_0^L={\langle{\xi^L}\rangle}=0_n$, even though other choices can be selected according to the context (see Remark \[rem:unicity\]).
Then, from Lemma \[lem:upperT:iso\] we have ${\|{\xi}\|}_{\rho-2\delta}={{\mathcal O}}({\|{E_c}\|}_\rho)$ and $|\xi^\omega|={{\mathcal O}}({\|{E_c}\|}_\rho)$, and using the geometric properties characterized in Section \[sec:lemmas\], we have ${\|{\Ered}\|}_{\rho-2\delta}={{\mathcal O}}({\|{E}\|}_\rho)$ and ${\|{\Esym}\|}_{\rho-2\delta}={{\mathcal O}}({\|{E}\|}_\rho)$. Hence, the error produced when approximating the solutions of – using the solutions of will be controlled by ${{\mathcal O}}({\|{E_c}\|}_\rho^2)$. This, together with other estimates, will be suitably quantified in the next section.
One step of the iterative procedure {#ssec:iter:lemmas:iso}
-----------------------------------
In this section we apply one correction of the quasi-Newton method described in Section \[ssec:qNewton:iso\] and we obtain sharp quantitative estimates for the new approximately invariant torus and related objects. We set sufficient conditions to preserve the control of the previous estimates.
\[The Iterative Lemma in the iso-energetic case\] \[lem:KAM:inter:integral:iso\] Let us consider the same setting and hypotheses of Theorem \[theo:KAM:iso\], and a constant $\cauxT>0$. Then, there exist constants $\CDeltaK$, $\CDeltao$, $\CDeltaB$, $\CDeltaTcOI$ and $\CEc$ such that if the inequalities $$\label{eq:cond1:K:iter:iso}
\frac{\hCDelta {\|{E_c}\|}_\rho}{\gamma^2 \delta^{2\tau+1}} < 1
\qquad\qquad
\frac{\CEc {\|{E_c}\|}_\rho}{\gamma^4 \delta^{4\tau}} < 1$$ hold for some $0<\delta< \rho$, where $$\label{eq:mathfrak1:iso}
\begin{split}
\hCDelta := {} & \max \bigg\{\frac{\gamma^2 \delta^{2\tau}}{\cauxT}
\, , \,
2 \Csym \gamma \delta^{\tau}
\, , \,
\frac{ d \CDeltaK }{\sigmaDK - {\|{\Dif K}\|}_\rho}
\, , \,
\frac{ 2n \CDeltaK }{\sigmaDKT - {\|{(\Dif K)^\top}\|}_\rho}
\, , \,
\\
& \frac{\CDeltaB}{\sigmaB - {\|{B}\|}_\rho}
\, , \,
\frac{\CDeltaTcOI}{\sigmaTc - {|{{\langle{T_c}\rangle}^{-1}}|}}
\, , \,
\frac{\CDeltaK \delta}{\dist (K({{\mathbb T}}^d_\rho),\partial B)}
\, , \,
\frac{\CDeltao \gamma \delta^{\tau+1}}{\dist (\omega,\partial \Theta)}
\bigg\} \,,
\end{split}$$ then we have an approximate torus of frequency $\bar \omega=\omega+\Deltao$ given by $\bar K=K+\DeltaK$, with components in $\Anal({{\mathbb T}}^d_{\rho-2\delta})$, that defines new objects $\bar B$ and $\bar T_c$ (obtained replacing $K$ by $\bar K$) satisfying $$\begin{aligned}
& {\|{\Dif \bar K}\|}_{\rho-3\delta} < \sigmaDK \,, \label{eq:DK:iter1:iso} \\
& {\|{(\Dif \bar K)^\top}\|}_{\rho-3\delta} < \sigmaDKT \,, \label{eq:DKT:iter1:iso} \\
& {\|{\bar B}\|}_{\rho-3\delta} < \sigmaB \,, \label{eq:B:iter1:iso} \\
& {|{{\langle{\bar T_c}\rangle}^{-1}}|} < \sigmaTc \,, \label{eq:T:iter1:iso} \\
& \dist(\bar K({{\mathbb T}}^d_{\rho-2\delta}),\partial \B) >0 \,, \label{eq:distB:iter1:iso}\\
& \dist(\bar \omega,\partial \Theta) >0 \,, \label{eq:disto:iter1:iso} \end{aligned}$$ and $$\begin{aligned}
& {\|{\bar K-K}\|}_{\rho-2 \delta} < \frac{\CDeltaK}{\gamma^2 \delta^{2\tau}}
{\|{E_c}\|}_\rho\,, \label{eq:est:DeltaK:iso} \\
& {|{\bar \omega-\omega}|} < \frac{\CDeltao}{\gamma \delta^{\tau}}
{\|{E_c}\|}_\rho\,, \label{eq:est:Deltao:iso} \\
& {\|{\bar B-B}\|}_{\rho-3\delta} < \frac{\CDeltaB}{\gamma^2 \delta^{2 \tau+1}}
{\|{E_c}\|}_\rho\,, \label{eq:est:DeltaB:iso} \\
& {|{{\langle{\bar T_c}\rangle}^{-1}-{\langle{T_c}\rangle}^{-1} }|} < \frac{\CDeltaTcOI}{\gamma^2 \delta^{2 \tau+1}}
{\|{E_c}\|}_\rho\,, \label{eq:est:DeltaT:iso}\end{aligned}$$ The new total error is given by $$\bar E_c(\theta)=
\begin{pmatrix}
\bar E(\theta) \\
\bar E^\omega
\end{pmatrix}
=
\begin{pmatrix}
X_\H(\bar K(\theta)) + {\mathfrak{L}_{\bar \omega}} \bar K(\theta) \\
{\langle{c \circ \bar K}\rangle} - c_0
\end{pmatrix} \,,$$ and satisfies $$\label{eq:E:iter1:iso}
{\|{\bar E_c}\|}_{\rho-2\delta} < \frac{\CEc}{\gamma^4
\delta^{4\tau}}{\|{E_c}\|}_\rho^2\,.$$ The above constants are collected in Table \[tab:constants:all:2:iso\].
The proof of this result is parallel to the ordinary situation. On the one hand, those constants that must be changed in this result (e.g. $\CxiNO$) will be redefined using the symbol “$:=$” and will be included in Table \[tab:constants:all:2:iso\]. On the other hand, those constants that are not redefined (e.g. $\CDeltaK$) will have the same expression as in the proof of Lemma \[lem:KAM:inter:integral\] and the reader is referred to Table \[tab:constants:all:2\] for the corresponding label in the text.
#### *Step 1: Control of the new parameterization*.
We start by considering the new objects $\bar K(\theta)=K(\theta) + \DeltaK(\theta)$ and $\bar \omega = \omega+\Deltao$, obtained from the expressions and , using the solutions of the system taking the objects and . We have $${\|{\eta^L}\|}_\rho \leq \CNT \cteOmega {\|{E}\|}_\rho \,,
\qquad
{\|{\eta^N}\|}_\rho \leq \CLT \cteOmega {\|{E}\|}_\rho \,,
\qquad
{|{\eta^\omega}|} \leq {|{E^\omega}|} \,,$$ and $${\|{\Tdown}\|}_{\rho-\delta} =
{\|{(\Dif c\circ K) N}\|}_{\rho-\delta} \leq {\|{\Dif c}\|}_\B {\|{N}\|}_\rho \leq \cteDc \CN\,.$$ Notice that $${\|{E_c}\|}_\rho = \max \{{\|{E}\|}_\rho,|E^\omega|\}\,,$$ so, from now on, we will use that ${\|{E}\|}_\rho \leq {\|{E_c}\|}_\rho$ and $|E^\omega| \leq {\|{E_c}\|}_\rho$.
In order to invoke Lemma \[lem:twist\] (we must fulfill condition ) we include the inequality $$\label{eq:ingredient:iter:1:iso}
\frac{{\|{E_c}\|}_\rho}{\delta} < \cauxT$$ into Hypothesis (this corresponds to the first term in ). Hence, combining Lemma \[lem:twist\] and Lemma \[lem:upperT:iso\], we obtain estimates for the solution of the cohomological equations (we recall that $\xi_0^L=0_n$) $$\begin{aligned}
|\xi_0^N|,|\xi^\omega| \leq {} &
\sigmaTc \max \Big\{\CNT \cteOmega + \frac{c_R}{\gamma \delta^\tau}
\CT \CLT \cteOmega \, , \, 1 + \frac{c_R}{\gamma \delta^\tau} \cteDc \CN \CLT
\cteOmega \Big\}{\|{E_c}\|}_\rho \nonumber \\
& =: \frac{\CxiNO}{\gamma \delta^\tau}{\|{E_c}\|}_\rho
=: \frac{\Cxio}{\gamma \delta^\tau}{\|{E_c}\|}_\rho\,, \label{eq:CxiNO:iso}\end{aligned}$$ and we observe that $\xi^L(\theta)$ and $\xi^N(\theta)$ are controlled as in Lemma \[lem:KAM:inter:integral\], and so are the objects $\xi(\theta)$, $\bar K(\theta)$ and $\bar K(\theta)-K(\theta)$, thus obtaining the estimate in . Observing that $$\label{eq:Comega}
{|{\omega_*}|} < {|{\omega}|} < \sigmao {|{\omega_*}|} =: \Comega\,,$$ we get the estimate as follows: $$\label{eq:CDeltao}
{|{\bar \omega-\omega}|} = {|{\omega \xi^\omega}|}< \frac{\Comega \Cxio}{\gamma \delta^\tau}
{\|{E_c}\|}_\rho
=: \frac{\CDeltao}{\gamma \delta^\tau}
{\|{E_c}\|}_\rho
\,.$$
To obtain , we repeat the computations in using Hypothesis (this corresponds to the seventh term in ). Similarly, we obtain : $$\dist (\bar\omega,\pd\Theta) \geq \dist(\omega,\pd\Theta) - {|{\Deltao}|} \geq
\dist (\omega,\pd\Theta) - \frac{\CDeltao}{\gamma \delta^\tau} {\|{E_c}\|}_\rho >0\,,$$ where we used Hypothesis (this corresponds to the eighth term in ). A direct consequence of the fact that the new frequency $\bar\omega$ is strictly contained in $\Theta$, and hence ${|{\omega_*}|} < {|{(1-\xi^\omega) \omega}|} < \sigmao {|{\omega_*}|}$, is that we have $$\label{eq:prop:omega}
\frac{1}{\sigmao} < {1-\xi^\omega} < \sigmao\,,
\qquad
\frac{1}{\sigmao} < \frac{1}{1-\xi^\omega} < \sigmao\,.$$
#### *Step 2: Control of the new error of invariance*.
To control the error of invariance of the corrected parameterization $\bar K$, we first consider the error in the solution of the linearized equation , that is $$\begin{aligned}
\Elin (\theta) := {} & \Ered(\theta) \xi(\theta) - \Omega_0 \Esym(\theta) {\mathfrak{L}_{\omega}}\xi(\theta)
+
\begin{pmatrix}
A(\theta)^\top \Elag(\theta) \\
-\Elag(\theta)
\end{pmatrix}
\hat\omega \xi^\omega
\,, \\
\Elin^\omega := {} &
{\langle{
\begin{pmatrix}
\Dif ( {\mathfrak{R}_{\omega}}((\Dif c \circ K) E)) & 0_{n-d}^\top
\end{pmatrix}
\xi^L}\rangle} \,.\end{aligned}$$ First, we control ${\mathfrak{L}_{\omega}}\xi(\theta)$ in a similar fashion as in Step 2 of the ordinary case, but using now that $\xi(\theta)$ is the solution of . The only difference is that $$\begin{aligned}
\nonumber
{\|{{\mathfrak{L}_{\omega}} \xi^L}\|}_{\rho-\delta}
={} &
{\|{\eta^L - T \xi^N - \hat\omega \xi^\omega}\|}_{\rho-\delta} \\
\leq & \left(\CNT \cteOmega + \CT \frac{\CxiN}{\gamma \delta^\tau} + \Comega \frac{\Cxio}{\gamma \delta^\tau} \right){\|{E}\|}_\rho
=: \frac{\CLiexiL}{\gamma \delta^{\tau}} {\|{E}\|}_\rho \,,
\label{eq:CLiexiL:iso}\end{aligned}$$
Then, we control $\Elin(\theta)$ and $\Elin^\omega$ by $$\begin{aligned}
{\|{\Elin}\|}_{\rho-2\delta} \leq {} &
{\|{\Ered}\|}_{\rho-2\delta} {\|{\xi}\|}_{\rho-2\delta} +
\cteOmega {\|{\Esym}\|}_{\rho-2\delta} {\|{{\mathfrak{L}_{\omega}}\xi}\|}_{\rho-2 \delta} \nonumber \\
& + \max \{ {\|{A}\|}_{\rho-2\delta} \, , \, 1\} {\|{\Elag}\|}_{\rho-2 \delta} {|{\hat \omega}|} {|{\xi^\omega}|}
\nonumber \\
\leq {} &
\frac{\Cred}{\gamma \delta^{\tau+1}}
\frac{\Cxi}{\gamma^2 \delta^{2 \tau}}
{\|{E_c}\|}_\rho^2
+
\cteOmega
\frac{\Csym}{\gamma \delta^{\tau+1}}
\frac{\CLiexi}{\gamma \delta^{\tau}}
{\|{E_c}\|}_\rho^2 \nonumber \\
& + \frac{\max\{ \CA \, , \, 1\} \Clag \Comega \CxiNO}{\gamma^2 \delta^{2\tau+1}}
{\|{E_c}\|}_\rho^2
=: \frac{\Clin}{\gamma^3 \delta^{3\tau+1}} {\|{E_c}\|}_\rho^2\,,
\label{eq:Clin:iso} \\
{|{\Elin^\omega}|} \leq {} & \frac{d c_R \cteDc}{\gamma \delta^{\tau+1}} {\|{E_c}\|}_\rho
\frac{\CxiL}{\gamma^2\delta^{2\tau}} {\|{E_c}\|}_\rho =: \frac{\Clino}{\gamma^3 \delta^{3 \tau+1}}{\|{E_c}\|}_\rho^2 \,.
\label{eq:Clino:iso}\end{aligned}$$
After performing the correction, the total error associated with the new parameterization is given by (the computation is analogous to ) $$\bar E_c(\theta) =
\begin{pmatrix}
\bar E(\theta) \\
\bar E^\omega
\end{pmatrix} =
\begin{pmatrix}
P(\theta) (I-\Omega_0 \Esym(\theta))^{-1} \Elin(\theta)+\Delta^2 X(\theta) + {\mathfrak{L}_{\Deltao}}\DeltaK(\theta) \\
\Elin^\omega + {\langle{ \Delta^2 c(\theta)}\rangle}
\end{pmatrix} \,,$$ where $\Delta^2 X(\theta)$ is given by and $$\begin{aligned}
\Delta^2 c(\theta) = {} & \int_0^1 (1-t) \Dif^2 c(K(\theta)+t \DeltaK(\theta)) [\DeltaK(\theta),\DeltaK(\theta)] \dif t\,.\end{aligned}$$ Now we observe that $${\mathfrak{L}_{\Delta\omega}} \DeltaK(\theta) = -\xio {\mathfrak{L}_{\omega}} \DeltaK(\theta) \,,$$ where ${\mathfrak{L}_{\omega}} \DeltaK(\theta)$ is controlled using the expression $${\mathfrak{L}_{\omega}} \DeltaK(\theta)
=
{\mathfrak{L}_{\omega}} L(\theta) \xi^L(\theta)
+L(\theta) {\mathfrak{L}_{\omega}} \xi^L(\theta)
+{\mathfrak{L}_{\omega}} N(\theta) \xi^N(\theta)
+N(\theta) {\mathfrak{L}_{\omega}}\xi^N(\theta) \,,$$ thus obtaining $$\begin{aligned}
{\|{{\mathfrak{L}_{\omega}} \Delta K}\|}_{\rho-2\delta} \leq {} &
\left(
\frac{\CLieL \CxiL}{\gamma^2 \delta^{2\tau}} +
\frac{\CL \CLiexiL}{\gamma \delta^\tau} + \frac{\CLieN \CxiN}{\gamma \delta^{\tau}}
+\CN \CLiexiN
\right){\|{E}\|}_\rho
\nonumber \\
=:{} & \frac{\CLieDeltaK}{\gamma^2 \delta^{2\tau}} {\|{E}\|}_\rho\,.
\label{eq:CLieDeltaK}\end{aligned}$$ Hence, we have: $$\begin{aligned}
{\|{\bar E}\|}_{\rho-2\delta} \leq {} &\left(\frac{ 2(\CL+\CN)\Clin}{\gamma^3 \delta^{3\tau+1}}
+ \frac{1}{2} \cteDDXH \frac{(\CDeltaK)^2}{\gamma^4 \delta^{4\tau}}
+ \frac{ \Cxio \CLieDeltaK}{\gamma^3 \delta^{3\tau}}
\right)
{\|{E_c}\|}_\rho^2 \nonumber \\
=: {} & \frac{\CE}{\gamma^4 \delta^{4\tau}} {\|{E_c}\|}_\rho^2\,,
\label{eq:CE:iso}\end{aligned}$$ where we used the second term in (Hypothesis ). Moreover, we get $$\begin{aligned}
|\bar E^\omega| \leq & \left( \frac{\Clino}{\gamma^3 \delta^{3 \tau+1}} + \frac{1}{2} \cteDDc \frac{(\CDeltaK)^2}{\gamma^4 \delta^{4\tau}} \right) {\|{E_c}\|}_\rho^2 =: \frac{\CEo}{\gamma^4 \delta^{4\tau}} {\|{E_c}\|}_\rho^2\,
\label{eq:CEo:iso}\end{aligned}$$ and, finally, $$\label{eq:CEc:iso}
{\|{\bar E_c}\|}_{\rho-2\delta} \leq
\frac{\max \left\{
\CE \, , \, \CEo
\right\}}{\gamma^4 \delta^{4\tau}} {\|{E_c}\|}_\rho^2 =:
\frac{\CEc}{\gamma^4 \delta^{4\tau}} {\|{E_c}\|}_\rho^2\,.$$ We have obtained the estimate . Notice that the second assumption in and imply that $$\label{eq:barEvsE:iso}
{\|{\bar E_c}\|}_{\rho-2\delta} < {\|{E_c}\|}_\rho< \delta \cauxT\,.$$
#### *Steps 3, 4 and 5*.
All arguments and computations presented in these steps depend only on the invariance equation. Hence, the control of the new frames $L(\theta)$, $N(\theta)$ and the new transversality condition is exactly the same as in Lemma \[lem:KAM:inter:integral\], but replacing ${\|{E}\|}_\rho$ by ${\|{E_c}\|}_\rho$. Specifically, we obtain the estimates and using Hypothesis (they correspond to the third and fourth term in , respectively). We obtain the estimates and following the computations in and (using the fifth term in ).
#### *Step 6: Control of the action of left operator*.
Notice that the action of ${\mathfrak{L}_{\omega}}$ is affected by the change of the frequency, since now the natural operator to control is ${\mathfrak{L}_{\bar \omega}}$. From now on, given any object $X$, we introduce the operator $$\label{eq:overline:operator}
\DLieo X (\theta) :=
{\mathfrak{L}_{\bar \omega}} \bar X(\theta) - {\mathfrak{L}_{\omega}} X(\theta)$$ for the convenience of notation.
The control of ${\mathfrak{L}_{\bar \omega}} \bar K$ is straightforward, since $${\|{{\mathfrak{L}_{\bar \omega}} \bar K}\|}_{\rho-2\delta} \leq
{\|{\bar E_c}\|}_{\rho-2\delta}+{\|{X_\H \circ \bar K}\|}_{\rho-2\delta}
\leq
\delta \cauxT + \cteXH = \CLieK\,,$$ and similarly we obtain ${\|{{\mathfrak{L}_{\bar \omega}} \bar L}\|}_{\rho-3\delta}\leq \CLieL$ and ${\|{{\mathfrak{L}_{\bar \omega}} \bar L^\top}\|}_{\rho-3\delta}\leq \CLieLT$. However, to control increments of the form ${\|{\DLieo K}\|}_{\rho-2\delta}$ we need to include an additional term. More specifically, $$\begin{aligned}
\DLieo K(\theta)
= {} & {\mathfrak{L}_{\Delta\omega}}\bar K(\theta) + {\mathfrak{L}_{\omega}} \Delta K(\theta)
= {} -\frac{\xi^\omega}{1-\xi^\omega} {\mathfrak{L}_{\bar\omega}} \bar K(\theta) + {\mathfrak{L}_{\omega}} \Delta K(\theta)\,,\end{aligned}$$ where we use that $\bar\omega= (1-\xi^\omega)\omega$, and, from bounds and , $$\begin{aligned}
{\|{\DLieo K}\|}_{\rho-2\delta} \leq {} &
\left(\frac{\sigmao \Cxio \CLieK}{\gamma \delta^\tau} + \frac{\CLieDeltaK}{\gamma^2 \delta^{2\tau}}\right) {\|{E_c}\|}_\rho =: \frac{\CDeltaLieK}{\gamma^2 \delta^{2\tau}} {\|{E_c}\|}_\rho\,.
\label{eq:CDeltaLieKiso}\end{aligned}$$
We now observe that this is the only estimate that must be updated, since it is the only place where cohomological equations play a role. For example, we have $$\begin{aligned}
\DLieo L(\theta)
= {} &
\begin{pmatrix}
\DLieo \Dif K(\theta)
&
\DLieo(X_p \circ K)
\end{pmatrix} \\
= {} &
\begin{pmatrix}
\Dif (\DLieo K(\theta))
&
\Dif X_p \circ \bar K\, \DLieo K + (\Dif X_p \circ \bar K - \Dif X_p \circ K)\, {\mathfrak{L}_{\omega}} K
\end{pmatrix}\end{aligned}$$ and this expression formally yields the same estimate as in , but using the constants $\CDeltaLieK$, $\CDeltaK$ and $\CLieK$ defined in this section (and replacing $E$ by $E_c$). This also affects the control of the objects $\DLieo L^\top$, $\DLieo (G \circ K)$, $\DLieo G_L$, $\DLieo (\Omega \circ K)$, $\DLieo \tilde \Omega_L$, $\DLieo A$, $\DLieo (J \circ K)$, $\DLieo N^0$ and $\DLieo N$.
#### *Step 7: Control of the new torsion condition*.
Now we consider the control of the extended torsion matrix $T_c(\theta)$ and the corresponding non-degeneracy condition. First, we observe that the upper-left block $T(\theta)$ is controlled as in Lemma \[lem:KAM:inter:integral\]. Thus, we control the extended torsion as $$\begin{aligned}
{\|{\bar T_c - T_c}\|}_{\rho-3\delta}
\leq {} & \max
\left\{
\frac{\CDeltaT}{\gamma^2 \delta^{2\tau+1}} + \frac{\CDeltao}{\gamma \delta^\tau}
\, , \,
\frac{\cteDc \CDeltaN}{\gamma^2 \delta^{2\tau+1}}+
\frac{\cteDDc \CDeltaK \CN }{\gamma^2 \delta^{2\tau}}
\right\}
{\|{E_c}\|}_\rho \nonumber \\
=: {} & \frac{\CDeltaTc}{\gamma^2 \delta^{2\tau+1}} {\|{E_c}\|}_\rho\,. \label{eq:CDeltaTc}\end{aligned}$$ Finally, we obtain the estimates and by adapting the computations in and . We use the second term in (Hypothesis ) to get the estimate $$\label{eq:CDeltaTcOI}
|{\langle{\bar T_c}\rangle}^{-1} - {\langle{T_c}\rangle}^{-1}| \leq \frac{2 (\sigmaTc)^2 \CDeltaTc}{\gamma^2 \delta^{2\tau+1}}
{\|{E}\|}_\rho =: \frac{\CDeltaTcOI}{\gamma^2 \delta^{2\tau+1}}
{\|{E}\|}_\rho\,.$$ This completes the proof of the lemma.
Convergence of the iterative process {#ssec:proof:KAM:iso}
------------------------------------
Now we are ready to prove our second KAM theorem with conserved quantities. Again, we comment on the differences with respect to Theorem \[theo:KAM\] and omit the common parts.
\[Proof of Theorem \[theo:KAM:iso\]\] Let us consider the approximate torus $K_0:=K$ with frequency $\omega_0:=\omega$ and with initial errors $E_0:=E$ and $E_0^\omega = E^\omega$. We also introduce $B_0:=B$, $T_0:=T$, $T_{c,0}=T_c$ and $E_{c,0}=E_c$ associated with the initial approximation. We reproduce the iterative construction in the proof of Theorem \[theo:KAM\], but applying Lemma \[lem:KAM:inter:integral:iso\] recursively, and taking into account the evolution of the error $E_{c,s}$ at the $s$-th step of the quasi-Newton method.
The computations are the same mutatis mutandis. In this case, we need to consider additional computations regarding the correction of the frequency. In particular, the eighth term in is checked as follows $$\dist (\omega_s,\partial \Theta) - \frac{\CDeltao {\|{E_{c,s}}\|}_{\rho_s}}{\gamma \delta_s^{\tau}} >
\dist (\omega_0,\partial \Theta) - \frac{\CDeltao {\|{E_{c,0}}\|}_{\rho_0}}{\gamma \delta_0^{\tau}}
\frac{1}{1-a_1^{-3\tau}} > 0\,,$$ where the last inequality is included into .
Having guaranteed all the hypotheses of Lemma \[lem:KAM:inter:integral:iso\], we collect the assumptions by introducing the constant $\mathfrak{C}_1$ as $$\label{eq:cte:mathfrakC1:iso}
\mathfrak{C}_1:=\max
\left\{
(a_1 a_3)^{4\tau} \CEc \, , \, (a_3)^{2\tau+1} \gamma^2 \rho^{2\tau-1} \CDeltatot
\right\}$$ where $$\begin{aligned}
\CDeltaI := {} &
\max
\Bigg\{
\frac{d \CDeltaK}{\sigmaDK-{\|{\Dif K}\|}_{\rho}} \, , \,
\frac{2n \CDeltaK}{\sigmaDKT-{\|{(\Dif K)^\top}\|}_{\rho}} \, , \label{eq:CDeltaI:iso}
\\
& \qquad\qquad\qquad\qquad \frac{\CDeltaB}{\sigmaB-{\|{B}\|}_{\rho}} \, , \,
\frac{\CDeltaTcOI}{\sigmaTc-{|{{\langle{T_{c}}\rangle}^{-1}}|}}
\Bigg\} \,, \nonumber \\
\CDeltaII := {} & \frac{\CDeltaK \delta}{\dist(K({{\mathbb T}}^d_{\rho}),\partial \B)}\,,
\label{eq:CDeltaII:iso} \\
\CDeltaIII := {} & \frac{\CDeltao \gamma \delta^{\tau+1}}{\dist(\omega,\partial \Theta)}\,,
\label{eq:CDeltaIII:iso} \\
\CDeltatot := {} & \max
\Bigg\{
\frac{\gamma^2 \delta^{2\tau}}{\cauxT} \, , \,
2 \Csym \gamma \delta^\tau \, , \,
\frac{\CDeltaI}{1-a_1^{1-2\tau}} \, , \,
\frac{\CDeltaII}{1-a_1^{-2\tau}} \, , \,
\frac{\CDeltaIII}{1-a_1^{-3\tau}}
\Bigg\}
\label{eq:CDeltatot:iso} \,.\end{aligned}$$ Note that we recovered the original notation $K=K_0$, $\omega=\omega_0$, $B=B_0$, $T_c=T_{c,0}$, $\rho=\rho_0$ and $\delta=\delta_0$ for the original objects.
Therefore, by induction, we can apply the iterative process infinitely many times. Indeed, we have $${\|{E_{c,s}}\|}_{\rho_s} < a_1^{-4 \tau s} {\|{E_c}\|}_{\rho} \longrightarrow 0\
\quad \mbox{when} \quad s\rightarrow \infty\,$$ so the iterative scheme converges to a true quasi-periodic torus $K_\infty$ with frequency $\omega_\infty$. As a result of the output of Lemma \[lem:KAM:inter:integral:iso\], these objects satisfy $K_\infty \in \Anal({{\mathbb T}}^{d}_{\rho_\infty})$, $\omega_\infty\in \Theta$ and $${\|{\Dif K_\infty}\|}_{\rho_\infty} < \sigmaDK\,,
\qquad
{\|{(\Dif K_\infty)^\top}\|}_{\rho_\infty} < \sigmaDKT\,,
\qquad
\dist(K_\infty({{\mathbb T}}^d),\pd \B) > 0\,.$$ Furthermore, we control the limit objects as follows: $$\begin{aligned}
{\|{K_\infty - K}\|}_{\rho_\infty} < \frac{\CDeltaK {\|{E_c}\|}_{\rho}}{\gamma^2 \delta^{2\tau}}
\frac{1}{1-a_1^{-2\tau}}
=: {} & \frac{\mathfrak{C}_2 {\|{E_c}\|}_{\rho}}{\gamma^2 \rho^{2\tau}}\,,
\label{eq:Cmathfrak2:iso} \\
{|{\omega_\infty - \omega}|} <
\frac{\CDeltao {\|{E_c}\|}_{\rho}}{\gamma \delta^{\tau}}
\frac{1}{1-a_1^{-3\tau}}
=: {} &
\frac{\mathfrak{C}_3 {\|{E_c}\|}_{\rho}}{\gamma \rho^{\tau}} \,,
\label{eq:Cmathfrak3:iso}\end{aligned}$$ thus obtaining the estimates in . This completes the proof of the generalized iso-energetic KAM theorem.
Acknowledgements
================
A. H. acknowledges the Spanish grants MTM2015-67724-P (MINECO/FEDER), MDM-2014-0445 (MINECO) and 2014 SGR 1145 (AGAUR), and the European Union’s Horizon 2020 research and innovation programme MSCA 734557. A. L. acknowledges the Spanish grants MTM2016-76072-P (MINECO/FEDER) and SEV-2015-0554 (MINECO, Severo Ochoa Programme for Centres of Excellence in R&D), and the ERC Starting Grant 335079. We are also grateful to D. Peralta-Salas and J.-Ll. Figueras for fruitful discussions.
An auxiliary lemma to control the inverse of a matrix
=====================================================
To prove Lemmas \[lem:KAM:inter:integral\] and \[lem:KAM:inter:integral:iso\], we control the correction of inverses of matrices several times using Neumann series. This affects the estimates in , , and . For convenience, we present the following auxiliary result separately. Notice that the result is stated for matrices, but it extends directly to matrix-valued maps with the corresponding norm (see Section \[ssec-anal-prelims\]).
\[lem:aux\] Let $M \in {{\mathbb C}}^{n \times n}$ be an invertible matrix satisfying ${|{M^{-1}}|} < \sigma$. Assume that $\bar M \in {{\mathbb C}}^{n \times n}$ satisfies $$\label{eq:lem:aux}
\frac{2 \sigma^2 {|{\bar M-M}|}}{\sigma-{|{M^{-1}}|}} \leq 1\,.$$ Then, we have that $\bar M$ is invertible and $${|{\bar M^{-1}-M^{-1}}|} < 2 \sigma^2 {|{\bar M-M}|}\,,
\qquad
{|{\bar M^{-1}}|} < \sigma\,.$$
A direct computation shows that $$\label{eq:Neu1}
\bar M^{-1} = (I_n + M^{-1} (\bar M-M))^{-1} M^{-1}\,.$$ By hypothesis we obtain $$\label{eq:Neu2}
{|{M^{-1}}|}{|{\bar M-M}|} < \frac{\sigma^2 {|{\bar M-M}|}}{\sigma-{|{M^{-1}}|}} < \frac{1}{2}\,.$$ Then a Neumann series argument in , using and ${|{M^{-1}}|} < \sigma$, yields the estimate $${|{\bar M^{-1}-M^{-1}}|} \leq \frac{{|{M^{-1}}|}^2 {|{\bar M-M}|}}{1-{|{M^{-1}}|} {|{\bar M-M}|}} <
2 \sigma^2 {|{\bar M-M}|}\,.$$ Finally, we conclude that $$\begin{aligned}
{|{\bar M^{-1}}|} \leq {} & {|{M^{-1}}|} + {|{\bar M^{-1}-M^{-1}}|} \leq {|{M^{-1}}|} +
2 \sigma^2 {|{\bar M-M}|} \\
< {} & {|{M^{-1}}|} + \sigma - {|{M^{-1}}|} = \sigma\,,\end{aligned}$$ where we used again .
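Lemma \[lem:aux\] is straightforward to check numerically. The following Python fragment is only an illustrative sketch (it uses the spectral matrix norm, but any submultiplicative norm works): it verifies the hypothesis for a small perturbation and compares ${|{\bar M^{-1}-M^{-1}}|}$ with the bound $2\sigma^2{|{\bar M-M}|}$.

```python
import numpy as np

rng = np.random.default_rng(0)
norm = lambda A: np.linalg.norm(A, 2)        # spectral (operator) norm

n = 5
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))
sigma = 2.0 * norm(np.linalg.inv(M))         # any sigma > |M^{-1}| is admissible

Delta = 1e-4 * rng.standard_normal((n, n))   # small perturbation
Mbar = M + Delta
# hypothesis (eq:lem:aux) of the lemma
assert 2 * sigma**2 * norm(Mbar - M) / (sigma - norm(np.linalg.inv(M))) <= 1

lhs = norm(np.linalg.inv(Mbar) - np.linalg.inv(M))
rhs = 2 * sigma**2 * norm(Mbar - M)
print(f"|Mbar^-1 - M^-1| = {lhs:.3e}  <=  {rhs:.3e} = 2 sigma^2 |Mbar - M|")
print(f"|Mbar^-1| = {norm(np.linalg.inv(Mbar)):.3f}  <  sigma = {sigma:.3f}")
```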
Compendium of constants involved in the KAM theorem {#ssec:consts}
===================================================
In this appendix we collect the recipes to compute all the constants involved in the different estimates presented in the paper. Keeping track of these constants is crucial to apply the presented KAM theorems to particular problems and for concrete values of the parameters. In addition, we think that the labels included in the tables will be of great help in reading the paper, since the reader can find the place where a particular object is estimated.
Given an object $X :{{\mathbb T}}^n_{\rho_*} \to {{\mathbb C}}^{n_1 \times n_2}$, the following tables code an estimate of the form $${\|{X}\|}_{\rho_*} \leq\frac{C_X}{\gamma^{a_*} \delta^{b_*}} {\|{E_*}\|}_\rho^{c_*}\,.$$ Notice that the strip $\rho_*$ and the exponents $a_*, b_*, c_*$ can be tracked following the corresponding label; and $E_*$ is the target error ($E_*=E$ for Theorem \[theo:KAM\] and $E_*=E_c$ for Theorem \[theo:KAM:iso\]).
Let us remark that, as becomes clear in the proof, the numbers $a_1$, $a_2$, $a_3$ and $\cauxT$ are independent parameters that can be selected in order to optimize the applicability of the theorems depending on the particular problem at hand.
Table \[tab:constants:all\] corresponds to the geometric construction that is common to both theorems. The constants associated with the ordinary KAM theorem \[theo:KAM\] are presented in Tables \[tab:constants:all:2\] and \[tab:constants:all:3\]. The constants associated with the iso-energetic KAM theorem \[theo:KAM:iso\] are presented in Tables \[tab:constants:all:2:iso\] and \[tab:constants:all:3:iso\]. To reduce the length of the tables, in the iso-energetic situation we have omitted those constants that have the same formula as in the ordinary case.
[^1]: This source of skepticism was pointed out by the distinguished astronomer M. Hénon, who found a threshold of validity for the perturbation of the order of $10^{-333}$ in early KAM theorems.
[^2]: For example, for the Chirikov standard map, it is proved that the invariant curve with golden rotation persists up to ${\varepsilon}\leq 0.9716$, which corresponds to a threshold value with a defect of 0.004% with respect to numerical observations (e.g. [@Greene79; @MacKay93]).
[^3]: The invariant tori we consider in this paper are isotropic, but they are not lower-dimensional invariant tori in the strict sense (see [@BroerHS96]). The invariant tori are neither partially hyperbolic nor elliptic (see the a-posteriori formulations in [@FontichLS09] and [@LuqueV11]), but partially parabolic tori that appear in families of invariant tori with the same frequency vector.
---
abstract: |
Granular materials of different sizes are present on the surface of several atmosphere-less Solar System bodies. The phenomena related to granular materials have been studied in the framework of the discipline called Granular Physics, which has been studied experimentally in the laboratory and, in the last decades, by performing numerical simulations. The Discrete Element Method simulates the mechanical behavior of a medium formed by a set of particles which interact through their contact points.
The difficulty in reproducing vacuum and low-gravity environments makes numerical simulations the most promising technique in the study of granular media under these conditions.
In this work, relevant processes in minor bodies of the Solar System are studied using the Discrete Element Method. Results of simulations of size segregation in low-gravity environments in the cases of the asteroids Eros and Itokawa are presented. The segregation of particles with different densities was analysed, in particular for the case of comet P/Hartley 2. The surface shaking in these different gravity environments could produce the ejection of particles from the surface at very low relative velocities. The shaking that causes the above processes is due to impacts or to explosions, such as the release of energy by the liberation of internal stresses or the re-accommodation of material. Simulations of the passage of impact-induced seismic waves through a granular medium were also performed.
We present several applications of the Discrete Element Methods for the study of the physical evolution of agglomerates of rocks under low-gravity environments.
author:
- |
G. Tancredi $^{1,2}$[^1], A. Maciel $^{1}$, L. Heredia $^{3}$, P. Richeri $^{3}$, S. Nesmachnow $^{3}$\
$^{1}$Departamento de Astronomía, Facultad de Ciencias, Iguá 4225, 11400 Montevideo, URUGUAY\
$^{2}$Observatorio Astronómico Los Molinos, Ministerio de Educación y Cultura, Montevideo, URUGUAY\
$^{3}$Centro de Cálculo, Instituto de Computación, Facultad de Ingeniería, Montevideo, URUGUAY\
date: 'Accepted 2011 November 24. Received 2011 November 21; in original form 2011 August 12'
title: 'Granular physics in low-gravity environments using DEM'
---
\[firstpage\]
minor planets, asteroids: general – comets: general – methods: numerical
Introduction
============
Granular materials of different sizes are present on the surface of several atmosphere-less Solar System bodies. The presence of very fine particles on the surface of the Moon, the so-called regolith, was confirmed by the Apollo astronauts. From polarimetric observations and phase angle curves, it is possible to indirectly infer the presence of fine particles on the surface of asteroids and planetary satellites. More recently, the visit of spacecraft to several asteroids and comets has provided us with close-up pictures of their surfaces, where particles in a wide size range, from $cm$ to hundreds of meters, have been directly observed. The presence of even finer particles on the visited bodies can also be inferred from image analysis.
It has been proposed that several typical processes of granular materials, such as the size segregation of boulders on Itokawa, the displacement of boulders on Eros, among others (see e.g. [@Asphaug] and references therein), can explain some features observed on the surfaces of these bodies. The conditions at the surface and the interior of these small Solar System bodies are very different compared to the conditions on the Earth’s surface. Below we point out some of these differences:
- while on the Earth’s surface the acceleration of gravity is $9.8\ m/s^{2}$ with minor variations, on the surface of an elongated $km$-size asteroid it is on the order of $10^{-2}$ to $10^{-4}\ m/s^{2}$, with typical variations of a factor of 2.
- the presence of an atmosphere or any other fluid medium plays an important role in the evolution of grains, particularly the small ones ([@Pak]). Under vacuum conditions in space, this effect does not occur.
- In vacuum and low gravity conditions, other forces might play a role comparable to that of gravity, e.g. van der Waals forces ([@Scheeres]), although these forces are not considered in our present approach.
The phenomena related to granular material have been studied in the discipline called Granular Physics. Granular media are formed by a set of macroscopic objects (grains) which interact through temporary or permanent contacts. The range of materials studied by Granular Physics is very broad: rocks, sands, talc, natural and artificial powders, pills, etc.
Granular materials show a variety of behaviours under different circumstances: when excited (fluidised), they often resemble a liquid, as is the case of grains flowing through pipes; or they may behave like a solid, like in a dune or a heap of sand.
These processes have been studied experimentally in the laboratory, and, in the last decades, by numerical analysis. The numerical simulation of the evolution of granular materials has been done recently with the Discrete Element Method (DEM). DEM is a family of numerical methods for computing the motion of a large number of particles such as molecules or grains under given physical laws. DEMs simulate the mechanical behavior of a medium formed by a set of particles which interact through their contact points.
Low-gravity environments in space are difficult to reproduce in a ground-based laboratory; especially if one is interested in keeping a stable value of the acceleration of gravity on the order of $10^{-2}$ to $10^{-4}\ m/s^{2}$ for several hours, since under these low-gravity conditions the dynamical processes are much slower than on Earth. Parabolic flights are not suitable for these experiments, since it is not possible to attain a stable value during the free-fall flight. For laboratory experiments, we are then left with experiments to be carried out on board space stations.
Therefore, numerical simulation is the most promising technique to study the phenomena affecting granular material in vacuum and low-gravity environments.
The rest of the article is organised as follows. In Section \[secdem\] we describe the implementation of the Discrete Element Methods used in our simulations. In Section \[secbne\] we present the results of simulations of the process of size segregation in low-gravity environments, the so-called Brazil nut effect, in the cases of Eros, Itokawa and P/Hartley 2. In Section \[secden\], the segregation of particles with different densities is analysed, with the application to the case of P/Hartley 2. The surface shaking in these different gravity environments could produce the ejection of particles from the surface at very low relative velocities; this issue is discussed in Section \[secparlif\]. The shaking that causes the above processes is due to impacts or explosions, such as the release of energy by the liberation of internal stresses or the re-accommodation of material. Although DEM methods are not suitable for reproducing the impact event, we are able to make simulations of the passage of impact-induced seismic waves through a granular medium; these experiments are shown in Section \[secimp\]. The conclusions and the applications of these results are discussed in Section \[secdis\].
Discrete Element Methods {#secdem}
========================
DEMs are a set of numerical calculations based on statistical mechanics methods. This technique is used to describe the movements of a large number of particles which are subjected to certain physical interactions.
DEMs present the following basic properties that generally define this class of numerical algorithms:
- The quantities are calculated at points fixed to the material. DEM is a case of a Lagrangian numerical method.
- The particles can move independently or they can have bounds, and they interact in the contact zones through different types of physical laws.
- Each particle is considered a rigid body, subject to the laws of rigid body mechanics.
The forces acting on a particle are calculated from the interaction of this particle with its nearest neighbors, i.e. the particles it touches. Several types of forces are usually considered in the literature; e.g. free elastic forces, bonded elastic forces, frictional forces, viscoelastic forces, interaction of the particles with other objects, such as walls and mesh objects acting as boundary conditions, global force fields (i.e. gravity), velocity-dependent damping, etc.
The main drawback of the method is the computational cost of computing the interacting forces for each particle at each time step. A simple all-to-all approach would require performing $O(N(N-1)/2)$ operations per time step, where $N$ is the number of particles in the simulation. Several efficient methods to reduce the number of pairs to compute have been implemented; e.g. the Verlet lists method, the link cells algorithm, and the lattice algorithm. Another problem for the simulation is the length of the time step, which should be much less than the duration of the collisions, typically $1/10$ to $1/20$ of the collision duration. Based on the Hertzian elastic contact theory, the duration of contact ($\tau$) can be expressed as: $$\tau=5.84\left(\frac{\rho(1-\nu^{2})}{E}\right)^{0.4}rv^{-0.2}$$
([@Wada], after [@Timoshenko]), where $\rho$ is the grain density, $\nu$ is the Poisson ratio, $E$ is the material strength, $r$ is the radius of the particle, and $v$ is the collisional velocity.
In Figure \[fdurcol\] we plot the previous estimate of the duration of collision as a function of the collisional velocity for particles with $r = 0.1$, 1 and 10 m. The other parameters are assumed as follows: $\rho=2000\ kg/m^{3}$, $\nu=0.17$, and $E=100\ GPa$. We are interested in the processes that occur on the surface of the small Solar System bodies, where the interactions among the boulders occur at velocities comparable to the escape velocity on their surface. The upper $x$-axis indicates the radius of the body, while the lower one shows the corresponding escape velocity (assuming a constant density of $\rho=3000\ kg/m^{3}$).
![Estimated duration of collision as a function of the collisional velocity for particles with $r = 0.1$, 1 and 10 m.[]{data-label="fdurcol"}]{width="35.00000%"}
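As a quick aid to reproduce this estimate, the formula for $\tau$ can be evaluated with a few lines of Python (an illustrative snippet, not part of the original analysis; SI units, i.e. $kg/m^{3}$ and $Pa$, are assumed):

```python
def contact_duration(r, v, rho=2000.0, nu=0.17, E=100e9):
    """Hertzian estimate of the duration of a binary contact (equation for tau above).
    r: particle radius [m]; v: collisional velocity [m/s];
    rho: grain density [kg/m^3]; nu: Poisson ratio; E: material strength [Pa]."""
    return 5.84 * (rho * (1.0 - nu**2) / E) ** 0.4 * r * v ** (-0.2)

# m-size boulders colliding at ~1 m/s, comparable to the escape velocity
# of a km-size asteroid
for r in (0.1, 1.0, 10.0):
    tau = contact_duration(r, v=1.0)
    print(f"r = {r:5.1f} m -> tau = {tau:.2e} s, suggested time step ~ {tau/20:.1e} s")
```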
Viscoelastic spheres with friction
-----------------------------------
The contact force between two spherical particles can be decomposed into two vectors (Figure \[fcontact\]): the normal force, along the direction that joins the centres of the interacting particles; and the tangential force, perpendicular to this line. Denoting by $i$ and $j$ the two interacting particles, the total force $\overrightarrow{F_{ij}}$ can then be expressed as:
![Scheme of the contact forces between two spherical particles[]{data-label="fcontact"}](figure2.eps){width="30.00000%"}
$$\overrightarrow{F_{ij}}=\left\{ \begin{array}{rl}
\overrightarrow{F_{ij}^{n}}+\overrightarrow{F_{ij}^{t}} & \mbox{ if \ensuremath{\psi_{ij}>0}}\\
0 & \mbox{ otherwise}
\end{array}\right.
\label{eqtotfor}$$
where $\overrightarrow{F_{ij}^{n}}$ is the normal force and $\overrightarrow{F_{ij}^{t}}$ the tangential one. $\psi_{ij}$ is the deformation given by:
$$\psi_{ij}=R_{i}+R_{j}-|\overrightarrow{r_{i}}-\overrightarrow{r_{j}}|
\label{eqdefor}$$
where $R_{i}$ and $R_{j}$ are the radius of the particle $i$ and $j$, respectively, and $\overrightarrow{r_{i}}$ and $\overrightarrow{r_{j}}$ are the position vectors.
Several models have been used for the normal and tangential forces in the literature. Among the most used ones is the damped dash-pot, also known as the Kelvin-Voigt model. Instead of using this model in our simulation, we use an extension of an elastic-spheres one developed by [@Hertz], since it is a more realistic representation of two colliding particles.
The normal interaction force between two elastic spheres $F_{ij}^{n;el}$ was inferred by Hertz as a function of the deformation $\psi$:
$$F^{n;el}=\frac{2Y\sqrt{R_{eff}}}{3(1-\nu^{2})}\psi^{3/2}
\label{eqfnel}$$
where $Y$ is the Young modulus and $\nu$ is the Poisson ratio. The effective radius $R_{eff}$ is given by the expression:
$$\frac{1}{R_{eff}}=\frac{1}{R_{i}}+\frac{1}{R_{j}}$$
A viscoelastic interaction between the particles can be modelled by including a dissipation factor in eq. \[eqfnel\]. The viscoelastic normal forces $F^{n;ve}$ then become: $$F^{n;ve}=\frac{2Y\sqrt{R_{eff}}}{3(1-\nu^{2})}\left(\psi^{3/2}+A\sqrt{\psi}\frac{d\psi}{dt}\right)
\label{eqfnve}$$
where $A$ is a dissipative constant and $d\psi/dt$ is the time derivative of the deformation.
Considering the previous expression for the normal force could lead to unrealistic results, since it does not take into account the fact that the particles do not overlap, but they become deformed ([@Poschel]). During the compression phase and most of the decompression phase, the term $\left(\psi^{3/2}+A\sqrt{\psi}\frac{d\psi}{dt}\right)$ in eq. \[eqfnve\] is positive, leading to a repulsive (positive) normal force. However, at a certain stage of the decompression, the deformation $\psi$ could still be positive, but the second term could be negative, which would lead to a negative (attractive) force. This is an unrealistic situation, since there are no attractive forces during the collision of two particles. The problem arises when the centres of the particles separate too fast from one another to allow their surfaces to keep in touch while recovering their shape. In order to overcome this problem, for the condition $\psi>0$ in eq. \[eqtotfor\], we use the following expression for the normal force:
$$F^{n;ve}=\max\left\{ 0,\frac{2Y\sqrt{R_{eff}}}{3(1-\nu^{2})}\left(\psi^{3/2}+A\sqrt{\psi}\frac{d\psi}{dt}\right)\right\}
\label{eqfnvecor}$$
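For concreteness, a minimal Python sketch of this force law follows. It is an illustration only, not the ESyS-particle implementation; the default parameter values are those used in the tests below.

```python
import math

def hertz_viscoelastic_normal(psi, dpsi_dt, R_eff, Y=1.0e10, nu=0.3, A=1.0e-3):
    """Normal force between two viscoelastic spheres, following eq. [eqfnvecor].
    psi: deformation; dpsi_dt: its time derivative; R_eff: effective radius;
    Y: Young modulus; nu: Poisson ratio; A: dissipative constant."""
    if psi <= 0.0:                      # particles not in contact
        return 0.0
    k = 2.0 * Y * math.sqrt(R_eff) / (3.0 * (1.0 - nu**2))
    return max(0.0, k * (psi**1.5 + A * math.sqrt(psi) * dpsi_dt))
```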
Following the model by [@Cundall] for the tangential force ($F^{t}$), when two particles first touch, a shear spring is created at the contact point. The static friction is then modelled as a spring acting in a direction tangential to the contact plane. The particles start sliding with the shear spring resisting the motion. When the shear force exceeds the normal force multiplied by the friction coefficient, dynamic sliding starts. We limit the shear force by Coulomb’s friction law; i.e. $|F^{t}| \leq \mu |F^{n;ve}|$. The expression for the tangential force then becomes:
$$F^{t}=-sign(v_{rel}^{t})\ \min\{\|\kappa\varsigma\|\ ,\ \mu\|F^{n;ve}\|\}
\label{eqft}$$
where the first term inside the curly brackets corresponds to the static friction, and the second one is the dynamic friction. $\kappa$ is a constant, $\varsigma$ is the elongation of the spring, and $\mu$ is the dynamic friction parameter.
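A corresponding sketch of the capped tangential force can be written in the same spirit (again only an illustration; the values $\kappa=0.4$ and $\mu=0.6$ are those used in the tests below, and the tangential direction is reduced to one dimension):

```python
import math

def cundall_tangential(v_rel_t, spring_elongation, F_normal, kappa=0.4, mu=0.6):
    """Tangential force following eq. [eqft]: a shear spring capped by Coulomb friction.
    v_rel_t: tangential relative velocity; spring_elongation: elongation of the shear
    spring accumulated since first contact; F_normal: modulus of F^{n;ve}."""
    static = abs(kappa * spring_elongation)   # spring (static friction) term
    dynamic = mu * abs(F_normal)              # Coulomb (dynamic friction) cap
    return -math.copysign(min(static, dynamic), v_rel_t)
```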
ESyS-particle
-------------
For the DEM simulations we developed a version of the ESyS-particle package ([@Abe2004]; https://launchpad.net/esys-particle) adapted to our needs. ESyS-particle is an Open Source software for particle-based numerical modeling, designed for execution on parallel supercomputers, clusters or multi-core computers running a Linux-based operating system. The C++ simulation engine implements a spatial domain decomposition for parallel programming via the Message Passing Interface (MPI). A Python wrapper API provides flexibility in the design of numerical models, specification of modeling parameters and contact logic, and analysis of simulation data.
The separation of the pre-processing, simulation and post-processing tools facilitates the ESyS-particle development and maintenance. The setup of the model geometry is given by scripts, since the whole package is script driven (no interactive GUI is provided by ESyS).
The particles can be either rotational or non-rotational spheres. The material properties of the simulated solids can be elastic, viscoelastic, brittle or frictional. Particles can be bonded to other particles in order to simulate breakable material. It is possible to implement triangular meshes for specifying boundary conditions and walls. The package also includes a variety of particle-particle and particle-wall interaction laws; such as linear elastic repulsion between unbounded contacting particles, linear elastic bonds between bonded particle pairs, non-rotational and rotational frictional interactions between unbounded particles, rotational bonds implementing torsion and bending stiffness and normal and shear stiffness. Boundary conditions and walls can move according to pre-defined laws.
The DEM implementation in ESyS-particle employs the explicit integration approach, i.e. the calculation of the state of the model at a given time only considers data from the state of the model at earlier times. Although the explicit approach requires shorter time steps, it is easier to develop a parallel version for execution in high performance computing infrastructures.
ESyS-particle has shown good scaling performance when using additional computing elements (processor cores), if at least $\sim 5000$ particles are processed by each core. Otherwise, when a lower number of particles is handled by each core, the impact of the overhead by the communications between processes reduces the computational efficiency of the application. As long as the problem size is scaled with the number of cores, the scalability is close to linear. Therefore, large amounts of particles, typically a few million, are possible to model.
For the analysis of the results, ESyS-particle can format the output to be used in 3D visualisation platforms like VTK and POV-Ray. In particular, we use the software Paraview, based on VTK and developed by Kitware Inc. and Sandia National Labs (EEUU), which offers good quality in 3D graphics and allows us to implement several visualisation filters to the data.
ESyS-particle has been employed to simulate earthquake nucleation, comminution in shear cells, silo flow, rock fragmentation, and fault gouge evolution, to name but a few applications. Just to give a few references, we mention examples in fracture mechanics ([@Schopfer]), fault mechanics ([@Abe2005], [@Mair]), and fault rupture propagation ([@Abe2006]).
For our simulations, we have implemented the Hertzian viscoelastic interaction model, with and without friction, into the ESyS-particle package, according to eqs. \[eqfnvecor\] and \[eqft\]. Several modifications were necessary, both in the C++ code and in the Python interface ([@Heredia]).
Tests
-----
In order to test the code and to set the values of the relevant physical and simulation parameters, we choose a few problems for which there exists an analytical solution or we can compare the output with experiments.
### Test case 1: a direct collision of two equal spheres
We consider the case of two equal viscoelastic spheres: one starts at rest and the other one approaches from the negative $x$-direction at a given speed along the line joining the particle centres. Friction is not considered, since the collision between the particles is normal. The aim of this test is to study the viscoelastic collision.
The coefficient of restitution can be used to characterise the change of relative velocity of inelastically colliding particles. Let us note $\overrightarrow{v_{1}}$ and $\overrightarrow{v_{2}}$ the velocities before the collision of particles 1 and 2, respectively; and $\overrightarrow{v'_{1}}$ and $\overrightarrow{v'_{2}}$ the velocities immediately after. When the relative velocity is along the line joining the particle centres, we note $v=|\overrightarrow{v_{2}}-\overrightarrow{v_{1}}|$ and $v'=|\overrightarrow{v'_{2}}-\overrightarrow{v'_{1}}|$. The coefficient of restitution $\epsilon$ is then calculated as: $$\epsilon=\frac{v'}{v}$$
In general, this coefficient depends not only on the impact velocity, but also on material properties.
Because of their deformation, particles lose contact slightly before the distance between the centres of the spheres reaches the sum of the radii. [@Schwager] present an analytical estimate of the coefficient of restitution which takes this fact into account. The computation of $\epsilon$ is then presented as a divergent series of the dimensionless parameter $\beta v^{1/5}$, where $\beta=\gamma\kappa^{-3/5}$; $\gamma=\frac{3}{2}\frac{\rho A}{m_{eff}}$; $\kappa=\frac{\delta}{m_{eff}}$; $\delta=\frac{2Y}{3(1-\nu^{2})\sqrt{R_{eff}}}$; $\frac{1}{R_{eff}}=\frac{1}{R_{i}}+\frac{1}{R_{j}}$; and $\frac{1}{m_{eff}}=\frac{1}{m_{i}}+\frac{1}{m_{j}}$. The material parameters $Y$, $\nu$ and $A$ are defined above. $R_{1}$, $R_{2}$, $m_{1}$, $m_{2}$ are the radii and masses of particles 1 and 2, respectively.
We run simulations of two colliding particles with the following combination of parameters: $Y=\{10^{9},10^{10}\}\ Pa$, $A=\{10^{-4},10^{-3}\}\ s^{-1}$, $\nu=0.3$; for a set of initial relative velocities $v=\{0.1,0.5,1,5,10\}\ m/s$. The particles have a radius of $1\ m$ and a density of $3000\ kg/m^{3}$. The time step of the integration is $10^{-5}\ s$.
The coefficient of restitution for the numerical simulations is presented in Figure \[fcoefres\]a as a function of $\beta v^{1/5}$. The symbols correspond to different combinations of parameters: circle $Y=10^{9}$, $A=10^{-4}$; down triangle $Y=10^{9}$, $A=10^{-3}$; square $Y=10^{10}$, $A=10^{-4}$; up triangle $Y=10^{10}$, $A=10^{-3}$ ($Y$ in $Pa$ and $A$ in $s^{-1}$). The analytical estimates are computed with Maple’s codes presented in [@Schwager], where the expansions are up to 40th order. The ratio between the numerical and analytical estimate is presented in Figure \[fcoefres\]b. We find a good agreement between the two estimates up to values of $\beta v^{1/5}$ closer to 1. The discrepancy is due to the cut-off of the higher terms.
![a) The coefficient of restitution for the numerical simulations of the collision of two viscoelastic spheres as a function of $\beta v^{1/5}$. The symbols correspond to different combinations of parameters: circle $Y=10^{9}$, $A=10^{-4}$; down triangle $Y=10^{9}$, $A=10^{-3}$; square $Y=10^{10}$, $A=10^{-4}$; up triangle $Y=10^{10}$, $A=10^{-3}$ ($Y$ in $Pa$ and $A$ in $s^{-1}$). b) The ratio between the numerical and analytical estimate.[]{data-label="fcoefres"}](figure3.eps){width="50.00000%"}
Several laboratory experiments have been conducted to estimate the coefficient of restitution of rock materials (see e.g. [@Imre], [@Durda]). For impact velocities in the range $1-2\ \ m/s$, values of $\epsilon\sim0.8-0.9$ have been obtained. Looking back at Figure \[fcoefres\]a, we observe that this range of values of $\epsilon$ are obtained for the following set of material parameters: $Y=10^{10}\ Pa$, $A=10^{-3}\ s^{-1}$, $\nu=0.3$. Therefore, we will choose these parameter values for our numerical simulations of colliding rocky spheres.
### Test case 2: a grazing collision between two spheres
In this case we consider two equal viscoelastic spheres: one starts at rest and the other one approaches from the negative $x$-direction at a given speed; but, in contrast to the previous case, the distance between the $y$-values of the particles’ centres is slightly less than the sum of the radii. We then have a grazing collision. The aim of this test is to compare the results of the viscoelastic interaction with and without friction. We run simulations of two colliding particles with the following set of parameters: $Y=10^{10}\ Pa$, $A=10^{-3}\ s^{-1}$, $\nu=0.3$, $R_{1}=R_{2}=1\ m$, and a density of $\rho=3000\ kg/m^{3}$. The friction parameters of eq. \[eqft\] are chosen as: $\kappa=0.4$, $\mu=0.6$. The distance between the centres in the $y$-direction is $(0.999R_{1}+R_{2})$. We run simulations where particle 2 has initial velocities of $v=\{10^{-3},0.01,0.1,1,10\}\ m/s$.
In Figure \[fratiocol\] we plot the ratio between the modulus of the particles’ relative velocity after and before the interaction, as a function of the initial velocity. The star symbols correspond to the simulations without the friction interaction and the cross symbols to the ones with it. Due to the fact that the collision is almost grazing, the ratios are almost 1 for the simulations without friction, regardless of the initial velocity. For simulations with friction, as expected, the ratio decreases as the initial velocity decreases, because the friction interaction becomes more relevant at low velocities.
![The ratio between the modulus of the relative velocity between the particles after exiting and before the interaction as a function of the initial velocity. The symbols correspond to different contact force models: square Hertzian viscoelastic spheres; circle Hertzian viscoelastic spheres with friction.[]{data-label="fratiocol"}](figure4.eps){width="40.00000%"}
### Test case 3: a bouncing ball
In ESyS-particle the interaction between a particle and a mesh wall can be linear, elastic or a linear elastic bond. Viscoelastic and frictional interactions of particles and walls are not yet implemented. Therefore, in order to simulate a frictional viscoelastic collision of a ball against a fixed wall, we have to glue balls to the wall with a linear elastic bond. The following test case consists of a free-falling ball impacting an equal-size ball that is bonded to the floor. The objective of this experiment is to test different time steps for the simulations.
We use the following set of parameters: $Y=10^{10}\ Pa$, $A=10^{-3}\ s^{-1}$, $\nu=0.3$, $R_{1}=R_{2}=1\ m$, and a density of $\rho=3000\ kg/m^{3}$. The bonded particle has an elastic bond with a modulus $K=10^{9}\ Pa$. Particle 2 falls from a height of $2.75m$. In the first set of simulations we assume the Earth’s surface gravity ($g=9.81\ \ m/s^{2}$). For this set, we use the following time steps in the simulations: $dt=\{6\times10^{-4},5\times10^{-4},10^{-4},10^{-5},10^{-6},5\times10^{-7}\}\ s$.
The duration of the collisions is computed from the simulations as the interval of time while the deformation parameter defined in eq. \[eqdefor\] is greater than 0. As mentioned above this interval is slightly larger than the time the balls are in contact, but it is good enough for the purpose of having an order of magnitude estimate of it. For the previous set of parameters, the duration of the collision is $\sim0.003\ s$.
In Figure \[frelheight\]a, we plot the distance of the falling particle with respect to the edge of the resting one (centre height minus $3R$) as a function of time. In Figure \[frelheight\]b, we plot the ratio of the previous values to the one for the smallest time step at each time. We find that for time steps $dt \le 10^{-5}\ s$, there is a very good agreement between the simulations. For longer time steps the bouncing ball presents an implausible behavior.
![a) The distance of the falling particle with respect to the edge of the resting one (centre height minus $3R$) as a function of time. b) The ratio of the previous values to the one for the smallest time step at each time.[]{data-label="frelheight"}](figure5.eps){width="50.00000%"}
The coefficient of restitution can be computed as the ratio between the velocity at the iteration step just after the collision and the velocity at the step just before the collision (i.e. at the first step after and the last step before the interval in which the deformation defined in eq. \[eqdefor\] satisfies $\psi > 0$). For time steps $dt \le 10^{-5}\ s$ there is a good agreement among the different estimates. We obtained a value of 0.593.
Therefore, for the previous set of parameters, we will use a time step of $dt=10^{-5}\ s$ for the simulations with Earth’s gravity, since the collision is covered with $\sim$30 time steps and this is a good compromise between the quality of the results and the length of the time step.
In another set of simulations we use very low surface gravity, similar to the one found on the surface of asteroid Itokawa and comet P/Hartley 2; i.e. a rocky object of $\sim500\ m$ in diameter or an icy object of $\sim1\ km$ in diameter. For this set, we use the following time steps in the simulations: $dt=\{5\times10^{-4},10^{-4},10^{-5},10^{-6}\}\ s$. For the previous set of parameters, the duration of the collision is $\sim0.01\ s$. For time steps $dt\le10^{-4}\ s$ there is a good agreement among the different runs. The coefficient of restitution in these simulations is 0.721. For the simulations in this low-gravity environment we will use a time step $dt=10^{-4}\ s$, which corresponds to a collision lasting $\sim100$ time steps.
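For readers who want to reproduce the time-step study qualitatively, the following self-contained Python sketch integrates the same drop with a symplectic Euler scheme. It is an illustration only: it idealizes the resting ball as fixed (instead of bonded to the floor) and assumes an initial clearance of $2.75\ m$, so the numbers are indicative rather than a reproduction of the ESyS-particle runs.

```python
import math

# material and geometry, as in the text (SI units)
Y, nu, A = 1.0e10, 0.3, 1.0e-3
R, rho, g = 1.0, 3000.0, 9.81
m = 4.0 / 3.0 * math.pi * R**3 * rho
k_hertz = 2.0 * Y * math.sqrt(R / 2.0) / (3.0 * (1.0 - nu**2))   # R_eff = R/2

def normal_force(psi, dpsi):
    """Corrected viscoelastic Hertz force, eq. [eqfnvecor]."""
    if psi <= 0.0:
        return 0.0
    return max(0.0, k_hertz * (psi**1.5 + A * math.sqrt(psi) * dpsi))

def drop(dt, clearance0=2.75, t_end=1.0):
    """Symplectic-Euler integration of a sphere dropped onto a fixed equal sphere.
    Returns the maximum overlap and the restitution of the first bounce."""
    z, v = 3.0 * R + clearance0, 0.0   # centre height and vertical velocity
    v_in = v_out = None
    max_overlap = 0.0
    for _ in range(int(t_end / dt)):
        psi = 3.0 * R - z              # overlap with the resting sphere (centre at z = R)
        if psi > 0.0 and v_in is None:
            v_in = abs(v)              # speed just before contact
        if psi <= 0.0 and v_in is not None and v_out is None:
            v_out = abs(v)             # speed just after contact
        v += (-g + normal_force(psi, -v) / m) * dt
        z += v * dt
        max_overlap = max(max_overlap, psi)
    return max_overlap, (v_out / v_in if v_out else float("nan"))

for dt in (1e-4, 1e-5, 1e-6):
    overlap, eps = drop(dt)
    print(f"dt = {dt:.0e} s: max overlap = {overlap:.4f} m, restitution = {eps:.3f}")
```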
### Test case 4: Newton’s cradle
A Newton’s cradle is a device used to demonstrate the conservation of linear momentum and energy via a series of swinging hard spheres. When one ball at the end is lifted and released, it strikes the second ball, which in turn strikes the next, and so on until the last ball in the line is pushed upward. A typical Newton’s cradle consists of a series of identically sized metal balls hanging by equal length strings from a metal frame so that they are just touching each other at rest.
We simulate the Newton’s cradle with four spheres aligned in the x-axis. We number the particles from right to left: \#1 being the particle at the right extreme and \#4 the one at the left extreme. The x-axis increases to the right. Particle \#1 has a negative initial velocity $v_{x}=-10\ \ m/s$. Two types of simulation are run: Hertzian elastic and viscoelastic spheres. We use the following set of parameters: $Y=10^{10}\ Pa$, $A=10^{-3}\ s^{-1}$, $\nu=0.3$ (for the viscoelastic simulation). The radius of the spheres is $R=1\ m$ and their density is $\rho=3000\ kg/m^{3}$.
In Figure \[fnewton\] we present the time evolution of the following parameters for each simulation: i) $x$-position of each particle (\#1 to \#4); ii) $x$-velocity for each particle; iii) relative change of $x$-total momentum: $(Momentum(t)-Momentum(t=0))/Momentum(t=0)$; iv) relative change of total kinetic energy: $(K.E(t)-K.E.(t=0))/K.E.(t=0)$. Figure \[fnewton\] a) corresponds to the Hertzian elastic (HE) simulation, and Figure \[fnewton\] b) to the Hertzian viscoelastic (HVE) one.
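The relative-change diagnostics of panels iii) and iv) are simple to recompute from the stored trajectories; the sketch below assumes a (time steps $\times$ particles) array of x-velocities and a vector of particle masses, with names chosen only for illustration.

```python
import numpy as np

def cradle_diagnostics(mass, vx):
    """Relative change of total x-momentum and of total kinetic energy,
    as plotted in panels iii) and iv) of Fig. [fnewton].

    mass : (N,) particle masses (kg)
    vx   : (T, N) x-velocity of each particle at each output step (m/s)
    """
    px = (mass * vx).sum(axis=1)               # total x-momentum per output step
    ke = 0.5 * (mass * vx**2).sum(axis=1)      # total kinetic energy per output step
    return (px - px[0]) / px[0], (ke - ke[0]) / ke[0]
```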
![a) Results of the Hertzian elastic (HE) simulations of the Newton’s cradle: i) $x$-position of each particle (\#1 to \#4); ii) $x$-velocity for each particle; iii) relative change of $x$-total momentum; iv) relative change of the total kinetic energy. b) Similar set of plots for the Hertzian viscoelastic (HVE) simulations of the Newton’s cradle.[]{data-label="fnewton"}](figure6.eps){width="50.00000%"}
Note that for the HE simulation particle \#4 acquires almost the full velocity of the initial impacting particle and little rebound is observed in particles \#1 to \#3. The linear momentum is conserved after the collision up to a relative precision $<10^{-12}$, and the kinetic energy after the rebound is conserved up to a relative precision of $10^{-11}$. In the HVE simulation, particle \#4 acquires 70% of the velocity of the initial impacting particle, and particle \#3 acquires 25%. No rebound is observed and all the particles move to the left. The final velocities increase from right to left. The linear momentum is also conserved after the collision up to a relative precision $<10^{-12}$ (down to the last output digit). The kinetic energy after the rebound is not conserved: $\sim$50% of the initial kinetic energy is dissipated by the viscous damping of the interaction.
Size segregation in low-gravity environments: The Brazil nut effect {#secbne}
===================================================================
The shaking or knocking procedure
---------------------------------
Consider a recipient with one large ball on the bottom and a number of smaller ones on top of it. All the balls have similar densities. After shaking the recipient for a while, the large ball rises to the top and the small ones sink to the bottom ([@Rosato], [@Knight], [@Kudrolli]). This is the so-called Brazil nut effect (BNE), because it can be easily seen when one mixes nuts of different sizes in a can; the large Brazil nuts rise to the top of the can. Unless there is a large difference in the density of the balls, a mixture of different particles will segregate by size when shaken.
The BNE has been attributed to the following processes ([@Hong]): i) the percolation effect, where the smaller ones pass through the holes created by the larger ones ([@Jullien]); ii) geometrical reorganisation, through which small particles readily fill small openings below the large particles ([@Rosato]); iii) global convection, which brings the large particles up but does not allow for reentry in the downstream ([@Knight]); iv) inertia: due to its larger kinetic energy, the large particle follows a ballistic rise, penetrating into the bed by inertia ([@Nahmad-Molinari]).
While size ratio is a dominant factor, particle-specific properties such as density, inelasticity and friction can also play important roles.
[@Williams] performed a model experiment with a single large particle (intruder) and a set of smaller beads inside a rectangular container. When the container was vibrated appropriately, the intruder would always rise and reach a height in the bed that depends on vibration strength.
In order to simulate this effect under different gravity conditions, we run simulations of a 3D box with many small particles and one big particle at the bottom, the so-called intruder model system ([@Williams], [@Kudrolli]). On the floor we glue one row of small particles with a linear elastic bond. The box is subjected to a given surface gravity.
We run simulations under several gravity conditions: the surface of the Earth, Moon, Ceres, Eros and a very-low gravity environment like the surface of asteroid Itokawa or comet P/Hartley 2. The parameters for the simulations are summarised in Table \[tabsur\]. The physical and elastic parameters of the particles are similar to the ones used in the previous tests: $Y=10^{10}\ Pa$, $A=10^{-3}\ s^{-1}$, $\nu=0.3$, $\kappa=0.4$, $\mu=0.6$, $K=10^{9}\ Pa$, $\rho=3000\ kg/m^{3}$.
The floor is vertically displaced at a certain speed ($v_{floor}$) for a short interval ($dt_{shake}$), according to a staircase-like function like the one presented in Fig \[fstair\]. The process is repeated every given number of seconds ($\Delta t_{rep}$), depending on the settling time given by the surface gravity. We have chosen this vibration scheme instead of the frequently used sinusoidal oscillation of the floor, because we are interested in the effects of a sudden shock coming from below. This shock could arise from the translation of the impulse generated by an impact in a far region. We refer to this vibration scheme as a shaking or knocking procedure.
![The floor is vertically displaced at a certain speed ($v_{floor}$) for a short interval ($dt_{shake}$), according to a staircase-like function.[]{data-label="fstair"}](figure7.eps){width="35.00000%"}
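A minimal sketch of this driving law (leaving aside the linear ramp-up over the first 20 jumps described below) is given here; the function names and the convention that a shake starts at the beginning of every repetition interval are assumptions made for illustration.

```python
import numpy as np

def floor_velocity(t, v_floor, dt_shake, dt_rep):
    """Staircase-like driving: the floor moves upward at v_floor during the
    first dt_shake seconds of every repetition interval dt_rep, and is at
    rest otherwise (cf. Fig. [fstair])."""
    return np.where(np.mod(t, dt_rep) < dt_shake, v_floor, 0.0)

def floor_height(t, v_floor, dt_shake, dt_rep):
    """Corresponding floor position: a staircase rising by v_floor*dt_shake
    at every shake."""
    n_done = np.floor(t / dt_rep)                       # completed shakes
    in_shake = np.minimum(np.mod(t, dt_rep), dt_shake)  # progress of the current shake
    return v_floor * (n_done * dt_shake + in_shake)
```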
In order to prepare the initial conditions for the simulations of the BNE, we run a set of simulations where the particles start at a certain height above the surface and fall freely. The floor is slightly shaken at the beginning of these preliminary simulations in order to obtain a random settling of the particles. After finishing the shaking and letting the particles settle down, we use the positions at the end of the runs as the initial conditions for the set of BNE simulations. We must run different preliminary simulations for each gravity environment.
In the BNE simulations, the floor’s velocity is linearly increased from 0 up to the final value $v_{floor}$, which is reached after 20 jumps. We note that the shaking procedure is parameterized with the floor’s velocity.
The 3D box is constructed with elastic mesh walls. The box has a base of $6\times6\ m$ and a height of 150 $m$. A set of $12\times12$ small balls of radius $R_{1}=0.25\ m$ is glued to the floor. The big ball has a radius $R_{2}=0.75\ m$, and on top of it, there are 1000 small balls with a normal distribution of radii (mean radius $R_{1}=0.25\ m$, standard deviation $\sigma=0.01\ m$). We use the same box for all the simulations.
The size range of the balls is selected to match the boulder sizes observed on the surfaces of asteroids Itokawa and Eros.
  -------------------------------------- -------------------- -------------------- ------------ -------------------- -----------------------------------
  Parameter                              Earth                Moon                 Ceres        Eros                 Low-gravity (Itokawa, P/Hartley 2)
  Surface gravity $g$ ($m/s^{2}$)        9.81                 1.62                 0.27         $5.9\times10^{-3}$   $10^{-4}$
  Escape velocity $v_{esc}$ ($m/s$)      $11.2\times10^{3}$   $2.38\times10^{3}$   510          10                   0.17
  Floor velocities $v_{floor}$ ($m/s$)   0.3 - 10             0.1 - 3              0.03 - 1     0.01 - 0.3           0.003 - 0.1
  $dt_{shake}$ ($s$)                     0.1                  0.1                  0.1          0.1                  0.1
  $\Delta t_{rep}$ ($s$)                 2                    5                    5            15                   15
  -------------------------------------- -------------------- -------------------- ------------ -------------------- -----------------------------------

  \[tabsur\]
Earth
-----
We run simulations with the following set of floor velocities: $v_{floor}=\{0.3,1,3,5,10\}\ \ m/s$. Snapshots at the start and after 50 shakes (100 sec. of simulated time) are presented in Figure \[fsnapbneearth\]. The snapshots correspond to the simulation with floor’s velocity $v_{floor}= 5\ \ m/s$. In the supplementary material we include movies with the complete simulation (movie1 with all the spheres drawn and movie2 with the small spheres erased).
In Figure \[fevolbigearth\] we present the evolution of the big ball’s height as a function of the number of shakes for the different floor velocities. The thick black line marks the height of a box enclosing the 1000 small particles with a random close packing. Random close packing has a maximum packing fraction of $P = 0.64$ ([@Jaeger]). The volume of the enclosing box is calculated as the sum of the volumes of the 1000 small particles divided by the packing fraction; i.e.: $V = 1000\,(\frac{4}{3}\pi R_1^3)/P = 102\ m^3$. For a box with a $6\times6\ m$ base, we obtain a height of the enclosing box of 2.84 $m$. The thick black line is drawn at this height.
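The quoted height of the compact enclosing box follows from this short piece of arithmetic, which only uses the values given in the text:

```python
import math

R1, N, P, base_area = 0.25, 1000, 0.64, 6.0 * 6.0   # ball radius (m), number, packing fraction, base (m^2)

V_solid = N * (4.0 / 3.0) * math.pi * R1**3          # ~65 m^3 of solid material
V_box = V_solid / P                                  # ~102 m^3 when packed at random close packing
height = V_box / base_area                           # ~2.84 m

print(f"V_box = {V_box:.0f} m^3, enclosing height = {height:.2f} m")
```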
![Snapshots at start and after 50 shakes (100 sec. of simulated time) for the simulation under Earth’s gravity. The snapshots correspond to the simulation with floor’s velocity $v_{floor}= 5\ \ m/s$. See the movies in the supplementary material.[]{data-label="fsnapbneearth"}](figure8.eps){width="40.00000%"}
![The evolution of the big ball’s height as a function of the number of shakes for different floor velocities ($v_{floor}=\{0.3,1,3,5,10\}\ \ m/s$) under Earth’s gravity. Note that the floor’s velocity is used as the varying parameter in the shaking process. A black line is drawn at a height of 2.84 $m$, which is the height of a compact enclosing box (see text).[]{data-label="fevolbigearth"}](figure9.eps){width="40.00000%"}
For the two lowest velocities ($v_{floor}=\{0.3,1\}\ \ m/s$) the big ball stays at the bottom, for the two largest ones ($v_{floor}=\{5,10\}\ \ m/s$) it rises to the top, and for the intermediate one ($v_{floor}=3\ \ m/s$) it starts rising but does not reach the top at the end of the simulation.
When the floor’s displacement velocity is below $\sim3\ \ m/s$, the Brazil nut effect does not occur. Above this threshold, the time required by the big ball to reach the top decreases for increasing floor velocities. Note that there is a sharp decrease in the rising time for small changes in the floor’s velocity (from 3 to 5 $m/s$). For large displacement velocities, the balls on the top, including the big one that is 27 times more massive than the small ones, can be lifted to considerable heights, as is seen in the large excursions made by the big ball for $v_{floor}=10\ \ m/s$.
Comparison with other gravity environments
------------------------------------------
Similar simulations were run for other gravity environments, like the surface of the Moon, Ceres, Eros and a very-low gravity environment like the surface of asteroid Itokawa or comet P/Hartley 2. The simulation parameters are presented in Table \[tabsur\].
For the simulation under the very-low gravity environment, we present a movie of 4500 sec. of simulated time (300 shakes) in the supplementary material. The movie corresponds to the simulation with floor’s velocity $v_{floor}= 0.05\ \ m/s$ (movie3 with all the spheres drawn and movie4 with the small spheres erased).
Figure \[fevolbigother\] presents the evolution of the big ball’s height as a function of the number of shakes for the different floor velocities and the different gravity environments: a) Moon, b) Ceres, c) Eros, d) Itokawa.
![The evolution of the big ball’s height as a function of the number of shakes for different floor velocities under different gravity environments: a) Moon; b) Ceres; c) Eros; d) Itokawa. The legends correspond to the floor velocities ($v_{floor}$).[]{data-label="fevolbigother"}](figure10.eps){width="50.00000%"}
As in the case of the simulations under Earth’s gravity, in all the different gravity environments we find a threshold for the floor’s velocity below which the Brazil nut effect does not occur. From the previous plots, we obtain rough estimates of these thresholds. In Figure \[fthreshold\] we plot the velocity thresholds as a function of the surface gravity on a log-log scale. A straight line in log-log space is a good fit to the data points:
$$\log_{10}v_{thre}\left[\ m/s\right]=0.42\log_{10}g\left[\ m/s^{2}\right]+0.05$$
![Comparison of the floor’s velocity threshold for the different gravity environments. The Brazil nut effect does not occur if the floor’s velocity is below the threshold. The floor’s velocity thresholds are presented as small circles. Note that the thresholds are not precisely estimated, because they are computed from the plots in Figures \[fevolbigearth\] and \[fevolbigother\] a-d. A straight line in the log-log space is fitted to the data points. The up triangles represent the escape velocity for the given surface gravity. The escape velocities for the largest objects lie outside the plot. []{data-label="fthreshold"}](figure11.eps){width="40.00000%"}
We conclude that the Brazil nut effect is effective in a wide range of gravity environments, spanning 5 orders of magnitude in surface gravity.
In Figure \[fthreshold\], we plot the escape velocity for the given surface gravity. Note that the floor’s velocity thresholds approach the escape velocity for the low gravity environments. For example, in the case of Itokawa, the escape velocity is $v_{esc}=0.17\ \ m/s$, while the estimated floor’s velocity threshold is $v_{floor}=0.015\ \ m/s$. This point is revisited in Section \[secparlif\].
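A sketch of how the fitted relation can be used, and of its comparison with the escape velocities of Table \[tabsur\], is given below; the coefficients are those of the equation above, and the same numbers could equally be recovered with `np.polyfit` applied to the logarithms of the thresholds read off the plots.

```python
import numpy as np

def v_threshold(g, slope=0.42, intercept=0.05):
    """Floor-velocity threshold (m/s) from the log-log fit
    log10(v_thre) = 0.42 log10(g) + 0.05, with g in m/s^2."""
    return 10.0 ** (slope * np.log10(g) + intercept)

# surface gravities (m/s^2) and escape velocities (m/s) from Table [tabsur]
bodies = {"Earth": (9.81, 11.2e3), "Moon": (1.62, 2.38e3), "Ceres": (0.27, 510.0),
          "Eros": (5.9e-3, 10.0), "Itokawa/P-Hartley2": (1e-4, 0.17)}
for name, (g, v_esc) in bodies.items():
    vt = v_threshold(g)
    print(f"{name:20s} v_thre ~ {vt:8.3g} m/s   v_thre/v_esc ~ {vt / v_esc:.2g}")
```

The last ratio makes explicit how the threshold approaches the escape velocity only in the lowest-gravity environments.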
Density segregation in low-gravity environments {#secden}
===============================================
As mentioned above, other particle-specific properties can affect the segregation process. In particular, the effects of density have been studied the most. For ratios of the density of the large to the small particles much larger than 1 (denser large particles), the segregation effect could be reversed, and the large particles would sink to the bottom, producing the so-called Reverse Brazil Nut Effect (RBNE) ([@Shinbrot], [@Hong]).
However, for particles of similar sizes but different densities, both laboratory ([@Mobius], [@Shi]) and numerical ([@Lim]) experiments have shown that the lighter particles tend to rise and form a pure layer on the top of the system, while the heavier particles and some of the lighter ones stay at the bottom and form a mixed layer. In the Solar System, we might encounter bodies with such a mixture of heavy and light particles. Cometary nuclei are believed to be formed of a mix of icy and rocky material. However, the intimacy of this mixture is still unknown, with two possible scenarios: 1) every particle is made of a mixture of ice and dust, and 2) there exist some particles mainly formed by icy material and some others mainly formed by rocky constituents that are mixed together.
We shall investigate the behavior of a mixture of light and heavy particles under different gravity environments.
For the simulations we create a 3D box similar to the previous one, with a $6\times6\ m$ base and a height of 150 $m$. The box is constructed with elastic mesh walls. On the floor we glue a set of $12\times12$ small balls of radius $R_{1}=0.25\ m$ and density $\rho=2000\ kg/m^{3}$. There are 500 light balls with a normal distribution of radii (mean radius $R_{1}=0.25\ m$, standard deviation $\sigma=0.01\ m$) and density $\rho=500\ kg/m^{3}$. On top of them, there are 500 heavy balls with the same distribution of radii and density $\rho=2000\ kg/m^{3}$. At the beginning of the simulations the balls are placed sparsely, the light balls at the bottom and the heavy ones on top. They free fall and settle down before starting the floor shaking.
Elastic parameters of the particles are the same for both types of particles and similar to the ones used in the previous tests for all the particles: $Y=10^{10}\ Pa$, $A=10^{-3}\ s^{-1}$, $\nu=0.3$, $\kappa=0.4$, $\mu=0.6$, $K=10^{9}\ Pa$.
The floor is displaced with a staircase function in a similar way as in the previous set of simulations.
Two gravity environments were tested: the Earth’s surface gravity and a very-low gravity environment like the surface of comet P/Hartley 2.
Figure \[fsnapdensitylow\] presents snapshots of the initial and final state (after 1300 shakes) for a simulation under the low-gravity environment and a floor velocity of $v_{floor}=0.05\ \ m/s$. In the supplementary material we include movies with the complete simulations (movie5 corresponds to the simulation under Earth’s gravity and $v_{floor}=3\ \ m/s$; movie6 corresponds to the simulation under low gravity and $v_{floor}=0.05\ \ m/s$. Note that in these movies the camera moves with the floor, therefore it seems that the floor is always located in the same position, but it really is moving with the staircase function described above).
![Snapshots of the initial and final state (after 1300 shakes) for a simulation under the low-gravity environment and a floor velocity of $v_{floor}=0.05\ \ m/s$ (see movie5 and movie6 in the supplementary material).[]{data-label="fsnapdensitylow"}](figure12.eps){width="35.00000%"}
At every snapshot, we compute the median height of the light and heavy particles, respectively. These median heights are plotted as a function of the number of shakes in Figure \[fevoldensityearth\] a) for the Earth’s gravity simulations, and b) for the low-gravity ones. For each simulation there are two lines: the one that starts on top corresponds to the heavy particles and the one that starts at the bottom to the light ones. In the Earth environment simulations, the lines do not cross for the two lowest floor velocities, $v_{floor}=\{1,3\}\ \ m/s$; therefore, the particles do not overturn the initial segregation. However, for $v_{floor}=3\ \ m/s$, the lines start to approach each other. For the highest floor velocity, i.e. $v_{floor}=5\ \ m/s$, the lines cross at an early stage of the simulation, after which they remain almost parallel. Most of the light particles move to the top and most of the heavy ones sink to the bottom; the end state is similar to the one seen in Figure \[fsnapdensitylow\] for the low-gravity simulations. Due to the strong shakes, the particles suffer large displacements, but, in a statistical sense, the two sets of particles are segregated. A density segregation is then observed, although it is not complete.
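The segregation metric of Figure \[fevoldensityearth\] can be recomputed from the snapshots as sketched below; the array names and the boolean flag identifying the heavy particles are assumptions about how the output is stored.

```python
import numpy as np

def median_heights(z, is_heavy):
    """Median height of the light and heavy particles at every snapshot.

    z        : (T, N) heights of the free particles above the floor (m)
    is_heavy : (N,) boolean array, True for the heavy (dense) particles
    """
    return np.median(z[:, ~is_heavy], axis=1), np.median(z[:, is_heavy], axis=1)

def first_crossing(z_light_med, z_heavy_med):
    """Index of the first snapshot at which the light-particle median height
    exceeds the heavy-particle one, i.e. the initial stratification has been
    overturned; returns None if the two curves never cross."""
    idx = np.flatnonzero(z_light_med > z_heavy_med)
    return int(idx[0]) if idx.size else None
```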
![The median heights of the light and heavy particles are plotted as a function of the number of shakes for different floor velocities and under different gravity environments: a) Earth, b) low-gravity like P/Hartley 2. The legends correspond to the floor velocities ($v_{floor}$). For each simulation there are two lines: the one that starts on top corresponds to the median height of the heavy particles and the one that starts at the bottom to the median height of the light ones.[]{data-label="fevoldensityearth"}](figure13.eps){width="50.00000%"}
The results of the simulations under a low-gravity environment are presented in Figure \[fevoldensityearth\]b. The lines for the light and heavy particles’ median heights do cross for the three studied floor velocities ($v_{floor}=\{0.03,0.05,0.1\}\ \ m/s$), although for the lowest velocity the simulation does not last long enough to reach the stage where the median heights level off.
Note that in both gravity environments the density segregation is effective for floor velocities above a threshold similar to the ones found for the size segregation effect of Section \[secbne\].
Particle lifting and ejection {#secparlif}
=============================
Let us consider the following simple experiment: we have a layer of material that is uniformly shocked from the bottom. The motivation of this experiment is to consider what would happen if a seismic wave, generated somewhere in a body and propagating through it, reaches another region of the body from below. What would happen to material deposited on the surface? Let us take into account three different materials: a solid block, a compressible fluid and a set of grains. The outcome of the experiment will be different depending on the material. When the seismic wave knocks the solid block, the block is pushed upward as a whole, forming a gap between the layer’s bottom and the floor. In the case of a layer of compressible fluid, an elastic p-wave is transmitted through it, producing compression and rarefaction of the material.
But, what happens in the case of a layer of grains? Before presenting the results of some simulations, let us reconsider the simulations of Newton’s cradle with Hertzian viscoelastic spheres. We have seen that after the first particle knocks the second one from the right, all the particles move to the left. Particle \#4, the last one in the row, moves faster, the next one to the right moves slower and so forth. Therefore, the whole set of particles moves in the same direction, but they do not do so as a compact set; the particles separate from each other.
We perform a first set of simulations with a homogeneous set of particles. In a 3D box with a base of $7.5\times7.5\ m$, a set of $15\times15=225$ particles of radius $R=0.25\ m$ is glued to the bottom. We create 2744 particles with a mean radius $R_{1}=0.25\ m$, standard deviation $\sigma=0.01\ m$ and density $\rho=3000\ kg/m^{3}$. To generate the initial conditions for the simulations, these particles are located a few cm above the bottom and fall freely under the different gravity environments until they settle down.
Elastic parameters of all the particles are similar to the ones used in the previous tests for all the particles: $Y=10^{10}\ Pa$, $A=10^{-3}\ s^{-1}$, $\nu=0.3$, $\kappa=0.4$, $\mu=0.6$, $K=10^{9}\ Pa$.
With the initial conditions generated above, we run the following experiment: after a given time ($t_{sep}$), the floor is vertically displaced at a certain speed ($v_{floor}$) for a short interval ($dt_{shake}$), only once. Two gravity environments are used for the simulations: Earth’s surface and the low-gravity environment of Itokawa. For the Earth’s simulations we use the following set of parameters: $t_{sep}=1\ s$, $v_{floor}=\{1,3,10\}\ \ m/s$, $dt_{shake}=0.1\ s$. At every snapshot, we sort the particles by their height with respect to the floor, and we compute the heights of the particles at the 10% ($h_{10}$) and 90% ($h_{90}$) percentiles. In Figure \[fdiffdensityearth\] we plot the difference of these two quantities ($h_{90}-h_{10}$) for the different floor velocities. We observe that these differences increase with time up to a certain instant when the particles fall back. Therefore, the particles are not moving as a compact set; rather, the upper particles move faster and the particles separate from each other. The upper particles can reach velocities larger than the floor’s velocity; e.g. in the case of $v_{floor}=10\ m/s$, the 10% fastest particles reach velocities of $\sim17\ m/s$ just after the end of the floor’s displacement. We observe that the upper particles are lifted to considerable heights before they fall back.
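A sketch of these two diagnostics (the $h_{90}-h_{10}$ spread and the speed of the fastest decile of particles) is given below, again assuming a (snapshots $\times$ particles) layout of the stored heights and vertical velocities; the layout and function names are illustrative.

```python
import numpy as np

def layer_spread(z):
    """h90 - h10 per snapshot, as plotted in Fig. [fdiffdensityearth].
    z : (T, N) particle heights above the floor (m)."""
    return np.percentile(z, 90, axis=1) - np.percentile(z, 10, axis=1)

def fastest_decile_speed(vz):
    """Mean vertical velocity of the 10% fastest (upward-moving) particles
    at every snapshot; useful for comparing with the floor and escape velocities.
    vz : (T, N) vertical particle velocities (m/s)."""
    n_top = max(1, vz.shape[1] // 10)
    return np.sort(vz, axis=1)[:, -n_top:].mean(axis=1)
```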
![Lifting of particles under Earth gravity. At every snapshot, we sort the particles by their height with respect to the floor, and we compute the heights of the 10% ($h_{10}$) and 90% ($h_{90}$) percentiles. We plot the difference of these two quantities ($h_{90}-h_{10}$) for the different floor velocities.[]{data-label="fdiffdensityearth"}](figure14.eps){width="35.00000%"}
Similar results are obtained in low-gravity simulations, using the following set of parameters: $t_{sep}=10\ s$, $v_{floor}=\{0.01,0.03,0.1\}\ m/s$, $dt_{shake}=0.1\ s$. The upper particles move faster and can reach velocities of up to $\sim0.02, 0.05$ and $0.2\ m/s$ for the respective floor velocities. Note that the escape velocity in this environment is $v_{esc}=0.17\ m/s$, therefore the fastest ejection velocities of the lifted particles are higher than $v_{esc}$. We run another experiment: on top of the layer of particles with mean radius $R_{1}\sim0.25\ m$, we deposit a layer of 2700 smaller particles, with mean radius $R_{2}=0.1\ m$ and standard deviation $\sigma=0.01\ m$. The rest of the physical parameters are the same as for the bigger particles. The aim of this experiment is to check whether the small particles are ejected with higher velocities than the big ones. As in the previous simulations, we order the particles by increasing height. We compute the heights of the 90% percentile of the big ($h_{b,90}$) and of the small particles ($h_{s,90}$). Although the small particles on top of the big ones tend to separate, the differences in the velocities are relatively small. There is no significant ejection of the small particles.
Another relevant result regarding the lifting and ejection of particles from the surface due to an incoming shock from below can be obtained from the Brazil nut effect simulations presented in Section \[secbne\]. In the animations produced with a sequence of snapshots for the simulations where the segregation process was effective, we observe many particles lifted to considerable heights. In Figure \[fheightlow\] we plot the maximum height of the particles as a function of the simulated time for the case of the low-gravity environment and different floor velocities. Note that the ejection velocities the fastest particles can acquire are comparable to the floor’s displacement velocities, and even slightly higher. For a floor velocity of $0.1\ m/s$, the particles can reach an ejection velocity higher than the escape velocity at the surface.
![The maximum height of the particles as a function of the simulated time for the case of the low-gravity environment and different floor velocities.[]{data-label="fheightlow"}](figure15.eps){width="40.00000%"}
Taking into consideration the previous results, we conclude that a layer shocked from below would produce the lifting of particles at the surface if the displacement of the bottom exceeds a certain velocity threshold. Particles can acquire vertical velocities comparable to the displacement velocity of the bottom. For very low-gravity environments, this velocity could be comparable to the escape velocity at the surface. The particles could enter sub-orbital or orbital flights, creating a cloud of gravitationally weakly bound particles around the object.
Global shaking due to impacts and explosions {#secimp}
============================================
In the previous sections we have shown that several physical processes can occur in a layer of granular media when it is shocked from below: size and density segregation, lifting and ejection of particles. A big quake at a distant point could produce such a shock. The quake could be produced by another small object impacting the body or by the release of some internal stress. Interplanetary impacts typically occur at velocities of several $km/s$. These are hypervelocity impacts, i.e. impacts with velocities that are above the sound speed in the target material, which give rise to physical deformation of the target, heating and shock waves spreading out from the impact point. The DEM algorithms described above cannot successfully reproduce this set of phenomena. Therefore, we have to implement a different approach if we are interested in understanding the effect of an impact-induced shock wave passing through a granular medium.
Let us consider a $km$-size agglomerated body, formed by many $m$-size boulders. We raise the following question: what happens, at distances far from the impact point, if a small projectile impacts such an object? Or alternatively, what happens if a large amount of kinetic energy is released in a small volume close to the surface of such an object? In order to answer these questions we run the following set of simulations. We fill spheres of radius 250 and 1000 $m$ with small spheres of a given size range, using the configurations, number of moving particles, and total mass listed in Table \[tabcase\]. For each sphere, we fill the volume with two different distributions of small spheres: one with $\sim 90,000$ particles and another one with a larger number of particles, $\sim 700,000$. We try to make the total mass of the moving particles similar for each of the studied radii.
A time step of $dt = 10^{-4} \ s$ is used in all the simulations. The simulations are run in a cluster with Intel Xeon multi-core processors (Model E5410, at 2.33 GHz, with 12MB Cache). For cases B and D we use up to 8 cores. In these cases, a simulation of 10 $s$ takes $\sim 20 \ hr$ of CPU-time in each core.
  ------------------------------------------------------------------ ------------ -------- --------- --------
  Case                                                                A            B        C         D
  Radius ($m$)                                                        250          250      1000      1000
  Size range of spheres ($m$)                                         2.5 - 12.5   1 - 10   10 - 50   5 - 25
  Number of particles                                                 88570        783552   89144     688443
  Porosity                                                            0.31         0.22     0.31      0.31
  Total Mass ($10^{12}kg$)                                            0.135        0.152    8.66      8.63
  Escape velocity at surface ($m/s$)                                  0.269        0.285    1.075     1.073
  Number of initially moving particles                                10           140      10        200
  Mass of moving particles ($10^{6}kg$)                               21           21       599       608
  Equivalent projectile radius, $\epsilon_{KE}=1$, 100 $m/s$ ($m$)    9.88         0.88     2.67      2.68
  Equivalent projectile radius, $\epsilon_{KE}=1$, 500 $m/s$ ($m$)    2.57         2.56     7.82      7.85
  Equivalent projectile radius, $\epsilon_{LM}=1$, 100 $m/s$ ($m$)    3.26         3.23     9.84      9.89
  Equivalent projectile radius, $\epsilon_{LM}=1$, 500 $m/s$ ($m$)    5.53         5.52     16.83     16.92
  Kinetic energy / potential energy, 100 $m/s$                        36           29       1         1
  Kinetic energy / potential energy, 500 $m/s$                        907          710      25        26
  Specific energy, 100 $m/s$ ($J/kg$)                                 0.79         0.69     0.35      0.35
  Specific energy, 500 $m/s$ ($J/kg$)                                 20           17       8.6       8.8
  ------------------------------------------------------------------ ------------ -------- --------- --------

  \[tabcase\]
Since we cannot successfully simulate the physics of a hypervelocity impact during the very short initial stages, we implemented another approach. At a given point on the surface we select a certain number of particles of the body that are close to this point. At the beginning of the simulation, each of these particles is given a velocity along the radial vector, toward the centre. We substitute the impact with a near-surface underground explosion, where several particles are released at a given speed. For each set of configurations listed in Table \[tabcase\], we run simulations with initial particle velocities of 100 $m/s$ and 500 $m/s$. These velocities are well below the sound speed in the target material.
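A sketch of how such an initial condition can be constructed is shown below; it does not use the ESyS-particle API itself, the surface point is taken at $x=y=z=R/\sqrt{3}$ (the explosion location quoted later in this section), and the choice of the launched particles as simply the ones nearest to that point is an assumption.

```python
import numpy as np

def explosion_initial_velocities(pos, R_body, v0, n_select):
    """Give the n_select particles closest to the chosen surface point an
    inward radial velocity of magnitude v0.

    pos     : (N, 3) particle centre positions, body centred at the origin (m)
    R_body  : body radius (m)
    v0      : initial speed of the selected particles (m/s)
    """
    surf_point = R_body * np.ones(3) / np.sqrt(3.0)   # x = y = z = R/sqrt(3)
    order = np.argsort(np.linalg.norm(pos - surf_point, axis=1))
    sel = order[:n_select]                            # particles nearest to the surface point

    vel = np.zeros_like(pos, dtype=float)
    r_hat = pos[sel] / np.linalg.norm(pos[sel], axis=1, keepdims=True)
    vel[sel] = -v0 * r_hat                            # along the radial vector, toward the centre
    return vel
```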
Since these initial conditions would correspond to a stage after the impact where some energy has already been spent in the compression, fracturing and heating of the target material, we cannot equate the sum of the kinetic energy of the moving particles with the kinetic energy of the impactor. However, we can provide a lower limit to the kinetic energy of the impactor by assuming an efficiency factor $\epsilon_{KE}=1$, or a corresponding lower limit of the impactor size for a given impact velocity. In Table \[tabcase\], we also present the radius of the equivalent projectile for the two sets of initial particle velocities, assuming an energy efficiency factor $\epsilon_{KE}=1$ and an impact velocity of 5 $km/s$. For lower values of the efficiency factor, the projectile radius would scale with $\epsilon_{KE}^{-1/3}$. As we have seen in the simulations of the Newton’s cradle with viscoelastic interactions, there is a considerable loss of kinetic energy after a series of collisions, although the total linear momentum is conserved. As far as we know, there is very limited data on the transfer of momentum in hypervelocity impacts, and we do not know the efficiency factor of this transfer ($\epsilon_{LM}$). A similar estimate of the lower limit for the impactor size can be made by assuming a momentum efficiency factor $\epsilon_{LM}=1$ and an impact velocity of 5 $km/s$. In Table \[tabcase\], we present the radius of the equivalent projectile for the two sets of initial particle velocities. The projectile radius would scale with $\epsilon_{LM}^{-1/3}$.
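The equivalent projectile radii listed in Table \[tabcase\] can be reproduced with the simple conversion sketched below; the projectile density (taken equal to the $3000\ kg/m^{3}$ of the target particles) and the function interface are assumptions made for illustration.

```python
import numpy as np

def equivalent_projectile_radius(m_moving, v_particles, v_impact=5.0e3,
                                 rho_proj=3000.0, eps=1.0, mode="energy"):
    """Radius (m) of a projectile hitting at v_impact that delivers, with
    efficiency eps, the same kinetic energy (mode='energy', eps = eps_KE) or
    the same linear momentum (mode='momentum', eps = eps_LM) as a total mass
    m_moving of particles launched at v_particles."""
    if mode == "energy":
        m_proj = 0.5 * m_moving * v_particles**2 / (eps * 0.5 * v_impact**2)
    else:
        m_proj = m_moving * v_particles / (eps * v_impact)
    return (3.0 * m_proj / (4.0 * np.pi * rho_proj)) ** (1.0 / 3.0)

# e.g. case D with particles launched at 500 m/s (moving mass ~608e6 kg)
print(equivalent_projectile_radius(608e6, 500.0, mode="energy"))    # ~7.9 m
print(equivalent_projectile_radius(608e6, 500.0, mode="momentum"))  # ~16.9 m
```

Since the projectile mass scales as $1/\epsilon$, the radius scales as $\epsilon^{-1/3}$, as stated above.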
The location of the explosion is always at the surface, with angular coordinates ($latitude=45^{\circ}$, $longitude=45^{\circ}$). In Figure \[fsnapexp\] we present snapshots showing the propagation of the wave into the interior, by using slices passing through the centre of the sphere, the explosion point and the poles. Figures \[fsnapexp\] a and b correspond to the simulations with a body radius of 250 $m$, the largest number of particles ($N=783552$) and particle velocities of 100 $m/s$ (case B-100). Snapshot a is at 0.4 $s$ after the explosion and b is at 2 $s$. The particles are coloured using a colour bar that scales with the modulus of the velocity. On the other hand, Figures \[fsnapexp\] c and d correspond to the simulations with a body radius of 1000 $m$, the largest number of particles ($N=688443$) and particle velocities of 500 $m/s$ (case D-500). Snapshot c is at 3 $s$ after the explosion and d is at 6 $s$. In the supplementary material we present movies of these simulations (movie7 corresponds to case B-100, movie8 to case B-500, movie9 to case D-100, and movie10 to case D-500). In the movies we observe the variation of the velocity of the particles in a slice passing through the centre of the sphere, the explosion point and the poles, with the same velocity colour scale.
![Snapshots of the sphere explosions simulations. These are slices passing through the centre of the sphere, the explosion point and the poles. Snapshots a) and b) correspond to the simulations with body radius of 250 $m$, the largest number of particles ($N=783552$) and particles velocities of 100 $m/s$ (case B-100). Snapshot a is at 0.4 $s$ after the explosion and b is at 2 $s$. The particles are coloured using a colour bar that scales with the modulus of the velocity. Snapshots c) and d) correspond to the simulations with body radius of 1000 $m$, the largest number of particles ($N=688443$) and particles velocities of 500 $m/s$ (case D-500). Snapshot c is at 3 $s$ after the explosion and d is at 6 $s$. (see movies in the supplementary material)[]{data-label="fsnapexp"}](figure16.eps){width="50.00000%"}
We note that a shock front with a spherical shape propagates into the interior from the explosion point. On the surface, there appears a layer of fast-moving particles that extends until it intersects with the spherical front, creating, inside the volume limited by the surface layer and the spherical front, a cavity of slow-moving particles. The velocity of the propagation front has a weak dependence on the velocity of the initial particles. For example, in the simulations of the smaller body (case B-100), the propagation shock requires 1.8 $s$ to reach the antipodes of the explosion point, implying a velocity of 278 $m/s$. In the case B-500, the required time is 1.2 $s$, and the velocity 416 $m/s$. For the largest body, the figures are: case D-100: time 9.6 $s$, velocity 208 $m/s$; case D-500: time 5.8 $s$, velocity 435 $m/s$. Although the initial velocity of the moving particles increases by a factor of 5 between the cases, the velocity of the propagation shock increases only by a factor of $\sim$2. The velocity of the propagation shock is quite constant while the shock travels through the interior.
We are interested in the effects of the explosion at large distances from the explosion point. The body is divided into 8 octants. The explosion occurs on the surface at the centre of the first octant (in Cartesian coordinates the first octant is: $x>0\ \&\ y>0\ \&\ z>0$; and the explosion point is at: $x=y=z=R/\sqrt{3}$, $R$ - radius). We analyse the distribution of ejection velocities of the particles close to the surface ($r>0.8R$) in the other 7 octants. Histograms of these distributions are presented in Figure \[fejecvel\] a and b. In Figure \[fejecvel\]a there are two overlapping histograms which correspond to the cases B-100 and B-500, while in Figure \[fejecvel\]b they correspond to the cases D-100 and D-500. A vertical line marking the escape velocity for each body is included in the plots. Note that for the smallest object and for both initial velocities, there is a significant fraction of particles that acquire ejection velocities over the escape limit. Considering the total fraction of particles with velocities over this threshold (not only the ones near the surface), we obtain values of 18% in the case B-100, and 81% in the case B-500. In the case of the largest body, there is a significant fraction of escaping particles only for the largest initial velocity. The total fraction of escaping particles is 0.6% in the case D-100, and 100% in the case D-500. For the simulations with initial velocities of 500 $m/s$, there is a total disruption of both bodies ($>50\%$ of the mass is ejected at velocities over the escape one). It is out of the scope of this paper to derive the disruption laws for this type of experiment; we just mention that, with a set of experiments like the previous ones, we could obtain the kinetic energy threshold over which the explosions lead to a total disruption of the body as a function of size. In Table \[tabcase\] we also include the ratio between the kinetic energy of the moving particles and the potential energy of the body, as well as the specific energy (defined as the deposited energy per unit mass). [@Housen] have defined the critical specific energy ($Q^{*}$) as the energy per unit mass necessary to catastrophically disrupt a body. [@Ryan] presents a plot comparing different estimates of $Q^{*}$ by several authors as a function of the target radius. Note that the largest body ($R=1000\ m$) is more disrupted than the smallest body ($R=250\ m$) even though its specific energy is lower; this is in agreement with the dip in the $Q^{*}\ vs\ R$ plot ([@Ryan]) in this radius range.
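The energy ratios and specific energies of Table \[tabcase\] follow from elementary estimates like the ones sketched below; the use of the binding energy of a uniform sphere, $3GM^{2}/(5R)$, as the "potential energy of the body" is an assumption, although it reproduces the tabulated values.

```python
import numpy as np

G = 6.674e-11   # gravitational constant (SI units)

def explosion_energetics(M_body, R_body, m_moving, v_particles):
    """Escape velocity, kinetic-to-potential energy ratio and specific
    (deposited) energy for particles of total mass m_moving launched at
    v_particles inside a body of mass M_body and radius R_body."""
    v_esc = np.sqrt(2.0 * G * M_body / R_body)        # surface escape velocity (m/s)
    E_pot = 3.0 * G * M_body**2 / (5.0 * R_body)      # binding energy of a uniform sphere (J)
    E_kin = 0.5 * m_moving * v_particles**2           # kinetic energy of the moving particles (J)
    return v_esc, E_kin / E_pot, E_kin / M_body       # (m/s, dimensionless, J/kg)

# case A of Table [tabcase]: R = 250 m, M = 0.135e12 kg, 21e6 kg launched at 100 m/s
print(explosion_energetics(0.135e12, 250.0, 21e6, 100.0))   # ~ (0.27 m/s, 36, 0.78 J/kg)
```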
![The distribution of the ejection velocity of the particles for the simulated explosion. a) Simulations with body radius of 250 $m$ and the largest number of particles ($N=783552$) (case B). Two histograms are presented for initial velocities of 100 and 500 $m/s$. b) Simulations with body radius of 1000 $m$ and the largest number of particles ($N=688443$) (case D). Two histograms for initial velocities of 100 and 500 $m/s$. In each plot, a vertical dashed line is drawn at the value of the escape velocity at the surface.[]{data-label="fejecvel"}](figure17.eps){width="50.00000%"}
Except for the case of low-velocity explosions for the large body, in all the other simulations a fraction of the near-surface particles far from the explosion point acquire velocities above the escape velocity (see Figure \[fejecvel\]). Therefore, an explosion would induce the ejection of particles from the surface at low velocities. These particles could either enter into orbit around the body or slowly escape from it, producing a cloud of fine particles that may take many days to disappear. This result is complementary to the one obtained in Section \[secparlif\] regarding the lifting and ejection of particles produced by a shake coming from below the surface.
In the case of the smallest body, even the low-velocity explosions would induce displacement velocities of several tenths of a $m/s$ on many near-surface particles far from the explosion point (see Figure \[fejecvel\]). This displacement would produce a shake coming from below, similar to the shakes simulated in Section \[secbne\]. The surface gravity of the smallest body is similar to the surface gravity used in the low-gravity simulations of Section \[secbne\], and for the largest body the conditions are similar to the simulation of Eros. Looking back at Figure \[fsnapexp\], we conclude that explosion events like the ones produced in our simulations would be enough to induce the shaking required to produce size and density segregation on the surface of these bodies.
This process of shaking the entire object after an impact is suitable for small bodies where the escape velocity is comparable to the impact induced displacement velocity at large distance from the impact point. Further work should study up to which body sizes the shaking process is expected to occur.
Conclusions and applications of the results {#secdis}
===========================================
The main objective of this paper is to present the applications of Discrete Element Methods to the study of the physical evolution of agglomerates of rocks under low-gravity environments. We have presented some initial results regarding processes like size and density segregation due to repeated shakings or knocks, the lifting and ejection of particles from the surface due to an incoming shock, and the effect of a surface explosion on a spherical agglomerated body. We recall that our shaking process consists of a repeated set of knocks.
The main conclusions of these preliminary results are:
- A shaking-induced size segregation (the so-called Brazil nut effect) does occur even in the low-gravity environments of the surfaces of small Solar System bodies, like $km$-size asteroids and comets.
- A shaking-induced density segregation is also observed in these environments, although it is not complete.
- A particle layer shocked from below would produce the lifting of particles at the surface, which can acquire vertical velocities comparable to the surface escape velocity in very low-gravity environments.
- A surface explosion, like the one produced by an impact or the release of energy by the liberation of internal stresses or by the re-accommodation of material, would induce a shock transmitted through the entire body, and the ejection of surface particles at low velocities at distances far from the explosion point. This process is only suitable for small bodies.
The application of these results to real cases will be the subject of further papers, but we foresee some situations where the results presented here will be relevant:
- The internal structure of asteroid Itokawa and similar small asteroids formed as an agglomerate of $m$-size particles, and the relevance of the Brazil nut effect produced by repeated impacts.
- The non-uniform distribution of active zones in comets, like P/Hartley 2, and the internal density segregation of icy and rocky boulders produced by shakes caused by explosions and impacts.
- The formation of dust clouds at low escaping velocities after an impact onto a $km$-size asteroid.
The supplementary online material can be accessed at:\
<http://www.astronomia.edu.uy/Publications/Tancredi/Granular_Physics/>
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank Dion Weatherley, Steffen Abe and the ESyS-particle users community for helpful suggestions about the package. We thank Mariana Martínez Carlevaro for a careful reading of the text and many linguistic suggestions.
Abe, S., Mair, K. 2005, Geophysical Research Letters, 32, L05305, doi:10.1029/2004GL022123
Abe, S., Place, D., Mora, P. 2004, Pure Appl. Geophys., 161, 2265-2277
Abe, S., Latham, S., Mora, P. 2006, Pure Appl. Geophys. 163, 1881–1892
Asphaug, E. 2007, Science 316, 993-994
Cundall, P., Stark, P. 1979, Geotechnique, 29, 47-65
Durda, D., Movshovitz, N., Richardson, D., Asphaug, E., Morgan, A. Rawlings, A., Vest, C. 2011, Icarus, 211, 849–855
Heredia, L., Richeri, P. 2009, Paralelismo aplicado al estudio de medios granulares, Proyecto de Grado, Inst. Computación, Fac. Ingeniería, UdelaR, Uruguay
Hertz, H. 1882, J. f. reine u. angewandte Math., 92, 156-171
Hong, D., Quinn, P., Luding, S. 2001, Phys. Rev. Let., 86, 3423-3426
Housen, K., Holsapple, K. 1990, Icarus, 84, 226-253
Imre, B., Räbsamen, S., Springman, S. 2008, Computers & Geosciences, 34, 339–350
Jaeger, H., Nagel, S. 1992, Science 255, 1523
Jullien, R., Meakin, P. 1992, Phys. Rev. Lett., 69, 640-643
Knight, J., Jaeger, H., Nagel, S. 1993, Phys. Rev. Lett., 70, 3728–31
Kudrolli, A. 2004, Rep. Prog. Phys., 67, 209-247
Lim, E. 2010, American Institute of Chemical Engineers Journal, 56, 2588-2597
Mair, K., Abe, S. 2008, Earth and Planetary Science Letters, 274, 72–81
Möbius, M., Lauderdale, B., Nagel, S., Jaeger, H. 2001, Nature, 414, 270
Nahmad-Molinari, Y., Canul-Chay, G., Ruiz-Suárez, J.C. 2003, Phys. Rev. E, 68, 041301
Pak, H., Van Doom, E., Behringer R. 1995, Phys. Rev. Let., 74, 4643-4646
Pöschel, T., Schwager, T. 2005, Computational Granular Dynamics (Springer-Verlag, Berlin Heidelberg)
Richardson, Jr. J., Melosh, H., Greenberg, R., O’Brien, D. 2005, Icarus, 179, 325-349.
Rosato, A., Strandburg, K., Prinz, F., Swendsen, R. 1987, Phys. Rev. Let., 58, 1038-1040
Ryan, E. 2000, Annu. Rev. Earth Planet. Sci., 28, 367–389
Scheeres, D. 2010, Icarus, 210, 968–984
Schopfer, M., Abe, S., Childs, C., Walsh, J. 2009, International Journal of Rock Mechanics and Mining Sciences, 46, 250–261
Schwager, T., Pöschel, T. 2008, Phys. Rev. E, 78, 51304, 1-12
Shinbrot, T., Muzzio, F. 1998, Phys. Rev. Let., 81, 4365-4368
Shi, Q., Sun, G., Hou, M., Lu, K. 2007, Phys. Rev. E, 75, 61302, 1-4
Timoshenko, S., Goodier, J. 1970, Theory of Elasticity, third ed. (McGraw– Hill, New York)
Wada, K., Senshu, H., Matsui, T. 2006, Icarus, 180, 528–545
Williams, J. 1963, Fuel Soc. J., 14, 29–34
Supplementary online material for *“Granular physics in low-gravity environments using DEM”* {#supplementary-online-material-for-granular-physics-in-low-gravity-environments-using-dem .unnumbered}
============================================================================================
Here you will find the set of movies included in the article “Granular physics in low-gravity environments using DEM” by Tancredi et al. (MNRAS, 2011).
The supplementary online material can be accessed at:\
<http://www.astronomia.edu.uy/Publications/Tancredi/Granular_Physics/>
Size segregation (the Brazil nut effect) simulations {#size-segregation-the-brazil-nut-effect-simulations .unnumbered}
====================================================
A 3D box is constructed with elastic mesh walls. The box has a base of $6\times6\ m$ and a height of 150 $m$. A set of $12\times12$ small balls of radius $R_{1}=0.25\ m$ is glued to the floor. The big ball has a radius $R_{2}=0.75\ m$, and on top of it, there are 1000 small balls with radii $R_{1} \sim 0.25\ m$.
The floor is displaced with a staircase function as described in the paper with different velocities.
We present movies for two set of simulations: a) under Earth’s gravity (surface gravity $g\ = 9.81 \ m/s^{2}$) and a floor’s velocity ($v_{floor}= 5 \ m/s$), b) in a low-gravity environment ($g\ = 10^{-4} \ m/s^{2}$) and ($v_{floor}= 0.05 \ m/s$).
movie1.avi is a movie with all the spheres drawn and movie2.avi with the small spheres erased for the first simulation. The movies correspond to 100 seconds of simulated time and 50 shakes.
Movie3.avi and movie4.avi correspond to the second one. The movies correspond to 10000 seconds of simulated time and 667 shakes.
Density segregation simulations {#density-segregation-simulations .unnumbered}
===============================
A 3D box similar to the previous one is created, with a $6\times6\ m$ base and a height of 150 $m$. The box is constructed with elastic mesh walls. On the floor we glue a set of $12\times12$ small balls of radius $R_{1}=0.25\ m$ and density $\rho=2000\ kg/m^{3}$. There are 500 light balls with radii $R_{1} \sim 0.25\ m$ and density $\rho=500\ kg/m^{3}$. On top of them, there are 500 heavy balls with similar radii and density $\rho=2000\ kg/m^{3}$. At the beginning of the simulations the balls are placed sparsely, the light balls at the bottom and the heavy ones on top. They free fall and settle down before starting the floor shaking.
The floor is displaced with a staircase function in a similar way as in the previous set of simulations.
We present movies for two set of simulations: a) under Earth’s gravity (surface gravity $g\ = 9.81 \ m/s^{2}$) and a floor’s velocity ($v_{floor}= 3 \ m/s$), b) in a low-gravity environment ($g\ = 10^{-4} \ m/s^{2}$) and ($v_{floor}= 0.05 \ m/s$).
movie5.avi is a movie of the first simulation, while movie6.avi corresponds to the second one. Movie5 corresponds to 1000 seconds of simulated time and 500 shakes, while movie6 corresponds to 20000 seconds of simulated time and 1333 shakes.
Note that in these movies the camera moves with the floor, therefore it seems that the floor is always located in the same position, but it really is moving with the staircase function described above.
In movie5 the density segregation is not reached; while in movie6, most of the light particles move to the top and most of the heavy ones sink to the bottom.
Global shaking due to impacts and explosions {#global-shaking-due-to-impacts-and-explosions .unnumbered}
============================================
We consider a km-size agglomerated body, formed by many small size boulders. We fill a sphere of radius 250 and 1000 $m$ with $\sim 700,000$ small spheres of a given size range (1-10 $m$-size boulders in the case of the small body, and 5-25 $m$-size boulders for the big body).
At a given point on the surface we select a certain number of particles of the body that are close to this point. At the beginning of the simulation, each of these particles is given a velocity along the radial vector, toward the centre. The location of the explosion is always at the surface, with angular coordinates ($latitude=45^{\circ}$, $longitude=45^{\circ}$). We run simulations with initial particle velocities of 100 $m/s$ and 500 $m/s$.
In the movies we present snapshots showing the propagation of the wave into the interior. These are slices passing through the centre of the sphere, the explosion point and the poles. The particles are coloured using a colour bar that scales with the modulus of the velocity.
movie7.avi and movie8.avi correspond to the simulation with a body of radius 250 $m$, $N=783552$ small particles and 140 particles with initial velocities of 100 $m/s$ (case B-100) and 500 $m/s$ (case B-500), respectively.
movie9.avi and movie10.avi correspond to the simulation with a body of radius 1000 $m$, $N=688443$ small particles and 200 particles with velocities of 100 $m/s$ (case D-100) and 500 $m/s$ (case D-500). All the movies correspond to 10 seconds of simulated time.
\[lastpage\]
[^1]: E-mail: [email protected]
---
author:
- 'R. Hounsell [^1]'
- 'M. J. Darnley'
- 'M. F. Bode'
- 'D. J. Harman'
- 'L. A. Helton'
- 'G. J. Schwarz'
date: 'Received date / Accepted date'
title: 'A very luminous, highly extinguished, very fast nova - V1721 Aquilae'
---
Introduction and Observations {#sec:intro}
=============================
Classical novae (CNe) occur in interacting binary systems consisting of a white dwarf (WD) primary and a main sequence secondary star, which fills its Roche-lobe. They are a subclass of the cataclysmic variables (CVs). Hydrogen-rich material is transferred from the secondary and deposited onto the surface of the WD, usually via an accretion disc [see @CNbook08 for recent review papers]. A thermonuclear runaway eventually occurs within the accreted hydrogen-rich layer on the WD surface leading to a nova outburst [see @Starrfieldpaper]. CNe tend to exhibit outburst amplitudes of approximately 10-20 magnitudes [@Shara] and eject $10^{-5}-10^{-4}$ M$_{\odot}$ of material at velocities between hundreds to a few thousand km $\mathrm{s^{-1}}$ [@Prialnik], with outbursts occurring approximately once every $10^{4}-10^{5}$ years. If a nova system is seen to have more than one outburst, it is classed as a recurrent nova (RN). RNe undergo outbursts on a time-scale of decades up to $\sim$ 100 years and tend to have higher ejection velocities and lower ejected masses than CNe [@Anupama]. The secondary star in a RN system is often an evolved star such as a sub-giant or red giant [see also review by @Bode10].
Novae can be grouped into classes depending on their speed of decline from maximum light [“speed classes”, @PG] or the dominance of certain non-Balmer emission lines in their post-outburst spectrum [@Williams]. These emission lines are either those of Fe II or He/N, yielding two spectral classes. The spectra of He/N novae tend to exhibit “boxy” structures and high expansion velocities, whereas Fe II spectra are more Gaussian and have lower expansion velocities. Recurrent novae typically show features consistent with the He/N novae.
In the Milky Way the distribution of novae shows a strong concentration towards the Galactic plane and the bulge. It has been shown by @Della that fast novae are more concentrated toward the Galactic plane (*z* $<$ 100 pc) than slow novae, which are associated with the Galactic bulge extending up to 1 kpc. Fast novae are more luminous at peak than slow novae and possess more massive and luminous WDs. Work by @Duerbeck indicates that many of these bright novae, however, go undetected. The reasons for this are high interstellar extinction inherent in the Galactic plane, and that for many novae their speed of decline alone makes them difficult to detect [see @Warner; @Hounsell for discussion]. Even now, many fast highly extinguished novae are only ever detected and tracked by amateur astronomers.
Nova V1721 Aquilae ($\alpha=19^{h}06^{m}28^{s}\!\!.58, \delta=+7^{\circ}06^{\prime}44^{\prime\prime}\!\!.3$; J2000) was discovered on 2008 September 22.5 UT by K. Itagaki. The outburst was confirmed on 2008 September 22.586 UT and reached a peak unfiltered magnitude of 14.0. Discovery of the nova was presented in @Yamaokab along with an initial spectral investigation. Post-outburst spectra were obtained on 2008 September 25.19 and 25.25 UT using the Steward Observatory Bok 2.29m telescope on Kitt Peak via the Boller $\&$ Chivens optical spectrograph (details of the instrumental set-up can be found in Section \[subsec:spectra\]). Initial analysis of the spectra revealed a broad triple-peaked H$\alpha$ emission profile with a full width half maximum (FWHM) of 6450 km s$^{-1}$, along with O I 7773 Å and O I 8446 Å structures [@Helton]. The ejecta velocities derived from the initial analysis of the spectra were very high indeed for a typical CN and there was initial suspicion that it might be a supernova (S. J. Smartt - private communication). Spectra also indicated that the extinction towards the object was high; by comparison with other novae at similar evolutionary phases it was estimated to be $A_{V}\approx 9.3$. Hence, the distance to the nova was initially derived as 5 kpc, by assuming at maximum $M_{V} \approx-9$ [@Helton].
This paper aims to determine the nature of V1721 Aql through the examination of available photometric and spectroscopic data. Archival pre-outburst Two Micron All-Sky Survey [2MASS; @Skrutskie] photometry has enabled the determination of the spectral type of the secondary star within the system. Post-outburst photometric and spectroscopic data have been used to obtain the speed class of the nova, its extinction, the average ejection velocity of the system and potential spectral class. Section \[sec:data\] presents the analysis of the data; Section \[sec:discussion\] discusses the results found and the subsequent classification of the object.
Data Analysis {#sec:data}
=============
Distance determination {#subsec:distance}
----------------------
After outburst, V1721 Aql continued to be monitored photometrically until 2008 October 6 UT; these results are reproduced in Figure \[figure1\]. These data indicate that $t_{2} \approx 6$ days for the nova, classifying it as very fast [@PG]. Using these data and the maximum magnitude-rate of decline relation [MMRD, @Mclaughlin] with parameters from @Downes, we derive an absolute magnitude $M_{V} = -9.4 \pm 0.5$.
The extinction towards V1721 Aql is thought to be extremely high, with, as noted above, an estimate of $A_{V} \approx 9.3$ given in @Helton. This, however, is based purely upon comparisons with other novae at a similar early evolutionary state. In order to obtain an independent extinction estimate, we made use of the @Rowles extinction maps, which have a high spatial resolution and are able to detect a greater number of small-scale high extinction cores compared to other maps. These extinction maps are generated using the 100 nearest neighbour stars and give an $A_{V} = 11.6\pm0.2$, much higher than the original estimate. Using the more accurate extinction from @Rowles, we derive a distance to V1721 Aql of $2.2\pm 0.6$ kpc.
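The numbers above combine in the usual way; the sketch below reproduces them, with the caveat that the MMRD coefficients shown (a Downes & Duerbeck-type linear calibration) and the use of the unfiltered peak magnitude of 14.0 as $m_{V}$ are assumptions made only for illustration.

```python
import math

def mmrd_absolute_magnitude(t2_days, a=-11.32, b=2.55):
    """Linear MMRD calibration M_V = a + b*log10(t2); with the coefficients
    assumed here it gives M_V ~ -9.3 for t2 ~ 6 d, consistent with the value
    quoted in the text."""
    return a + b * math.log10(t2_days)

def distance_kpc(m_max, M_max, A_V):
    """Distance from the extinction-corrected distance modulus
    (m - M)_0 = m_max - M_max - A_V."""
    return 10.0 ** ((m_max - M_max - A_V) / 5.0 + 1.0) / 1.0e3

M_V = mmrd_absolute_magnitude(6.0)
print(M_V, distance_kpc(14.0, M_V, 11.6))   # ~ -9.3 mag and ~2.2 kpc
```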
The Galactic coordinates of the object are l$^{II}$=41$^o$, b$^{II}$=-0.1$^o$. This indicates that nova V1721 Aql is located very close to the Galactic plane with *z* = 2.5 pc, and is in a region of the sky in which it is typically very difficult to observe novae because of high extinction along the line of sight.
![Apparent magnitudes of Nova V1721 Aql as observed by K. Itagaki (squares - unfiltered) and R. King (triangles - Visual). These results are presented in VSNET and AAVSO. The two additional tick marks on the x-axis represent the dates on which the Blue (see Figure \[figure2\]) and Red (see Figure \[figure3\]) spectra were taken.[]{data-label="figure1"}](16085fg1.eps){width="\columnwidth"}
Post-outburst Spectra {#subsec:spectra}
---------------------
Post-outburst spectra were obtained on 2008 September 25.19 and 25.25 UT using the Steward Observatory Bok 2.29m telescope on Kitt Peak with the Boller $\&$ Chivens optical spectrograph, and are presented in Figures \[figure2\] and \[figure3\]. The “Blue” set-up utilised a 400 l mm$^{-1}$ 1$^{\mathrm{st}}$ order grating with a UV blocking filter to prevent order contamination below $\sim 3600$ Å. The spectral coverage was from $\sim 3600$ to $\sim 6750$ Å at a spectral resolution of roughly 2.8 Å pixel$^{-1}$. The “Red” set-up was identical but with the grating centred near 7600 Å, providing coverage from $\sim 6000$ to $\sim 9250$ Å, and with a blocking filter effective below 4800 Å. Flat fielding was performed using continuum arc lamps. Red observations at wavelengths beyond $\sim 7700$ Å are subject to fringing effects arising at the CCD that cannot be corrected by flat fielding. The effect of this fringing on the data depends upon the target position on the sky and the target intensity. Wavelength calibration was performed using He-Ar-Ne calibration lamps at each target position. The spectroscopic standard Wolf 1346 was used for flux calibration. Spectra have also been corrected for heliocentric velocity and reddening ($A_V$ = 11.6). All data reduction was performed in IRAF following standard optical data reduction procedures.
![Heliocentric velocity and extinction corrected ($A_{V}$ = 11.6, see Section \[subsec:distance\]) optical spectrum of V1721 Aql, taken on 2008 September 25.19 (2.69 days after discovery) with the Steward Observatory Bok 2.29m telescope.[]{data-label="figure2"}](16085fg2.eps){width="\columnwidth"}
![Heliocentric velocity and extinction corrected ($A_{V}$ = 11.6, see Section \[subsec:distance\]) optical spectrum of V1721 Aql, taken on 2008 September 25.25 (2.75 days after discovery) with the Steward Observatory Bok 2.29m telescope.[]{data-label="figure3"}](16085fg3.eps){width="\columnwidth"}
The Blue spectrum of V1721 Aql is presented in Figure \[figure2\]. It is important to note that this spectrum is devoid of detectable emission lines blue-wards of H$\alpha$, likely owing to the very high extinction. Given the absence of H$\beta$ emission in the spectrum, a lower limit on the extinction can be obtained using the Balmer decrement for Case B H I recombination and the limiting observed intensity ratio of H$\alpha$ to H$\beta$. We note that this spectrum was taken early in the nova outburst and, although the nova is very fast, conditions may not yet be those of Case B. From this we estimate a lower limit of A$_{V}$ $\geq$ 8. This value is consistent with both of the above determinations of $A_{V}$ and helps to confirm that the extinction is indeed high.
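A minimal sketch of this estimate is given below (Python). The Case B ratio of 2.86 and the extinction-curve values $k(\mathrm{H}\beta) \approx 3.61$ and $k(\mathrm{H}\alpha) \approx 2.53$ (for $R_{V} = 3.1$) are standard assumed values, and the limiting observed H$\alpha$/H$\beta$ ratio is a placeholder set by the noise level of the Blue spectrum rather than a measured quantity.

```python
import math

R_INTRINSIC = 2.86            # Case B value for F(Halpha)/F(Hbeta)
K_HALPHA, K_HBETA = 2.53, 3.61  # assumed A(lambda)/E(B-V) for R_V = 3.1

def av_lower_limit(ratio_obs_limit, R_V=3.1):
    """A_V implied if the observed Halpha/Hbeta ratio is at least ratio_obs_limit."""
    ebv = 2.5 / (K_HBETA - K_HALPHA) * math.log10(ratio_obs_limit / R_INTRINSIC)
    return R_V * ebv

# Placeholder: Hbeta undetected, so the observed ratio exceeds ~40 at this noise level
print("A_V >= %.1f" % av_lower_limit(40.0))   # ~8, as quoted in the text
```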
The Red spectrum of V1721 Aql is shown in Figure \[figure3\] and indicates the presence of a triple-peaked H$\alpha$ emission line along with emission structures corresponding to O I 7773 Å and O I 8446 Å. It is necessary to determine whether the “boxy” structure around H$\alpha$ consists purely of H$\alpha$ or of combined lines of H$\alpha$ $+$ \[N II\] 6482, 6548, 6584, 6611 Å. However, although \[N II\] is expected in the spectra, this early in the outburst the \[N II\] line strength is unlikely to be significant in comparison to H$\alpha$, and so it is an unlikely contributor to the boxy structure. An absence of \[N II\] altogether may simply be owing to the fact that the nova has indeed been caught at maximum and these lines tend to develop a little later in the outburst. For example, the fast, potentially U Sco-like nova V2491 Cyg showed evidence of \[N II\] 4.62 days after peak magnitude, with these lines becoming more defined 32.7 to 108 days after peak [@Munari]. The lack of \[N II\] 5755 Å at shorter wavelengths also contradicts the idea of a strong \[N II\] presence, although this line may have been missed due to a combination of the object’s rapid evolution and high extinction. Additionally, the observed emission peaks at the blue and red edges of the H$\alpha$ profile in V1721 Aql are nearly symmetric, whereas the expected positions of potential \[N II\] contaminants are not.
A relative velocity diagram of the H$\alpha$, O I 7773 Å, and O I 8446 Å structures is given in Figure \[figure4\]. This diagram indicates that the H$\alpha$ and O I lines contain similar weak blue/strong red wing morphologies; however, the H$\alpha$ central peak is much more prominent and may arise in an emitting region distinct from the other components of the emission profile. The velocity shifts of the three components are also similar, which supports the hypothesis that the H$\alpha$ structure consists of H$\alpha$ emission only.
![Relative velocity diagram of the H$\alpha$, O I 7773 Å, and O I 8446 Å structures. Note that lines have been off-set on the y-axis for ease of comparison.[]{data-label="figure4"}](16085fg4.eps){width="\columnwidth"}
In order to identify any potential emission lines that may be contaminating the H$\alpha$ structure, a spectral fit of the region (using the Red spectrum, Figure \[figure3\]) was conducted using STSDAS's *Specfit*, the results of which are presented in Figure \[figure5\] and Table \[table1\]. It should be noted that the central H$\alpha$ peak was fit by two separate Gaussians, with the second component added as a correction to the first in compensation for the oversimplification of the fit; a possible physical explanation for this is given in Section \[sec:discussion\]. The O I 8446 Å structure has also been fit, with these results presented in Figure \[figure6\] and Table \[table2\]. It is evident that fringing occurs within the spectra at wavelengths $\gtrsim$ 8000 Å. The effect of this fringing has been to contribute to components 2 and 5 of Figure \[figure6\] and to create fine structure short-ward of the O I 8446 Å profile. It was therefore necessary to fit these contaminating structures, which would otherwise interfere with the results. Taking fringing effects into account, the central structure of the O I 8446 Å line profile is most likely relatively flat. Although there are slight inherent differences in the strengths of the red/blue peaks in the profiles of the O I 7773 and 8446 Å features, the intrinsic shape of the 8446 Å feature before fringing effects is likely very similar to that of the 7773 Å feature, as illustrated in Figure \[figure4\]. The fitting of the O I 7773 Å profile has not been conducted as this structure has been severely truncated on the blue edge by an atmospheric absorption feature.
![Observed H$\alpha$ structure (black line) with the sum of *Specfit* Gaussian components (red line). The blue lines represent separate Gaussian components. See Section \[sec:discussion\] for further discussion. The lower plot shows the residual to the fit. []{data-label="figure5"}](16085fg5.eps){width="\columnwidth"}
![Observed O I 8446 Å structure (black line) with the sum of *Specfit* Gaussian components (red line). The blue lines represent separate Gaussian components. Gaussian 4 represents a spectral artifact. Gaussians 2 and 5 represent components within the profile that are partially caused by fringing. Grey Gaussians represent fine structure caused by fringing effects. All fringing effects required fitting in order to produce the best overall match with observations. The lower plot shows the residual to the fit.[]{data-label="figure6"}](16085fg6.eps){width="\columnwidth"}
Gaussian Wavelength (Å) FWHM (km s$^{-1}$) Relative velocity (km s$^{-1}$)
---------- ---------------- -------------------- ---------------------------------
1 6493$\pm$3 1400$\pm$100 -3200$\pm$100
2 6563$\pm$1 4300$\pm$100 -10$\pm$50
3 6563$\pm$1 800$\pm$100 -10$\pm$50
4          6639.6$\pm$0.4   1260$\pm$50          3510$\pm$20

: *Specfit* Gaussian components of the H$\alpha$ emission structure[]{data-label="table1"}
Gaussian Wavelength (Å) FWHM (km s$^{-1}$) Relative velocity (km s$^{-1}$)
---------- ---------------- -------------------- ---------------------------------
1 8358.7$\pm$0.5 1110$\pm$40 -3100$\pm$20
3 8447$\pm$1 3200$\pm$200 40$\pm$50
6          8541.9$\pm$0.4   930$\pm$30           3410$\pm$10

: *Specfit* Gaussian components of the O I 8446 Å emission structure[]{data-label="table2"}
We find that no other spectral lines expected to be found in novae match the wavelengths presented within Tables \[table1\] and \[table2\]. Given this, and that the blue and red wings of the H$\alpha$, O I 7773, and O I 8446 profiles are similar, we conclude that these structures consist of H$\alpha$ and O I only. Combining the relative velocities of H$\alpha$ Gaussians 1 and 4 gives a mean expansion velocity of $V_{exp} = 3400 \pm 200$ km s$^{-1}$. The structure of each line profile may also tell us something about the nova ejecta geometry (see Section \[subsec:secondary\] for further discussion).
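To illustrate how the tabulated components translate into an ejection velocity, the sketch below (Python, using scipy; the wavelength array and initial guesses are placeholders, not our actual reduction) fits a sum of Gaussians to a spectrum and averages the blue- and red-wing velocity shifts, as done above for H$\alpha$ Gaussians 1 and 4.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # speed of light in km/s

def multi_gauss(wav, *p):
    """Sum of Gaussians; p = (amp1, cen1, sig1, amp2, cen2, sig2, ...)."""
    model = np.zeros_like(wav)
    for a, c, s in zip(p[0::3], p[1::3], p[2::3]):
        model += a * np.exp(-0.5 * ((wav - c) / s) ** 2)
    return model

def fit_and_vexp(wav, flux, guesses, rest=6562.8):
    """Fit a multi-Gaussian model and return the mean of the extreme wing shifts."""
    popt, _ = curve_fit(multi_gauss, wav, flux, p0=guesses)
    v_shift = (popt[1::3] - rest) / rest * C_KMS          # km/s relative to rest Halpha
    return popt, 0.5 * (abs(v_shift.min()) + abs(v_shift.max()))

# With the tabulated wing components (-3200 and +3510 km/s) the mean is ~3400 km/s:
print(0.5 * (3200 + 3510))   # -> 3355 km/s
```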
On examination of both the Blue and Red spectra, no evidence of Fe II/\[Fe II\] was found. This could be due to the faintness of the spectra, noting the high extinction towards the object, and hence the high noise level. There may be some evidence of He I 7001 Å and N I 8680, 8703, 8711 Å emission. However, due again to noise within the spectra and fringing effects at these longer wavelengths, it is difficult to assess their significance. Exact spectral classification of the object according to the @Williams system therefore remains elusive.
Pre-outburst identification {#subsec:preout}
---------------------------
Pre-outburst images of a source at the location of V1721 Aql are found within the 2MASS catalogue, with *near*-IR co-ordinates given as $\alpha=19^{h}06^{m}28^{s}\!\!.60, \delta=+7^{\circ}06^{\prime}44^{\prime\prime}\!\!.46$; J2000. Observed 2MASS apparent magnitudes and colours of the *near*-IR source located at the position of the nova can be found in Table \[table3\]. This table also contains de-reddened colours using the extinction value $A_{V}= 11.6\pm 0.2$.
Filter Apparent magnitude Absolute Magnitude Colour Value De-reddened Value
----------- -------------------- -------------------- ------------- ------------- -------------------
*J* $16.6\pm0.2$ $1.8\pm0.6$ *J-K$_{s}$* $2.0\pm0.2$ $0.1\pm0.2$
*H* $15.5\pm0.1$ $1.7\pm0.6$ *J-H* $1.2\pm0.2$ $0.0\pm0.2$
*K$_{s}$* $14.7\pm0.1$        $1.7\pm0.6$          *H-K$_{s}$* $0.8\pm0.2$   $0.1\pm0.2$

: Observed and de-reddened ($A_{V} = 11.6$) 2MASS magnitudes and colours of the pre-outburst source at the position of V1721 Aql[]{data-label="table3"}
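The de-reddened colours and absolute magnitudes quoted in Table \[table3\] follow from the near-IR extinction ratios and the distance derived in Section \[subsec:distance\]; a short sketch is given below (Python). The $A_{\lambda}/A_{V}$ ratios are assumed values from a standard extinction law and are not taken from the text; the results agree with Table \[table3\] within the quoted uncertainties.

```python
import math

A_V, d_pc = 11.6, 2200.0                        # extinction and distance (Section 2.1)
m_app = {"J": 16.6, "H": 15.5, "Ks": 14.7}      # 2MASS apparent magnitudes
ratio = {"J": 0.282, "H": 0.175, "Ks": 0.112}   # assumed A_lambda / A_V ratios

dm = 5.0 * math.log10(d_pc) - 5.0               # distance modulus, ~11.7 mag
for band, m in m_app.items():
    A_band = ratio[band] * A_V
    M_abs = m - A_band - dm                     # extinction-corrected absolute magnitude
    print(band, round(A_band, 2), round(M_abs, 1))

# De-reddened colour, e.g. (J - Ks)_0:
JK0 = (m_app["J"] - m_app["Ks"]) - (ratio["J"] - ratio["Ks"]) * A_V
print("(J-Ks)_0 = %.1f" % JK0)                  # close to the 0.1 +/- 0.2 in Table 3
```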
The V1721 Aql discovery image and 2MASS *K$_{s}$* image were aligned and compared via IRAF packages. Based on the stellar density within the 2MASS *K$_{S}$* pre-outburst image, the probability of a chance alignment at least as close as that found between the nova and the 2MASS object is less than 1$\%$. The archival 2MASS *K$_{s}$* band image is presented in Figure \[figure7\](a) and the discovery image in Figure \[figure7\](b).
The Nature of the Secondary {#subsec:secondary}
---------------------------
There are several factors that contribute to the observed *near*-IR colours of a nova system in quiescence: (i) the spectral type and evolutionary phase of the secondary, (ii) the rate of mass transfer $\dot{M}$, (iii) the extinction $A_{V}$, (iv) the accretion disc and its inclination *i*, and (v) the mass of the primary. In CNe one would expect the effect of the emission from the WD on the *near*-IR colours to be negligible, and the accretion disc to only provide a significant contribution to the emission when *i* $\lesssim 30^{\circ}$, where an angle of $\textit{i} = 90^{\circ}$ is defined as an edge-on accretion disc [@Weight]. The location of a quiescent nova on a *near*-IR two-colour diagram (*$H-K_{s}, J-H$*) is therefore an important determinant of the nature of the secondary star in the system.
![*Near*-IR colour-colour diagram of quiescent classical nova systems reproduced from Figures 4 $\&$ 7 in @Hoard using Table 1 of their data. The figure is adjusted to include the quiescent 2MASS colours of the nova V1721 Aql system. The light cross-hatched area represents the *near*-IR colours of main sequence stars, with the denser cross-hatched area representing the giant branch [see references within @Hoard]. The black points show individual nova systems and are coded according to the time since outburst, $\tau$, as follows: filled squares: $\tau < 25$ years; filled triangles: $\tau = 25-50$ years; open triangles: $\tau = 50-75$ years; filled circles: $\tau = 75-100$ years; open circles: $\tau > 100$ years. The star-shaped points are the recurrent nova systems. The nova systems presented here have not been corrected for extinction as in most cases the reddening is not accurately known, but it is assumed to be small to negligible in the *near*-IR for most Galactic nova systems. The large points for individual nova systems have 1$\sigma$ uncertainties of $\leq 0.1$ magnitudes, smaller points have 1$\sigma$ uncertainties of $>$ 0.1 magnitudes. The red cross represents the observed quiescent *near*-IR colours of the V1721 Aql nova system. The red line indicates the system’s reddening vector with the arrowhead indicating its *near*-IR colours once corrected for an extinction of $A_{V}$ = 11.6. The region enclosed by the red cross-hatching indicates all colours the nova system could possess within the error circle of $A_{V} = 11.6 \pm 0.2$. A de-reddening vector corresponding to $A_{V}$ = 3 is also shown.[]{data-label="figure8"}](16085fg8.eps){width="\columnwidth"}
The *near*-IR apparent colours of the V1721 Aql nova system in quiescence are shown in Figure \[figure8\]. The system’s colours occupy a region which contains the RN V745 Sco, which has a giant secondary with an M5+ III spectral type, and the suspected recurrent V1172 Sgr [@Weight], which is also thought to contain a giant secondary. The extinction of these novae, however, is much lower [for V745 Sco $A_{V} = 3.1 \pm 0.6$; @Schaefer] than that of V1721 Aql. Nova V1721 Aql’s occupancy of this region is therefore merely coincidental, a consequence of its much higher reddening, and does not indicate that it is an RN-like system. Nova Aql’s de-reddening vector is indicated with a red line; the arrow head on this line represents an extinction value of $A_{V} = 11.6$, and the surrounding red region represents the error circle of the corrected colours. The de-reddened quiescent *near*-IR colours of the V1721 Aql nova system lie within a region occupied by many quiescent CNe. Assuming that the *near*-IR emission of the nova system is dominated by the secondary, Figure \[figure8\] indicates that its spectral type is that of a late F-G (possibly K) main sequence star. However, the inclination of the accretion disc must be taken into account: if it is less than 30$^{\circ}$ (approaching face-on to the observer) then its contribution to the *near*-IR colours would cause a significant blue-wards offset.
The line profiles observed in the nova spectra, and the resultant high velocities, would suggest that the inclination of the disc within the binary system is low (face-on). The blue and red peaks seen within the H$\alpha$ and O I structures would therefore be the result of material ejected along the poles towards and away from the observer [observations and shaping models predict that the minor axis of a remnant lies in the disc plane; @Slavin; @Porter]. This inclination, however, would mean that the contribution by the disc to the *near*-IR colours is significant. Taking this blue contribution into account shifts the *near*-IR system colours along the main sequence and into the sub-giant region. Based on the speed and luminosity of the nova, the object may therefore be thought of as a U Sco type RN system. Comparisons at quiescence between the absolute *J* band magnitudes and *H-K$_{S}$* colours of V1721 Aql (see Table \[table3\]), U Sco ($M_{J} = 1.3\pm0.4$, $H-K_{S} = 0.0\pm0.1$), and V2491 Cyg ($M_{J} = 1.0 \pm 0.3$, Darnley et al., submitted 2011, a suspected recurrent nova belonging to the U Sco class) support this argument, as they all possess similar absolute *J* magnitudes and occupy the same region of space in an equivalent colour magnitude diagram around the sub-giant branch. The possibility of a red giant secondary can also be ruled out, as the *J* band absolute magnitude of the system would then have to be approximately five magnitudes brighter.
Given the speed of decline of the nova, the work of @Slavin would suggest that the axis ratio (ratio of semi-major to semi-minor axis) of Nova V1721 Aql’s ejected shell is low ($\approx$1). The nova ejecta may therefore be modelled by an approximately spherical shell with discrete, randomly distributed knots of brighter emission. We attempted to model such a nova system by calculating the expected emission line profiles from models of the ejecta distribution and comparing them to observed profiles, specifically H$\alpha$. In order to do this we used *XS5*, a morphological and kinematical modelling programme [@Dan] for producing 3D representations of astrophysical shells, synthetic images, and spectra. This programme allows the user to generate a geometrical shape, such as an ellipsoid or an hourglass, which can be rotated and inclined. By adjusting additional parameters, such as the major and minor axis lengths, the FWHM of line profiles from the shell, the polar axis emission gradient, and the expansion velocity, the output emission line profile can be altered until a match with observations is achieved. Models of the nova ejecta with axis ratios between 1.0 and 2.0 (at 0.1 increments) were created. The results are presented in Figures \[figure9\] and \[figure10\]. Figure \[figure9\](c) presents two modelled spectra compared to the observed H$\alpha$ structure: the red spectrum is that of a spherical shell (axis ratio of 1.0) and the blue spectrum is that of an ellipsoidal-like shell with an axis ratio of 1.4. Both shells are smooth with uniform emission, and the inclination of the system is such that the central accretion disc is face-on. Figure \[figure10\](c) illustrates the results from the same two structures with the same inclination, but this time with a slight emission enhancement in the equatorial region. From these modelled spectra it would seem that an ellipsoidal-like morphology may actually be more suited to the V1721 Aql ejecta; however, we have far too little information to make a strong argument for this. We note that this higher axis ratio contradicts the expectations of @Slavin. However, recent work on the 2010 outburst of U Sco by @Drake has indicated that nova ejecta can be significantly shaped by circumbinary gas and/or a high accretion disc gas density. We have also been unable to reproduce the stronger red peak of the H$\alpha$ emission line profile. This could possibly be due to clumps in the ejecta, but more detailed data and modelling are needed to explore this further.
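The sketch below (Python/NumPy) is not the *XS5* code, but illustrates the underlying idea: sample a thin, homologously expanding ellipsoidal shell, project the expansion velocities onto the line of sight, and histogram them to form a synthetic line profile. All parameter values, and the simple polar emissivity weighting used to mimic the blue/red peaks, are illustrative assumptions only.

```python
import numpy as np

def shell_profile(v_eq=2400.0, axis_ratio=1.4, incl_deg=0.0,
                  pole_boost=2.0, n=200_000, bins=80):
    """Synthetic line profile from a thin, homologously expanding ellipsoidal shell.

    v_eq is the equatorial expansion velocity, the polar velocity is
    axis_ratio * v_eq, and incl_deg = 0 places the polar axis along the line
    of sight (face-on central disc). pole_boost weights the emissivity towards
    the poles, which is a simple way to mimic distinct blue/red peaks.
    """
    u = np.random.uniform(-1.0, 1.0, n)            # cos(theta) of each shell parcel
    phi = np.random.uniform(0.0, 2.0 * np.pi, n)
    s = np.sqrt(1.0 - u ** 2)
    vel = np.stack([v_eq * s * np.cos(phi),        # homologous expansion velocities
                    v_eq * s * np.sin(phi),
                    axis_ratio * v_eq * u], axis=1)
    i = np.radians(incl_deg)
    los = np.array([np.sin(i), 0.0, np.cos(i)])    # unit line-of-sight vector
    v_los = vel @ los
    weights = 1.0 + pole_boost * np.abs(u)         # brighter emission near the poles
    hist, edges = np.histogram(v_los, bins=bins, weights=weights, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), hist

centres, profile = shell_profile()                 # compare with the observed Halpha shape
```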
Discussion and Conclusions {#sec:discussion}
==========================
The results presented in this paper indicate that V1721 Aql is a very fast nova (t$_{2} \sim$ 6 days) and very luminous ($M_{V} = -9.4 \pm 0.5$). The extinction towards the object is high, $A_{V} = 11.6 \pm 0.2$, as the nova lies very close to the Galactic plane. Based on this value of $A_{V}$, the distance to the nova is estimated to be $2.2 \pm 0.6$ kpc.
Pre-outburst *near*-IR colours of the nova have been compared to those of other novae in quiescence (all post-outburst) and to the *near*-IR colours of main sequence and giant stars. The results indicate that, when de-reddened, the nova occupies a region of the colour-colour phase-space in which most CNe are found and appears to have a late (F-M) main sequence or sub-giant secondary. However, we cannot rule out the possibility of V1721 Aql being an RN, only that it does not appear to contain a giant secondary and can therefore not belong to the RS Oph or T CrB class of recurrents. The U Sco class of RNe, however, consists of systems with an evolved main sequence or sub-giant secondary, much like CNe, and, like V1721 Aql, these novae are very fast. Similarities in absolute *J* band magnitudes at quiescence between V1721 Aql, U Sco, and V2491 Cyg (a suspected U Sco member) also indicate that this object may be a U Sco type RN.
Post-outburst spectra of V1721 Aql revealed boxy structures around H$\alpha$, O I 7773, and O I 8446 Å. We note that similar complex H$\alpha$ profiles have been observed in other fast novae such as the 1999 outburst of U Sco [@Iijima] and the 2009 nova V2672 Oph [@Munari2], a suspected U Sco type object. Examination indicates that the features in V1721 Aql are not contaminated significantly by other emission lines. The structure of the line emission would suggest that material is being ejected from the poles of the nova shell, moving towards and away from the observer, leading to the blue and red wing emission seen. This would indicate that the disc of the binary is face-on, an argument which is also supported by the high observed velocity of the system: if the accretion disc were edge-on, the true ejection velocities would have to be even greater than those observed, which is unlikely. A face-on accretion disc is also more likely when considering the physical origin of Gaussian 3 of the H$\alpha$ fit (Figure \[figure5\], Table \[table1\]). This Gaussian may represent a narrow core of H$\alpha$ emission, in which case we are seeing H recombination emission both from the expanding ejecta, which gives rise to the broad overall emission, and from a re-established accretion disc, or possibly even a disc that was never completely disrupted.
With a face-on accretion disc the hotter inner region of the disc is exposed, possibly giving a significant blue contribution to the *near*-IR colours of the nova and thus seriously affecting the spectral classification of the secondary derived above. The speed of decline of the nova also suggests that the nova shell itself has a low axis ratio, i.e. it is almost spherical. Basic models generated by *XS5* to reproduce the H$\alpha$ line profile, however, produce a best fit when using an ellipsoidal-like shell with an axis ratio of 1.4. The departure from a spherical shell is most likely due to the nature of the explosion environment [circumbinary material and/or high density disc gas; @Drake]. The model shell produced is smooth with a slight emission enhancement within the equatorial region, implying a face-on central accretion disc.
Relative velocity shifts found via spectral fitting of the H$\alpha$ and O I emission are comparable to those presented in @Helton and we estimate an ejecta expansion velocity of $V_{exp} = 3400 \pm 200$ km s$^{-1}$ along the line of sight. This $V_{exp}$ is, perhaps, more consistent with that of a fast classical nova system than with fast recurrent novae, which tend to have slightly higher expansion velocities of $V_{exp} \gtrsim$ 4000 km s$^{-1}$ [@Anupama]. There is no evidence of emission blue-wards of the H$\alpha$ structure. This is likely due to the high extinction towards the nova. An alternative explanation is that the nova is of the Fe II class and at this stage the shell is still optically thick. However, no dominant lines of Fe II or \[Fe II\] are present within the spectra. These lines may not have developed yet or may have been lost within the noise of the spectra. Therefore, an Fe II classification cannot be ruled out. Another complication for this hypothesis is that Fe II novae tend to be slower than V1721 Aql.
The spectra show no conclusive evidence of He and N emission. This may also be due to the low signal-to-noise level within the spectra and the high extinction. No absorption features are seen in the spectra. This is unusual, as we would expect to see absorption lines during the optically thick expansion stage.
In conclusion the precise nova sub-class of this object remains elusive, and the results of this work suggest two possibilities. The first is that this is a highly energetic luminous and fast classical nova with a main sequence secondary of spectral type F-M, and that any Fe II lines that may have been observable within the nova spectra have simply been extinguished. The second possibility is that this is a U Sco type RN and that evidence of He/N within the spectra is lost due to the high extinction. The latter scenario may prove itself within the next few decades and therefore this object is one that merits continued monitoring for future outbursts.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infra-red Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. We would like to thank Stephen Smartt and Rubina Kotak of Queen’s University Belfast for pointing out this very interesting nova and providing us with further information. We would also like to thank an anonymous referee for detailed and thoughtful comments that have helped improve the paper. R. Hounsell is supported by a PhD studentship from the Science and Technology Facilities Council of the UK.
, G. C. 2008, in Astronomical Society of the Pacific Conference Series, Vol. 401, RS Ophiuchi (2006) and the Recurrent Nova Phenomenon, ed. [A. Evans, M. F. Bode, T. J. O’Brien, & M. J. Darnley]{}, 31
, M. F. 2010, Astronomische Nachrichten, 331, 160
, M. F. & [Evans]{}, A. 2008, [Classical Novae, 2nd Edition]{}, ed. M. F. [Bode]{} & A. [Evans]{}, Cambridge Astrophysics Series, No. 43, Cambridge: Cambridge University Press
, M., [Bianchini]{}, A., [Livio]{}, M., & [Orio]{}, M. 1992, , 266, 232
, R. A. & [Duerbeck]{}, H. W. 2000, , 120, 2007
, J. J. & [Orlando]{}, S. 2010, , 720, L195
, H. W. 1990, in Lecture Notes in Physics, Berlin Springer Verlag, Vol. 369, IAU Colloq. 122: Physics of Classical Novae, ed. [A. Cassatella & R. Viotti]{}, 34
, D. A. 1985, , 213, 443
, D. J., [Bryce]{}, M., [O’Brien]{}, T. J., & [Meaburn]{}, J. 2003, in IAU Symposium, Vol. 209, Planetary Nebulae: Their Evolution and Role in the Universe, ed. [S. Kwok, M. Dopita, & R. Sutherland]{}, 531
, L. A., [Woodward]{}, C. E., [Vanlandingham]{}, K., & [Schwarz]{}, G. J. 2008, , 8989, 2
, D. W., [Wachter]{}, S., [Clark]{}, L. L., & [Bowers]{}, T. P. 2002, , 565, 511
, R., [Bode]{}, M. F., [Hick]{}, P. P., [et al.]{} 2010, , 724, 480
, T. 2002, , 387, 1013
, D. B. 1945, , 57, 69
, U., [Ribeiro]{}, V. A. R. M., [Bode]{}, M. F., & [Saguner]{}, T. 2011, , 410, 525
, U., [Siviero]{}, A., [Dallaporta]{}, S., [et al.]{} 2011, , 16, 209
, C. 1957, [The galactic novae.]{}, ed. [Payne-Gaposchkin, C.]{}, Amsterdam, North-Holland Pub. Co.; New York, Interscience Publishers
, J. M., [O’Brien]{}, T. J., & [Bode]{}, M. F. 1998, , 296, 943
, D. & [Kovetz]{}, A. 1995, , 445, 789
, J. & [Froebrich]{}, D. 2009, , 395, 1640
, B. E. 2010, , 187, 275
, M. M. 1981, , 243, 268
, M. F., [Beichman]{}, C., [Capps]{}, R., [et al.]{} 1995, in Bulletin of the American Astronomical Society, Vol. 27, Bulletin of the American Astronomical Society, 1392
, A. J., [O’Brien]{}, T. J., & [Dunlop]{}, J. S. 1995, , 276, 353
, S., [Iliadis]{}, C., & [Hix]{}, W. R. 2008, [in Classical Novae, 2nd Edition]{}, ed. [[Bode]{}, M. F. and [Evans]{}, A.]{}, Cambridge Astrophysics Series, No. 43, Cambridge: Cambridge University Press, 77
, B. 2008, [in Classical Novae, 2nd Edition]{}, ed. M. F. [Bode]{} & A. [Evans]{}, Cambridge Astrophysics Series, No. 43, Cambridge: Cambridge University Press, 16
, A., [Evans]{}, A., [Naylor]{}, T., [Wood]{}, J. H., & [Bode]{}, M. F. 1994, , 266, 761
, R. E. 1992, , 104, 725
, H., [Itagaki]{}, K., [Nakano]{}, S., [et al.]{} 2008, , 8989
[^1]:
---
abstract: 'Research on novel viewpoint synthesis has mostly focused on interpolation from multi-view input images. In this paper, we focus on a more challenging and ill-posed problem: synthesizing novel viewpoints from one single input image. To achieve this goal, we propose a novel deep learning-based technique. We design a full resolution network that extracts local image features with the same resolution as the input, which helps to derive high-resolution results and prevent blurry artifacts in the final synthesized images. We also involve a pre-trained depth estimation network in our system, so that 3D information can be utilized to infer the flow field between the input and the target image. Since the depth network is trained with depth order information between arbitrary pairs of points in the scene, global image features are also involved in our system. Finally, a synthesis layer is used not only to warp the observed pixels to the desired positions but also to hallucinate the missing pixels from recorded ones. Experiments show that our technique performs well on images of various scenes and outperforms the state-of-the-art techniques.'
author:
- 'Xiaodong Cun[^1]'
- Feng Xu
- 'Chi-Man Pun[^2]'
- Hao Gao
bibliography:
- 'egpaper\_for\_review.bib'
title: 'Depth Assisted Full Resolution Network for Single Image-based View Synthesis'
---
Introduction
============
Synthesizing images of novel viewpoints is widely investigated in computer vision and graphics. Most works on this topic focus on using multi-view images to synthesize viewpoints in-between [@Leimkuehler2017TVCG; @Efrat2016; @DidykSFDM2013; @Kellnhofer2017; @Binolf2015; @Ji_2017_CVPR; @LearningViewSynthesis]. In this paper, we consider extrapolation, and we take a step further by extrapolating from one single input image. This technique is useful for many applications, such as multi-view rendering in virtual reality, light field reconstruction from fewer recordings [@LearningViewSynthesis], and image post-processing such as image refocusing. However, this task is very challenging for two major reasons. First, some parts of the scene may not be observed in the input viewpoint but are required for novel ones. Second, 3D information is lacking for single view input but is crucial to determine pixel changes between viewpoints. Although the task is very challenging, we observe that human brains are always able to imagine novel viewpoints. The reason is that human brains have learned in our daily lives to understand the depth order of objects in a scene [@NIPS2016_6489] and to infer what the scene looks like when viewed from another viewpoint. Inspired by human brains, we use deep neural networks to learn to synthesize novel viewpoints from a light field dataset (a result is shown in Figure \[fig:0\]). We believe that for novel viewpoint synthesis, both global and local image features are important. Our key observation is that after modeling the process as two steps, depth prediction and depth-based image warping, the extraction of the two kinds of features can be decoupled. Depth estimation from a single image is ill-posed, so global high-level image features are required to tackle the problem. But given the depth, only local image warping is required to synthesize the final result, and local depth and color information are enough to determine the warping. Based on this observation, we focus on the two kinds of features in the two steps respectively.
{width="\textwidth"} \[fig:0\]
First, we explicitly estimate depth information from images using global high-level image features. As learning global features requires a large dataset to cover sufficient variations and current light field datasets are small and lack coverage, we leverage an existing large image dataset (421k images) with labeled depth orders to pre-train a depth prediction network [@NIPS2016_6489]. Then, with this good depth information, our view synthesis network is further trained to extract local features directly from a light field dataset. As local features do not have as many variations as global ones, current light field datasets are relatively sufficient, which is also demonstrated in our experiments.
However, designing a network that extracts the important local features for view synthesis is not trivial. Existing deep CNNs [@he2015; @Simonyan14c; @he2017mask; @zhou2016view; @qifeng2017ICCV; @long2015fully; @yu2015multi] mainly focus on extracting global high-level features, which has two key drawbacks. First, global features are usually invariant to spatial transformations (scale, translation, and rotation). This is not desirable for novel view synthesis, which needs to delicately change the orientations of objects on the 2D image domain. Second, global features are expected to be invariant to local details. This is not desirable either, because local details need to be correctly modified to guarantee a reasonable synthesis of novel viewpoints. Based on these observations, we propose a full resolution network (all of whose layers have the same size as the input image) to encode content orientations and image details, which helps to achieve high-quality and high-resolution view synthesis.
Recently, a concurrent work [@pratul2017lightField] also uses deep learning to synthesize novel viewpoints from a single image of flowers, where light field images of flowers are used to train a network which first infers per-pixel depth values and then synthesizes a 4D light field with dilated convolutions. Compared with this work, we have two major differences. First, our network for depth prediction is trained on plenty of images from various scenes, instead of a small set of light field images, so the depth prediction is expected to have better generalization capability. Second, since the depth is obtained by a pre-trained network, the following synthesis network only needs to combine local image features with the depth to infer the local image warping. Thus our full resolution network focuses on local features and does not need dilated convolutions to enlarge receptive fields, which are prone to gridding artifacts.
We summarize our main contributions as follows:
- We propose a depth assisted full resolution network to synthesize a user-desired viewpoint from a single natural image, which achieves state-of-the-art performance compared with existing techniques.
- We leverage a large image dataset for depth prediction, which breaks the limitation caused by the small size of the current light field datasets.
- We propose a full resolution network that extracts local image features to warp local image details in the synthesis.
Related work
============
**View synthesis for scenes** is a popular problem in both computer vision and computer graphics. Usually, synthesizing novel views requires estimating disparity from multiple input images, after which the synthesis is performed by warping the input images with the disparity. [@DidykSFDM2013; @Efrat2016; @Kellnhofer2017] synthesize automultiscopic images from stereo images. [@Binolf2015] presents a disparity based method to reconstruct a light field from micro-image pairs. However, in our problem, the surrounding views need to be predicted from a single image.
Recently, deep learning based methods have been utilized in novel view synthesis. [@flynn2016deepstereo] is inspired by traditional plane sweep algorithms and uses multiple input images to learn the target image of a novel viewpoint. [@jaderberg2015spatial] learns a set of transformation parameters describing the relationship between the input image and the target image. [@xie2016deep3d] tries to synthesize stereo image pairs from a single input image. It only generates the corresponding right image from the left image, without control over the coordinates of the target image. [@Ji_2017_CVPR] presents a convolutional network based method to morph a novel view from a stereo image pair. [@LearningViewSynthesis; @EPICNN17] synthesize novel views from four corner images, focusing on reconstructing the light field from multiple input images. [@pratul2017lightField] trains a network on a flower dataset to reconstruct light field images from one single image. **View synthesis for objects** is another related topic in computer vision. The 3D structure of a single object can be predicted from a single input image. [@dosovitskiy2015learning] tries to synthesize an object at novel views from a single input image with neural networks directly. [@zhou2016view] extends this method by estimating an appearance flow. By considering the relationship between different viewpoints of the same object, [@yang2015weakly] proposes a network derived from recurrent neural networks (RNNs) for chair synthesis. Most recently, a generative adversarial network [@goodfellow2014generative] is proposed by [@tvsncvpr2017] to predict novel views from a single input image. Although these methods can generate novel viewpoints that are quite different from the input, they only work well on simple objects, such as cars and chairs.
**Single view depth estimation** reconstructs the per-pixel depth of an input image, which is important information for the view synthesis task. [@saxena2008make3d] tries to predict depth with learning algorithms. Deeper networks have also been tested by [@eigen2015predicting; @laina2016deeper]. These methods have limited accuracy in natural scenes because scene variations are restricted by the coverage of their training datasets. Inspired by depth based image rendering, unsupervised methods [@monodepth17; @garg2016unsupervised] based on stereo constraints have shown great potential in depth estimation. However, the camera parameters are required for these techniques. In contrast to previous methods, [@NIPS2016_6489] proposes a relative depth-based framework to estimate depth for unconstrained images (“in the wild"). It leverages a large dataset containing 421k images and performs well in determining the relative positions of objects in a scene.
Methods
=======
Given a single image $I_p$ at the center viewpoint $p$ and the position of a novel viewpoint $q$, our goal is to synthesize a novel image $I_q$ at the viewpoint $q$. This problem can be formulated as: $$\begin{aligned}
\label{eq:full1}
I_q=\Phi(I_p,q),\end{aligned}$$ where $\Phi$ is a function which defines the relationship between $I_q$ and $I_p$. The viewpoint positions $p$ and $q$ are represented in a 2D coordinate system which is centered at the viewpoint $p$ and orthogonal to the viewing direction. We propose a depth assisted full resolution network for novel view synthesis. In our pipeline, we train the network to infer a flow field describing the relationship between the input image and the novel view. Then we warp the input image with the flow field. Following this strategy, we can reformulate Equation \[eq:full1\] as follows: $$\begin{aligned}
\label{eq:full2}
F_q=\Phi_d(I_p,q), \\
I_q=\Phi_{w}(I_p,F_q),\end{aligned}$$ where $\Phi_d$ denotes our network, which predicts a pixel-level flow field $F_q$ for the novel view $q$, and $\Phi_w$ is the warping method that warps the input image $I_p$ with the flow field $F_q$ to viewpoint $q$.
Depth Assisted Full Resolution Network
--------------------------------------
In this section, we illustrate our depth assisted full resolution network for novel view synthesis (Figure. \[fig:1\]). The encoder part of our full resolution network extracts important local features from the input image. Then our depth predictor, which is pre-trained on a large image dataset by exploring global image information, estimates a depth map of the input image. Next the local features and the depth are fed into our decoder, as well as a 2-channel map indicating the position of the target viewpoint. Finally, our decoder translates the combined features into a warping field to synthesize the final target image.
![Network Structure. A full resolution network is proposed for our problem. A depth, estimated by a pre-trained depth predictor, is added as a feature map in the network, as well as $(u,v)$ denoting the coordinates of the target viewpoint. Then, a flow field is decoded from the combined feature and a warping layer is added at the end of the network to synthesize the final target view.[]{data-label="fig:1"}](networkUpdate.pdf){width="0.9\columnwidth"}
### Encoder {#sec:encoder}
The encoder is designed for extracting local features of the input image. Following the method in [@LearningViewSynthesis], the encoder network is a series of convolution layers with different kernel sizes, all generating features at the same resolution as the input image. A Rectified Linear Unit (ReLU) layer is added after each convolution layer. These features will be used to rebuild the final transformed image. Note that we do not use pooling or batch normalization to reduce the parameters and resolution of the network, as they may be harmful for encoding content orientations and image details, as discussed before.
### Feature Connection {#sec::connection}
As shown in Figure. \[fig:1\], we add the predicted relative depth, estimated by [@NIPS2016_6489], as a feature of the input image. [@NIPS2016_6489] trains a depth prediction network from the labeled depth ranking of pixel pairs on one image. So the output indicates the relative depth of the input images. 421k images, which are gathered from Flickr and marked by crowdsourcing with the relative depth ordering of two random pixels, are used for training the network. We only take the forward output of this network for extracting the depth of our input image because we do not have the ground truth depth for backward training.
There are four main advantages of using this depth feature. Firstly, depth is closely related to our flow field. The relationship between depth $Z$ and disparity $D$ between an input image and a novel view can be written as [@xie2016deep3d]: $$\begin{aligned}
\label{eq:depth}
D = \frac {B(Z-f)} {Z},\end{aligned}$$ where $B$ is the absolute distance between the two viewpoints and $f$ is the focal length. There is also a clear relationship between the disparity $D_q$ of the novel view $q$ and our flow field $F_q$: $$\begin{aligned}
\label{eq:depth2}
F_q(s) = ( D_q(s) \times \Delta u , D_q(s) \times \Delta v ),\end{aligned}$$ where $\Delta u$, $\Delta v$ are the differences of the viewpoint coordinates in the $u$ and $v$ directions, respectively. According to Equations \[eq:depth\] and \[eq:depth2\], depth information is very important for estimating the flow field. Secondly, [@NIPS2016_6489] predicts the relative depth of an image, which gives a clearer relative position relationship between objects than other methods [@eigen2015predicting; @laina2016deeper], and this is important for generating head-motion parallax in viewpoint transitions. Thirdly, the network for predicting the depth has been trained with information (depth orders) about pairs of widely separated pixels, so a large receptive field is implicitly considered by our network through the depth. As the full resolution network preserves local features, we have collected both local and global information for the final synthesis. Finally, the dataset for training the depth predictor is very large and covers a large number of natural scenes, which is important for the generalization capability of our system.
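In our pipeline this mapping is learned by the network rather than evaluated in closed form, but a minimal sketch of the geometry of Equations \[eq:depth\] and \[eq:depth2\] might look as follows (Python/NumPy); the scale factor standing in for the unknown baseline unit $B$ is an assumption, since only relative depth is available.

```python
import numpy as np

def flow_from_depth(depth, du, dv, scale=1.0, f=1.0):
    """Geometric flow field implied by Equations (eq:depth) and (eq:depth2).

    depth : HxW array of (relative) depth values Z.
    du, dv: offset of the target viewpoint from the input viewpoint.
    scale : stands in for the unknown baseline unit B (an assumption here).
    """
    disparity = scale * (depth - f) / np.maximum(depth, 1e-6)   # D = B (Z - f) / Z
    flow_u = disparity * du                                     # F_q(s) = (D du, D dv)
    flow_v = disparity * dv
    return np.stack([flow_u, flow_v], axis=-1)                  # HxWx2 flow field

depth = np.random.rand(240, 320) + 0.5          # toy relative depth map
flow = flow_from_depth(depth, du=3.0, dv=-3.0)  # flow towards corner viewpoint (3, -3)
```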
Besides the depth image, which is connected to the network as a feature layer at the end of the encoder part, the 2D coordinates $(u, v)$ of the novel view are also added as two feature layers of the same size as the input image [@LearningViewSynthesis]. This provides the viewpoint information of the target to the network.
### Decoder {#sec:decoder}
The network in this part estimates the dense flow for all pixels. It has been demonstrated in several previous works [@zhou2016view; @tvsncvpr2017] that synthesizing the flow from the input view to novel views is better than generating the novel views directly with networks. Note that, as we use a backward interpolation method, the flow field is also used to handle the occlusion regions which are not visible in the input. In this case, the flow does not stand for real correspondences, but is only used to hallucinate the occlusion regions, and is also learned in our method. The network in this decoder part contains four convolution layers. The first three are followed by ReLU layers, while the last one is followed by a $Tanh$ layer.
### Flow based warping {#sec:warp}
Following the idea of appearance flow [@zhou2016view] and the spatial transformer network [@ranjan2016optical], we apply a flow based warping method for synthesizing the final image. There is a clear mathematical relationship between the predicted flow fields and the novel view images. For every pixel $s$ in one novel view image, its pixel value can be expressed as: $$\begin{aligned}
I_q(s) = I_p[ s + F_q(s) ],
\label{eq:warp}\end{aligned}$$ where $F_q(s)$ is the two-dimensional flow which is the output of our neural network. Here, backward warping is utilized to transform the input image to the novel view, as the flow is defined at pixel $s$ in the target view. Since the warping function described in Equation \[eq:warp\] is differentiable and its gradient can be calculated efficiently [@jaderberg2015spatial], all the layers of our network are differentiable and the whole network can be trained end-to-end in a supervised manner. The details of our network can be found in Table \[tab:1\].
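A minimal sketch of the backward warping layer of Equation \[eq:warp\] in PyTorch is shown below; `grid_sample` performs the differentiable bilinear sampling, the pixel coordinates are normalised to $[-1, 1]$ as that function expects, and the exact layer in our implementation may differ.

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Backward-warp `image` (N,C,H,W) with `flow` (N,2,H,W), as in Equation (eq:warp).

    flow[:, 0] and flow[:, 1] are the horizontal and vertical displacements
    in pixels, defined at each target-view pixel s.
    """
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=image.dtype),
                            torch.arange(w, dtype=image.dtype), indexing="ij")
    grid_x = xs.unsqueeze(0) + flow[:, 0]          # sampling position s + F_q(s)
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    grid_x = 2.0 * grid_x / (w - 1) - 1.0          # normalise to [-1, 1]
    grid_y = 2.0 * grid_y / (h - 1) - 1.0
    grid = torch.stack([grid_x, grid_y], dim=-1)   # (N,H,W,2)
    return F.grid_sample(image, grid, mode="bilinear", align_corners=True)

img = torch.rand(1, 3, 240, 320)
flow = torch.zeros(1, 2, 240, 320)                 # zero flow returns the input image
out = warp(img, flow)
```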
convolution size input channel output channel
---- ------------------ --------------- ----------------
E1 $7\times7$ 3 32
E2 $5\times5$ 32 64
E3 $3\times3$ 64 128
E4 $1\times1$ 128 192
D1 $3\times3$ 195 192
D2 $3\times3$ 192 128
D3 $3\times3$ 128 64
D4 $3\times3$ 64 2
: Network Structure[]{data-label="tab:1"}
{width="\textwidth"}
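For illustration, a PyTorch sketch of the layer configuration in Table \[tab:1\] is given below. The padding values are chosen so that every feature map keeps the input resolution, and the three extra input channels of D1 correspond to the predicted depth and the two viewpoint-coordinate maps; this is a sketch rather than our exact implementation.

```python
import torch
import torch.nn as nn

class FullResolutionNet(nn.Module):
    """Encoder and decoder of Table (tab:1); every layer keeps the input resolution."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                      # E1-E4, each followed by ReLU
            nn.Conv2d(3, 32, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 192, 1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(                      # D1-D4; the last layer uses Tanh
            nn.Conv2d(192 + 3, 192, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(192, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, depth, uv):
        """image: (N,3,H,W); depth: (N,1,H,W); uv: (N,2,H,W) target-view coordinate maps."""
        feats = self.encoder(image)
        feats = torch.cat([feats, depth, uv], dim=1)       # 192 + 1 + 2 = 195 channels
        return self.decoder(feats)                         # per-pixel 2D flow field

net = FullResolutionNet()
flow = net(torch.rand(1, 3, 240, 320), torch.rand(1, 1, 240, 320),
           torch.zeros(1, 2, 240, 320))
```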
Loss Function
-------------
The objective function $C$ of our network can be written as: $$\begin{aligned}
C = L_1(I_q , \widehat{I}_q) + \alpha L_{TV}(F_{q})\end{aligned}$$
The first part of the loss function is a traditional image reconstruction error ($L_1$), which enforces the similarity between the result $I_q$ and the ground truth $\widehat{I_q}$. The second part of our loss function is a total variation regularization of the predicted flow field $F_{q}$. It is necessary to add this regularization to our method because the total variation constraint on the flow field $F_{q}$ guarantees smoothness and produces high quality results. We empirically set $\alpha = 0.001$ for all the experiments.
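A sketch of this objective in PyTorch might be written as follows (placeholder tensors; the TV term is taken as the mean absolute first difference of the flow):

```python
import torch

def tv_loss(flow):
    """Total variation of the flow field (N,2,H,W): mean absolute first differences."""
    dh = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
    dw = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
    return dh + dw

def objective(pred, target, flow, alpha=0.001):
    """C = L1(I_q, I_q_hat) + alpha * L_TV(F_q), with alpha = 0.001 as above."""
    return (pred - target).abs().mean() + alpha * tv_loss(flow)

loss = objective(torch.rand(1, 3, 240, 320), torch.rand(1, 3, 240, 320),
                 torch.zeros(1, 2, 240, 320))
```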
Training Details
----------------
### Relative position for training
The Lytro Illum camera captures the light field of a scene with a regular microlens array. Because the distances among the viewpoints are much smaller than the distance between the camera and the scene objects, we assume all the viewpoints lie in a 2D $u$-$v$ plane. In the training, we denote the position of the center viewpoint $p_{center}(u, v)$ as $[0, 0]$, and $p_{novel}(u, v)$ accordingly ranges in $[- 3, + 3]\times[- 3, + 3]$. To make full use of the dataset, every light field image has a chance to be chosen as the center view, and the coordinates of the other images are determined by their relative positions to the center image.
### Datasets and Parameters
We have used two datasets for experiments and validation. One is the light field dataset from [@LearningViewSynthesis] (VS100 dataset). It contains 100 training images and 30 test images with an angular resolution of $8\times8$. Diverse scenes, such as cars, flowers, and trees, are included in this dataset. It is a challenging dataset because it only contains a limited number of samples and their variations are complex. We have also tested our method on a recent light field dataset [@pratul2017lightField] of flowers (Flower dataset). This dataset contains 3433 light field images of various kinds of flowers. We randomly split the Flower dataset as in [@pratul2017lightField], obtaining 3233 light fields for training and 100 for testing. As a trade-off between the time and space requirements of the network, we randomly crop the original input image from $541\times376$ to $320\times240$ for training. We use a mini-batch size of $4$ to have the best trade-off between speed and convergence. In the experiments, our network is trained for 12k iterations. The whole training takes almost $2$ days. Following the work of [@LearningViewSynthesis], the weights of our network are initialized with the Xavier approach [@glorot2010understanding]. We use ADAM [@kingma2014adam] for optimization, with $\beta_1$ = 0.9, $\beta_2$ = 0.999 and a learning rate of 0.0001.
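For reference, the settings above correspond to roughly the following PyTorch configuration (a sketch; `FullResolutionNet` is the illustrative model defined earlier, not our released code):

```python
import torch

def random_crop(view, h=240, w=320):
    """Random 320x240 crop of a 541x376 light-field view given as a (C,H,W) tensor."""
    _, H, W = view.shape
    top = torch.randint(0, H - h + 1, (1,)).item()
    left = torch.randint(0, W - w + 1, (1,)).item()
    return view[:, top:top + h, left:left + w]

net = FullResolutionNet()                        # illustrative model sketched above
for m in net.modules():                          # Xavier initialisation of the conv weights
    if isinstance(m, torch.nn.Conv2d):
        torch.nn.init.xavier_uniform_(m.weight)

optimiser = torch.optim.Adam(net.parameters(), lr=1e-4, betas=(0.9, 0.999))
# mini-batch size 4, ~12k iterations of: sample crops, compute the objective, step()
```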
Experiments
===========
In this section, we first compare our method with several baseline works on the VS100 dataset.
Besides [@pratul2017lightField], we adapt the state-of-the-art methods for stereo pair synthesis [@xie2016deep3d], for synthesis with multi-view input [@LearningViewSynthesis] and for handling single objects [@zhou2016view] to fit our problem and perform the comparisons, because there are few previous works focusing on synthesizing user-specified novel viewpoints from one single natural image. As [@pratul2017lightField] is our most relevant work, we also compare with it on the Flower dataset, which is used in their experiments. Several evaluation metrics, such as the Peak Signal to Noise Ratio (PSNR), structural similarity (SSIM) and Mean Absolute Error (MAE), are used for the quantitative comparisons. Image and video results are also demonstrated in our paper and supplementary materials.
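The three metrics can be computed, for instance, with NumPy and scikit-image as below (a sketch; images are assumed to be floating-point arrays in $[0,1]$, and `channel_axis` requires a recent scikit-image version):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred, gt):
    """PSNR, SSIM and MAE between a synthesized view and the ground truth.

    pred, gt: HxWx3 float arrays in [0, 1].
    """
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    mae = np.abs(gt - pred).mean()
    return psnr, ssim, mae
```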
Then, to evaluate the effectiveness of our key contributions, i.e. the depth-assistance mechanism and the full resolution network, we further test our method by removing them or replacing them with traditional solutions. The experiments show that each of them is crucial to the final good results of our technique. Next, we show more results of our technique, where unconstrained images from the Internet are also tested. Finally, we demonstrate the application of our method to image refocusing and discuss the limitations of our technique.
{width="\textwidth"}
Baseline methods
----------------
Here we introduce the necessary adaptations on the baseline methods.
**Deep3D** [@xie2016deep3d] predicts the right view image from a single left view image. Considering the GPU memory, we use $100$ disparity layers in total, and re-sample the disparity range with $10$ disparity levels in both the horizontal and the vertical direction. For the network, we follow the original Deep3D method. Features, which are extracted by the pre-trained VGG16 [@Simonyan14c] network, are used for processing the input view. Disparity candidates are rebuilt by connecting the re-sampled high-level VGG16 features with other low-level features.
**LNVS** [@LearningViewSynthesis] proposes a network to use four corner views to predict the novel views in-between. By replacing the four corner input views with one center image, we extend this method to our problem. Because there is only one view as the input, instead of the mean and variance of the input views, we simply feed the $100$ disparity levels as $100$ feature layers to the neural network.
**AF** [@zhou2016view] focuses on novel view synthesis for an object and on flying through a scene. We extend this method by replacing the transformation parameters of objects with the viewpoint coordinates directly.
**4DLF** [@pratul2017lightField] proposes a network for light field reconstruction of flowers. To compare this method with ours on a general dataset, we pre-train their network on the Flower dataset as they proposed, and then fine-tune the network to fit the VS100 dataset at a lower learning rate. The network trained directly on VS100 is also compared in Table \[tab:2\], denoted as *4DLF(VS100)*.
Method          PSNR $\uparrow$     SSIM $\uparrow$     MAE $\downarrow$
--------------- ------------------- ------------------- -------------------
4DLF(VS100)     33.0853             0.8299              0.0328
4DLF            34.5788             0.8545              0.0285
LNVS            34.1789             0.8483              0.0282
Deep3D          34.9809             0.8567              0.0232
AF              35.5367             0.8531              0.0237
Ours            **36.4401**         **0.8875**          **0.0202**

: Quantitative comparison of our method with the baseline methods on the VS100 dataset[]{data-label="tab:2"}
Comparisons on VS100 dataset
-----------------------------
We test our method and the aforementioned four methods on all 30 test images in the VS100 dataset and generate 48 novel viewpoints for each image. By comparing with the ground truth, we calculate three numeric metrics and report the average values in Table \[tab:2\]. We can clearly see that our approach outperforms all the other methods. Notice that, except for our method, the other ones do not need the depth order dataset to train a depth predictor, but we treat this as a contribution of our technique, as it really benefits the task of single view-based viewpoint synthesis and only requires a pre-training step, which can be performed in advance and does not need to be changed in any of the following steps.
Besides the quantitative comparisons, we also show some synthesized results in Figure \[fig:3\]. For Deep3D, as the sampled disparities are quite sparse, it is difficult to assign correct disparities to all patches in the images. As shown in the result in the second row, a small part of the building is missed as its disparity is not correctly estimated. For LNVS, without the constraints from the four corner images, the cascade network generates blurry results. The same problem happens with 4DLF. As it tries to learn from patches ($192 \times 192$), it is more suitable for constrained scenes with a large number of training samples, such as the Flower dataset. The VS100 dataset is more challenging as it contains various different natural scenes but a limited number of samples, so 4DLF does not perform well in this situation. For our method, as we have better depth information to infer disparity, we get noticeably better results. AF predicts the flow of appearance with an encoder-decoder structure. As illustrated before, this structure does not preserve pixel level details for image transformations, and thus generates artifacts (note that the edge of the building is no longer straight in the result of the second row).
The reconstruction errors between the ground truth and the synthesized novel views are also shown in Figure \[fig:4\], and our method shows smaller reconstruction errors than the other methods, both in the views near the input (top row) and in the corner views (bottom row). A clearer comparison of the five methods is shown in the video.
![Comparison between the 4DLF method and our method on the test set of the Flower dataset. The left histogram shows the MAE comparison between 4DLF and our method; the mean MAE of our method is much lower than that of 4DLF. As for the visual results on the right, since the second occlusion network of 4DLF fails in some examples, the border of the flower is not as good as in our result. This difference is better viewed by zooming into the figure. []{data-label="fig:his"}](histogram_sketch.pdf "fig:"){width="0.95\columnwidth"}
Comparisons on Flower dataset
------------------------------
We also compare our method and 4DLF [@pratul2017lightField] on the Flower dataset. Our method is trained on $320\times240$ random crops of the original images, while we train [@pratul2017lightField] as reported in their paper. To fit our image ratio, we center crop the original test images from $542\times364$ to $480\times360$ to perform the comparison. For our method, we resize the test image to $320\times240$ to fit our model and then resize the output results back to $480\times360$ for comparison. For [@pratul2017lightField], we follow their method and test the images on the same crop of $480\times360$. Figure \[fig:his\] shows the histogram of the MAE on the test dataset. The MAE of our method mainly falls in the interval $[0.012,0.023]$, whereas their MAE mostly falls in $[0.015,0.028]$. The mean MAE of our method ($0.0201$) is also better than that of their method ($0.0244$). Figure \[fig:his\] also shows a failure case of [@pratul2017lightField] in large occlusion regions. As large occlusions are caused by large depth discontinuities and their results are sensitive to depth errors when synthesizing novel viewpoints, this failure case indicates that the 3D CNN and dilated convolutions used in [@pratul2017lightField] cannot generate very accurate depth in these regions. More comparisons of our method and 4DLF can be found in the supplementary figure.
![Evaluation of the key techniques of our method. The top part shows the color-coded flow field estimated by different alternative solutions, while the lower part shows the corresponding synthesized results. For the method w/o depth, the center of a flower is regarded as background, leading to the incorrect transformation of the flower center, shown in the purple rectangle. For the Encoder-decoder method, the flow looks blurred as local features (such as edges) are missing, leading the background to be translated with the foreground, shown in the green rectangle. []{data-label="fig:5"}](evaluation_compressed.pdf "fig:"){width="0.95\columnwidth"}
Evaluations
-----------
We first evaluate the full resolution network of our method. We believe this network is more suitable for our task of dense pixel value synthesis than the traditional encoder-decoder network, which is widely used for tasks with sparse outputs, as in [@long2015fully] and [@zhou2016view]. As shown in Table \[tab:3\] and Figure \[fig:5\], our full resolution network performs better on our task. The underlying reason is that the traditional encoder-decoder network basically represents the input image as low-resolution features of high dimension. As the features are already at low resolution, it is difficult to reconstruct output images with highly accurate pixel positions. In the green crop of Figure \[fig:5\], because of the blurred flow field, the background leaf transforms with the foreground flower, while our method shows the correct movement, similar to the ground truth.
Then we evaluate the effectiveness of involving depth information in our system. Depth is important because it essentially determines the disparity of scene objects across viewpoints. However, as depth estimation from a single input is an ill-posed problem, we utilize a network trained on a very large image dataset to obtain as good a reference depth as possible. Since image warping can be fully determined by local depth information, our synthesis network does not need to focus on global image features once the depth is already encoded in a feature layer; involving depth therefore matches our full resolution network, which mainly focuses on local features. As shown in Table \[tab:3\] and Figure \[fig:5\], the network without predicted depth treats the border and the center of the flower as background based on their local colors, generating the artifacts indicated in the purple patch.
[lccc]{} & PSNR $\uparrow$ & SSIM $\uparrow$ & MAE $\downarrow$\
Using Encoder-Decoder & 35.6235 & 0.8648 & 0.0215\
W/O Predicted Depth & 35.5281 & 0.8624 & 0.0224\
Ours Full & **36.4401** & **0.8875** & **0.0202**\
![ Although the VS100 dataset is limited, our method still shows good generalization capability to ordinary scenes. This is an example of our method applied to a random image from the Internet. We treat this image as the view at the $(0,0)$ coordinates of the light field and feed it to our network to synthesize the surrounding viewpoints. The right images show comparisons between selected patches of the input view and the four synthesized corner views at $(-3,-3)$, $(-3,3)$, $(3,-3)$ and $(3,3)$, respectively. []{data-label="fig:more"}](moreresults.pdf){width="\columnwidth"}
More results and Image Refocusing
---------------------------------
Our method can handle images of various scenes, not only the test images in the dataset but also Internet images. All our results are shown in the supplementary video for a better demonstration. Some examples are shown in Figures \[fig:0\] and \[fig:more\], where local regions change correctly with the change of viewpoint.
As our method can also be regarded as a light field reconstruction technique from a single image (by synthesizing dense viewpoints surrounding the input), it is possible to refocus a 2D image by adding all the light field images together with different offsets, as shown in Figure \[fig:focus\].
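As a concrete illustration of this shift-and-add refocusing, the sketch below shifts each synthesized view in proportion to its angular coordinates and averages the results; the array layout, the integer-pixel shift and the wrap-around handling are simplifying assumptions rather than the implementation used for Figure \[fig:focus\].

```python
import numpy as np

def refocus(light_field, slope):
    """
    Shift-and-add refocusing of a synthesized light field.

    light_field : array of shape (U, V, H, W, 3); views indexed by angular
                  coordinates (u, v), with the input image at the center.
    slope       : refocusing parameter; each view is translated by
                  (slope*u, slope*v) pixels before averaging, which brings
                  the scene plane with that disparity into focus.
    """
    U, V, H, W, _ = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W, 3), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            du = int(round(slope * (u - uc)))
            dv = int(round(slope * (v - vc)))
            # integer-pixel shift via np.roll; a full implementation would use
            # sub-pixel interpolation and proper border handling
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# slope = 0 focuses on the zero-disparity plane; positive or negative slopes
# sweep the focal plane towards the foreground or the background.
```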
\[tbp!\] ![Some image refocusing examples of our method. The top example is from the test set of the Flower dataset, while the bottom example is from the test set of the VS100 dataset. From left to right: the original input image, the image focused on the foreground, and the image focused on the background. []{data-label="fig:focus"}](refocusing2-eps-converted-to.pdf "fig:"){width="0.95\columnwidth"}
Limitations
-----------
First, we cannot handle natural images of arbitrary scenes, as many scene types are never seen in our light field dataset. We argue, however, that we do not need a large number of samples of any particular scene, since we only need to train a network for local feature extraction. Second, our full resolution network requires more memory and training/testing time than the traditional encoder-decoder network. Third, we cannot synthesize a novel viewpoint that is far from the input one, because doing so requires much more information than is available in the input.
Conclusion
==========
In this paper, we propose a method to synthesize user-desired novel views from a single input image. The task is challenging and ill-posed, and difficult even for current powerful deep learning techniques, as there is no sufficiently large light field dataset for training. To tackle this problem, we first leverage a large image dataset with sparsely labeled depth orders to train a depth predictor. We demonstrate that, by combining the predicted depth with only the local image features extracted by a specially designed full resolution network, novel view synthesis can be achieved on various input images. Because the full resolution network only needs to extract local features, the current light field dataset is sufficient, as shown in our experiments on the VS100 dataset. We have thus made a step further in increasing the generalization capability on this new and important task.
[^1]: These two authors contributed equally
[^2]: Corresponding Author
---
abstract: |
The problem of reducing the bias of maximum likelihood estimator in a general multivariate elliptical regression model is considered. The model is very flexible and allows the mean vector and the dispersion matrix to have parameters in common. Many frequently used models are special cases of this general formulation, namely: errors-in-variables models, nonlinear mixed-effects models, heteroscedastic nonlinear models, among others. In any of these models, the vector of the errors may have any multivariate elliptical distribution. We obtain the second-order bias of the maximum likelihood estimator, a bias-corrected estimator, and a bias-reduced estimator. Simulation results indicate the effectiveness of the bias correction and bias reduction schemes.
[**Keywords:**]{} Bias correction; bias reduction; elliptical model; maximum likelihood estimation; general parameterization.
author:
- |
Tatiane F. N. Melo\
[*Institute of Mathematics and Statistics, Federal University of Goiás, Brazil*]{}\
[email: [[email protected]]{}]{}\
\
Silvia L. P. Ferrari\
[*Department of Statistics, University of São Paulo, Brazil*]{}\
[email: [[email protected]]{}]{}\
\
Alexandre G. Patriota\
[*Department of Statistics, University of São Paulo, Brazil*]{}\
[email: [[email protected]]{}]{}\
\
title: Improved estimation in a general multivariate elliptical model
---
Introduction
============
It is well known that, under some standard regularity conditions, maximum-likelihood estimators (MLEs) are consistent and asymptotically normally distributed. Hence, their biases converge to zero when the sample size increases. However, for finite sample sizes, the MLEs are in general biased and bias correction plays an important role in the point estimation theory.
A general expression for the term of order $O(n^{-1})$ in the expansion of the bias of MLEs was given by Cox and Snell (1968). This term is often called second-order bias and can be useful in actual problems. For instance, a very high second-order bias indicates that other than maximum-likelihood estimation procedures should be used. Also, corrected estimators can be formulated by subtracting the estimated second-order biases from the respective MLEs. It is expected that these corrected estimators have smaller biases than the uncorrected ones, especially in small samples.
Cox and Snell’s formulae for second-order biases of MLEs were applied in many models. Cordeiro and McCullagh (1991) use these formulae in generalized linear models; Cordeiro and Klein (1994) compute them for ARMA models; Cordeiro et al. (2000) apply them for symmetric nonlinear regression models; Vasconcellos and Cordeiro (2000) obtain them for multivariate nonlinear Student t regression models. More recently, Cysneiros et al. (2010) study the univariate heteroscedastic symmetric nonlinear regression models (which are an extension of Cordeiro et al. 2000) and Patriota and Lemonte (2009) obtain a general matrix formula for the bias correction in a multivariate normal model where the mean and the covariance matrix have parameters in common.
An alternative approach to bias correction was suggested by Firth (1993). The idea is to adjust the estimating function so that the estimate becomes less biased. This approach can be viewed as a “preventive" method, since it modifies the original score function, prior to obtaining the parameter estimates. In this paper, estimates obtained from Cox and Snell’s approach and Firth’s method will be called bias-corrected estimates and bias-reduced estimates, respectively. Firth showed that in generalized linear models with canonical link function the preventive method is equivalent to maximizing a penalized likelihood that is easily implemented via an iterative adjustment of the data. The bias reduction proposed by Firth has received considerable attention in the statistical literature. For models for binary data, see Mehrabi and Matthews (1995); for censored data with exponential lifetimes, see Pettitt et al. (1998). In Bull et al. (2002) bias reduction is obtained for the multinomial logistic regression model. In Kosmidis and Firth (2009) a family of bias-reducing adjustments was developed for a general class of univariate and multivariate generalized nonlinear models. The bias reduction in cumulative link models for ordinal data was studied in Kosmidis (2014). Additionally, Kosmidis and Firth (2011) showed how to obtain the bias-reducing penalized maximum likelihood estimator by using the equivalent Poisson log-linear model for the parameters of a multinomial logistic regression.
It is well-known and was noted by Firth (1993) and Kosmidis and Firth (2009) that the reduction in bias may sometimes be accompanied by inflation of variance, possibly yielding an estimator whose mean squared error is bigger than that of the original one. Nevertheless, published empirical studies such as those mentioned above show that, in some frequently used models, bias-reduced and bias-corrected estimators can perform better than the unadjusted maximum likelihood estimators, especially when the sample size is small.
Our goal in this paper is to obtain bias correction and bias reduction for the maximum likelihood estimators in the general multivariate elliptical model. We extend the work of Patriota and Lemonte (2009) to the elliptical class of distributions defined in Lemonte and Patriota (2011). We focus on analytical methods only, because simulations for a general multivariate normal model suggest that analytical bias corrections outperform the computationally intensive bootstrap methods (Lemonte, 2011).
In order to illustrate the ampleness of the general multivariate elliptical model, we mention some of its submodels: multiple linear regression, heteroscedastic multivariate nonlinear regressions, nonlinear mixed-effects models (Patriota 2011), heteroscedastic errors-in-variables models (Patriota et al. 2009a,b), structural equation models, multivariate normal regression model with general parametrization (Lemonte, 2011), simultaneous equation models and mixtures of them. It is important to note that the usual normality assumption of the error is relaxed and replaced by the assumption of elliptical errors. The elliptical family of distributions includes many important distributions such as multivariate normal, Student $t$, power exponential, contaminated normal, Pearson II, Pearson VII, and logistic, with heavier or lighter tails than the normal distribution; see Fang et al. (1990).
The paper is organized as follows. Section \[BiasCorrectionReduction\] presents the notation and general results for bias correction and bias reduction. Section \[model\] presents the model and our main results, namely the general expression for the second-order bias of MLEs, in the general multivariate elliptical model. Section \[special-cases\] applies our results in four important special cases: heteroscedastic nonlinear (linear) model, nonlinear mixed-effects models, multivariate errors-in-variables models and log-symmetric regression models. Simulations are presented in Section \[simulation\]. Applications that use real data are presented and discussed in Section \[applications\]. Finally, Section \[conclusion\] concludes the paper. Technical details are collected in one appendix.
Bias correction and bias reduction {#BiasCorrectionReduction}
==================================
Let $\theta$ be the $p$-vector of unknown parameters and ${\theta}_{r}$ its $r$th element. Also, let $U(\theta)$ be the score function and $U_r(\theta) = U_r$ its $r$th element. We use the following tensor notation for the cumulants of the log-likelihood derivatives introduced by Lawley (1956):
$$\kappa_{rs} = E\bigg(\frac{\partial U_r}{\partial\theta_{s}}\bigg),\quad \kappa_{r,s} = E(U_r U_s),\quad
\kappa_{rs,t} = E\bigg(\frac{\partial U_r}{\partial\theta_{s}} U_{t}\bigg),$$ $$\kappa_{rst} = E\bigg(\frac{\partial^{2} U_r}{\partial\theta_{s}\partial\theta_{t}}\bigg),\quad
\kappa_{rs}^{(t)} = \frac{\partial\kappa_{rs}}{\partial\theta_{t}}, \quad \kappa_{r,s,t} = E(U_r U_s U_t),$$ and so on. The indices $r$, $s$ and $t$ vary from $1$ to $p$. The typical $(r,s)$th element of the Fisher information matrix $K(\theta)$ is $\kappa_{r,s}$ and we denote by $\kappa^{r,s}$ the corresponding element of $K(\theta)^{-1}$. All $\kappa$’s refer to a total over the sample and are, in general, of order $n$. Under standard regular conditions, we have that $\kappa_{rs} = - \kappa_{r,s}$, $\kappa_{rs,t} = \kappa_{rs}^{(t)}- \kappa_{rst} $ and $\kappa_{r,s,t} = 2\kappa_{rst} -\kappa_{rs}^{(t)} - \kappa_{rt}^{(s)} - \kappa_{st}^{(r)}$. These identities will be used to facilitate some algebraic operations.
Let $B_{\widehat{\theta}}(\theta)$ be the second-order bias vector of $\widehat{\theta}$ whose $j$th element is $B_{\widehat{\theta}_{j}}(\theta)$, $j=1,2,\ldots,$ $p$. It follows from the general expression for the multiparameter second-order biases of MLEs given by Cox and Snell (1968) that $$\label{biascorrection}
B_{\widehat{\theta}_{j}}(\theta) = \sum_{r,s,t = 1}^p\kappa^{j,r}\kappa^{s,t}\biggl\{
\frac{1}{2}\kappa_{rst}+\kappa_{rs,t}\biggr\}.$$
The bias corrected MLE is defined as $${\widehat{\theta}}_{BC} = \widehat{{\theta}} - B_{\widehat{\theta}}(\widehat{\theta}).$$ The bias-corrected estimator ${\widehat{\theta}}_{BC}$ is expected to have smaller bias than the uncorrected estimator, $\widehat{{\theta}}$.
Firth (1993) proposed an alternative method to partially remove the bias of MLEs. The method replaces the score function by its modified version $$\label{scoreModified}
U^*(\theta) = U(\theta) - K(\theta) B_{\widehat{\theta}}(\theta),$$ and a modified estimate, ${\widehat{\theta}}_{BR}$, is given as a solution to $U^*(\theta) = 0$. It is noticeable that, unlike Cox and Snell’s approach, Firth’s bias reduction method does not depend on the finiteness of ${\widehat{\theta}}$.
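Both schemes can be assembled directly from the cumulants above. The following is a minimal `numpy` sketch, assuming that the arrays of cumulants $\kappa_{rst}$ and $\kappa_{rs,t}$ and the Fisher information $K(\theta)$ have already been computed for the model at hand; all function names are illustrative placeholders.

```python
import numpy as np

def cox_snell_bias(K, kappa_rst, kappa_rs_t):
    """
    Second-order bias from equation (biascorrection):
        B_j = sum_{r,s,t} kappa^{j,r} kappa^{s,t} ( kappa_{rst}/2 + kappa_{rs,t} ).

    K          : p x p Fisher information (elements kappa_{r,s});
    kappa_rst  : p x p x p array of E(d^2 U_r / d th_s d th_t);
    kappa_rs_t : p x p x p array of E( (d U_r / d th_s) U_t ).
    """
    Kinv = np.linalg.inv(K)                  # elements kappa^{r,s}
    inner = 0.5 * kappa_rst + kappa_rs_t     # indexed by (r, s, t)
    return np.einsum('jr,st,rst->j', Kinv, Kinv, inner)

def bias_corrected(theta_hat, K_fn, kappa_rst_fn, kappa_rs_t_fn):
    """theta_BC = theta_hat - B(theta_hat)."""
    B = cox_snell_bias(K_fn(theta_hat), kappa_rst_fn(theta_hat),
                       kappa_rs_t_fn(theta_hat))
    return theta_hat - B

def modified_score(theta, U_fn, K_fn, kappa_rst_fn, kappa_rs_t_fn):
    """Firth's adjusted score U*(theta) = U(theta) - K(theta) B(theta);
    the bias-reduced estimate is a root of this function."""
    K = K_fn(theta)
    B = cox_snell_bias(K, kappa_rst_fn(theta), kappa_rs_t_fn(theta))
    return U_fn(theta) - K @ B
```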
Model and main results {#model}
======================
We shall follow the same notation presented in Lemonte and Patriota (2011). The elliptical model as defined in Fang et al. (1990) follows. A $q\times 1$ random vector ${Y}$ has a multivariate elliptical distribution with location parameter ${\mu}$ and a definite positive scale matrix ${\Sigma}$ if its density function is $$\label{dens}
f_{{Y}}({y}) = |{\Sigma}|^{-1/2} g\bigl(({y} - {\mu})^{\top}{\Sigma}^{-1}({y} - {\mu})\bigr),$$ where $g:[0,\infty)\to(0,\infty)$ is called the density generating function, and it is such that $\int_{0}^{\infty}u^{\frac{q}{2}-1}g(u) du < \infty$. We will denote ${Y}\sim El_{q}({\mu},{\Sigma}, g) \equiv El_{q}({\mu},{\Sigma})$. It is possible to show that the characteristic function is $\psi(t) = \mbox{E}(\exp(it^\top Y))=\exp(it^\top\mu)\varphi(t^\top\Sigma t)$, where $t \in \mathbb{R}^q$ and $\varphi: [0,\infty) \to \mathbb{R}$. Then, if $\varphi$ is twice differentiable at zero, we have that $\mbox{E}(Y) = \mu$ and $\mbox{Var}(Y) = \xi \Sigma$, where $\xi = \varphi'(0)$. We assume that the density generating function $g$ does not have any unknown parameter, which implies that $\xi$ is a known constant. From (\[dens\]), when ${\mu} = {0}$ and ${\Sigma} = {I}_{q}$, where ${I}_{q}$ is a $q\times q$ identity matrix, we obtain the spherical family of densities. A comprehensive exposition of the elliptical multivariate class of distributions can be found in Fang et al. (1990). Table 1 presents the density generating functions of some multivariate elliptical distributions.
[cc]{}\
Distribution & Generating function $g(u)$\
normal & $ \frac{1}{(\sqrt{2 \pi})^q} \ e^{-u/2}$\
\
Cauchy & $ \frac{\Gamma\left(\frac{1+q}{2}\right)}{\Gamma\left(\frac{1}{2}\right)} \pi^{-q/2} (1 + u)^{-(1 + q)/2}$\
\
Student $t$ & $ \frac{\Gamma\left(\frac{\nu + q}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)} \pi^{-q/2} \nu^{-q/2} \left(1 + \frac{u}{\nu}\right)^{-(\nu + q)/2}$, $\nu > 0$\
\
power exponential & $ \frac{\lambda \Gamma\left(\frac{q}{2}\right)}{\Gamma\left(\frac{q}{2\lambda}\right)} 2^{-q/(2 \lambda)} \pi^{-q/2} e^{-u^{\lambda}/2}$, $\lambda > 0$\
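As an illustration of (\[dens\]) and Table 1, the sketch below evaluates the density of an elliptical vector for a given density generating function $g$; the normal and Student $t$ generators are transcribed from Table 1, and the numerical values in the example are arbitrary.

```python
import numpy as np
from scipy.special import gammaln

def g_normal(u, q):
    """Density generating function of the q-variate normal (Table 1)."""
    return (2.0 * np.pi) ** (-q / 2.0) * np.exp(-u / 2.0)

def g_student_t(u, q, nu):
    """Density generating function of the q-variate Student t (Table 1)."""
    logc = gammaln((nu + q) / 2.0) - gammaln(nu / 2.0) \
           - (q / 2.0) * np.log(np.pi * nu)
    return np.exp(logc) * (1.0 + u / nu) ** (-(nu + q) / 2.0)

def elliptical_density(y, mu, Sigma, g):
    """f_Y(y) = |Sigma|^{-1/2} g( (y - mu)' Sigma^{-1} (y - mu) ), equation (dens)."""
    z = np.asarray(y, float) - np.asarray(mu, float)
    u = z @ np.linalg.solve(Sigma, z)
    return np.linalg.det(Sigma) ** (-0.5) * g(u)

# Example: bivariate Student t with 4 degrees of freedom
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
val = elliptical_density([0.3, -0.2], [0.0, 0.0], Sigma,
                         lambda u: g_student_t(u, q=2, nu=4.0))
```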
Let $Y_1, Y_2, \ldots, Y_n$ be $n$ independent random vectors, where $Y_i$ has dimension $q_i\in \mathbb{N}$, for $i=1,2,\ldots,n$. The general multivariate elliptical model (Lemonte and Patriota 2011) assumes that $${Y}_{i} = {\mu}_{i}({\theta},{x}_{i}) + {e}_{i},\quad i=1,\ldots,n,$$ with ${e}_{i} \stackrel{ind}{\sim} El_{q_{i}}({0},{\Sigma}_i({\theta}, {w}_{i}))$, where “$ \stackrel{ind}{\sim}$” means “independently distributed as”, ${x}_i$ and ${w}_i$ are $m_i\times 1$ and $k_{i}\times 1$ nonstochastic vectors of auxiliary variables, respectively, associated with the $i$th observed response ${Y}_{i}$, which may have components in common. Then,
$$\label{MainModel}{Y}_{i} \stackrel{ind}{\sim}El_{q_{i}}({\mu}_{i}, {\Sigma}_i),\quad i=1,\ldots,n,$$
where ${\mu}_{i} = {\mu}_{i}({\theta},{x}_{i})$ is the location parameter and ${\Sigma}_i = {\Sigma}_i({\theta}, {w}_{i})$ is the definite positive scale matrix. Both ${\mu}_{i}$ and ${\Sigma}_i$ have known functional forms and are twice differentiable with respect to each element of ${\theta}$. Additionally, ${\theta}$ is a $p$-vector of unknown parameters (where $p<n$ and it is fixed). Since ${\theta}$ must be identifiable in model (\[MainModel\]), the functions ${\mu}_i$ and ${\Sigma}_i$ must be defined to accomplish such restriction.
Several important statistical models are special cases of the general formulation (\[MainModel\]), for example, linear and nonlinear regression models, homoscedastic or heteroscedastic measurement error models, and mixed-effects models with normal errors. It is noteworthy that the normality assumption for the errors may be relaxed and replaced by any distribution within the class of elliptical distributions, such as the Student $t$ and the power exponential distributions. The general formulation allows a wide range of different specifications for the location and the scale parameters, coupled with a large collection of distributions for the errors. Section 4 presents four important particular cases of the main model (\[MainModel\]) that show the applicability of the general formulation.
For the sake of simplifying the notation, let ${z}_i = {Y}_i - {\mu}_i$ and $u_{i} = {z}_i^{\top}{\Sigma}_i^{-1}{z}_i$. The log-likelihood function associated with (\[MainModel\]), is given by $$\label{log-likelihood}
\ell({\theta}) = \sum_{i=1}^n \ell_{i}({\theta}),$$ where $\ell_{i}({\theta}) = -\frac{1}{2} \log{|{\Sigma}_i|} + \log g(u_i)$. It is assumed that $g(\cdot)$, ${\mu}_i$ and ${\Sigma}_i$ are such that $\ell({\theta})$ is a regular log-likelihood function (Cox and Hinkley 1974, Ch. 9) with respect to ${\theta}$. To obtain the score function and the Fisher information matrix, we need to derive $\ell(\theta)$ with respect to the unknown parameters and to compute some moments of such derivatives. We assume that such derivatives exist. Thus, we define
$${a}_{i(r)} = \frac{\partial {\mu}_i}{\partial \theta_{r}}, \quad
{a}_{i(sr)} = \frac{\partial^2 {\mu}_i}{\partial
\theta_{s} \partial\theta_{r}}, \quad {C}_{i(r)} = \frac{\partial
{\Sigma}_i}{\partial \theta_{r}}, \quad {C}_{i(sr)} =
\frac{\partial^2 {\Sigma}_i}{\partial \theta_{s} \partial\theta_{r}}$$ and$${A}_{i(r)} = -{\Sigma}_i^{-1} {C}_{i(r)}{\Sigma}_i^{-1},$$ for $r,s = 1,\ldots,p$. We make use of matrix differentiation methods (Magnus and Neudecker 2007) to compute the derivatives of the log-likelihood function. The score vector and the Fisher information matrix for ${\theta}$ can be shortly written as $$\label{ScoreFisher2}
U(\theta) = {F}^{\top}{H}{s} \quad \mbox{and}\quad K(\theta) ={F}^{\top}\widetilde{H}{F},$$ respectively, with $F = \left(F_1^\top, \ldots, F_n^\top\right)^{\top}$, $H = \mbox{block-diag}\left\{H_1, \ldots, H_n\right\}$, $s = ( s_1^\top, \ldots, s_n^\top )^{\top}$, $\widetilde{H} = H M H$ and $M = \mbox{block-diag}\left\{ M_1^\top,\ldots, M_n^\top \right\}$, wherein $$\label{matrix-score}
{F}_i =
\begin{pmatrix}
{D}_i\\
{V}_i\\
\end{pmatrix},\quad
{H}_i =
\begin{bmatrix}
{\Sigma}_i & {0}\\
{0} & 2{\Sigma}_i\otimes {\Sigma}_i
\end{bmatrix}^{-1}, \quad
{s}_i =
\begin{bmatrix}
v_{i}{z}_i\\
-{\textrm{vec}}({\Sigma}_i - v_{i}{z}_i{z}_i^{\top})
\end{bmatrix},$$ where the “vec" operator transforms a matrix into a vector by stacking the columns of the matrix, ${D}_i = ({a}_{i(1)}, \ldots, {a}_{i(p)})$, ${V}_i = ({\textrm{vec}}({C}_{i(1)}),\ldots, {\textrm{vec}}({C}_{i(p)}))$, $v_{i} = -2W_{g}(u_{i})$ and $W_g(u) = \mbox{d} \log g(u)/\mbox{d} u$. Here, we assume that $F$ has rank $p$ (i.e., ${\mu}_i$ and ${\Sigma}_i$ must be defined to hold such condition). The symbol “$\otimes$” indicates the Kronecker product. Following Lange et al. (1989) we have, for the $q$-variate Student $t$ distribution with $\nu$ degrees of freedom, $t_q({\mu}, {\Sigma}, \nu)$, that $W_g(u) = -(\nu + q )/\{2(\nu + u)\}$. Following Gómez et al. (1998) we have, for the $q$-variate power exponential $PE_q({\mu}, \delta, \lambda)$ with shape parameter $\lambda>0$ and $u\neq0$, that $W_g(u) = - \lambda u^{\lambda - 1}/2$, $\lambda \neq 1/2$. In addition, we have
$$\label{matrix-Mi}
{M}_i =
\begin{bmatrix}
\frac{4\psi_{i(2,1)}}{q_{i}}{\Sigma}_i & {0}\\
{0} & 2c_i{\Sigma}_i\otimes{\Sigma}_i
\end{bmatrix}
+(c_i - 1)
\begin{bmatrix}
{0} & {0}\\
{0} & {\textrm{vec}}({\Sigma}_i){\textrm{vec}}({\Sigma}_i)^{\top}
\end{bmatrix},$$
where $c_i = 4\psi_{i(2,2)}/\{q_{i}(q_{i} + 2)\}$, $\psi_{i(2,1)} = E(W_{g}^2(r_{i})r_{i})$ and $\psi_{i(2,2)} =$ $E(W_{g}^2(r_{i})r_{i}^{2})$, with $r_{i} = ||{L}_{i}||^2$, ${L}_{i}\sim El_{q_{i}}({0},{I}_{q_{i}})$. Here, we assume that $g(u)$ is such that $\psi_{i(2,1)}$ and $\psi_{i(2,2)}$ exist and are finite for all $i=1, \ldots, n$. One can verify these results by using standard differentiation techniques and some standard matrix operations.
The values of $\psi_{i(l,k)}$ are obtained from solving the following one-dimensional integrals (Lange et al. 1989): $$\label{psi}
\psi_{i(l,k)} = \int_{0}^{\infty} W_g(s^2)^l g(s^2)s^{q_i + 2k-1}c_{q_i} \mbox{d} s,$$ where $c_{q_i}=2\pi^{\frac{q_i}{2}}/\Gamma(\frac{q_i}{2})$ is the surface area of the unit sphere in $\mathbb{R}^{q_i}$ and $\Gamma(a)$ is the well-known gamma function. One can find these quantities for many distributions by simply solving (\[psi\]) algebraically or numerically. Table 2 shows these quantities for the normal, Cauchy, Student $t$ and power exponential distributions.
\[Tab:psi\]
[lcccc]{} & $\psi_{i(2,1)}$ & $\psi_{i(2,2)}$ & $\psi_{i(3,2)}$ & $\psi_{i(3,3)}$\
normal & $\frac{q_i}{4}$ & $\frac{q_i(q_i + 2)}{4}$ & $-\frac{q_i(q_i + 2)}{8}$ & $-\frac{q_i(q_i + 2)(q_i + 4)}{8}$ $q_i \geq 1$\
\
\
Cauchy & $\frac{q_i(q_{i} + 1)}{4(q_{i} + 3)}$ & $\frac{q_{i}(q_{i} + 2)(q_{i} + 1)}{4(q_{i} + 3)}$& $-\frac{q_i(q_i+2)(q_i+1)^2}{8(q_i+3)(q_i+5)}$ & $-\frac{q_i(q_i+2)(q_i+4)(q_i+1)^2 }{8(q_i+3)(q_i+5)}$ $q_i \geq 1$\
\
\
Student $t$ & $\frac{q_i(q_{i} + \nu)}{4(q_{i} + \nu + 2)}$ & $\frac{q_{i}(q_{i} + 2)(q_{i} + \nu)}{4(q_{i} + \nu + 2)}$& $-\frac{q_i(q_i+2)(q_i+\nu)^2}{8(q_i+2+\nu)(q_i+4+\nu)}$ & $-\frac{q_i(q_i+2)(q_i+4)(q_i+\nu )^2 }{8(q_i+2+\nu )(q_i+4+\nu)}$ $q_i \geq 1$\
\
\
P.E.& $\frac{\lambda^2\Gamma(\frac{4\lambda - 1}{2\lambda})}{2^{1/\lambda}\Gamma(\frac{1}{2\lambda})}$& $\frac{2\lambda + 1}{4}$ & $- \frac{\lambda^3\Gamma(\frac{6\lambda - 1}{2\lambda})}{2^{1/\lambda}\Gamma(\frac{1}{2\lambda})}$& $-\frac{(2\lambda + 1)(4\lambda + 1) }{8}$ $q_i = 1$, $\lambda > \frac{1}{4}$\
\
P.E.& $\frac{\lambda^2\Gamma(\frac{q_{i} - 2}{2\lambda} + 2)}{2^{1/\lambda}\Gamma(\frac{q_{i}}{2\lambda})}$& $\frac{q_{i}(2\lambda + q_{i})}{4}$ & $- \frac{\lambda^3\Gamma(\frac{q_{i} - 2}{2\lambda} + 3)}{2^{1/\lambda}\Gamma(\frac{q_{i}}{2\lambda})}$& $-\frac{q_i(2\lambda + q_i)(4\lambda +q_i) }{8}$ $q_i \geq 2$, $\lambda > 0$\
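Expression (\[psi\]) can also be checked numerically. The sketch below evaluates $\psi_{(2,1)}$ and $\psi_{(2,2)}$ for the Student $t$ case by one-dimensional quadrature and compares them with the closed forms in Table 2; it is a verification aid only, with $q$ and $\nu$ chosen arbitrarily.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def psi(l, k, q, Wg, g):
    """Numerical evaluation of psi_{(l,k)} in equation (psi)."""
    c_q = 2.0 * np.pi ** (q / 2.0) / np.exp(gammaln(q / 2.0))
    integrand = lambda s: Wg(s * s) ** l * g(s * s) * s ** (q + 2 * k - 1) * c_q
    val, _ = quad(integrand, 0.0, np.inf)
    return val

def make_student_t(q, nu):
    """Generating function g and W_g for the q-variate Student t."""
    logc = gammaln((nu + q) / 2.0) - gammaln(nu / 2.0) - (q / 2.0) * np.log(np.pi * nu)
    g = lambda u: np.exp(logc) * (1.0 + u / nu) ** (-(nu + q) / 2.0)
    Wg = lambda u: -(nu + q) / (2.0 * (nu + u))
    return g, Wg

q, nu = 3, 4.0
g, Wg = make_student_t(q, nu)
print(psi(2, 1, q, Wg, g), q * (q + nu) / (4 * (q + nu + 2)))            # Table 2
print(psi(2, 2, q, Wg, g), q * (q + 2) * (q + nu) / (4 * (q + nu + 2)))  # Table 2
```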
It is important to remark that the $\psi_{i(l, k)}$’s may involve unknown quantities (for instance, the degrees of freedom $\nu$ of the Student $t$ distribution and the shape parameter $\lambda$ of the power exponential distribution). One may want to estimate these quantities via maximum likelihood estimation. Here, we consider these as known quantities for the purpose of keeping the robustness property of some distributions. Lucas (1997) shows that the protection against “large” observations is only valid when the degrees of freedom parameter is kept fixed for the Student $t$ distribution. Therefore, the issue of estimating these quantities is beyond of the main scope of this paper. In practice, one can use model selection procedures to choose the most appropriate values of such unknown parameters.
Notice that, in the Fisher information matrix $K(\theta)$, the matrix ${M}$ carries all the information about the adopted distribution, while ${F}$ and ${H}$ contain the information about the adopted model. Also, $K(\theta)$ has a quadratic form that can be computed through simple matrix operations. Under the normal case, $v_{i} = 1$, ${M}={H}^{-1}$ and hence $\widetilde{H} = {H}$.
The Fisher scoring method can be used to estimate ${\theta}$ by iteratively solving the equation
$$\label{Fisher-Scoring}
({F}^{(m)\top}\widetilde{H}^{(m)}{F}^{(m)}){\theta}^{(m+1)} =
{F}^{(m)\top} \widetilde{H}^{(m)}{s}^{*(m)}, \quad m = 0, 1,\ldots,$$
where the quantities with the upper index “$(m)$" are evaluated at $\widehat{{\theta}}$, $m$ is the iteration counter and $${s}^{*(m)} = {F}^{(m)}{\theta}^{(m)} + {H}^{-1(m)}{M}^{-1(m)}{s}^{(m)}.$$ Each loop of the iterative scheme (\[Fisher-Scoring\]) consists of an iteratively re-weighted least-squares step to optimize the log-likelihood (\[log-likelihood\]). Thus, (\[ScoreFisher2\]) and (\[Fisher-Scoring\]) agree with the corresponding equations derived in Patriota and Lemonte (2009). Observe that, despite the complexity and generality of the postulated model, expressions (\[ScoreFisher2\]) and (\[Fisher-Scoring\]) are very simple and friendly.
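A compact implementation of the iterative scheme (\[Fisher-Scoring\]) is sketched below. The function `build`, which returns $F$, $H$, $M$ and $s$ evaluated at the current $\theta$, is a placeholder for model-specific code, and the block-diagonal matrices are treated as ordinary dense matrices for simplicity.

```python
import numpy as np

def fisher_scoring(theta0, build, tol=1e-8, max_iter=100):
    """
    Iteratively re-weighted least squares for the general elliptical model,
    following equation (Fisher-Scoring).  `build(theta)` is assumed to return
    the stacked quantities (F, H, M, s) of Section 3 evaluated at theta.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        F, H, M, s = build(theta)
        Htilde = H @ M @ H
        # working response s* = F theta + H^{-1} M^{-1} s
        s_star = F @ theta + np.linalg.solve(H, np.linalg.solve(M, s))
        lhs = F.T @ Htilde @ F
        rhs = F.T @ Htilde @ s_star
        theta_new = np.linalg.solve(lhs, rhs)
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta
```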
Now, we can give the main result of the paper.
[**Theorem 3.1.**]{} The second-order bias vector $B_{\widehat{\theta}}(\theta)$ under model (\[MainModel\]) is given by $$\label{BIAS-vector}
B_{\widehat{\theta}}(\theta) = (F^{\top}\widetilde{{H}}F)^{-1} F^{\top}\widetilde{{H}}\xi,$$ where ${\xi} = (\Phi_1,\ldots,\Phi_p){\textrm{vec}}(({F}^{\top} \widetilde{H}F)^{-1})$, $\Phi_r = (\Phi_{1(r)}^{\top}, \ldots \Phi_{n(r)}^{\top})^{\top}$, and $\Phi_{i(r)}$ is given in the Appendix.
[**Proof:**]{} See the Appendix. In many models the location vector and the scale matrix do not have parameters in common, i.e., ${\mu}_i = {\mu}_i({\theta}_1,{x}_{i})$ and ${\Sigma}_i =$ ${\Sigma}_i({\theta}_2,{w}_{i})$, where ${\theta} = ({\theta}_1^{\top}, {\theta}_2^{\top} )^{\top}$. Therefore, $F = \mbox{block--diag}\{F_{\theta_1}, F_{\theta_2}\}$ and the parameter vectors ${\theta}_1$ and ${\theta}_2$ will be orthogonal (Cox and Reid 1987). This happens in mixed models, nonlinear models, among others. However, in errors-in-variables and factor analysis models orthogonality does not hold. Model (\[MainModel\]) is general enough to encompass a large number of models even those that do not have orthogonal parameters.
[**Corollary 3.1.**]{} When ${\mu}_i = {\mu}_i({\theta}_1,{x}_{i})$ and ${\Sigma}_i = {\Sigma}_i({\theta}_2,{w}_{i})$, where ${\theta} = ({\theta}_1^{\top}, {\theta}_2^{\top} )^{\top}$ the second-order bias vector of $\widehat{\theta}_1$ and $\widehat{\theta}_2$ are given by $$\label{BIAS-vector1}
B_{\widehat{\theta}_1}(\theta) = (F_{\theta_1}^{\top} \widetilde{H}_{1} F_{\theta_1})^{-1} F_{\theta_1}^{\top} \widetilde{H}_{1} \xi_{1}$$ and $$\label{BIAS-vector2}
B_{\widehat{\theta}_2}(\theta) = (F_{\theta_2}^{\top} \widetilde{H}_{2} F_{\theta_2})^{-1} F_{\theta_2}^{\top} \widetilde{H}_{2} \xi_{2},$$ respectively. The quantities $F_{\theta_1}$, $F_{\theta_2}$, $\widetilde{H}_{1}$, $\widetilde{H}_{2}$, $\xi_{1}$ and $\xi_{2}$ are defined in the Appendix.
[**Proof:**]{} See the Appendix. Formula (\[BIAS-vector\]) says that, for any particular model of the general multivariate elliptical class of models (\[MainModel\]), it is always possible to express the bias of $\widehat{{\theta}}$ as the solution of a weighted least-squares regression. Also, if $z_i \sim N_{q_i}(0, \Sigma_i)$ then $c_i = -\widetilde{\omega}_{i} = 1$, $\eta_{1i} = 0$, $\eta_{2i} = -2$, $\widetilde{H} = H$, $$J_{i(r)} =
\begin{pmatrix}
0 \\
2(I_{q_i}\otimes a_{i(r)})D_i
\end{pmatrix},$$ and formula (\[BIAS-vector\]) reduces to the one obtained by Patriota and Lemonte (2009).
Theorem 3.1 implies that all one needs to compute bias-corrected and bias-reduced MLEs in the general elliptical model is: (i) the first and second derivatives of the location vector $\mu_i$ and the scale matrix $\Sigma_i$ with respect to all the parameters; (ii) the derivative $W_g(u)$; (iii) some moments involving the chosen elliptical distribution (these moments are given in Table 2 for some elliptical distributions). With these quantities, the matrices in (\[BIAS-vector\]) can be computed and the bias vector obtained through a weighted least-squares regression.
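In code, once $F$, $\widetilde{H}$ and $\xi$ have been evaluated at $\widehat{\theta}$, the bias vector in (\[BIAS-vector\]) is literally a weighted least-squares fit. A minimal sketch follows; the construction of $\xi$ from the $\Phi_r$ matrices in the Appendix is taken as given.

```python
import numpy as np

def second_order_bias(F, Htilde, xi):
    """
    Equation (BIAS-vector): B = (F' H~ F)^{-1} F' H~ xi, i.e. the fitted
    coefficients of a weighted least-squares regression of xi on the
    columns of F with weight matrix H~.
    """
    FtH = F.T @ Htilde
    return np.linalg.solve(FtH @ F, FtH @ xi)

# Bias-corrected estimate once F, H~ and xi have been evaluated at theta_hat:
# theta_bc = theta_hat - second_order_bias(F, Htilde, xi)
```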
Special models {#special-cases}
==============
In this section, we present four important particular cases of the main model (\[MainModel\]). All special cases presented in Patriota and Lemonte (2009) are also special cases of the general multivariate elliptical model defined in this paper.
Heteroscedastic nonlinear models
--------------------------------
Consider the univariate heteroscedastic nonlinear model defined by $$\label{ModelNonLinHetero}
Y_i = f({x}_i, {\alpha}) + e_i, \ \ i = 1, 2, \ldots, n,$$ where $Y_i$ is the response, ${x}_i$ is a column vector of explanatory variables, ${\alpha}$ is a column vector $p_1 \times 1$ of unknown parameters and $f$ is a nonlinear function of ${\alpha}$. Assume that $e_1, e_2, \ldots, e_n$ are independent, with $e_i \sim El(0, \sigma_i^2)$. Here $\sigma_i^2 = \sigma_i^2(\gamma) = h(\omega_i^\top \gamma)$, where $\gamma$ is a $p_2 \times 1$ vector of unknown parameters. Then
$$\label{DistHeteroNonLin}
{Y}_{i} \stackrel{ind}{\sim}El(f({x}_i, \alpha), \sigma_i^2),$$
which is a special case of (\[MainModel\]) with $\theta = (\alpha^{\top}, \gamma^{\top})^{\top}$, ${\mu}_{i} = f({x}_i, \alpha)$ and ${\Sigma}_i = \sigma_i^2$. Here $El$ stands for $El_1$. Notice that for the heteroscedastic linear model $f({x}_i, {\alpha}) = {x}_i^{\top} {\alpha}$.
The second-order bias vector $B_{\widehat{\theta}}(\theta)$ comes from (\[BIAS-vector\]), which depends on derivatives of $f({x}_i, \alpha)$ and $\sigma_i^2$ with respect to the parameter vector ${\theta}$. Also, it depends on the quantities $\psi_{i(2,1)}, \psi_{i(2,2)}, \psi_{i(3,2)}$, $\psi_{i(3,3)}$ (see Table 2) and $W_g(u_i)$ containing information about the adopted distribution.
Nonlinear mixed-effects model
-----------------------------
One of the most important examples is the nonlinear mixed-effects model introduced by Lange et al. (1989) and studied under the assumption of a Student $t$ distribution. Let $$Y_i = \mu_i({x}_i, {\alpha}) + Z_i b_i + u_i,$$ where $Y_i$ is the $q_i \times 1$ vector response, $\mu_i$ is a $q_i$-dimensional nonlinear function of ${\alpha}$, ${x}_i$ is a vector of nonstochastic covariates, ${Z}_i$ is a matrix of known constants, ${\alpha}$ is a $p_1 \times 1$ vector of unknown parameters and ${b}_i$ is an $r \times 1$ vector of unobserved random regression coefficients. Assume that, $$\label{Matrizbiui}
\left(\begin{array}{c} b_i \\ u_i \\ \end{array}\right) \sim El_{r+q_i}\left(\left[\begin{array}{c} 0 \\ 0 \\ \end{array}\right], \left[\begin{array}{cc} \Sigma_{b}(\gamma_1) & 0 \\ 0 & R_i(\gamma_2) \\ \end{array}\right]\right),$$ where $\gamma_1$ is a $p_2$-dimensional vector of unknown parameters and $\gamma_2$ is a $p_3 \times 1$ vector of unknown parameters. Furthermore, the vectors $(b_1, u_1)^{\top}$, $(b_2, u_2)^{\top}$, $\ldots$, $(b_n, u_n)^{\top}$ are independent. Therefore, the marginal distribution of the observed vector is
$$\label{general-mixed}
Y_i \sim El_{q_i}\left(\mu_i({x}_i, {\alpha}); {\Sigma}_i({Z}_i, {\gamma})\right),$$
where $\gamma = (\gamma_1^{\top}, \gamma_2^{\top})^{\top}$ and ${\Sigma}_i(Z_i,{\gamma}) =
{Z}_i \Sigma_b(\gamma_1){Z}_i^{\top} + R_i(\gamma_2)$. Equation (\[general-mixed\]) is a special case of (\[MainModel\]) with $\theta = (\alpha^{\top}, \gamma^{\top})^{\top}$, ${\mu}_{i} = \mu_i({x}_i, {\alpha})$ and ${\Sigma}_i = {\Sigma}_i({Z}_i, {\gamma})$. From (\[BIAS-vector\]) one can compute the bias vector $B_{\widehat{\theta}}(\theta)$.
Errors-in-variables model
-------------------------
Consider the model
$$\label{error-model}
{x}_{1i} = {\beta}_0 + {\beta}_1 {x}_{2i} + {q}_i,\quad i=1,\ldots, n,$$
where ${x}_{1i}$ is a $v\times 1$ latent response vector, ${x}_{2i}$ is a $m\times 1$ latent vector of covariates, ${\beta}_0$ is a $v\times 1$ vector of intercepts, ${\beta}_1$ is a $v \times m$ matrix of slopes, and ${q}_i$ is the equation error having a multivariate elliptical distribution with location vector zero and scale matrix ${\Sigma}_{{q}}$. The variables ${x}_{1i}$ and ${x}_{2i}$ are not directly observed, instead surrogate variables ${X}_{1i}$ and ${X}_{2i}$ are measured with the following additive structure: $$\label{error-model-O}
{X}_{1i} = {x}_{1i} + {\delta}_{{x}_{1i}} \quad \mbox{and} \quad {X}_{2i} ={x}_{2i} + {\delta}_{{x}_{2i}}.$$ The random quantities $x_{2i}$, $q_i$, ${\delta}_{{x}_{1i}}$ and ${\delta}_{{x}_{2i}}$ are assumed to follow an elliptical distribution given by $$\begin{pmatrix}
x_{2i}\\
q_i\\
{\delta}_{{x}_{1i}}\\
{\delta}_{{x}_{2i}}\\
\end{pmatrix}
\stackrel{ind}{\sim}El_{2v+2m}
\begin{bmatrix}
\begin{pmatrix}
\mu_{x_2}\\
0\\
{0}\\
{0}\\
\end{pmatrix},
\begin{pmatrix}
\Sigma_{x_2} & 0 & 0 & 0\\
0 & \Sigma_q & 0 & 0\\
0 & 0 & {\tau}_{{x}_{1i}} & {0} \\
0 & {0} & 0 & {\tau}_{{x}_{2i}}\\
\end{pmatrix}
\end{bmatrix},$$ where the matrices ${\tau}_{{x}_{1i}}$ and ${\tau}_{{x}_{2i}}$ are known for all $i=1, \ldots, n$. These “known" matrices may be obtained, for example, through an analytical treatment of the data collection mechanism, replications, machine precision, etc. (Kulathinal et al. (2002)).
Therefore, the observed vector $Y_i =(X_{1i}^{\top}, X_{2i}^{\top})^{\top}$ has marginal distribution given by $$\label{ModErros}
Y_i \stackrel{ind}{\sim}El_{v+m} (\mu(\theta), \Sigma_i(\theta))$$ with $$\mu(\theta) =
\begin{pmatrix}
\beta_0 + {\beta}_1 {\mu}_{{x_2}}\\
\mu_{x_2}
\end{pmatrix} \quad \mbox{and} \quad
\Sigma_i(\theta)=
\begin{pmatrix}
\beta_1\Sigma_{x_2}\beta_1^{\top}+ \Sigma_q +
\tau_{{x}_{1i}} & {\beta}_1 \Sigma_{{x_{2}}}\\
{\Sigma}_{x_{2}}{\beta}_1^{\top} &{\Sigma}_{x_{2}} + \tau_{{x}_{2i}}
\end{pmatrix},$$ where ${\theta} = ({\beta}_0^{\top}, {\textrm{vec}}({\beta}_1)^{\top},{\mu}_{x_2}^{\top}, \mbox{vech}({\Sigma}_{x_2})^{\top}, \mbox{vech}(\Sigma_q)^{\top})^{\top}$ and the “vech" operator transforms a symmetric matrix into a vector by stacking into columns its diagonal and upper-diagonal elements. The mean vector $\mu(\theta)$ and the variance-covariance matrix $\Sigma_i(\theta)$ of the observed variables have the matrix ${\beta}_1$ in common, i.e., they share $mv$ parameters. Kulathinal et al. (2002) study the linear univariate case ($v=1$, $m=1$).
Equation (\[ModErros\]) is a special case of (\[MainModel\]) with $q_i = v + m$, ${\mu}_{i} = \mu(\theta)$ and ${\Sigma}_i = {\Sigma}_i(\theta)$. In this case, a programming language or software that can perform operations on vectors and matrices, e.g. [Ox]{} (Doornik, 2013) and [R]{} (Ihaka and Gentleman, 1996), can be used to obtain the bias vector $B_{\widehat{\theta}}(\theta)$ from (\[BIAS-vector\]).
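For concreteness, a small sketch constructing $\mu(\theta)$ and $\Sigma_i(\theta)$ for a single observation of the errors-in-variables model is given below; the argument names mirror the notation above and the function itself is only an illustration.

```python
import numpy as np

def eiv_moments(beta0, beta1, mu_x2, Sigma_x2, Sigma_q, tau_x1, tau_x2):
    """
    Location vector and scale matrix of Y_i = (X_1i', X_2i')' in model
    (ModErros).  beta1 is v x m; tau_x1 and tau_x2 are the known
    measurement-error scale matrices of observation i.
    """
    mu = np.concatenate([beta0 + beta1 @ mu_x2, mu_x2])
    top_left = beta1 @ Sigma_x2 @ beta1.T + Sigma_q + tau_x1
    top_right = beta1 @ Sigma_x2
    Sigma = np.block([[top_left, top_right],
                      [top_right.T, Sigma_x2 + tau_x2]])
    return mu, Sigma
```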
Log-symmetric regression models
-------------------------------
Let $T$ be a continuous positive random variable with probability density function
$$\label{ModelLogSym1}
f_T(t; \eta, \phi, g) = \frac{1}{\sqrt{\phi} t} g\left(\log^2 \left[\left(\frac{t}{\eta}\right)^{\frac{1}{\sqrt{\phi}}}\right]\right), \ \eta > 0, \ \phi > 0,$$
where $g$ is the density generating function of a univariate elliptical distribution, and we write $T \sim LS(\eta, \phi, g)$. Vanegas and Paula (2014) called the class of distributions in (\[ModelLogSym1\]) the log-symmetric class of distributions. It includes the log-normal, log-Student $t$ and log-power-exponential distributions, among many others, as special cases. It is easy to verify that $\log(T)$ has a univariate elliptical distribution (i.e., a symmetric distribution) with location parameter $\mu = \log(\eta)$ and scale parameter $\phi$. The parameter $\eta$ is the median of $T$, and $\phi$ can be interpreted as a skewness or relative dispersion parameter.
Vanegas and Paula (2015) defined and studied semi-parametric regression models for a set $T_1, T_2, \ldots, T_n$ with $T_i \sim LS(\eta_i, \phi_i, g)$, where $\eta_i > 0$ and $\phi_i > 0$ follow semi-parametric regression structures. Here we assume parametric specifications for $\eta_i$ and $\phi_i$, namely $\eta_i = \eta_i(x_i, \alpha)$ and $\phi_i = \phi_i(\omega_i, \gamma)$.
Hence,
$$\label{ModelLogSym2}
Y_i = \log(T_i) \stackrel{ind}{\sim}El\left(\mu_i(x_i, \alpha), \phi_i(\omega_i, \gamma)\right),$$
where $\mu_i(x_i, \alpha) = \log(\eta(x_i, \alpha))$. Therefore, (\[ModelLogSym2\]) is a special case of the general elliptical model (\[MainModel\]), and formula (\[BIAS-vector\]) applies.
Simulation results {#simulation}
==================
In this section, we shall present the results of Monte Carlo simulation experiments in which we evaluate the finite-sample performances of the original MLEs and their bias-corrected and bias-reduced versions. The simulations are based on the univariate nonlinear model without random effects (Section 4.1) and the errors-in-variables model presented in Section 4.3, when $Y_i$ follows a normal distribution, a Student $t$ distribution with $\nu$ degrees of freedom, or a power exponential distribution with shape parameter $\lambda$. For all the simulations, the number of Monte Carlo replications is 10,000 (ten thousand) and they have been performed using the `Ox` matrix programming language (Doornik, 2013).
First consider the model described in (\[general-mixed\]) with $q_i=1$, $Z_i = 0$, $\Sigma_i = \sigma^2$ and $$\label{nonlinear-model}
\mu_i({\alpha}) = \mu_i({x}_i, {{\alpha}}) = \alpha_1 + \frac{\alpha_2}{1 + \alpha_3 x_i^{\alpha_4}},\quad i = 1,\ldots, n.$$
Here the unknown parameter vector is ${\theta} = (\alpha_1, \alpha_2, \alpha_3, \alpha_4, \sigma^2)^\top$. The values of $x_i$ were obtained as random draws from the uniform distribution $U(0,100)$. The sample sizes considered are $n = 10, 20, 30, 40$ and $50$. The parameter values are $\alpha_1 = 50$, $\alpha_2 = 500$, $\alpha_3 = 0.50$, $\alpha_4 = 2$ and $\sigma^2 = 200$. For the Student $t$ distribution, we fixed the degrees of freedom at $\nu = 4$, and for the power exponential model the shape parameter is fixed at $\lambda = 0.8$.
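The sketch below reproduces one Monte Carlo replicate of this design for the Student $t$ case, fitting the MLE by direct numerical maximization of the log-likelihood; it is a simplified illustration (a generic optimizer started at the true parameter values) rather than the `Ox` code actually used for the tables.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

rng = np.random.default_rng(1)

def mu(alpha, x):
    a1, a2, a3, a4 = alpha
    return a1 + a2 / (1.0 + a3 * x ** a4)

def neg_loglik(theta, x, y, nu=4.0):
    """Negative log-likelihood of the univariate Student t nonlinear model."""
    alpha, sigma2 = theta[:4], theta[4]
    if sigma2 <= 0 or theta[2] <= 0:
        return np.inf
    z = (y - mu(alpha, x)) / np.sqrt(sigma2)
    return -np.sum(student_t.logpdf(z, df=nu) - 0.5 * np.log(sigma2))

# one replicate with n = 20 and the parameter values given above
n, nu = 20, 4.0
true = np.array([50.0, 500.0, 0.5, 2.0, 200.0])
x = rng.uniform(0.0, 100.0, size=n)
y = mu(true[:4], x) + np.sqrt(true[4]) * rng.standard_t(df=nu, size=n)
fit = minimize(neg_loglik, x0=true, args=(x, y, nu), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
theta_hat = fit.x   # repeating this 10,000 times and averaging theta_hat - true
                    # estimates the biases reported in Table 4
```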
Tables 3-4 present the bias, and the root mean squared errors ($\sqrt{MSE}$) of the maximum likelihood estimates, the bias-corrected estimates and the bias-reduced estimates for the nonlinear model with normal and Student $t$ distributed errors, respectively. To save space, the corresponding results for the power exponential model are not shown.[^1] We note that the bias-corrected estimates and the bias-reduced estimates are less biased than the original MLE for all the sample sizes considered. For instance, when $n=20$ and the errors follow a Student $t$ distribution (see Table 4) the estimated biases of ${\widehat \sigma}^2$ are $-41.24$ (MLE), $-12.30$ (bias-corrected) and $-4.55$ (bias-reduced). For the normal case with $n = 10$ (see Table 3), the estimated biases of ${\widehat \alpha}_2$ are $2.16$ (MLE), $0.70$ (bias-corrected) and $-0.27$ (bias-reduced). We also observe that the bias-reduced estimates are less biased than the bias-corrected estimates in most cases. As $n$ increases, the bias and the root mean squared error of all the estimators decrease, as expected. Additionally, we note that the MLE of ${\alpha}_2$ has $\sqrt{MSE}$ larger than those of the modified versions. For the estimation of $\sigma^2$, $\sqrt{MSE}$ is smaller for the original MLE. In other cases, we note that the estimators have similar root mean squared errors.
\[tab3\]
[rrrrrrrrrrrr]{} & && && &&\
$n$&${\theta}$ && Bias &$\sqrt{MSE}$ && Bias &$\sqrt{MSE}$ && Bias &$\sqrt{MSE}$\
& $\alpha_1$ && $-0.29$ & $6.69$ && $-0.13$ & $6.67$ && $-0.01$ & $6.67$\
& $\alpha_2$ && $2.16$ & $20.07$ && $0.70$ & $19.40$ && $-0.27$ & $19.06$\
10 & $\alpha_3$ && $0.01$ & $0.13$ && $0.00$ & $0.12$ && $0.00$ & $0.12$\
& $\alpha_4$ && $0.03$ & $0.30$ && $0.01$ & $0.29$ && $-0.00$ & $0.29$\
& $\sigma^2$ && $-80.05$ & $106.44$ && $-32.06$ & $103.32$ && $9.09$ & $128.72$\
& $\alpha_1$ && $-0.08$ & $4.07$ && $-0.01$ & $4.07$ && $0.01$ & $4.07$\
& $\alpha_2$ && $0.66$ & $17.94$ && $-0.08$ & $17.84$ && $-0.27$ & $17.82$\
20 & $\alpha_3$ && $0.00$ & $0.09$ && $0.00$ & $0.09$ && $-0.00$ & $0.09$\
& $\alpha_4$ && $0.02$ & $0.21$ && $0.01$ & $0.20$ && $0.00$ & $0.20$\
& $\sigma^2$ && $-40.07$ & $69.73$ && $-8.09$ & $68.95$ && $0.86$ & $72.02$\
& $\alpha_1$ && $-0.10$ & $3.11$ && $-0.04$ & $3.10$ && $-0.02$ & $3.10$\
& $\alpha_2$ && $0.71$ & $17.24$ && $-0.05$ & $17.15$ && $-0.18$ & $17.13$\
30 & $\alpha_3$ && $0.00$ & $0.09$ && $-0.00$ & $0.09$ && $-0.00$ & $0.09$\
& $\alpha_4$ && $0.02$ & $0.20$ && $0.00$ & $0.19$ && $0.00$ & $0.19$\
& $\sigma^2$ && $-26.41$ & $55.26$ && $-3.26$ & $55.11$ && $0.82$ & $56.32$\
& $\alpha_1$ && $-0.08$ & $2.69$ && $-0.02$ & $2.69$ && $-0.01$ & $2.69$\
& $\alpha_2$ && $0.83$ & $16.80$ && $0.09$ & $16.70$ && $0.01$ & $16.69$\
40 & $\alpha_3$ && $0.00$ & $0.09$ && $0.00$ & $0.09$ && $0.00$ & $0.09$\
& $\alpha_4$ && $0.02$ & $0.19$ && $0.00$ & $0.18$ && $-0.00$ & $0.18$\
& $\sigma^2$ && $-20.04$ & $47.26$ && $-2.04$ & $47.13$ && $0.33$ & $47.74$\
& $\alpha_1$ && $-0.08$ & $2.39$ && $-0.03$ & $2.38$ && $-0.02$ & $2.38$\
& $\alpha_2$ && $1.07$ & $14.25$ && $0.30$ & $14.12$ && $0.23$ & $14.11$\
50 & $\alpha_3$ && $0.00$ & $0.08$ && $0.00$ & $0.08$ && $0.00$ & $0.08$\
& $\alpha_4$ && $0.01$ & $0.19$ && $0.00$ & $0.18$ && $-0.00$ & $0.18$\
& $\sigma^2$ && $-15.93$ & $41.41$ && $-1.21$ & $41.30$ && $0.36$ & $41.67$\
\[tab4\]
[rrrrrrrrrrrr]{} & && && &&\
$n$&${\theta}$ && Bias &$\sqrt{MSE}$ && Bias &$\sqrt{MSE}$ && Bias &$\sqrt{MSE}$\
& $\alpha_1$ && $-0.51$ & $8.66$ && $-0.31$ & $8.63$ && $-0.20$ & $8.56$\
& $\alpha_2$ && $3.34$ & $28.47$ && $1.39$ & $27.34$ && $2.05$ & $27.67$\
10 & $\alpha_3$ && $0.01$ & $0.17$ && $0.00$ & $0.16$ && $0.01$ & $0.16$\
& $\alpha_4$ && $0.06$ & $0.42$ && $0.03$ & $0.39$ && $-0.01$ & $0.38$\
& $\sigma^2$ && $-93.18$ & $127.60$ && $-54.24$ & $130.73$ && $-17.40$ & $170.35$\
& $\alpha_1$ && $-0.17$ & $5.03$ && $-0.07$ & $5.02$ && $-0.04$ & $5.01$\
& $\alpha_2$ && $2.01$ & $25.64$ && $0.91$ & $25.11$ && $1.29$ & $24.98$\
20 & $\alpha_3$ && $0.01$ & $0.14$ && $0.01$ & $0.14$ && $0.01$ & $0.13$\
& $\alpha_4$ && $0.04$ & $0.29$ && $0.01$ & $0.28$ && $0.00$ & $0.27$\
& $\sigma^2$ && $-41.24$ & $85.51$ && $-12.30$ & $89.41$ && $-4.55$ & $93.08$\
& $\alpha_1$ && $-0.10$ & $3.81$ && $-0.01$ & $3.80$ && $0.01$ & $3.82$\
& $\alpha_2$ && $2.25$ & $25.75$ && $1.13$ & $25.34$ && $1.61$ & $25.41$\
30 & $\alpha_3$ && $0.01$ & $0.14$ && $0.01$ & $0.14$ && $0.01$ & $0.14$\
& $\alpha_4$ && $0.04$ & $0.29$ && $0.01$ & $0.27$ && $0.00$ & $0.26$\
& $\sigma^2$ && $-27.15$ & $70.02$ && $-6.15$ & $72.64$ && $-1.78$ & $107.53$\
& $\alpha_1$ && $-0.10$ & $3.27$ && $-0.02$ & $3.26$ && $-0.01$ & $3.26$\
& $\alpha_2$ && $1.82$ & $24.94$ && $0.75$ & $24.67$ && $1.18$ & $24.78$\
40 & $\alpha_3$ && $0.01$ & $0.12$ && $0.00$ & $0.12$ && $0.01$ & $0.12$\
& $\alpha_4$ && $0.03$ & $0.26$ && $0.01$ & $0.25$ && $0.00$ & $0.25$\
& $\sigma^2$ && $-20.38$ & $60.43$ && $-4.01$ & $62.21$ && $-1.82$ & $62.98$\
& $\alpha_1$ && $-0.13$ & $2.86$ && $-0.05$ & $2.85$ && $-0.03$ & $2.85$\
& $\alpha_2$ && $1.48$ & $18.86$ && $0.38$ & $18.59$ && $0.24$ & $18.46$\
50 & $\alpha_3$ && $0.01$ & $0.11$ && $0.00$ & $0.11$ && $0.00$ & $0.11$\
& $\alpha_4$ && $0.02$ & $0.24$ && $0.00$ & $0.23$ && $0.00$ & $0.23$\
& $\sigma^2$ && $-15.40$ & $53.99$ && $-1.94$ & $55.56$ && $-0.43$ & $56.11$\
We now consider the errors-in-variables model described in (\[error-model-O\]). The sample sizes considered are $n = 15, 25, 35$ and $50$. The parameter values are ${\beta}_0 = 0.70 \ {{\boldsymbol}1}_{v \times 1}$, ${\beta}_1 = 0.40 \ {{\boldsymbol}1}_{v \times m}$, ${\mu}_{x_2} = 70 \ {{\boldsymbol}1}_{m \times 1}$, ${\Sigma}_{q} = 40 \ {{\boldsymbol}I}_{v \times v}$ and ${\Sigma}_{x_2} = 250 \ {{\boldsymbol}I}_{m \times m}$. Here, ${{\boldsymbol}1}_{r \times s}$ is an $r \times s$ matrix of ones and ${{\boldsymbol}I}_{r \times r}$ is the $r \times r$ identity matrix. For the Student $t$ distribution, we fixed the degrees of freedom at $\nu = 4$ and, for the power exponential model, the shape parameter was fixed at $\lambda = 0.7$. We consider $v \in \{1, 2\}$ and $m = 1$.
In Tables 5-6, we present the MLE, the bias-corrected estimates, the bias-reduced estimates, and corresponding estimated root mean squared errors for the Student $t$ and power exponential distributions, for the errors-in-variables model. The results for the normal distribution are not shown to save space. We observe that, in absolute value, the biases of the bias-corrected estimates and bias-reduced estimates are smaller than those of the original MLE for different sample sizes. Furthermore, the bias-reduced estimates are less biased than the bias-corrected estimates in most cases. This can be seen e.g. in Table 6 when $v = 1$, $m = 1$, $Y_i$ follows a power exponential distribution and $n=15$. In this case, the bias of the MLE, the bias-corrected estimate and the bias-reduced estimate of ${\Sigma}_q$ are $-4.92$, $-0.66$ and $-0.17$, respectively. When $Y_i$ follows a Student $t$ distribution, $n = 15$, $v = 1$ and $m = 1$ we observe the following biases of the estimates of ${\Sigma}_{x_2}$: $5.18$ (MLE), $2.91$ (bias-corrected) and $2.66$ (bias-reduced); see Table 5. We note that the root mean squared errors decrease with $n$.
For the sake of saving space, the simulation results for the normal, Student $t$ and power exponential errors-in-variable models with $v=2$ and $m=1$ are not presented. Overall, our findings are similar to those reached for the other models.
\[tab7\]
[rrrrrrrrrrrr]{} & && && &&\
$n$&${\theta}$&& Bias &$\sqrt{MSE}$ && Bias &$\sqrt{MSE}$ && Bias &$\sqrt{MSE}$\
& $\beta_0$ && $-0.00$ & $9.90$ && $0.01$ & $9.90$ && $0.01$ & $9.89$\
& $\beta_1$ && $0.00$ & $0.14$ && $0.00$ & $0.14$ && $0.00$ & $0.14$\
15 & $\mu_{x_2}$ && $0.05$ & $4.82$ && $0.05$ & $4.82$ && $0.05$ & $4.82$\
& $\Sigma_{x_2}$ && $5.18$ & $129.34$ && $2.91$ & $128.12$ && $2.66$ & $127.86$\
& $\Sigma_q$ && $-3.64$ & $19.52$ && $-0.68$ & $20.72$ && $-0.42$ & $20.85$\
& $\beta_0$ && $-0.02$ & $7.14$ && $-0.01$ & $7.14$ && $-0.01$ & $7.14$\
& $\beta_1$ && $0.00$ & $0.10$ && $0.00$ & $0.10$ && $0.00$ & $0.10$\
25 & $\mu_{x_2}$ && $0.03$ & $3.69$ && $0.03$ & $3.69$ && $0.03$ & $3.69$\
& $\Sigma_{x_2}$ && $3.61$ & $97.32$ && $2.25$ & $96.76$ && $2.17$ & $96.69$\
& $\Sigma_q$ && $-2.31$ & $14.87$ && $-0.47$ & $15.41$ && $-0.38$ & $15.44$\
& $\beta_0$ && $-0.02$ & $5.93$ && $-0.02$ & $5.93$ && $-0.02$ & $5.93$\
& $\beta_1$ && $0.00$ & $0.08$ && $0.00$ & $0.08$ && $0.00$ & $0.08$\
35 & $\mu_{x_2}$ && $-0.01$ & $3.12$ && $-0.01$ & $3.12$ && $-0.01$ & $3.12$\
& $\Sigma_{x_2}$ && $1.94$ & $79.78$ && $0.98$ & $79.45$ && $0.94$ & $79.44$\
& $\Sigma_q$ && $-1.65$ & $12.63$ && $-0.31$ & $12.96$ && $-0.26$ & $12.97$\
& $\beta_0$ && $-0.01$ & $4.92$ && $-0.01$ & $4.92$ && $-0.01$ & $4.92$\
& $\beta_1$ && $0.00$ & $0.07$ && $0.00$ & $0.07$ && $0.00$ & $0.07$\
50 & $\mu_{x_2}$ && $0.01$ & $2.59$ && $0.01$ & $2.59$ && $0.01$ & $2.59$\
& $\Sigma_{x_2}$ && $1.04$ & $65.50$ && $0.37$ & $65.33$ && $0.36$ & $65.33$\
& $\Sigma_q$ && $-1.18$ & $10.53$ && $-0.24$ & $10.78$ && $-0.21$ & $10.79$\
\[tab8\]
[rrrrrrrrrrrr]{} & && && &&\
$n$&${\theta}$ && Bias &$\sqrt{MSE}$ && Bias &$\sqrt{MSE}$ && Bias &$\sqrt{MSE}$\
& $\beta_0$ && $-0.12$ & $9.25$ && $-0.11$ & $9.25$ && $-0.11$ & $9.24$\
& $\beta_1$ && $0.00$ & $0.13$ && $0.00$ & $0.13$ && $0.00$ & $0.13$\
15 & $\mu_{x_2}$ && $-0.02$ & $6.47$ && $-0.02$ & $6.47$ && $-0.02$ & $6.47$\
& $\Sigma_{x_2}$ && $-9.27$ & $103.32$ && $0.52$ & $107.51$ && $0.82$ & $107.64$\
& $\Sigma_q$ && $-4.92$ & $15.67$ && $-0.66$ & $17.55$ && $-0.17$ & $17.76$\
& $\beta_0$ && $0.02$ & $6.83$ && $0.03$ & $6.83$ && $0.03$ & $6.83$\
& $\beta_1$ && $0.00$ & $0.09$ && $-0.00$ & $0.09$ && $-0.00$ & $0.09$\
25 & $\mu_{x_2}$ && $-0.02$ & $4.98$ && $-0.02$ & $4.98$ && $-0.02$ & $4.98$\
& $\Sigma_{x_2}$ && $-5.60$ & $80.20$ && $0.36$ & $81.95$ && $0.47$ & $81.99$\
& $\Sigma_q$ && $-3.04$ & $12.94$ && $-0.36$ & $13.49$ && $-0.18$ & $13.54$\
& $\beta_0$ && $0.01$ & $5.59$ && $0.02$ & $5.58$ && $0.02$ & $5.58$\
& $\beta_1$ && $-0.00$ & $0.08$ && $-0.00$ & $0.08$ && $-0.00$ & $0.08$\
35 & $\mu_{x_2}$ && $-0.04$ & $4.21$ && $-0.04$ & $4.21$ && $-0.04$ & $4.21$\
& $\Sigma_{x_2}$ && $-3.53$ & $68.01$ && $0.77$ & $69.10$ && $0.82$ & $69.12$\
& $\Sigma_q$ && $-2.14$ & $11.11$ && $-0.18$ & $11.46$ && $-0.08$ & $11.49$\
& $\beta_0$ && $0.03$ & $4.67$ && $0.03$ & $4.67$ && $0.03$ & $4.67$\
& $\beta_1$ && $-0.00$ & $0.06$ && $-0.00$ & $0.06$ && $-0.00$ & $0.06$\
50 & $\mu_{x_2}$ && $-0.03$ & $3.52$ && $-0.03$ & $3.52$ && $-0.03$ & $3.52$\
& $\Sigma_{x_2}$ && $-2.83$ & $56.89$ && $0.18$ & $57.51$ && $0.21$ & $57.52$\
& $\Sigma_q$ && $-1.51$ & $9.21$ && $-0.12$ & $9.41$ && $-0.07$ & $9.42$\
Applications
============
Radioimmunoassay data
---------------------
Tiede and Pagano (1979) present a dataset, referred here as the radioimmunoassay data, obtained from the Nuclear Medicine Department at the Veteran’s Administration Hospital, Buffalo, New York. Lemonte and Patriota (2011) analyzed the data to illustrate the applicability of the elliptical models with general parameterization. Following Tiede and Pagano (1979) we shall consider the nonlinear regression model (\[nonlinear-model\]), with $n = 14.$ The response variable is the observed radioactivity (count in thousands), the covariate corresponds to the thyrotropin dose (measured in micro-international units per milliliter) and the errors follow a normal distribution or a Student $t$ distribution with $\nu = 4$ degrees of freedom. We assume that the scale parameter is unknown for both models. In Table 7 we present the maximum likelihood estimates, the bias-corrected estimates, the bias-reduced estimates, and the corresponding estimated standard errors are given in parentheses. We note that all the estimates present smaller standard errors under the Student $t$ model than under the normal model (Table 7).
For all parameters, the original MLEs are very close to the bias-corrected MLE and the bias-reduced MLE when the Student $t$ model is used. However, under the normal model, significant differences in the estimates of $\alpha_1$ are noted. The estimates for $\alpha_1$ are $0.44$ (MLE), $0.65$ (bias-corrected MLE) and $1.03$ (bias-reduced MLE).
[cccc]{}\
${\theta}$ & MLE & Bias-corrected MLE & Bias-reduced MLE\
$\alpha_1$ & $0.44$ $(0.80)$ & $0.65$ $(0.99)$ & $1.03$ $(1.06)$\
$\alpha_2$ & $7.55$ $(0.95)$ & $7.34$ $(1.16)$ & $6.91$ $(1.25)$\
$\alpha_3$ & $0.13$ $(0.06)$ & $0.13$ $(0.06)$ & $0.13$ $(0.08)$\
$\alpha_4$ & $0.96$ $(0.24)$ & $0.93$ $(0.28)$ & $0.95$ $(0.34)$\
$\sigma^2$ & $0.31$ $(0.12)$ & $0.40$ $(0.15)$ & $0.50$ $(0.19)$\
\
${\theta}$ & MLE & Bias-corrected MLE & Bias-reduced MLE\
$\alpha_1$ & $0.90$ $(0.12)$ & $0.91$ $(0.13)$ & $0.90$ $(0.15)$\
$\alpha_2$ & $7.09$ $(0.17)$ & $7.08$ $(0.19)$ & $7.07$ $(0.22)$\
$\alpha_3$ & $0.09$ $(0.01)$ & $0.09$ $(0.01)$ & $0.09$ $(0.02)$\
$\alpha_4$ & $1.31$ $(0.08)$ & $1.31$ $(0.09)$ & $1.29$ $(0.10)$\
$\sigma^2$ & $0.02$ $(0.01)$ & $0.02$ $(0.01)$ & $0.03$ $(0.01)$\
Fluorescent lamp data
---------------------
Rosillo and Chivelet (2009) present a dataset referred here as the fluorescent lamp data. The authors analyze the lifetime of fluorescent lamps in photovoltaic systems using an analytical model whose goal is to assist in improving ballast design and extending the lifetime of fluorescent lamps. Following Rosillo and Chivelet (2009) we shall consider the nonlinear regression model (\[general-mixed\]) with $q_i=1$, $Z_i = 0$, $\Sigma_i = \sigma^2$, ${\theta} = \left({\alpha}^{\top}, \sigma^2\right)^{\top} = \left(\alpha_0, \alpha_1, \alpha_2, \alpha_3, \sigma^2\right)^{\top}$ and $$\label{nonlinear-model2}
\mu_i({\alpha}) = \frac{1}{1 + \alpha_0 + \alpha_1 x_{i1} + \alpha_2 x_{i2} + \alpha_3 x_{i2}^2}, \quad i = 1,\ldots, 14,$$ where the response variable is the observed lifetime/advertised lifetime ($Y$), the covariates correspond to a measure of gas discharge ($x_1$) and the observed voltage/advertised voltage (measure of performance of lamp and ballast - $x_2$) and the errors are assumed to follow a normal distribution. Here we also assume a Student $t$ distribution with $\nu = 4$ degrees of freedom for the errors.
In Table 8 we present the maximum likelihood estimates, the bias-corrected estimates, the bias-reduced estimates, and the corresponding estimated standard errors. As in the previous application, the estimates present smaller standard errors under the Student $t$ model than under the normal model.
The original MLEs for $\alpha_0$ and $\alpha_3$ are bigger than the corresponding corrected and reduced versions by approximately one unit (normal and Student $t$ models). The largest differences are among the estimates of $\alpha_2$; for example, for the normal model we have $-56.33$ (MLE), $-54.45$ (bias-corrected MLE) and $-53.86$ (bias-reduced MLE).
We now use the Akaike Information Criterion ($AIC$, Akaike, 1974), the Schwarz Bayesian criterion ($BIC$, Schwarz, 1978) and the finite sample $AIC$ ($AIC_C$, Hurvich and Tsai, 1989) to evaluate the quality of the normal and Student $t$ fits. For the normal model we have $AIC = -9.98$, $BIC = -6.79$ and $AIC_C = -2.48$. For the $t$ model we have $AIC = -11.24$, $BIC = -8.04$ and $AIC_C = -3.74$. Therefore, the $t$ model presents the best fit for this dataset, since the values of the $AIC$, $BIC$ and $AIC_C$ are smaller.
Let $$\widehat{D} = \sum_{j=1}^{14} (\widehat{Y} - \widehat{Y}_{(j)})^{\top} (\widehat{Y} - \widehat{Y}_{(j)}),$$ where $\widehat{Y}$ and $\widehat{Y}_{(j)}$ are the vectors of predicted values computed from the model fit for the whole sample and the sample without the $j$th observation, respectively. The quantity $\widehat{D}$ measures the total effect of deleting one observation on the predicted values. For a fixed sample size, it tends to be high if a single observation can highly influence the prediction of new observations. We have $\widehat{D} = 0.119, 0.120$, and $0.123$ (normal model) and $\widehat{D} = 0.101, 0.100$, and $0.095$ (Student $t$ model) when using the MLE, the bias-corrected estimate, and the bias-reduced estimate, respectively. Notice that $\widehat{D}$ is smaller for the Student $t$ model regardless of the estimate used. This is evidence that the Student $t$ model is more suitable than the normal model for predicting the lifetime of fluorescent lamps in this study.
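A direct way to compute $\widehat{D}$ is to refit the model once per deleted observation; the sketch below assumes generic `fit` and `predict` callables standing in for whichever estimator (ML, bias-corrected or bias-reduced) is being assessed.

```python
import numpy as np

def total_deletion_effect(x, y, fit, predict):
    """
    D_hat = sum_j || yhat - yhat_(j) ||^2, where yhat_(j) collects the
    predictions for the whole sample from the model refitted without the
    j-th observation.  `fit(x, y)` returns parameter estimates and
    `predict(estimates, x)` maps them to fitted values; both are placeholders.
    """
    n = len(y)
    yhat = predict(fit(x, y), x)
    D = 0.0
    for j in range(n):
        keep = np.arange(n) != j
        yhat_j = predict(fit(x[keep], y[keep]), x)
        D += np.sum((yhat - yhat_j) ** 2)
    return D
```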
\[tab13\]
[cccc]{}\
${\theta}$ & MLE & Bias-corrected MLE & Bias-reduced MLE\
$\alpha_0$ &$29.49$ $(5.21)$ &$28.54$ $( 5.66)$ &$28.25$ $( 5.84)$\
$\alpha_1$ & $9.99$ $(4.69)$ & $9.68$ $( 5.21)$ & $9.62$ $( 5.42)$\
$\alpha_2$ & $-56.33$ $(10.10)$ & $-54.45$ $(10.93)$ & $-53.86$ $(11.26)$\
$\alpha_3$ &$26.53$ $(4.89)$ &$25.61$ $( 5.28)$ &$25.31$ $( 5.43)$\
$\sigma^2$ & $1.40 \times 10^{-2}$ $(5.00 \times 10^{-3})$ & $1.80 \times 10^{-2}$ $(7.00 \times 10^{-3})$ & $1.90 \times 10^{-2}$ $(7.00 \times 10^{-3})$\
\
${\theta}$ & MLE & Bias-corrected MLE & Bias-reduced MLE\
$\alpha_0$ &$30.66$ $(4.64)$ &$29.94$ $(5.05)$ & $29.85$ $( 5.20)$\
$\alpha_1$ & $8.48$ $(4.00)$ & $8.24$ $(4.42)$ & $8.46$ $( 4.57)$\
$\alpha_2$ & $-58.20$ $(8.94)$ & $-56.79$ $(9.71)$ & $-56.67$ $(10.00)$\
$\alpha_3$ &$27.27$ $(4.30)$ &$26.58$ $(4.66)$ & $26.55$ $( 4.80)$\
$\sigma^2$ & $7.30 \times 10^{-3}$ $(3.60 \times 10^{-3})$ & $9.20 \times 10^{-3}$ $(4.60 \times 10^{-3})$ & $9.80 \times 10^{-3}$ $(4.90 \times 10^{-3})$\
WHO MONICA data
---------------
We now turn to a dataset from the WHO MONICA Project that was considered in Kulathinal et al. (2002). This dataset was first analyzed under normal distributions for the marginals of the random errors (Kulathinal et al. 2002; Patriota et al. 2009a). Thereafter, it was studied under a scale mixture of normal distributions for the marginals of the random errors (Cao et al., 2012). The approach used in the present paper is different from the others because here we consider a joint elliptical distribution for the vector of random errors. The other authors assumed that the distributions of the errors were independent, while we assume that they are uncorrelated but not independent. For our proposal, the errors will only be independent under normality.
The dataset considered here corresponds to the data collected for men ($n=38$). As described in Kulathinal et al. (2002), the data are trends of the annual change in the event rate $(y)$ and trends of the risk scores ($x$). The risk score is defined as a linear combination of smoking status, systolic blood pressure, body mass index, and total cholesterol level. A follow-up study using proportional hazards models was employed to derive its coefficients, and provides the observed risk score and its estimated variance. Therefore, the observed response variable, $X_1$, is the average annual change in event rate (%) and the observed covariate, $X_2$, is the observed risk score (%). We use the heteroscedastic model (\[error-model-O\]) with $v=m=1$ and zero covariance between the errors $\delta_{x_{1i}}$ and $\delta_{x_{2i}}$.
Table 9 gives the MLE and the bias-corrected/reduced estimates (standard errors are given in parentheses). We considered the full sample ($n=38$) and randomly chosen sub-samples of $n = 10, 20$ and $30$ observations.
The original MLEs for $\beta_0$, $\beta_1$ and $\mu_{x_2}$ are practically the same as their bias-corrected and bias-reduced versions for all sample sizes. The largest differences are among the estimates of ${\Sigma}_q$; for example, for $n = 10$ we have $6.17$ (MLE), $8.14$ (bias-corrected MLE) and $8.81$ (bias-reduced MLE). In general, as expected, larger sample sizes correspond to smaller standard errors.
\[tab14\]
[ccccc]{} $n$ & ${\theta}$ & MLE & Bias-corrected MLE & Bias-reduced MLE\
& $\beta_0$ & $-2.58$ $(1.34)$ & $-2.58$ $(1.44)$ & $-2.45$ $(1.47)$\
& $\beta_1$ & $0.05$ $(0.60)$ & $0.05$ $(0.63)$ & $0.07$ $(0.64)$\
$10$ & $\mu_{x_2}$ & $-1.54$ $(0.58)$ & $-1.54$ $(0.61)$ & $-1.53$ $(0.62)$\
& $\Sigma_{x_2}$ & $2.89$ $(1.50)$ & $3.22$ $(1.65)$ & $3.29$ $(1.69)$\
& $\Sigma_q$ & $6.17$ $(3.99)$ & $8.14$ $(4.93)$ & $8.81$ $(5.25)$\
& $\beta_0$ & $-2.68$ $(0.65)$ & $-2.69$ $(0.68)$ & $-2.69$ $(0.69)$\
& $\beta_1$ & $0.48$ $(0.30)$ & $0.47$ $(0.31)$ & $0.43$ $(0.31)$\
$20$ & $\mu_{x_2}$ & $-1.29$ $(0.44)$ & $-1.29$ $(0.46)$ & $-1.29$ $(0.46)$\
& $\Sigma_{x_2}$ & $3.53$ $(1.25)$ & $3.73$ $(1.31)$ & $3.76$ $(1.32)$\
& $\Sigma_q$ & $3.00$ $(1.66)$ & $3.59$ $(1.87)$ & $3.73$ $(1.92)$\
& $\beta_0$ & $-2.22$ $(0.54)$ & $-2.22$ $(0.55)$ & $-2.20$ $(0.55)$\
& $\beta_1$ & $0.43$ $(0.24)$ & $0.43$ $(0.25)$ & $0.42$ $(0.25)$\
$30$ & $\mu_{x_2}$ & $-0.77$ $(0.42)$ & $-0.77$ $(0.42)$ & $-0.77$ $(0.42)$\
& $\Sigma_{x_2}$ & $4.71$ $(1.34)$ & $4.88$ $(1.39)$ & $4.89$ $(1.39)$\
& $\Sigma_q$ & $4.36$ $(1.86)$ & $4.89$ $(2.01)$ & $4.88$ $(2.01)$\
& $\beta_0$ & $-2.08$ $(0.53)$ & $-2.08$ $(0.54)$ & $-2.08$ $(0.54)$\
& $\beta_1$ & $0.47$ $(0.23)$ & $0.47$ $(0.24)$ & $0.46$ $(0.24)$\
$38$ & $\mu_{x_2}$ & $-1.09$ $(0.36)$ & $-1.09$ $(0.36)$ & $-1.09$ $(0.36)$\
& $\Sigma_{x_2}$ & $4.32$ $(1.10)$ & $4.44$ $(1.13)$ & $4.45$ $(1.13)$\
& $\Sigma_q$ & $4.89$ $(1.78)$ & $5.34$ $(1.89)$ & $5.30$ $(1.88)$\
Concluding remarks {#conclusion}
==================
We studied bias correction and bias reduction for a multivariate elliptical model with a general parameterization that unifies several important models (e.g., linear and nonlinear regression models, linear and nonlinear mixed models, errors-in-variables models, among many others). We extended the work of Patriota and Lemonte (2009) to the elliptical class of distributions defined in Lemonte and Patriota (2011), and expressed the second-order bias vector of the maximum likelihood estimates as a weighted least-squares regression.
As can be seen in our simulation results, bias-corrected and bias-reduced estimators form the basis of asymptotic inferential procedures that have better performance than the corresponding procedures based on the original estimator. We further note that, in general, the bias-reduced estimates are less biased than the bias-corrected estimates. Computer packages that perform simple operations on matrices and vectors can be used to compute bias-corrected and bias-reduced estimates.
Appendix {#appendix .unnumbered}
========
[**Lemma A.1.**]{} Let $z_i \sim El_{q_i}(0, \Sigma_i, g)$, and $c_{i}$ and $\psi_{i(2,1)}$ as previously defined. Then, $$\begin{split}
&E\bigl(v_i{z}_i\bigr) = 0, \\
&E\bigl(v_i^2{z}_i{z}_i^{\top}\bigr) = \dfrac{4\psi_{i(2,1)}}{q_{i}}{\Sigma}_i, \\
&E\bigl(v_i^2 {\textrm{vec}}({z}_i{z}_i^{\top}){z}_i^{\top}\bigr) = 0, \\
&E\bigl(v_i^2{\textrm{vec}}({z}_i{z}_i^{\top}){\textrm{vec}}({z}_i{z}_i^{\top})^{\top}\bigr) = c_i\bigl({\textrm{vec}}({\Sigma}_i) {\textrm{vec}}({\Sigma}_i)^{\top} + 2{\Sigma}_i\otimes {\Sigma}_i\bigr), \\
&E\bigl(v_i^3{\textrm{vec}}({z}_i{z}_i^{\top}){\textrm{vec}}({z}_i{z}_i^{\top})^{\top}\bigr) = - c_i^*\bigl({\textrm{vec}}({\Sigma}_i) {\textrm{vec}}({\Sigma}_i)^{\top} + 2{\Sigma}_i\otimes {\Sigma}_i\bigr), \\
&E\bigl(v_i^3 {z}_i^{\top}A_{i(t)}{z}_i {z}_i^{\top}A_{i(s)}{z}_i{z}_i^{\top}A_{i(r)}{z}_i\bigr) = -8\widetilde{\omega}_i \bigl({\textrm{tr}}\{A_{i(t)}\Sigma_i\}{\textrm{tr}}\{A_{i(s)}\Sigma_i\}{\textrm{tr}}\{A_{i(r)}\Sigma_i\} \\
& \hspace{5.6cm} + 2{\textrm{tr}}\{A_{i(t)}\Sigma_i\}{\textrm{tr}}\{A_{i(s)}\Sigma_iA_{i(r)}\Sigma_i\} \\
& \hspace{5.6cm} + 2{\textrm{tr}}\{A_{i(s)}\Sigma_i\}{\textrm{tr}}\{A_{i(t)}\Sigma_iA_{i(r)}\Sigma_i\}\\
& \hspace{5.6cm} + 2{\textrm{tr}}\{A_{i(r)}\Sigma_i\}{\textrm{tr}}\{A_{i(t)}\Sigma_iA_{i(s)}\Sigma_i\} \\
& \hspace{5.6cm} + 8 {\textrm{tr}}\{A_{i(t)}\Sigma_i A_{i(s)}\Sigma_i A_{i(r)}\Sigma_i\}\bigr),
\end{split}$$ where $c_i^* = 8\psi_{i(3,2)}/\{q_i(q_i+2)\}$, $\psi_{i(3,2)} = E(W_g^3({r}_i){r}_i^2)$, $\widetilde{\omega}_i =\psi_{i(3,3)}/\{q_i(q_i+2)(q_i + 4)\}$ and $\psi_{i(3,3)} = E(W_g^3({r}_i){r}_i^3)$.
[**Proof:** ]{}The proof can be obtained by adapting the results of Mitchell (1989) to a matrix version.
From Lemma A.1, we can find the cumulants of the log-likelihood derivatives required to compute the second-order biases.
[**Proof of Theorem 3.1:**]{} Following Cordeiro and Klein (1994), we write (\[biascorrection\]) in matrix notation to obtain the second-order bias vector of $\widehat{{\theta}}$ in the form $$\label{BTheta}
B_{\widehat{\theta}}(\theta) = K(\theta)^{-1}{W}{\textrm{vec}}(K(\theta)^{-1}),$$ where ${W} = ({W}^{(1)},\ldots,{W}^{(p)})$ is a $p\times p^{2}$ partitioned matrix, each ${W}^{(r)}$, referring to the $r$th component of ${\theta}$, being a $p\times p$ matrix with typical $(t,s)$th element given by
$$w_{ts}^{(r)} = \frac{1}{2}\kappa_{tsr}+\kappa_{ts,r} = \kappa_{ts}^{(r)} - \frac{1}{2}\kappa_{tsr} = \frac{3}{4}\kappa_{ts}^{(r)} - \frac{1}{4}(\kappa_{t,s,r} + \kappa_{sr}^{(t)}+\kappa_{rt}^{(s)}).$$
Because $K(\theta)$ is symmetric and the $t$th element of ${W}{\textrm{vec}}(K(\theta)^{-1})$ is $w_{t1}^{(1)}\kappa^{1,1} + (w_{t2}^{(1)} + w_{t1}^{(2)})\kappa^{1,2} + \cdots + (w_{tr}^{(s)} + w_{ts}^{(r)})\kappa^{s,r} + \cdots +(w_{tp}^{(p-1)} + w_{t(p-1)}^{(p)})\kappa^{p-1,p} +w_{tp}^{(p)}\kappa^{p,p}$, we may write
$$\label{WTS}
w_{ts}^{(r)} = \frac{1}{2}(w_{tr}^{(s)} + w_{ts}^{(r)}) = \frac{1}{4}(\kappa_{ts}^{(r)} + \kappa_{tr}^{(s)} - \kappa_{sr}^{(t)} - \kappa_{t,s,r}).$$
Comparing (\[BTheta\]) and (\[BIAS-vector\]) we note that for the proof of this theorem it suffices to show that $F^{\top}\widetilde{{H}}\xi = W {\textrm{vec}}(({F}^{\top} \widetilde{H}F)^{-1})$, i.e., $$W = F^{\top} H M H (\Phi_1,\ldots,\Phi_p).$$
Notice that $$\label{kappasr}
\begin{split}
\kappa_{sr} & = \sum_{i=1}^n \biggl\{ \frac{c_i}{2} {\textrm{tr}}\{A_{i(r)}C_{i(s)}\} - \frac{4\psi_{i(2,1)}}{q_i}a_{i(s)}^{\top}\Sigma_i^{-1}a_{i(r)} \\
& \hspace{1cm} -\frac{(c_i -1)}{4} {\textrm{tr}}\{A_{i(s)}\Sigma_i\}{\textrm{tr}}\{A_{i(r)}\Sigma_i\} \biggr\}.
\end{split}$$
The quantities $\psi_{i(2,1)}$ and $\psi_{i(2,2)}$ do not depend on $\theta$ and hence, the derivative of (\[kappasr\]) with respect to $\theta_t$ is $$\begin{aligned}
\kappa_{sr}^{(t)} &= \sum_{i=1}^n \biggl\{ \frac{c_i}{2} {\textrm{tr}}\{ A_{i(t)}\Sigma_iA_{i(s)}C_{i(r)} + A_{i(s)}\Sigma_iA_{i(t)}C_{i(r)} + C_{i(ts)}A_{i(r)} \\ & + C_{i(tr)}A_{i(s)}\}\biggr\} -\sum_{i=1}^n \biggl\{ \frac{4\psi_{i(2,1)}}{q_i}\bigl(a_{i(ts)}^{\top}\Sigma_i^{-1}a_{i(r)} + a_{i(s)}^{\top}A_{i(t)}a_{i(r)} \\
& + a_{i(s)}^{\top}\Sigma_i^{-1}a_{i(tr)}\bigr) \biggr\}+ \sum_{i=1}^n \biggl\{ \frac{(c_i -1)}{4} {\textrm{tr}}\{A_{i(t)}C_{i(s)} + \Sigma_i^{-1}C_{i(ts)}\}{\textrm{tr}}\{A_{i(r)}\Sigma_i\} \biggr\}\\
& +\sum_{i=1}^n \biggl\{ \frac{(c_i -1)}{4} {\textrm{tr}}\{A_{i(t)}C_{i(r)} + \Sigma_i^{-1}C_{i(tr)}\}{\textrm{tr}}\{A_{i(s)}\Sigma_i\}\biggr\}.\\\end{aligned}$$
Therefore,
$$\begin{aligned}
\kappa_{st}^{(r)} + \kappa_{tr}^{(s)} - \kappa_{sr}^{(t)} &= \sum_{i=1}^n \biggl\{ \frac{c_i}{2} {\textrm{tr}}\{ A_{i(r)}\Sigma_iA_{i(s)}C_{i(t)} + A_{i(s)}\Sigma_iA_{i(r)}C_{i(t)} \\
& + 2C_{i(rs)}A_{i(t)} \}\biggr\} - \sum_{i=1}^n \biggl\{ \frac{4\psi_{i(2,1)}}{q_i}\bigl(2a_{i(t)}^{\top} \Sigma_i^{-1} a_{i(sr)} \\
& + a_{i(t)}^{\top}A_{i(s)}a_{i(r)} + a_{i(s)}^{\top}A_{i(r)}a_{i(t)} - a_{i(s)}^{\top}A_{i(t)}a_{i(r)} \bigr) \biggr\} \\
& + \sum_{i=1}^n \biggl\{ \frac{(c_i -1)}{2} {\textrm{tr}}\{A_{i(r)}C_{i(s)} + \Sigma_i^{-1}C_{i(rs)}\}{\textrm{tr}}\{A_{i(t)}\Sigma_i\} \biggr\}.\end{aligned}$$
Now, the only quantity that remains to obtain is $\kappa_{t,s,r} = E(U_tU_sU_r)$. Noting that $z_i$ is independent of $z_j$ for $i\neq j$, we have $$\begin{aligned}
\kappa_{t,s,r} = \frac{1}{8}\sum_{i=1}^n E\biggl\{&\bigl[ {\textrm{tr}}\{A_{i(t)}(\Sigma_i - v_i z_iz_i^{\top}) \}{\textrm{tr}}\{A_{i(s)}(\Sigma_i - v_i z_iz_i^{\top}) \} \\
& {\textrm{tr}}\{A_{i(r)}(\Sigma_i - v_i z_iz_i^{\top}) \}\bigr] + 4{\textrm{tr}}\{A_{i(t)}(\Sigma_i - v_i z_iz_i^{\top}) \}(v_i^2 a_{i(r)}^{\top}\Sigma_i^{-1} \\
& z_i z_i^{\top} \Sigma_i^{-1}a_{i(s)}) + 4{\textrm{tr}}\{A_{i(r)}(\Sigma_i - v_i z_iz_i^{\top}) \}(v_i^2 a_{i(t)}^{\top}\Sigma_i^{-1}z_i z_i^{\top} \Sigma_i^{-1} \\
& a_{i(s)}) + 4{\textrm{tr}}\{A_{i(s)}(\Sigma_i - v_i z_iz_i^{\top}) \}(v_i^2 a_{i(t)}^{\top}\Sigma_i^{-1}z_i z_i^{\top} \Sigma_i^{-1}a_{i(r)})\biggr\}.\end{aligned}$$ Then, by using Lemma A.1 and from (\[WTS\]), we have, after lengthy algebra, that $$\label{Wr}
W^{(r)} = \sum_{i=1}^n F_i^{\top}H_iM_{i}H_i\Phi_{i(r)},$$ where
$$\Phi_{i(r)} = - \frac{1}{2} \left(H_i^{-1}M_i^{-1}B_{i(r)}H_iF_i + \frac{\partial F_i}{\partial \theta_r}\right),$$
and $$\begin{aligned}
B_{i(r)} &= -\frac{1}{2}
\begin{pmatrix}
\eta_{1i} C_{i(r)} & &2\eta_{1i} \Sigma_i \otimes a_{i(r)}^{\top} \\
2\eta_{2i} \Sigma_i\otimes a_{i(r)} & & 2(c_i-1)S_{1i(r)}
\end{pmatrix} \\ \\
& -
\frac{1}{4}
\begin{pmatrix}
\eta_{1i} \Sigma_i {\textrm{tr}}\{C_{i(r)}\Sigma_i^{-1}\} & &2\eta_{1i} a_{i(r)}{\textrm{vec}}(\Sigma_i)^{\top}\\
2\eta_{1i} {\textrm{vec}}(\Sigma_i)a_{i(r)}^{\top} & & 2(c_i + 8\widetilde{\omega}_i)S_{2i(r)}
\end{pmatrix},\end{aligned}$$ with
$$\begin{split}
\eta_{1i} &= c_i^* + 4\psi_{i(2,1)}/q_i, \ \
\eta_{2i} = c_i^* - 4\psi_{i(2,1)}/q_i, \\
S_{1i(r)} &= {\textrm{vec}}(\Sigma_i){\textrm{vec}}(C_{i(r)})^{\top} +\frac{1}{2}{\textrm{vec}}(\Sigma_i){\textrm{vec}}(\Sigma_i)^{\top}{\textrm{tr}}\{C_{i(r)}\Sigma_i^{-1}\} \ \ \mbox{and} \ \ \\
S_{2i(r)} &= {\textrm{vec}}(C_{i(r)}){\textrm{vec}}(\Sigma_i)^{\top} + {\textrm{vec}}(\Sigma_i){\textrm{vec}}(C_{i(r)})^{\top} + 4 \Sigma_i\otimes C_{i(r)} \\
&+ \big[\Sigma_i \otimes \Sigma_i+ \frac{1}{2}{\textrm{vec}}(\Sigma_i){\textrm{vec}}(\Sigma_i)^{\top}\big]{\textrm{tr}}\{C_{i(r)}\Sigma_i^{-1}\}.
\end{split}$$
Using (\[Wr\]) and (\[BTheta\]) the theorem is proved.
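As a purely illustrative note, once the expected information $K(\theta)$ and the blocks $W^{(r)}$ are available (e.g., built from (\[Wr\])), the bias vector in (\[BTheta\]) can be assembled numerically as in the following Python/NumPy sketch (our own, with hypothetical inputs):

```python
import numpy as np

def second_order_bias(K, W_blocks):
    """Assemble B(theta) = K^{-1} W vec(K^{-1}) from the information matrix K
    (p x p) and the list of p blocks W^(r), each of dimension p x p."""
    K_inv = np.linalg.inv(K)
    W = np.hstack(W_blocks)                       # p x p^2 partitioned matrix
    vec_K_inv = K_inv.reshape(-1, 1, order="F")   # column-stacking vec operator
    return K_inv @ W @ vec_K_inv                  # p x 1 second-order bias vector
```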
[**Proof of Corollary 3.1:**]{} It follows from Theorem 3.1, eq. (\[BIAS-vector\]), when $$F = \mbox{block--diag}\{F_{\theta_1}, F_{\theta_2}\}, \ \widetilde{H} = \mbox{block--diag} \{\widetilde{H}_{1}, \widetilde{H}_{2}\} \ \mbox{and} \ {\xi} = (\xi_{1}^{\top}, \xi_{2}^{\top})^{\top},$$ where $F_{\theta_j} = \left[F_{\theta_j(1)}^{\top}, \ldots, F_{\theta_j(n)}^{\top}\right]^{\top}$ and $\widetilde{H}_{j} = \mbox{block-diag}\{\widetilde{H}_{j(1)}, \ldots, \widetilde{H}_{j(n)}\}$ for $j = 1, 2$, with $F_{\theta_1(i)} = \partial \mu_i/\partial \theta_1^{\top}$, $F_{\theta_2(i)} = \partial[{\textrm{vec}}(\Sigma_i)]/\partial \theta_2^{\top}$, $\widetilde{H}_{1(i)} = \frac{4\psi_{i(2,1)}}{q_i} {\Sigma}_i^{-1}$ and $\widetilde{H}_{2(i)} = c_i \left(2 \Sigma_i \otimes \Sigma_i\right)^{-1} + (c_i-1){\textrm{vec}}(\Sigma_i^{-1}){\textrm{vec}}(\Sigma_i^{-1})^{\top}$. Furthermore, $\xi_{1} = \left[\xi_{1(1)}^\top, \ldots, \xi_{1(n)}^\top\right]^\top$ and $\xi_{2} = \left[\xi_{2(1)}^\top, \ldots, \xi_{2(n)}^\top\right]^\top$ with $$\begin{split}
\xi_{1(i)} =& \ - \frac{1}{2} \dot{F}_{\theta_1(i)} \ {\textrm{vec}}((F_{\theta_1}^{\top} \widetilde{H}_{(1)} F_{\theta_1})^{-1}), \\
\xi_{2(i)} =& \ \frac{1}{4} M_i^* P_i^* \ {\textrm{vec}}((F_{\theta_1}^{\top} \widetilde{H}_{(1)} F_{\theta_1})^{-1}) + \frac{1}{8} \left(M_i^* Q_i^* - 4 \ \dot{F}_{\theta_2(i)}\right) \\
& {\textrm{vec}}((F_{\theta_2}^{\top} \widetilde{H}_{(2)} F_{\theta_2})^{-1}).
\end{split}$$ Also, $\dot{F}_{\theta_1(i)} = [F_{\theta_1(i)}^1, \ldots, F_{\theta_1(i)}^{p_1}]$, $\dot{F}_{\theta_2(i)} = [F_{\theta_2(i)}^1, \ldots, F_{\theta_2(i)}^{p_2}]$, $Q_i^* = [Q_{i(1)}^*, \break \ldots, Q_{i(p_2)}^*]$, $P_i^* = [P_{i(1)}^*, \ldots,$ $P_{i(p_1)}^*]$, $F_{\theta_1(i)}^r = \frac{\partial F_{\theta_1(i)}}{\partial \theta_{1(r)}}$, $F_{\theta_2(i)}^s = \frac{\partial F_{\theta_2(i)}}{\partial \theta_{2(s)}}$, where $\theta_{1(r)}$ and $\theta_{2(s)}$ are the $r$th and $s$th elements of $\theta_1$ and $\theta_2$, respectively, $r = 1, \ldots, p_1$, $s = 1, \ldots, p_2$ and
$$\label{Eq.MQVP}
\begin{split}
M_{i}^* =& \ \frac{1}{c_i}\left(I_{q_i^2} - \frac{{\textrm{vec}}(\Sigma_i) {\textrm{vec}}(\Sigma_i)^\top}{2c_i + {\textrm{vec}}(\Sigma_i)^\top {\textrm{vec}}(\Sigma_i)} \right), \\
Q_{i(s)}^* =& \ \left((c_i - 1) S_{1i(s)} + \frac{1}{2} (c_i + 8\widetilde{\omega}_i)S_{2i(s)}\right) \left(\Sigma_i^{-1}\otimes \Sigma_i^{-1}\right) F_{\theta_2(i)}, \\
P_{i(r)}^* =& \ \left(2 \eta_{2i}(I_{q_i} \otimes a_{i(r)}) + \eta_{1i}{\textrm{vec}}(\Sigma_i)a_{i(r)}^{\top}\Sigma_i^{-1}\right) F_{\theta_1(i)}. \\
\end{split}$$
Acknowledgement {#acknowledgement .unnumbered}
===============
We gratefully acknowledge the financial support from CNPq and FAPESP.
Akaike, H. (1974). A new look at the statistical model identification. *IEEE Transactions on Automatic Control*, **19** 716–723.
Bull, S.B., Mak, C. and Greenwood, C. (2002). A modified score function estimator for multinomial logistic regression in small samples. *Computational Statistics and Data Analysis*, **39** 57–74.
Cao, C–Z, Lin, J–G and Zhu, X–X. (2012). On estimation of a heteroscedastic measurement error model under heavy-tailed distributions. *Computational Statistics and Data Analysis*, **56** 438–448.
Cordeiro, G.M., Ferrari, S.L.P., Uribe-Opazo, M.A. and Vasconcellos, K.L.P. (2000). Corrected maximum-likelihood estimation in a class of symmetric nonlinear regression models. *Statistics and Probability Letters*, **46** 317–328.
Cordeiro, G.M. and Klein, R. (1994). Bias correction in ARMA models. *Statistics and Probability Letters*, **19** 169–176.
Cordeiro, G.M. and McCullagh, P. (1991). Bias correction in generalized linear models. *Journal of the Royal Statistical Society B*, **53**, 629–643.
Cox, D.R. and Hinkley, D.V. (1974). *Theoretical Statistics*. London: Chapman and Hall.
Cox, D.R. and Reid, N. (1987). Parameter orthogonality and approximate conditional inference (with discussion). *Journal of the Royal Statistical Society B*, **40**, 1–39.
Cox, D.R. and Snell, E. (1968). A general definition of residuals (with discussion). *Journal of the Royal Statistical Society B*, **30** 248–275.
Cysneiros, F.J.A., Cordeiro, G.M. and Cysneiros, A.H.M.A. (2010). Corrected maximum likelihood estimators in heteroscedastic symmetric nonlinear models, *Journal of Statistical Computation and Simulation*, **80** 451–461.
Doornik, J.A. (2013). *Object-Oriented Matrix Programming using Ox*. London: Timberlake Consultants Press (ISBN 978-0-9571708-1-0).
Fang, K.T., Kotz, S. and Ng, K.W. (1990). *Symmetric Multivariate and Related Distributions*. London: Chapman and Hall.
Firth, D. (1993). Bias reduction of maximum likelihood estimates. *Biometrika*, **80** 27–38.
Gómez, E., Gómez-Villegas, M.A. and Martín, J.M. (1998). A multivariate generalization of the power exponential family of distributions. *Communications in Statistics. Part A: Theory and Methods*, **27** 589–600.
Hurvich, C.M. and Tsai, C.L. (1989). Regression and Time Series Model Selection in Small Samples. *Biometrika*, **76** 297–307.
Ihaka, R. and Gentleman, R. (1996). R: A language for data analysis and graphics. *Journal of Computational and Graphical Statistics*, **5** 299–314.
Kosmidis I. (2014). Improved estimation in cumulative link models. *Journal of the Royal Statistical Society B*, **76** 169–196.
Kosmidis I. and Firth, D. (2009). Bias reduction in exponential family nonlinear models. *Biometrika*, **96** 793–804.
Kosmidis I. and Firth, D. (2011). Multinomial logit bias reduction via the Poisson log-linear model. *Biometrika*, **98** 755–759.
Kulathinal, S.B., Kuulasmaa, K. and Gasbarra, D. (2002). Estimation of an errors-in-variables regression model when the variances of the measurement error vary between the observations. *Statistics in Medicine*, **21** 1089–1101.
Lange, K.L., Little, R.J.A. and Taylor, J.M.G. (1989). Robust statistical modeling using the $t$ distribution. *Journal of the American Statistical Association*, **84** 881–896.
Lawley, D.N. (1956). A general method for approximating to the distribution of likelihood ratio criteria. *Biometrika*, **43** 295–303.
Lemonte, A.J. (2011). Improved maximum-likelihood estimation in a regression model with general parametrization. *Journal of Statistical Computation and Simulation*, **81** 1027–1037.
Lemonte, A.J. and Patriota, A.G. (2011). Multivariate elliptical models with general parameterization. *Statistical Methodology*, **8** 389–400.
Lucas, A.(1997). Robustness of the Student $t$ based M-estimator. *Communications in Statistics, Theory and Methods*, **26** 1165–1182.
Magnus, J.R. and Neudecker, H. (2007). *Matrix Differential Calculus with Applications in Statistics and Econometrics*. Chichester: Wiley, third edition.
Mehrabi, Y. and Matthews, J.N.S. (1995). Likelihood-based methods for bias reduction in limiting dilution assays. *Biometrics*, **51** 1543–1549.
Mitchell, A.F.S. (1989). The information matrix, skewness tensor and $\alpha$-connections for the general multivariate elliptical distribution. *Annals of the Institute of Statistical Mathematics*, **41** 289–304.
Patriota, A.G. (2011). A note on influence in nonlinear mixed-effects elliptical models. *Computational Statistics and Data Analysis*, **51** 218–225.
Patriota, A.G., Bolfarine, B. and de Castro, M. (2009a). A heteroscedastic structural errors-in-variables model with equation error. *Statistical Methodology*, **6** 408–423.
Patriota, A.G. and Lemonte, A.J. (2009). Bias correction in a multivariate normal regression model with general parameterization. *Statistics & Probability Letters*, **79** 1655–1662.
Patriota, A.G., Lemonte, A.J. and Bolfarine, H. (2009b). Improved maximum likelihood estimators in a heteroskedastic errors-in-variables model. *Statistical Papers*, **52** 455–467.
Pettitt, A.N., Kelly, J.M. and Gao, J.T. (1998). Bias correction for censored data with exponential lifetimes. *Statistica Sinica*, **8** 941–964.
Rosillo, F.G., Chivelet, N.M. (2009). Lifetime prediction of fluorescent lamps used in photovoltaic systems. *Lighting Research and Technology*, **41 (2)** 183–197.
Schwarz, G. (1978). Estimating the dimension of a model. *Annals of Statistics*, **6** 461–464.
Tiede, J.J. and Pagano, M. (1979). Application of robust calibration to radioimmunoassay. *Biometrics*, **35** 567–574.
Vanegas, L.H. and Paula, G.A. (2014). Log–symmetric distributions: statistical properties and parameter estimation. *Brazilian Journal of Probability and Statistics*. Accepted for publication. Available online at [http://www.imstat.org/bjps/papers/BJPS272.pdf]{}; accessed on 2015-06-23.
Vanegas, L.H. and Paula, G.A. (2015). A semiparametric approach for joint modeling of median and skewness. *Test*, **24** 110–135.
Vasconcellos, K.L.P. and Cordeiro, G.M. (2000). Bias corrected estimates in multivariate Student $t$ regression models. *Communications in Statistics, Theory and Methods*, **29** 797–822.
[^1]: All the omitted tables in this paper are presented in a supplement available from the authors upon request.
---
abstract: 'The polaron features due to electron-phonon interactions with different coupling ranges are investigated by adopting a variational approach. The ground-state energy, the spectral weight, the average kinetic energy, the mean number of phonons, and the electron-lattice correlation function are discussed for the system with coupling to local and nearest neighbor lattice displacements comparing the results with the long range case. For large values of the coupling with nearest neighbor sites, most physical quantities show a strong resemblance with those obtained for the long range electron-phonon interaction. Moreover, for intermediate values of interaction strength, the correlation function between electron and nearest neighbor lattice displacements is characterized by an upturn as function of the electron-phonon coupling constant.'
author:
- 'C. A. Perroni, V. Cataudella, G. De Filippis, V. Marigliano Ramaglia'
title: 'Effects of electron-phonon coupling range on the polaron formation'
---
Introduction
============
In the last years effects due to strong electron-phonon ($el-ph$) interactions and polaronic signatures have been evidenced in several compounds, such as high-temperature cuprate superconductors, [@1] colossal magnetoresistance manganites, [@2] fullerenes, [@fullerene] carbon nanotubes [@carbon] and $DNA$. [@3] This amount of experimental data has stimulated the study of different $el-ph$ coupled systems. Among the most studied models there are the Holstein lattice model [@4] characterized by a very short-range ($SR$) $el-ph$ interaction, and the Fr$\ddot{o}$hlich model [@froh] that takes into account long-range ($LR$) $el-ph$ couplings in polar compounds treating the dielectric as a continuum medium. Moreover, more realistic lattice interaction models including both $SR$ and $LR$ couplings have been recently introduced. [@ahn]
Within the Holstein model a tight-binding electron locally couples to optical phonon modes. For intermediate $el-ph$ couplings and electron and phonon energy scales not well separated, it has been found by numerical studies [@7; @korn; @8; @10; @ciuchi] and variational approaches [@11; @17; @17bis] that the system undergoes a crossover from a weakly dressed electron to a massive localized polaronic quasiparticle, the small Holstein polaron ($SHP$). A variational approach [@17; @17bis] proposed by some of us and based on a linear superposition between the Bloch states characteristic of the weak and strong coupling regime is able to describe all the ground state properties with great accuracy. Moreover this method provides an immediate physical interpretation of the intermediate regime characterized by the polaron crossover.
Recently a discrete version of the Fr$\ddot{o}$hlich model has been introduced in order to understand the role of $LR$ coupling on the formation of the lattice polaron. [@alex] Due to the $LR$ interaction, the polaron is much lighter than the $SHP$ with the same binding energy in the strong coupling regime. [@alex2] Furthermore the lattice deformation induced by the electron is spread over many lattice sites giving rise to the formation of a large polaron ($LP$) also in the strong coupling region. [@fehs] Extending the variational approach previously proposed for the study of systems with local $el-ph$ coupling, many properties have been studied by some of us as a function of the model parameters focusing on the adiabatic regime. [@ourlong] Indeed there is a range of values of the $el-ph$ coupling where the ground state is well described by a particle with a weakly renormalized mass but a spectral weight much smaller than unity. Furthermore, with increasing the strength of interaction in the same regime, the renormalized mass gradually increases, while the average kinetic energy is not strongly reduced. Finally due to the $LR$ coupling a strong mixing between electronic and phononic degrees of freedom has been found even for small values of the $el-ph$ coupling constant.
Both the $SR$ Holstein and the $LR$ discrete Fr$\ddot{o}$hlich model can be described by a quite general Hamiltonian $H$
$$\begin{aligned}
H=-t\sum_{<i,j>} c^{\dagger}_{i}c_{j} +\omega_0\sum_{i}
a^{\dagger}_i a_i +\alpha \omega_0 \sum_{i,j}
f(|\vec{R}_i-\vec{R}_j|) c^{\dagger}_{i}c_{i}\left(
a_j+a^{\dagger}_j\right), \label{1r}\end{aligned}$$
where $f(|\vec{R}_i-\vec{R}_j| )$ is the interacting force between an electron on the site $i$ and an ion displacement on the site $j$. In Eq.(\[1r\]) $c^{\dagger}_{i}$ ($c_i$) denotes the electron creation (annihilation) operator at site $i$, whose position vector is indicated by $\vec{R}_{i}$, and the symbol $<>$ denotes nearest neighbors ($nn$) linked through the transfer integral $t$. The operator $a^{\dagger}_i$ ($a_i$) represents the creation (annihilation) operator for phonon on the site $i$, $\omega_0$ is the frequency of the optical local phonon modes, and $\alpha$ controls the strength of $el-ph$ coupling. The units are such that $\hbar=1$. The Hamiltonian (\[1r\]) reduces to the Holstein model for $$f(|\vec{R}_i-\vec{R}_j| )=\delta_{\vec{R}_i,\vec{R}_j},
\label{force1}$$ while in the $LR$ case [@alex] the interaction force is given by $$f(|\vec{R}_i-\vec{R}_j|)= \left(|\vec{R}_i-\vec{R}_j|^{2} +1
\right)^{-\frac{3}{2}},
\label{force}$$ if the distance $|\vec{R}_i-\vec{R}_j|$ is measured in units of lattice constant.
In addition to the $SR$ and $LR$ case, in this work we analyze the properties of the system where the electron couples with local and $nn$ lattice displacements. In this case the interaction force becomes $$f(|\vec{R}_i-\vec{R}_j|
)=\delta_{\vec{R}_i,\vec{R}_j}+ \frac{\alpha_1}{\alpha}
\sum_{\vec{\delta}} \delta_{\vec{R}_i+{\vec \delta},\vec{R}_j},
\label{force2}$$ where ${\vec \delta}$ indicates the $nn$ sites. For all the couplings of Eqs. (\[force1\],\[force\],\[force2\]) the $el-ph$ matrix element in the momentum space $M_{\vec{q}}$ is $$M_{\vec{q}}=\frac{\alpha \omega_0}{\sqrt{L}} \sum_{m}
f(|\vec{R}_m|) e^{i \vec{q} \cdot \vec{R}_m},$$ with $L$ number of lattice sites. Through the matrix element $M_{\vec{q}}$ we can define the polaronic shift $E_p$ $$E_p=\sum_{\vec{q}}\frac{M^{2}_{\vec{q}}}{\omega_0},$$ and the coupling constant $\lambda=E_p/zt$, with $z$ lattice coordination number, that represents a natural measure of the strength of the $el-ph$ coupling for any range of the interaction. Limiting the analysis to the one-dimensional case, the matrix element $M_{\vec{q}}$ is reported in Fig. 1 for the $SR$, $LR$, and $nn$ extended range ($ER$) couplings. In the $SR$ case the coupling is constant as function of the transferred phononic momentum, while in the $LR$ case the vertex is peaked around $q
\simeq 0$. With increasing the ratio $\alpha_1/\alpha$, the $ER$ interaction deviates from the constant behavior developing a peak around $q \simeq 0$. Actually for the ratio $\alpha_1/\alpha=0.3$ the interaction vertex of the $ER$ case is close to the behavior of the $LR$ coupling.
In this paper we adopt the variational approach previously proposed [@17; @17bis; @ourlong] for the study of systems with local and $LR$ $el-ph$ interactions in order to study the system with $nn$ $ER$ $el-ph$ coupling. The aim is to investigate the crossover from $SR$ to $LR$ interactions in the $ER$ model. The evolution of the ground-state spectral weight, the average kinetic energy, the mean number of phonons, and the electron-lattice correlation function with respect to the adiabaticity ratio $\omega_0/t$ and the $el-ph$ coupling constant is discussed comparing the results with the local and $LR$ case. For large values of the $nn$ $el-ph$ coupling many properties show a behavior similar to those obtained for the $LR$ $el-ph$ interaction. Regions of the model parameters are distinguished according to the values assumed by the spectral weight giving rise to a polaronic phase diagram. The transition line between the crossover and the strong coupling regime continuously evolves toward that of the $LR$ case by increasing the coupling of the $ER$ system. Finally, for intermediate values of this coupling, the correlation function between electron and $nn$ lattice displacements shows an upturn with increasing the $el-ph$ constant $\lambda$.
In section II the variational approach is reviewed, while in section III the results are discussed.
Variational wave function
=========================
In this section the variational approach is briefly summarized. Details can be found in previous works. [@17; @17bis]
The trial wave functions are translational invariant Bloch states obtained by taking a superposition of localized states centered on different lattice sites $$|\psi^{(i)}_{\vec{k}}>=\frac{1}{\sqrt{L}}\sum_{\vec{R}_n}e^{i\vec{k}\cdot
\vec{R}_n}|\psi^{(i)}_{\vec{k}}(\vec{R}_n)>,
\label{12rn}$$ where $$|\psi^{(i)}_{\vec{k}}(\vec{R}_n)> = e^{\sum_{\vec{q}}\left[
h^{(i)}_{\vec{q}}(\vec{k})a_{\vec{q}} e^{i\vec{q}\cdot \vec{R}_n}
+h.c.\right]} \sum_m \phi^{(i)}_{\vec{k}}(\vec{R}_m)
c^{\dagger}_{m+n}|0> .
\label{13rn}$$ In Eq. (\[12rn\]) the apex $i=w,s$ indicates the weak and strong coupling polaron wave function, respectively, $|0>$ denotes the electron and phonon vacuum state, and $\phi^{(i)}_{\vec{k}}(\vec{R}_m)$ are variational parameters defining the spatial broadening of the electronic wave function. The phonon distribution functions $h^{(i)}_{\vec{q}}(\vec{k})$ are chosen in order to reproduce polaron features in the two asymptotic limits. [@17]
In the intermediate regime the weak and strong coupling wave functions are not orthogonal and the off-diagonal matrix elements of the Hamiltonian are not zero. Therefore the ground state properties are determined by considering as trial state $|\psi_{\vec{k}}>$ a linear superposition of the weak and strong coupling wave functions $$|\psi_{\vec{k}}>=\frac{A_{\vec{k}}
|\overline{\psi}^{(w)}_{\vec{k}}>+ B_{\vec{k}}
|\overline{\psi}^{(s)}_{\vec{k}}>}
{\sqrt{A^2_{\vec{k}}+B^2_{\vec{k}}
+2A_{\vec{k}}B_{\vec{k}}S_{\vec{k}}}},
\label{31r}$$ where $$\begin{aligned}
&&|\overline{\psi}^{(w)}_{\vec{k}}>= \frac{|\psi^{(w)}_{\vec{k}}>}
{\sqrt{<\psi^{(w)}_{\vec{k}}|\psi^{(w)}_{\vec{k}}>}},
|\overline{\psi}^{(s)}_{\vec{k}}>= \frac{|\psi^{(s)}_{\vec{k}}>}
{\sqrt{<\psi^{(s)}_{\vec{k}}|\psi^{(s)}_{\vec{k}}>}} \label{32r}\end{aligned}$$ and $S_{\vec{k}}$ $$S_{\vec{k}}=
\frac{<\overline{\psi}^{(w)}_{\vec{k}}|\overline{\psi}^{(s)}_{\vec{k}}>+h.c.}
{2}
\label{33r}$$ is the overlap factor of the two wave functions $|\overline{\psi}^{(w)}_{\vec{k}}>$ and $|\overline{\psi}^{(s)}_{\vec{k}}>$. In Eq.(\[31r\]) $A_{\vec{k}}$ and $B_{\vec{k}}$ are two additional variational parameters which provide the relative weight of the weak and strong coupling solutions for any particular value of $\vec{k}$. The variational minimization is performed extending the electron wave function up to fifth neighbors.
Results
=======
In this section we discuss ground state properties in the one-dimensional case for the different ranges of $el-ph$ coupling.
In Fig. 2(a) we report the polaron ground state energy as a function of the $el-ph$ constant coupling $\lambda$. The variational method recovers the perturbative results and improves significantly these asymptotic estimates in the intermediate region. In this regime the energy decreases with increasing the range of the $e-ph$ coupling. Moreover, with increasing the range of the coupling, the crossover between the weak and strong coupling solution becomes less evident. Actually, as shown in Fig. 2(b), there are marked differences in the ratio $B/A$ that is the weight of the strong coupling solution with respect to the weak coupling one. In the $SR$ case the strong coupling solution provides all the contribution since the overlap with the weak coupling function is negligible. However, with increasing the range of the interaction, the weight of the weak coupling function increases and the polaronic crossover becomes smooth. Another quantity that gives insight about the properties of the electron state is the average kinetic energy $K$ reported in Fig. 2(c) (in units of the bare electron energy). While in the $SR$ case $K$ is strongly reduced, in the $LR$ case it is only weakly renormalized stressing that the self-trapping of the electron occurs for larger couplings with increasing the range of the interaction. Finally the mean number of phonons is plotted in Fig. 2(d). In the weak-coupling regime the interaction of the electron with displacements on different sites is able to excite more phonons. However, in the strong coupling regime there is an inversion in the roles played by $SR$ and $ER$ interaction. Indeed the $SHP$ is strongly localized on the site allowing a larger number of local phonons to be excited.
In addition to the quantities discussed in Fig. 2, other properties change remarkably with increasing the ratio $\alpha_1/\alpha$. An interesting property is the ground state spectral weight $Z$, that measures the fraction of the bare electron state in the polaronic trial wave function. As plotted in Fig. 3(a), the increase of the $el-ph$ coupling strength induces a decrease of the spectral weight that is more evident with increasing the range of the $el-ph$ coupling. Only in the strong coupling regime the spectral weights calculated for different ranges assume similar small values. While for the local Holstein model $Z=m/m^*$, as the $ER$ case is considered, $Z$ becomes progressively smaller than $m/m^*$ in analogy with the behavior due to the $LR$ interaction. [@ourlong] We have found that for the ratio $\alpha_1/\alpha=0.3$ there is a region of intermediate values of $\lambda$ where the ground state is described by a particle with a weakly renormalized mass but a spectral weight $Z$ much smaller than unity. In Fig. 3(b) we propose a phase diagram based on the values assumed by the spectral weight making a comparison with that obtained in the $SR$ and $LR$ case. [@17bis; @ourlong] Analyzing the behavior of $Z$ it is possible to distinguish different regimes, for example the crossover regime ($0.1 < Z < 0.9$) characterized by intermediate values of spectral weight and a mass not strongly enhanced for the $ER$ case, and strong coupling regime ($Z<0.1$) where the spectral weight is negligible and the mass is large but not enormous if the range of the coupling increases. With increasing the range of the interactions in the adiabatic case there is strong mixing of electronic and phononic degrees of freedom for values of $\lambda$ smaller than those characteristic of local Holstein interaction. Furthermore, entering the strong coupling regime, the charge carrier is still mobile and it does not undergo any abrupt localization. Only in the antiadiabatic regime the transition lines separating the crossover from the strong coupling regime tend to superimpose.
Another important quantity associated to the polaron formation is the correlation function $S(R_l)$ $$S(R_l)=S_{k=0}(R_l)= \frac{\sum_{n} <\psi_{k=0}
|c^{\dagger}_nc_n\left(a^{\dagger}_{n+l}+a_{n+l}\right)|\psi_{k=0}>}
{<\psi_{k=0} |\psi_{k=0}>} \label{102r}.$$ In Fig. 4(a) we report the correlation function $S(R_l=0)$ at $\omega_0/t=1$ for several ranges of the $el-ph$ interaction. In analogy with the behavior of the average number of phonons discussed in Fig. 2(d), the on-site correlation function is larger with increasing the range of the interaction in the weak coupling regime, but it becomes smaller as a function of the coupling constant $\lambda$ in the strong coupling region, indicating that the $SHP$ is more effective in producing local lattice distortions. Actually in the $ER$ and $LR$ case, the lattice deformation is spread over $nn$ or many lattice sites, respectively, giving rise to the formation of $LP$ also in the strong coupling regime. Therefore it is interesting to analyze the behavior of the correlation function at $nn$ sites. As reported in Fig. 4(b), in the $SR$ case there is a minimum as a function of $\lambda$ since the particle tends to localize on a single site with increasing the $el-ph$ coupling. In the $LR$ case the lattice distortion shows a decreasing behavior with increasing $\lambda$, indicating that the nearest neighbor contribution is always relevant. However, for intermediate values of the ratio $\alpha_1/\alpha$ in the $ER$ case, the correlation function shows an upturn as a function of the coupling constant $\lambda$. Actually, for small values of the coupling, this function tends to follow the behavior of the local interaction, but, with increasing the value of $\lambda$, the coupling to the $nn$ lattice displacements is able to give deviations from the $SR$ case. In fact the lattice deformation reaches a maximum, then begins to decrease following the behavior of the $LR$ interaction. Therefore, as a function of $\lambda$, two different regimes in the correlation function can be evidenced.
Discussion and Conclusion
=========================
In this paper we have extended a variational approach in order to study the polaronic ground-state features of a one-dimensional $el-ph$ model with coupling to local and $nn$ lattice displacements. Many physical quantities such as the ground state energy and spectral weight, the average kinetic energy, the mean number of phonons, and the electron-lattice correlation function have been discussed making a comparison with the results obtained with $SR$ and $LR$ interactions. It has been possible to ascertain that most physical quantities are quantitatively very close to those obtained for the $LR$ interaction when the $el-ph$ coupling in the $ER$ case is large. A polaronic phase diagram based on the values assumed by the spectral weight has been proposed. It has been shown that the transition lines between the crossover and the strong coupling regime continuously evolve toward that of the $LR$ case by increasing the coupling of the $ER$ system. The deviations of the $ER$ case from the $LR$ case become evident only in quantities depending on distances larger than the lattice parameter, such as the electron-lattice correlation function. At nearest neighbor sites, for large values of the coupling, the $ER$ interaction is able to reproduce the correlation function characteristic of the $LR$ case, while, at intermediate values of the ratio $\alpha_1/\alpha$, the lattice deformation shows an upturn as a function of the coupling constant $\lambda$.
Recently, a variational wave function [@perrossh] has been proposed to study the polaron formation in the Su-Schrieffer-Heeger ($SSH$) model, where the electronic transfer integral depends on the relative displacement between $nn$ sites. Unlike the original $SSH$ model, the non-local electron-lattice coupling has been assumed to be due to the interaction with optical phonon modes. It has been shown that with this type of interaction the tendency towards localization is hindered by the pathological sign change of the effective next-nearest-neighbor hopping. Therefore it is not possible to reach the strong coupling regime where most properties obtained with the $ER$ density-type $el-ph$ coupling bear a strong resemblance to those in the $LR$ model. Only the coupling with acoustic phonons is able to provide a solution with localized behavior within the $SSH$ model. [@lamagna]
The variational approach for models with density-type $el-ph$ coupling can be generalized to high dimensions, where it can still give a good description of ground state features. [@17; @giulio] However, in order to reproduce with the $ER$ interaction most physical quantities of the $LR$ case, with increasing the dimensionality, it is important to include not only coupling terms at $nn$ sites but also at next nearest neighbors. Actually it is necessary that the expansion of the coupling to near sites gives rise to an $el-ph$ interaction vertex similar to that obtained in the $LR$ case. Under these conditions the variational method is able to interpolate between the behavior of the $SR$ case to the $LR$ one with increasing the coupling of the interaction with close sites.
Figure captions {#figure-captions .unnumbered}
===============
[Fig.1]{} The $el-ph$ matrix element $M_q$ (in units of $\alpha
\omega_0/{\sqrt L}$) for different ranges of the interaction as function of the momentum q (in units of $\pi$).
[Fig.2]{} The ground state energy $E_0$ in units of $\omega_0$ (a), the ratio B/A at k=0 (b), the average kinetic energy $K$ in units of the bare one (c) and the average phonon number $N$ (d) for $t=\omega_0$ as a function of the coupling constant $\lambda$ for different ranges of the $el-ph$ interaction: $SR$ (solid line), $ER$ with $\alpha_1/\alpha=0.05$ (dash line), $ER$ with $\alpha_1/\alpha=0.1$ (dot line), $ER$ with $\alpha_1/\alpha=0.2$ (dash-dot line), $ER$ with $\alpha_1/\alpha=0.3$ (dash-double dot line), $LR$ (double dash-dot line).
[Fig.3]{} (a) The ground state spectral weight at $\omega_0 /t =1$ as a function of the coupling constant $\lambda$ for different ranges of interaction: $SR$ (solid line), $ER$ with $\alpha_1/\alpha=0.05$ (dash line), $ER$ with $\alpha_1/\alpha=0.1$ (dot line), $ER$ with $\alpha_1/\alpha=0.2$ (dash-dot line), $ER$ with $\alpha_1/\alpha=0.3$ (dash-double dot line), $LR$ (double dash-dot line).
\(b) Polaron phase diagram for $SR$ (solid line), $ER$ with $\alpha_1/\alpha=0.2$ (dash-dot line) , $ER$ with $\alpha_1/\alpha=0.3$ (dash-double dot line), and $LR$ (double dash-dot line) $el-ph$ interaction. The transition lines correspond to model parameters such that the spectral weight $Z=0.1$.
[Fig.4]{} The electron-lattice correlation functions $S(R_l=0)$ (a) and $S(R_l=\delta)$ (b) at $\omega_0 /t =1$ for different ranges of the $el-ph$ interaction: $SR$ (solid line), $ER$ with $\alpha_1/\alpha=0.05$ (dash line), $ER$ with $\alpha_1/\alpha=0.1$ (dot line), $ER$ with $\alpha_1/\alpha=0.2$ (dash-dot line), $ER$ with $\alpha_1/\alpha=0.3$ (dash-double dot line), $LR$ (double dash-dot line).
Guo-Meng-Zhao, M. B. Hunt, H. Keller, and K. A. Muller, Nature [**385**]{}, 236 (1997); A. Lanzara, P. V. Bogdanov, X. J. Zhou, S. A. Kellar, D. L. Feng, E. D. Lu, T. Yoshida, H. Eisaki, A. Fujimori, K. Kishio, J.-I. Shimoyama, T. Noda, S. Uchida, Z. Hussain, and Z.-X. Shen, [*ibid.*]{} [**412**]{}, 510 (2001); R. J. McQueeney, J. L. Sarrao, P. G. Pagliuso, P. W. Stephens, and R. Osborn, Phys. Rev. Lett. [**87**]{}, 77001 (2001).
J. M. De Teresa, M. R. Ibarra, P. A. Algarabel, C. Ritter, C. Marquina, J. Blasco, J. Garcia, A. del Moral, and Z. Arnold, Nature [**386**]{}, 256 (1997); A. J. Millis, [*ibid.*]{} [**392**]{}, 147 (1998); M. B. Salamon and M. Jaime, Rev. Mod. Phys. [**73**]{}, 583 (2001).
O. Gunnarsson, Rev. Mod. Phys. [**69**]{}, 575 (1997).
M. Verissimo-Alves, R. B. Capaz, B. Koiller, E. Artacho, and H. Chacham, Phys. Rev. Lett. [**86**]{}, 3372 (2001); E. Piegari, V. Cataudella, V. Marigliano, and G. Iadonisi [*ibid.*]{} [**89**]{}, 49701 (2002).
S. S. Alexandre, E. Artacho, J. M. Soler, and H. Chacham, Phys. Rev. Lett. [**91**]{}, 108105 (2003).
T. Holstein, Ann. Phys. (Leipzig) [**8**]{}, 325 (1959); [**8**]{}, 343 (1959).
H. Fr${\ddot o}$hlich, Adv. Phys. [**3**]{}, 325 (1954).
K.H. Ahn, T. Lookman, and A. R. Bishop, Nature [**428**]{}, 401 (2004).
H. de Raedt and Ad Lagendijk, Phys. Rev. B [**27**]{}, 6097 (1983); [**30**]{}, 1671 (1984).
P. E. Kornilovitch, Phys. Rev. Lett. [**81**]{}, 5382 (1998).
E. de Mello and J. Ranninger, Phys. Rev. B [**55**]{}, 14872 (1997); M. Capone, W. Stephan, and M. Grilli, [*ibid.*]{} [**56**]{}, 4484 (1997); A. S. Alexandrov, V. V. Kabanov, and D. K. Ray, [*ibid.*]{} [**49**]{}, 9915 (1994); G. Wellein and H. Fehske, [*ibid.*]{} [**56**]{}, 4513 (1997).
S. R. White, Phys. Rev. B [**48**]{}, 10345 (1993); E. Jeckelmann and S. R. White, [*ibid.*]{} [**57**]{}, 6376 (1998).
S. Ciuchi, F. de Pasquale, S. Fratini, and D. Feinberg, Phys. Rev. B [**56**]{}, 4494 (1997).
A. H. Romero, D. W. Brown, and K. Lindenberg, Phys. Rev. B [**59**]{}, 13728 (1999).
V. Cataudella, G. De Filippis, and G. Iadonisi, Phys. Rev. B [**60**]{}, 15163 (1999).
V. Cataudella, G. De Filippis, and G. Iadonisi, Phys. Rev. B [**62**]{}, 1496 (2000).
A. S. Alexandrov and P. E. Kornilovitch, Phys. Rev. Lett. [**82**]{}, 807 (1999).
A. S. Alexandrov and B. Ya. Yavidov, Phys. Rev. B [**69**]{}, 073101 (2004).
H. Fehske, J. Loos, and G. Wellein, Phys. Rev. B [**61**]{}, 8016 (2000).
C.A. Perroni, V. Cataudella, G. De Filippis, J. Phys.: Condens. Matter [**16**]{}, 1593 (2004).
G. De Filippis, V. Cataudella, V. Marigliano Ramaglia, C. A. Perroni, and D. Bercioux, Eur. Phys. J. B [**36**]{}, 65 (2003).
C.A. Perroni, E. Piegari, M. Capone, and V. Cataudella, Phys. Rev. B [**69**]{}, 174301 (2004).
A. La Magna and R. Pucci, Phys. Rev. B [**55**]{}, 6296 (1997).
---
abstract: 'As a typical dimensionality reduction technique, random projection can be simply implemented with linear projection, while maintaining the pairwise distances of high-dimensional data with high probability. Considering this technique is mainly exploited for the task of classification, this paper is developed to study the construction of random matrix from the viewpoint of feature selection, rather than of traditional distance preservation. This yields a somewhat surprising theoretical result, that is, the sparse random matrix with exactly one nonzero element per column, can present better feature selection performance than other more dense matrices, if the projection dimension is sufficiently large (namely, not much smaller than the number of feature elements); otherwise, it will perform comparably to others. For random projection, this theoretical result implies considerable improvement on both complexity and performance, which is widely confirmed with the classification experiments on both synthetic data and real data.'
author:
- 'Weizhi Lu, Weiyu Li, Kidiyo Kpalma and Joseph Ronsin'
bibliography:
- 'egbib.bib'
title: 'Sparse Matrix-based Random Projection for Classification'
---
Random Projection, Sparse Matrix, Classification, Feature Selection, Distance Preservation, High-dimensional data
\[lemma\][Corollary]{}
Introduction
============
Random projection attempts to project a set of high-dimensional data into a low-dimensional subspace without distortion on pairwise distance. This brings attractive computational advantages on the collection and processing of high-dimensional signals. In practice, it has been successfully applied in numerous fields concerning categorization, as shown in [@Goel05] and the references therein. Currently the theoretical study of this technique mainly falls into one of the following two topics. One topic is concerned with the construction of random matrix in terms of distance preservation. In fact, this problem has been sufficiently addressed along with the emergence of Johnson-Lindenstrauss (JL) lemma [@Jonson84]. The other popular one is to estimate the performance of traditional classifiers combined with random projection, as detailed in [@Durrant13] and the references therein. Specifically, it may be worth mentioning that, recently the performance consistency of SVM on random projection is proved by exploiting the underlying connection between JL lemma and compressed sensing [@Calderbank09compressedlearning] [@Baraniuk08].
Based on the principle of distance preservation, Gaussian random matrices [@Indyk98] and a few sparse $\{0, \pm1\}$ random matrices [@Achlioptas03; @Li06; @Dasgupta10] have been sequentially proposed for random projection. In terms of implementation complexity, it is clear that the sparse random matrix is more attractive. Unfortunately, as it will be proved in the following section \[subsec-sparse\], the sparser matrix tends to yield weaker distance preservation. This fact largely weakens our interests in the pursuit of sparser random matrix. However, it is necessary to mention a problem ignored for a long time, that is, random projection is mainly exploited for various tasks of classification, which prefer to maximize the distances between different classes, rather than preserve the pairwise distances. In this sense, we are motivated to study random projection from the viewpoint of feature selection, rather than of traditional distance preservation as required by JL lemma. During this study, however, the property of satisfying JL lemma should not be ignored, because it promises the stability of data structure during random projection, which enables the possibility of conducting classification in the projection space. Thus throughout the paper, all evaluated random matrices are previously ensured to satisfy JL lemma to a certain degree.
In this paper, we indeed propose the desired $\{0, \pm1\}$ random projection matrix with the best feature selection performance, by theoretically analyzing the trend of feature selection performance over the varying sparsity of random matrices. The proposed matrix presents currently the sparsest structure, which holds only one random nonzero position per column. In theory, it is expected to provide better classification performance than other denser matrices, if the projection dimension is not much smaller than the number of feature elements. This conjecture is confirmed with extensive classification experiments on both synthetic and real data. The rest of the paper is organized as follows. In the next section, the JL lemma is first introduced, and then the distance preservation property of sparse random matrices over varying sparsity is evaluated. In section \[sec-theoryframework\], a theoretical framework is proposed to predict the feature selection performance of random matrices over varying sparsity. According to the theoretical conjecture, the currently known sparsest matrix with better performance than other denser matrices is proposed and analyzed in section \[sec-proposal\]. In section \[sec-experiment\], the performance advantage of the proposed sparse matrix is verified by performing binary classification on both synthetic data and real data. The real data includes three representative datasets in dimension reduction: face image, DNA microarray and text document. Finally, this paper is concluded in section \[sec-conclusion\].
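As an illustration of the matrix just described, the following sketch (ours, in Python/NumPy) builds a $k\times d$ matrix with exactly one nonzero entry per column; the normalization $\pm\sqrt{k}$ is our assumption, chosen so that $\mathds{E}(r_{ij}^2)=1$ as required for JL-type guarantees, and it may differ from the scaling adopted later in the paper.

```python
import numpy as np

def one_per_column_matrix(k, d, rng=None):
    """k x d random matrix with exactly one nonzero entry in each column,
    placed at a uniformly random row with a random sign; the value
    +/- sqrt(k) keeps E(r_ij^2) = 1 (our choice of normalization)."""
    rng = np.random.default_rng(rng)
    R = np.zeros((k, d))
    rows = rng.integers(0, k, size=d)           # one random row index per column
    signs = rng.choice([-1.0, 1.0], size=d)
    R[rows, np.arange(d)] = signs * np.sqrt(k)
    return R
```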
Preliminaries {#sec-preliminary}
=============
This section first briefly reviews JL lemma, and then evaluates the distance preservation of sparse random matrix over varying sparsity.
For easy reading, we begin by introducing some basic notations for this paper. A random matrix is denoted by $\mathbf{R} \in \mathbb{R}^{k\times d}$, $k<d$. $r_{ij}$ is used to represent the element of $\mathbf{R}$ at the $i$-th row and the $j$-th column, and $\mathbf{r}\in \mathbb{R}^{1\times d}$ indicates the row vector of $\mathbf{R}$. Considering the paper is concerned with binary classification, in the following study we define two samples $\mathbf{v} \in \mathbb{R}^{1\times d}$ and $\mathbf{w}\in \mathbb{R}^{1\times d}$, randomly drawn from two different patterns of high-dimensional datasets $\mathcal{V} \subset \mathbb{R}^{d}$ and $\mathcal{W} \subset \mathbb{R}^{d}$, respectively. The inner product between two vectors is typically written as $\langle \mathbf{v},\mathbf{w} \rangle$. To distinguish them from scalar variables, vectors are written in bold. In the proofs of the following lemmas, we typically use $\Phi(*)$ to denote the cumulative distribution function of $N(0,1)$. The minimal integer not less than $*$ and the maximum integer not larger than $*$ are denoted by $\lceil * \rceil$ and $\lfloor * \rfloor$, respectively.
Johnson-Lindenstrauss (JL) lemma {#subsec-jl}
--------------------------------
The distance preservation of random projection is supported by JL lemma. In the past decades, several variants of JL lemma have been proposed in [@DasGupta99; @Matousek08; @Arriaga06]. For the convenience of the proof of the following Corollary 2, here we recall the version of [@Arriaga06] in the following Lemma 1. According to Lemma 1, it can be observed that a random matrix satisfying JL lemma should have $\mathds{E}(r_{ij})=0$ and $\mathds{E}(r_{ij}^2)=1$.
[@Arriaga06] \[lemma-1\] Consider random matrix $\mathbf{R} \in \mathbb{R}^{k\times d}$, with each entry $r_{ij}$ chosen independently from a distribution that is symmetric about the origin with $\mathds{E}(r_{ij}^2)=1$. For any fixed vector $\mathbf{v}\in \mathbb{R}^d$, let $\mathbf{v}'=\frac{1}{\sqrt{k}}\mathbf{R}\mathbf{v}^{T}$.
- Suppose $B=\mathds{E}(r_{ij}^4)<\infty$. Then for any $\epsilon>0$,
$$\label{eq-prB}
\begin{aligned}
\text{\emph{Pr}}(\|\mathbf{v}'\|^2\leq(1-\epsilon)\|\mathbf{v}\|^2)\leq e^{-\frac{(\epsilon^2-\epsilon^3)k}{2(B+1)}}
\end{aligned}$$
- Suppose $\exists L>0$ such that for any integer $m>0$, $\mathds{E}(r_{ij}^{2m})\leq \frac{(2m)!}{2^mm!}L^{2m}$. Then for any $\epsilon >0$, $$\label{eq-prl}
\begin{aligned}
\text{ \emph{Pr}}(\|\mathbf{v}'\|^2\geq(1+\epsilon)L^2\|\mathbf{v}\|^2)&\leq((1+\epsilon)e^{-\epsilon})^{k/2}\\ &\leq e^{-(\epsilon^2-\epsilon^3)\frac{k}{4}}
\end{aligned}$$
Sparse random projection matrices {#subsec-sparse}
---------------------------------
Up to now, only a few random matrices have been theoretically proposed for random projection. They can be roughly classified into two typical classes. One is the Gaussian random matrix with entries i.i.d. drawn from $N(0,1)$, and the other is the sparse random matrix with elements satisfying the distribution below:
$$\label{eq-rij}
r_{ij}=\sqrt{q}\times
\left\{
\begin{array}{cl}
1&\text{with probability} ~1/2q \\
0& \text{with probability} ~1-1/q \\
-1&\text{with probability} ~1/2q
\end{array}
\right.$$
where $q$ is allowed to be 2, 3 [@Achlioptas03] or $\sqrt{d}$ [@Li06]. Apparently the larger $q$ indicates the higher sparsity.
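For concreteness, a sparse matrix with entries distributed as in formula (\[eq-rij\]) can be sampled as follows (an illustrative Python/NumPy sketch of ours, not code from the cited works):

```python
import numpy as np

def sparse_rp_matrix(k, d, q, rng=None):
    """Entries i.i.d. as in formula (eq-rij): sqrt(q) w.p. 1/(2q),
    -sqrt(q) w.p. 1/(2q), and 0 otherwise (e.g. q = 2, 3 or sqrt(d))."""
    rng = np.random.default_rng(rng)
    u = rng.random((k, d))
    R = np.zeros((k, d))
    R[u < 1.0 / (2.0 * q)] = np.sqrt(q)
    R[(u >= 1.0 / (2.0 * q)) & (u < 1.0 / q)] = -np.sqrt(q)
    return R

# The entries have zero mean and unit second moment, e.g.:
# R = sparse_rp_matrix(64, 4096, q=np.sqrt(4096)); R.mean(), (R ** 2).mean()
```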
Naturally, an interesting question arises: can we continue improving the sparsity of random projection? Unfortunately, as illustrated in Lemma \[lemma-2\], the concentration guaranteed by JL lemma will decrease as the sparsity increases. In other words, higher sparsity leads to weaker performance on distance preservation. However, as it will be disclosed in the following part, the classification tasks involving random projection are more sensitive to feature selection than to distance preservation.
\[lemma-2\] Suppose a class of random matrices $\mathbf{R} \in \mathbb{R}^{k\times d}$, with each entry $r_{ij}$ following the distribution in formula (\[eq-rij\]), where $q=k/s$ and $1\leq s \leq k$ is an integer. Then these matrices satisfy JL lemma to different degrees: a sparser matrix implies weaker distance preservation.
With formula (\[eq-rij\]), it is easy to derive that the proposed matrices satisfy the distribution defined in Lemma \[lemma-1\]. In this sense, they also obey JL lemma if the two constraints corresponding to formulas (\[eq-prB\]) and (\[eq-prl\]) can be further proved.\
For the first constraint, corresponding to formula (\[eq-prB\]):\
$$\label{eq-B}
\begin{aligned}
B&=\mathds{E}(r_{ij}^4)\\&=(\sqrt{k/s})^4\times(s/2k)+(-\sqrt{k/s})^4\times(s/2k)\\
&=k/s<\infty
\end{aligned}$$ then it is approved.\
For the second constraint, corresponding to formula (\[eq-prl\]):\
for any integer $m>0$, derive $\mathds{E}(r^{2m})=(k/s)^{m-1}$, and $$\frac{\mathds{E}(r_{ij}^{2m})}{(2m)!L^{2m}/(2^mm!)}=\frac{2^mm!k^{m-1}}{s^{m-1}(2m)!L^{2m}}.$$ Since $(2m)!\geq m!m^m$,\
$$\frac{\mathds{E}(r_{ij}^{2m})}{(2m)!L^{2m}/(2^mm!)}\leq\frac{2^mk^{m-1}}{s^{m-1}m^mL^{2m}},$$ let $L=(2k/s)^{1/2}\geq \sqrt{2}(k/s)^{(m-1)/2m}/\sqrt{m}$, further derive $$\frac{\mathds{E}(r_{ij}^{2m})}{(2m)!L^{2m}/(2^mm!)}\leq 1.$$ Thus $\exists L=(2k/s)^{1/2}>0$ such that $$\mathds{E}(r_{ij}^{2m})\leq\frac{(2m)!}{2^mm!}L^{2m}$$ for any integer $m>0$. Then the second constraint is also proved.\
Consequently, it is deduced that, as $s$ decreases, $B$ in formula (\[eq-B\]) will increase, and subsequently the error bound in formula (\[eq-prB\]) will get larger. This implies that the sparser the matrix is, the worse the JL property.
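The trend stated in the lemma can also be checked empirically: for a fixed unit vector $\mathbf{v}$, the spread of $\|\mathbf{R}\mathbf{v}^T\|^2/k$ around $\|\mathbf{v}\|^2$ grows as $s$ decreases. A rough Monte Carlo sketch of ours (sizes and trial counts are arbitrary placeholders):

```python
import numpy as np

def distortion_spread(k, d, s, trials=2000, rng=None):
    """Empirical standard deviation of ||R v||^2 / k for a fixed unit vector v,
    with entries +/- sqrt(k/s) taken with probability s/(2k) each (Lemma 2)."""
    rng = np.random.default_rng(rng)
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    q = k / s
    ratios = np.empty(trials)
    for i in range(trials):
        u = rng.random((k, d))
        R = np.where(u < 1.0 / (2.0 * q), np.sqrt(q),
                     np.where(u < 1.0 / q, -np.sqrt(q), 0.0))
        ratios[i] = np.sum((R @ v) ** 2) / k
    return ratios.std()

# The spread typically grows as s decreases (i.e. as the matrix gets sparser),
# most visibly when v has a few dominant coordinates:
# [distortion_spread(64, 1024, s) for s in (1, 4, 16, 64)]
```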
Theoretical Framework {#sec-theoryframework}
=====================
In this section, a theoretical framework is proposed to evaluate the feature selection performance of random matrices with varying sparsity. As it will be shown later, the feature selection performance can be simply observed if the product between the difference of two distinct high-dimensional vectors and the sampling/row vectors of the random matrix can be easily derived. In this case, we first need to know the distribution of the difference between two distinct high-dimensional vectors. To make the analysis possible, the distribution should be characterized with a unified model. Unfortunately, this goal seems hard to achieve perfectly due to the diversity and complexity of natural data. Therefore, without loss of generality, we typically assume an i.i.d. Gaussian distribution for the elements of the difference between two distinct high-dimensional vectors, as detailed in the following section \[subsec-difference\]. According to the law of large numbers, it can be inferred that the Gaussian distribution is reasonable for characterizing the distribution of high-dimensional vectors in magnitude. Similarly to most theoretical work attempting to model the real world, our assumption also suffers from an obvious limitation. Empirically, some of the real data elements, in particular the redundant (indiscriminative) elements, tend to be coherent to some extent, rather than being absolutely independent as we assume above. This imperfection probably limits the accuracy and applicability of our theoretical model. However, as will be detailed later, this problem can be ignored in our analysis, where the difference between pairwise redundant elements is assumed to be zero. This also explains why our theoretical proposal can be widely verified in the final experiments involving a great amount of real data. With the aforementioned assumption, in section \[subsec-product\], the product between the high-dimensional vector difference and the row vectors of random matrices is calculated and analyzed with respect to the varying sparsity of the random matrix, as detailed in Lemmas 3-5 and related remarks. Note that to make the paper more readable, the proofs of Lemmas 3-5 are included in the Appendices.
Distribution of the difference between two distinct high-dimensional vectors {#subsec-difference}
----------------------------------------------------------------------------
From the viewpoint of feature selection, the random projection is expected to maximize the difference between any two samples $\mathbf{v}$ and $\mathbf{w}$ from two different datasets $\mathcal{V}$ and $\mathcal{W}$, respectively. Usually the difference is measured with the Euclidean distance $\lVert\mathbf{R}\mathbf{z}^T\rVert_2$, $\mathbf{z}=\mathbf{v}-\mathbf{w}$. Then, owing to the mutual independence of the rows of $\mathbf{R}$, the search for a good random projection is equivalent to seeking the row vector $\hat{\mathbf{r}}$ such that $$\label{eq-rhat}
\hat{\mathbf{r}}=\operatorname*{arg\,max}_{\mathbf{r}}\{|\langle \mathbf{r},\mathbf{z} \rangle|\}.$$ Thus, in the following we only need to evaluate the row vectors of $\mathbf{R}$. For convenience of analysis, the two classes of high-dimensional data are further ideally divided into two parts, $\mathbf{v}=[\mathbf{v}^f ~\mathbf{v}^r]$ and $\mathbf{w}=[\mathbf{w}^f ~\mathbf{w}^r]$, where $\mathbf{v}^f$ and $\mathbf{w}^f$ denote the feature elements containing the discriminative information between $\mathbf{v}$ and $\mathbf{w}$, such that $\mathds{E}(v^f_i-w^f_i)\neq 0$, while $\mathbf{v}^r$ and $\mathbf{w}^r$ represent the redundant elements, such that $\mathds{E}(v^r_i-w^r_i)= 0$ with a tiny variance. Accordingly, $\mathbf{r}=[\mathbf{r}^f~ \mathbf{r}^r]$ and $\mathbf{z}=[\mathbf{z}^f ~\mathbf{z}^r]$ are also separated into two parts corresponding to the coordinates of feature elements and redundant elements, respectively. The task of random projection can then be reduced to maximizing $|\langle \mathbf{r}^f,\mathbf{z}^f \rangle|$, which implies that the redundant elements have no impact on the feature selection. Therefore, for simplicity of exposition, in the following the high-dimensional data is assumed to contain only feature elements unless otherwise stated, and the superscript $f$ is simply dropped. As for the intra-class samples, we can simply assume that all their elements are redundant, so the expected value of their difference equals 0, as derived above. This means that the problem of minimizing the intra-class distance need not be studied further. Hence, in the following we only consider the case of maximizing the inter-class distance, as described in formula .
To find the desired $\hat{\mathbf{r}}$ in formula , it is necessary to know the distribution of $\mathbf{z}$. However, in practice this distribution is hard to characterize, since the locations of the feature elements are usually unknown. As a result, we have to make a relaxed assumption on the distribution of $\mathbf{z}$. For a given real dataset, the values of $v_i$ and $w_i$ should be bounded. This allows us to assume that their difference $z_i$ is also bounded in amplitude and follows some unknown distribution. For the sake of generality, in this paper $z_i$ is regarded as approximately Gaussian in magnitude with a random binary sign. The distribution of $z_i$ can then be formulated as
$$\label{eq-zi}
z_i=\left\{
\begin{array}{cl}
x&\text{with probability} ~1/2 \\
-x&\text{with probability} ~1/2
\end{array}
\right.$$
where $x\in N(\mu,\sigma^2)$, $\mu$ is a positive number, and Pr$(x>0)=1-\epsilon$, $\epsilon=\Phi(-\frac{\mu}{\sigma})$ is a small positive number.
Product between high-dimensional vector and random sampling vector with varying sparsity {#subsec-product}
----------------------------------------------------------------------------------------
This subsection analyzes the feature selection performance of random row vectors with varying sparsity. For the sake of comparison, Gaussian random vectors are also evaluated. Recall that under the basic requirement of the JL lemma, that is $\mathds{E}(r_{ij})=0$ and $\mathds{E}(r_{ij}^2)=1$, the Gaussian matrix has elements drawn i.i.d. from $N(0,1)$, and the sparse random matrix has elements distributed as in formula with $q \in \{d/s:1\leq s\leq d, s\in \mathds{N}\}$.
Then from the following Lemmas 3-5, we present two crucial random projection results for the high-dimensional data with the feature difference element $|z_i|$ distributed as in formula :
- Random matrices achieve the best feature selection performance when only one feature element is sampled by each row vector; in other words, the solution to formula is obtained when $\mathbf{r}$ randomly has $s=1$ nonzero elements;
- The desired sparse random matrix mentioned above can also obtain better feature selection performance than Gaussian random matrices.
Note that, for better understanding, we first prove a relatively simple case of $z_i\in \{\pm\mu\}$ in Lemma \[lemma-3\], and then in Lemma \[lemma-4\] generalize to a more complicated case of $z_i$ distributed as in formula . The performance of Gaussian matrices on $z_i\in \{\pm\mu\}$ is obtained in Lemma \[lemma-5\].
\[lemma-3\] Let $\mathbf{r}=[r_{1},...,r_{d}]$ randomly have $1\leq s \leq d$ nonzero elements taking values $\pm\sqrt{d/s}$ with equal probability, and $\mathbf{z}=[z_1,...,z_d]$ with elements being $\pm\mu$ equiprobably, where $\mu$ is a positive constant. Given $f(\mathbf{r},\mathbf{z})=|\langle \mathbf{r},\mathbf{z}\rangle|$, there are three results regarding the expected value of $f(\mathbf{r},\mathbf{z})$:
$ \mathds{E}(f)=2\mu\sqrt{\frac{d}{s}}\frac{1}{2^s}\lceil \frac{s}{2}\rceil C_s^{\lceil \frac{s}{2}\rceil}$;
$\mathds{E}(f)|_{s=1}=\mu\sqrt{d}>\mathds{E}(f)|_{s>1}$;
$\mathop{\lim}\limits_{s\rightarrow \infty} \frac{1}{\sqrt{d}}\mathds{E}(f)\rightarrow \mu\sqrt{\frac{2}{\pi}}$.
Please see Appendix A.
[**Remark on Lemma \[lemma-3\]:**]{} This lemma discloses that the best feature selection performance is obtained when only one feature element is sampled by each row vector. In contrast, the performance tends to converge to a lower level as the number of sampled feature elements increases. In practice, however, the desired sampling process is hard to implement, because the feature locations are largely unknown. As will be detailed in the next section, what we can really implement is to sample only one feature element with high probability. Note that, with the proof of this lemma, it can also be shown that if $s$ is odd, $\mathds{E}(f)$ decreases quickly towards $\mu\sqrt{2d/\pi}$ with increasing $s$; in contrast, if $s$ is even, $\mathds{E}(f)$ increases quickly towards $\mu\sqrt{2d/\pi}$ as $s$ increases. But for any two adjacent values of $s$ larger than 1, the average $(\mathds{E}(f)|_{s}+\mathds{E}(f)|_{s+1})/2$ is very close to $\mu\sqrt{2d/\pi}$. For clarity, the values of $\mathds{E}(f)$ over varying $s$ are calculated and shown in Figure \[fig-lemma3\], where $\frac{1}{\mu\sqrt{d}}\mathds{E}(f)$ is plotted instead of $\mathds{E}(f)$, since only the dependence on $s$ is of interest. This behavior of $\mathds{E}(f)$ ensures that one can still outperform other choices by sampling $s=1$ element with relatively high probability, even when values of $s$ slightly larger than 1 also occur.
[Figure \[fig-lemma3\], panels (a) and (b): $\frac{1}{\mu\sqrt{d}}\mathds{E}(f)$ over varying $s$.]
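For reference, the closed-form expression of Lemma \[lemma-3\] is easy to evaluate directly; the short sketch below (plain Python) reproduces the behavior shown in Figure \[fig-lemma3\]: the maximum at $s=1$, the odd/even oscillation, and the convergence to $\sqrt{2/\pi}$.

```python
from math import comb, ceil, sqrt, pi

def normalized_Ef(s):
    # E(f)/(mu*sqrt(d)) = 2*ceil(s/2)*C(s, ceil(s/2)) / (2^s * sqrt(s)),  from Lemma 3
    c = ceil(s / 2)
    return 2 * c * comb(s, c) / (2 ** s * sqrt(s))

for s in [1, 2, 3, 4, 5, 10, 11, 100, 101]:
    print(f"s = {s:3d}   E(f)/(mu*sqrt(d)) = {normalized_Ef(s):.4f}")
print(f"limit sqrt(2/pi)        = {sqrt(2 / pi):.4f}")
```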
\[lemma-4\] Let $\mathbf{r}=[r_{1},...,r_{d}]$ randomly have $1\leq s \leq d$ nonzero elements taking values $\pm\sqrt{d/s}$ with equal probability, and $\mathbf{z}=[z_1,...,z_d]$ with elements distributed as in formula . Given $f(\mathbf{r},\mathbf{z})=|\langle \mathbf{r},\mathbf{z}\rangle|$, it is derived that:
$$\mathds{E}(f)|_{s=1}>\mathds{E}(f)|_{s>1}$$ if $(\frac{9}{8})^{\frac{3}{2}}[\sqrt{\frac{2}{\pi}}+(1+\frac{\sqrt{3}}{4})\frac{2}{\pi}(\frac{\mu}{\sigma})^{-1}]+2\Phi(-\frac{\mu}{\sigma})\leq 1$.
Please see Appendix B.
[**Remark on Lemma \[lemma-4\]:**]{} This lemma extends Lemma \[lemma-3\] to a more general case where $|z_i|$ is allowed to vary in some range. In other words, there is an upper bound on $\frac{\sigma}{\mu}$ for $\mathds{E}(f)|_{s=1}>\mathds{E}(f)|_{s>1}$ to hold, since $\Phi(-\frac{\mu}{\sigma})$ decreases monotonically with respect to $\frac{\mu}{\sigma}$. Clearly, a larger upper bound on $\frac{\sigma}{\mu}$ allows more variation of $|z_i|$. In practice the true upper bound should be larger than the one derived here as a sufficient condition.
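To make this sufficient condition concrete, the admissible threshold on $\sigma/\mu$ can be located numerically; a small sketch (plain Python, bisection on the monotonically increasing left-hand side) gives roughly $\sigma/\mu \leq 0.044$, and, as noted above, the true admissible range is expected to be larger.

```python
from math import sqrt, pi, erfc

def Phi(x):                           # standard normal CDF
    return 0.5 * erfc(-x / sqrt(2.0))

def lhs(t):                           # t = sigma/mu, left-hand side of the condition
    return (9 / 8) ** 1.5 * (sqrt(2 / pi) + (1 + sqrt(3) / 4) * (2 / pi) * t) \
        + 2 * Phi(-1 / t)

lo, hi = 1e-6, 1.0                    # lhs(lo) < 1 < lhs(hi), and lhs is increasing in t
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lhs(mid) <= 1.0 else (lo, mid)
print(f"Lemma 4's sufficient condition holds for sigma/mu <= {lo:.4f}")
```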
\[lemma-5\] Let $\mathbf{r}=[r_{1},...,r_{d}]$ have elements drawn i.i.d. from $N(0,1)$, and $\mathbf{z}=[z_1,...,z_d]$ with elements being $\pm\mu$ equiprobably, where $\mu$ is a positive constant. Given $f(\mathbf{r},\mathbf{z})=|\langle \mathbf{r}, \mathbf{z}\rangle|$, its expected value is $\mathds{E}(f)=\mu\sqrt{\frac{2 d}{\pi}}$.
Please see Appendix C.
[**Remark on Lemma \[lemma-5\]:**]{} Comparing this lemma with Lemma \[lemma-3\], the Gaussian row vector clearly attains the same feature selection level as a sparse row vector with relatively large $s$. This explains why in practice sparse random matrices usually present classification performance comparable to the Gaussian matrix. More importantly, it implies that the sparsest sampling process described in Lemma \[lemma-3\] should outperform the Gaussian matrix on feature selection.
Proposed sparse random matrix {#sec-proposal}
==============================
The lemmas of the previous section have shown that the best feature selection performance is obtained if only one feature element is sampled by each row vector of the random matrix. It is now interesting to know whether this condition can be satisfied in the practical setting, where the high-dimensional data consists of both feature elements and redundant elements, namely $\mathbf{v}=[\mathbf{v}^f ~\mathbf{v}^r]$ and $\mathbf{w}=[\mathbf{w}^f ~\mathbf{w}^r]$. According to the theoretical condition above, the row vector $\mathbf{r}=[\mathbf{r}^f ~ \mathbf{r}^r]$ achieves the best feature selection only when $||\mathbf{r}^f||_0=1$, where the quasi-norm $\ell_0$ counts the number of nonzero elements in $\mathbf{r}^f$. Let $\mathbf{r}^f\in\mathds{R}^{d_f}$ and $\mathbf{r}^r\in\mathds{R}^{d_r}$, where $d=d_f+d_r$. Then the desired row vector should have $d/d_f$ uniformly distributed nonzero elements such that $\mathds{E}(||\mathbf{r}^f||_0)=1$. However, in practice the desired distribution for the row vectors is often hard to determine, since for a real dataset the number of feature elements is usually unknown.
In this sense, we are motivated to propose a general distribution for the matrix elements, such that $||\mathbf{r}^f||_0=1$ holds with high probability even when the feature distribution is unknown. In other words, the random matrix should hold the distribution maximizing the ratio $\text{Pr}(||\mathbf{r}^f||_0=1)/\text{Pr}(||\mathbf{r}^f||_0\in\{2,3,...,d_f\})$. In practice, the desired distribution implies that the random matrix has exactly one nonzero position per column, as derived below. Assume a random matrix $\mathbf{R} \in \mathbb{R}^{k\times d} $ randomly holding $1\leq s'\leq k$ nonzero elements per *column*, equivalently $s'd/k$ nonzero elements per *row*; then one can derive that
$$\label{eq-pr}
\begin{aligned}
&\text{Pr}(||\mathbf{r}^f||_0=1)/\text{Pr}(||\mathbf{r}^f||_0\in\{2,3,...,d_f\})\\&=\frac{\text{Pr}(||\mathbf{r}^f||_0=1)}{1-\text{Pr}(||\mathbf{r}^f||_0=0)-\text{Pr}(||\mathbf{r}^f||_0=1)}\\
&=\frac{C_{d_f}^1C_{d_r}^{s'd/k-1}}{C_d^{s'd/k}-C_{d_r}^{s'd/k}-C_{d_f}^1C_{d_r}^{s'd/k-1}}\\
&=\frac{d_fd_r!}{\frac{d!(d_r-s'd/k+1)!}{s'd/k(d-s'd/k)!}-\frac{d_r!(d_r-s'd/k+1)}{s'd/k}-d_fd_r!}
\end{aligned}$$
From the last equality in formula , it can be observed that increasing $s'd/k$ reduces the value of formula . To maximize this value, we must set $s'=1$. This indicates that the desired random matrix has exactly one nonzero element per column.
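As a quick illustration of formula , the ratio can be evaluated directly for a few column weights $s'$; the sizes $d$, $d_f$ and $k$ below are hypothetical and only serve to show that the ratio decreases with $s'$ and is maximized at $s'=1$.

```python
from math import comb

d, d_f, k = 2000, 100, 200            # hypothetical sizes for illustration
d_r = d - d_f

def ratio(s_col):
    w = s_col * d // k                # row weight s' * d / k
    one_hit = d_f * comb(d_r, w - 1)                   # exactly one feature coordinate sampled
    multi_hit = comb(d, w) - comb(d_r, w) - one_hit    # at least two feature coordinates sampled
    return one_hit / multi_hit

for s_col in [1, 2, 3, 4, 5]:
    print(f"s' = {s_col}:  Pr(|r^f|_0 = 1) / Pr(|r^f|_0 >= 2) = {ratio(s_col):.3f}")
```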
The proposed random matrix with exactly one nonzero element per column presents two obvious advantages, as detailed below.
- In terms of complexity, the proposed matrix clearly presents much higher sparsity than existing random projection matrices. Note that, theoretically, the very sparse random matrix with $q=\sqrt{d}$ [@Li06] has higher sparsity than the proposed matrix when $k<\sqrt{d}$. In practice, however, the case $k<\sqrt{d}$ is usually not of interest, due to the weak performance caused by the large compression rate $d/k$ ($>\sqrt{d}$).
- In terms of performance, it can be argued that the proposed matrix outperforms other, denser matrices if the projection dimension $k$ is not much smaller than the number $d_f$ of feature elements contained in the high-dimensional vector. To be specific, from Figure \[fig-lemma3\] it can be observed that dense matrices with column weight $s'>1$ share comparable feature selection performance, because as $s'$ increases they tend to sample more than one feature element (namely $||\mathbf{r}^f||_0>1$) with higher probability. The proposed matrix with $s'=1$ then performs better than them if $k$ ensures $||\mathbf{r}^f||_0=1$ with high probability, or equivalently if the ratio $\text{Pr}(||\mathbf{r}^f||_0=1)/ \text{Pr}(||\mathbf{r}^f||_0\in\{2,3,...,d_f\})$ is relatively large. As shown in formula , this condition is better satisfied as $k$ increases. Conversely, as $k$ decreases, the feature selection advantage of the proposed matrix degrades. Recall that the proposed matrix is weaker than denser matrices on distance preservation, as demonstrated in section \[subsec-sparse\]. This means that the proposed matrix will perform worse than others when its feature selection advantage is not obvious. In other words, there should exist a lower bound on $k$ that ensures the performance advantage of the proposed matrix, which is also verified in the following experiments. It can be roughly estimated that this lower bound should be on the order of $d_f$, since for the proposed matrix with column weight $s'=1$, setting $k=d_f$ leads to $\mathds{E}(||\mathbf{r}^f||_0)=d/k\times d_f/d=1$. In practice, the performance advantage can seemingly be maintained for a relatively small $k(<d_f)$; for instance, in the following experiments on synthetic data, the lower bound on $k$ is as small as $d_f/20$. This phenomenon can be explained by the fact that, to obtain a performance advantage, the probability $\text{Pr}(||\mathbf{r}^f||_0=1)$ is only required to be relatively large rather than equal to 1, as demonstrated in the remark on Lemma \[lemma-3\].
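A minimal construction of the proposed matrix (exactly one nonzero per column, placed at a uniformly random row with a random sign) might look as follows, assuming NumPy; the nonzero magnitude $\sqrt{k}$ is one normalization consistent with the row-vector convention used above, giving $\mathds{E}(r_{ij})=0$ and $\mathds{E}(r_{ij}^2)=1$.

```python
import numpy as np

def sparsest_matrix(k, d, rng):
    # exactly one nonzero per column, at a uniformly random row, with a random sign;
    # magnitude sqrt(k) gives zero mean and unit variance per entry
    R = np.zeros((k, d))
    rows = rng.integers(0, k, size=d)
    R[rows, np.arange(d)] = np.sqrt(k) * rng.choice([-1.0, 1.0], size=d)
    return R

rng = np.random.default_rng(0)
R = sparsest_matrix(200, 2000, rng)
print((R != 0).sum(axis=0).max())        # column weight: always 1
print((R != 0).sum(axis=1).mean())       # average row weight: d/k = 10
```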
Experiments {#sec-experiment}
============
Setup
-----
This section verifies the feature selection advantage of the proposed, currently sparsest matrix (StM) over other popular matrices, by conducting binary classification on both synthetic and real data. The synthetic data with labeled feature elements is provided specifically to observe the relation between the projection dimension and the number of features, as well as the impact of redundant elements. The real data involves three typical dataset types in the area of dimensionality reduction: face images, DNA microarrays and text documents. As the binary classifier, the classical support vector machine (SVM) based on Euclidean distance is adopted. For comparison, we test three popular random matrices: the Gaussian random matrix (GM), the sparse random matrix (SM) as in formula with $q=3$ [@Achlioptas03], and the very sparse random matrix (VSM) with $q=\sqrt{d}$ [@Li06].
The simulation parameters are as follows. It is known that repeated random projection tends to improve feature selection, so here each classification decision is obtained by voting over 5 independent random projections [@Fern03]. The classification accuracy at each projection dimension $k$ is derived by taking the **average** of **100000** simulation runs. In each simulation, the four matrices are tested on the same samples. The projection dimension $k$ decreases uniformly from the high dimension $d$. Moreover, for datasets containing more than two classes of samples, the SVM classifier randomly selects two classes to conduct binary classification in each simulation. For each class of data, one half of the samples are randomly selected for training, and the rest for testing.
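A sketch of the evaluation pipeline for a single projection dimension $k$ is given below, assuming scikit-learn is available and that the two class labels are 0 and 1; the linear kernel and the helper names are assumptions of this sketch, while dataset loading and the train/test split are omitted.

```python
import numpy as np
from sklearn.svm import SVC                       # assuming scikit-learn is available

def stm(k, d, rng):
    # proposed matrix: one nonzero (+- sqrt(k)) per column at a random row
    R = np.zeros((k, d))
    R[rng.integers(0, k, d), np.arange(d)] = np.sqrt(k) * rng.choice([-1.0, 1.0], d)
    return R

def voted_accuracy(X_tr, y_tr, X_te, y_te, k, votes=5, seed=0):
    # train one linear SVM per random projection and majority-vote the predictions
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(votes):
        R = stm(k, X_tr.shape[1], rng)
        clf = SVC(kernel="linear").fit(X_tr @ R.T, y_tr)
        preds.append(clf.predict(X_te @ R.T))
    vote = (np.mean(preds, axis=0) > 0.5).astype(int)   # labels assumed to be {0, 1}
    return float(np.mean(vote == y_te))
```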
Synthetic data experiments
--------------------------
### Data generation
The synthetic data is developed to evaluate the two factors as follows:
- the relation between the lower bound of projection dimension $k$ and the feature dimension $d_f$;
- the negative impact of redundant elements, whose pairwise differences are ideally assumed to be zero in the previous theoretical proofs.
To this end, two classes of synthetic data with $d_f$ feature elements and $d-d_f$ redundant elements are generated in two steps:
- randomly build a vector $\tilde{\mathbf{v}}\in\{\pm1\}^d$, then define a vector $\tilde{\mathbf{w}}$ distributed as $\tilde{w}_i=-\tilde{v}_i$, if $1\leq i\leq d_f$, and $\tilde{w}_i=\tilde{v}_i$, if $d_f< i\leq d$;
- generate two classes of datasets $\mathcal{V}$ and $\mathcal{W}$ by i.i.d sampling $v^f_i\in N(\tilde{v}_i,\sigma_f^2)$ and $w^f_i\in N(\tilde{w}_i,\sigma_f^2)$, if $1\leq i\leq d_f$; and $v^r_i\in N(\tilde{v}_i,\sigma_r^2)$ and $w^r_i\in N(\tilde{w}_i,\sigma_r^2)$, if $d_f< i\leq d$.
Subsequently, the distributions of the pointwise differences can be approximately derived as $|v_i^f-w_i^f|\in N(2,2\sigma_f^2)$ for feature elements and $(v_i^r-w_i^r)\in N(0,2\sigma_r^2)$ for redundant elements, respectively. To stay close to reality, we introduce some unreliability for both feature and redundant elements by adopting relatively large variances. Precisely, in the simulation $\sigma_f$ is fixed to 8 and $\sigma_r$ varies in the set $\{8, 12, 16\}$. Note that the probability of $(v_i^r-w_i^r)$ being close to zero decreases as $\sigma_r$ increases, so increasing $\sigma_r$ challenges our theoretical conjecture, which was derived under the assumption $(v_i^r-w_i^r)=0$. As for the size of the dataset, the data dimension $d$ is set to 2000, and the feature dimension $d_f=1000$. Each dataset consists of 100 randomly generated samples.
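A sketch of this generation procedure (assuming NumPy) is given below; it follows the two steps above with the stated sizes and variances.

```python
import numpy as np

def make_synthetic(d=2000, d_f=1000, sigma_f=8.0, sigma_r=8.0, n=100, seed=0):
    # step 1: template vectors that differ in sign on the first d_f (feature) coordinates
    rng = np.random.default_rng(seed)
    v_bar = rng.choice([-1.0, 1.0], size=d)
    w_bar = v_bar.copy()
    w_bar[:d_f] *= -1.0
    # step 2: n noisy samples per class, with separate variances on feature/redundant parts
    sigma = np.where(np.arange(d) < d_f, sigma_f, sigma_r)
    V = v_bar + sigma * rng.standard_normal((n, d))
    W = w_bar + sigma * rng.standard_normal((n, d))
    return V, W

V, W = make_synthetic(sigma_r=12.0)
print(V.shape, W.shape)                    # (100, 2000) (100, 2000)
```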
### Results
Table \[tab-synthetic\] shows the classification performance of the four types of matrices over evenly varying projection dimension $k$. It is clear that the proposal always outperforms the others when $k>200$ (equivalently, the compression ratio $k/d>0.1$). This result reveals two positive findings. First, the proposed matrix preserves an obvious advantage over the others even when $k$ is relatively small; for instance, $k/d_f$ is allowed to be as small as 1/20 when $\sigma_r=8$. Second, even with the interference of redundant elements, the proposed matrix still outperforms the others, which implies that the previous theoretical result is also applicable to the real case where the redundant elements cannot simply be neglected.
Real data experiments
---------------------
Three types of representative high-dimensional datasets are tested for random projection over evenly varying projection dimension $k$. The datasets are first briefly introduced, and then the results are illustrated and analyzed. Note that the simulation is designed to compare the feature selection performance of different random projections, rather than to obtain the best absolute performance. To reduce the simulation load, the original high-dimensional data is therefore uniformly downsampled to a relatively low dimension. Precisely, the face image, DNA and text data are reduced to dimensions 1200, 2000 and 3000, respectively. Note that, in terms of the JL lemma, the original high dimension can be reduced to arbitrary values (not limited to 1200, 2000 or 3000), since theoretically the distance preservation of random projection is independent of the original data dimension [@Achlioptas03].
### Datasets
- Face image
- AR [@Martinez98] : As in [@Martinez01], a subset of 2600 frontal faces from 50 males and 50 females are examined. For some persons, the faces were taken at different times, varying the lighting, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). There are 6 faces with dark glasses and 6 faces partially disguised by scarfs among 26 faces per person.
- Extended Yale B [@Georghiades01; @Lee05]: This dataset includes about 2414 frontal faces of 38 persons, which suffer varying illumination changes.
- FERET [@Phillips98]: This dataset consists of more than 10000 faces from more than 1000 persons taken in largely varying circumstances. The database is further divided into several sets which are formed for different evaluations. Here we evaluate the 1984 *frontal* faces of 992 persons each with 2 faces separately extracted from sets *fa* and *fb*.
- GTF [@Nefian00]: In this dataset, 750 images from 50 persons were captured at different scales and orientations under variations in illumination and expression. So the cropped faces suffer from serious pose variation.
- ORL [@Samaria94]: It contains 40 persons each with 10 faces. Besides slightly varying lighting and expressions, the faces also undergo slight changes on pose.
$k$ 30 60 **120** 240 360 480 600
-- ----- ----------- ----------- ----------- ----------- ----------- ----------- -----------
GM **98.67** 99.04 99.19 99.24 99.30 99.28 99.33
SM 98.58 99.04 99.21 99.25 99.31 99.30 99.32
VSM 98.62 99.07 99.20 99.27 99.30 99.31 99.34
StM 98.64 **99.10** **99.24** **99.35** **99.48** **99.50** **99.58**
GM 97.10 **98.06** 98.39 98.49 98.48 98.45 98.47
SM 97.00 98.05 98.37 98.49 98.48 98.45 98.47
VSM 97.12 98.05 98.36 98.50 98.48 98.45 98.48
StM **97.15** **98.06** **98.40** **98.54** **98.54** **98.57** **98.59**
GM 86.06 86.42 86.31 86.50 86.46 86.66 86.57
SM 86.51 86.66 87.26 88.01 88.57 89.59 90.13
VSM **87.21** 87.61 89.34 91.14 92.31 93.75 93.81
StM 87.11 **88.74** **92.04** **95.38** **96.90** **97.47** **97.47**
GM 96.67 97.48 97.84 98.06 98.09 98.10 98.16
SM 96.63 97.52 97.85 98.06 98.09 98.13 98.16
VSM **96.69** **97.57** 97.87 98.10 98.13 98.14 98.16
StM 96.65 97.51 **97.94** **98.25** **98.40** **98.43** **98.53**
GM 94.58 95.69 96.31 96.40 96.54 96.51 96.49
SM 94.50 95.63 96.36 96.38 96.48 96.47 96.48
VSM 94.60 **95.77** 96.33 96.35 96.53 96.55 96.46
StM **94.64** 95.75 **96.43** **96.68** **96.90** **97.04** **97.05**
- DNA microarray
- Colon [@Alon08061999]: This is a dataset consisting of 40 colon tumors and 22 normal colon tissue samples. 2000 genes with highest intensity across the samples are considered.
- ALML [@Golub15101999]: This dataset contains 25 samples taken from patients suffering from acute myeloid leukemia (AML) and 47 samples from patients suffering from acute lymphoblastic leukemia (ALL). Each sample is expressed with 7129 genes.
- Lung [@Beer081012] : This dataset contains 86 lung tumor and 10 normal lung samples. Each sample holds 7129 genes.
- Text document [@Caideng09][^1]
- TDT2: The recently modified dataset includes 96 categories of total 10212 documents/samples. Each document is represented with vector of length 36771. This paper adopts the first 19 categories each with more than 100 documents, such that each category is tested with 100 randomly selected documents.
- 20Newsgroups (version 1): There are 20 categories of 18774 documents in this dataset. Each document has vector dimension 61188. Since the documents are not equally distributed in the 20 categories, we randomly select 600 documents for each category, which is nearly the maximum number we can assign to all categories.
- RCV1: The original dataset contains 9625 documents each with 29992 distinct words, corresponding to 4 categories with 2022, 2064, 2901, and 2638 documents respectively. To reduce computation, this paper randomly selects only 1000 documents for each category.
### Results
Tables \[tab-face\]-\[tab-text\] illustrate the classification performance of the four classes of matrices on three typical kinds of high-dimensional data: face images, DNA microarrays and text documents. It can be observed that all results are consistent with the theoretical conjecture stated in section \[sec-proposal\]. Precisely, the proposed matrix always performs better than the others if $k$ is larger than some threshold, i.e. $k>120$ (equivalently, compression ratio $k/d>1/10$) for all face image data, $k>100$ ($k/d>1/20$) for all DNA data, and $k>600$ ($k/d>1/5$) for all text data. Note that for some individual datasets we can in fact obtain smaller thresholds than the uniform ones described above, which means that for these datasets our performance advantage is already ensured at lower projection dimensions. It is worth noting that the performance gain usually varies across the types of data: for most data the gain is on the level of around $1\%$, while in some special cases it can be as large as around $5\%$. Moreover, the proposed matrix still presents performance comparable to the others (usually within $1\%$ of the best results) even when $k$ is smaller than the lower threshold described above. This implies that, regardless of the value of $k$, the proposed matrix is always valuable due to its lower complexity and competitive performance. In short, the extensive experiments on real data verify the performance advantage of the theoretically proposed random matrix, as well as the conjecture that this advantage holds only when the projection dimension $k$ is large enough.
Conclusion and Discussion {#sec-conclusion}
=========================
This paper has proved that random projection achieves its best feature selection performance when only one feature element of the high-dimensional data is considered at each sampling. In practice, however, the number of feature elements is usually unknown, so the aforementioned optimal sampling process is hard to implement. Based on the principle of achieving the optimal sampling process with high probability, we propose a class of sparse random matrices with exactly one nonzero element per column, which is expected to outperform other, denser random projection matrices if the projection dimension is not much smaller than the number of feature elements. Recall that, to make the theoretical analysis tractable, we have assumed that the elements of the high-dimensional data are mutually independent, which obviously cannot be fully satisfied by real data, especially for the redundant elements. Although the impact of redundant elements is reasonably avoided in our analysis, we cannot ensure that all analyzed feature elements are exactly independent in practice. This defect might affect the applicability of our theoretical proposal to some extent, whereas empirically the negative impact seems to be negligible, as indicated by the experiments on synthetic data. To validate the feasibility of the theoretical proposal, extensive classification experiments were conducted on various real data, including face images, DNA microarrays and text documents. As expected, the proposed random matrix shows better performance than other, denser matrices when the projection dimension is sufficiently large; otherwise, it performs comparably to the others. This result suggests that, for random projection applied to classification, the proposed currently sparsest random matrix is much more attractive than other, denser random matrices in terms of both complexity and performance.
Appendix A. {#apd-a .unnumbered}
===========
[**Proof of Lemma \[lemma-3\]**]{}
Due to the sparsity of $\mathbf{r}$ and the symmetric distributions of both $r_{j}$ and $z_j$, the function $f(\mathbf{r},\mathbf{z})$ can be equivalently transformed into the simpler form $f(x)=\mu\sqrt{\frac{d}{s}}|\sum_{i=1}^{s}x_i|$ with $x_i$ being $\pm1$ equiprobably. With this simplified form, the three results of the lemma are proved sequentially below.
- First, it can be easily derived that $$\mathds{E}(f(x))=\mu\sqrt{\frac{d}{s}}\frac{1}{2^s}\sum_{i=0}^{s}(C_s^i|s-2i|)$$ so computing $\mathds{E}(f(x))$ reduces to calculating $\sum_{i=0}^{s}(C_s^i|s-2i|)$, which can be deduced as
$$\sum_{i=0}^{s}(C_s^i|s-2i|)=\left\{
\begin{array}{cl}
2sC_{s-1}^{\frac{s}{2}-1}& if~ s ~is ~even\\[5pt]
2sC_{s-1}^{\frac{s-1}{2}}& if~ s ~is~ odd\\
\end{array}
\right.$$ by summing the piecewise function
$$C_s^i|s-2i|=\left\{
\begin{array}{ll}
sC_{s-1}^{0}& if~ i=0 \\[6pt]
sC_{s-1}^{s-i-1}-sC_{s-1}^{i-1}& if~ 1\leq i \leq \frac{s}{2}\\[6pt]
sC_{s-1}^{i-1}-sC_{s-1}^{s-i-1}& if~ \frac{s}{2}< i< s \\[6pt]
sC_{s-1}^{s-1}& if~ i=s\\
\end{array}
\right.$$ Further, with $C_{s-1}^{i-1}=\frac{i}{s}C_s^i$, it can be deduced that $$\sum_{i=0}^{s}(C_s^i|s-2i|)=2\lceil \frac{s}{2}\rceil C_s^{\lceil \frac{s}{2}\rceil}$$ Then the first result is obtained as
$$\mathds{E}(f)=2\mu\sqrt{\frac{d}{s}}\frac{1}{2^s}\lceil \frac{s}{2}\rceil C_s^{\lceil \frac{s}{2}\rceil}$$
- Following the proof above, it is clear that $ \mathds{E}(f(x))|_{s=1}=f(x)|_{s=1}=\mu\sqrt{d}$. As for $ \mathds{E}(f(x))|_{s>1}$, it is evaluated under two cases:
- if $s$ is odd,$$\frac{\mathds{E}(f(x))|_{s}}{\mathds{E}(f(x))|_{s-2}}=\frac{\frac{2}{\sqrt{s}}\frac{1}{2^s} \frac{s+1}{2} C_s^{ \frac{s+1}{2}}}{\frac{2}{\sqrt{s-2}}\frac{1}{2^{s-2}} \frac{s-1}{2} C_{s-2}^{ \frac{s-1}{2}}}=\frac{\sqrt{s(s-2)}}{s-1}<1$$ namely, $\mathds{E}(f(x))$ decreases monotonically with respect to $s$. Clearly, in this case $\mathds{E}(f(x))|_{s=1}> \mathds{E}(f(x))|_{s>1}$;
- if $s$ is even, $$\frac{\mathds{E}(f(x))|_{s}}{\mathds{E}(f(x))|_{s-1}}=\frac{\frac{2}{\sqrt{s}}\frac{1}{2^s} \frac{s}{2} C_s^{ \frac{s}{2}}}{\frac{2}{\sqrt{s-1}}\frac{1}{2^{s-1}} \frac{s}{2} C_{s-1}^{ \frac{s}{2}}}=\sqrt{\frac{s-1}{s}}<1$$ which means $\mathds{E}(f(x))|_{s=1}> \mathds{E}(f(x))|_{s>1}$, since $s-1$ is odd number for which $\mathds{E}(f(x))$ monotonically decreases.
Therefore the proof of the second result is completed.
- The proof of the third result is developed by employing Stirling’s approximation [@Bruijn81]$$s!=\sqrt{2\pi s}(\frac{s}{e})^se^{\lambda_s},~~~ 1/(12s+1)<\lambda_s<1/(12s).$$ Precisely, with the formula of $\mathds{E}(f(x))$, it can be deduced that
- if $s$ is even, $$\mathds{E}(f(x))=\mu\sqrt{ds}\frac{1}{2^s}\frac{s!}{\frac{s}{2}!\frac{s}{2}!}=\mu\sqrt{\frac{2d}{\pi}}e^{\lambda_s-2\lambda_{\frac{s}{2}}}$$
- if $s$ is odd, $$\mathds{E}(f(x))=\mu\sqrt{d}\frac{s+1}{\sqrt{s}}\frac{1}{2^s}\frac{s!}{\frac{s+1}{2}!\frac{s-1}{2}!}=\mu\sqrt{\frac{2d}{\pi}}(\frac{s^2}{s^2-1})^{\frac{s}{2}}e^{\lambda_s-\lambda_{\frac{s+1}{2}}-\lambda_{\frac{s-1}{2}}}$$
Clearly $\mathop{\lim}\limits_{s\rightarrow \infty }\frac{1}{\sqrt{d}}\mathds{E}(f(x))\rightarrow \mu\sqrt{\frac{2}{\pi}}$ holds, whether $s$ is even or odd.
Appendix B. {#apd-b .unnumbered}
===========
[**Proof of Lemma \[lemma-4\]**]{}
Due to the sparsity of $\mathbf{r}$ and the symmetric property of both $r_{j}$ and $z_{j}$, it is easy to derive that $f(\mathbf{r},\mathbf{z})=|\langle \mathbf{r}, \mathbf{z}\rangle|=\sqrt{\frac{d}{s}}|\sum_{j=1}^{s}z_j|$. This simplified formula will be studied in the following proof. To present a readable proof, we first review the distribution shown in formula $$z_j\sim\left\{
\begin{array}{lr}
N(\mu,\sigma^2) & \text{with probability}~ 1/2\\
N(-\mu,\sigma^2) & \text{with probability} ~1/2\\
\end{array}
\right.$$ where for $x\in N(\mu,\sigma^2)$, $\text{Pr}(x>0)=1-\epsilon$, $\epsilon=\Phi(-\frac{\mu}{\sigma})$ is a tiny positive number. For notational simplicity, the subscript of the random variable $z_j$ is dropped in the following proof. To ease the proof of the lemma, we first derive the expected value of $|x|$ with $x\sim N(\mu,\sigma^2)$:
$$\begin{aligned}
\mathds{E}(|x|)&=\int_{-\infty}^{\infty}\frac{|x|}{\sqrt{2\pi}\sigma}e^{\frac{-(x-\mu)^2}{2\sigma^2}}dx\\
&=\int_{-\infty}^{0}\frac{-x}{\sqrt{2\pi}\sigma}e^{\frac{-(x-\mu)^2}{2\sigma^2}}dx+\int_{0}^{\infty}\frac{x}{\sqrt{2\pi}\sigma}e^{\frac{-(x-\mu)^2}{2\sigma^2}}dx\\
&=-\int_{-\infty}^{0}\frac{x-\mu}{\sqrt{2\pi}\sigma}e^{\frac{-(x-\mu)^2}{2\sigma^2}}dx+\int_{0}^{\infty}\frac{x-\mu}{\sqrt{2\pi}\sigma}e^{\frac{-(x-\mu)^2}{2\sigma^2}}dx\\
&+\mu\int_{0}^{\infty}\frac{1}{\sqrt{2\pi}\sigma}e^{\frac{-(x-\mu)^2}{2\sigma^2}}dx-\mu\int_{-\infty}^{0}\frac{1}{\sqrt{2\pi}\sigma}e^{\frac{-(x-\mu)^2}{2\sigma^2}}dx\\
&=\frac{\sigma}{\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}|_{-\infty}^0-\frac{\sigma}{\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}|^{\infty}_0+\mu \text{Pr}(x>0)-\mu \text{Pr}(x<0)\\
&=\sqrt{\frac{2}{\pi}}\sigma e^{-\frac{\mu^2}{2\sigma^2}}+\mu(1-2\text{Pr}(x<0))\\
&=\sqrt{\frac{2}{\pi}}\sigma e^{-\frac{\mu^2}{2\sigma^2}}+\mu(1-2\Phi(-\frac{\mu}{\sigma}))
\end{aligned}$$ which will be used repeatedly in the following proof. The proof of this lemma is then separated into three parts as follows.
- This part presents the expected value of $f(\mathbf{r},\mathbf{z})$ for the cases $s=1$ and $s>1$.
- if $s=1$, $f(\mathbf{r},\mathbf{z})=\sqrt{d}|z|$; with the probability density function of $z$: $$p(z)=\frac{1}{2}\frac{1}{\sqrt{2\pi}\sigma}e^{\frac{-(z-\mu)^2}{2\sigma^2}}+\frac{1}{2}\frac{1}{\sqrt{2\pi}\sigma}e^{\frac{-(z+\mu)^2}{2\sigma^2}}$$ one can derive that $$\begin{aligned}
\mathds{E}(|z|)&=\int_{-\infty}^{\infty}|z|p(z)d_{z}\\
&=\frac{1}{2}\int_{-\infty}^{\infty}\frac{|z|}{\sqrt{2\pi}\sigma}e^{\frac{-(z-\mu)^2}{2\sigma^2}}dz+\frac{1}{2}\int_{-\infty}^{\infty}\frac{|z|}{\sqrt{2\pi}\sigma}e^{\frac{-(z+\mu)^2}{2\sigma^2}}dz\\
\end{aligned}$$ with the previous result on $\mathds{E}(|x|)$, it is further deduced that $$\mathds{E}(|z|)=\sqrt{\frac{2}{\pi}}\sigma e^{-\frac{\mu^2}{2\sigma^2}}+\mu(1-2\Phi(-\frac{\mu}{\sigma}))$$ Recall that $\Phi(-\frac{\mu}{\sigma})=\epsilon$, so $$\mathds{E}(f)=\sqrt{d}\mathds{E}(|z|)=\sqrt{\frac{2d}{\pi}}\sigma e^{-\frac{\mu^2}{2\sigma^2}}+\mu\sqrt{d}(1-2\Phi(-\frac{\mu}{\sigma}))\approx \mu \sqrt{d}$$ if $\epsilon$ is tiny enough as illustrated in formula .
- if $s>1$, $f(\mathbf{r},\mathbf{z})=\sqrt{\frac{d}{s}}|\sum_{j=1}^{s}z|$; let $t=\sum_{j=1}^{s}z$, then according to the symmetric distribution of $z$, $t$ holds $s+1$ different distributions:
$$t\sim N((s-2i)\mu, s\sigma^2)~ \text{with probability} ~\frac{1}{2^s}C_s^i$$ where $0\leq i\leq s$ denotes the number of $z$ drawn from $N(-\mu,\sigma^2)$. Then the PDF of $t$ can be described as $$p(t)=\frac{1}{2^s}\sum_{i=0}^sC_s^i\frac{1}{\sqrt{2\pi s}\sigma}e^{\frac{-(t-(s-2i)\mu)^2}{2s\sigma^2}}$$ then,
$$\begin{aligned}
\mathds{E}(|t|)&=\int_{-\infty}^{\infty}|t|p(t)dt\\&=\frac{1}{2^s}\sum_{i=0}^sC_s^i\int_{-\infty}^{\infty}|t|\frac{1}{\sqrt{2\pi s}\sigma}e^{\frac{-(t-(s-2i)\mu)^2}{2s\sigma^2}}dt\\
&=\frac{1}{2^s}\sum_{i=0}^sC_s^i\{\sqrt{\frac{2s}{\pi}}\sigma e^{\frac{-(s-2i)^2\mu^2}{2s\sigma^2}}+\mu|s-2i|[1-2\Phi(\frac{-|s-2i|\mu}{\sqrt{s}\sigma})]\}
\end{aligned}$$ Subsequently, the expected value of $f(\mathbf{r},\mathbf{z})$ can be expressed as $$\begin{aligned}
\mathds{E}(f)&=\mu\sqrt{\frac{d}{s}}\frac{1}{2^s}\sum_{i=0}^s(C_s^i|s-2i|)+\sigma\sqrt{\frac{2d}{\pi}}\frac{1}{2^s}\sum_{i=0}^sC_s^ie^{\frac{-(s-2i)^2\mu^2}{2s\sigma^2}}\\
&-2\mu\sqrt{\frac{d}{s}}\frac{1}{2^s}\sum_{i=0}^s[C_s^i|s-2i|\Phi(\frac{-|s-2i|\mu}{\sqrt{s}\sigma})]
\end{aligned}$$
- This part derives an upper bound on the aforementioned $\mathds{E}(f)|_{s>1}$. For brevity, the three terms of the above expression for $\mathds{E}(f)|_{s>1}$ are denoted by $f_1$, $f_2$ and $f_3$, and are analyzed in turn.
- for $f_1=\mu\sqrt{\frac{d}{s}}\frac{1}{2^s}\sum_{i=0}^s(C_s^i|s-2i|)$, it can be rewritten as $$f_1=2\mu\sqrt{\frac{d}{s}}\frac{1}{2^s}C_s^{\lceil\frac{s}{2}\rceil}\lceil\frac{s}{2}\rceil$$
- for $f_2=\sigma\sqrt{\frac{2d}{\pi}}\frac{1}{2^s}\sum_{i=0}^sC_s^ie^{\frac{-(s-2i)^2\mu^2}{2s\sigma^2}}$, first, we can bound $$\left\{
\begin{array}{ll}
e^{\frac{-(s-2i)^2\mu^2}{2s\sigma^2}} \leq e^{-\frac{\mu^2}{2\sigma^2}} & \text{if}~ i<\alpha ~\text{or}~ i>s-\alpha\\[6pt]
e^{\frac{-(s-2i)^2\mu^2}{2s\sigma^2}} \leq 1 & \text{if} ~\alpha\leq i\leq s-\alpha\\
\end{array}
\right.$$ where $\alpha=\lceil\frac{s-\sqrt{s}}{2}\rceil$. Take it into $f_2$, $$\begin{aligned}
f_2&\leq\sigma\sqrt{\frac{2d}{\pi}}\frac{1}{2^s}\sum_{i=0}^{\alpha-1}C_s^ie^{\frac{-\mu^2}{2\sigma^2}}+\sigma\sqrt{\frac{2d}{\pi}}\frac{1}{2^s}\sum_{i=s-\alpha+1}^{s}C_s^ie^{\frac{-\mu^2}{2\sigma^2}}
+\sigma\sqrt{\frac{2d}{\pi}}\frac{1}{2^s}\sum_{i=\alpha}^{s-\alpha}C_s^i\\
&<\sigma\sqrt{\frac{2d}{\pi}}e^{\frac{-\mu^2}{2\sigma^2}}+\sigma\sqrt{\frac{2d}{\pi}}\frac{1}{2^s}\sum_{i=\alpha}^{s-\alpha}C_s^i
\end{aligned}$$ Since $C_s^i\leq C_s^{\lceil s/2\rceil}$, $$\begin{aligned}
f_2&<\sigma\sqrt{\frac{2d}{\pi}}e^{\frac{-\mu^2}{2\sigma^2}}+\sigma\sqrt{\frac{2d}{\pi}}\frac{1}{2^s}(\lfloor \sqrt{s}\rfloor+1) C_s^{\lceil s/2\rceil}\\
&\leq \sigma\sqrt{\frac{2d}{\pi}}e^{\frac{-\mu^2}{2\sigma^2}}+\sigma\sqrt{\frac{2d}{\pi}}\frac{1}{2^s} \sqrt{s} C_s^{\lceil s/2\rceil}+\sigma\sqrt{\frac{2d}{\pi}}\frac{1}{2^s} C_s^{\lceil s/2\rceil}\\
&\leq \sigma\sqrt{\frac{2d}{\pi}}e^{\frac{-\mu^2}{2\sigma^2}}+\sigma\sqrt{\frac{2d}{\pi}}\frac{1}{2^s}\frac{2} {\sqrt{s}} C_s^{\lceil s/2\rceil}{\lceil \frac{s}{2}\rceil}+\sigma\sqrt{\frac{2d}{\pi}}\frac{1}{2^s} C_s^{\lceil s/2\rceil}
\end{aligned}$$ with Stirling’s approximation, $$f_2<\left\{
\begin{array}{ll}
\sqrt{\frac{2d}{\pi}}\sigma e^{\frac{-\mu^2}{2\sigma^2}}+\sqrt{d}\frac{2}{\pi}\sigma e^{\lambda_s-2\lambda_{s/2}}+\sqrt{\frac{d}{s}}\frac{2}{\pi}\sigma e^{\lambda_s-2\lambda_{s/2}}&\text{if $s$ is even}\\[10pt]
\sqrt{\frac{2d}{\pi}}\sigma e^{\frac{-\mu^2}{2\sigma^2}}+\sqrt{{d}}\frac{2\sigma}{\pi}(\frac{s^2}{s^2-1})^{\frac{s}{2}}e^{\lambda_s-\lambda_{\frac{s+1}{2}}-\lambda_{\frac{s-1}{2}}}\\
+\sqrt{{d}}\frac{2\sigma}{\pi}\frac{\sqrt{s}}{s+1}(\frac{s^2}{s^2-1})^{\frac{s}{2}}e^{\lambda_s-\lambda_{\frac{s+1}{2}}-\lambda_{\frac{s-1}{2}}}&\text{if $s$ is odd}
\end{array}
\right.$$
- for $f_3=-2\mu\sqrt{\frac{d}{s}}\frac{1}{2^s}\sum_{i=0}^s[C_s^i|s-2i|\Phi(\frac{-|s-2i|\mu}{\sqrt{s}\sigma})]$, with the previous defined $\alpha$, $$\begin{aligned}
f_3&\leq -2\mu\sqrt{\frac{d}{s}}\frac{1}{2^s}\sum_{i=\alpha}^{s-\alpha}[C_s^i|s-2i|\Phi(\frac{-|s-2i|\mu}{\sqrt{s}\sigma})]\\
&\leq -2\mu\sqrt{\frac{d}{s}}\frac{1}{2^s}\sum_{i=\alpha}^{s-\alpha}[C_s^i|s-2i|\Phi(\frac{-\mu}{\sigma})]\\
&= -2\mu\epsilon\sqrt{\frac{d}{s}}\frac{1}{2^s}\sum_{i=\alpha}^{s-\alpha}[C_s^i|s-2i|]\\
&= -2\mu\epsilon\sqrt{\frac{d}{s}}\frac{1}{2^s}(2sC_{s-1}^{\lceil{\frac{s}{2}-1}\rceil}-2sC_{s-1}^{\alpha-1})\\
&= -4\mu\epsilon\sqrt{ds}\frac{1}{2^s}(C_{s-1}^{\lceil{\frac{s}{2}-1}\rceil}-C_{s-1}^{\alpha-1})\\
&\leq 0
\end{aligned}$$
finally, we can further deduce that $$\begin{aligned}
&\mathds{E}(f)|_{s>1}=f_1+f_2+f_3\\[6pt]
&<\left\{
\begin{array}{ll}
2\mu\frac{1}{2^s}\sqrt{\frac{d}{s}}C_s^{\lceil \frac{s}{2} \rceil}\lceil \frac{s}{2} \rceil +\frac{2\sigma}{\pi}\sqrt{d}e^{\lambda_s-2\lambda_{\frac{s}{2}}}+\sqrt{\frac{2d}{\pi}}\sigma e^{\frac{-\mu^2}{2\sigma^2}}+\sqrt{\frac{d}{s}}\frac{2}{\pi}\sigma e^{\lambda_s-2\lambda_{s/2}}\\-4\mu\epsilon\sqrt{ds}\frac{1}{2^s}(C_{s-1}^{\lceil{\frac{s}{2}-1}\rceil}-C_{s-1}^{\alpha-1}) & \text{if $s$ is even}\\[12pt]
2\mu\frac{1}{2^s}\sqrt{\frac{d}{s}}C_s^{\lceil \frac{s}{2} \rceil}\lceil \frac{s}{2} \rceil+\frac{2\sigma}{\pi}\sqrt{d}(\frac{s^2}{s^2-1})^{\frac{s}{2}}e^{\lambda_s-\lambda_{\frac{s+1}{2}}-\lambda_{\frac{s-1}{2}}}+\sqrt{\frac{2d}{\pi}}\sigma e^{\frac{-\mu^2}{2\sigma^2}}\\+\sqrt{{d}}\frac{2\sigma}{\pi}\frac{\sqrt{s}}{s+1}(\frac{s^2}{s^2-1})^{\frac{s}{2}}e^{\lambda_s-\lambda_{\frac{s+1}{2}}-\lambda_{\frac{s-1}{2}}}-4\mu\epsilon\sqrt{ds}\frac{1}{2^s}(C_{s-1}^{\lceil{\frac{s}{2}-1}\rceil}-C_{s-1}^{\alpha-1}) & \text{if $s$ is odd}
\end{array}
\right.\\[10pt]
&=\left\{
\begin{array}{ll}
(\sqrt{\frac{2d}{\pi}}\mu+\frac{2\sigma}{\pi}\sqrt{d})e^{\lambda_s-2\lambda_{\frac{s}{2}}}+\sqrt{\frac{2d}{\pi}}\sigma e^{\frac{-\mu^2}{2\sigma^2}}+\sqrt{\frac{d}{s}}\frac{2}{\pi}\sigma e^{\lambda_s-2\lambda_{s/2}}\\-4\mu\epsilon\sqrt{ds}\frac{1}{2^s}(C_{s-1}^{\lceil{\frac{s}{2}-1}\rceil}-C_{s-1}^{\alpha-1}) & \text{if $s$ is even}\\[12pt]
(\sqrt{\frac{2d}{\pi}}\mu+\frac{2\sigma}{\pi}\sqrt{d})(\frac{s^2}{s^2-1})^{\frac{s}{2}}e^{\lambda_s-\lambda_{\frac{s+1}{2}}-\lambda_{\frac{s-1}{2}}}+\sqrt{\frac{2d}{\pi}}\sigma e^{\frac{-\mu^2}{2\sigma^2}}\\+\sqrt{{d}}\frac{2\sigma}{\pi}\frac{\sqrt{s}}{s+1}(\frac{s^2}{s^2-1})^{\frac{s}{2}}e^{\lambda_s-\lambda_{\frac{s+1}{2}}-\lambda_{\frac{s-1}{2}}}-4\mu\epsilon\sqrt{ds}\frac{1}{2^s}(C_{s-1}^{\lceil{\frac{s}{2}-1}\rceil}-C_{s-1}^{\alpha-1}) & \text{if $s$ is odd}
\end{array}
\right.
\end{aligned}$$
- This part discusses the condition for $$\mathds{E}(f)|_{s>1}<\mathds{E}(f)|_{s=1}=\sqrt{\frac{2d}{\pi}}\sigma e^{-\frac{\mu^2}{2\sigma^2}}+\mu\sqrt{d}(1-2\Phi(-\frac{\mu}{\sigma}))$$ by further relaxing the upper bound of $\mathds{E}(f)|_{s>1}$.
- if $s$ is even, since $f_3\leq 0$, $$\begin{aligned}
\mathds{E}(f)|_{s>1}&<(\sqrt{\frac{2d}{\pi}}\mu+\frac{2\sigma}{\pi}\sqrt{d})e^{\lambda_s-2\lambda_{\frac{s}{2}}}+\sqrt{\frac{d}{s}}\frac{2}{\pi}\sigma e^{\lambda_s-2\lambda_{s/2}}+\sqrt{\frac{2d}{\pi}}\sigma e^{\frac{-\mu^2}{2\sigma^2}}\\
&\leq (\sqrt{\frac{2d}{\pi}}\mu+\frac{2\sigma}{\pi}\sqrt{d})+\sqrt{\frac{d}{s}}\frac{2}{\pi}\sigma +\sqrt{\frac{2d}{\pi}}\sigma e^{\frac{-\mu^2}{2\sigma^2}}\\
&=\mu\sqrt{d}(\sqrt{\frac{2}{\pi}}+(1+\frac{1}{\sqrt{s}})\frac{2\sigma}{\pi\mu})+\sqrt{\frac{2d}{\pi}}\sigma e^{\frac{-\mu^2}{2\sigma^2}}
\end{aligned}$$ Clearly $\mathds{E}(f)|_{s>1}<\mathds{E}(f)|_{s=1}$, if $\sqrt{\frac{2}{\pi}}+(1+\frac{1}{\sqrt{2}})\frac{2\sigma}{\pi\mu}\leq 1-2\Phi(-\frac{\mu}{\sigma})$. This condition is well satisfied when $\mu>>\sigma$, since $\Phi(-\frac{\mu}{\sigma})$ decreases monotonically with increasing $\mu/\sigma$.
- if $s$ is odd, with $f_3\leq 0$, $$\begin{aligned}
\mathds{E}(f)|_{s>1}&<(\sqrt{\frac{2d}{\pi}}\mu+\frac{2\sigma}{\pi}\sqrt{d})(\frac{s^2}{s^2-1})^{\frac{s}{2}}+\sqrt{{d}}\frac{2\sigma}{\pi}\frac{\sqrt{s}}{s+1}(\frac{s^2}{s^2-1})^{\frac{s}{2}}+\sqrt{\frac{2d}{\pi}}\sigma e^{\frac{-\mu^2}{2\sigma^2}}
\end{aligned}$$ It can be proved that $(\frac{s^2}{s^2-1})^{\frac{s}{2}}$ decreases monotonically with respect to $s$. This yields that
$$\begin{aligned}
\mathds{E}(f)|_{s>1}<(\sqrt{\frac{2d}{\pi}}\mu+(1+\frac{\sqrt{3}}{4})\frac{2\sigma}{\pi}\sqrt{d})(\frac{3^2}{3^2-1})^{\frac{3}{2}}+\sqrt{\frac{2d}{\pi}}\sigma e^{\frac{-\mu^2}{2\sigma^2}}
\end{aligned}$$ in this case $\mathds{E}(f)|_{s>1}<\mathds{E}(f)|_{s=1}$, if $(\frac{9}{8})^{\frac{3}{2}}(\sqrt{\frac{2}{\pi}}+(1+\frac{\sqrt{3}}{4})\frac{2\sigma}{\pi\mu})\leq 1-2\Phi(-\frac{\mu}{\sigma})$.
Summarizing the above two cases for $s$, we finally obtain $$\mathds{E}(f)|_{s>1}<\mathds{E}(f)|_{s=1},~ \text{if} ~ (\frac{9}{8})^{\frac{3}{2}}[\sqrt{\frac{2}{\pi}}+(1+\frac{\sqrt{3}}{4})\frac{2}{\pi}(\frac{\mu}{\sigma})^{-1}]+2\Phi(-\frac{\mu}{\sigma})\leq 1$$
Appendix C. {#apd-c .unnumbered}
===========
[**Proof of Lemma \[lemma-5\]**]{}
First, one can rewrite $f(\mathbf{r},\mathbf{z})=|\sum_{j=1}^{d}r_{j}z_j|=\mu |x|$, where $x\in N(0,d)$, since the $r_{j}$ are drawn i.i.d. from $N(0,1)$ and $z_j \in \{\pm \mu\}$ with equal probability. Then one can prove that $$\begin{aligned}
\mathds{E}(|x|)&=\int_{-\infty}^{0}\frac{-x}{\sqrt{2\pi d}}e^{-\frac{x^2}{2d}}dx+\int_{0}^{\infty}\frac{x}{\sqrt{2\pi d}}e^{-\frac{x^2}{2d}}dx\\&=2\int_0^{\infty}\frac{\sqrt{d}}{\sqrt{2\pi}}e^{-\frac{x^2}{2d}}d\frac{x^2}{2d}\\
&=2\sqrt{d}\int_0^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\alpha}d\alpha\\
&=\sqrt{\frac{2d}{\pi}}
\end{aligned}$$ Finally, it is derived that $\mathds{E}(f)=\mu\mathds{E}(|x|)=\mu\sqrt{\frac{2d}{\pi}}$.
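As a quick sanity check of Lemma \[lemma-5\], the expected value can also be estimated by simulation; a short sketch (assuming NumPy, with illustrative $d$ and $\mu$) is given below.

```python
import numpy as np
from math import sqrt, pi

rng = np.random.default_rng(0)
d, mu, trials = 1000, 1.0, 5000
z = mu * rng.choice([-1.0, 1.0], size=d)              # z_j = +-mu equiprobably
f = np.abs(rng.standard_normal((trials, d)) @ z)      # |<r, z>| for Gaussian rows r
print(f"Monte Carlo E(f) ~ {f.mean():.2f},  mu*sqrt(2d/pi) = {mu * sqrt(2 * d / pi):.2f}")
```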
[^1]: Publicly available at <http://www.cad.zju.edu.cn/home/dengcai/Data/TextData.html>
---
author:
- 'W. Naetar ([[email protected]]{})'
- 'O. Scherzer ([[email protected]]{})'
title: Quantitative photoacoustic tomography with piecewise constant material parameters
---
Abstract {#abstract .unnumbered}
========
The goal of *quantitative* photoacoustic tomography is to determine optical and acoustical material properties from initial pressure maps as obtained, for instance, from photoacoustic imaging. The most relevant parameters are absorption, diffusion and Grüneisen coefficients, all of which can be heterogeneous. Recent work by Bal and Ren shows that in general, unique reconstruction of all three parameters is impossible, even if multiple measurements of the initial pressure (corresponding to different laser excitation directions at a single wavelength) are available.
Here, we propose a restriction to piecewise constant material parameters. We show that in the diffusion approximation of light transfer, piecewise constant absorption, diffusion *and* Grüneisen coefficients can be recovered uniquely from photoacoustic measurements at a single wavelength. In addition, we implemented our ideas numerically and tested them on simulated three-dimensional data.
#### Keywords. Quantitative photoacoustic tomography, mathematical imaging, inverse problems {#keywords.-quantitative-photoacoustic-tomography-mathematical-imaging-inverse-problems .unnumbered}
#### AMS subject classifications. {#ams-subject-classifications. .unnumbered}
35R25, 35R30, 65J22, 92C55
Introduction {#sec:intro}
============
Photoacoustic tomography (PAT) is a hybrid imaging technique utilizing the coupling of laser excitations with ultrasound measurements. Tissue irradiated by a short monochromatic laser pulse generates an ultrasound signal (due to thermal expansion) which can be measured by ultrasound transducers outside the medium. From these measurements, the ultrasound wave’s initial pressure (whose spatial variation depends on material properties of the tissue) can be reconstructed uniquely by solving a well-studied inverse problem for the wave equation. For further information on this inverse problem, see, e.g., Kuchment and Kunyansky [@KucKun08].
The obtained ultrasound initial pressure *qualitatively* resembles the structure of the tissue (i.e., its inhomogeneities are visible). It is, however, desirable to image material parameters (whose values can serve as diagnostic information) instead. That is the goal of *quantitative* photoacoustic tomography (qPAT).
Mathematically, the problem can be posed as follows. In biological tissue, where photon scattering is a dominant effect compared to absorption, light transfer can be described by the *diffusion approximation* of the *radiative transfer equation*. It is valid in regions $\Omega \subset \mathbb{R}^3$ with sufficient distance to the light source and is given by $$\label{eq:diff_eq}
-{\operatorname{div}}(D(x) \nabla u(x)) + \mu(x) u(x) = 0.$$
$u(x)$ denotes the *fluence* (that is, the laser energy per unit area at a point $x$), $\mu(x)$ the *absorption coefficient* (the photon absorption probability per unit length) and $D(x)=\frac{1}{3(\mu + \mu_s')}$ (where $\mu_s'(x)$ denotes the *reduced scattering coefficient*) the *diffusion coefficient*. Both $\mu$ and $D$ vary spatially and depend on the wavelength of the laser excitation. For details and a derivation of the diffusion approximation, we refer to [@Arr99; @WanWu07].
In the literature, is commonly augmented with Dirichlet boundary conditions (which, in practice, might not be known) or, at interfaces with non-scattering media, Robin-type boundary conditions (see, for instance, [@WanWu07]).
In this model, the absorbed laser energy $\operatorname{\mathcal{E}}(x)$ is given by $$\label{eq:def_E}
\operatorname{\mathcal{E}}(x)=\mu(x) u(x).$$ The ultrasound initial pressure $\Gamma$ obtained by photoacoustic imaging is proportional to the absorbed energy $\operatorname{\mathcal{E}}$, so we have $$\label{eq:def_H}
{\mathcal{H}}(x)=\Gamma(x) \operatorname{\mathcal{E}}(x)=\Gamma(x) \mu(x) u(x).$$ The (spatially varying) dimensionless constant $\Gamma$ is called the Grüneisen parameter, its value corresponds to the conversion efficiency from change in thermal energy to pressure.
Hence, the goal in qPAT is to find the parameters $\mu,D,\Gamma$ in a domain $\Omega$ given $${\mathcal{H}}^k = \Gamma \mu u^k, \quad k=1,\ldots,K$$ where $u^k$ solves in $\Omega$ (here and in the following, the index $k$ corresponds to varying laser excitation directions).
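For intuition, a one-dimensional analogue of this forward map is easy to simulate; the following finite-difference sketch (assuming NumPy) solves $-(D u')' + \mu u = 0$ with Dirichlet boundary values, as used in the next section, for piecewise constant coefficients and forms $\mathcal{H} = \Gamma\mu u$. The interval, the boundary fluence and all parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fluence_1d(D, mu, h, u_left, u_right):
    # finite-difference solution of -(D u')' + mu u = 0 with Dirichlet boundary values;
    # D and mu are nodal arrays, D is averaged onto cell midpoints
    n = len(mu)
    Dm = 0.5 * (D[:-1] + D[1:])
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = u_left, u_right
    for i in range(1, n - 1):
        A[i, i - 1] = -Dm[i - 1] / h ** 2
        A[i, i + 1] = -Dm[i] / h ** 2
        A[i, i] = (Dm[i - 1] + Dm[i]) / h ** 2 + mu[i]
    return np.linalg.solve(A, b)

x = np.linspace(0.0, 1.0, 401)
h = x[1] - x[0]
inclusion = (x > 0.4) & (x < 0.7)                 # one absorbing inclusion
mu = np.where(inclusion, 0.3, 0.1)                # illustrative piecewise constant values
D = np.where(inclusion, 0.02, 0.05)
Gamma = np.where(inclusion, 0.8, 0.5)

u = fluence_1d(D, mu, h, u_left=1.0, u_right=1.0)
H = Gamma * mu * u                                # simulated photoacoustic data
print(float(H.min()), float(H.max()))
```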
Previous work on this problem (and variations of it) can be found, e.g., in [@AmmBosJugKan11; @BalRen11a; @BalRen12; @BalUhl10; @BanBagVasRoy08; @CoxArrBea09; @CoxArrKoeBea06; @GaoOshZha12; @LauCoxZhaBea10; @RenGaoZhao13; @SarTarCoxArr13; @ShaCoxZem11; @TarCoxKaiArr12; @YuaZahJia06; @Zem10]. For a more comprehensive list, we refer to the review article [@CoxLauArrBea12] by Cox et al.
In particular, Bal and Ren showed (see [@BalRen11a]) that unique reconstruction of all three parameters $\mu,D,\Gamma$ is impossible, independent of the number of measurements ${\mathcal{H}}^k$. They suggested to overcome this problem by the use of *multi-spectral data* (i.e., multiple photoacoustic measurements generated by laser excitations at different wavelengths). Using these data, unique reconstruction of all three material parameters (at the respective wavelengths used), becomes possible [@BalRen12].
In our paper, we take a different approach and propose a restriction to piecewise constant $\mu,D,\Gamma$. Similar restrictions (due to the large number of publications which use this approach we only provide a small selection of references) have been proposed for *Diffusion Optical Tomography* (e.g., [@ArrDorKaiKolSchTarVauZac06; @Har09; @KolVauKai00; @ZacScwKolArr09]) and *Conductivity Imaging* (e.g., [@BerFra11; @Dru98; @KimKwoSeoYoo02; @RonSan01]).
For our problem, it turns out that the reconstruction problem becomes a lot simpler and admits a unique solution for all three parameters $\mu,D,\Gamma$.
The result is based on an analytical, explicit reconstruction procedure consisting of two steps. First, we recover the regions where $\mu,D,\Gamma$ are constant by finding the discontinuities of photoacoustic data ${\mathcal{H}}$ and its derivatives up to second order (see Proposition \[prop:jump\_detection\]). In the second step, we determine the actual values of $\mu,D,\Gamma$ from the jumps of ${\mathcal{H}}$ and $\nabla {\mathcal{H}}\cdot \nu$ (the normal derivatives) across the obtained region boundaries (cf. Proposition \[prop:uniqueness\]). Our result holds under certain conditions on the parameters $\mu,D,\Gamma$ and the direction of $\nabla u$. We emphasize that we don’t necessarily require that $u|_{\partial\Omega}$ is known (which may not be the case in practice) or that specific boundary conditions hold on $\partial\Omega$. Instead, we use reference values of the parameters for reconstruction, i.e., values of one of the pairs $(\mu(x),\Gamma(x))$ or $(D(x),\Gamma(x))$ at a single point $x \in \Omega$.
Numerically, the reconstruction method we present heavily relies on an efficient *jump detection* algorithm (using a computational edge detection method) and subsequent *3D-image segmentation*, which provides a connection with image analysis.
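To illustrate the flavor of this step in one dimension: for a piecewise-smooth signal, discontinuities of $\mathcal{H}$, $\mathcal{H}'$ and $\mathcal{H}''$ all produce abnormally large third-order finite differences, so thresholding them gives approximate jump locations. The sketch below (assuming NumPy) uses an artificial test signal with breakpoints at $0.3$ and $0.7$; the actual reconstruction in section \[sec:numerics\] uses proper edge detection and 3D segmentation instead.

```python
import numpy as np

def detect_jumps_1d(H, x, factor=100.0):
    # flag points where the third-order finite difference is abnormally large compared
    # with a robust scale; on a piecewise-smooth signal this happens at jumps of H, H', H''
    d3 = np.diff(H, n=3)
    scale = np.median(np.abs(d3)) + 1e-15
    hits = np.where(np.abs(d3) > factor * scale)[0]
    return x[hits + 1]                            # approximate jump locations

x = np.linspace(0.0, 1.0, 1001)
H = np.where(x < 0.3, np.cosh(2 * x),
             np.where(x < 0.7, 1.5 * np.cosh(2 * x), np.cosh(3 * (x - 0.2))))
print(np.unique(np.round(detect_jumps_1d(H, x), 2)))   # approximately [0.3, 0.7]
```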
The paper is organized as follows. In section \[sec:illposed\], we recap some of the non-uniqueness results for the qPAT problem in literature. In section \[sec:uniqueness\], we prove unique solvability for piecewise constant $\mu,D,\Gamma$. In section \[sec:numerics\], we give an example of how our ideas can be applied numerically. The last section contains two concrete numerical examples where the reconstruction method is applied to simulated data (with one data set FEM-generated and one data set generated by Monte Carlo simulations). The paper ends with a conclusion.
Ill-posedness of qPAT with smooth parameters {#sec:illposed}
============================================
In this section, we review some of the non-uniqueness results for quantitative photoacoustic tomography. For simplicity of presentation, we augment (in this section only) equation with Dirichlet boundary conditions, so we have
$$\label{eq:diff_eq_dirichlet}
\begin{aligned}
-{\operatorname{div}}(D(x) \nabla u(x)) + \mu(x) u(x) &= 0 \quad \text{in $\Omega \subset \mathbb{R}^3$}\\
u(x)|_{\partial \Omega} &= f(x).
\end{aligned}$$
The boundary values represent the laser illumination of one particular experiment. In this section, we assume $f$ is known, satisfies $f>0$ and is sufficiently smooth.
It is well-known and has been shown numerically (see [@CoxArrBea09; @ShaCoxZem11]) that even when the Grüneisen coefficient $\Gamma$ is known (so the absorbed energy $\operatorname{\mathcal{E}}=\mu u$ can be calculated from ${\mathcal{H}}$), different pairs of diffusion and absorption coefficients may lead to the same absorbed energy map $\operatorname{\mathcal{E}}$. To see this analytically, for given smooth coefficients $D,\mu >0$ let $u(D,\mu)$ be the corresponding smooth solution of and $\operatorname{\mathcal{E}}(\mu,D)=\mu u(D,\mu)$ the absorbed energy. By the strong maximum principle (see [@GilTru01 Theorem 3.5]), $u(D,\mu) > 0$ in $\Omega$ (since $f >0$).
Moreover, for fixed $\operatorname{\mathcal{E}}=\operatorname{\mathcal{E}}(\mu,D)$, let us denote by $v(\tilde D)$ the solution of $$\label{eq:diff_eq_modified}
\begin{aligned}
{\operatorname{div}}(\tilde D(x) \nabla v(x)) &= \operatorname{\mathcal{E}}(x) \quad \text{in $\Omega$} \\
v(x)|_{\partial \Omega} &= f(x).
\end{aligned}$$ Note that $v(D)=u(D,\mu) > 0$. Then, for every $\tilde{D}$ with ${\left\| D-\tilde{D} \right\|}_{1,\infty} < \epsilon$ (with $\epsilon$ small enough), we also have $v(\tilde{D}) > 0$. To see this, note that $$\begin{aligned}
{\operatorname{div}}(\tilde{D}\nabla(v(\tilde{D}) - v(D)))&=-{\operatorname{div}}((\tilde{D}-D)\nabla v(D)) \quad \text{in $\Omega$} \\
(v(\tilde{D}) - v(D))|_{\partial\Omega} &= 0\end{aligned}$$ Using a priori bounds [@GilTru01 Theorem 3.5], $${\left\| v(\tilde{D})-v(D) \right\|}_{\infty} \leq C_1 {\left\| {\operatorname{div}}((\tilde{D}-D)\nabla v(D)) \right\|}_\infty \leq C_2 {\left\| \tilde{D} - D \right\|}_{1,\infty},$$ which implies $v(\tilde D) >0$ if $\epsilon$ is sufficiently small.
Now, taking $\tilde{\mu}=\frac{\operatorname{\mathcal{E}}(\mu,D)}{v(\tilde{D})}$, we get $${\operatorname{div}}(\tilde D \nabla v(\tilde D)) - \tilde \mu v(\tilde D) = 0.$$ Hence, $v(\tilde{D})=u(\tilde{D},\tilde{\mu})$ and $\operatorname{\mathcal{E}}(\tilde \mu,\tilde D)= \tilde{\mu} u(\tilde{D},\tilde{\mu}) = \operatorname{\mathcal{E}}(\mu,D)$, which shows that infinitely many pairs of coefficients may create the same absorbed energy map.
This nonuniqueness can be overcome by varying $f$ (i.e., changing the illumination pattern), obtaining multiple absorbed energy maps. This approach is called *multi-source* quantitative photoacoustic tomography. Bal and Ren [@BalRen11a] showed that while this additional information leads to unique reconstruction of $\mu,D$ from $\operatorname{\mathcal{E}}$, finding three unknown parameters $\mu,D,\Gamma$ given ${\mathcal{H}}=\Gamma \mu u$ is still impossible, independent of the number of illuminations (any more than two do not add any information). In fact, they showed that for *any* given Lipschitz continuous $\mu$, $D$ or $\Gamma$ the other two parameters can be chosen such that given initial pressures ${\mathcal{H}}^k=\Gamma \mu u^k$ (for multiple illumination patterns $f^k$) are generated.
\[ex:nonuniqueness\] Given any set of parameters $(\mu,D,\Gamma)$, for every $\lambda > 0$, $(\lambda \mu,\lambda D,\frac{1}{\lambda} \Gamma)$ generate the same measurements, since is invariant under simultaneous scaling of $\mu$ and $D$.
This simple example shows that even for constant parameters knowledge of $f$ and ${\mathcal{H}}$ is insufficient to determine $\mu,D,\Gamma$. Hence, more prior information about the unknown parameters will be necessary in order to get a unique solution.
Reconstruction of piecewise constant parameters {#sec:uniqueness}
===============================================
To overcome this essential non-uniqueness, we assume that $\mu,D,\Gamma$ are piecewise constants. That is, for some partition $(\Omega_m)_{m=1}^M$ of $\Omega \subset \mathbb{R}^3$,
$$\label{eq:piecewise_const}
\overline\Omega = \bigcup_{m=1}^M \overline\Omega_m, \enskip \mu = \sum_{m=1}^M \mu_m 1_{\Omega_m}, \enskip D = \sum_{m=1}^M D_m 1_{\Omega_m}, \enskip \Gamma = \sum_{m=1}^M \Gamma_m 1_{\Omega_m}.$$
Since the parameters are discontinuous, we need a generalized solution concept. Under certain additional conditions (which we explain in detail in Appendix \[sec:transmission\_cond\]) a weak solution $u$ of with piecewise constant parameters $\mu, D$ can be characterized by
$$\label{eq:u_continuity}
u \in C^\alpha(\overline{\Omega})$$
for some $\alpha >0$ and, for $m=1,\ldots,M$,
$$\label{eq:diff_eq_scalar}
\begin{aligned}
&u_m := u|_{\Omega_m} \in C^\infty(\Omega_m) \\
&D_m \Delta u_m - \mu_m u_m = 0 \quad \text{in } \Omega_m
\end{aligned}$$
and, almost everywhere on interfaces $I_{mn}:=\partial\Omega_m \cap \partial\Omega_n$,
$$\label{eq:transmission_cond}
D_m \nabla u_m \cdot \nu = D_n \nabla u_n \cdot \nu \quad \text{(for any normal vector $\nu$).}$$
The transmission condition is ill-defined on corners and intersections of multiple subregions, therefore we can only expect it to hold almost everywhere. For details and a derivation, see Appendix \[sec:transmission\_cond\]. The transmission condition can also be derived physically (rather than starting from a weak solution), it is accurate within the scope of the diffusion approximation [@Aro95; @RipNie99].
From now on, we consider $u_m$ and ${\mathcal{H}}_m:={\mathcal{H}}|_{\Omega_m}=\Gamma_m \mu_m u_m$ (and their derivatives up to second order) continuously extended (from the inside) to $\partial\Omega_m$. We emphasize that for $\nabla u_m$ and $\nabla {\mathcal{H}}_m$, this may only be possible for almost all points (with respect to the surface measure), see Appendix \[sec:transmission\_cond\].
We also assume that $u$ is strictly positive and bounded from above in $\overline\Omega$.
In the following Proposition \[prop:jump\_detection\], we show that the jump set $\bigcup_m \partial\Omega_m$ of piecewise constant parameters $\mu,D,\Gamma$ can be determined from photoacoustic initial pressure data ${\mathcal{H}}=\Gamma \mu u$. For $k \geq 0$, denote by $$J_k(f)=\Omega \setminus \bigcup \{ B \subset \Omega \big| B \text{ is open and } f \in C^k(B) \}$$ the set of discontinuities of a function $f \in L^\infty(\Omega)$ and its derivatives up to $k$-th order.
We require an assumption on $\nabla u$ and the unknown parameters $\mu,D,\Gamma$. For all $x \in J_0(D) \setminus (J_0(\Gamma \mu) \cup J_0(\frac{\mu}{D})) \subset \partial\Omega_m \cap \partial\Omega_n$ (i.e., interfaces of $D$ which are not interfaces of $\Gamma \mu$ and $\frac{\mu}{D}$) we require that the fluence $u$ satisfies almost everywhere (where $\nu$ denotes a normal vector on $\partial\Omega_m \cap \partial\Omega_n$), $$\label{eq:cd_assumption_1}
|\nabla u_n(x) \cdot \nu(x)| > 0 \quad (\stackrel{\eqref{eq:transmission_cond}}{\iff} |\nabla u_m(x) \cdot \nu(x)| > 0).$$
\[prop:jump\_detection\]
Let $\mu,D,\Gamma$ be of the form and $u=\sum_m u_m 1_{\Omega_m}$ and ${\mathcal{H}}=\sum_m {\mathcal{H}}_m 1_{\Omega_m}$ the corresponding fluence and initial pressure distributions satisfying condition in $\Omega$. Then,
$$\overline{J_0(\mu)} \cup \overline{J_0(D)} \cup \overline{J_0(\Gamma)} = \overline{J_2({\mathcal{H}})}.$$
Let $B \subset \Omega$ be an open ball with $B \cap (J_0(\mu) \cup J_0(D) \cup J_0(\Gamma) ) = \emptyset$. Since $u$ solves an elliptic PDE with constant coefficients in $B$, we have $u \in C^\infty(B)$ by interior regularity. Hence ${\mathcal{H}}\in C^\infty(B)$ (since $\Gamma\mu$ is constant in $B$), which implies $J_2({\mathcal{H}}) \subset J_0(\mu) \cup J_0(D) \cup J_0(\Gamma)$.
To show the converse, take $x \in \Omega$ such that $x \in J_0(\mu) \cup J_0(D) \cup J_0(\Gamma)$ (that is, one of the parameters jumps at $x$). We have to show that $x \in \overline{J_2({\mathcal{H}})}$.
Let $m,n$ be such that $x \in I_{mn}=\partial \Omega_m \cap \partial \Omega_n$. We distinguish three cases:
1. $\Gamma_m \mu_m \neq \Gamma_n \mu_n$: Since $u$ is continuous across $I_{mn}$ (cf. Appendix \[sec:transmission\_cond\]), ${\mathcal{H}}=\Gamma\mu u$ is discontinuous at $x$, so we get $x \in J_0({\mathcal{H}}) \subset J_2({\mathcal{H}})$.
2. $\Gamma_m \mu_m = \Gamma_n \mu_n, \frac{\mu_m}{D_m} \neq \frac{\mu_n}{D_n}$: From and $u \in C(\overline\Omega)$ we get $$\Delta u_m(x) = \frac{\mu_m}{D_m} u_m(x) \neq \frac{\mu_n}{D_n} u_n(x) = \Delta u_n(x).$$
Hence $x \in J_2(u)$, which implies $x \in J_2({\mathcal{H}})$ since $\Gamma \mu$ is constant in $\Omega_m \cup \Omega_n$.
3. $\Gamma_m \mu_m = \Gamma_n \mu_n, D_n \neq D_m$: First, let $x \in I_{mn}$ be a point where the transmission condition and hold (by assumption, this is the case for almost all points with respect to the surface measure). We have $$\begin{aligned}
\left|\nabla u_m(x) - \nabla u_n(x) \right| &\geq \left| (\nabla u_m(x)- \nabla u_n(x))\cdot \nu(x) \right| \\
&\geq \left|1-\frac{D_m}{D_n} \right| \left|\nabla u_m(x) \cdot \nu(x) \right| > 0.
\end{aligned}$$ This shows that $x \in J_1(u)$, which implies $x \in J_1({\mathcal{H}})$ and thus $x \in J_2({\mathcal{H}})$. By taking the closure, we get $x \in \overline{J_2({\mathcal{H}})}$ for all $x \in I_{mn}$.
The cases (1)-(3) cover all possibilities, since otherwise all three parameters $\mu,D,\Gamma$ would be constant in $\Omega_m \cup \Omega_n$.
Proposition \[prop:jump\_detection\] shows that we can obtain the parameter discontinuities (in regions where holds) via the set $J_2({\mathcal{H}})$. In fact, the proof tells us that ${\mathcal{H}}$, $\nabla {\mathcal{H}}$ or $\Delta {\mathcal{H}}$ have jumps at discontinuities of $\mu$, $D$ or $\Gamma$. That is, images of the gradient and Laplacian of the data ${\mathcal{H}}$ show material inhomogeneities not visible in ${\mathcal{H}}$.
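As a quick illustration, these derived images can be computed with simple central differences; the following sketch (our own, not the implementation used later in the paper) returns $|\nabla {\mathcal{H}}|$ and $\Delta {\mathcal{H}}$ from gridded data:

```python
import numpy as np

def derivative_images(H, h=1.0):
    """Central-difference gradient magnitude and Laplacian of gridded data H.

    H : 3D numpy array sampled on a regular grid with spacing h.
    Returns (|grad H|, Laplacian of H), both with the same shape as H.
    Large values concentrate near the parameter discontinuities described above.
    """
    grads = np.gradient(H, h)                      # list of dH/dx_i
    grad_mag = np.sqrt(sum(g**2 for g in grads))
    lap = sum(np.gradient(g, h)[i] for i, g in enumerate(grads))  # sum of second derivatives
    return grad_mag, lap
```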
In the next Proposition, we show how to recover piecewise constant parameters $\mu,D,\Gamma$ once their jump set $\bigcup_m \partial\Omega_m$ is known (e.g., from Proposition \[prop:jump\_detection\]). Knowledge of boundary values of $u$ alone is insufficient to fully determine the parameters (see Example \[ex:nonuniqueness\]). We also have to require knowledge of the parameters in some $\Omega_n \subset \Omega, \ n \in \{1,\ldots,M\}$. Using the continuity of $u$, and , we will show that these reference values combined with photoacoustic measurements ${\mathcal{H}}=\Gamma \mu u$ suffice to determine $\mu,D,\Gamma$ everywhere.
For this result, we again need an assumption on $\nabla u$. For every interface $I_{mn}=\partial\Omega_m \cap \partial\Omega_n$ with normal vector $\nu(x)$, we require the existence of some $x \in I_{mn}$ with $$\label{eq:cd_assumption_2}
\nabla u_n(x) \cdot \nu(x) \neq 0 \quad (\stackrel{\eqref{eq:transmission_cond}}{\iff} \nabla u_m(x) \cdot \nu(x) \neq 0),$$ that is, on every interface $I_{mn}$ there must exist a point where $\nabla u$ is not tangential.
\[prop:uniqueness\] Let $\mu,D,\Gamma$ be of the form (with the decomposition $(\Omega_m)$ of $\Omega$ known). Let $u$ and ${\mathcal{H}}=\Gamma \mu u$ be the corresponding fluence and initial pressure distributions, which satisfy condition on every interface $I_{mn} \subset \Omega$. Furthermore, let $(\mu_n,D_n,\Gamma_n)$ be known for some $n$. Then the parameters $\mu,D,\Gamma$ can be determined uniquely from ${\mathcal{H}}$.
Let $\Omega_m$ be a neighbouring subregion to $\Omega_n$ and denote by $I_{mn}=\partial\Omega_m \cap \partial\Omega_n$ the interface. By continuity of $u$ and , we have for all $y \in I_{mn}$
$$\label{eq:unique_1}
\Gamma_m \mu_m = \frac{{\mathcal{H}}_m(y)}{{\mathcal{H}}_n(y)} \ \Gamma_n \mu_n$$
so from the reference values and ${\mathcal{H}}$ we can calculate $\Gamma\mu$ on neighbouring $\Omega_m$.
Next, let $x \in \partial\Omega_m \cap \partial\Omega_n$ such that $\nabla u_n(x) \cdot \nu(x) \neq 0$. Using and $\nabla {\mathcal{H}}_k = \Gamma_k \mu_k \nabla u_k$ for all $k$ (since the parameters are constant in $\Omega_k$) we get
$$\label{eq:unique_2}
\frac{D_m}{\Gamma_m \mu_m} = \frac{(\nabla {\mathcal{H}}_n \cdot \nu)(x)}{(\nabla {\mathcal{H}}_m \cdot \nu)(x)} \frac{D_n}{\Gamma_n \mu_n}.$$
Finally, for all $z \in \Omega_m$, we get from and $\Delta {\mathcal{H}}_m = \Gamma_m \mu_m \Delta u_m$ in $\Omega_m$,
$$\label{eq:unique_3}
\frac{\mu_m}{D_m}=\frac{\Delta {\mathcal{H}}_m(z)}{{\mathcal{H}}_m(z)}.$$
The equations - suffice to obtain $\mu_m$, $D_m$ and $\Gamma_m$, since we have $$\label{eq:unique_final}
(\mu,D,\Gamma)=\left( A B C , A B, \frac{1}{B C} \right),$$ for $A=\Gamma\mu,\ B=\frac{D}{\Gamma\mu},\ C=\frac{\mu}{D}$.
By iterating over all interfaces, we can find $\mu,D,\Gamma$ everywhere in $\Omega$.
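The constructive argument of this proof translates directly into a breadth-first propagation over the region adjacency graph. The following sketch is our own illustration (not part of the original implementation); the adjacency structure, interface points with non-vanishing normal derivative, and evaluation routines for ${\mathcal{H}}_m$, $\nabla{\mathcal{H}}_m\cdot\nu$ and $\Delta{\mathcal{H}}_m$ are assumed to be given.

```python
from collections import deque

def recover_parameters(neighbors, ref_region, ref_params,
                       H, grad_H_nu, lap_H, interface_point, interior_point):
    """Propagate (mu, D, Gamma) from a reference region over the region adjacency graph.

    neighbors[m] is the set of regions adjacent to m; interface_point(m, n) is assumed
    to return a point on I_mn where the normal derivative of u does not vanish;
    interior_point(m) returns a point in Omega_m.  H, grad_H_nu, lap_H are callables
    (m, x) -> H_m(x), (grad H_m . nu)(x), (Delta H_m)(x).
    """
    mu, D, Gam = {}, {}, {}
    mu[ref_region], D[ref_region], Gam[ref_region] = ref_params
    done, queue = {ref_region}, deque([ref_region])
    while queue:
        n = queue.popleft()
        A_n = Gam[n] * mu[n]                      # Gamma*mu in the already known region
        B_n = D[n] / A_n                          # D / (Gamma*mu)
        for m in neighbors[n] - done:
            y = interface_point(m, n)
            A_m = H(m, y) / H(n, y) * A_n         # continuity of u across I_mn
            B_m = grad_H_nu(n, y) / grad_H_nu(m, y) * B_n   # transmission condition
            z = interior_point(m)
            C_m = lap_H(m, z) / H(m, z)           # mu_m / D_m from the PDE inside Omega_m
            # (mu, D, Gamma) = (A*B*C, A*B, 1/(B*C))
            mu[m], D[m], Gam[m] = A_m * B_m * C_m, A_m * B_m, 1.0 / (B_m * C_m)
            done.add(m)
            queue.append(m)
    return mu, D, Gam
```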
Note that in Proposition \[prop:uniqueness\], no knowledge of boundary values of $u$ is required, values of the parameters $\mu_n,D_n,\Gamma_n$ in some $\Omega_n$ are enough. In fact, knowledge of two of the three parameters already suffices, as we will show in the following Proposition.
\[prop:local\_uniqueness\] For a given $n$, the constants $(\mu_n,D_n,\Gamma_n)$ can be determined uniquely from photoacoustic data ${\mathcal{H}}_n=\Gamma_n \mu_n u_n$ and knowledge of one of the pairs $(\mu_n,\Gamma_n)$ or $(D_n,\Gamma_n)$. If $u(x)$ is known for some $x \in \Omega_n$, knowing one of the three constants is enough. If only one of the parameters $\mu_n,D_n,\Gamma_n$, only $u_n$, or only the pair $(\mu_n, D_n)$ is known, $(\mu_n,D_n,\Gamma_n)$ cannot be determined uniquely.
From and ${\mathcal{H}}=\Gamma\mu u$, we know that in $\Omega_n$
$$\begin{aligned}
D_n \Delta u_n - \mu_n u_n &= 0 \\
\Gamma_n \mu_n u_n &= {\mathcal{H}}_n,
\end{aligned}$$
which is equivalent to
$$\label{eq:diff_eq_system}
\begin{aligned}
\Gamma_n D_n \Delta u_n &= {\mathcal{H}}_n \\
\Gamma_n \mu_n u_n &= {\mathcal{H}}_n \\
\Gamma_n \mu_n \Delta u_n &= \Delta {\mathcal{H}}_n.
\end{aligned}$$
Here, one can immediately see that if $u(x)$ is known for some $x \in \Omega_n$, we can calculate $u_n(y)=\frac{1}{\Gamma_n \mu_n}{\mathcal{H}}_n(y) =\frac{u(x)}{{\mathcal{H}}(x)} {\mathcal{H}}_n(y)$ for all $y \in \Omega_n$ and thus also $\Delta u_n$. Clearly, $(\mu_n,D_n,\Gamma_n)$ can now be determined from the system above if one of the parameters is known.
Likewise, given one of the pairs $(\mu_n,\Gamma_n)$ or $(D_n,\Gamma_n)$ we can calculate all three constants $(\mu_n,D_n,\Gamma_n)$.
Knowledge of $(\mu_n,D_n)$, on the other hand, is insufficient because $\lambda u_n$, $\frac 1 \lambda \Gamma_n$ satisfy the system for the given $(\mu_n,D_n)$ and all $\lambda > 0$. Similarly, the system is underdetermined if only $u_n$ or $\Gamma_n$ is known.
Conditions and are vital for unique reconstruction. For instance, using Lemma \[prop:transmission\_cond\_converse\] one can see that $u(x,y,z)=e^x$ is a weak solution of in $\mathbb{R}^3$ for both $$\mu \equiv 1, \quad D \equiv 1, \quad \Gamma \equiv 1$$ and $$\tilde \mu =
\left\{
\begin{array}{ll}
1 & \text{ if } y \geq 0 \\
\lambda & \text{ if } y < 0
\end{array}
\right. \hspace{-0.6 em}, \quad
\tilde D =
\left\{
\begin{array}{ll}
1 & \text{ if } y \geq 0 \\
\lambda & \text{ if } y < 0
\end{array}
\right. \hspace{-0.6 em}, \quad
\tilde \Gamma =
\left\{
\begin{array}{ll}
1 & \text{ if } y \geq 0 \\
\frac 1 \lambda & \text{ if } y < 0
\end{array}
\right. \hspace{-0.6 em}, \quad \lambda > 0$$ since $u$ is a classical solution on both sides of the interface $\{y=0\}$ and it satisfies $\nabla u \cdot \nu=0$. Furthermore, both parameter sets generate the same data ${\mathcal{H}}(x,y,z)=e^x$.
More generally, parts of interfaces where condition fails to hold do not necessarily lie in $J_2({\mathcal{H}})$ and may thus be invisible to our reconstruction procedure (depending on the geometry, this may also lead to follow-up errors). If condition fails to hold, it might not be possible to determine $\mu,D,\Gamma$ everywhere.
To overcome this problem, we can use additional measurements (with different illumination directions) and hope that the location of critical points and gradient directions change. In particular, if photoacoustic data $({\mathcal{H}}^k)_{k=1}^K$ corresponding to solutions $(u^k)_{k=1}^K$ of that satisfy for almost all $x \in \Omega$ $$\label{eq:detcond}
\max_{i,j,k} \left| \det(\nabla u^i(x),\nabla u^j(x),\nabla u^k(x)) \right| > 0 \\$$ are available, then on every $x \in I_{mn}$ one of the measurements satisfies the non-tangentiality condition (since $\nabla u^i(x),\nabla u^j(x),\nabla u^k(x)$ form a basis). With a similar argument as in Proposition \[prop:jump\_detection\] one can show that in this case $$J_0(\mu) \cup J_0(D) \cup J_0(\Gamma) = \bigcup_{k=1}^K J_2({\mathcal{H}}^k),$$ so unique reconstruction of $\mu,D,\Gamma$ in $\Omega$ can be guaranteed. To our knowledge, no method to force condition by boundary conditions or choice of source is known; however, its validity can be checked by looking at the data $({\mathcal{H}}^k)_{k=1}^K$.
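Whether the determinant condition holds can be checked directly on the measured data: inside each subregion $\nabla{\mathcal{H}}^k=\Gamma\mu\,\nabla u^k$ with $\Gamma\mu>0$ constant, so the determinants built from $\nabla{\mathcal{H}}^k$ vanish exactly where those built from $\nabla u^k$ do. A possible voxel-wise check (our own sketch, with finite-difference gradients) is:

```python
import numpy as np
from itertools import combinations

def detcond_satisfied(H_list, h=1.0, tol=1e-12):
    """Check max_{i<j<k} |det(grad H^i, grad H^j, grad H^k)| > tol at every voxel.

    H_list : list of 3D arrays (one per illumination), sampled with grid spacing h.
    Returns a boolean array, True where the determinant condition holds.
    """
    grads = [np.stack(np.gradient(H, h), axis=-1) for H in H_list]   # shape (..., 3)
    best = np.zeros(H_list[0].shape)
    for i, j, k in combinations(range(len(H_list)), 3):
        M = np.stack([grads[i], grads[j], grads[k]], axis=-2)        # shape (..., 3, 3)
        best = np.maximum(best, np.abs(np.linalg.det(M)))
    return best > tol
```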
Numerical reconstruction {#sec:numerics}
========================
In this section, we show how the results in the last section can be utilized numerically. Our goal is to estimate unknown piecewise constant parameters $\mu,D,\Gamma$ from noisy three-dimensional photoacoustic data $({\mathcal{H}}^k)_{k=1}^K$ (with varying boundary excitations) sampled on a regular grid.
We propose a two-step reconstruction:
- Detect jumps in $({\mathcal{H}}^k)_{k=1}^K$, $(\nabla {\mathcal{H}}^k)_{k=1}^K$, $(\Delta {\mathcal{H}}^k)_{k=1}^K$ and use the obtained surfaces to segment the image domain $\Omega$ to estimate subregions $(\hat{\Omega}_m)_{m=1}^M$ where the parameters are constant (and thus ${\mathcal{H}}$ is smooth).
- Given $(\hat\Omega_m)_{m=1}^M$ and reference values $(\mu_1,D_1,\Gamma_1)$, use the jump values of $({\mathcal{H}}^k)_{k=1}^K$ and $(\nabla {\mathcal{H}}^k \cdot \nu)_{k=1}^K$ across the estimated interfaces and values of $(\frac{\Delta {\mathcal{H}}^k}{{\mathcal{H}}^k})_{k=1}^K$ to find $\mu,D,\Gamma$ everywhere in $\Omega$ (using equations -).
Finding regions where the parameters are constant {#sub:num_regions}
-------------------------------------------------
In the proof of Proposition \[prop:jump\_detection\], one can see that in regions $\Omega$ where holds, discontinuities of the (piecewise constant) parameters $\mu,D,\Gamma$ correspond to jumps of ${\mathcal{H}}$, $\nabla {\mathcal{H}},\Delta {\mathcal{H}}$. We want to use *computational edge detection* to find these jumps.
We start by finding jumps in ${\mathcal{H}}$. Since the jumps are multiplicative (i.e., $\frac{{\mathcal{H}}_m}{{\mathcal{H}}_n}$ is constant on $I_{mn}$), we apply a logarithm transformation to get constant jumps along the interfaces. In fact, let $I_{mn}=\partial\Omega_m \cap \partial\Omega_n$ be an interface of the parameters $\mu,D,\Gamma$. Since $u_m=u_n$ on $I_{mn}$, we have
$$\label{eq:jump_Gamma_mu}
\left|\log({\mathcal{H}}_n)-\log({\mathcal{H}}_m) \right|=\left|\log(\Gamma_n \mu_n)-\log(\Gamma_m\mu_m) \right| \text{ on } I_{mn},$$
so jumps in $\Gamma \mu$ lead to jumps of equal magnitude in $\log {\mathcal{H}}$.
Next, we show that jumps in $D$ (that are large enough compared to those in $\Gamma\mu$) lead to jumps in $\log|\nabla {\mathcal{H}}|$. We restrict our search domain to $\Omega' \subset \Omega$ such that $|\nabla {\mathcal{H}}| \geq d > 0$ holds in $\Omega'$. Due to continuity of $u$ we have $\nabla u_m \cdot \tau = \nabla u_n \cdot \tau$ (for tangential vectors $\tau$) on parts of $I_{mn}$ that are $C^1$. Thus, we obtain on parts of $I_{mn} \cap \Omega'$ where holds, $$\label{eq:jump_nabla_u}
\begin{aligned}
\frac{|\nabla u_n|^2}{|\nabla u_m|^2} &= \frac{(\nabla u_n \cdot \nu)^2 + (\nabla u_n \cdot \tau)^2}{|\nabla u_m|^2} \stackrel{\eqref{eq:transmission_cond}}{=} \left(\frac{D_m}{D_n}\right)^2 \frac{ (\nabla u_m \cdot \nu)^2}{|\nabla u_m|^2} + \frac{ (\nabla u_m \cdot \tau)^2}{|\nabla u_m|^2} \\
&= 1 + \left( \left(\frac{D_m}{D_n}\right)^2 - 1 \right) \cos(\alpha_m)^2, \\
\end{aligned}$$ where $\alpha_m$ denotes the angle between the unit normal $\nu$ and $\nabla u_m$. Using $D_m \geq D_n$ (without loss of generality, otherwise we swap indices), we get $$\begin{aligned}
\left| \log|\nabla u_n| - \log|\nabla u_m| \right| &\geq \frac 1 2 \log \left( 1 + \left( e^{2 |\log D_m - \log D_n|} - 1 \right) \min_{\Omega,k} \cos(\alpha_k)^2 \right) \\
&:= \gamma\left(| \log D_m - \log D_n| \right).\end{aligned}$$ If $\min_{\Omega,k} \cos(\alpha_k)^2 > 0$ holds in $\Omega'$, the function $\gamma$ is positive, strictly increasing and unbounded. Hence, using the reverse triangle inequality, $$\label{eq:jump_D}
\begin{aligned}
|\log(|&\nabla {\mathcal{H}}_n|) - \log(|\nabla {\mathcal{H}}_m|) | = \left|\log(\Gamma_n\mu_n)-\log(\Gamma_m\mu_m)+\log|\nabla u_n|-\log|\nabla u_m| \right| \\
&\geq \left|\log|\nabla u_n|-\log|\nabla u_m| \right| - \left|\log(\Gamma_n\mu_n)-\log(\Gamma_m\mu_m) \right| \\
&\geq \gamma \left(| \log D_m - \log D_n| \right) - |\log(\Gamma_n \mu_n) - \log(\Gamma_m \mu_m)|.
\end{aligned}$$
Finally, since $\frac{\Delta {\mathcal{H}}_m}{{\mathcal{H}}_m}=\frac{\mu_m}{D_m}$ for all $m$, we get on $I_{mn}$ $$\left|\log \left(\frac{\Delta {\mathcal{H}}_m}{{\mathcal{H}}_m} \right)-\log \left(\frac{\Delta {\mathcal{H}}_n}{{\mathcal{H}}_n} \right)\right| = \left|\log \left(\frac{\mu_m}{D_m} \right) - \log \left(\frac{\mu_n}{D_n} \right)\right|$$ which shows that jumps in $\log \left(\frac{\mu}{D}\right)$ lead to jumps of equal magnitude in $\log \left(\frac{\Delta {\mathcal{H}}}{{\mathcal{H}}}\right)$.
To ensure that $|\nabla {\mathcal{H}}| \geq d > 0$ holds, we enforce a minimum for $|\nabla {\mathcal{H}}|$ (to avoid creating singularities). We counter failure of $\min_{\Omega,k} \cos(\alpha_k)^2 > 0$ by using additional measurements.
To estimate $(\Omega_m)_{m=1}^M$ given noisy data ${\mathcal{H}}^k$, we first look for jumps in $\log({\mathcal{H}}^k)$, then $\log |\nabla {\mathcal{H}}^k|$ and last in $|\log(\Delta {\mathcal{H}}^k) - \log({\mathcal{H}}^k)|$. More precisely, we proceed as follows:
- Find $\hat{J}_0 \subset \Omega$, a surface across which $\log{\mathcal{H}}^k$ jumps more than some threshold $\tau_0$. Segment the domain $\Omega$ using $\hat{J}_0$ (i.e., find the connected components of $\Omega \setminus \hat{J}_0$), giving subsets $(\hat{\Omega}^0_i)_{i=1}^I$, an estimate of the regions where $\Gamma\mu$ is constant.
- In all $\hat{\Omega}^0_i$, search for jumps in $\log |\nabla {\mathcal{H}}^k|$ that are bigger than threshold $\tau_1$, obtaining sets $\hat{J}^i_1 \subset \hat{\Omega}^0_i$. Take $\hat{J}_1=\bigcup_i \hat{J}^i_1 \cup \hat{J}_0$ and segment $\Omega$ using $\hat{J}_1$ to get $(\hat{\Omega}^1_i)_{i=1}^I$, an estimate for the regions where $\Gamma\mu$ and $D$ are constant.
In all $\hat{\Omega}^1_i$, search for jump sets $\hat{J}^i_2 \subset \hat{\Omega}^1_i$ of $|\log \Delta {\mathcal{H}}^k - \log {\mathcal{H}}^k|$, with values above a lower threshold $\tau_2$. We get $\hat{J}_2=\bigcup_i \hat{J}^i_2 \cup \hat{J}_1$, our estimate for the complete jump set $\overline{J_0(\mu)} \cup \overline{J_0(D)} \cup \overline{J_0(\Gamma)}$. Finally, by segmenting $\Omega$ using $\hat{J}_2$ we get $(\hat{\Omega}_m)_{m=1}^M$, an estimate for the regions where $\Gamma\mu,D,\frac \mu D$ (and thus also the parameters $\mu,D,\Gamma$) are constant (a code sketch of this three-stage procedure is given below).
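A compact sketch of this three-stage pipeline might look as follows; `detect_edges` stands for a hypothetical edge detector (e.g. the differential Canny detector of Appendix \[sec:canny\]) returning a boolean voxel mask, and `scipy.ndimage.label` performs the segmentation into connected components:

```python
import numpy as np
from scipy import ndimage

def segment_regions(logH, log_grad_H, log_lap_over_H, detect_edges, tau0, tau1, tau2):
    """Three-stage estimate of the subregions where the parameters are constant.

    logH, log_grad_H, log_lap_over_H : 3D arrays (averaged over illuminations).
    detect_edges : hypothetical helper (array, threshold) -> boolean voxel mask of jump
                   surfaces; it is assumed to ignore NaN entries.
    Returns (labels, J): labels[x] is the estimated region index, J the final jump mask.
    """
    J = detect_edges(logH, tau0)                              # stage (1): jumps of Gamma*mu
    labels, _ = ndimage.label(~J)                             # segment Omega \ J_0
    for img, tau in [(log_grad_H, tau1),                      # stage (2): jumps of D
                     (log_lap_over_H, tau2)]:                 # stage (3): jumps of mu/D
        J_new = np.zeros_like(J)
        for r in range(1, labels.max() + 1):
            mask = labels == r
            local = np.where(mask, img, np.nan)               # restrict detection to one region
            J_new |= detect_edges(local, tau) & mask
        J |= J_new
        labels, _ = ndimage.label(~J)                         # re-segment with the new jumps
    return labels, J
```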
We can take advantage of multiple measurements $({\mathcal{H}}^k)_{k=1}^K$ (with different illuminations) by detecting edges separately for all ${\mathcal{H}}^k$ (and their derivatives) and joining the edge sets prior to segmentation in each step (1)-(3), or, more simply, by averaging the input data for edge detection in each of the steps (1)-(3) (we implemented the second strategy). Using multiple measurements can be vital to counter locally missing contrast due to failure of condition or close to extremal points of ${\mathcal{H}}$.
For the actual jump detection, we use *Canny edge detection* in differential form as proposed by Lindeberg (cf. [@Can86] and [@Lin98]). See Appendix \[sec:canny\] for a short description of the method.
Estimating optical parameters {#sub:est_parameters}
-----------------------------
In the second stage of the reconstruction process, we want to estimate $\mu,D,\Gamma$ from photoacoustic data $({\mathcal{H}}^k)_{k=1}^K$ (sampled on a regular grid) given an estimate of the sets $(\Omega_m)_{m=1}^M$ (from the previous section) and reference values, for which we choose $(\mu_1,D_1,\Gamma_1)$ (without loss of generality). For simplicity, we first explain the procedure for a single measurement ${\mathcal{H}}$.
In the proof of Proposition \[prop:uniqueness\], evaluations of ${\mathcal{H}}_m$,$\nabla {\mathcal{H}}_m \cdot \nu$ and $\Delta {\mathcal{H}}_m$ at isolated points were sufficient to obtain all parameters. In the presence of noise and discretization error it is, however, better to use all the jump information available. Rather than calculating $\mu_m,D_m,\Gamma_m$ in an arbitrary order using equations - we use a least-squares fitting method to calculate $\Gamma\mu,\frac{D}{\Gamma\mu},\frac{\mu}{D}$ in all $\Omega_m$ simultaneously.
Since the data ${\mathcal{H}}$ contains noise and is only known on a grid, we can only calculate the values of ${\mathcal{H}}_m$ (whose values may not be known precisely on interfaces), $\nabla {\mathcal{H}}_m \cdot \nu$ and $\Delta {\mathcal{H}}_m$ up to some error. For $m=1,\ldots,M$, $y \in \partial\Omega_m$, $z \in \Omega_m$ and $x \in \partial\Omega_m$ with $|\nabla{\mathcal{H}}_m(x) \cdot \nu(x)| > 0$ let $h_m, g_m, l_m$ be the approximations $$\label{eq:estpar_approx}
\begin{aligned}
h_m(y) &\approx \log {\mathcal{H}}_m(y) \\
g_m(x) &\approx \log |\nabla{\mathcal{H}}_m(x) \cdot \nu(x)| \\
l_m(z) &\approx \log \left( \frac{\Delta {\mathcal{H}}_m(z)}{{\mathcal{H}}_m(z)} \right).
\end{aligned}$$
From and , we get on $I_{mn}$ $$\label{eq:estpar_error_1}
\log(\Gamma_m \mu_m)-\log(\Gamma_n\mu_n)=\log({\mathcal{H}}_m)-\log({\mathcal{H}}_n) = h_m - h_n + \epsilon_1 \\$$ and $$\label{eq:estpar_error_2}
\begin{aligned}
\log \left(\frac{D_m}{\Gamma_m \mu_m} \right) - \log \left(\frac{D_n}{\Gamma_n \mu_n} \right) &= \log|\nabla {\mathcal{H}}_n \cdot \nu| - \log|\nabla {\mathcal{H}}_m \cdot \nu|\\
&= g_n - g_m + \epsilon_2,
\end{aligned}$$ with $\epsilon_1, \epsilon_2$ denoting error terms. Now, we can estimate $$\hat a_m \approx \log(\Gamma_m \mu_m), \quad \hat b_m \approx \log \left(\frac{D_m}{\Gamma_m \mu_m} \right)$$ for $m>2$ ($a_1, b_1$ can be calculated from the reference values) by choosing values which minimize the $L^2$-norm of the error terms $\epsilon_1$ and $\epsilon_2$ over all interfaces, that is, by solving the least squares problems $$\label{eq:estpar_lsq}
\begin{aligned}
(\hat{a}_2,\ldots,\hat{a}_M) &= \arg\min_{a_2,\ldots,a_M} \sum_{\substack{n,k=1 \\ k>n}}^M {\left\| a_k - a_n + h_n - h_k \right\|}^2_{L^2(I_{nk})} \\
(\hat{b}_2,\ldots,\hat{b}_M) &= \arg\min_{b_2,\ldots,b_M} \sum_{\substack{n,k=1 \\ k>n}}^M {\left\| b_k - b_n - g_n + g_k \right\|}^2_{L^2(\tilde I_{nk})}.
\end{aligned}$$ In the second least squares problem, we restrict the calculation to $\tilde I_{nk}$, a subset of $I_{nk}$ where $g_n,g_k$ are bounded (i.e., where $|\nabla{\mathcal{H}}_n \cdot \nu|$ and $|\nabla{\mathcal{H}}_k \cdot \nu|$ are bounded away from zero).
A simple calculation shows that the optimizers $\hat{a},\hat{b}$ satisfy for $2 \leq m \leq M$ $$\label{eq:estpar_loglin_sol}
\begin{aligned}
\sum_{k \neq m} (\hat{a}_m - \hat{a}_k) \mathcal{A}(I_{mk}) &= \sum_{k \neq m} \int_{I_{mk}} h_m - h_k \,dS\\
\sum_{k \neq m} (\hat{b}_m - \hat{b}_k) \mathcal{A}(\tilde I_{mk}) &= \sum_{k \neq m} \int_{\tilde I_{mk}} g_k - g_m \,dS,
\end{aligned}$$ where $\mathcal{A}(I_{mk})$ denotes the area of the interface $I_{mk}=\partial\Omega_m \cap \partial\Omega_k$ (analogously for $\tilde I_{mk}$). Since the corresponding system matrices are irreducibly diagonally dominant, the optimizers $\hat{a},\hat{b}$ are unique (see, e.g., [@HorJoh90 Theorem 6.2.27]).
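Assuming the interface areas and the integrated jump values have already been assembled, these normal equations form a graph-Laplacian-type linear system that is straightforward to solve. A sketch (0-based indices, region 0 playing the role of the reference region $\Omega_1$):

```python
import numpy as np

def solve_offsets(area, rhs, a1):
    """Solve the normal equations for the log-offsets a_2, ..., a_M.

    area : (M, M) symmetric matrix, area[m, k] = area of the interface I_{mk}
           (zero if the regions are not adjacent).
    rhs  : length-M vector, rhs[m] = sum over k of the integral of (h_m - h_k) over I_{mk}.
    a1   : known value a_1 = log(Gamma_1 mu_1) in the reference region (index 0 here).
    """
    M = area.shape[0]
    A = np.diag(area.sum(axis=1)) - area          # graph-Laplacian-type matrix
    b = np.asarray(rhs, dtype=float).copy()
    b -= A[:, 0] * a1                             # move the known unknown to the right-hand side
    a = np.empty(M)
    a[0] = a1
    a[1:] = np.linalg.solve(A[1:, 1:], b[1:])     # irreducibly diagonally dominant => unique
    return a
```

The same routine can be reused for $\hat b$, with `area` replaced by the areas of $\tilde I_{mk}$ and `rhs[m]` built from $\int_{\tilde I_{mk}} (g_k - g_m)\,dS$.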
In , one can see that in the special case where $\Omega_1$ is the background and $\Omega_2,\ldots,\Omega_M$ are inclusions with no shared boundaries, our approach is equivalent to adding to $\log(\Gamma_1 \mu_1)$ (respectively $\log\left(\frac{D_1}{\Gamma_1 \mu_1}\right)$) the estimated jump values $h_k - h_1$ (respectively $g_1-g_k$) averaged over $\partial\Omega_k$.
Now, using and , we have in $\Omega_m, \ m=1,\ldots,M$ $$\label{eq:estpar_error_3}
\log \left( \frac{\mu_m}{D_m} \right) = \log \left( \frac{\Delta {\mathcal{H}}_m}{{\mathcal{H}}_m} \right) = l_m + \epsilon_3$$ for some error term $\epsilon_3$. As before, we can estimate $$\hat{c}=\log \left( \frac{\mu}{D} \right )$$ by minimizing the error term $\epsilon_3$. We obtain for $m=1,\ldots,M$ $$\label{eq:estpar_quad_sol}
\hat{c}_m = \arg\min_c {\left\| c - l_m \right\|}_{L^2(\Omega_m)}^2 = \frac{1}{\mathcal{V}(\Omega_m)}\int_{\Omega_m} l_m \,dx,$$ where $\mathcal{V}(\Omega_m)$ denotes the volume of $\Omega_m$, i.e., we take the mean of $l_m$ in $\Omega_m$. Finally, from $\hat a, \hat b, \hat c$, we can calculate $\hat\mu,\hat D,\hat\Gamma$ with .
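Once the three logarithmic quantities are available per region, the final parameters follow from $(\mu,D,\Gamma)=(ABC,\,AB,\,\frac{1}{BC})$; a minimal sketch:

```python
import numpy as np

def assemble_parameters(a, b, c):
    """Combine per-region estimates a = log(Gamma*mu), b = log(D/(Gamma*mu)), c = log(mu/D)
    into (mu, D, Gamma) via (mu, D, Gamma) = (A*B*C, A*B, 1/(B*C))."""
    A, B, C = np.exp(a), np.exp(b), np.exp(c)
    return A * B * C, A * B, 1.0 / (B * C)

# c[m] is simply the mean of l_m = log(Delta H_m / H_m) over region m, e.g.
# c[m] = np.nanmean(l[labels == m])   (assuming l has been evaluated voxel-wise)
```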
The use of multiple measurements simply amounts to an additional summation in and , which corresponds to minimizing over the sum of all measurements.
Implementation {#sub:implementation}
--------------
We implemented the ideas presented in the last sections in *MATLAB*. The (possibly noisy) photoacoustic pressure data $({\mathcal{H}}^k)_{k=1}^K$ is given, sampled on a regular 3D grid with sufficiently high resolution.
Following the scheme presented in \[sub:num\_regions\], we first estimate subregions $(\hat{\Omega}_m)_{m=1}^M$ where $\mu,D,\Gamma$ are constant by using computational edge detection and then segmenting $\Omega$ using the obtained jump sets.
To detect jumps we use differential *Canny edge detection* (see Appendix \[sec:canny\] for details). The derivatives are estimated via finite differences (after low-pass filtering with a Gaussian kernel). We obtain jump surfaces with sub-voxel resolution in the form of a triangular mesh. For segmentation, we applied the *MATLAB* image processing toolbox function *bwconncomp*, which works on a voxel level (small holes in the jump sets, for instance at corners, can be closed up by increasing the thickness of the voxelized surfaces).
Given the jump surfaces and estimated regions $(\hat{\Omega}_m)_{m=1}^M$, in order to approximate $h_m \approx \log{\mathcal{H}}_m$ and $g_m \approx \log|\nabla {\mathcal{H}}_m \cdot \nu|$ (cf. ), we fit for every triangular element $e$ (with incenter $y$) of the surface a log-linear function $f_m^e$ to the data ${\mathcal{H}}_m$ at nearby grid points (using a Gaussian weight function that gives grid points closer to $y$ a larger weight). By taking $h_m^e = \log f_m^e(y)$ and $g_m^e = \log |\nabla f_m^e(y) \cdot \nu(y)|$ at $y$, we get approximations $h_m$ and $g_m$ that are piecewise constant on the surface elements $e$. We obtain $\hat{a} \approx \log(\Gamma \mu)$ and $\hat{b} \approx \log \left(\frac{D}{\Gamma \mu} \right)$ by solving .
Similarly, we use to estimate $\hat{c} \approx \log \left( \frac{\mu}{D} \right)$. Here, we locally (at grid points $z$ inside the estimated regions $\hat\Omega_m$) fit quadratic functions $q_m^z$ to the data $H_m$, calculate $l_m = \log \left| \frac{\Delta q_m^z}{{\mathcal{H}}_m(z)} \right| \approx \log \left( \frac{\Delta {\mathcal{H}}_m}{{\mathcal{H}}_m} \right)$ and average over $z$ to obtain $\hat{c}$ (since the fitting procedure is computationally intensive, this calculation is only performed on a random sample of the grid points, replacing the total average with the sample average).
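One concrete form of the local fitting step is a weighted least-squares fit of $\log{\mathcal{H}}$ by an affine function near a surface point $y$. The following is our own sketch; the neighbourhood selection and the Gaussian weight width are implementation choices not fixed by the text:

```python
import numpy as np

def loglinear_fit(points, values, y, sigma):
    """Weighted least-squares fit of log(H) ~ c0 + c.(x - y) near a surface point y.

    points : (N, 3) grid points on one side of the interface, values : H at these points,
    sigma  : width of the Gaussian weight.  Returns (c0, c) with c0 ~ log H(y) and
    c ~ grad(log H)(y).
    """
    X = np.hstack([np.ones((len(points), 1)), points - y])     # design matrix [1, x - y]
    w = np.exp(-np.sum((points - y) ** 2, axis=1) / (2 * sigma ** 2))
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(W * X, np.sqrt(w) * np.log(values), rcond=None)
    return coef[0], coef[1:]
```

Since $\nabla f = f\,\nabla\log f$, the quantities used above are then $h_m^e = c_0$ and $g_m^e = c_0 + \log|c\cdot\nu(y)|$.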
Numerical examples {#sec:examples}
==================
In this section, we apply the numerical method described in the last section to simulated data. We start with a simple example using FEM-generated data with no added noise.
In the second example, we work with Monte Carlo generated data with added noise. The Monte Carlo method for photon transfer in random media (which is physically more accurate than the diffusion approximation) converges to solutions of the *radiative transfer equation* and thus satisfies our model only approximately (see, e.g., [@WanWu07] for details).
Example using FEM-generated data {#sub:fem_data}
--------------------------------
In the first example, we simulated a single photoacoustic measurement (using one illumination pattern only) directly in the diffusion approximation, with no added noise.
We placed, centered at $z=5$, four spherical inhomogeneities (cf. Figure \[fig:example\_fem\_setup\]) into a rectangular grid $(x,y,z) \in [0,20] \times [0,10] \times [0,10]$ with resolution $320\times160\times160$. The fluence $u$ is calculated by numerically solving the PDE with homogeneous Dirichlet boundary conditions (simulating a uniform illumination). For this purpose, we use a self-written *MATLAB* finite element solver (that splits the grid into a tetrahedral mesh and then uses linear basis elements). To get simulated initial pressure data ${\mathcal{H}}$, we re-sampled $u$ at the grid centerpoints and built ${\mathcal{H}}=\Gamma \mu u$ by multiplication with $\Gamma\mu$ (see Figure \[fig:example\_fem\_data\]).
In Figure \[fig:example\_fem\_data\], one can see how the inhomogeneities affect the data ${\mathcal{H}}$ (cf. Proposition \[prop:jump\_detection\]). Spheres 1 and 4 have contrast in $\Gamma \mu$ with respect to the background, so their boundaries are visible in ${\mathcal{H}}$. Sphere 2 displays contrast in $D$, but not in $\Gamma\mu$; hence its interface with the background can be seen in $|\nabla {\mathcal{H}}|$. Since in this particular example the field $\nabla u$ is never parallel to the sphere’s boundary, the whole boundary is visible. Sphere 3 has the same $\Gamma \mu$ and $D$ as the background, so it is only visible in $|\Delta {\mathcal{H}}|$.
Figure \[fig:example\_fem\_results\] shows the reconstruction results. As reference values, we used the values of $D$ and $\Gamma$ in the background (Region 1). All parameter discontinuities were recovered. Without noise, by far the biggest accuracy bottleneck is the estimation of jumps in $D$ from the normal components of $\nabla {\mathcal{H}}$, in particular for smaller structures (with respect to the resolution). The estimation of $\mu \Gamma$ and $\frac{\mu}{D}$ works almost perfectly for this type of data.
Example using Monte-Carlo-generated data {#sub:monte_carlo_data}
----------------------------------------
For the second numerical example, we used *MMC*, an open source 3D Monte-Carlo photon transfer simulator by Qianqian Fang (see [@Fan10] for details), to simulate photoacoustic measurements.
We again placed four inhomogeneities, centered at $z=5$, into a homogeneous background cubic grid $(x,y,z) \in [0,10] \times [0,10] \times [0,10]$ with resolution $150\times150\times150$ (cf. Figure \[fig:example\_mmc\_setup\]). Note that two of the structures touch (Regions 2 and 3). We deliberately chose the material parameters such that there is always enough contrast in $\Gamma\mu$ and $D$ so that edge detection in $\frac{\Delta {\mathcal{H}}}{{\mathcal{H}}}$ is not necessary (this proved to be very tricky in the presence of noise since it uses second order differences). Using *MMC*, we calculated fluences $u^k, \,k=1,\ldots,6$, for $6$ different sources (placed at the center of each of the cube’s faces). We again re-sampled the $u^k$ at the grid centerpoints, built initial pressure data ${\mathcal{H}}^k=\Gamma \mu u^k$ (by multiplication with $\Gamma\mu$) and added $5\%$ multiplicative Gaussian noise (which corresponds to a constant signal-to-noise ratio of about $26\,\mathrm{dB}$).
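For reference, building the noisy initial pressure from a simulated fluence is a one-liner; a $5\%$ multiplicative noise level corresponds to an SNR of $-20\log_{10}(0.05)\approx 26\,\mathrm{dB}$, as quoted above (sketch with an arbitrary random seed):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy_pressure(u, Gamma_mu, noise_level=0.05):
    """Build H = Gamma*mu*u and add multiplicative Gaussian noise.

    A relative noise level of 5% corresponds to an SNR of about 26 dB."""
    H = Gamma_mu * u
    return H * (1.0 + noise_level * rng.standard_normal(H.shape))
```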
Figure \[fig:example\_mmc\_data\] shows a Monte-Carlo-simulated fluence $u^1$ and initial pressure ${\mathcal{H}}^1$ (with the light source at the top, shown on the $z=5$ plane cut). Regions 3 and 5 are visible in $\log{\mathcal{H}}^1$ due to contrast in $\Gamma\mu$. Regions 2 and 4 appear in $\log|\nabla {\mathcal{H}}^1|$. At some parts of the region boundaries, $\nabla u$ is parallel to the boundary, which leads to vanishing contrast. Taking the mean of $\log |\nabla {\mathcal{H}}^k|$ (over the $6$ sources), the whole boundary becomes visible.
  -------------- ------------------------- ------------------------- -------------------------
                 $\mu$ (rel. error)        $D$ (rel. error)          $\Gamma$ (rel. error)
  -------------- ------------------------- ------------------------- -------------------------
  **Region 1**   0.012 (19.8%)             0.166 (0%)                1.0000 (0%)
  **Region 2**   0.014 (41.2%)             0.077 (39.2%)             0.839 (16.1%)
  **Region 3**   0.011 (11.5%)             0.190 (14.7%)             1.284 (7%)
  **Region 4**   0.023 (16.2%)             0.447 (16.9%)             0.511 (2.1%)
  **Region 5**   0.007 (19.2%)             0.128 (15.5%)             0.806 (0.8%)
  -------------- ------------------------- ------------------------- -------------------------
Figure \[fig:example\_mmc\_results\] shows the reconstruction results. As reference values, we again used the values of $D$ and $\Gamma$ in the background. All parameter discontinuities were recovered. As before, errors in the estimation of jumps in $D$ from the normal components of $\nabla {\mathcal{H}}$ were the most significant.
Conclusion {#sec:conclusion}
==========
Our theoretical analysis shows that in many cases (e.g., if enough measurements such that holds in the region of interest are available), unique reconstruction of piecewise constant $\mu,D,\Gamma$ from photoacoustic measurements at a single wavelength is possible. Our numerical implementation of the analytical reconstruction procedure works with reasonable accuracy, even with Monte Carlo generated data (which only approximately satisfies the diffusion approximation used for reconstruction). Our numerical method, however, requires data with very high resolution and large parameter contrast. In addition, since we use second derivatives of the data, our method is very sensitive to noise, so its use with real data might turn out to be challenging.
Acknowledgements {#sec:acknowledgements}
================
This work has been supported by the Austrian Science Fund (FWF) within the national research network Photoacoustic Imaging in Biology and Medicine (project S10505-N20) and by the IK I059-N funded by the University of Vienna.
Derivation of transmission formulation {#sec:transmission_cond}
======================================
In this section (following the proof in [@AttButMic06]), we prove that under some regularity assumptions, a function $u$ is a weak solution of $$\label{eq:dv_diff_eq}
-{\operatorname{div}}(D \nabla u) + \mu u = 0 \quad \text{in $\Omega$}$$ with piecewise smooth parameters $\mu,D$ if and only if
- $u$ a classical solution in regions where the parameters are smooth,
- $u$ is continuous,
- the transmission condition holds at the jumps.
Let $\Omega \subset \mathbb{R}^n$ and $(\Omega_m)_{m=1}^M$ be piecewise-$C^1$ domains such that $\overline\Omega= \bigcup_{m=1}^M \overline\Omega_m$.
Denote by $T$ the part of the subregion boundaries that is $C^1$ and in the closure of at most two subregions. We require that the partition is chosen such that $\mathcal{H}^{n-1}(\bigcup_{m=1}^M \partial\Omega_m \setminus T)=0$, i.e., the set of junctions where three or more subregions meet or where the boundary is not $C^1$ has zero surface measure.
Furthermore, let the parameters $\mu, D > 0$ be bounded and piecewise smooth, i.e., of the form $$\label{eq:dv_parameters}
\enskip \mu = \sum_{m=1}^M \mu_m 1_{\Omega_m}, \enskip D = \sum_{m=1}^M D_m 1_{\Omega_m}$$ with $\mu_m, D_m \in C^{\infty}(\overline\Omega_m)$. For a corresponding solution $u$ of , let $$u_m:= u|_{\Omega_m}, \, m=1,\ldots,M.$$
\[prop:transmission\_cond\]\
Let $u$ be a weak solution of . Furthermore, let $u_m, \, m=1,\ldots,M$ satisfy $$\label{eq:dv_regularity_additional}
u_m \in C^1(\overline\Omega_m \cap B) \quad \text{for all $B$ with $B \cap T = \emptyset$.}$$ Then $u \in C^\alpha(\Omega)$ for some $\alpha > 0$ and $u_m \in C^\infty(\Omega_m)$. Additionally, the restrictions $u_m$ satisfy $$\label{eq:dv_strong_eq}
-{\operatorname{div}}( D_m \nabla u_m )+ \mu_m u_m = 0 \quad \text{in $\Omega_m$ }$$ and, almost everywhere on interfaces $I_{mn}=\partial\Omega_m \cap \partial\Omega_n$, $$\label{eq:dv_transmission_cond}
D_m \nabla u_m \cdot \nu = D_n \nabla u_n \cdot \nu \quad \text{(for any interface normal $\nu$)}.$$
A weak solution $u$ of satisfies $u \in H^1(\Omega)$ and $$\label{eq:dv_weak_solution_subset}
\int_\Omega D \nabla u \cdot \nabla \phi + \mu u \phi \,dx = 0 \text{ for all } \phi \in C^\infty_c(\Omega).$$
Since the equation is elliptic, we have $u_m \in C^\infty(\Omega_m)$ by interior regularity [@GilTru01 Corollary 8.11] and hence, from integration by parts, $\int_\Omega -{\operatorname{div}}(D_m \nabla u_m) \phi + \mu_m u_m \phi \,dx= 0$ for all $\phi \in C^\infty_c(\Omega_m)$, which shows that holds classically in $\Omega_m$, $m=1,\ldots,M$.
From De Giorgi-Nash-Moser theorem [@GilTru01 Theorem 8.22] we get $u \in C^\alpha(\overline\Omega)$ for some $\alpha > 0$.
Next, let $I_{mn}=\partial\Omega_m \cap \partial\Omega_n$ be the interface between some $\Omega_m$ and $\Omega_n$. For almost all $x \in I_{mn}$ (those in $T$), there exists an open ball $B \Subset \Omega$ such that $x \in B = B_m \cup B_n = (\Omega_m \cap B) \cup (\Omega_n \cap B)$ (by the restriction on the partition).
Using integration by parts and we get for all $\phi \in C^\infty_c(B) \subset C^\infty_c(\Omega)$ $$\begin{aligned}
0 &= \int_\Omega D \nabla u \cdot \nabla \phi + \mu u \phi \,dx \\
&= \int_{\Omega_m} D_m \nabla u_m \cdot \nabla \phi + \mu_m u_m \phi \,dx + \int_{\Omega_n} D_n \nabla u_n \cdot \nabla \phi + \mu_n u_n \phi \,dx \\
&= \int_{\partial B_m} (D_m \nabla u_m \cdot \nu) \phi + \int_{\partial B_n} (D_n \nabla u_n \cdot \nu) \phi \,dS \\
&= \int_{I_{mn} \cap B} (D_m \nabla u_m \cdot \nu - D_n \nabla u_n \cdot \nu) \phi \,dS.
\end{aligned}$$ The transmission condition follows since $\nabla u_m, \nabla u_n \in C(I_{mn} \cap B)$ (by assumption ).
For certain partition geometries, weak solutions of always satisfy condition . For instance, Li and Nirenberg [@LiNir03 Proposition 1.4] showed that if $(\Omega_m)_{m=2}^M$ are inclusions with smooth boundaries (which may also touch in some points) and background $\Omega_1$, one gets $u_m \in C^\infty(\overline\Omega_m)$.
For sufficiently regular $u_m$ (e.g., in the setting just described) we can also derive the converse of Lemma \[prop:transmission\_cond\]:
\[prop:transmission\_cond\_converse\] Let $u_m:= u|_{\Omega_m} \in C^2(\overline\Omega_m), \,m=1,\ldots,M,$ satisfy $$\label{eq:dv_strong_eq_converse}
-{\operatorname{div}}( D_m \nabla u_m )+ \mu_m u_m = 0 \quad \text{in $\Omega_m$ } \\$$ and, on interfaces $I_{mn}=\partial\Omega_m \cap \partial\Omega_n$ with normal $\nu$, $$\label{eq:dv_transmission_cond_converse}
\begin{aligned}
u_m &= u_n \\
D_m \nabla u_m \cdot \nu &= D_n \nabla u_n \cdot \nu.
\end{aligned}$$ Then $u$ is a weak solution of .
To get $u \in H^1(\Omega)$, we first show that the weak gradient of $u$ is given by $$\label{eq:dv_weak_gradient}
\nabla u = \sum_{m=1}^M \nabla u_m 1_{\Omega_m}.$$
To see that, note that for all $\phi \in C^\infty_c(\Omega)$ $$-\int_\Omega u \nabla \phi \,dx = -\sum_{m=1}^M \int_{\Omega_m} u_m \nabla \phi \,dx = \sum_{m=1}^M \left(\int_{\Omega_m} \nabla u_m \phi \,dx - \int_{\partial\Omega_m} u_m \phi\nu \,dS \right).$$
The interior boundary terms cancel out due to $u_m = u_n$, and the exterior boundary terms vanish since $\operatorname{supp}\phi \Subset \Omega$, so the weak gradient of $u$ is given by . Hence $u \in H^1(\Omega)$ since $$\begin{aligned}
{\left\| u \right\|}_{H^1(\Omega)}^2 &= \int_\Omega |u|^2 + |\nabla u|^2 \,dx = \sum_{m=1}^M \left( \int_{\Omega_m} |u_m|^2 + \left|\nabla u_m \right|^2 \,dx \right) \\
&= \sum_{m=1}^M {\left\| u_m \right\|}_{H^1(\Omega_m)}^2 \leq C \sum_{m=1}^M {\left\| u_m \right\|}_{W^{1,\infty}(\overline\Omega_m)}^2 < \infty
\end{aligned}$$
Furthermore, using integration by parts, and imply $$\begin{aligned}
\int_{\Omega} D \nabla u \cdot \nabla \phi + \mu u \phi \,dx &= \sum\limits_{m=1}^M \int_{\Omega_m} D_m \nabla u_m \cdot \nabla \phi + \mu_m u_m \phi \,dx \\
&= \sum_{m=1}^M \int_{\partial\Omega_m} D_m (\nabla u_m \cdot \nu) \phi \,dS = 0
\end{aligned}$$
for $\phi \in C^\infty_c(\Omega)$ since the boundary terms cancel out due to and $\operatorname{supp}\phi \Subset \Omega$, so $u$ is a weak solution of .
Differential Canny edge detection {#sec:canny}
=================================
In differential *Canny edge detection* as proposed by Lindeberg (cf. [@Lin98]), one starts from a *scale space* representation $f_\sigma = f * g_\sigma$ of a two-dimensional image $f\colon \mathbb{R}^2 \to \mathbb{R}$, where $g_\sigma$ is a Gaussian kernel with standard deviation $\sigma$. Edges at scale $\sigma$ are then *defined* (with finite resolution, no natural notion of discontinuity exists) as local maxima of the gradient magnitude $|\nabla f_\sigma|$ in gradient direction $\nabla f_\sigma$. In addition, it is proposed to maximize a certain functional measuring edge strength in scale space (which allows for automatic scale selection).
We want to use a similar algorithm to find the discontinuities of a three-dimensional function $f\colon \mathbb{R}^3 \to \mathbb{R}$ (which will be $\log {\mathcal{H}}^k$, $\log |\nabla {\mathcal{H}}^k|$ or $\log|\frac{\Delta {\mathcal{H}}^k}{{\mathcal{H}}^k}|$). Jumps of $f$ that are sufficiently big compared to its continuous variation (within a grid step) lead to sudden changes of intensity (above some threshold) in the corresponding finite-resolution image. Heuristically, we have a similar situation as in Canny edge detection. That is, jump surfaces approximately correspond to thresholded maxima of $|\nabla f_\sigma|$ in gradient direction, where $f_\sigma = f * g_\sigma$ is the scale-space representation of $f$ for a properly chosen scale $\sigma$ (for simplicity, we will work at a single, manually chosen scale in this paper).
To estimate the jump set, we thus have to solve for *fixed* $\sigma$ and $v=\nabla f_\sigma$ $$\label{eq:canny_eqn}
\begin{aligned}
\partial_v |\nabla f_\sigma|^2 &= \sum_{i,j=1}^3 v_i v_j\,\partial_{x_i x_j} f_\sigma = 0\\
\partial_{vv} |\nabla f_\sigma|^2 &= \sum_{i,j,k=1}^3 v_i v_j v_k\,\partial_{x_i x_j x_k} f_\sigma > 0.
\end{aligned}$$
For discrete (voxelized) $f$, the solution manifold can be calculated with sub-voxel resolution. To restrict $E$, the solution surface of , to parts where the gradient magnitude (and thus also the jump across the surface) is large enough, we perform *hysteresis thresholding*. That is, we first apply a lower threshold $\rho_1$ to the jump strength $|\nabla f_\sigma |$ to get $$E_1 = \{ x \in E \ \big| \ |\nabla f_\sigma(x) | \geq \rho_1 \}.$$
Then, we remove all connected components $C \subset E_1$ for which the jump strength is never above a higher threshold $\rho_2$, so we get our final jump set $E_2$ with $$E_2 = \bigcup \{ C \subset E_1 \ \big| \ C \text{ is connected} \ \wedge \ \exists x \in C\colon \ |\nabla f_\sigma(x) | \geq \rho_2 \}.$$
As a final step, we remove all isolated structures smaller than a certain size (which are usually due to misdetections and too small for further processing).
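A voxel-level sketch of the hysteresis thresholding and small-structure removal (a simplification of the mesh-based procedure described above, using connected-component labelling):

```python
import numpy as np
from scipy import ndimage

def hysteresis(edge_mask, strength, rho1, rho2, min_size=0):
    """Hysteresis thresholding of a voxelized edge set.

    edge_mask : boolean array of candidate edge voxels (zero crossings of the second
                directional derivative); strength : |grad f_sigma| per voxel.
    Keeps connected components of {strength >= rho1} that contain at least one voxel
    with strength >= rho2, then drops components with fewer than min_size voxels.
    """
    E1 = edge_mask & (strength >= rho1)
    labels, n = ndimage.label(E1)
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strength >= rho2])] = True
    keep[0] = False                               # label 0 is the background
    E2 = keep[labels]
    if min_size > 0:
        labels2, _ = ndimage.label(E2)
        sizes = np.bincount(labels2.ravel())
        E2 = (sizes[labels2] >= min_size) & (labels2 > 0)
    return E2
```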
[10]{}
H. Ammari, E. Bossy, V. Jugnon, and H. Kang. Reconstruction of the optical absorption coefficient of a small absorber from the absorbed energy density. , 71(3):676–693, 2011.
R. Aronson. Boundary conditions for diffusion of light. , 12(11):2532–2539, 1995.
S. R. Arridge. Optical tomography in medical imaging. , 15(2):R41–R93, 1999.
S. R. Arridge, O. Dorn, J. P. Kaipio, V. Kolehmainen, M. Schweiger, T. Tarvainen, M. Vauhkonen, and A. Zacharopoulos. Reconstruction of subdomain boundaries of piecewise constant coefficients of the radiative transfer equation from optical tomography data. , 22(6):2175–2196, 2006.
H. Attouch, G. Buttazzo, and G. Michaille. SIAM, Society for Industrial and Applied Mathematics, 2006.
G. Bal and K. Ren. Multi-source quantitative photoacoustic tomography in a diffusive regime. , 27(7):075003, 2011.
G. Bal and K. Ren. On multi-spectral quantitative photoacoustic tomography in diffusive regime. , 28(2):025010, 2012.
G. Bal and G. Uhlmann. Inverse diffusion theory of photoacoustics. , 26:085010, 2010.
B. Banerjee, S. Bagchi, R.M. Vasu, and D. Roy. Quantitative photoacoustic tomography from boundary pressure measurements: noniterative recovery of optical absorption coefficient from the reconstructed absorbed energy map. , 25(9):2347–2356, 2008.
E. Beretta and E. Francini. Lipschitz stability for the electrical impedance tomography problem: The complex case. , 36(10):1723–1749, 2011.
J.F. Canny. A computational approach to edge detection. , PAMI-8:679–697, 1986.
B. T. Cox, S. R. Arridge, and P. C. Beard. Estimating chromophore distributions from multiwavelength photoacoustic images. , 26(2):443–455, 2009.
B. T. Cox, S. R. Arridge, P. Köstli, and P. C. Beard. Two-dimensional quantitative photoacoustic image reconstruction of absorption distributions in scattering media by use of a simple iterative method. , 45(8):1866–1875, 2006.
B. T. Cox, J. G. Laufer, S. R. Arridge, and P. C. Beard. Quantitative spectroscopic photoacoustic imaging: a review. , 17(6):061202, 2012.
V. Druskin. On the uniqueness of inverse problems from incomplete boundary data. , 58(5):1591–1603, 1998.
Q. Fang. Mesh-based monte carlo method using fast ray-tracing in plücker coordinates. , 1(1):165–175, 2010.
H. Gao, S. Osher, and H. Zhao. Quantitative photoacoustic tomography. In [*Mathematical Modeling in Biomedical Imaging II*]{}. Springer Berlin, Heidelberg.
D. Gilbarg and N. Trudinger. . Classics in Mathematics. Springer Verlag, Berlin, 2001. Reprint of the 1998 edition.
B. Harrach. On uniqueness in diffuse optical tomography. , 28(5):055010, 2009.
R. A. Horn and C. R. Johnson. . Cambridge University Press, Cambridge, 1990. Corrected reprint of the 1985 original.
S. Kim, O. Kwon, J. K. Seo, and J.-R. Yoon. On a nonlinear partial differential equation arising in magnetic resonance impedance tomography. , 34(3):511–526, 2002.
V. Kolehmainen, M. Vauhkonen, and Kaipio J. P. Recovery of piecewise constant coefficients in optical diffusion tomography. , 7(13):468–480, 2000.
P. Kuchment and L. Kunyansky. Mathematics of thermoacoustic tomography. , 19:191–224, 2008.
J. Laufer, B. Cox, E. Zhang, and P. Beard. Quantitative determination of chromophore concentrations from 2d photoacoustic images using a nonlinear model-based inversion scheme. , 49(8):1219–1233, 2010.
Y.Y. Li and L. Nirenberg. Estimates for elliptic systems from composite material. , 56(7):892–925, 2003.
T. Lindeberg. Edge detection and ridge detection with automatic scale selection. , 30(1):117–154, 1998.
K. Ren, Gao H., and H. Zhang. A hybrid reconstruction method for quantitative pat. , 6(1):32–55, 2013.
J. Ripoll and M. Nieto-Vesperinas. Index mismatch for diffuse photon density waves at both flat and rough diffuse-diffuse interfaces. , 16(8):1947–1957, 1999.
L. Rondi and F. Santosa. Enhanced electrical impedance tomography via the mumford-shah functional. , 6:517–538, 2001.
T. Saratoon, T. Tarvainen, B. T. Cox, and S. R. Arridge. A gradient-based method for quantitative photoacoustic tomography using the radiative transfer equation. , 29(7):075006, 2013.
P. Shao, B. Cox, and R.J. Zemp. Estimating optical absorption, scattering, and grueneisen distributions with multiple-illumination photoacoustic tomography. , 50(19):3145–3154, 2011.
T. Tarvainen, B. T. Cox, J. P. Kaipio, and S. R. Arridge. Reconstructing absorption and scattering distributions in quantitative photoacoustic tomography. , 28(8):084009, 2012.
L. V. Wang and H. Wu, editors. . Wiley-Interscience, New York, 2007.
Z. Yuan, Q. Zhang, and H. Jiang. Simultaneous reconstruction of acoustic and optical properties of heterogeneous media by quantitative photoacoustic tomography. , 14:6749–6754, 2006.
A. Zacharopoulos, M. Schweiger, V. Kolehmainen, and S. Arridge. 3d shape based reconstruction of experimental data in diffuse optical tomography. , 21(17):18940–18956, 2009.
R. J. Zemp. Quantitative photoacoustic tomography with multiple optical sources. , 49(18):3566–3572, 2010.
---
abstract: |
We consider transformations preserving certain linear structure in Grassmannians and give a generalization of the Fundamental Theorem of Projective Geometry and the Chow Theorem \[Ch\]. It will be exploited to study linear $(k,n-k)$-involutions, $1<k<n-1$. An analogue of the J. Dieudonné and C. E. Rickart result will be obtained.
[**Keywords**]{}: Grassmannian, $R$-subset of Grassmannian, regular transformation of Grassmannian, $(k,n-k)$-involution.
[**MSC-class**]{}: 14M15, 14L35, 20G15, 51N30.
author:
- Mark Pankov
title: Transformations of Grassmannians and automorphisms of classical groups
---
Introduction
============
Let $V$ be an $n$-dimensional vector space over some field $F$. Denote by ${\mathbb G}_{k}(V)$ the Grassmannian consisting of $k$-dimensional linear subspaces of $V$. In what follows, a $k$-dimensional linear subspace will be called a $k$-dimensional plane if $k>1$ and a line if $k=1$.
The Fundamental Theorem of Projective Geometry and Chow Theorem
---------------------------------------------------------------
Let us consider a transformation $f$ (bijection onto itself) of the Projective space ${\mathbb G}_{1}(V)$ such that $f$ and the inverse transformation $f^{-1}$ map each collection of linearly independent lines to a collection of linearly independent lines (we say that lines are linearly independent if non-zero vectors lying on them are linearly independent). Each collineation (semilinear transformation) of $V$ induces the transformation of ${\mathbb G}_{1}(V)$ satisfying this condition (note that two transformations of ${\mathbb G}_{k}(V)$ induced by collineations $f$ and $f'$ are coincident if and only if $f'=af$ for some $a\in F$).
The inverse statement is known as the Fundamental Theorem of Projective Geometry. It can be formulated in the following form: if $n\ge 3$ then each transformation of ${\mathbb G}_{1}(V)$ satisfying the condition considered above is induced by a collineation (see, for example, \[D2, O’M1\]). It must be pointed out that this result was successfully exploited in the description of automorphisms of linear groups \[D1, D2, O’M1, O’M2, R\].
An analogy of the Fundamental Theorem of Projective Geometry for transformations of ${\mathbb G}_{k}(V)$, $k>1$, was given by W. L. Chow \[Ch\] (see also \[D2\]). The formulation of the Chow Theorem is based on the notion of the distance between planes.
For two arbitrary $k$-dimensional planes $S,S'\subset V$ the number $$k-\dim S\cap S'$$ is called the [*distance*]{} between $S$ and $S'$. It is easy to see that the distance is equal to the smallest number $i$ such that $S$ and $S'$ are contained in some $(k+i)$-dimensional plane. Two planes will be called [*adjacent*]{} if the distance between them is equal to $1$.
We consider transformations of Grassmannians preserving the distance between planes. A transformation $f$ of ${\mathbb G}_{k}(V)$ preserves the distance if and only if $f$ and $f^{-1}$ map any two adjacent planes to adjacent planes (it is trivial, see \[D2\]). If $k=1$ or $n-1$ then any two elements of ${\mathbb G}_{k}(V)$ are adjacent and each transformation of ${\mathbb G}_{k}(V)$ preserves the distance.
Clearly, each transformation of ${\mathbb G}_{k}(V)$ induced by a collineation preserves the distance. The Chow Theorem states that if $n\ge 3$, $1<k<n-1$ and $n\ne 2k$ then the inverse statement holds true: a transformation of ${\mathbb G}_{k}(V)$ preserving the distance between planes is induced by some collineation. For the case when $n=2k$ it fails.
Consider a non-degenerate sesquilinear form $\Omega$ on $V$. It defines the bijection $f_{k\,n-k}(\Omega)$ of the Grassmannian ${\mathbb G}_{k}(V)$ onto the Grassmannian ${\mathbb G}_{n-k}(V)$ which transfers each plane to the $\Omega$-orthogonal complement. This bijection preserves the distance. The bijections defined by forms $\Omega$ and $\Omega'$ are coincident if and only if there exists $a\in F$ such that $\Omega'=a\Omega$.
If $3\le n=2k$ then each $f_{k\,k}(\Omega)$ is a transformation of ${\mathbb G}_{k}(V)$ which is not induced by a collineation. For this case the Chow Theorem states that each transformation of ${\mathbb G}_{k}(V)$ preserving the distance between planes is induced by a collineation or defined by a non-degenerate sesquilinear form.
The similar statements were also proved for some homogeneous spaces, for example, for the space of all null planes of a non-degenerate symplectic form (see \[Ch\]). Other generalization of the Fundamental Theorem of Projective Geometry and the Chow Theorem was proposed by J. Tits \[T\].
Regular transformations of Grassmannians
----------------------------------------
Now we introduce so-called $R$-subsets of Grassmannians. They will be used to formulate a statement generalizing the results considered above.
We say that ${\cal R} \subset {\mathbb G}_{k}(V)$ is an $R$-[*set*]{} if there exists a base for $V$ such that each plane belonging to ${\cal R}$ contains $k$ vectors from this base; in other words, elements of ${\cal R}$ are coordinate planes for some coordinate system for $V$. Any base and any coordinate system satisfying that condition will be called [*associated*]{} with ${\cal R}$.
A coordinate system for $V$ has $$\binom{n}{k}
:=\frac{n!}{k!(n-k)!}$$ distinct $k$-dimensional coordinate planes. Therefore an $R$-subset of ${\mathbb G}_{k}(V)$ contains at most $\binom{n}{k}$ elements. An $R$-set ${\cal R}$ will be called [*maximal*]{} if any $R$-set containing ${\cal R}$ coincides with it. Each $R$-set is contained in some maximal $R$-set and an $R$-subset of ${\mathbb G}_{k}(V)$ is maximal if and only if it contains $\binom{n}{k}$ elements. Thus a maximal $R$-subset of ${\mathbb G}_{k}(V)$ consists of all $k$-dimensional coordinate planes for some coordinate system.
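As a toy illustration, identifying a coordinate plane with the set of basis vectors it contains, the maximal $R$-set associated with a basis can be enumerated directly (a small sketch):

```python
from itertools import combinations
from math import comb

def coordinate_planes(n, k):
    """Maximal R-subset of G_k(V) associated with the standard basis e_1, ..., e_n:
    every k-dimensional coordinate plane is spanned by k of the basis vectors."""
    return [frozenset(c) for c in combinations(range(1, n + 1), k)]

planes = coordinate_planes(5, 2)
assert len(planes) == comb(5, 2) == 10   # a maximal R-set has C(n, k) elements
```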
An $R$-subset of ${\mathbb G}_{1}(V)$ is a collection of linearly independent lines; $R$-subsets of ${\mathbb G}_{n-1}(V)$ are known as [*arrangements*]{} \[O\]. For the general case $R$-subsets of Grassmannians are discrete sets which can be considered as a generalization of collections of linearly independent lines.
A transformation $f$ of ${\mathbb G}_{k}(V)$ will be called [*regular*]{} if $f$ and $f^{-1}$ preserve the class of $R$-sets. Any transformation induced by a collineation is regular.
We say that a bijection $f$ of ${\mathbb G}_{k}(V)$ onto ${\mathbb G}_{n-k}(V)$ is [*regular*]{} if $f$ and $f^{-1}$ transfer each $R$-set to an $R$-set. It is easy to see that for any non-degenerate sesquilinear form $\Omega$ on $V$ the bijection $f_{k\,n-k}(\Omega)$ is regular. Note that for the case when $m\ne k,n-k$ there are no regular bijections of ${\mathbb G}_{k}(V)$ onto ${\mathbb G}_{m}(V)$ since the equality $$\binom{n}{k}=\binom{n}{m}$$ holds only for $m=k,n-k$.
If $n=2$ then any two lines generate a maximal $R$-subset of ${\mathbb G}_{1}(V)$ and each transformation of ${\mathbb G}_{1}(V)$ is regular. For the case when $n\ge 3$ the class of regular transformations of ${\mathbb G}_{k}(V)$ consists only of the transformations considered above.
If $n\ge 3$ then the following two statements hold true:
1. for the case when $n\ne 2k$ all regular transformations of ${\mathbb G}_{k}(V)$ are induced by collineations;
2. if $n=2k$ then a regular transformation of ${\mathbb G}_{k}(V)$ is induced by a collineation or defined by some non-degenerate sesquilinear form.
If $n\ge 3$ and $n\ne 2k$ then each regular bijection of ${\mathbb G}_{k}(V)$ onto ${\mathbb G}_{n-k}(V)$ is defined by a non-degenerate sesquilinear form.
For $k=n-1$ Theorem 1.1 is a simple consequence of the Fundamental Theorem of Projective Geometry. For the general case it will be proved in the next section. Our proof will be based on properties of a certain numerical characteristic of $R$-sets, the so-called degree of inexactness (Subsections 2.1 and 2.2).
Linear involutions and automorphisms of classical groups
--------------------------------------------------------
Let $\sigma\in {\mathfrak G}{\mathfrak L}(V)$ be an involution (i.e. $\sigma^{2}=Id$) and suppose that the characteristic of the field $F$ is not equal to $2$. Then there exist two invariant planes $U_{+}(\sigma)$ and $U_{-}(\sigma)$ such that $$\sigma(x)=x\;\mbox{ if }\;x\in U_{+}(\sigma)\,,
\;\;\;\sigma(x)=-x\;\mbox{ if }\;x\in U_{-}(\sigma)$$ and $$V=U_{+}(\sigma)+U_{-}(\sigma)\,.$$ We say that $\sigma$ is a $(k,n-k)$-[*involution*]{} if the dimensions of the planes $U_{+}(\sigma)$ and $U_{-}(\sigma)$ are equal to $k$ and $n-k$, respectively.
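Numerically (say over the reals), this decomposition is obtained from the projections $\frac{1}{2}(Id\pm\sigma)$ onto the two eigenspaces; a small sketch determining the type $(k,n-k)$ of an involution:

```python
import numpy as np

def involution_split(sigma):
    """Dimensions (k, n-k) of U_+(sigma), U_-(sigma) for a linear involution sigma
    over a field of characteristic != 2 (here: over the reals).
    (I + sigma)/2 and (I - sigma)/2 project onto U_+ and U_-, and the trace of a
    projection equals its rank."""
    n = sigma.shape[0]
    assert np.allclose(sigma @ sigma, np.eye(n)), "sigma must be an involution"
    k = int(round(np.trace((np.eye(n) + sigma) / 2)))
    return k, n - k

# a (2,1)-involution of R^3:
assert involution_split(np.diag([1.0, 1.0, -1.0])) == (2, 1)
```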
The set of all $(k,n-k)$-involutions will be denoted by ${\mathfrak I}_{k\,n-k}(V)$. If the number $n-k$ is even then $(k,n-k)$-involutions generate the special linear group ${\mathfrak S}{\mathfrak L}(V)$. For the case when $n-k$ is odd they generate the group of all linear transformations $f\in {\mathfrak G}{\mathfrak L}(V)$ satisfying the condition $\det f=\pm 1$.
In what follows we will consider transformations of ${\mathfrak I}_{k\,n-k}(V)$ preserving the commutativity. For example, for each collineation $g$ of $V$ the transformation $$\sigma \to g\sigma g^{-1}$$ of ${\mathfrak I}_{k\,n-k}(V)$ satisfies this condition. Now consider a correlation $h$ (a semilinear bijection of $V$ onto the dual space $V^{*}$). Then the transformation $$\sigma \to h^{-1}\check{\sigma}h$$ of ${\mathfrak I}_{k\,n-k}(V)$ (here $\check{\sigma}$ is the contragradient) preserves the commutativity.
It was proved by J. Dieudonné \[D1\] and C. E. Rickart \[R\] that [*for the cases $k=1,n-1$ any transformation of ${\mathfrak I}_{k\,n-k}(V)$ preserving the commutativity is defined by a collineation or a correlation*]{}. The proof was based on the Fundamental Theorem of Projective Geometry and G. W. Mackey’s results \[M\]. It is not difficult to see that automorphisms of the groups ${\mathfrak S}{\mathfrak L}(V)$ and ${\mathfrak G}{\mathfrak L}(V)$ induce transformations of the set ${\mathfrak I}_{k\,n-k}(V)$ preserving the commutativity. Therefore the well-known description of automorphisms of the groups ${\mathfrak G}{\mathfrak L}(V)$ and ${\mathfrak S}{\mathfrak L}(V)$ is a simple consequence of the theorem given above (see \[D1, D2, R\]).
In Section 3 a similar statement will be proved for transformations of ${\mathfrak I}_{k\,n-k}(V)$, $1<k<n-1$.
Proof of Theorem 1.1
====================
Degree of inexactness
---------------------
It was noted above that each $R$-set is contained in some maximal $R$-set. We say that an $R$-set is [*exact*]{} if there exists a unique maximal $R$-set containing it. Certainly any maximal $R$-set is exact.
For an $R$-set ${\cal R}$ consider an exact $R$-set ${\cal R}'$ containing ${\cal R}$ and having the minimal number of elements (i.e. such that the inequality $|{\cal R}'|\le |{\cal R}''|$ holds for any other exact $R$-set ${\cal R}''$ containing ${\cal R}$). The number $$\deg({\cal R}):=|{\cal R}'|-|{\cal R}|$$ will be called the [*degree of inexactness*]{} of ${\cal R}$. An $R$-set is exact if and only if the degree of inexactness is zero.
It is trivial that the following statement holds true.
Regular transformations of ${\mathbb G}_{k}(V)$ and regular bijections of ${\mathbb G}_{k}(V)$ onto ${\mathbb G}_{n-k}(V)$ preserve the degree of inexactness.
If $k=1$ then for any $R$-set ${\cal R}\subset{\mathbb G}_{k}(V)$ we have $\deg({\cal R})= n-|{\cal R}|$ and our $R$-set is exact if and only if it is maximal. Lemma 2.1 guarantees that the same holds for $k=n-1$. For the general case there exist exact $R$-sets which are not maximal.
Now we give a few technical definitions which will be used in what follows.
For an $R$-set ${\cal R}'\subset {\mathbb G}_{k}(V)$ fix an exact $R$-set ${\cal R}''$ containing ${\cal R}'$ and satisfying the condition $$\deg({\cal R}')=|{\cal R}''|-|{\cal R}'|\,.$$ There exists a unique maximal $R$-set ${\cal R}$ containing ${\cal R}''$. Let ${\cal R}^{m}$ be the set of all $m$-dimensional coordinate planes for the coordinate system associated with ${\cal R}$ (this coordinate system is uniquely defined since ${\cal R}$ is a maximal $R$-set). The set ${\cal R}^{m}$ will be called the maximal $R$-subset of ${\mathbb G}_{m}(V)$ [*associated*]{} with ${\cal
R}$. For a plane $S$ belonging to ${\cal
R}^{m}$ denote by ${\cal R}(S)$ the set of all planes $U\in {\cal
R}$ incident to $S$, i.e. such that $$U\subset S\; \mbox{ if }\; k<m$$ and $$S\subset U\; \mbox{ if }\; k>m\,.$$ The set ${\cal R}(S)$ contains $\binom{m}{k}$ elements if $m>k$ and $\binom{n-m}{k-m}$ elements if $m<k$.
Let $L_{1},...,L_{n}$ be lines generating the set ${\cal R}^{1}$. For each $i=1,...,n$ $${\cal R}'_{i}:={\cal R}'\cap {\cal R}(L_{i})$$ is the set of all planes belonging to ${\cal R}'$ and containing the line $L_{i}$. Let $S_{i}$ be the intersection of all planes belonging to ${\cal R}'_{i}$. Then $${\cal R}'_{i}={\cal R}'\cap {\cal R}(S_{i})\,.$$ The dimension of the plane $S_{i}$ will be denoted by $n_{i}$; for the case when the set ${\cal R}'_{i}$ is empty we write $n_{i}=0$. Denote by $n({\cal R}')$ the number of all $i$ such that $n_{i}=1$. It is not difficult to see that
1. [*the number $n({\cal R}')$ does not depend on the choice of an exact $R$-set ${\cal R}''$ containing ${\cal R}'$ and satisfying the condition*]{} (2.1),
2. [*the $R$-set ${\cal R}'$ is exact if and only if $n({\cal R}')=n$*]{}.
Let us consider two examples.
[Suppose that the $R$-set ${\cal R}'$ coincides with ${\cal R}(L_{j})$ and $k\ge n-k$. Then $n_{j}=1$. If $i\ne j$ then $S_{i}$ is the two-dimensional plane containing the lines $L_{j}$ and $L_{i}$; therefore $n_{i}=2$ and $n({\cal R}')=1$. Consider a set $$\{i_{1},...,i_{k}\}\subset\{1,...,n\}\setminus\{j\}$$ and the plane $U'\in{\cal R}$ containing the lines $L_{i_{1}}$,...,$L_{i_{k}}$. For each $p=1,...,k$ the intersection of $U'$ with $S_{i_{p}}$ coincides with the line $L_{i_{p}}$. This implies the equality $$n({\cal R}'\cup\{U'\})=k+1\,.$$ If $k=n-1$ then the $R$-set ${\cal R}'\cup\{U'\}$ is exact and $\deg{\cal R}'=1$. For the case when $k<n-1$ the set $$\{1,...,n\}\setminus\{j,i_{1},...,i_{k}\}$$ contains less than $k$ elements. Denote them by $j_{1},...,j_{q}$ and consider the plane $U''\in {\cal R}$ containing the lines $L_{j_{1}}$,...,$L_{j_{q}}$. If $U''$ does not contain $L_{j}$ then the intersections of $U''$ with the planes $S_{j_{1}}$,...,$S_{j_{q}}$ are the lines $L_{j_{1}}$,...,$L_{j_{q}}$. Thus the $R$-set ${\cal R}'\cup\{U',U''\}$ is exact and $\deg{\cal R}'=2$. ]{}
[Now suppose that $k\le n-k$ and the $R$-set ${\cal R}'$ coincides with ${\cal R}(S)$, where $S$ is a plane belonging to ${\cal R}^{n-1}$. Consider the unique line $L_{i}$ which is not contained in $S$. Then $n_{i}=0$ and $n_{j}=1$ for each $j\ne i$, thus $n({\cal R}')=n-1$. For a non-degenerate sesquilinear form $\Omega$ the bijection $f_{k\,n-k}(\Omega)$ transfers ${\cal R}'$ to the set considered in Example 2.1. Then Lemma 2.1 guarantees that $\deg{\cal R}'=2$ for the case when $k>1$. ]{}
For the case when $1<k<n-1$ the following three statements hold true:
1. if $n-k<k$ and the set ${\cal R}'$ contains not less than $\binom{n-1}{k-1}$ elements then $\deg({\cal R}') \le 2$ and the equality $\deg({\cal R}')=2$ holds if and only if ${\cal R}'$ is the set considered in Example [2.1]{};
2. if $k<n-k$ and the set ${\cal R}'$ contains not less than $\binom{n-1}{k}$ elements then $\deg({\cal R}') \le 2$ and the equality $\deg({\cal R}')=2$ holds if and only if ${\cal R}'$ is the set considered in Example [2.2]{};
3. if $n=2k$ and the set ${\cal R}'$ contains not less than $\binom{n-1}{k}=\binom{n-1}{k-1}$ elements then $\deg({\cal R}') \le 2$ and the equality $\deg({\cal R}')=2$ holds if and only if ${\cal R}'$ is one of the sets considered in Examples [2.1]{} and [2.2]{}.
Proof of Theorem 2.1
--------------------
We start with a few lemmas. In the second part of the subsection they will be exploited to prove Theorem 2.1. Lemma 2.1 shows that we can restrict ourselves to the case when $n-k \le k <n-1$ and the $R$-set ${\cal R}'$ contains not less than $\binom{n-1}{k-1}$ elements.
If the condition $n_{i}=0$ holds for some number $i$ then $n=2k$ and the set ${\cal R}'$ coincides with ${\cal R}(S)$, where $S$ is a plane belonging to ${\cal R}^{n-1}$.
[**Proof.**]{} Consider the plane $S\in {\cal R}^{n-1}$ which does not contain the line $L_{i}$. The condition $n_{i}=0$ shows that the set ${\cal R}'_{i}$ is empty. Then we have the inclusion ${\cal R}' \subset {\cal R}(S)$ showing that $$\binom{n-1}{k-1}
\le |{\cal R}'| \le |{\cal R}(S)|=
\binom{n-1}{k}\,.$$ However $\binom{n-1}{k-1}\ge \binom{n-1}{k}$ if $k\ge n-k$, and this inequality becomes an equality if and only if $n=2k$. We obtain the required statement. $\square$
The inequality $n_{i} \le n-k$ holds for each $i=1,...,n$.
[**Proof.**]{} The case $n_{i}=0$ is trivial. For the case when $n_{i}>0$ the set ${\cal R}'_{i}$ is not empty and there exists a plane $U\in {\cal R}'$ containing the line $L_{i}$. Then $n_{i} \le k$ and the required inequality holds for $n=2k$.
Let $n-k<k$. Denote by $S$ the plane belonging to ${\cal R}^{n-1}$ and such that the line $L_{i}$ is not contained in $S$. For each plane $U\in {\cal R}'$ the following two cases can be realized:
1. $U$ contains $L_{i}$ then $U\in {\cal R}'_{i}\subset {\cal R}(S_{i})$;
2. $U$ does not contain $L_{i}$ then $U\in{\cal R}(S)$.
This implies the inclusion ${\cal R}'\subset{\cal R}(S)\cup{\cal R}(S_{i})$. Thus $$\binom{n-1}{k-1}
\le |{\cal R}'|
\le |{\cal R}(S)| + |{\cal R}(S_{i})|=
\binom{n-1}{k}+\binom{n-n_{i}}{k-n_{i}}\,.$$ If $n_{i} \ge n-k+1$ then $$\binom{n-n_{i}}{k-n_{i}}
\le \binom{k-1}{2k-n-1}$$ (see Remark 2.1, which will be given after the proof) and the inequality (2.2) can be rewritten in the following form $$\binom{n-1}{k-1}-\binom{n-1}{k}
\le
\binom{k-1}{2k-n-1}\,.$$ We have $$\begin{split}
\binom{n-1}{k-1}-\binom{n-1}{k}=\;&
\frac{(n-1)!}{(k-1)!(n-k)!}-\frac{(n-1)!}{k!(n-k-1)!}=
\frac{(n-1)!(2k-n)}{k!(n-k)!}\\
=\;&(2k-n)\underbrace{k(k+1)...(n-2)(n-1)}_{n-k}
\frac{(k-1)!}{k!(n-k)!}
\end{split}$$ and $$\begin{split}
\binom{k-1}{2k-n-1}=\;&\frac{(k-1)!}{(2k-n-1)!(n-k)!}=\\
&(2k-n)\underbrace{(2k-n+1)...(k-1)k}_{n-k}
\frac{(k-1)!}{k!(n-k)!}\,.
\end{split}$$ The condition $n-k < k<n-1$ shows that $$(2k-n)\underbrace{k(k+1)...(n-2)(n-1)}_{n-k} >
(2k-n)\underbrace{(2k-n+1)...(k-1)k}_{n-k}$$ and the inequality (2.3) does not hold. This proves the inequality $n_{i}\le n-k$. $\square$
[An immediate verification shows that $$\binom{n-k_{1}}{k-k_{1}}
\ge
\binom{n-k_{2}}{k-k_{2}}$$ for any two natural numbers $k_{1}$ and $k_{2}$ satisfying the condition $0\le k_{1}\le k_{2}\le k$. ]{}
If there exists a number $i$ satisfying the condition $n_{i} \ge 3$ then $n_{j}=1$ for each $j \ne i$.
[**Proof.**]{} Consider the planes $S\in {\cal R}^{n-1}$ and $S'\in {\cal R}^{n-2}$ such that $S$ does not contain the line $L_{i}$ and $S'$ does not contain the lines $L_{i}$ and $L_{j}$. Then for each plane $U$ belonging to ${\cal R}'$ the following three cases can be realized.
1. $U\in {\cal R}'_{i}$.
2. If $U \notin {\cal R}'_{i}$ and $U \in {\cal R}'_{j}$ then $U$ does not contain the line $L_{i}$ and $U \in {\cal R}(S)$. The condition $U \in {\cal R}'_{j}$ shows that $U$ belongs to the set ${\cal R}(S)\cap {\cal R}(S_{j})$.
3. If $U\notin {\cal R}'_{i}\cup{\cal R}'_{j}$ then $U$ does not contain the lines $L_{i}$ and $L_{j}$. For this case $U\in {\cal R}(S')$.
In other words, we have the inclusion $${\cal R}'\subset {\cal R}(S_{i})\cup
({\cal R}(S)\cap {\cal R}(S_{j}))\cup {\cal R}(S')$$ showing that $$|{\cal R}'| \le
|{\cal R}(S_{i})|+ |{\cal R}(S)\cap {\cal R}(S_{j})|+
|{\cal R}(S')|\,.$$ The set ${\cal R}(S)\cap {\cal R}(S_{j})$ is not empty if and only if the plane $S_{j}$ is contained in $S$. It is not difficult to see that in this case our set contains $\binom{n-n_{j}-1}{k-n_{j}}$ elements. Then by (2.4) $$\binom{n-1}{k-1}\le
\binom{n-n_{i}}{k-n_{i}}+\binom{n-n_{j}-1}{k-n_{j}}+\binom{n-2}{k}\,.$$ For the case when $$n_{i} \ge 3\;\mbox{ and }\;n_{j} \ge 2$$ we have $$\binom{n-n_{i}}{k-n_{i}}\le \binom{n-3}{k-3}
\;\mbox{ and }\;
\binom{n-n_{j}-1}{k-n_{j}}\le \binom{n-3}{k-2}$$ (Remark 2.1); then $$\binom{n-1}{k-1} \le \binom{n-3}{k-3}+ \binom{n-3}{k-2} +
\binom{n-2}{k}\,.$$ We use the equality $$\binom{n-1}{k-1}=\binom{n-2}{k-1}+\binom{n-2}{k-2}=\binom{n-2}{k-1}
+\binom{n-3}{k-3}+ \binom{n-3}{k-2}$$ to rewrite the last inequality in the following form $$\binom{n-2}{k-1} \le \binom{n-2}{k}\,.$$ An immediate verification shows that (2.6) does not hold for the case when $n-k \le k$. This implies that one of the conditions (2.5) fails; therefore $n_{j}\le 1$. Lemma 2.2 guarantees that $n_{j}>0$ and we obtain the required. $\square$
The condition $n_{i}=2$ implies the existence of a unique line $L_{j}$ [(]{}$j\ne
i$[)]{} contained in the plane $S_{i}$ and such that $n_{j}=1$.
[**Proof.**]{} For the case when $n_{i}=2$ there exists a unique line $L_{j}$ ($j\ne i$) contained in $S_{i}$. The trivial inclusion ${\cal R}'_{i} \subset {\cal R}'_{j}$ shows that $S_{j}\subset S_{i}$ and $0<n_{j}\le 2$. If $n_{j}\ne 1$ then the planes $S_{i}$ and $S_{j}$ are coincident. Consider the plane $S\in {\cal R}^{n-2}$ which does not contain the lines $L_{i}$ and $L_{j}$. If a plane $U\in {\cal R}'$ does not belong to ${\cal R}'_{i}$ then the lines $L_{i}$ and $L_{j}$ are not contained in $U$ and $U \in {\cal R}(S)$. In other words, for each plane $U\in {\cal R}'$ the following two cases can be realized:
1. $U \in {\cal R}'_{i}={\cal R}'_{j}\subset {\cal R}(S_{i})$,
2. $U \in {\cal R}(S)$.
This implies the inclusion ${\cal R}' \subset
{\cal R}(S)\cup {\cal R}'_{i}$ showing that $$\begin{split}
\binom{n-1}{k-1}=\binom{n-2}{k-1}+\binom{n-2}{k-2}&\le
|{\cal R}'|\le\\
&|{\cal R}(S)|+
|{\cal R}(S_{i})|= \binom{n-2}{k}+\binom{n-2}{k-2}\,.
\end{split}$$ We obtain the inequality (2.6) but for the case when $n-k \le k$ it does not hold (see the proof of Lemma 2.4). Thus the equality $n_{j}=2$ fails and we get $n_{j}=1$. $\square$
Lemmas 2.2 and 2.4 show that if $n_{i}\ne 1,2$ for some number $i$ then $n({\cal R}')=n-1$. Let us consider the case when $0< n_{i} \le 2$ for each $i=1,...,n$.
If $0< n_{i} \le 2$ for each $i=1,...,n$ then $$n({\cal R}')>k\; \mbox{ or }\; n({\cal R}')=1\,.$$ Moreover, the equality $n({\cal R}')=1$ holds if and only if there exists a number $j$ such that ${\cal R}'={\cal R}(L_{j})$.
[**Proof.**]{} First of all note that for the case when $k=n-1$ our statement is trivial; $\binom{n-1}{k-1}=n-1$ and ${\cal R}'$ is an $R$-subset of ${\mathbb G}_{n-1}(V)$ containing not less than $n-1$ elements. If ${\cal R}'$ has $n$ elements then it is a maximal $R$-set and $n({\cal R}')=n$. If the number of elements is equal to $n-1$ then there exists a plane $U\in {\cal R}$ such that ${\cal R}'={\cal R}\setminus \{U\}$. Consider the unique line $L_{j}$ which is not contained in $U$; it is trivial that ${\cal
R}'={\cal R}(L_{j})$.
Let $n-k\le k <n-1$. Fix a number $i$ such that $n_{i}=2$ and consider the plane $S \in{\cal R}^{n-1}$ which does not contain $L_{i}$. It is not difficult to see that the sets ${\cal R}(S)$ and ${\cal R}(S_{i})$ are disjoint and $$|{\cal R}'|=
|{\cal R}'\cap {\cal R}(S)|+|{\cal R}'_{i}|$$ (see the proof of Lemma 2.3). Thus $$|{\cal R}'\cap {\cal R}(S)| =
|{\cal R}'| - |{\cal R}'_{i}|\ge
|{\cal R}'| - |{\cal R}(S_{i})|\ge$$ $$\binom{n-1}{k-1}-\binom{n-2}{k-2}=\binom{n-2}{k-1}\,.$$ In other words, $${\cal R}'':={\cal R}'\cap {\cal R}(S)$$ is an $R$-subset of ${\mathbb G}_{k}(S)$ containing not less than $\binom{(n-1)-1}{k-1}$ elements (here ${\mathbb G}_{k}(S)$ is the Grassmannian of all $k$-dimensional planes of the $(n-1)$-dimensional vector space $S$).
Note that if the set ${\cal R}''$ contains $\binom{n-2}{k-1}$ elements then $$|{\cal R}'_{i}|=|{\cal R}'|-|{\cal R}''|\ge
\binom{n-1}{k-1}-\binom{n-2}{k-1}=\binom{n-2}{k-2}\,.$$ Recall that the set ${\cal R}(S_{i})$ contains $\binom{n-2}{k-2}$ elements and ${\cal R}'_{i} \subset {\cal R}(S_{i})$. This implies that for this case the set ${\cal R}'_{i}$ coincides with ${\cal R}(S_{i})$. This remark will be exploited in what follows.
Now suppose that Lemma 2.6 holds for $n-k<m$ (here $m$ is a natural number such that $1<m<n-1$) and consider the case when $n-k=m$. By the inductive hypothesis and the remark made before Lemma 2.6 we have $$n({\cal R}'')>k\; \mbox{ or }\; n({\cal R}'')=1\,.$$ The trivial inequality $n({\cal R}')\ge n({\cal R}'')$ implies the fulfilment of the required statement for the first case.
For the second case the inductive hypothesis implies the existence of a number $j_{1}\ne i$ such that $${\cal R}''={\cal
R}(S)\cap {\cal R}(L_{j_{1}})\,.$$ Then the set ${\cal R}''$ contains $\binom{n-2}{k-1}$ elements and ${\cal R}'_{i}$ coincides with ${\cal R}(S_{i})$. By Lemma 2.5 there exists a unique line $L_{j_{2}}$ ($j_{2}\ne
i$) contained in the plane $S_{i}$ and such that $n_{j_{2}}=1$. It is easy to see that $${\cal R}(L_{j_{2}})=
{\cal R}(S_{i})\cup ({\cal R}(S)\cap {\cal R}(L_{j_{2}}))\,.$$ Then the equation (2.7) and the equality ${\cal R}(S_{i})={\cal R}'_{i}$ show that $${\cal R}(L_{j_{2}})={\cal R}'_{i}\cup{\cal R}''={\cal R}'$$ if $j_{1}=j_{2}$.
Consider the case when $j_{1}\ne j_{2}$ and prove that $n_{j}=1$ for each $j\ne i$. For $j=j_{1}$ or $j_{2}$ it is trivial. For $j\ne j_{1},j_{2}$ denote by $S'_{j}$ the intersection of all planes belonging to ${\cal R}''$ and containing the line $L_{j}$. It is the two-dimensional plane containing the lines $L_{j}$ and $L_{j_{1}}$. The condition $j_{1}\ne j_{2}$ implies the existence of a plane $U\in {\cal R}(S_{i})$ which contains $L_{j}$ and does not contain $L_{j_{1}}$. The intersection of $U$ with $S'_{j}$ is the line $L_{j}$. The equality ${\cal R}(S_{i})={\cal R}'_{i}$ guarantees that the plane $U$ belongs to ${\cal R}'$. Thus $n_{j}=1$ for each $j\ne i$ and $n({\cal R}')=n-1$. $\square$
[**Proof of Theorem 2.1.**]{} It was noted above that we can restrict ourselves to the case when $n-k \le k <n-1$. Lemmas 2.2, 2.4 and 2.6 show that we have to consider the following four cases.
1. The inequality $n_{i}>0$ holds for any $i=1,...,n$ and there exists a number $j$ such that $n_{j} \ge 3$. Then $n({\cal R}')=n-1$ (Lemma 2.4).
2. $0< n_{i} \le 2$ for each $i=1,...,n$ and $n({\cal R}')>k$.
3. $0< n_{i} \le 2$ for each $i=1,...,n$ and $n({\cal R}')=1$. Then ${\cal R}'$ coincides with some ${\cal R}(L_{j})$.
4. The condition $n_{i}=0$ holds for some number $i$. Then $n=2k$ and ${\cal
R}'={\cal R}(S)$, where $S$ is a plane belonging to the set ${\cal R}^{n-1}$ (Lemma 2.2).
[*Case* ]{}(i). Lemma 2.3 states that $n_{j} \le n-k$ and there exist $k-1$ numbers $i_{1},...,i_{k-1}$ such that the plane $S_{j}$ does not contain the lines $L_{i_{1}},...,L_{i_{k-1}}$. Denote by $U$ the $k$-dimensional plane containing the lines $L_{i_{1}},...,L_{i_{k-1}}$ and $L_{j}$. For a number $i\ne j$ we have $n_{i}=1$ and the intersection $U \cap S_{j}$ coincides with the line $L_{j}$. Therefore the $R$-set ${\cal R}' \cup \{U\}$ is exact and $\deg({\cal R}')=1$.
[*Case* ]{}(ii). Consider all numbers $i_{1},...,i_{m}$ such that $$n_{i_{1}}=...=n_{i_{m}}=2\;.$$ Then $n({\cal R}')=n-m$ and the condition $n({\cal R}')>k$ shows that $m <n-k \le k$. Denote by $S$ the plane containing the planes $S_{i_{1}},...,S_{i_{m}}$ and having the smallest dimension. The dimension of $S$ is not greater than $2m$ and $$n-2m =(n-m)-m> k-m> 0\,.$$ This implies the existence of $k-m$ numbers $j_{1},...,j_{k-m}$ such that $$n_{j_{1}}=...=n_{j_{k-m}}=1$$ and $S$ does not contain the lines $L_{j_{1}},...,L_{j_{k-m}}$. Denote by $U$ the $k$-dimensional plane containing the lines $$L_{i_{1}},...,L_{i_{m}}, L_{j_{1}},...,L_{j_{k-m}}\,.$$ For any number $p=1,...,m$ the plane $S_{i_{p}}$ is generated by two lines, one of them coincides with $L_{i_{p}}$, other line $L_{q(p)}$ satisfies the following conditions $n_{q(p)}=1$ (Lemma 2.5) and $$q(p)\ne j_{1},...,j_{k-m}\,.$$ Then the intersection of the plane $U$ with each $S_{i_{p}}$ is the line $L_{i_{p}}$ and the $R$-set ${\cal R}' \cup \{U\}$ is exact. We obtain the equality $\deg({\cal R}')=1$.
[*Cases* ]{}(iii) and (iv) were considered in Examples 2.1 and 2.2, respectively. $\square$
Proof of Theorem 1.1 for the general case
-----------------------------------------
First of all note that the case $n-k<k$ can be reduced to the case $n-k>k$. It is not difficult to see that the following statements hold true.
1. For any two transformations $f$ and $g$ of ${\mathbb G}_{k}(V)$ and ${\mathbb G}_{n-k}(V)$ induced by collineations and each non-degenerate sesquilinear form $\Omega$ the bijections $f_{k\,n-k}(\Omega)f$ and $gf_{k\,n-k}(\Omega)$ are defined by sesquilinear forms.
2. For any bijection defined by a sesquilinear form the inverse bijection is defined by a sesquilinear form.
3. The composition of two bijections defined by sesquilinear forms is a transformation induced by some collineation.
For each regular transformation $f$ of ${\mathbb G}_{k}(V)$ and a non-degenerate sesquilinear form $\Omega$ on $V$ $$g=f_{k\,n-k}(\Omega)fF_{n-k\,k}(\Omega)$$ is a regular transformation of ${\mathbb G}_{n-k}(V)$. The statements given above show that $f$ is induced by a collineation if and only if $g$ is induced by a collineation.
Similar arguments can be used to prove Corollary 1.1, since for each regular bijection $f$ of ${\mathbb G}_{k}(V)$ onto ${\mathbb G}_{n-k}(V)$ and a non-degenerate sesquilinear form $\Omega$ on $V$ the composition $F_{n-k\,k}(\Omega)f$ is a regular transformation of ${\mathbb G}_{k}(V)$.
Let $f$ be a regular transformation of ${\mathbb G}_{k}(V)$ and $1<k\le n-k$. Let also $U$ and $U'$ be $k$-dimensional adjacent planes in $V$. We want to show that the planes $f(U)$ and $f(U')$ are adjacent. Consider a maximal $R$-set ${\cal R}$ containing $U$ and $U'$. Then the maximal $R$-set $f({\cal R})$ contains $f(U)$ and $f(U')$; in what follows this set will be denoted by ${\cal
R}_{f}$. For planes $S$ and $S'$ belonging to ${\cal R}^{m}$ and ${\cal R}^{m}_{f}$ (here ${\cal R}^{m}_{f}$ is the maximal $R$-subset of ${\mathbb G}_{m}(V)$ associated with ${\cal R}_{f}$) denote by ${\cal R}(S)$ and ${\cal R}_{f}(S')$ the sets of all planes $U\in {\cal R}$ and $U'\in {\cal R}_{f}$ incident to $S$ and $S'$, respectively.
For any plane $S\in{\cal R}^{n-1}$ the following statements are fulfilled:
1. if $n \ne 2k$ then there exists a plane $S'\in {\cal R}^{n-1}_{f}$ such that $$f({\cal R}(S))={\cal R}_{f}(S')\,;$$
2. if $n=2k$ then there exists a plane $S'\in {\cal R}^{m}_{f}$ such that $m=1$ or $n-1$ and the equality [(2.8)]{} holds true.
[**Proof.**]{} It is a direct consequence of Theorem 2.1 and Lemma 2.1. $\square$
For any plane $S\in{\cal R}^{k+1}$ the following statements are fulfilled:
1. if $n \ne 2k$ then there exists a plane $S'\in {\cal R}^{k+1}_{f}$ such that the equality [(2.8)]{} holds true;
2. if $n=2k$ then there exists a plane $S'\in {\cal R}^{m}_{f}$ such that $m=k-1$ or $k+1$ and the equality [(2.8)]{} holds true.
[**Proof.**]{} Let $L_{1},...,L_{n}$ and $L'_{1},...,L'_{n}$ be planes generating the sets ${\cal R}^{1}$ and ${\cal R}^{1}_{f}$, respectively. Denote by $S_{i}$ and $S'_{i}$ the planes belonging to ${\cal R}^{n-1}$ and ${\cal R}^{n-1}_{f}$ and such that $S_{i}$ does not contain $L_{i}$ and $S'_{i}$ does not contain $L'_{i}$. Lemma 2.7 states that if $n\ne 2k$ then for each $i=1,...,n$ there exists a number $j_{i}$ such that $$f({\cal R}(S_{i}))={\cal R}_{f}(S'_{j_{i}})\,.$$ Let us prove the statement (i) for the case when $S$ is the $(k+1)$-dimensional plane containing the lines $L_{1},...,L_{k+1}$ (for other planes belonging to the set ${\cal R}^{k+1}$ the proof is similar). We have $S=\cap^{n}_{i=k+2}S_{i}$ and $${\cal R}(S)=\bigcap^{n}_{i=k+2}{\cal R}(S_{i})\,.$$ The last equality and the equation (2.9) show that $$f({\cal R}(S))=
\bigcap^{n}_{i=k+2}{\cal R}_{f}(S'_{j_{i}})
={\cal R}_{f}(S')\,;$$ where $S'$ is the $(k+1)$-dimensional plane containing the lines $L'_{j_{1}},...,L'_{j_{k+1}}$.
If $n=2k$ then by Lemma 2.7 the following two cases can be realized:
1. there exists a number $j_{1}$ such that the equality (2.9) holds for $i=1$;
2. there exists a number $j$ such that $f({\cal R}(S_{1}))={\cal R}_{f}(L'_{j})$.
First of all show that for the case (a) there exist numbers $j_{2},...,j_{n}$ such that the equality (2.9) holds for any $i=2,...,n$ . Then the proof of the statement (ii) will be similar to the proof of the statement (i) given above.
Suppose that $$f({\cal R}(S_{i}))={\cal R}_{f}(L'_{j_{i}})$$ for some numbers $i$ and $j_{i}$ and consider the $(n-2)$-dimensional plane ${\hat S} = S_{1} \cap S_{i}$. Then $${\cal R}({\hat S})=
{\cal R}(S_{1}) \cap {\cal R}(S_{i})$$ and $$f({\cal R}({\hat S}))=
{\cal R}_{f}(S'_{j_{1}})\cap {\cal R}_{f}(L'_{j_{i}})\,.$$ The first set contains $\binom{2k-2}{k}$ elements, the second set contains $\binom{2k-2}{k-1}$ elements. However $$\binom{2k-2}{k} \ne \binom{2k-2}{k-1}\,.$$ Thus our hypothesis fails and the statement (ii) is proved for the case (a).
Show that the case (b) can be reduced to the case (a). Consider a non-degenerate sesquilinear form $\Omega$ on $V$ and the regular transformation $g=F_{k\,k}(\Omega)$. It is easy to see that the regular transformation $gf$ satisfies the conditions of the case (a). Then the maximal $R$-subset of ${\mathbb G}_{k+1}(V)$ associated with ${\cal R}_{gf}$ contains a plane $S''$ such that $$gf({\cal R}(S))= {\cal R}_{gf}(S'')\,.$$ Then the plane $$S'=(F_{k-1\,k+1}(\Omega))^{-1}(S'')\in {\cal R}^{k-1}_{f}$$ satisfies the required condition. $\square$
Now we can prove Theorem 1.1. The planes $U$ and $U'$ are adjacent and there exists a $(k+1)$-dimensional plane $S \in {\cal
R}^{k+1}$ containing them. By Lemma 2.8 the planes $f(U)$ and $f(U')$ are adjacent. The inverse transformation $f^{-1}$ is regular and the similar arguments show that the planes $f^{-1}(U)$ and $f^{-1}(U')$ are adjacent too. We have proved that $f$ preserves the distance between planes and the required statement is a direct consequence of the Chow Theorem.
Remark on exact $R$-sets
------------------------
Theorem 2.1 states that each $R$-set ${\cal R}\subset {\mathbb
G}_{k}(V)$ containing not less than $\binom{n-1}{k-1}+1$ elements if $n-k\le k$ and $\binom{n-1}{k}+1$ elements if $n-k\ge k$ satisfies the condition $\deg {\cal R}\le 1$. It is natural to ask: what is the minimal number $s^{n}_{k}$ such that each $R$-subset of ${\mathbb G}_{k}(V)$ containing more than $s^{n}_{k}$ elements is exact? The following statement gives the answer to this question.
If $1<k<n-1$ then each $R$-subset of ${\mathbb G}_{k}(V)$ containing more than $$s^{n}_{k}:=\binom{n-1}{k}+\binom{n-2}{k-2}$$ elements is exact. Moreover, there exists a non-exact $R$-subset of ${\mathbb G}_{k}(V)$ containing $s^{n}_{k}$ elements.
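For concreteness, the bound $s^{n}_{k}$ can be tabulated against the thresholds appearing in Theorem 2.1 (a small illustrative computation only):

```python
from math import comb

for n in range(4, 9):
    for k in range(2, n - 1):
        thr = comb(n - 1, k - 1) if n - k <= k else comb(n - 1, k)   # threshold of Theorem 2.1
        s_nk = comb(n - 1, k) + comb(n - 2, k - 2)                    # bound of Theorem 2.2
        print(f"n={n}, k={k}: Theorem 2.1 threshold {thr}, s^n_k = {s_nk}")
```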
Theorem 2.2 is not connected with Theorem 1.1, but its proof is not complicated and we give it here. It will be based on the terms introduced in Subsection 2.1.
[Let ${\cal R}$ be a maximal $R$-subset of ${\mathbb G}_{k}(V)$. Consider two planes $S \in {\cal R}^{n-1}$ and $S'\in {\cal R}^{2}$ such that $S$ does not contain $S'$. Suppose that $${\cal R}'={\cal R}(S)\cup {\cal R}(S')\,.$$ The sets ${\cal R}(S)$ and ${\cal R}(S')$ are disjoint and $$|{\cal R}'|=|{\cal R}(S)|+|{\cal R}(S')|=
\binom{n-1}{k}+\binom{n-2}{k-2}=s^{n}_{k}\,.$$ There exists a unique line $L_{i}$ which is not contained in the plane $S$ (this line is contained in $S'$). Then $n_{j}=1$ if $j\ne i$ and $S_{i}=S'$. Thus $n_{i}=2$ and the $R$-set ${\cal R}'$ is not exact. Consider a $k$-dimensional plane $U\in {\cal R}$ which contains the line $L_{i}$ and does not contain the plane $S'=S_{i}$. The intersection $U\cap S_{i}$ coincides with $L_{i}$. This implies that the $R$-set ${\cal
R}'\cup \{U\}$ is exact and $\deg({\cal R}')=1$. ]{}
Theorem 2.2 is a direct consequence of the following statement.
If an $R$-set ${\cal R}'\subset {\mathbb G}_{k}(V)$, $1<k<n-1$, contains not less than $s^{n}_{k}$ elements then $\deg({\cal R}') \le 1$ and the equality $\deg({\cal R}')=1$ holds if and only if ${\cal R}'$ is the set considered in Example [2.3]{}.
[**Proof.**]{} If the $R$-set ${\cal R}'$ is not exact then there exists a number $i$ such that $n_{i}\ne 1$. Consider the unique plane $S\in {\cal R}^{n-1}$ which does not contain the line $L_{i}$. It was noted above that ${\cal R}'$ can be represented as the union of the two disjoint sets ${\cal R}(S)\cap {\cal R}'$ and ${\cal R}'_{i}$. We have $$|{\cal R}(S)\cap {\cal R}'|\le |{\cal R}(S)|=
\binom{n-1}{k}$$ and $$|{\cal R}'_{i}|=|{\cal R}(S_{i})\cap {\cal R}'|\le
|{\cal R}(S_{i})|=\binom{n-n_{i}}{k-n_{i}}\le \binom{n-2}{k-2}$$ (see Remark 2.1). Then the inequality $$|{\cal
R}(S)\cap {\cal R}'|+|{\cal R}'_{i}|
\le \binom{n-1}{k}+\binom{n-2}{k-2}
=s^{n}_{k}$$ shows that the condition $|{\cal R}'| \ge s^{n}_{k}$ holds if and only if $${\cal R}(S)\cap {\cal R}'= {\cal R}(S)\,,\;\;
{\cal R}'_{i}={\cal R}(S_{i})$$ and $n_{i}=2$. $\square$
Transformations of ${\mathfrak I}_{k\,n-k}(V)$ and automorphisms of classical groups
====================================================================================
Transformations of ${\mathfrak I}_{k\,n-k}(V)$ preserving the commutativity and the adjacency
---------------------------------------------------------------------------------------------
Denote by ${\mathbb G}_{k\,n-k}(V)$ the set of all pairs $$(U, S)\in {\mathbb G}_{k}(V)\times{\mathbb
G}_{n-k}(V)$$ such that $$U+S=V\,.$$ We say that ${\cal R}\subset {\mathbb G}_{k\,n-k}(V)$ is an $R$-[*set*]{} if there exists a base for $V$ such that for each pair $(U, S)$ belonging to ${\cal R}$ the planes $U$ and $S$ contain $k$ and $n-k$ vectors from this base; i.e. there exists a coordinate system such that for all $(U, S)\in{\cal R}$ the planes $U$ and $S$ are coordinate planes for this system. Any base and any coordinate system satisfying that condition will be called [*associated*]{} with ${\cal R}$.
An $R$-set ${\cal R}$ is called [*maximal*]{} if any $R$-set containing ${\cal R}$ coincides with it. It is trivial that an $R$-subset of ${\mathbb G}_{k\,n-k}(V)$ is maximal if and only if it contains $\binom{n}{k}$ elements.
We say that a transformation of ${\mathbb G}_{k\,n-k}(V)$ is [*regular*]{} if it preserves the class of $R$-sets. A bijection $f$ of ${\mathbb G}_{k\,n-k}(V)$ onto ${\mathbb G}_{n-k\,k}(V)$ is called [*regular*]{} if $f$ and $f^{-1}$ map each $R$-set to an $R$-set. For the case when $m\ne k,n-k$ there are no regular bijections of ${\mathbb G}_{k\,n-k}(V)$ onto ${\mathbb G}_{m\,n-m}(V)$.
If $F$ is a field with characteristic other than two then denote by $\pi$ the bijection $$\sigma \to (U_{+}(\sigma),U_{-}(\sigma))$$ of ${\mathfrak I}_{k\,n-k}(V)$ onto ${\mathbb G}_{k\,n-k}(V)$.
For a set ${\cal R}\subset {\mathbb G}_{k\,n-k}(V)$ the following two conditions are equivalent:
1. ${\cal R}$ is an $R$-set,
2. involutions belonging to the set $\pi^{-1}({\cal R})$ commute.
The proof of Proposition 3.1 will be based on the following well-known lemma.
An involution $s$ and a linear transformation $f$ commute if and only if $f$ preserves $U_{+}(s)$ and $U_{-}(s)$.
[**Proof.**]{} The implication ${\rm (i)}\Rightarrow {\rm (ii)}$ is a simple consequence of Lemma 3.1.
Suppose that ${\cal R}$ is a subset of ${\mathbb G}_{k\,n-k}(V)$ such that any two elements of $\pi^{-1}({\cal R})$ commute. If the number of elements $|{\cal R}|$ is finite then for each set ${\cal R}'\subset {\cal R}$ define $$U_{{\cal R}'}=\left(
\bigcap_{s\in \pi^{-1}({\cal R}')}U_{+}(s) \right) \bigcap
\left(
\bigcap_{s\in \pi^{-1}({\cal R}\setminus{\cal R}')}U_{-}(s)
\right).$$ Some of these intersections are non-zero, we denote them by $U_{1},...,U_{i}$. Then $$U_{1}+...+U_{i}=V\,,$$ $U_{i}\cap U_{j}=\{0\}$ if $i\ne j$ and there exists a coordinate system such that $U_{1},...,U_{i}$ are coordinate planes for this system. This statement can be obtained as a consequence of Lemma 3.1 by the induction on $|{\cal R}|$. It is trivial that for any $(U,S)\in {\cal R}$ the planes $U$ and $S$ are coordinate planes for our system and ${\cal R}$ is an $R$-set.
For the general case the arguments given above show that each finite subset of ${\cal R}$ contains not more than $\binom{n}{k}$ elements. Thus the set ${\cal R}$ is finite and we obtain the required statement. $\square$
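The implication ${\rm (i)}\Rightarrow {\rm (ii)}$ can also be illustrated numerically: the involutions attached to complementary coordinate planes of one fixed basis are simultaneously diagonal, hence pairwise commuting. The following sketch (an illustration over the reals rather than an abstract field $F$) checks this for a maximal $R$-set:

```python
import numpy as np
from itertools import combinations

n, k = 4, 2

def involution(plus_indices):
    """(k, n-k)-involution: +1 on the coordinate k-plane spanned by the chosen
    basis vectors, -1 on the complementary coordinate (n-k)-plane."""
    d = np.full(n, -1.0)
    d[list(plus_indices)] = 1.0
    return np.diag(d)

# The involutions attached to one maximal R-subset of G_{k,n-k}(V)
invs = [involution(c) for c in combinations(range(n), k)]
assert all(np.allclose(a @ b, b @ a) for a, b in combinations(invs, 2))
print(f"all {len(invs)} = C({n},{k}) involutions of a maximal R-set pairwise commute")
```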
Proposition 3.1 shows that a transformation $f$ of ${\mathbb G}_{k\,n-k}(V)$ is regular if and only if $\pi^{-1} f \pi$ is a transformation of ${\mathfrak I}_{k\,n-k}(V)$ preserving the commutativity.
[For a collineation $g$ of $V$ denote by $g_{k}$ and $g_{n-k}$ the transformations of ${\mathbb G}_{k}(V)$ and ${\mathbb G}_{n-k}(V)$ induced by it. Then $$(U,S)\to (g_{k}(U),g_{n-k}(S))\,,$$ is a regular transformation of ${\mathbb G}_{k\,n-k}(V)$. The respective transformation of ${\mathfrak I}_{k\,n-k}(V)$ maps each involution $\sigma$ to $g\sigma g^{-1}$. ]{}
[Recall that any non-degenerate sesquilinear form $\Omega$ defines the regular bijection $f_{k\,n-k}(\Omega)$ of ${\mathbb G}_{k}(V)$ onto ${\mathbb G}_{n-k}(V)$. The map $$(U,S)\to (f_{n-k\,k}(\Omega)(S),f_{k\,n-k}(\Omega)(U))$$ is a regular transformation of ${\mathbb G}_{k\,n-k}(V)$; denote it by $f_{k\,n-k}(\Omega)$. It is not difficult to see that $$\pi^{-1}f_{k\,n-k}(\Omega)\pi$$ transfers each $\sigma$ to $h^{-1}\check{\sigma}h$, where $h$ is the correlation defined by our form $\Omega$. ]{}
[Consider the bijection $i_{k\, n-k}$ of ${\mathbb G}_{k\,n-k}(V)$ onto ${\mathbb G}_{n-k\,k}(V)$ which transfers $(U,S)$ to $(S,U)$. Then $i_{k\, n-k}$ is regular and $\pi^{-1} i_{k\, n-k} \pi$ coincides with the bijection of ${\mathfrak I}_{k\,n-k}(V)$ onto ${\mathfrak I}_{n-k\,k}(V)$ transferring $\sigma$ to $-\sigma$. ]{}
We say that two pairs $(U,S)$ and $(U',S')$ belonging to ${\mathbb G}_{k\, n-k}(V)$ are [*adjacent*]{} if one of the following conditions holds true:
1. $U=U'$ and the planes $S$ and $S'$ are adjacent,
2. $S=S'$ and the planes $U$ and $U'$ are adjacent.
Two involutions $\sigma$ and $\sigma'$ will be called [*adjacent*]{} if the pairs $\pi(\sigma)$ and $\pi(\sigma')$ are adjacent. It is trivial that the transformations considered in Examples 3.1 – 3.3 preserve the adjacency.
An immediate verification shows that two involutions $\sigma$ and $\sigma'$ are adjacent if and only if their composition $\sigma\sigma'$ is a transvection (recall that a linear transformation $g\in {\mathfrak S}{\mathfrak L}(V)$ is called a transvection if the dimension of $\ker(Id -g)$ is equal to $n-1$).
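This characterization is easy to test on a small example (a numerical illustration, using the convention above that a transvection is any $g$ with $\dim\ker(Id-g)=n-1$):

```python
import numpy as np

def inv_from_planes(U_basis, S_basis):
    """Involution equal to +1 on span(U_basis) and -1 on span(S_basis)."""
    B = np.column_stack(U_basis + S_basis)
    D = np.diag([1.0] * len(U_basis) + [-1.0] * len(S_basis))
    return B @ D @ np.linalg.inv(B)

n = 4
e = [np.eye(n)[:, i] for i in range(n)]
# sigma and sigma' share U_+ = <e0, e1>; their (-1)-planes <e2, e3> and <e2, e1 + e3>
# meet in the line <e2>, so the two (2,2)-involutions are adjacent.
s1 = inv_from_planes([e[0], e[1]], [e[2], e[3]])
s2 = inv_from_planes([e[0], e[1]], [e[2], e[1] + e[3]])
g = s1 @ s2
dim_ker = n - np.linalg.matrix_rank(np.eye(n) - g)
print("dim ker(Id - sigma sigma') =", dim_ker)   # n - 1 = 3: the composition is a transvection
```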
If $n\ge 3$ and the characteristic of the field $F$ is not equal to $2$ then the following two statements are fulfilled:
1. for the case when $n\ne 2k$ each transformation of ${\mathfrak I}_{k\,n-k}(V)$ preserving the commutativity and the adjacency is defined by a collineation $($Example [3.1]{}$)$ or a correlation $($Example [3.2]{}$)$;
2. if $n=2k$ then for any transformation $f$ of ${\mathfrak I}_{k\,n-k}(V)$ preserving the commutativity and the adjacency one of the transformations $f$ or $-f$ $($Example [3.3]{}$)$ is defined by a collineation or a correlation.
If $n\ge 3$ and the characteristic of the field $F$ is not equal to $2$ then the following two statements hold true:
1. for the case when $n\ne 2k$ each transformation of ${\mathfrak I}_{k\,n-k}(V)$ preserving the commutativity and the adjacency can be extended to an automorphism of the group generated by $(k,n-k)$-involutions;
2. if $n=2k$ then for any transformation $f$ of ${\mathfrak I}_{k\,n-k}(V)$ preserving the commutativity and the adjacency one of the transformations $f$ or $-f$ can be extended to an automorphism of the group generated by $(k,n-k)$-involutions.
For $k=1,n-1$ each transformation of ${\mathfrak I}_{k\,n-k}(V)$ preserving the commutativity preserves the adjacency. For the case when $n=2k$ this statement fails (Example 3.4). For the other cases it has not been proved yet; we do not have an analogue of Mackey’s lemma (see \[M\] or \[D2\]).
Proposition 3.1 shows that Theorem 3.1 can be reformulated in the following form.
If $n\ge 3$ then the following two statements hold true:
1. for the case when $n\ne 2k$ each regular transformation of ${\mathbb G}_{k\,n-k}(V)$ preserving the adjacency is induced by a collineation or defined by a non-degenerate sesquilinear form;
2. if $n=2k$ then for any regular transformation $f$ of ${\mathbb G}_{k\,n-k}(V)$ preserving the adjacency one of the transformations $f$ or $i_{k\,k}f$ is induced by a collineation or defined by a non-degenerate sesquilinear form.
[Suppose that $n=2k$ and consider a set ${\cal G}\subset {\mathbb G}_{k\,k}(V)$ satisfying the following condition: for each $(U,S)\in {\cal G}$ the pair $(S,U)$ belongs to ${\cal G}$. Define $$i_{{\cal G}}(U,S)=(S,U)\;\mbox{ if }\;(U,S)\in {\cal G}$$ and $$i_{{\cal G}}(U,S)=(U,S)\;\mbox{ if }\;(U,S)\notin {\cal G}\,.$$ Each maximal $R$-subset of ${\mathbb G}_{k\,k}(V)$ containing $(U,S)$ contains $(S,U)$. This implies that the transformation $i_{{\cal G}}$ is regular, but for the case when ${\cal G}\ne \emptyset$ and ${\cal G}\ne {\mathbb G}_{k\,k}(V)$ it does not preserve the adjacency. ]{}
Proof
-----
[**First step.**]{} For a $k$-dimensional plane $U$ and an $(n-k)$-dimensional plane $S$ denote by ${\cal X}(U)$ and ${\cal X}(S)$ the sets of all pairs $(U,\cdot)$ and $(\cdot, S)$ belonging to ${\mathbb G}_{k\,n-k}(V)$. It is trivial that the intersection ${\cal X}(U)\cap {\cal X}(S)$ is not empty if and only if $U+S=V$; for this case it consists only of the pair $(U,S)$. For any other $k$-dimensional plane $U'$ we have ${\cal X}(U)\cap {\cal X}(U')=\emptyset$.
Now for an arbitrary plane $P\subset V$ denote by ${\cal Y}(U, P)$ the set of all pairs $(U,S')\in {\cal X}(U)$ such that $S'$ is incident to $P$. Denote also by ${\cal Y}(S, P)$ the set of all pairs $(U',S)\in {\cal X}(S)$ such that $U'$ is incident to $P$. Note that in some cases the sets ${\cal Y}(U, P)$ and ${\cal Y}(S, P)$ may be empty (for example, if $P$ is a plane contained in $U$ or $S$).
If $P$ is an $(n-k\pm 1)$-dimensional plane then any two pairs belonging to ${\cal Y}(U, P)$ are adjacent. The set ${\cal Y}(S, P)$ satisfies the similar condition if the dimension of $P$ is equal to $k\pm 1$. In what follows we will use the following statement: if ${\cal G}$ is a subset of ${\mathbb G}_{k\, n-k}(V)$ any two elements of which are adjacent then there exists a $k$-dimensional plane $U$ or an $(n-k)$-dimensional plane $S$ such that ${\cal G}$ is contained in ${\cal X}(U)$ or ${\cal X}(S)$ (the proof is trivial).
[**Second step.**]{} Let $f$ be a transformation of ${\mathbb G}_{k\,n-k}(V)$ preserving the adjacency. Show that [*for any $k$-dimensional plane $U$ there exists a $k$-dimensional plane $U_{f}$ or an $(n-k)$-dimensional plane $S_{f}$ such that the set $f({\cal X}(U))$ coincides with ${\cal X}(U_{f})$ or ${\cal X}(S_{f})$.*]{}
Let us fix two adjacent pairs $(U,S)$ and $(U,S')$. Then $P=S\cap S'$ is an $(n-k-1)$-dimensional plane and any two pairs belonging to $f({\cal Y}(U,P))$ are adjacent. Therefore there exists a plane $T$ the dimension of which is equal to $k$ or $n-k$ and such that $$f({\cal Y}(U,P))\subset {\cal X}(T)\,.$$ Now consider a pair $(U, {\hat S})$ adjacent to $(U,S)$. The planes $S$ and ${\hat S}$ are contained in some $(n-k+1)$-dimensional plane $P'$. There exists a plane $T'$ such that $\dim T'=k$ or $n-k$ and $$f({\cal Y}(U,P'))\subset {\cal X}(T')\,.$$ The inclusions $P\subset S\subset P'$ guarantee that the set $${\cal Y}(U,P)\cap {\cal Y}(U,P')$$ contains not less than two elements. Then ${\cal X}(T)\cap {\cal X}(T')$ satisfies a similar condition. This means that the planes $T$ and $T'$ are coincident (see First step).
Thus for any pair $(U, {\hat S})$ satisfying the condition $d(S,{\hat S})=1$ the pair $f(U, {\hat S})$ belongs to the set ${\cal X}(T)$. Suppose that the analogous statement holds for the case when $d(S,{\hat S})<i$, $i>1$, and consider the case $d(S,{\hat S})=i$.
By the definition of the distance there exists a pair $(U,{\check S})\in {\cal X}(U)$ such that $d(S,{\check S})=i-1$ and the planes ${\check S}$ and ${\hat S}$ are adjacent. Denote by $P''$ the $(n-k+i-1)$-dimensional plane containing $S$ and ${\check S}$. By our hypothesis $$f({\cal Y}(U,P''))\subset {\cal X}(T)\,.$$ For the $(n-k-1)$-dimensional plane ${\hat P}={\hat S}\cap {\check S}$ there exists a plane ${\hat T}$ the dimension of which is equal to $k$ or $n-k$ and such that $$f({\cal Y}(U,{\hat P}))\subset {\cal X}({\hat T})\,.$$ Then ${\hat P}\subset {\check S}\subset P''$ and the set $${\cal Y}(U,P'')\cap {\cal Y}(U,{\hat P})$$ contains not less than two elements. Thus ${\hat
T}$ coincides with $T$ and we obtain the required statement.
[**Third step.**]{} For the transformation $f$ introduced above consider the set ${\cal U}$ of all $k$-dimensional planes $U$ such that $${\cal X}(U)=f({\cal X}(U'))$$ for some $k$-dimensional plane $U'$. Consider also the set ${\cal S}$ of all $(n-k)$-dimensional planes $S$ such that $${\cal X}(S)=f({\cal X}(U''))$$ for some $k$-dimensional plane $U''$.
Let $U$ and $S$ be planes belonging to the sets ${\cal U}$ and ${\cal S}$. Let also $U'$ and $U''$ be $k$-dimensional planes satisfying the conditions (3.1) and (3.2). Then ${\cal X}(U')\cap {\cal X}(U'')=\emptyset$. Therefore ${\cal X}(U)\cap {\cal X}(S)=\emptyset$. The last equality holds only for the case when $$\dim U\cap S\ge 1\,.$$ In other words, the following statements are fulfilled:
1. if a $k$-dimensional plane $U$ belongs to ${\cal U}$ then the inequality (3.3) holds for each $S\in {\cal S}$,
2. if an $(n-k)$-dimensional plane $S$ belongs to ${\cal S}$ then the inequality (3.3) holds for each $U\in {\cal U}$.
For an arbitrary pair $(U,S)\in
{\mathbb G}_{k\,n-k}(V)$ one of the planes $U$ or $S$ belongs to ${\cal U}$ or ${\cal S}$, respectively. Thus one of these sets is empty. If the transformation $f$ is regular then one of the following cases is realized:
1. for each $k$-dimensional plane $U$ there exists a $k$-dimensional plane $U_{f}$ such that $$f({\cal X}(U))={\cal X}(U_{f})\,,$$ then $U\to U_{f}$ is a regular transformation of ${\mathbb G}_{k}(V)$;
2. for each $k$-dimensional plane $U$ there exists an $(n-k)$-dimensional plane $S_{f}$ such that $$f({\cal X}(U))={\cal X}(S_{f})\,,$$ then $U\to S_{f}$ is a regular bijection of ${\mathbb G}_{k}(V)$ onto ${\mathbb G}_{n-k}(V)$.
Theorem 1.1 and Corollary 1.1 give the required result.
W. L. Chow, On the geometry of algebraic homogeneous spaces, Ann. of Math., 50 (1949), p. 32 – 67.
J. Dieudonné, On the automorphisms of the classical groups, Memoirs Amer. Math. Soc., 2 (1951), p. 1 – 95.
J. Dieudonné, La Géométrie des Groupes Classiques, Springer – Verlag, 1971.
G. W. Mackey, Isomorphisms of normed linear spaces, Ann. of Math., 43 (1942), p. 244 – 260.
O. T. O’Meara, Lectures on linear groups, Providence, Rhode Island, 1974.
O. T. O’Meara, Lectures on symplectic groups, Providence, Rhode Island, 1976.
P. Orlik, Introduction to arrangements, Regional conference series in mathematics, N 72, Providence, Rhode Island, 1989.
C. E. Rickart, Isomorphic groups of linear transformations, Amer. J. Math., 72 (1950), p. 451 – 464.
J. Tits, Buildings of spherical type and finite BN-pairs, Lect. Notes Math., 386, Springer – Verlag, 1974.
---
abstract: 'Neutral models aspire to explain biodiversity patterns in ecosystems where species difference can be neglected, as it might occur at a specific trophic level, and perfect symmetry is assumed between species. Voter-like models capture the essential ingredients of the neutral hypothesis and represent a paradigm for other disciplines like social studies and chemical reactions. In a system where each individual can interact with all the other members of the community, the typical time to reach an absorbing state with a single species scales linearly with the community size. Here we show, by using a rigorous approach within a large deviation principle and confirming previous approximate and numerical results, that in a heterogeneous voter model the typical time to reach an absorbing state scales exponentially with the system size, suggestive of an asymptotic active phase.'
author:
- 'Claudio Borile[^1], Paolo Dai Pra[^2], Markus Fischer[^3], Marco Formentin[^4], and Amos Maritan[^5]'
title: Time to absorption for a heterogeneous neutral competition model
---
[**AMS 2000 subject classification:**]{} 60K35; 60F10; 60K37; 92D25.
***keywords:***[ Voter Model with disorder, Neutral models of biodiversity, Large deviations, Stochastic dynamics with quenched disorder]{}
Introduction {#sect:intro}
============
Models of interacting degrees of freedom are nowadays widely spread in different scientific disciplines—from Physics and Mathematics to Biology, Ecology, Finance and Social Sciences—, and more than ever in the last few years there has been a growing effort in connecting the phenomenology observed at a macroscopic level with a simplified “microscopic” modeling of very disparate complex systems. Clearly, this idea is extremely appealing to statistical physicists and can provide a good benchmark for developing new ideas and methods. A famous and particularly successful example of this approach, which reconciles interdisciplinarity and pure research in statistical physics, can be found in the ecological literature in the so-called *neutral theory of species diversity*, that aims at giving a first null individual-based modeling of the dynamic competition among individuals of different species in the same trophic level of an ecosystem [@Hubbell; @Volkov03; @Azaele06; @vallade2003analytical; @alonso2006merits]. The neutral hypothesis finds its mathematical equivalent in the voter model (VM) [@Liggett] and its generalizations [@AlHammal05], which, in turn, is equivalent to the well-known Moran model in genetics [@Kimura]. This model has been deeply studied and has gradually become a paradigmatic example of non-equilibrium lattice models. It is conceptually simple but nevertheless has a very rich phenomenology with applications in many different scientific areas [@Henkel; @Dornic01; @Durrett94; @Castellano09b; @Blythe07]. Despite the fact that the original formulation of the VM can be exactly solved in any spatial dimension [@Liggett]—fact that contributed greatly to its rise—, any slight modification made in order to improve the realism of the model complicates drastically its analysis.
Among the possible modifications of the original VM, there has been a recent interest in studying the asymptotic behavior of the VM in the presence of quenched random-field-like disorder, whose motivations span from ecology [@Pigolotti10; @borile2013effect; @tilman1994habitat] to social modeling [@Masuda10; @Masuda11] through models of chemical reactants [@Krapivsky] and more fundamental research [@Odor]. A particularly interesting problem is to assess the typical time needed by a finite-size system to reach one of the absorbing states of the model; depending on the particular interpretation of the model, that would mean the typical time for the extinction of a species in an ecosystem, or the reaching or not of a consensus on a particular topic in a society. In all these cases, it is known that heterogeneities, in the habitat of an ecosystem or in the ideologies of groups of people, play a major role in shaping the global dynamics of the complex system. It has been shown [@Masuda10] that a quenched (random-field-like) disorder creating an intrinsic preference of each individual for a particular state/opinion hinders the formation of consensus, hence favoring coexistence. In the context of neutral ecology, this corresponds to a version of the VM in which at each location there is an intrinsic preference for one particular species, leading to mixed states lasting for times that grow exponentially with system size [@Pigolotti10; @Masuda10].
Here, we propose a rigorous mathematical development of a disordered VM intended as a general model of neutral competition in a heterogeneous environment. Supporting the previous findings [@Pigolotti10; @Masuda10] based on computational investigations or approximation arguments, we will show that a heterogeneous environment indeed significantly favors the maintenance of the active state, and the typical time needed to reach an absorbing phase passes from a power-law dependence on the system’s size, typical of the neutral theories, to an exponentially long time, a signature of an asymptotic active phase. This will be achieved by setting up a large deviation principle for the considered model and will thus provide a first attempt at an extreme value theory for systems with multiple symmetric absorbing states.\
Macroscopic limit
=================
The state of the system is described by a vector of spins $\eta = (\eta_1,\eta_2,\ldots,\eta_N) \in \{0,1\}^N$. The random environment consists of $N$ independent and identically distributed random variables $h_1,h_2,\ldots,h_N$, taking the values $0$ and $1$ with probability, respectively, $1-q$ and $q \in (0,1)$. Moreover, let $\rho \in [0,1]$ be a given parameter. While the random environment remains constant, the state $\eta$ evolves in time according to the following rules:
- each site $1,2,\ldots,N$ has its own independent random clock. A given site $i$ after a waiting time with exponential distribution of mean $1$ chooses at random, with uniform probability, a site $j \in \{1,2,\ldots,N\}$.\
- If $\eta_j = h_i$, then the site $i$ updates its spin from $\eta_i$ to $\eta_j$. If $\eta_j \neq h_i$, then the site $i$ updates its spin from $\eta_i$ to $\eta_j$ with probability $\rho$, while it keeps its spin $\eta_i$ with probability $1-\rho$.
Thus, the site $i$ has a preference to agree with sites whose spins equal its local field $h_i$. For $\rho = 1$, this effect is removed, and we obtain the standard Voter model. Note that, by symmetry, there is no loss of generality in assuming $q \geq 1/2$, as we will from now on.
In more formal terms, for every realization of the random environment, the spins evolve as a continuous-time Markov chain with generator $L_N$ acting on a function $f:\{0,1\}^N\rightarrow \mathbb{R}$ according to $$\begin{aligned}
\label{spin}
L_Nf(\eta) := \sum_{i=1}^N\frac{1}{N}&\sum_{j=1}^N\mathbb{I}_{h_i=\eta_j}\left(f(\eta^{j\rightarrow i})-f(\eta)\right)\nonumber\\
&+\rho\mathbb{I}_{h_i\neq\eta_j}\left(f(\eta^{j\rightarrow i})-f(\eta)\right),\end{aligned}$$ where $\mathbb{I}_A$ denotes the indicator function of the set $A$ and $\eta^{j\rightarrow i}$ is the configuration obtained from $\eta$ by replacing the value of the spin at the site $i$ with that of the spin at the site $j$. This Markov chain has two absorbing states, corresponding to all spin values equal to zero and all equal to one. We denote by $T_N$ the random time needed to reach one of the two absorbing states.\
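Although not used in the proofs, the finite-$N$ dynamics are straightforward to simulate; the following Gillespie-type sketch (an illustration with arbitrary parameter values) samples $T_N$ and already hints at the contrast between $\rho=1$ and $\rho<1$ established below:

```python
import numpy as np

rng = np.random.default_rng(0)

def absorption_time(N, rho, q=0.5, max_steps=10**6):
    """One run of the disordered voter model; returns T_N, or None if not absorbed."""
    h = (rng.random(N) < q).astype(int)      # quenched random environment
    eta = rng.integers(0, 2, size=N)         # initial spins
    t, ones = 0.0, int(eta.sum())
    for _ in range(max_steps):
        if ones == 0 or ones == N:
            return t
        t += rng.exponential(1.0 / N)        # next ring among N independent unit-rate clocks
        i, j = rng.integers(N), rng.integers(N)
        if eta[j] != eta[i] and (eta[j] == h[i] or rng.random() < rho):
            ones += int(eta[j]) - int(eta[i])
            eta[i] = eta[j]
    return None

for rho in (1.0, 0.5):
    runs = [absorption_time(N=50, rho=rho) for _ in range(10)]
    absorbed = [t for t in runs if t is not None]
    print(f"rho = {rho}: {len(absorbed)}/10 runs absorbed;",
          "mean T_N =", round(float(np.mean(absorbed)), 1) if absorbed else "n/a")
```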
It is useful to review the main properties of the model in the case $\rho = 1$. In this case, the dynamics are independent of $q$ and the unique order parameter for the model is given by $K_N := \sum_{i=1}^N \eta_i$, i.e., the number of spins with value $1$. It is easy to check, using the generator $L_N$, that $K_N$ evolves as a random walk on $\{0,1,\ldots,N\}$: if $K_N = k$, then it moves to either $k+1$ or $k-1$ with the same rate $\frac{(N-k)k}{N}$. By standard arguments on birth and death processes (see e.g. [@lambert2008]), one shows that ${\ensuremath{\langle T_N\rangle}} \sim N \ln(2)$ as $N \rightarrow + \infty$: the mean absorption time grows linearly in $N$. Consider now the general case $\rho \leq 1$. Here the system is described in terms of two integer-valued order parameters, namely $\displaystyle{\sum_{i=1}^Nh_i\eta_i}$ and $\displaystyle{\sum_{i=1}^N(1-h_i)\eta_i}$, which it will be convenient to rescale as follows: $$\begin{split}
m_N^+ & := m_N^+(\eta) := \frac{1}{N}\sum_{i=1}^Nh_i\eta_i \\
m_N^- & := m_N^-(\eta) :=\frac{1}{N}\sum_{i=1}^N(1-h_i)\eta_i
\end{split}$$ Note that the pair $(m_N^+, m_N^-)$ belongs to the subset of the plane $\{(x,y) \in [0,1]^2 : x+y \leq 1\}$. Note, however, that when the limit as $N \rightarrow +\infty$ is considered, $m^+_N \leq \frac{1}{N}\sum_{i=1}^Nh_i \rightarrow q$, where this last convergence follows from the law of large numbers. Similarly, $m^-_N \leq \frac{1}{N}\sum_{i=1}^N(1-h_i) \rightarrow 1- q$. Thus, limit points of the sequence $(m_N^+, m_N^-)$ belong to $[0,q] \times [0,1-q]$. Given an initial, possibly random, state $\eta(0)$ for the dynamics of $N$ spins, we denote by $m_N^{\pm}(t)$ the (random) value at time $t$ of the order parameters $m_N^{\pm}$. In what follows we also denote by $\mu_N$ the distribution of $\eta(0)$.\
\[limdyn\] Assume there exists a non-random pair $(\bar{m}^+, \bar{m}^-) \in [0,q] \times [0,1-q]$ such that, for every $\epsilon>0$, $$\lim_{N \rightarrow +\infty} \mu_N\left( \left| m_N^{\pm}(0) - \bar{m}^{\pm} \right| > \epsilon \right) = 0.$$ Then the stochastic process $(m^+(t), m^-(t))_{t \geq 0}$ converges in distribution to the unique solution of the following system of ODEs:
$$\label{eq}
\left\{\begin{array}{lll}
\dot{m}^+ &= &-\rho m^+(1-m^--m^+)\\
& &+(q-m^+)(m^++m^-)\\
\dot{m}^- & = &-m^-(1-m^--m^+)\\
& &+\rho(1-q-m^-)(m^++m^-) \\ m^{\pm}(0) & = & \bar{m}^{\pm}
\end{array}\right.$$
[**Proof:**]{} Denote by $\mathcal{G}$ the generator of the semigroup associated to the deterministic evolution , i.e., $$\mathcal{G} f(m^+,m^-) := V^+(m^+,m^-) \frac{\partial f}{\partial m^+} + V^-(m^+,m^-) \frac{\partial f}{\partial m^-},$$ with $$\begin{split}
V^+(m^+,m^-) = & -\rho m^+(1-m^--m^+) \\ & +(q-m^+)(m^++m^-) \\
V^-(m^+,m^-) = & -m^-(1-m^--m^+) \\ & + \rho(1-q-m^-)(m^++m^-)
\end{split}$$ Let $f: [0,1]^2 \rightarrow \mathbb{R}$. By direct computation one finds that $$L_N [f(m^+_N, m^-_N)](\eta)$$ depends on $\eta$ only through $m^+_N,m^-_N$, which implies that the process $(m_N^+(t), m_N^-(t))_{t \geq 0}$ is a Markov process, whose associated semigroup has a generator $\mathcal{G}_N$ that can be identified by the identity $$L_N [f(m^+_N, m^-_N)](\eta) = [\mathcal{G}_N f] (m^+_N(\eta), m^-_N(\eta)),$$ which yields $$\label{gn}
\begin{split}
&\mathcal{G}_{N}f(x,y)\\
&:= N \Bigl( \left(q-x\right) (x+y) \left(f(x+\tfrac{1}{N},y) - f(x,y)\right)) \\
&\quad+ \rho x \left(1-(x+y)\right) \left(f(x-\tfrac{1}{N},y) - f(x,y)\right) \\
&\quad+ \rho \left(1-q-y\right) (x+y) \left(f(x,y+\tfrac{1}{N}) - f(x,y)\right) \\
&\quad+ y \left(1-(x+y)\right) \left(f(x,y-\tfrac{1}{N}) - f(x,y)\right) \Bigr) .
\end{split}$$
Moreover, if $f$ is smooth with bounded derivatives, one checks that $$\lim_{N \rightarrow +\infty}\sup_{(m^+,m^-) \in [0,1]^2} \left| \mathcal{G}_N f(m^+,m^-) - \mathcal{G} f(m^+,m^-)\right| = 0.$$ The conclusion then follows by a standard result of convergence of Markov processes, cf. [@ethier2009markov], Ch. 4, Corollary 8.7. [ $\square$]{}
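The uniform convergence of the generators used in the last step can be observed directly; the sketch below (an illustration only, with arbitrarily chosen parameters and test function) evaluates $\mathcal{G}_N f$ and $\mathcal{G} f$ at one point and shows the $O(1/N)$ decay of their difference:

```python
import numpy as np

q, rho = 0.5, 0.6
f  = lambda x, y: np.sin(x) * np.cos(2 * y)           # a smooth test function
fx = lambda x, y: np.cos(x) * np.cos(2 * y)
fy = lambda x, y: -2 * np.sin(x) * np.sin(2 * y)

def G(x, y):                                           # limiting generator: transport along the drift
    Vp = -rho * x * (1 - x - y) + (q - x) * (x + y)
    Vm = -y * (1 - x - y) + rho * (1 - q - y) * (x + y)
    return Vp * fx(x, y) + Vm * fy(x, y)

def G_N(x, y, N):                                      # generator of the process (m_N^+, m_N^-)
    return N * ((q - x) * (x + y) * (f(x + 1 / N, y) - f(x, y))
                + rho * x * (1 - x - y) * (f(x - 1 / N, y) - f(x, y))
                + rho * (1 - q - y) * (x + y) * (f(x, y + 1 / N) - f(x, y))
                + y * (1 - x - y) * (f(x, y - 1 / N) - f(x, y)))

x, y = 0.31, 0.12
for N in (10, 100, 1000, 10000):
    print(N, abs(G_N(x, y, N) - G(x, y)))              # decreases roughly like 1/N
```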
[This first theorem formalizes and extends a useful result for the infinite size system, already obtained by means of different techniques in some previous works [@Masuda10; @borile2013effect]. It [is a dynamic law of large numbers that]{} quantifies the deterministic evolution of the order parameters as obtained from the limiting dynamics described by $L_N$ neglecting fluctuations. The stability analysis of the fixed points of the limiting equation provides some immediate results on the global dynamics of the model in the infinite size limit:]{} For $\rho = 1$, the equations trivialize: the only relevant variable is $m = m^+ + m^-$, which satisfies $\dot{m} = 0$. This is simply the macroscopic consequence of the fact that $K_N = N (m^+_N+ m_N^-)$ evolves as a symmetric random walk. The picture changes as $\rho <1$. When $\rho<1$, the system has three equilibrium points:
1. $(m^+, m^-)=(q,1-q)$, which represents a limiting behavior where all the spins equal 1;\
2. $(m^+, m^-)=(0,0)$, which is the case with all spins equal to 0;\
3. $(m^+, m^-)= \left( \frac{q(1+\rho)-\rho}{(1+\rho)(1-\rho)}, \rho \frac{q(1+\rho)-\rho}{(1+\rho)(1-\rho)} \right)$.
It is easily checked that equilibrium 3 lies inside $[0,q] \times [0,1-q]$, hence is admissible, if and only if the condition $$\label{qcon}
\rho < \frac{1-q}{q}$$ holds (remember we are assuming $q \geq 1/2$). The stability analysis of the three equilibria is also easily done: for $\frac{1-q}{q} < \rho < 1$ equilibrium 1 is stable, and attracts all initial conditions except $(0,0)$, which is an unstable equilibrium, while for $\rho < \frac{1-q}{q}$ both $(q,1-q)$ and $(0,0)$ are unstable, and the stable equilibrium 3 emerges, attracting all initial conditions except the unstable equilibria. Note that, for $q = 1/2$, only this second regime exists. Thus, in the case $q > 1/2$ and $\frac{1-q}{q} < \rho < 1$, the asymmetric disorder stabilizes the equilibrium $(q,1-q)$; lower values of $\rho$ increase the effects of the disorder, so that a new stable equilibrium appears.\
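The phase portrait described above is easy to confirm numerically; the sketch below (an illustration only, for $q=1/2$ and an arbitrary $\rho<1$) integrates the limiting ODE with a crude Euler scheme and inspects the Jacobian at equilibrium 3:

```python
import numpy as np

q, rho = 0.5, 0.6

def b(m):
    mp, mm = m
    return np.array([-rho * mp * (1 - mp - mm) + (q - mp) * (mp + mm),
                     -mm * (1 - mp - mm) + rho * (1 - q - mm) * (mp + mm)])

# Equilibrium 3 (admissible here, since rho < (1-q)/q = 1)
mp_star = (q * (1 + rho) - rho) / ((1 + rho) * (1 - rho))
eq3 = np.array([mp_star, rho * mp_star])

m, dt = np.array([0.4, 0.05]), 1e-3        # crude Euler integration of the limiting ODE
for _ in range(200_000):
    m = m + dt * b(m)
print("flow end point:", m, "  equilibrium 3:", eq3, "  b(eq3) =", b(eq3))

eps = 1e-7                                  # finite-difference Jacobian at equilibrium 3
J = np.column_stack([(b(eq3 + eps * v) - b(eq3)) / eps for v in np.eye(2)])
print("Jacobian eigenvalues at equilibrium 3:", np.linalg.eigvals(J))   # negative real parts
```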
Large deviations and time to absorption
=======================================
[In order to get information on the behavior of the system when the total number of “individuals” $N$ is large but finite, we need to go beyond the law of large numbers in Theorem \[limdyn\]. In particular, our next aim is ]{} to show that, whenever equilibrium 3 is present for the macroscopic dynamics, the absorption time for the microscopic system grows exponentially in $N$. To this end, we use the Freidlin and Wentzell theory for randomly perturbed dynamical systems (see [@freidlin2012random]). This theory, based on finite time Large Deviations, yields asymptotic estimates characterizing the long-time behavior of the perturbed system (here the microscopic system described by $(m^{+}_{N},m^{-}_{N})$) as the noise intensity tends to zero (equivalent here to $N\to \infty$). See also [@den2008large; @touchette2009large] for an introduction to Large deviations.\
For simplicity, we assume $q = 1/2$, so that equilibrium 3 exists for every $\rho <1$. For ${\bf x} = (x,y) \in [0,1/2]^2$ set $$\begin{split}
l_1({\bf x}) & = \rho x (1-x-y) \\ r_1({\bf x }) & = (1/2 - x)(x+y) \\
l_2({\bf x}) & = y (1-x-y) \\ r_2({\bf x }) & = \rho (1/2 - y)(x+y)
\end{split}$$ Notice that the vector field $b({\bf x}) = (b_1({\bf x}), b_2({\bf x}))$ defined by $b_i({\bf x}) := r_i({\bf x }) - l_i({\bf x })$ appears in the equation of the macroscopic dynamics, which we interpret as the unperturbed dynamical system. Define the family of point measures, parametrized by ${\bf x} \in [0,1/2]^2$: $$\mu_{\boldsymbol{x}} := r_{1}(\boldsymbol{x})\delta_{(1,0)} + l_{1}(\boldsymbol{x})\delta_{(-1,0)} + r_{2}(\boldsymbol{x})\delta_{(0,1)} + l_{2}(\boldsymbol{x})\delta_{(0,-1)},$$ where $\delta$ indicates Dirac measure. Then the generator $\mathcal{G}_{N}$ introduced above can be rewritten in a diffusion-like form as $$\begin{gathered}
\mathcal{G}_{N}(f)(\boldsymbol{x}) = N \int_{\mathbb{R}^{2}\setminus\{0\}} \left( f(\boldsymbol{x} + \tfrac{1}{N}\boldsymbol{\gamma}) - f(\boldsymbol{x})\right) \mu_{\boldsymbol{x}}(d\boldsymbol{\gamma}) \\
= \left\langle b(\boldsymbol{x}), \nabla f(\boldsymbol{x})\right\rangle \\ + N \int \left( f(\boldsymbol{x} + \tfrac{1}{N}\boldsymbol{\gamma}) - f(\boldsymbol{x}) - \frac{1}{N} \left\langle \boldsymbol{\gamma}, \nabla f(\boldsymbol{x})\right\rangle \right) \mu_{\boldsymbol{x}}(d\boldsymbol{\gamma}),\end{gathered}$$ where $\langle \cdot , \cdot \rangle$ denotes the scalar product in $\mathbb{R}^2$. Let $H\!: \mathbb{R}^{2}\times \mathbb{R}^{2} \rightarrow \mathbb{R}$ be the Hamiltonian associated with the operators $\mathcal{G}_{N}$, $N\in \mathbb{N}$: $$H(\boldsymbol{x},\boldsymbol{\alpha}) := \left\langle b(\boldsymbol{x}), \boldsymbol{\alpha}\right\rangle + \int \left( \exp\left(\langle \boldsymbol{\gamma}, \boldsymbol{\alpha} \rangle\right) - 1 - \left\langle \boldsymbol{\gamma}, \boldsymbol{\alpha}\right\rangle \right) \mu_{\boldsymbol{x}}(d\boldsymbol{\gamma}).$$ It follows that $$H(\boldsymbol{x},\boldsymbol{\alpha}) = \sum_{i=1}^{2} \left[r_{i}(\boldsymbol{x})\left(e^{\alpha_{i}} - 1 \right) + l_{i}(\boldsymbol{x})\left(e^{-\alpha_{i}}- 1 \right) \right].$$ Let $L$ be the Legendre transform of $H$, given by $$L(\boldsymbol{x},\boldsymbol{\beta}) := \sup_{\boldsymbol{\alpha}\in \mathbb{R}^{2}}\left\{\left\langle \boldsymbol{\beta}, \boldsymbol{\alpha}\right\rangle - H(\boldsymbol{x},\boldsymbol{\alpha})\right\}.$$ It is easy to show that $$\label{EqLagrangian}
L(\boldsymbol{x},\boldsymbol{\beta}) = \tilde{L}\bigl(l_{1}(\boldsymbol{x}),r_{1}(\boldsymbol{x});\beta_{1}\bigr) + \tilde{L}\bigl(l_{2}(\boldsymbol{x}),r_{2}(\boldsymbol{x});\beta_{2}\bigr),$$ where $\tilde{L}: [0,\infty)^{2}\times \mathbb{R} \rightarrow [0,\infty]$ is given by $$\begin{split}
\tilde{L}(l,r;\beta) & = \sup_{\alpha\in \mathbb{R}} \left\{ \beta\cdot\alpha - r\cdot (e^{\alpha}-1) - l\cdot (e^{-\alpha}-1)\right\} \\
& = \beta\log\left(\tfrac{\beta + \sqrt{\beta^{2}+4rl}}{2r}\right) - \sqrt{\beta^{2}+4rl} + l + r,
\end{split}$$ taking appropriate limits for the boundary cases $l=0$ or $r = 0$. In particular, $\tilde{L}(l,r;\beta) = \infty$ if and only if either $l=0$ and $\beta < 0$ or $r=0$ and $\beta > 0$. The [*Lagrangian*]{} $L$ in (\[EqLagrangian\]) allows us to define the [*action functional*]{}: for $T>0$, $\phi:[0,T] \rightarrow \mathbb{R}^2$, set $$\label{ExActionFnct}
S_{T}(\phi) :=
\int_{0}^{T} L\bigl(\phi(t),\dot{\phi}(t)\bigr)dt,$$ where $S_{T}(\phi)$ is meant to be equal to $+\infty$ if $\phi$ is not absolutely continuous. The action functional controls the [*quenched*]{} Large Deviations of the stochastic process $(m_N^+(t), m_N^-(t))_{t \geq 0}$: if $B_{\phi}$ is a small neighborhood of a trajectory $\phi:[0,T] \rightarrow \mathbb{R}^2$, $h = (h_1,h_2,\ldots,h_N)$ is a realization of the random environment, and $P_h$ is the law of the Markov process generated by for $h$ [*fixed*]{}, then for almost every realization $h$ $$\frac{1}{N} \log P_h\left[ (m^+(t), m^-(t))_{t \in [0,T]} \in B_{\phi} \right] \simeq - S_{T}(\phi)$$ for $N$ large. This fact falls within the range of the Freidlin-Wentzell Large Deviations results (see [@freidlin2012random]), although several modifications of the original proof are needed here, following [@Budhiraja11].
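For completeness, we sketch how the explicit formula for $\tilde{L}$ follows from the supremum defining it (a standard computation, recorded here only as a guide for the reader). For $l,r>0$, differentiating $\alpha\mapsto \beta\alpha - r(e^{\alpha}-1)-l(e^{-\alpha}-1)$ and setting the derivative to zero gives $$\beta = r e^{\alpha} - l e^{-\alpha}, \qquad \text{i.e.} \qquad e^{\alpha} = \frac{\beta+\sqrt{\beta^{2}+4rl}}{2r},$$ and substituting this value of $\alpha$ back into the supremum yields exactly the expression for $\tilde{L}$ displayed above. In particular, since $\beta^{2}+4rl=(r+l)^{2}$ when $\beta=r-l$, one checks directly that $\tilde{L}(l,r;r-l)=0$: the action $S_{T}$ vanishes along solutions of the unperturbed dynamics $\dot{\phi}=b(\phi)$, as expected.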
As shown in [@freidlin2012random], the control of the Large Deviations provides control on the hitting times of subsets of the state space $[0,1/2]^2$ of the process $(m_N^+(t), m_N^-(t))_{t \geq 0}$, in particular of the time $T_N$ needed to reach the absorbing states. Denote by ${\bf z}$ the stable equilibrium for the macroscopic dynamics: $${\bf z} = \left(\frac{1}{2(1+\rho)}, \frac{\rho}{2(1+\rho)} \right).$$ For ${\bf x} \in [0,1/2]^2$, define the [*quasi-potential*]{} by $$V({\bf x}) := \inf \{ S_T(\phi) : T > 0, \phi(0) = {\bf z}, \phi(T) = {\bf x} \}.$$ Let $D$ be a domain in $[0,1/2]^{2}$ containing ${\bf z}$ with smooth boundary $\partial D$ such that $\partial D \subseteq (0,1/2)^{2}$ and the vector field $b({\bf x})$ is directed strictly inside $D$. Let $\tau_{N}$ denote the first time the process $(m^{+}_{N},m^{-}_{N})$ hits the complement of $D$. By construction, $\tau_{N} \leq T_{N}$.\
\[quasipotential\] For every ${\bf x} \neq {\bf z}$ we have $V({\bf x}) >0$. Moreover, for almost every realization of the environment $h$, every ${\varepsilon}>0$, $$\lim_{N \rightarrow +\infty} P_h \left(e^{N(V_{\partial D}-{\varepsilon})} \leq \tau_N \leq e^{N(V_{\partial D}+{\varepsilon})} \right) = 1$$ where $$V_{\partial D} := \min\left\{ V({\bf x}) : {\bf x}\in \partial D \right\} > 0.$$
[**Proof:** ]{} In order to show that $V({\bf x}) >0$ for every ${\bf x} \neq {\bf z}$, it suffices to check that, for every $\delta_{0} > 0$ small enough, $\inf_{\boldsymbol{x}\in \partial B_{\delta_{0}}(\boldsymbol{z})} V(\boldsymbol{x}) > 0$: indeed, any trajectory from $\boldsymbol{z}$ to $\boldsymbol{x}\neq \boldsymbol{z}$ must cross $\partial B_{\delta_{0}}(\boldsymbol{z})$ as soon as $\delta_{0} < |\boldsymbol{x}-\boldsymbol{z}|$, and the action is additive along trajectories.
Set ${\bf r_{\ast}}:=(\frac{\rho}{4(1+\rho)},\frac{\rho}{4(1+\rho)})$; thus ${\bf r_{\ast}} = (r_{i}({\bf z}),l_{i}({\bf z}))$, $i\in \{1,2\}$. Let $l,r > 0$. Then $\tilde{L}(l,r;\beta)$ as a function of $\beta\in \mathbb{R}$ is smooth, non-negative, strictly convex with minimum value zero attained at $\beta = r-l$ and of super-linear growth. Second order Taylor expansion around $\beta = r -l$ yields $$\tilde{L}(l,r;\beta) = \frac{1}{2(r+l)}\left(\beta - (r-l)\right)^{2} + \mathcal{O}\left(\left(\beta - (r-l)\right)^{3}\right).$$ It follows that for every $\delta_{\ast} > 0$ small enough there are a constant $c > 0$ and a continuous function $\underline{L}\!: \overline{B_{\delta_{\ast}}(\bf{r}_{\ast})}\times \mathbb{R} \rightarrow [0,\infty)$ such that $\tilde{L}(l,r;\beta) \geq \underline{L}(l,r;\beta)$, $\underline{L}(l,r;.)$ is strictly convex with super-linear growth and for every $(l,r)\in \overline{B_{\delta_{\ast}}(\boldsymbol{r}_{\ast})}$, $$\underline{L}(l,r;\beta) = c \left(\beta - (r-l)\right)^{2} \text{ if } \beta\in [-4\delta_{\ast},4\delta_{\ast}].$$ Choose such $\delta_{\ast}$, $c$, $\underline{L}$. By continuity of the functions $r_{1}$, $l_{1}$, $r_{2}$, $l_{2}$, we can choose $\delta_{0} > 0$ such that $(l_{1}(\boldsymbol{x}),r_{1}(\boldsymbol{x})), (l_{2}(\boldsymbol{x}), r_{2}(\boldsymbol{x})) \in \overline{B_{\delta_{\ast}}(\boldsymbol{r}_{\ast})}$ for all $\boldsymbol{x}\in \overline{B_{\delta_{0}}(\boldsymbol{z}))}$. Recall that $b_{i} = r_{i} - l_{i}$. It follows that $$\begin{split}
\inf_{\boldsymbol{x}\in \partial B_{\delta_{0}}(\boldsymbol{z})} &V(\boldsymbol{x}) \\
&\geq \inf \sum_{i=1}^{2}\int_{0}^{T} \underline{L}\left(l_{i}(\phi(t)),r_{i}(\phi(t)); \dot{\phi}_{i}(t)\right)dt,
\end{split}$$ where the infimum on the right-hand side is over all $\phi\in \mathbf{C}_{a}([0,\infty), \overline{B_{\delta_{0}}(\boldsymbol{z})})$, $T > 0$ such that $\phi(0)=\boldsymbol{z}$, $\phi(T) \in \partial B_{\delta_{0}}(\boldsymbol{z})$ (it suffices to consider the portion of a path up to its first exit from $B_{\delta_{0}}(\boldsymbol{z})$, along which the bound $\tilde{L}\geq\underline{L}$ applies). Using a time transformation argument analogous to that of Lemma 4.3.1 in [@freidlin2012random] and the convexity and super-linear growth of $\underline{L}(l,r;\beta)$ in $\beta$, one finds that the infimum can be restricted to $\phi\in \mathbf{C}_{a}([0,\infty), \overline{B_{\delta_{0}}(\boldsymbol{z})})$ such that $|\dot{\phi}_{i}(t)|\leq 4\delta_{\ast}$ and $|\dot{\phi}(t)| = |b(\phi(t))|$ for almost every $t$. Thus $$\inf_{\boldsymbol{x}\in \partial B_{\delta_{0}}(\boldsymbol{z})} V(\boldsymbol{x}) \geq \inf \int_{0}^{T} c\cdot \bigl|b(\phi(t)) - \dot{\phi}(t)\bigr|^{2} dt.$$ The Jacobian of $b$ at $\boldsymbol{z}$ has two strictly negative eigenvalues. Choosing, if necessary, a smaller $\delta_{\ast}$ and corresponding $c > 0$, $\underline{L}$, it follows that $$ \inf_{\boldsymbol{x}\in \partial B_{\delta_{0}}(\boldsymbol{z})} V(\boldsymbol{x})
\geq \inf
\int_{0}^{T} c\cdot \bigl|Db(\boldsymbol{z})\phi(t) - \dot{\phi}(t)\bigr|^{2} dt > 0,
$$ which establishes the strict positivity of $V$ away from ${\bf z}$.
The second part of the assertion is established in a way analogous to the proofs of Theorems 4.4.1 and 4.4.2 in [@freidlin2012random], see Section 5.4 therein. [ $\square$]{}
Theorem \[quasipotential\] implies in particular that the time to reach any small neighborhood of the absorbing states grows exponentially in $N$ for any $\rho<1$. This is a generalization of Kramers’ formula for the noise-activated escape from a potential well [@hanggi1986escape]. This exponential behavior in $N$ suggests the existence of an active phase where both spin states/species, $0$ and $1$, coexist in the stationary state in the infinite size limit, $N\rightarrow \infty$.\
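To make this exponential scaling concrete, the following is a minimal Gillespie-type simulation sketch of the jump process whose generator $\mathcal{G}_N$ is written above (case $q=1/2$, $N$ even). It is only an illustration under simplifying assumptions: the environment enters solely through the rates $r_i$, $l_i$, so this is a mean-field stand-in for the full quenched microscopic dynamics, and the function and parameter names are our own.

```python
import numpy as np

def jump_rates(x, y, rho):
    # rates r_1, l_1, r_2, l_2 of the generator G_N above (q = 1/2)
    r1 = (0.5 - x) * (x + y)          # (m+, m-) -> (m+ + 1/N, m-)
    l1 = rho * x * (1.0 - x - y)      # (m+, m-) -> (m+ - 1/N, m-)
    r2 = rho * (0.5 - y) * (x + y)    # (m+, m-) -> (m+, m- + 1/N)
    l2 = y * (1.0 - x - y)            # (m+, m-) -> (m+, m- - 1/N)
    return np.array([r1, l1, r2, l2])

def absorption_time(N, rho, rng, max_jumps=10**7):
    # integer counts i = N m+, j = N m-, started at the stable equilibrium z
    i, j = round(N / (2.0 * (1 + rho))), round(N * rho / (2.0 * (1 + rho)))
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    t = 0.0
    for _ in range(max_jumps):
        w = N * jump_rates(i / N, j / N, rho)   # jump intensities scale with N
        w_tot = w.sum()
        if w_tot == 0.0:                        # absorbed at (0,0) or (1/2,1/2)
            return t
        t += rng.exponential(1.0 / w_tot)
        di, dj = moves[rng.choice(4, p=w / w_tot)]
        i, j = i + di, j + dj
    return np.inf                               # not absorbed within max_jumps

rng = np.random.default_rng(0)
for N in (10, 20, 30):                          # feasible only for small N
    T = [absorption_time(N, rho=0.5, rng=rng) for _ in range(10)]
    print(N, np.mean(T))                        # mean absorption time grows roughly exponentially in N
```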
Normal fluctuations
==================
As seen in the previous sections, on a time scale of order $1$ the process $(m_N^+(t), m_N^-(t))_{t \geq 0}$ remains close to its thermodynamic limit, i.e., to the solution of the macroscopic equation. In this section we consider the normal fluctuations around this limit. Suppose the assumptions of Theorem \[limdyn\] are satisfied; moreover, for the sake of simplicity, we assume $q = 1/2$ and $(m^+,m^-) = {\bf z}$ with ${\bf z} = \left(\frac{1}{2(1+\rho)}, \frac{\rho}{2(1+\rho)} \right)$, so that the limiting dynamics starts in equilibrium. We define the fluctuation processes $$\begin{split}
x_N(t) & := \sqrt{N} \left(m^+_N(t) - m^+ \right) \\
y_N(t) & := \sqrt{N} \left(m^-_N(t) - m^- \right) .
\end{split}$$\
\[fluctuations\] The stochastic process $(x_N(t), y_N(t))$ converges in distribution to a Gauss-Markov process $(X,Y)$ which solves the stochastic differential equation
$$\label{diff}\left\{\begin{array}{ll}
d X=&\left(-\frac{1+\rho^2}{2(1+\rho)}X+\frac{\rho}{1+\rho}Y+\frac{1}{2}\mathcal{H}\right)dt\\
&+\frac{1}{\sqrt{2}}\sqrt{\frac{\rho}{1+\rho}}d B_1\\
d Y=&\left(-\frac{1+\rho^2}{2(1+\rho)}Y+\frac{\rho}{1+\rho}X-\rho\frac{1}{2}\mathcal{H}\right)dt\\
&+\frac{1}{\sqrt{2}}\sqrt{\frac{\rho}{1+\rho}}d B_2
\end{array}\right.$$
Here, $B_i$, $i=1,2$ are two independent standard Brownian motions and $\mathcal{H}$ is a zero average standard Gaussian random variable, independent of $B_1,B_2$.
The proof of Theorem \[fluctuations\] uses the same method of convergence of generators as that of Theorem \[limdyn\], and is omitted. Unlike in Theorem \[limdyn\], the environment does not fully self-average, since $\mathcal{H}$ is not identically equal to zero. The quenched random variable $\mathcal{H}$ in Theorem \[fluctuations\] is due to the normal fluctuations of the environment $(h_1,h_2,\ldots,h_N)$.\
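As an illustration of Theorem \[fluctuations\], the following is a minimal Euler–Maruyama sketch integrating the linear SDE (\[diff\]) for one fixed realization of the quenched variable $\mathcal{H}$; the time step, horizon and variable names are our own choices and not part of the original analysis.

```python
import numpy as np

def simulate_fluctuations(rho, T=50.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    a = (1 + rho**2) / (2 * (1 + rho))      # linear restoring coefficient of the SDE
    b = rho / (1 + rho)                     # cross-coupling coefficient
    sig = np.sqrt(rho / (1 + rho)) / np.sqrt(2)
    H = rng.standard_normal()               # quenched Gaussian disorder, drawn once
    X = Y = 0.0
    n = int(T / dt)
    path = np.empty((n, 2))
    for k in range(n):
        dB1, dB2 = np.sqrt(dt) * rng.standard_normal(2)
        X, Y = (X + (-a * X + b * Y + 0.5 * H) * dt + sig * dB1,
                Y + (-a * Y + b * X - 0.5 * rho * H) * dt + sig * dB2)
        path[k] = X, Y
    return path

path = simulate_fluctuations(rho=0.5)
print(path.mean(axis=0))   # nonzero time averages reflect the quenched shift due to H
```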
Discussion and conclusions
==========================
It is well known that habitat heterogeneity has an impact on biodiversity [@mcclain2010habitat]. At large scales, e.g. at the regional or larger level, geomorphological changes may induce genetic isolation, whereas at smaller scales the complexity induced by, for example, vegetation, sediment types, moisture and temperature leads to the coexistence of several species and to the emergence of niches. To our knowledge, however, quantitative estimates relating the degree of heterogeneity to biodiversity and to the time of coexistence of species have not been obtained. Here we have rigorously proved that even a small habitat disorder in a neutral competition-like model dramatically enhances the typical time biodiversity persists; more specifically, we have shown that the typical time to loss of biodiversity, $\tau_N$, scales exponentially with the population size $N$, leading, for large systems, to an unobservably long time scale beyond which extinction occurs. This is in contrast to what happens in the absence of habitat heterogeneity, where the time to loss of biodiversity is comparatively small, growing only as the system’s size $N$. We have also obtained the scaling exponent of $\tau_N$ in terms of a suitable *quasi-potential* $V(\bf{x})$, which encodes the minimum “cost” of a trajectory to reach a given point $\bf{x}$ of the phase space. The consequences of these findings could be particularly relevant, for example, in conservation ecology: in a given area, different species at the same trophic level compete for space and nutrients in a neutral fashion; think, for instance, of a tropical forest, where the neutral theory provides a very good null model [@Volkov03].\
Lastly, we have shown that the fluctuations around the metastable symmetric fixed point obey a Brownian motion dynamics with drift, in which the environmental disorder does not self-average.
***Acknowledgments:*** AM acknowledges Cariparo foundation for financial support. We thank Miguel Munoz for useful discussions, comments and suggestions.
[10]{}
S.P. Hubbell. . Monographs in Population Biology. Princeton University Press, 2008.
I. Volkov, J. R. Banavar, S. P. Hubbell, and A. Maritan. Neutral theory and relative species abundance in ecology. , 424(2):1035 – 1037, 2003.
S. Azaele, S. Pigolotti, J. R. Banavar, and A. Maritan. Dynamical evolution of ecosystems. , 444:926 – 928, 2006.
M Vallade and B Houchmandzadeh. Analytical solution of a neutral model of biodiversity. , 68(6):061902, 2003.
David Alonso, Rampal S Etienne, and Alan J McKane. The merits of neutral theory. , 21(8):451–457, 2006.
T. M. Liggett. . Springer (Berlin), 2005.
O. Al Hammal, H. Chaté, I. Dornic, and M. A. Muñoz. Langevin description of critical phenomena with two symmetric absorbing states. , 94:230601, 2005.
M. Kimura and N. Takahata. . Evolutionary biology. University of Chicago Press, 1995.
M. Henkel, H. Hinrichsen, and S. L[ü]{}beck. . Theoretical and mathematical physics. Springer London, Limited, 2008.
I. Dornic, H. Chaté, J. Chave, and H. Hinrichsen. Critical coarsening without surface tension: The universality class of the voter model. , 87:045701, 2001.
R. Durrett and S. A. Levin. Stochastic spatial models: a user’s guide to ecological applications. , 343(1305):329–350, 1994.
C. Castellano, S. Fortunato, and V. Loreto. Statistical physics of social dynamics. , 81:591–646, 2009.
R. A. Blythe and A. J. McKane. Stochastic models of evolution in genetics, ecology and linguistics. , 2007(07):P07018, 2007.
S. Pigolotti and M. Cencini. Coexistence and invasibility in a two-species competition model with habitat-preference. , 265(4):609 – 617, 2010.
C. Borile, A. Maritan, and M. A. Mu[ñ]{}oz. The effect of quenched disorder in neutral theories. , 2013(04):P04032, 2013.
David Tilman, Robert M May, Clarence L Lehman, and Martin A Nowak. Habitat destruction and the extinction debt. 1994.
N. Masuda, N. Gibert, and S. Redner. Heterogeneous voter models. , 82:010103, 2010.
N. Masuda and S. Redner. Can partisan voting lead to truth? , 2011(02):L02002, 2011.
P.L. Krapivsky, S. Redner, and E. Ben-Naim. . Cambridge University Press, 2010.
G. [Ó]{}dor. . World Scientific, 2008.
A. Lambert. Population dynamics and random genealogies. , 24(S1):45–163, 2008.
Stewart N Ethier and Thomas G Kurtz. , volume 282. Wiley, 2009.
M. I Freidlin and A. D Wentzell. , volume 260. Springer, 2012.
Frank Den Hollander. , volume 14. American Mathematical Soc., 2008.
Hugo Touchette. The large deviation approach to statistical mechanics. , 478(1):1–69, 2009.
A. Budhiraja, P. Dupuis, and V. Maroulas. Variational representations for continuous time processes. , 47(3):725–747, 2011.
Peter Hanggi. Escape from a metastable state. , 42(1-2):105–148, 1986.
Craig R McClain and James P Barry. Habitat heterogeneity, disturbance, and productivity work in concert to regulate biodiversity in deep submarine canyons. , 91(4):964–976, 2010.
[^1]: Dipartimento di Fisica “G. Galilei”, Università di Padova, via Marzolo 8, I-35151 Padova, Italy,`[email protected]`
[^2]: Dipartimento di Matematica, Università di Padova, via Trieste 63, I-35121 Padova, Italy,`[email protected]`
[^3]: Dipartimento di Matematica, Università di Padova, via Trieste 63, I-35121 Padova, Italy,`[email protected]`
[^4]: Dipartimento di Fisica “G. Galilei”, Università di Padova, via Marzolo 8, I-35151 Padova, Italy,`[email protected]`
[^5]: Dipartimento di Fisica “G. Galilei”, Università di Padova, via Marzolo 8, I-35151 Padova, Italy,`[email protected]`
---
abstract: 'We investigate the fine-grained uncertainty relations for qubit system by measurements corresponding to two and three spin operators. Then we derive the general bound for a combination of two probabilities of projective measurements in mutually unbiased bases in $d$-dimensional Hilbert space ($d$ is prime). Those uncertainty inequalities can be applied to construct different thermodynamic cycles which all imply the second law of thermodynamics.'
author:
- 'Li-Hang Ren'
- Heng Fan
title: 'The fine-grained uncertainty relation for mutually unbiased bases'
---
*Introduction.-* At the heart of quantum mechanics lies Heisenberg’s uncertainty principle [@heisenberg], which bounds the uncertainties about the outcomes of two incompatible measurements. For example, if the momentum of a particle can be predicted with certainty, then a measurement of its position may yield any outcome. Originally, the uncertainty principle was expressed by the Heisenberg-Robertson relation [@robertson]: $$\label{1}
\bigtriangleup R \cdot\bigtriangleup S\geq \frac{1}{2} |\langle[R,S]\rangle|$$ with standard deviations $\bigtriangleup R$ and $\bigtriangleup S$. There are limitations to using the standard deviation as a measure of uncertainty, since the bound on the right-hand side of relation (\[1\]) depends on the state. To overcome this problem, Deutsch [@deutsch] proposed a relation quantifying uncertainty in terms of Shannon entropy. Soon afterwards, an improved entropic uncertainty relation was put forward by Kraus [@kraus] and then proved by Maassen and Uffink [@maassen], taking the form $$\begin{aligned}
H(R)+H(S)\geq\log_{2}\frac{1}{c},\end{aligned}$$ where $H(R)$ denotes the Shannon entropy of the probability distribution of the outcomes when $R$ is measured, and $c=\max_{i,j}|\langle r_i|s_j\rangle|^2$ is the maximal overlap between the eigenvectors $|r_i\rangle$ of $R$ and $|s_j\rangle$ of $S$. Entropic uncertainty relations provide a way to quantify the uncertainties of two or more measurements with a bound that is independent of the state. The entropic uncertainty relation in the presence of quantum memory has also been presented in Ref. [@berta]. Many studies of the entropic uncertainty relation have since been carried out [@ruiz; @ghirardi; @wu; @survey].
Even though entropy describes uncertainty better than the standard deviation does, it is still a rather coarse way of measuring the uncertainty of a set of measurements. Entropy is just a function of the probability distribution of the outcomes of a single measurement, and it cannot capture the uncertainty inherent in obtaining a particular combination of outcomes for different measurements. A new form of uncertainty relation, the fine-grained uncertainty relation, was proposed by Oppenheim and Wehner [@finegrained] to overcome this defect. For a set of measurements, there exists a set of inequalities, one for each combination of possible outcomes: $$\label{2}
\left\{\sum_{t=1}^{n}p(t)p(x^{(t)}|\rho)\leq\zeta_{\bm{x}}\Big|\bm{x}\in\mathbf{B}^{\times n} \right\},$$ where $p(t)$ is the probability of choosing measurement labeled $t$, $p(x^{(t)}|\rho)$ is the probability that we obtain the outcome $x^{(t)}$ when performing measurement $t$ on the state $\rho$ and $\bm{x}=(x^{(1)},\ldots,x^{(n)})\in\mathbf{B}^{\times n}$ is a combination of possible outcomes. Here, $\zeta_{\bm{x}}=\max_{\rho}\sum_{t=1}^n p(t)p(x^{(t)}|\rho)
$ where the maximum is taken over all states allowed on a particular system. When $\zeta_{\bm{x}}<1$, we cannot obtain outcomes with certainty for all measurements simultaneously. A state that saturates the inequality (\[2\]) is called a “maximally certain state". The fine-grained uncertainty relation has already found several applications. It has been employed to discriminate among classical, quantum, and superquantum correlations involving two or three parties [@nonlocal], and to optimize the lower bound of the entropic uncertainty relation in the presence of quantum memory [@fineentropy]. Moreover, the fine-grained uncertainty relation is related to the second law of thermodynamics [@violation].
In this letter, we investigate the fine-grained uncertainty relation for mutually unbiased bases. We first review a simple relation with two measurements and show that the pair $\{\sigma_x,\sigma_z\}$ is the optimal choice of measurements. Then we derive a general fine-grained uncertainty relation for mutually unbiased bases in $d$ dimensions. Finally, we describe a thermodynamic cycle and use our generalized fine-grained uncertainty relation to verify the second law of thermodynamics in a more general setting.

*Two-dimensional fine-grained uncertainty relation.-* The essence of fine graining is to consider a particular string of outcomes and to bound $\zeta_{\bm{x}}$. As an example [@finegrained], we choose the Pauli operators $\sigma_x$ or $\sigma_z$ with equal probability $1/2$; then for all pure states $\rho=|\psi\rangle\langle\psi|$ $$\label{4}
\frac{1}{2}p(0^{(x)}|\rho)+\frac{1}{2}p(0^{(z)}|\rho)\leq\frac{1}{2}+\frac{1}{2\sqrt{2}}$$ with $0^{(x)}=|+\rangle$ and $0^{(z)}=|0\rangle$ (we label the eigenbases of $\sigma_x$, $\sigma_y$ and $\sigma_z$ as $\{|+\rangle, |-\rangle\}$, $\{|\widetilde{+}\rangle, |\widetilde{-}\rangle\}$ and $\{|0\rangle, |1\rangle\}$). An arbitrary pure state in two-dimensional Hilbert space can be expressed as $|\psi\rangle=\cos\frac{\theta}{2}|0\rangle+e^{i\phi}\sin\frac{\theta}{2}|1\rangle$ with $\theta\in[0,\pi]$ and $\phi\in[0,2\pi)$. The maximum is saturated when $\theta=\frac{\pi}{4}$ and $\phi=0$. At this time, $|\psi\rangle=\cos\frac{\pi}{8}|0\rangle+\sin\frac{\pi}{8}|1\rangle$ is just an eigenstate of $(\sigma_x+\sigma_z)/\sqrt{2}$. The two-dimensional Hilbert space exhibits remarkable geometrical properties: a pure state corresponds to a point on the Bloch sphere [@book]. The state with $\theta=\frac{\pi}{4}$ and $\phi=0$ is located on the angle bisector between $x$-axis and $z$-axis. For other pairs of outcomes $(0^{(x)},1^{(z)})$, $(1^{(x)},0^{(z)})$ and $(1^{(x)},1^{(z)})$, we can obtain the same bound as relation (\[4\]).
We know that these four states, $\{|0\rangle,|1\rangle\}$ and $\{|+\rangle,|-\rangle\}$, are the well-known BB84 states [@bb]. One may expect that these two bases make the uncertainty bound optimal on average. This can be explained as follows: for an unknown state, we would like two measurements to determine it as accurately as possible, that is, we want the bound of the fine-grained uncertainty relation to be as large as possible. Taking a qubit as an example, due to the symmetry of the Bloch sphere we only consider the $x$-$z$ half-plane, namely $\theta\in[0,\pi]$, $\phi=0$. The states $|\psi\rangle=\cos\frac{\theta}{2}|0\rangle+\sin\frac{\theta}{2}|1\rangle$ with different $\theta$ appear with equal probability. Without loss of generality, we first fix the measurement $\sigma_z$ and then pick an arbitrary operator $\sigma_n$ whose angle with $\sigma_z$ is $\alpha\in[0,\pi]$. Thus $p_{z\uparrow}=\cos^2\frac{\theta}{2}$, $p_{n\uparrow}=\cos^2\frac{\theta-\alpha}{2}$. In order to take all states into account, we compute the following integral $$\label{5}
\int_0^{\pi}\frac{p_{z\uparrow}+p_{n\uparrow}}{\pi}d\theta=1+\frac{\sin\alpha}{\pi},$$ which is maximized at $\alpha=\pi/2$. Therefore, $\sigma_x$ and $\sigma_z$ form the optimal pair of measurements for the uncertainty relation.
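For the reader’s convenience, the integral in (\[5\]) can be evaluated termwise (a routine check): $$\int_0^{\pi}\cos^2\tfrac{\theta}{2}\,d\theta=\frac{\pi}{2},\qquad \int_0^{\pi}\cos^2\tfrac{\theta-\alpha}{2}\,d\theta=\frac{\pi}{2}+\sin\alpha,$$ so that the average equals $1+\sin\alpha/\pi$, which is largest when the two measurement directions are orthogonal, $\alpha=\pi/2$.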
Let us now consider the two-dimensional fine-grained uncertainty relation from another perspective. We choose two arbitrary spin operators: $A=\bm{\sigma}\cdot\bm{m}$ and $B=\bm{\sigma}\cdot\bm{n}$, in which $\bm{m}$ and $\bm{n}$ are unit vectors [@ghirardi]. A normalized state $|\psi\rangle$ can be viewed as the eigenvector of $\bm{\sigma}\cdot\bm{k}$ with eigenvalue $+1$, which means that $|\psi\rangle$ can be represented by a unit vector $\bm{k}$ in three-dimensional Euclidean space (all these vectors form the so-called Bloch sphere). The corresponding measurement probabilities are $p(\bm{m}_{\uparrow})=|\langle\bm{m}_{\uparrow}|\bm{k}_{\uparrow}\rangle|^2
=\frac{1}{2}(1+\bm{m}\cdot\bm{k})$ and $p(\bm{n}_{\uparrow})=\frac{1}{2}(1+\bm{n}\cdot\bm{k})$. Thus $p(\bm{m}_{\uparrow})+p(\bm{n}_{\uparrow})=
1+\frac{1}{2}(\bm{m}+\bm{n})\cdot\bm{k} \leq \zeta$, where $\zeta$ is the maximum of $1+\frac{1}{2}(\bm{m}+\bm{n})\cdot\bm{k}$ over all unit vectors $\bm{k}$ on the Bloch sphere. When $\bm{k}$ is parallel to $\bm{m}+\bm{n}$, namely, when $\bm{k}$ lies along the angle bisector between $\bm{m}$ and $\bm{n}$, the maximum is attained: $\zeta=1+\frac{1}{2}|\bm{m}+\bm{n}||\bm{k}|=1+\frac{|\bm{m}+\bm{n}|}{2}$. Then $$\label{12}
p(\bm{m}_{\uparrow})+p(\bm{n}_{\uparrow})\leq 1+\cos\frac{\gamma}{2},$$ in which $\gamma\in(0,\pi)$ is the angle between $\bm{m}$ and $\bm{n}$. If $\bm{m}\rightarrow \bm{x}$ and $\bm{n}\rightarrow \bm{z}$, then $|\bm{m}+\bm{n}|=\sqrt{2}$ ($\gamma={\pi}/{2}$), so we recover relation (\[4\]). From relation (\[12\]) we see that as the angle $\gamma$ becomes larger, the bound for the fine-grained uncertainty relation becomes smaller. The six-state case can be investigated similarly; it corresponds to the six-state quantum key distribution protocol [@six].
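As a quick sanity check of relation (\[12\]) (not part of the original derivation), one can sample random Bloch vectors numerically and compare the largest value of $p(\bm{m}_{\uparrow})+p(\bm{n}_{\uparrow})$ with the bound $1+\cos(\gamma/2)$; the variable names below are ours.

```python
import numpy as np

def max_prob_sum(gamma, samples=200_000, seed=0):
    # Maximize p(m_up) + p(n_up) = 1 + (m + n).k / 2 over random Bloch vectors k.
    rng = np.random.default_rng(seed)
    m = np.array([0.0, 0.0, 1.0])
    n = np.array([np.sin(gamma), 0.0, np.cos(gamma)])   # angle gamma with m
    k = rng.standard_normal((samples, 3))
    k /= np.linalg.norm(k, axis=1, keepdims=True)       # points spread over the sphere
    return (1.0 + 0.5 * k @ (m + n)).max()

for gamma in (np.pi / 2, np.pi / 3, 2 * np.pi / 3):
    print(round(max_prob_sum(gamma), 4), round(1 + np.cos(gamma / 2), 4))
```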
*Fine-grained uncertainty relation for mutually unbiased bases.-* In order to investigate the fine-grained uncertainty relation in higher-dimensional Hilbert spaces, we confine our discussion to projective measurements in mutually unbiased bases (MUBs) [@mub]. For prime power dimension $d$ (we assume that $d$ is always prime in this Letter), there can be at most $d+1$ MUBs, and furthermore, the squared overlaps between a basis state in one basis and all basis states in the other bases are identical. Therefore, after the detection of a particular basis state, all outcomes of a measurement in another basis occur with equal probabilities. The simplest example of MUBs is given by the eigenbases of the three Pauli matrices for a spin-$1/2$ particle. Let $\{|j\rangle\}_{j=0}^{d-1}$ and $\{|j^{(k)}\rangle\}$ $(k=0,1,\ldots,d-1)$ denote a complete set of MUBs [@measurement]. We use $k=\ddot{0}$ to label the basis $\{|j\rangle\}$, whose elements are the eigenvectors of the generalized Pauli operator $Z$, $Z|j\rangle=w^j|j\rangle$, $w=e^{2\pi i/d}$, and $$\label{mubdef}
|j^{(k)}\rangle=\frac{1}{\sqrt{d}}\sum_{l=0}^{d-1}w^{kl^2-2jl}|l\rangle$$ Next we will discuss how to obtain a fine-grained uncertainty relation in dimension $3$ with MUBs, and then generalize to $d$-dimensional Hilbert space.
A three-dimensional pure state can be written as $$\begin{aligned}
|\Psi\rangle=\cos x_0|0\rangle+e^{i\varphi_1}\sin x_0\cos x_1|1\rangle+e^{i\varphi_2}\sin x_0\sin x_1|2\rangle\end{aligned}$$ with $x_0, x_1\in[0,\frac{\pi}{2}]$ and $\varphi_1, \varphi_2\in[0,2\pi)$. Let $p(j^{(k)}|\rho)$ denote the probability of obtaining the outcome $j^{(k)}$ when performing the measurement in the MUB labeled by $k$ on the pure state $\rho=|\Psi\rangle\langle\Psi|$. Then one easily obtains $$\label{6}
\frac{1}{2}p(0^{(\ddot{0})}|\rho)+\frac{1}{2}p(0^{(0)}|\rho)\leq \frac{1}{2}+\frac{1}{2\sqrt{3}}.$$ The equality is saturated when $\varphi_1=\varphi_2=0$, $x_1=\pi/4$, and $x_0=\frac{\pi}{4}-\frac{1}{2}\arcsin\frac{1}{\sqrt{3}}$. When the inequality is saturated, the maximally certain state satisfies $p(0^{(\ddot{0})}|\rho)=p(0^{(0)}|\rho)$, which means that the angle between the state and the basis vector $|0\rangle$ is the same as the angle between the state and the basis vector $|0^{(0)}\rangle$. Similarly, any combination of two outcomes from different measurements obeys the same upper bound.
Moreover, the fine-grained uncertainty relation for MUBs in $d$-dimensional Hilbert space can be conjectured as $$\label{7}
\frac{1}{2}p(0^{(\ddot{0})}|\rho)+ \frac{1}{2}p(0^{(0)}|\rho)\leq \frac{1}{2}+\frac{1}{2\sqrt{d}}.$$ This can be proved in a similar way to the three-dimensional case. A general pure state can be written as $|\Psi\rangle=\cos x_0|0\rangle+e^{i\varphi_1}\sin x_0\cos x_1|1\rangle+\cdots
+e^{i\varphi_{d-1}}\sin x_0\sin x_1\cdots\sin x_{d-2}|d-1\rangle$. The probabilities $p(0^{(\ddot{0})}|\rho)$ and $p(0^{(0)}|\rho)$ are easily calculated, and the elementary inequality $\sqrt{n}\sin x+\cos x \leq\sqrt{n+1}$ is used to determine the final bound. The same upper bound as in relation (\[7\]) holds for any other pair of outcomes from different measurements.
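An alternative way to see the bound in (\[7\]), which we sketch here for convenience (it is not the route taken above), is a simple operator estimate. Writing $P=|0\rangle\langle 0|$ and $Q=|0^{(0)}\rangle\langle 0^{(0)}|$, the left-hand side of (\[7\]) equals $\frac{1}{2}\langle\Psi|(P+Q)|\Psi\rangle$, and on the two-dimensional span of $|0\rangle$ and $|0^{(0)}\rangle$ the operator $P+Q$ has eigenvalues $1\pm|\langle 0|0^{(0)}\rangle|$. Hence $$\frac{1}{2}\langle\Psi|(P+Q)|\Psi\rangle \leq \frac{1}{2}\left(1+|\langle 0|0^{(0)}\rangle|\right)=\frac{1}{2}+\frac{1}{2\sqrt{d}},$$ with equality for the eigenvector associated with the largest eigenvalue, which bisects $|0\rangle$ and $|0^{(0)}\rangle$ and therefore assigns equal probabilities to the two outcomes, in agreement with the $d=3$ discussion above.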
*Verification of the second law of thermodynamics.-* The measurement operators built from MUBs are maximally noncommuting among themselves, so it is significant to establish the fine-grained uncertainty relation for mutually unbiased bases. For instance, relation (\[7\]) can be applied to verify the second law of thermodynamics in a way analogous to Ref. [@violation]. We design a thermodynamic cycle similar to the one given in [@violation; @vedral], as depicted in Fig. \[f1\].
![A thermodynamic cycle. There are two paths from the initial state to the maximally mixed state. The first path (a)$\rightarrow$(b) denotes the first part of the cycle. This process is irreversible. The second path (a)$\rightarrow$(c)$\rightarrow$(b) is a reversible process and its reversed process denotes the second part of the cycle.[]{data-label="f1"}](fig1.eps "fig:"){width="40.00000%" height="6"}\
At the very start, we prepare three kinds of particles, $\rho_0$, $\rho_1$ and $\rho_2$, in numbers $p_0N$, $p_1N$ and $p_2N$: $$\begin{aligned}
\rho_0&=&\frac{|0\rangle\langle 0|+|0^{(0)}\rangle\langle 0^{(0)}|}{2},\
\rho_1=\frac{|1\rangle\langle 1|+|1^{(0)}\rangle\langle 1^{(0)}|}{2}\nonumber\\
\rho_2&=&\frac{|2\rangle\langle 2|+|2^{(0)}\rangle\langle 2^{(0)}|}{2}.\end{aligned}$$ They are put into a vessel divided by partitions into three volumes $p_0V$, $p_1V$ and $p_2V$. In this situation, we use the semi-transparent membranes imagined by von Neumann [@von] and Peres [@peres]. A membrane labeled $M_0$ is opaque to a normalized state $|e_0\rangle$ and transparent to the orthogonal normalized states $|e_1\rangle$ and $|e_2\rangle$. The passage of a state through such a membrane corresponds to a projective measurement. After we replace the left partition by the two membranes $M_0$ and $M_1$ and the right one by $M_2$, these membranes move apart until they reach equilibrium. If we let $p_0=p_1=p_2=1/3$, the state in the vessel can be written as $\varrho=\frac{1}{3}(\rho_0+\rho_1+\rho_2)=\frac{\textbf{1}}{3}$ when equilibrium is reached. Because the state $\varrho$ is the completely mixed state, we can write $$\begin{aligned}
p(e_j|\varrho)\equiv\langle e_j|\varrho|e_j\rangle=\frac{1}{3},\quad j=0,1,2.\end{aligned}$$ This process enables us to extract work $W_1$ from the system. In the following discussion, we will omit the factor $NkT\ln2$ contained in the work.
&&W_1 = -\sum_{i=0}^2 p_i\log p_i-\sum_{j=0}^2 p(e_j|\varrho)\log p(e_j|\varrho)\nonumber\\
&+&[p_1 p(e_0|\rho_1)+p_2 p(e_0|\rho_2)]\log[p_1 p(e_0|\rho_1)+p_2 p(e_0|\rho_2)]\nonumber\\
&+&[p_1 p(e_1|\rho_1)+p_2 p(e_1|\rho_2)]\log[p_1 p(e_1|\rho_1)+p_2 p(e_1|\rho_2)] \nonumber\\
&+&[p_0 p(e_2|\rho_0)+p_1 p(e_2|\rho_1)]\log[p_0 p(e_2|\rho_0)+p_1 p(e_2|\rho_1)] \nonumber\\
&+& p_0 p(e_0|\rho_0)\log[p_0 p(e_0|\rho_0)] + p_0 p(e_1|\rho_0)\log[p_0 p(e_1|\rho_0)]\nonumber\\
&+& p_2 p(e_2|\rho_2)\log[p_2 p(e_2|\rho_2)].\end{aligned}$$
Next we describe a path from the mixed state $\varrho$ to the initial state. New membranes are inserted to separate the state $\varrho=\sum_j \lambda_j|\omega_j\rangle\langle \omega_j|$ into its eigenvectors $|\omega_j\rangle$, each with volume $\lambda_jV$. According to the decompositions $\rho_0=\sum_j r_j^0|\chi_j^0\rangle\langle\chi_j^0|$, $\rho_1=\sum_j r_j^1|\chi_j^1\rangle\langle\chi_j^1|$ and $\rho_2=\sum_j r_j^2|\chi_j^2\rangle\langle\chi_j^2|$, we subdivide the separated volumes into smaller sizes proportional to weights $r_j^0$, $r_j^1$ and $r_j^2$. The conversion from the pure state $|\omega_j\rangle$ to $|\chi_j^0\rangle$, $|\chi_j^1\rangle$ or $|\chi_j^2\rangle$ needs no work. Finally we retrieve $\rho_0$, $\rho_1$ and $\rho_2$ by mixing their respective components. This retrieval process needs work $$\label{10}
W_2=S(\varrho)-\sum_{j=0}^2 p_j S(\rho_j),$$ where $S(\varrho)$ denotes the von Neumann entropy. Thus, after this thermodynamic loop the total work is given by $$\begin{aligned}
\bigtriangleup W &=&W_1-W_2 =H_{b}\left(\frac{1}{2}+\frac{1}{2\sqrt{3}}\right)\nonumber\\
& -&{\frac{1}{3} H_{b}\left[\frac{1}{2}p(0^{(\ddot{0})}|\rho_{e_0})+\frac{1}{2}p(0^{(0)}|\rho_{e_0})\right]}\nonumber\\
&-&{\frac{1}{3} H_{b}\left[\frac{1}{2}p(0^{(\ddot{0})}|\rho_{e_1})+\frac{1}{2}p(0^{(0)}|\rho_{e_1})\right]} \nonumber\\
&-&{\frac{1}{3} H_{b}\left[\frac{1}{2}p(2^{(\ddot{0})}|\rho_{e_2})+\frac{1}{2}p(2^{(0)}|\rho_{e_2})\right] }\end{aligned}$$ where $H_{b}(p)=-p\log p-(1-p)\log(1-p)$ is the binary entropy and $\rho_{e_{j}}=|e_{j}\rangle\langle e_{j}|$ for $j=0,1,2$. The monotonicity of $H_{b}(p)$ and relation (\[6\]) imply $\bigtriangleup W\leq0$. If the uncertainty relation were violated, namely, if the bound in inequality (\[6\]) could be exceeded, then $\bigtriangleup W>0$ could occur; that would mean a violation of the second law of thermodynamics. The first part of the cycle, which involves the uncertainty relation, is an irreversible process, while the returning process is reversible and does not use the uncertainty relation. On a deeper level, the uncertainty principle is related to reversibility, i.e., to the direction of the thermodynamic process, which is the core of the second law.
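To make the ingredients of $W_2$ in Eq. (\[10\]) concrete for $d=3$, the short numerical check below (our own illustration, with hypothetical variable names) computes the relevant von Neumann entropies: one finds $S(\varrho)=\log_2 3$ and $S(\rho_j)=H_b\!\left(\frac{1}{2}+\frac{1}{2\sqrt{3}}\right)\approx 0.744$ for each $j$, since each $\rho_j$ is an equal mixture of two pure states with overlap $1/\sqrt{3}$.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
basis0 = np.eye(3, dtype=complex)                        # computational basis {|0>,|1>,|2>}
mub0 = np.array([[w ** (-2 * j * l) for l in range(3)]   # |j^{(0)}> from the MUB definition (k = 0)
                 for j in range(3)], dtype=complex) / np.sqrt(3)

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

rhos = [0.5 * (np.outer(basis0[j], basis0[j].conj()) +
               np.outer(mub0[j], mub0[j].conj())) for j in range(3)]
varrho = sum(rhos) / 3
print(vn_entropy(varrho))                # ~ 1.585 = log2(3): maximally mixed state
print([vn_entropy(r) for r in rhos])     # each ~ 0.744 = H_b(1/2 + 1/(2*sqrt(3)))
```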
Moreover, a similar cycle can be extended to the general $d$-dimensional situation. In this case, the first partition is replaced by a set of $d$ membranes and the other partitions are simply removed. We therefore arrive at the general conclusion $$\begin{aligned}
&& \bigtriangleup W=H_b \left(\frac{1}{2}+\frac{1}{2\sqrt{d}}\right)\nonumber\\
&-&{\frac{1}{d} H_b \left[\frac{1}{2}p(0^{(\ddot{0})}|\rho_{e_0})+\frac{1}{2}p(0^{(0)}|\rho_{e_0})\right]} -\cdots\nonumber\\
&-&{\frac{1}{d} H_b\left[\frac{1}{2}p(0^{(\ddot{0})}|\rho_{e_{d-1}})+\frac{1}{2}p(0^{(0)}|\rho_{e_{d-1}})\right] },\end{aligned}$$ which shows that a violation of the uncertainty relation (\[7\]) may lead to a violation of the second law of thermodynamics. We thus generalize the thermodynamic cycle of Ref. [@vedral] to a $d$-dimensional mixed state with $d$ membranes. Furthermore, compared with Ref. [@violation], we illustrate the more general result that a violation of quantum uncertainty implies a violation of the second law.
*Extension to more than two probabilities?-* So far we have only considered the fine-grained uncertainty relation with two probabilities. In future work, it might be interesting to investigate combinations of an arbitrary number of outcomes, or to use other measurement bases.
If we consider a combination of three outcomes in the two-dimensional Hilbert space, namely, if we choose the measurements $\sigma_x$, $\sigma_y$ or $\sigma_z$ with equal probability $1/3$, then one obtains $$\begin{aligned}
\frac{1}{3}[p(0^{(x)}|\rho)+p(0^{(y)}|\rho)+p(0^{(z)}|\rho)] \leq
\frac{1}{2}+\frac{1}{2\sqrt{3}},\label{17}\end{aligned}$$ where $\rho=|\psi\rangle\langle\psi|$ is a pure state and the equality is saturated when $\theta=\arcsin\sqrt{{2}/{3}}$, $\phi={\pi}/{4}$. Therefore, in the Bloch sphere representation the maximally certain state lies on the body diagonal determined by the $x$-, $y$- and $z$-axes. Other combinations of three outcomes of the measurements $\sigma_x$, $\sigma_y$ and $\sigma_z$ obey the same upper bound as Eq. (\[17\]).
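In Bloch-vector form, the bound in (\[17\]) also follows from a one-line estimate, which we record here as a check: with $p(0^{(i)}|\rho)=\frac{1}{2}(1+k_i)$ for $i=x,y,z$ and $|\bm{k}|=1$, $$\frac{1}{3}\sum_{i}p(0^{(i)}|\rho)=\frac{1}{2}+\frac{k_x+k_y+k_z}{6}\leq\frac{1}{2}+\frac{\sqrt{3}}{6}=\frac{1}{2}+\frac{1}{2\sqrt{3}},$$ with equality exactly when $\bm{k}=(1,1,1)/\sqrt{3}$, i.e., for the state on the body diagonal mentioned above.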
As this example shows, the quantum uncertainty principle may take various forms. It is still an open question whether there exists an intrinsic connection between the entropic uncertainty relation and the fine-grained one. The fine-grained uncertainty relation may be used to derive better bounds for the entropic uncertainty relation and to make the bound tight [@tightbound].
*Conclusion.-* In summary, we have derived the fine-grained uncertainty relation for any two spin operators in a qubit system. Thanks to the geometry of the qubit, we reduced the problem to a geometrical one, using the correspondence between state vectors and points on the Bloch sphere, and concluded that the upper bound decreases as the angle between the two spin operators increases. We then generalized the inequality to the $d$-dimensional case, where two projective measurements with MUBs are considered. Since MUBs have the special property that any two states from different bases have the same overlap, we obtain the general bound $\frac{1}{2}+\frac{1}{2\sqrt{d}}$ for any combination of outcomes. Furthermore, our new inequality can be applied to a general thermodynamic cycle, which establishes a connection between quantum uncertainty and the second law of thermodynamics. Finally, we proved a fine-grained uncertainty relation for the three Pauli operators. The case of more than two measurements in higher-dimensional spaces remains open.
We thank Yu-Ran Zhang for valuable discussions. This work is supported by the “973” program (2010CB922904), the NSFC (11175248), and grants from the Chinese Academy of Sciences.
[99]{} W. Heisenberg, Z. Phys. [**43**]{}, 172 (1927).
H. P. Robertson, Phys. Rev. [**34**]{}, 163 (1929).
D. Deutsch, Phys. Rev. Lett. [**50**]{}, 631 (1983).
K. Kraus, Phys. Rev. D [**35**]{}, 3070 (1987).
H. Maassen, and J. B. M. Uffink, Phys. Rev. Lett. [**60**]{}, 1103 (1988).
M. Berta, M. Christandl, R. Colbeck, J. M. Renes, and R. Renner, Nat. Phys. [**6**]{}, 659 (2010).
J. Sanchez-Ruiz, Phys. Lett. A [**244**]{}, 189 (1998).
G. Ghirardi, L. Marinatto, and R. Romano, Phys. Lett. A [**317**]{}, 32 (2003).
S. J. Wu, S. S. Yu, and K. Molmer, Phys. Rev. A [**79**]{}, 022104 (2009).
S. Wehner, and A. Winter, New J. Phys. [**12**]{}, 025009 (2010).
J. Oppenheim, and S. Wehner, Science [**330**]{}, 1072 (2010).
Ansuman Dey, T. Pramanik, and A. S. Majumdar, Phys. Rev. A [**87**]{}, 012120 (2013).
T. Pramanik, P. Chowdhury, and A. S. Majumdar, Phys. Rev. Lett. [**110**]{}, 020402 (2013).
E. Hanggi, and S. Wehner, Nature Commun. [**4**]{}, 1670 (2013).
C. H. Bennett, and G. Brassard, in Proceedings of IEEE International Conference on Computers, Systems, and Signal Processing (IEEE, New York/Bangalore, 1984), pp. 175.
D. Bruss, Phys. Rev. Lett. [**81**]{}, 3018 (1998).
M. A. Nielsen, and I. L. Chuang, *Quantum Computation and Quantum Information* (Cambridge University Press, Cambridge, England, 2000).
S. Bandyopadhyay, P. O. Boykin, V. Roychowdhury, and F. Vatan, Algorithmica [**34**]{}, 512 (2002).
A. Mann, and M. Revzen, Phys. Rev. Lett. [**110**]{}, 260502 (2013).
J. I. de Vicente, and J. Sanchez-Ruiz, Phys. Rev. A [**77**]{}, 042110 (2008).
K. Maruyama, C. Brukner, and V. Vedral, J. Phys. A Math. Gen. [**38**]{}, 7175 (2005).
J. von Neumann, *Mathematical Foundations of Quantum Mechanics* (Princeton University Press, 1955).
A. Peres, *Quantum Theory: Concepts and Methods. Fundamental Theories of Physics* (Kluwer Academic, 1993).
---
abstract: 'For a word $w$ in the braid group $B_n$, we denote by $T_w$ the corresponding transverse braid in $(\mathbb{R}^3,\xi_{rot})$. We exhibit, for any two $g,h \in B_n$, a “comultiplication” map on link Floer homology $\widetilde\Phi:{\widetilde{HFL}}(m(T_{hg}) )\rightarrow {\widetilde{HFL}}(m(T_g\#T_h))$ which sends $\widetilde\theta(T_{hg})$ to $\widetilde\theta(T_g\#T_h)$. We use this comultiplication map to generate infinitely many new examples of prime topological link types which are not transversely simple.'
address: |
Department of Mathematics, Princeton University\
Princeton, NJ 08544-1000
author:
- 'John A. Baldwin'
bibliography:
- 'References.bib'
title: 'Comultiplication in link Floer homology and transversely non-simple links'
---
Introduction
============
Transverse links feature prominently in the study of contact 3-manifolds. They arise very naturally – for instance, as binding components of open book decompositions – and can be used to discern important properties of the contact structures in which they sit (see [@bev Theorem 1.15] for a recent example). Yet, transverse links, even in the standard tight contact structure, $\xi_{std}$, on $\mathbb{R}^3$, are notoriously difficult to classify up to transverse isotopy.
A transverse link $T$ comes equipped with two “classical" invariants which are preserved under transverse isotopy: its topological link type and its self-linking number $sl(T)$. For transverse links with more than one component, it makes sense to refine the notion of self-linking number as follows. Let $T$ and $T'$ be two transverse representatives of some $l$-component topological link type, and suppose there are labelings $T=T_1\cup\dots \cup T_l$ and $T'=T'_1\cup\dots\cup T'_l$ of the components of $T$ and $T'$ such that
1. there is a topological isotopy sending $T$ to $T'$ which sends $T_i$ to $T_i'$ for each $i$, and\
2. $sl(S) = sl(S')$ for any sublinks $S = T_{n_1}\cup\dots \cup T_{n_j}$ and $S' = T_{n_1}'\cup\dots\cup T_{n_j}'.$
Then we say that $T$ and $T'$ have the same *self-linking data*, and we write $\mathcal{SL}(T) = \mathcal{SL}(T')$. A basic question in contact geometry is how to tell, given two transverse representatives, $T$ and $T'$, of some topological link with the same self-linking data, whether $T$ and $T'$ are transversely isotopic; that is, whether the classical data completely determines the *transverse* link type. We say that a topological link type is *transversely simple* if any two transverse representatives $T$ and $T'$ which satisfy $\mathcal{SL}(T) = \mathcal{SL}(T')$ are transversely isotopic. Otherwise, the link type is said to be *transversely non-simple*.
From this point on, we shall restrict our attention to transverse links in the tight rotationally symmetric contact structure, $\xi_{rot},$ on $\mathbb{R}^3,$ which is contactomorphic to $\xi_{std}$. There are several well-known examples of knot types which are transversely simple. Among these are the unknot [@yasha5], torus knots [@et4] and the figure eight [@EH2].
Only recently, however, have knot types been discovered which are not transversely simple. These include a family of 3-braids found by Birman and Menasco [@bm3] using the theory of braid foliations; and the (2,3) cable of the (2,3) torus knot, which was shown to be transversely non-simple by Etnyre and Honda using contact-geometric techniques [@EH4]. Matsuda and Menasco have since identified two explicit transverse representatives of this cabled torus knot which have identical self-linking numbers, but which are not transversely isotopic [@mm]. Their examples take center stage in Section \[sec:nonsimple\] of this paper.
There has been a flurry of progress in finding transversely non-simple link types in the last couple of years, spurred by the discovery of a transverse invariant $\theta$ in link Floer homology by Ozsv[á]{}th, Szab[ó]{} and Thurston [@oszt]; this discovery, in turn, was made possible by the combinatorial description of ${HFL^-}$ found by Manolescu, Ozsv[á]{}th and Sarkar in [@mos] (see also [@most]).[^1] This $\theta$ invariant is applied by Ng, Ozsv[á]{}th and Thurston in [@not] to identify several examples of transversely non-simple links, including the knot $10_{132}$. In [@vera], V[é]{}rtesi proves a connected sum formula for $\theta$, which she wields to find infinitely many examples of non-prime knots which are transversely non-simple (Kawamura has since proven a similar result without using Floer homology [@keiko]; both hers and V[é]{}rtesi’s results follow from Etnyre and Honda’s work on Legendrian connected sums [@EH5]).
Finding infinite families of transversely non-simple *prime* knots is generally more difficult. Using a slightly different invariant, which we shall denote by $\underline\theta$, derived from knot Floer homology and discovered by Lisca, Ozsv[á]{}th, Szab[ó]{} and Stipsicz in [@lossz], Ozsv[á]{}th and Stipsicz identify such an infinite family among two-bridge knots [@ost]. And, most recently, Khandhawit and Ng use the invariant $\theta$ to construct a 2-parameter infinite family of prime transversely non-simple knots, which generalizes the example of $10_{132}$ [@kng].
In this paper, we formulate and apply a strategy for generating a slew of new infinite families of transversely non-simple prime links. This strategy hinges on the “naturality" results below. For a word $w$ in the braid group $B_n$, we denote by $T_w$ the corresponding transverse braid in $(\mathbb{R}^3,\xi_{rot})$.
\[thm:nat\] There exists a map on link Floer homology, $$\widetilde\Phi:{\widetilde{HFL}}(m(T_{w\sigma_i}) )\rightarrow {\widetilde{HFL}}(m(T_w)),$$ which sends $\widetilde\theta(T_{w\sigma_i})$ to $\widetilde\theta(T_w)$, where $\sigma_i$ is one of the standard generators of $B_n$.
This theorem implies the existence of a “comultiplication" map on link Floer homology, similar in spirit to the map discovered in [@bald3]:
\[thm:comult\] For any two braid words $h$ and $g$ in $B_n$, there exists a map, $$\widetilde\mu:{\widetilde{HFL}}(m(T_{hg}) )\rightarrow {\widetilde{HFL}}(m(T_g\#T_h)),$$ which sends $\widetilde\theta(T_{hg})$ to $\widetilde\theta(T_g\#T_h).$
One may combine Theorem \[thm:comult\] with V[é]{}rtesi’s result governing the behavior of $\theta$ under connected sums to conclude the following.
\[thm:nonzero\] If $\widehat\theta(T_g)$ and $\widehat\theta(T_h)$ are both non-zero, then so is $\widehat\theta(T_{hg})$.
Here, we sketch one potential way to use these results to find transversely non-simple links. Start with some $w_1$, $w_2 \in B_n$ for which $T_{w_1}$ and $T_{w_2}$ are topologically isotopic and have the same self-linking data, but for which $\widehat\theta(T_{w_1})=0$ while $\widehat\theta(T_{w_2})\neq 0$, so that $T_{w_1}$ and $T_{w_2}$ are not transversely isotopic. Now, choose an $h\in B_n$ for which $\widehat\theta(T_h) \neq 0$. Theorem \[thm:nonzero\] then implies that $\widehat\theta(T_{hw_2})\neq 0$ as well. If one can show that $\widehat\theta(T_{hw_1}) = 0$, that $T_{hw_1}$ and $T_{hw_2}$ still represent the same topological link type, and that $\mathcal{SL}(T_{hw_1}) = \mathcal{SL}(T_{hw_2})$ (this is automatic if $T_{hw_1}$ and $T_{hw_2}$ are knots), then one may conclude that $T_{hw_1}$ represents a transversely non-simple link type.
An advantage of this approach for generating new transversely non-simple link types from old over, say, that of [@vera; @keiko], is that there is no *a priori* reason to expect that the links so formed are composite. We demonstrate the effectiveness of this approach in Section \[sec:nonsimple\] of this paper. In doing so, we describe an infinite family of prime transversely non-simple link types (half are knots; the other half are 3-component links) which generalizes the (2,3) cable of the (2,3) torus knot. Moreover, it is clear that this example only scratches the surface of the potential of our more general technique.
Lastly, it is tempting to conjecture that the two invariants $\theta$ and $\underline\theta$ agree for transverse links in $(\mathbb{R}^3,\xi_{rot})$, as they share many formal properties. We prove a partial result in this direction, which follows from Theorem \[thm:nat\] together with work of Vela-Vick on the $\underline\theta$ invariant [@vv].
\[thm:equivalent\] $\widehat\theta(T)$ and $\widehat{\underline\theta}(T)$ agree for positive, transverse, connected braids $T$ in $(\mathbb{R}^3,\xi_{rot})$.
Organization {#organization .unnumbered}
------------
In the next section, we outline the relationship between grid diagrams, Legendrian links and their transverse pushoffs. In Section \[sec:hfl\], we review the grid diagram construction of link Floer homology and describe some important properties of the transverse invariant $\theta$. In Section \[sec:comult\], we prove Theorems \[thm:nat\], \[thm:comult\], \[thm:nonzero\] and \[thm:equivalent\]. And, in Section \[sec:nonsimple\], we outline a general strategy for using our comultiplication result to produce new examples of transversely non-simple link types, and we give an infinite family of such examples which are prime.
Acknowledgements {#acknowledgements .unnumbered}
----------------
I wish to thank Lenny Ng for helpful correspondence. His suggestions were key in developing some of the strategy formulated in Section \[sec:nonsimple\]. Thanks also to the referee for helpful comments.
Grid diagrams, Legendrian and transverse links
==============================================
In this section, we provide a brief review of the relationship between Legendrian links in $(\mathbb{R}^3,\xi_{std})$, transverse braids in $(\mathbb{R}^3,\xi_{rot})$ and grid diagrams, largely following the discussion in [@kng]. For a more detailed account, see [@kng; @ngt]. The standard tight contact structure $\xi_{std}$ on $\mathbb{R}^3$ is given as $$\xi_{std} = \text{ker}(dz-ydx).$$ An oriented link $L \subset (\mathbb{R}^3,\xi_{std})$ is called *Legendrian* if it is everywhere tangent to $\xi_{std}$, and *transverse* if it is everywhere transverse to $\xi_{std}$ such that $dz-ydx>0$ along the orientation of $L$. Any smooth link can be perturbed by a $C^0$ isotopy to be Legendrian or transverse. We say that two Legendrian (resp. transverse) links are *Legendrian* (resp. *transversely*) isotopic if they are isotopic through Legendrian (resp. transverse) links.
A Legendrian link $L$ can be perturbed to a transverse link (which is arbitrarily close to $L$ in the $C^{\infty}$ topology) by pushing $L$ along its length in a generic direction transverse to the contact planes in such a way that the orientation of the pushoff agrees with that of $L$. The resulting link $L^+$ is called a *positive transverse pushoff* of $L$. Legendrian isotopic links give rise to transversely isotopic pushoffs. Conversely, every transverse link is the positive transverse pushoff of some Legendrian link; however, two such Legendrian links need not be Legendrian isotopic. The precise relationship between Legendrian and transverse links is best explained via *front projections*.
The front projection of a Legendrian link is its projection onto the $xz$ plane. The front projection of a generic Legendrian link has no vertical tangencies and has only semicubical cusps and transverse double points as singularities. Moreover, at each double point, the slope of the overcrossing is more negative than the slope of the undercrossing. See Figure \[fig:example\].c for the front projection of a right-handed Legendrian trefoil.
The *positive* (resp. *negative*) *stabilization* of a Legendrian link $L$ along some component $C$ of $L$ is the Legendrian link whose front projection is obtained from that of $L$ by adding a zigzag along $C$ with downward (resp. upward) pointing cusps. See Figure \[fig:stab\]. Two Legendrian links are said to be *negatively stably isotopic* if they are Legendrian isotopic after each has been negatively stabilized some number of times along some of its components. The following theorem implies that the classification of transverse links up to transverse isotopy is equivalent to the classification of Legendrian links up to Legendrian isotopy and negative stabilization.
Two Legendrian links are negatively stably isotopic if and only if their positive transverse pushoffs are transversely isotopic.
Consider the rotationally symmetric tight contact structure on $\mathbb{R}^3$ defined by $$\xi_{rot} = \text{ker}(dz-ydx+xdy).$$ The diffeomorphism of $\mathbb{R}^3$ given by $$\label{eqn:contact}\phi(x,y,z) = (x, 2y, xy+z)$$ sends $\xi_{rot}$ to $\xi_{std}.$ One can define transverse links for $\xi_{rot}$ in the same way that one does for $\xi_{std}$. Since $\phi$ sends a transverse link in $(\mathbb{R}^3,\xi_{rot})$ to a transverse link in $(\mathbb{R}^3,\xi_{std})$, the study of transverse links in $(\mathbb{R}^3,\xi_{std})$ is equivalent to that in $(\mathbb{R}^3,\xi_{rot})$; however, the latter is often more convenient, per the following theorem of Bennequin.
\[thm:braid2\] Any transverse link in $(\mathbb{R}^3,\xi_{rot})$ is transversely isotopic to a closed braid around the $z$-axis.
Theorem \[thm:braid2\] allows us to use braid-theoretic techniques to study transverse links. For a braid word $w\in B_n$, we let $T_w$ denote the corresponding transverse braid around the $z$-axis. Braid words which are conjugate in $B_n$ clearly correspond to transversely isotopic links. Recall that, for $w\in B_n$, a *positive* (resp. *negative*) *braid stabilization* of $w$ is the operation which replaces $w$ by the word $w\sigma_n$ (resp. $w\sigma_n^{-1}$) in $ B_{n+1}$. We will also refer to $T_{w\sigma_n}$ (resp. $T_{w\sigma_n^{-1}}$) as the *positive* (resp. *negative*) *braid stabilization* of the transverse link $T_{w}$. The following theorem makes precise the relationship between braids and transverse links in $(\mathbb{R}^3,\xi_{rot})$.
\[thm:markov\] For $w\in B_n$ and $w' \in B_m$, the transverse links $T_{w}$ and $T_{w'}$ are transversely isotopic in $(\mathbb{R}^3,\xi_{rot})$ if and only if $w$ and $w'$ are related by a sequence of conjugations and positive braid stabilizations and destabilizations.
In Section \[sec:nonsimple\], we use a braid operation called an *exchange move*. If $a$, $b$ and $c$ in $B_n$ are words in the generators $\sigma_2,\dots,\sigma_{n-1}$, then an exchange move is the operation which replaces the word $w_1 = a\sigma_1b\sigma_1^{-1}c$ with the word $w_2 = a\sigma_1^{-1}b\sigma_1c$. An exchange move is actually just a composition of conjugations, one positive braid stabilization and one positive destabilization, and so the link $T_{w_1}$ is transversely isotopic to $T_{w_2}$ (see, for example, [@ngt]).
It bears mentioning that the self-linking number of a transverse link admits a nice formulation in the language of braids. If $\Sigma$ is a Seifert surface for a transverse link $T$, then the vector bundle $\xi_{rot}|_{\Sigma}$ is trivial and, therefore, has a non-zero section $v$. Recall that the *self-linking number* of $T$ is defined by $$sl(T)=lk(T,T'),$$ where $T'$ is a pushoff of $T$ in the direction of $v$. Any two links which are transversely isotopic have identical self-linking numbers. For a word $w\in B_n$, the self-linking number of $T_w$ is given simply by $a(w)-n$, where $a(w)$ is the algebraic length of $w$.
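As a concrete illustration of these braid operations (a small sketch of our own, not taken from the text: a braid word is encoded as a list of signed generator indices, with $\sigma_i$ written as $i$ and $\sigma_i^{-1}$ as $-i$), one can check directly that conjugation, positive braid stabilization and exchange moves all preserve the self-linking number $sl(T_w)=a(w)-n$.

```python
def self_linking(word, n):
    # sl(T_w) = a(w) - n, where a(w) is the algebraic (signed) length of the braid word
    return sum(1 if g > 0 else -1 for g in word) - n

def conjugate(word, g):
    return [g] + word + [-g]

def positive_stabilization(word, n):
    # w in B_n  ->  w * sigma_n in B_{n+1}
    return word + [n], n + 1

def exchange_move(a, b, c):
    # a sigma_1 b sigma_1^{-1} c  <-->  a sigma_1^{-1} b sigma_1 c
    # (a, b, c are words in sigma_2, ..., sigma_{n-1})
    return a + [1] + b + [-1] + c, a + [-1] + b + [1] + c

w, n = [1, 1, 1], 2                 # sigma_1^3 in B_2: a transverse right-handed trefoil
print(self_linking(w, n))           # 3 - 2 = 1
print(self_linking(conjugate(w, 1), n))        # unchanged under conjugation
w_stab, n_stab = positive_stabilization(w, n)
print(self_linking(w_stab, n_stab))            # unchanged under positive stabilization
u, v = exchange_move([2], [2, 2], [])
print(self_linking(u, 3), self_linking(v, 3))  # equal: exchange moves preserve sl
```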
In what remains of this section, we describe a relationship between the front diagram of a Legendrian link in $(\mathbb{R}^3,\xi_{std})$ and a braid representation of its positive transverse pushoff, thought of as a transverse link in $(\mathbb{R}^3,\xi_{rot})$. Grid diagrams provide the necessary connection.
A *grid diagram* $G$ is a $k \times k$ square grid along with a collection of $k$ $X$’s and $k$ $O$’s contained in these squares such that every row and column contains exactly one $O$ and one $X$ and no square contains both an $O$ and an $X$. See Figure \[fig:example\].a. We call $k$ the *grid number* of $G$. One can produce an oriented link diagram $L$ from $G$ by drawing a horizontal segment from the $O$’s to the $X$’s in each row and a vertical segment from the $X$’s to the $O$’s in each column so that the horizontal segments pass over the vertical segments (this is the convention used in [@kng], and the opposite of the convention in [@mos]; see [@ngt] for a discussion on the relationship between the two conventions), as in Figure \[fig:example\].b. By rotating $L$ $45^\circ$ clockwise, and then smoothing the upward and downward pointing corners and turning the leftward and rightward pointing corners into cusps, one obtains the front projection of a Legendrian link, as in Figure \[fig:example\].c. Let us denote this Legendrian link by $L(G)$.
Alternatively, one can construct a braid diagram from $G$ by drawing a horizontal segment from the $O$’s to the $X$’s in each row, as before, and drawing a vertical segment from the $X$’s to the $O$’s for each column in which the marking $X$ lies under the marking $O$. For those columns in which the $X$ is above the $O$, we draw two vertical segments: one from the $X$ up to the top of the grid diagram, and the other from the bottom of the grid diagram up to the $O$. As before, we require that the horizontal segments pass over the vertical segments. Note that all vertical segments are oriented upwards and that the closure of the diagram we have constructed is a braid. See Figures \[fig:example\].d and \[fig:example\].e for an example of this procedure. Let us denote the corresponding braid word by $w(G)$, read from the bottom up. The relationship between $T_{w(G)}$ and $(L(G))^+$ is expressed in the proposition below.
The contactomorphism $\phi$ from $(\mathbb{R}^3,\xi_{rot})$ to $(\mathbb{R}^3,\xi_{std})$ defined in Equation (\[eqn:contact\]) sends the transverse link $T_{w(G)}$ to a link which is transversely isotopic to $(L(G))^+$.
Link Floer homology and the transverse invariant {#sec:hfl}
================================================
In this section, we describe the grid diagram formulation of link Floer homology discovered in [@mos; @most]. Let $G$ be a grid diagram for a link $L$ and suppose that $G$ has grid number $k$. From this point forward, we think of $G$ as a *toroidal* grid diagram – that is, we identify the top and bottom sides of $G$ and the right and left sides of $G$ – so that the horizontal and vertical lines become $k$ horizontal and $k$ vertical circles. Let $\mathbb{O}$ and $\mathbb{X}$ denote the sets of markings $\{O_i\}_{i=1}^k$ and $\{X_i\}_{i=1}^k$, respectively.
We associate to $G$ a chain complex $({CFL^-}(m(L)), \partial^-)$ as follows. The generators of ${CFL^-}(m(L))$ are one-to-one correspondences between the horizontal and vertical circles of $G$. Equivalently, we may think of a generator as a set of $k$ intersection points between the horizontal and vertical circles, such that each horizontal circle and each vertical circle contains exactly one of these points. We denote this set of generators by $\mathbf{S}(G)$. Then, ${CFL^-}(m(L))$ is defined to be the free ${\mathbb{Z}_2}[U_1,\dots,U_k]$-module generated by the elements of $\mathbf{S}(G)$, where the $U_i$ are formal variables corresponding to the markings $O_i$.
For ${\mathbf{x}}, {\mathbf{y}}\in {\mathbf{S}}(G)$, we let $Rect_G({\mathbf{x}},{\mathbf{y}})$ denote the space of embedded rectangles in $G$ with the following properties. $Rect_G({\mathbf{x}},{\mathbf{y}})$ is empty unless ${\mathbf{x}}$ and ${\mathbf{y}}$ coincide at $k-2$ points. An element $r\in Rect_G({\mathbf{x}},{\mathbf{y}})$ is an embedded disk on the toroidal grid $G$ whose edges are arcs on the horizontal and vertical circles and whose four corners are intersection points in ${\mathbf{x}}\,\cup\, {\mathbf{y}}$. Moreover, we stipulate that if we traverse each horizontal boundary component of $r$ in the direction specified by the induced orientation on $\partial r$, then this horizontal arc is oriented from a point in ${\mathbf{x}}$ to a point in ${\mathbf{y}}$. If $Rect_G({\mathbf{x}},{\mathbf{y}})$ is non-empty, then it consists of exactly two rectangles. See Figure \[fig:rect\] for an example. We let $Rect_G^o({\mathbf{x}},{\mathbf{y}})$ denote the space of $r\in Rect_G({\mathbf{x}}, {\mathbf{y}})$ for which $r\cap\mathbb{X}= \text{Int}(r)\cap \mathbf{x} = \emptyset$.
The module ${CFL^-}(m(L))$ is endowed with an endomorphism $$\partial^-: {CFL^-}(m(L))\rightarrow {CFL^-}(m(L)),$$ defined on ${\mathbf{S}}(G)$ by $$\partial^-({\mathbf{x}}) = \sum_{{\mathbf{y}}\in{\mathbf{S}}(G)}\,\,\sum_{r\in Rect_{G}^o({\mathbf{x}},{\mathbf{y}})} U_1^{O_1(r)}\cdots U_k^{O_k(r)} \cdot {\mathbf{y}}.$$ Here, $O_i(r)$ denotes the number of times the marking $O_i$ appears in $r$. The map $\partial^-$ is a differential, and, so, gives rise to a chain complex $({CFL^-}(m(L)), \partial^-).$ The homology of this chain complex, ${HFL^-}(m(L)) = H_*({CFL^-}(m(L)), \partial^-)$, is an invariant of the link $L$, and agrees with the *link Floer homology* of $m(L)$ defined in [@osz19]. It bears mentioning that the complex $({CFL^-}(m(L)),\partial^-)$ comes equipped with *Maslov* and *Alexander* gradings, which are then inherited by ${HFL^-}(m(L))$; however, we will not discuss these gradings further as they play no role in this paper.
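To make the rectangle counts concrete, the following Python sketch (our own illustration, not part of the constructions of [@mos; @most]) counts the elements of $Rect_G^o({\mathbf{x}},{\mathbf{y}})$ that also avoid the $\mathbb{O}$ markings, i.e. the most restrictive count used further below. The encoding is an assumption of ours: a generator is a tuple `x` with `x[c]` the row of its point on vertical circle `c`, and `Xs`, `Os` are sets of `(column, row)` pairs labelling the squares (by their lower-left corner point) that carry markings.

```python
def cyc_interval(a, b, k):
    """Half-open cyclic interval [a, b) of residues mod k (assumes a != b)."""
    out, c = [], a
    while c != b:
        out.append(c)
        c = (c + 1) % k
    return out

def count_empty_rectangles(x, y, Xs, Os, k):
    """Number of rectangles in Rect_G(x, y) whose interior misses x and which
    contain no X or O marking."""
    diff = [c for c in range(k) if x[c] != y[c]]
    if len(diff) != 2:
        return 0
    i, j = diff
    if y[i] != x[j] or y[j] != x[i]:
        return 0
    count = 0
    # each rectangle from x to y has its lower-left and upper-right corners at points of x
    for c0, c1 in ((i, j), (j, i)):
        r0, r1 = x[c0], x[c1]
        cols, rows = cyc_interval(c0, c1, k), cyc_interval(r0, r1, k)
        squares = {(c, r) for c in cols for r in rows}            # squares the rectangle covers
        if squares & (Xs | Os):
            continue
        interior = {(c, r) for c in cols[1:] for r in rows[1:]}   # interior lattice points
        if any((c, x[c]) in interior for c in range(k)):
            continue
        count += 1
    return count
```

On the $2\times 2$ grid diagram of the unknot, for example, every such count vanishes, consistent with the differential being trivial there.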
Suppose that the link $L$ has $l$ components. If $O_i$ and $O_j$ lie on the same component of $L$, then multiplication by $U_i$ in $({CFL^-}(m(L)), \partial^-)$ is chain homotopic to multiplication by $U_j$, and, so, these multiplications induce the same maps on ${HFL^-}(m(L))$ [@most Lemma 2.9]. So, if we label the markings in $\mathbb{O}$ so that $O_1,\dots,O_l$ lie on different components, then we can think of ${HFL^-}(m(L))$ as a module over ${\mathbb{Z}_2}[U_1,\dots,U_l]$.
Setting $U_1 = \dots = U_l=0$, one obtains a chain complex $({\widehat{CFL}}(m(L)), \widehat\partial)$ whose homology we denote by ${\widehat{HFL}}(m(L))$. The latter is a bi-graded vector space over ${\mathbb{Z}_2}$, whose graded Euler characteristic is some normalization of the multivariable Alexander polynomial of $m(L)$ [@osz19]. If one sets $U_1 = \dots = U_k = 0$, one obtains a chain complex $({\widetilde{CFL}}(m(L)), \widetilde\partial)$ whose homology we denote by ${\widetilde{HFL}}(m(L))$. The group ${\widehat{HFL}}(m(L))$ determines ${\widetilde{HFL}}(m(L))$. Specifically, if we let $n_i$, for $i=1,\dots,l$, denote the number of markings in $\mathbb{O}$ on the ith component of $L$, then $${\widetilde{HFL}}(m(L)) = {\widehat{HFL}}(m(L)) \otimes \bigotimes_{i=1}^l V_i^{\otimes (n_i-1)},$$ where $V_i$ is a fixed two dimensional vector space [@most Proposition 2.13], and the quotient map $$j:{\widehat{CFL}}(m(L))\rightarrow {\widetilde{CFL}}(m(L))$$ induces an injection $j_*$ on homology.
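For instance, if $L$ is a knot presented on a grid of size $k$ (so $l=1$ and $n_1=k$), this relation says that ${\widetilde{HFL}}(m(L))$ has dimension $2^{k-1}$ times that of ${\widehat{HFL}}(m(L))$; on the $2\times 2$ grid diagram of the unknot both generators survive in ${\widetilde{HFL}}$, matching $2^{1}\cdot 1 = 2$.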
The element ${\mathbf{z}}^+(G) \in {\mathbf{S}}(G)$, which consists of the intersection points at the upper right corners of the squares in $G$ containing the markings in $\mathbb{X}$, is clearly a cycle in $({CFL^-}(m(L)), \partial^-)$ (and, hence, also in the other chain complexes). If $T$ is the transverse link in $(\mathbb{R}^3,\xi_{rot})$ corresponding to the braid obtained from $G$ as in Figure \[fig:example\].e, then $T$ is topologically isotopic to $L$, and the image of ${\mathbf{z}}^+(G)$ in ${HFL^-}(m(T))$ is the transverse invariant $\theta^-(T)$ defined in [@oszt]. The images of ${\mathbf{z}}^+(G)$ in ${\widehat{HFL}}(m(T))$ and ${\widetilde{HFL}}(m(T))$ are likewise denoted $\widehat\theta(T)$ and $\widetilde\theta(T)$, and are invariants of the transverse link $T$ as well. Moreover, the map $j_*$ sends $\widehat\theta(T)$ to $\widetilde\theta(T)$; in particular, $\widehat\theta(T)=0$ if and only if $\widetilde\theta(T)=0$. The theorem below makes these statements about invariance precise.
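In the square-labelling convention of the sketch above, ${\mathbf{z}}^+(G)$ can be read off directly from the $\mathbb{X}$ markings; the helper below is again only an illustrative sketch with names of our own choosing.

```python
def z_plus(Xs, k):
    """z^+(G): for each X marking, take the upper-right corner of its square."""
    gen = [None] * k
    for c, r in Xs:
        gen[(c + 1) % k] = (r + 1) % k
    return tuple(gen)

# e.g. the 2x2 unknot grid with Xs = {(0, 0), (1, 1)} gives z^+(G) = (0, 1)
```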
\[thm:invt\] Suppose that $G$ and $G'$ are two grid diagrams whose associated braids $T$ and $T'$ are transversely isotopic. Then, there is an isomorphism $$f^o_*:HFL^o(m(T))\rightarrow HFL^o(m(T')),$$ induced by a chain map $f^o$, which sends $\theta^o(T)$ to $\theta^o(T')$.
Here, the superscript “$^o$" is meant to indicate that this theorem holds for any of the three versions of link Floer homology described above. In particular, if $T$ and $T'$ are two transverse links for which $\widehat\theta(T) \neq 0$ and $\widehat\theta(T') = 0$, then $T$ and $T'$ are not transversely isotopic (the invariant $\theta^-(T)$ is always non-zero and non-$U_i$-torsion in ${HFL^-}(m(T))$ [@oszt Theorem 7.3]). These transverse invariants also behave nicely under negative braid stabilizations.
\[thm:stab\] Suppose that $G$ and $G'$ are two grid diagrams with associated braids $T$ and $T'$, and suppose that $T'$ is obtained from $T$ by performing a negative braid stabilization along the $i$th component of $T$. Then, there is an isomorphism $$f^-_*:{HFL^-}(m(T))\rightarrow {HFL^-}(m(T')),$$ induced by a chain map $f^-$, which sends $\theta^-(T)$ to $U_i\cdot\theta^-(T')$.
Since multiplication by $U_i$ is the same as multiplication by zero on ${\widehat{HFL}}$ and ${\widetilde{HFL}}$, we obtain the following corollary.
\[cor:stab\] If $T'$ is obtained from a transverse braid $T$ by performing a negative braid stabilization along some component of $T$, then $\widehat\theta(T') = \widetilde\theta(T')=0.$
The map $\Phi$ and comultiplication {#sec:comult}
===================================
Fix some $w\in B_n$ and some $i\in\{1,\dots,n-1\}$. Figure \[fig:pentagon\] shows simultaneously a portion of a grid diagram $G_{\beta}$ for $T_{w\sigma_i}$ and the corresponding portion of a grid diagram $G_{\gamma}$ for $T_{w}$. The grid diagrams $G_{\beta}$ and $G_{\gamma}$ are the same except that $G_{\beta}$ uses the horizontal curve $\beta$ while $G_{\gamma}$ uses the horizontal curve $\gamma$. Let $k$ denote their common grid number.
For ${\mathbf{x}}\in {\mathbf{S}}(G_{\beta})$ and ${\mathbf{y}}\in {\mathbf{S}}(G_{\gamma})$, let $Pent_{\beta\gamma}({\mathbf{x}},{\mathbf{y}})$ denote the space of embedded pentagons with the following properties. $Pent_{\beta\gamma}({\mathbf{x}},{\mathbf{y}})$ is empty unless ${\mathbf{x}}$ and ${\mathbf{y}}$ coincide at $k-2$ points. An element $p\in Pent_{\beta\gamma}({\mathbf{x}},{\mathbf{y}})$ is an embedded disk in the torus whose boundary consists of five arcs, each contained in horizontal or vertical circles. We stipulate that under the orientation induced on the boundary of $p$, the boundary may be traversed as follows. Start at the component of ${\mathbf{x}}$ on the curve $\beta$ and proceed along an arc contained in $\beta$ until we arrive at the right-most intersection point between $\beta$ and $\gamma$; next, proceed along an arc contained in $\gamma$ until we reach the component of ${\mathbf{y}}$ contained in $\gamma$; next, follow the arc of a vertical circle until we arrive at a component of ${\mathbf{x}}$; then, proceed along the arc of a horizontal circle until we arrive at a component of ${\mathbf{y}}$; finally, follow an arc contained in a vertical circle back to the initial component of ${\mathbf{x}}$. Let $Pent_{\beta\gamma}^o({\mathbf{x}},{\mathbf{y}})$ denote the space of $p\in Pent_{\beta\gamma}({\mathbf{x}},{\mathbf{y}})$ for which $p\cap\mathbb{X}= \text{Int}(p)\cap \mathbf{x} = \emptyset$.
We construct a map $$\phi^-:{CFL^-}(m(T_{w\sigma_i}))\rightarrow {CFL^-}(m(T_w))$$ of ${\mathbb{Z}_2}[U_1,\dots,U_k]$-modules as follows. For ${\mathbf{x}}\in {\mathbf{S}}(G_{\beta})$, let $$\phi^-({\mathbf{x}}) = \sum_{{\mathbf{y}}\in{\mathbf{S}}(G_{\gamma})}\,\,\sum_{p\in Pent_{\beta\gamma}^o({\mathbf{x}},{\mathbf{y}})} U_1^{O_1(p)}\cdots U_k^{O_k(p)} \cdot {\mathbf{y}}.$$ We then define $$\widetilde\phi:{\widetilde{CFL}}(m(T_{w\sigma_i}))\rightarrow {\widetilde{CFL}}(m(T_w))$$ to be the map on ${\widetilde{CFL}}$ induced by $\phi^-$. In other words, $\widetilde\phi$ counts pentagons in $Pent_{\beta\gamma}^o({\mathbf{x}},{\mathbf{y}})$ which also miss the $\mathbb{O}$ basepoints. (This construction is inspired by the proof of commutation invariance in [@most Section 3.1].)
Unlike $\widetilde\phi$, the map $\phi^-$ is not necessarily a chain map.
The juxtaposition $p*r$ of any $p\in Pent^o_{\beta\gamma}({\mathbf{x}},{\mathbf{y}})$ with any rectangle $r \in Rect^o_{G_\gamma}({\mathbf{y}},{\mathbf{w}})$ such that $p\cap\mathbb{O}=r\cap\mathbb{O}=\emptyset$ has precisely one such decomposition and exactly one other decomposition as $r'*p'$, where $r' \in Rect^o_{G_\beta}({\mathbf{x}},{\mathbf{y}}')$ and $p'\in Pent^o_{\beta\gamma}({\mathbf{y}}',{\mathbf{w}})$ and $r'\cap\mathbb{O}=p'\cap\mathbb{O}=\emptyset$. It follows that $\widetilde\phi$ is a chain map and, so, induces a map $$\widetilde\Phi:{\widetilde{HFL}}(m(T_{w\sigma_i}))\rightarrow {\widetilde{HFL}}(m(T_w)).$$ Moreover, it is clear that $Pent^o_{\beta\gamma}(\mathbf{z}^+(G_{\beta}), \mathbf{z}^+(G_{\gamma}))$ consists only of the shaded pentagon shown in Figure \[fig:pentagon\], and that $Pent^o_{\beta\gamma}(\mathbf{z}^+(G_{\beta}), \mathbf{y})$ is empty for $\mathbf{y} \neq \mathbf{z}^+(G_{\gamma}).$ Therefore, $\widetilde\Phi$ sends $\widetilde\theta(T_{w\sigma_i})$ to $\widetilde\theta(T_w)$, proving Theorem \[thm:nat\].
The more general comultiplication fact stated in Theorem \[thm:comult\] follows from the above result together with the sequence of braid moves depicted in Figure \[fig:braid\]. The braid in Figure \[fig:braid\].a represents the transverse link $T_{hg}$. The braid in \[fig:braid\].b is obtained from that in \[fig:braid\].a by a mixture of isotopy and positive stabilizations. The braid in \[fig:braid\].c is obtained from that in \[fig:braid\].b by isotopy followed by the introduction of negative crossings. The braid in \[fig:braid\].e is isotopic to the braids in \[fig:braid\].c and \[fig:braid\].d, and represents the connected sum of the transverse links $T_g$ and $T_h$ (for the latter statement, see [@bm2]). Therefore, a composition of the maps $\widetilde\Phi$ described above (one for each negative crossing introduced in going from \[fig:braid\].b to \[fig:braid\].c) yields a map $$\widetilde\mu:{\widetilde{HFL}}(m(T_{hg}) )\rightarrow {\widetilde{HFL}}(m(T_g\#T_h))$$ which sends $\widetilde\theta(T_{hg})$ to $\widetilde\theta(T_g\#T_h).$
Suppose that $T_g \# T_h$ is any connected sum of $T_g$ and $T_h$. In [@vera], V[é]{}rtesi proves the following refinement of the Kunneth formula described in [@osz19 Theorem 1.4]. (Her proof is actually for the analogous result in knot Floer homology, but it extends in an obvious manner to a proof of the theorem below.)
\[thm:connectedsum\] There is an isomorphism, $${\widehat{HFL}}( m(T_g \# T_h)) \cong {\widehat{HFL}}( m(T_g))\otimes_{{\mathbb{Z}_2}} {\widehat{HFL}}(m(T_h)),$$ under which $\widehat\theta(T_g \# T_h)$ is identified with $\widehat\theta(T_g)\otimes \widehat\theta(T_h)$.
V[é]{}rtesi’s theorem, used in combination with the comultiplication map $\widetilde{\mu}$, may be applied to prove Theorem \[thm:nonzero\].
Recall from the previous section that $\widehat\theta(T_w)$ is non-zero if and only if $\widetilde\theta(T_w)$ is non-zero. If $\widehat\theta(T_g)$ and $\widehat\theta(T_h)$ are both non-zero, then, by Theorem \[thm:connectedsum\], so is $\widehat\theta(T_g \# T_h)$, and, hence, so is $\widetilde\theta(T_g \# T_h)$. Since $\widetilde\mu$ sends $\widetilde\theta(T_{hg})$ to $\widetilde\theta(T_g\#T_h),$ this implies that $\widetilde\theta(T_{hg})$ is non-zero, and, hence, so is $\widehat\theta(T_{hg})$.
Recall that a braid $T_g$ is said to be *quasipositive* if $g \in B_n$ can be expressed as a product of conjugates of the form $w\sigma_iw^{-1}$, where $w$ is any word in $B_n$.
\[cor:qp\] If $T_g$ is a quasipositive braid, then $\widehat\theta(T_g)\neq0.$
If $g$ is a product of $m$ conjugates as above, then after resolving the corresponding $m$ positive crossings, one obtains a braid isotopic to $I_n$, the trivial $n$-braid. Therefore, a composition of $m$ of the maps $\widetilde\Phi$ sends $\widetilde\theta(T_g)$ to $\widetilde\theta(I_n)$. Moreover, one sees by glancing at the grid diagram for $I_n$ in Figure \[fig:trivial\] that $\widetilde\theta(T_{I_n})\neq 0$. Therefore, $\widetilde\theta(T_g) \neq 0$ and the same is true of $\widehat\theta(T_{g})$.
Suppose that $T_w$ is a positive braid with one component. Then $\widehat\theta(T_w) \neq 0$, by the corollary above; also, $T_w$ is a fibered knot [@bw]. Moreover, $\widehat\theta(T_w)$ lies in Alexander grading $(sl(T_w)+1)/2$, which, in this case, is simply the genus of $T_w$ [@oszt]. Therefore, $\widehat\theta(T_w)$ is the unique generator of ${\widehat{HFL}}(T_w,g(T_w))$. To show that $\widehat\theta(T_w) = \widehat{\underline\theta}(T_w)$, it suffices to prove that $\widehat{\underline\theta}(T_w)$ is non-zero as well. Fortunately, this has been shown by Vela-Vick in [@vv].
Finding new transversely non-simple links {#sec:nonsimple}
=========================================
In this section, we outline and apply one strategy for using comultiplication (in particular, Theorem \[thm:nonzero\]) to generate a plethora of new examples of transversely non-simple link types. Consider the braid words $$w_1 = a\sigma_1^{m}b\sigma_1^{-1}c\,\,\,\,\,\,\, and \,\,\,\,\,\,\,w_2 = a\sigma_1^{-1}b\sigma_1^{m}c$$ in $B_n$, where $a$, $b$ and $c$ are words in the generators $\sigma_2,\dots,\sigma_{n-1}$. The transverse braids $T_{w_1}$ and $T_{w_2}$ are said to be related by a *negative flype* and, in particular, represent the same topological link type. If, in addition, $m$ is odd, or if $m$ is even and the two strands which cross according to $\sigma_1^m$ belong to the same component of $T_{w_1}$, then $\mathcal{SL}(T_{w_1}) = \mathcal{SL}(T_{w_2})$.
Suppose that $\widehat\theta(T_{w_1})=0$ and $\widehat\theta(T_{w_2}) \neq 0$. The idea is to find a word $h$ in the generators $\sigma_2,\dots,\sigma_{n-1}$ with $\widehat\theta(T_h) \neq 0.$ Theorem \[thm:nonzero\] would then imply that $\widehat\theta(T_{hw_2}) \neq 0$. If it is also true that $\widehat\theta(T_{hw_1})=0$, then $T_{hw_1}$ and $T_{hw_2}$ are not transversely isotopic although they are topologically isotopic. We would like to find examples which also satisfy $\mathcal{SL}(T_{hw_1}) = \mathcal{SL}(T_{hw_2})$ (if $T_{hw_1}$ is a knot, this is automatic) so as to produce topological link types which are not transversely simple. One nice feature of this proposed method, which differs from that in [@vera], is that there is no reason to believe *a priori* that the link $T_{hw_1}$ so obtained is composite.
In principle, Theorem \[thm:nonzero\] eliminates half of the work in this scenario – namely, showing that $\widehat\theta(T_{hw_2}) \neq 0$. In practice, one would like to find examples in which the other half – showing that $\widehat\theta(T_{hw_1})$ is zero – is very easy. To that end, one strategy is to pick an example in which $T_{w_1}$ is transversely isotopic to a braid which can be negatively destabilized, and to show that the same is true of the braid $T_{hw_1}$, which would guarantee that $\widehat\theta(T_{hw_1})=0$ by Corollary \[cor:stab\]. In particular, $T_{w_1}$ must belong to a topological link type with a transverse representative (that is, $T_{w_2}$) which does not maximize self-linking number, but which cannot be negatively destabilized.
The most well-known such link type is that of the $(2,3)$ cable of the $(2,3)$ torus knot. In [@EH4], Etnyre and Honda prove the following.
\[prop:EH\] The $(2,3)$ cable of the $(2,3)$ torus knot has two Legendrian representatives, $L_1$ and $L_2$, both with $tb=5$ and $r=2$, for which $L_1$ is the positive (Legendrian) stabilization of a Legendrian knot while $L_2$ is not. Moreover, $L_1$ and $L_2$ are not Legendrian isotopic after any number of negative (Legendrian) stabilizations.
That $L_1$ and $L_2$ are not Legendrian isotopic after any number of negative stabilizations implies that their transverse pushoffs, $L_1^+$ and $L_2^+$, are not transversely isotopic (yet, they both have $sl = 3$). Moreover, since $L_1$ is the positive stabilization of a Legendrian knot, its pushoff $L_1^+$ is transversely isotopic to the negative stabilization of some transverse braid.
Matsuda and Menasco have since given explicit forms for $L_1$ and $L_2$ [@mm]. Figures \[fig:grids\].a and \[fig:grids\].a$'$ depict the rectangular diagrams corresponding to slightly modified versions of these forms (ours are derived from the front diagrams in [@not Figure 6]). Figures \[fig:grids\].b and \[fig:grids\].b$'$ show the rectangular braid diagrams for the transverse pushoffs $L_1^+$ and $L_2^+$, respectively. The braids in \[fig:grids\].c and \[fig:grids\].c$'$ are obtained from those in \[fig:grids\].b and \[fig:grids\].b$'$ by isotoping the red arcs as indicated, and the braids in \[fig:grids\].d and \[fig:grids\].d$'$ are obtained from those in \[fig:grids\].c and \[fig:grids\].c$'$ after additional simple isotopies and conjugations. The braids in \[fig:grids\].e and \[fig:grids\].e$'$ are obtained from those in \[fig:grids\].d and \[fig:grids\].d$'$ by conjugation, and they are related to one another by a negative flype. Indeed, Figure \[fig:grids\] shows that $L_1^+$ and $L_2^+$ are transversely isotopic to the transverse braids $T_{w_1}$ and $T_{w_2}$, respectively, where $w_1 = a\sigma_1^2 b\sigma_1^{-1}c$, $w_2 = a\sigma_1^{-1} b\sigma_1^2c$, $$\begin{aligned}
a&=&\sigma_4\sigma_3\sigma_5\sigma_6\sigma_4\sigma_5\sigma_5\sigma_6\sigma_4\sigma_5\sigma_7\sigma_6\sigma_5^{-1}\sigma_4^{-1}\sigma_3^{-1}\sigma_2\sigma_3\sigma_3\sigma_4\sigma_5\sigma_4^{-1}\sigma_3^{-1}\sigma_2^{-1}, \\
b&=& \sigma_5\sigma_6\sigma_7\sigma_6^{-1}\sigma_5^{-1}\sigma_4^{-1}\sigma_6^{-1}\sigma_5^{-1}\sigma_4^{-1}\sigma_3\sigma_4\sigma_5\sigma_2\sigma_3\sigma_4\sigma_4\sigma_5\sigma_6\sigma_5^{-1}\sigma_4^{-1}\sigma_3^{-1}\sigma_2^{-1}, \text{ and}\\
c&=&\sigma_7^{-1}\sigma_6^{-1}\sigma_5^{-1}.\end{aligned}$$
According to Proposition \[prop:EH\], $T_{w_1}$ is transversely isotopic to a braid which can be negatively destabilized. Figure \[fig:grids2\] shows a sequence of transverse braid moves which demonstrates that the same is true of $T_{hw_1}$ for any word $h\in B_8$ in the generators $\sigma_3,\dots, \sigma_6$. The braid in Figure \[fig:grids2\].b is obtained from that in \[fig:grids2\].a by isotoping the red and blue arcs as shown. The braid in \[fig:grids2\].c is related to that in \[fig:grids2\].b by an exchange move at the circled crossings in \[fig:grids2\].b. The braid in \[fig:grids2\].d is obtained from that in \[fig:grids2\].c by isotopy of the red, blue and green arcs. The braid in \[fig:grids2\].e is obtained from that in \[fig:grids2\].d after the indicated isotopy of the yellow, orange and purple arcs. An exchange move at the circled crossings in \[fig:grids2\].e produces the braid in \[fig:grids2\].f. The braid in \[fig:grids2\].g is obtained from that in \[fig:grids2\].f by isotoping the red arc as shown. The braid in \[fig:grids2\].h is obtained from that in \[fig:grids2\].g after an isotopy of the blue, green and purple arcs as shown. An exchange move at the circled crossings in \[fig:grids2\].h, followed by the indicated isotopy of the red arc produces the braid in \[fig:grids2\].i. Finally, the braid in \[fig:grids2\].j is obtained from that in \[fig:grids2\].i by an exchange move at the circled crossings in \[fig:grids2\].i, followed by the indicated isotopy of the blue arc. Note that the braid in \[fig:grids2\].j may be negatively destabilized at the circled crossing. The essential point here is that the region of the braid in \[fig:grids2\].a corresponding to the word $h$ is not affected by this combination of isotopies and exchange moves.
To sum up: since $\widehat\theta(T_{w_2}) \neq 0$ (see [@not]), we have proven that for any $h\in B_8$ which is 1) a word in the generators $\sigma_3,\dots,\sigma_6$ and for which 2) $\widehat\theta(T_{h}) \neq 0$, it is the case that $\widehat\theta(T_{hw_1})=0$ while $\widehat\theta(T_{hw_2})\neq 0$. It follows that the transverse braids $T_{hw_1}$ and $T_{hw_2}$ are not transversely isotopic though they are topologically isotopic. If, in addition, 3) $h$ is such that the two strands of $T_{hw_1}$ which cross according to the string $\sigma_1^2$ belong to the same component of $T_{hw_1}$, then $\mathcal{SL}(T_{hw_1}) = \mathcal{SL}(T_{hw_2})$; that is, the topological link type represented by $T_{hw_1}$ is transversely non-simple.
There are infinitely many choices of $h$ which meet criteria 1) - 3) above. In order to give such an $h$, we first prove the following.
\[lem:translation\] For $1\leq j\leq k$ and $0\leq l\leq k-j$, consider the map $\psi_{j,k,l}:B_j \rightarrow B_k$ which sends $\sigma_i$ to $\sigma_{i+l}$. If $g$ is a word in $B_j$ for which $\widehat\theta(T_{g})\neq 0$, and $h = \psi_{j,k,l}(g)$, then $\widehat\theta(T_{h}) \neq 0$ as well.
See Figure \[fig:map\] for a pictorial depiction of the map $\psi_{j,k,l}$.
If $h=\psi_{j,k,l}(g)$, then the braid $T_h$ is easily seen to be a connected sum of $T_g$ with the trivial braids $I_{k-j-l+1}$ and $I_{l+1}$. We know, from the proof of Corollary \[cor:qp\], that $\widehat\theta(I_n) \neq 0$ for any $n\geq 1$. Lemma \[lem:translation\] therefore follows from Theorem \[thm:connectedsum\].
It follows from Corollary \[cor:qp\] and from Lemma \[lem:translation\] that $h=\psi_{4,8,3}(g)$ satisfies criteria 1) and 2) above as long as $T_g$ is a quasipositive 4-braid. Let $h=\psi_{4,8,3}(g)$ for $$g=\sigma_3\sigma_2\sigma_3\sigma_1\sigma_2\sigma_3.$$ It is easy to check that $h^n$ also satisfies criterion 3) (as well as criteria 1) and 2), of course) for all $n\geq 0$.
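As an illustration (using a hypothetical encoding of a braid word as a list of signed generator indices), the map $\psi_{j,k,l}$ and the particular word $h=\psi_{4,8,3}(g)$ chosen above can be written as follows.

```python
def psi(word, l):
    """psi_{j,k,l}: send sigma_i to sigma_{i+l}; the sign of an entry encodes an inverse."""
    return [s + l if s > 0 else s - l for s in word]

g = [3, 2, 3, 1, 2, 3]   # sigma_3 sigma_2 sigma_3 sigma_1 sigma_2 sigma_3, a positive word in B_4
h = psi(g, 3)            # [6, 5, 6, 4, 5, 6], i.e. sigma_6 sigma_5 sigma_6 sigma_4 sigma_5 sigma_6 in B_8
```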
\[cor:ex\] The topological link types represented by $T_{h^nw_1}$ are transversely non-simple for all $n\geq 0$. When $n$ is even, $T_{h^nw_1}$ is a knot; otherwise, $T_{h^nw_1}$ is a 3-component link.
Below, we prove that most of the links in Corollary \[cor:ex\] are prime. Note that $T_{h^nw_1}$ is obtained from $T_{w_1}$ by performing $n$ positive half twists of strands 4 - 7 in the region of $T_{w_1}$ where we would insert the word $h^n$. For $n=2m$, this amounts to adding $m$ positive full twists, which can also be accomplished by performing $-1/m$ surgery on an unknot $U$ encircling strands 4 - 7 of $T_{w_1}$ in the corresponding region. See Figure \[fig:braid2\].
A *SnapPea* computation [@weeks] combined with the Inverse Function Theorem test described in Moser’s thesis [@moserh] shows that the complement of the link $T_{w_1} \cup U$ is hyperbolic. To be specific, *SnapPea* finds a triangulation of this link complement by ideal tetrahedra and computes an approximate solution to the gluing equations. Moser’s test then confirms, using this approximate solution, that an exact solution exists.
Thurston’s celebrated Dehn Surgery Theorem then implies that all but finitely many Dehn fillings of the boundary component corresponding to $U$ are hyperbolic as well [@th1]. In turn, this implies that the link $T_{h^{2m}w_1}$ is hyperbolic, and, hence, prime for all but finitely many $m$. This argument can be repeated to show that the links $T_{h^{2m+1}w_1}$ are also prime for all but finitely many $m$. The lemma below sums this up.
The links $T_{h^nw_1}$ are prime for all but finitely many values of $n$.
[^1]: There are several versions of this $\theta$ invariant, denoted by $\theta^-$, $\widehat\theta$ and $\widetilde\theta$.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Sun and Farooq [@sun_fusion02] showed that random samples can be efficiently drawn from an arbitrary $n$-dimensional hyperellipsoid by transforming samples drawn randomly from the unit $n$-ball. They stated that it was *straightforward* to show that, given a uniform distribution over the $n$-ball, the transformation results in a uniform distribution over the hyperellipsoid, but did not present a full proof. This technical note presents such a proof.'
author:
- |
Jonathan D. Gammell\
Institute for Aerospace Studies\
University of Toronto\
4925 Dufferin Street\
Toronto, Ontario\
Canada M3H 5T6\
`<[email protected]>`
- |
Timothy D. Barfoot\
Institute for Aerospace Studies\
University of Toronto\
4925 Dufferin Street\
Toronto, Ontario\
Canada M3H 5T6\
`<[email protected]>`
bibliography:
- 'TR-2014-JDG004.bib'
title: '**[The Probability Density Function of a Transformation-based Hyperellipsoid Sampling Technique]{}**'
---
Transformation-based Sampling of Hyperellipsoids {#sec:sample}
================================================
Let ${{{X}_{\rm ellipse}}}$ be the set of points within an $n$-dimensional hyperellipsoid such that $$\begin{aligned}
{{{X}_{\rm ellipse}}}= \left\lbrace {\mathbf{x}}\in \mathbb{R}^n \;\; \middle| \;\; \left( {\mathbf{x}}- {{{\mathbf{x}}_{\rm centre}}}\right)^T\mathbf{S}^{-1}\left( {\mathbf{x}}- {{{\mathbf{x}}_{\rm centre}}}\right) \leq 1 \right\rbrace,\end{aligned}$$ where $\mathbf{S} \in \mathbb{R}^{n \times n}$ is the hyperellipsoid matrix, and ${{{\mathbf{x}}_{\rm centre}}}= \left({{\mathbf{x}}_{\rm f1}}+ {{\mathbf{x}}_{\rm f2}}\right)/2$ is the centre of the hyperellipsoid in terms of its two focal points, ${{\mathbf{x}}_{\rm f1}}$ and ${{\mathbf{x}}_{\rm f2}}$. We can then transform points from the unit $n$-ball, ${{{\mathbf{x}}_{\rm ball}}}\in {{{X}_{\rm ball}}}$, to points in the hyperellipsoid, ${{{\mathbf{x}}_{\rm ellipse}}}\in {{{X}_{\rm ellipse}}}$, by a linear invertible transformation as, $$\begin{aligned}
\label{eqn:transform}
{{{\mathbf{x}}_{\rm ellipse}}}= \mathbf{L} {{{\mathbf{x}}_{\rm ball}}}+ {{{\mathbf{x}}_{\rm centre}}}.\end{aligned}$$ The transformation, $\mathbf{L}$, is given by the Cholesky decomposition of the hyperellipsoid matrix, $$\begin{aligned}
\mathbf{L}\mathbf{L}^T \equiv \mathbf{S},\end{aligned}$$ and the unit $n$-ball is defined in terms of the Euclidean norm, ${\left|\left| \cdot \right|\right|_{2}}$, by $$\begin{aligned}
{{{X}_{\rm ball}}}= \left\lbrace {\mathbf{x}}\in \mathbb{R}^n \;\; \middle| \;\; {\left|\left| {\mathbf{x}}\right|\right|_{2}} \leq 1 \right\rbrace.\end{aligned}$$
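For readers who want to experiment with this construction, the following numpy sketch (an illustrative implementation of ours, not code from [@sun_fusion02]) draws the ball samples and applies the transformation (\[eqn:transform\]); all function and variable names are our own choices.

```python
import numpy as np

def sample_unit_ball(n, rng):
    """Uniform sample from the unit n-ball: uniform direction, radius distributed as U^(1/n)."""
    direction = rng.standard_normal(n)
    direction /= np.linalg.norm(direction)
    return direction * rng.random() ** (1.0 / n)

def sample_hyperellipsoid(S, centre, rng):
    """Uniform sample from the hyperellipsoid defined by S and centre, via x = L x_ball + centre
    with L the Cholesky factor satisfying L L^T = S."""
    L = np.linalg.cholesky(S)
    return L @ sample_unit_ball(len(centre), rng) + centre

rng = np.random.default_rng(1)
S = np.diag([2.0**2, 3.0**2])   # a 2D ellipse with radii 2 and 3
samples = [sample_hyperellipsoid(S, np.zeros(2), rng) for _ in range(1000)]
```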
Resulting Probability Density Function
======================================
In response to concerns expressed by Li [@li_cntrlapp92] that sampling the hyperellipsoid by transforming uniformly-drawn samples from the unit $n$-ball, ${{{\mathbf{x}}_{\rm ball}}}\sim {\mathcal{U}\left({{{X}_{\rm ball}}}\right)}$, by (\[eqn:transform\]) would not result in a uniform distribution, Sun and Farooq [@sun_fusion02] stated the following Lemma and Proof.
If the random points distributed in a hyper-ellipsoid are generated from the random points uniformly distributed in a hyper-sphere through a linear invertible non-orthogonal transformation, then the random points distributed in the hyper-ellipsoid are also uniformly distributed.
The proof of the above lemma is very straightforward and is omitted here for brevity. The result of the lemma is further substantiated through the simulation shown in \[Figures\].
For clarity, the full proof is presented below.
Let ${{p_{\rm ball}\left(\cdot\right)}}$ be the probability density function of samples drawn uniformly from the unit $n$-ball of volume ${{\zeta_{n}}}$, such that, $$\begin{aligned}
\label{eqn:ballPdf}
{{p_{\rm ball}\left({\mathbf{x}}\right)}} :=
\begin{cases}
\dfrac{1}{{{\zeta_{n}}}},& \forall {\mathbf{x}}\in {{{X}_{\rm ball}}}\\
0, & \text{otherwise},
\end{cases}\end{aligned}$$ and $g\left(\cdot\right)$ be an invertible transformation from the unit $n$-ball to a hyperellipsoid, such that, $$\begin{aligned}
{{{\mathbf{x}}_{\rm ellipse}}}&:= g\left({{{\mathbf{x}}_{\rm ball}}}\right),\\
{{{\mathbf{x}}_{\rm ball}}}&= g^{-1}\left({{{\mathbf{x}}_{\rm ellipse}}}\right).\end{aligned}$$ Then the probability density function of samples drawn from the hyperellipsoid, ${{p_{\rm ellipse}\left(\cdot\right)}}$, is given by, $$\begin{aligned}
\label{eqn:pdfDefn}
{{p_{\rm ellipse}\left({\mathbf{x}}\right)}} := {{p_{\rm ball}\left(g^{-1}\left({\mathbf{x}}\right)\right)}}\left|\det\left\lbrace \left.\frac{dg^{-1}}{d{{{\mathbf{x}}_{\rm ellipse}}}}\right|_{{\mathbf{x}}} \right\rbrace \right|.\end{aligned}$$ From (\[eqn:transform\]), we can calculate the inverse transformation as, $$\begin{aligned}
g^{-1}\left({{{\mathbf{x}}_{\rm ellipse}}}\right) = \mathbf{L}^{-1}\left({{{\mathbf{x}}_{\rm ellipse}}}- {{{\mathbf{x}}_{\rm centre}}}\right),\end{aligned}$$ whose Jacobian is then $$\begin{aligned}
\label{eqn:jac}
\frac{dg^{-1}}{d{{{\mathbf{x}}_{\rm ellipse}}}} = \frac{d}{d{{{\mathbf{x}}_{\rm ellipse}}}}\mathbf{L}^{-1}\left({{{\mathbf{x}}_{\rm ellipse}}}- {{{\mathbf{x}}_{\rm centre}}}\right) = \mathbf{L}^{-1}.\end{aligned}$$ Substituting (\[eqn:ballPdf\]) and (\[eqn:jac\]) into (\[eqn:pdfDefn\]) gives, $$\begin{aligned}
\label{eqn:ellipsePdf}
{{p_{\rm ellipse}\left({\mathbf{x}}\right)}} :=
\begin{cases}
\dfrac{1}{{{\zeta_{n}}}}\left|\det\left\lbrace \mathbf{L}^{-1} \right\rbrace \right|,& \forall {\mathbf{x}}\in {{{X}_{\rm ellipse}}}\\
0, & \text{otherwise},
\end{cases}\end{aligned}$$ where we have used the fact that $g^{-1}\left({\mathbf{x}}\right) \in {{{X}_{\rm ball}}}\implies {\mathbf{x}}\in {{{X}_{\rm ellipse}}}$. As ${{p_{\rm ellipse}\left(\cdot\right)}}$ is constant for all ${{{\mathbf{x}}_{\rm ellipse}}}\in {{{X}_{\rm ellipse}}}$, this proves that (\[eqn:transform\]) transforms samples drawn uniformly from the unit $n$-ball such that they are uniformly distributed over the hyperellipsoid given by $\mathbf{S}$.
Orthogonal Hyperellipsoids
--------------------------
If the axes of hyperellipsoid are orthogonal, there is a coordinate frame aligned to the axes of the hyperellipsoid such that $\mathbf{S}$ will be diagonal, $$\begin{aligned}
\mathbf{S} = \operatorname*{diag}\left\lbrace r_1^2, r_2^2, \ldots, r_n^2 \right\rbrace,\end{aligned}$$ where $r_i$ is the radius of $i$-th axis of the hyperellipsoid. The transformation from the unit $n$-ball to the hyperellipsoid expressed in this aligned frame, $\mathbf{L}'$, will then be $$\begin{aligned}
\label{eqn:orthoL}
\mathbf{L}' = \operatorname*{diag}\left\lbrace r_1, r_2, \ldots, r_n \right\rbrace.\end{aligned}$$ The hyperellipsoid in any arbitrary Cartesian frame can then be expressed as a rotation applied after this diagonal transformation, $$\begin{aligned}
\label{eqn:rotC}
{{{\mathbf{x}}_{\rm ellipse}}}= \mathbf{C}\mathbf{L'} {{{\mathbf{x}}_{\rm ball}}}+ {{{\mathbf{x}}_{\rm centre}}},\end{aligned}$$ where $\mathbf{C} \in SO\left(n\right)$ is an $n$-dimensional rotation matrix. Rearranging (\[eqn:rotC\]) and substituting into (\[eqn:pdfDefn\]) gives $$\begin{aligned}
\label{eqn:orthoPdfTemp}
{{p_{\rm ellipse}\left({\mathbf{x}}\right)}} :=
\begin{cases}
\dfrac{1}{{{\zeta_{n}}}}\left|\det\left\lbrace \mathbf{L'}^{-1}\mathbf{C}^T \right\rbrace \right|,& \forall {\mathbf{x}}\in {{{X}_{\rm ellipse}}}\\
0, & \text{otherwise},
\end{cases}\end{aligned}$$ where we have made use of the orthogonality of rotation matrices, $\forall \mathbf{C} \in SO\left(n\right), \, \mathbf{C}^T \equiv \mathbf{C}^{-1}$. Substituting (\[eqn:orthoL\]) into (\[eqn:orthoPdfTemp\]) finally gives, $$\begin{aligned}
\label{orthoPdf}
{{p_{\rm ellipse}\left({\mathbf{x}}\right)}} :=
\begin{cases}
\dfrac{1}{{{\zeta_{n}}}\prod_{i=1}^n r_i},& \forall {\mathbf{x}}\in {{{X}_{\rm ellipse}}}\\
0, & \text{otherwise},
\end{cases}\end{aligned}$$ where we have made use of the fact that all rotation matrices have a unity determinant, $\forall \mathbf{C} \in SO\left(n\right), \, \det\left\lbrace\mathbf{C}\right\rbrace = 1,$ and that the determinant of a diagonal matrix is the product of the diagonal terms. As expected, (\[orthoPdf\]) is exactly the inverse of the volume of an $n$-dimensional hyperellipsoid with radii $\left\lbrace r_i \right\rbrace$.
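As a quick numerical check of this expression, consider a two-dimensional ellipse with radii $r_1 = 2$ and $r_2 = 3$: since $\zeta_2 = \pi$, the density is $1/(6\pi) \approx 0.053$ at every interior point, which is indeed the reciprocal of the ellipse's area $\pi r_1 r_2 = 6\pi$.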
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Models of bacterial growth tend to be ‘irreversible’, allowing for the number of bacteria in a colony to increase but not to decrease. By contrast, models of molecular self-assembly are usually ‘reversible’, allowing for addition and removal of particles to a structure. Such processes differ in a fundamental way because only reversible processes possess an equilibrium. Here we show at mean-field level that dynamic trajectories of reversible and irreversible growth processes are similar in that both feel the influence of attractors, at which growth proceeds without limit but the intensive properties of the system are invariant. Attractors of both processes undergo nonequilibrium phase transitions as model parameters are varied, suggesting a unified way of describing reversible and irreversible growth. We also establish a connection at mean-field level between an irreversible model of growth (the magnetic Eden model) and the equilibrium Ising model, supporting the findings made by other authors using numerical simulations.'
author:
- Katherine Klymko$^1$
- 'Juan P. Garrahan$^2$'
- Stephen Whitelam$^3$
title: Similarity of ensembles of trajectories of reversible and irreversible growth processes
---
[*Introduction.*]{} Physical growth processes can be reversible, allowing for the number of particles present in a system to increase and decrease, or irreversible, allowing only for an increase of particle number. For example, bacterial colony growth is usually considered to be irreversible, because bacteria multiply but do not disappear [@eden1961two; @mazzitello2015far]. By contrast, models of molecular self-assembly are usually reversible, allowing for particle attachment and detachment [@hagan2006dynamic; @wilber2007reversible; @rapaport2008role; @drossel1997model]. The two types of process are fundamentally different in that reversible processes possess an equilibrium at which growth ceases, while irreversible processes do not. Usually one of these processes is chosen to model a particular physical system, and so comparison between the two is rarely made. Here we compare reversible and irreversible stochastic growth processes in a mean-field (space independent) setting. We show that despite their differences in respect of equilibrium, the two types of process can display similar behavior when growth is allowed to proceed without limit. Specifically, ensembles of dynamic trajectories are governed by attractors in phase space at which the averaged properties of the system, scaled by system size, are invariant. These attractors undergo nonequilibrium phase transitions as model parameters are varied, suggesting a unified way of describing reversible and irreversible processes. For one particular irreversible process, a mean-field version of the magnetic Eden model (MEM) [@eden1961two; @candia2008magnetic; @candia2001comparative], we show that its nonequilibrium phase behavior is that of the mean-field [*equilibrium*]{} Ising model. This finding provides additional evidence for a “nontrivial correspondence between the MEM for the irreversible growth of spins and the equilibrium Ising model” (in distinct spatial dimensions) conjectured by other authors on the basis of numerical simulations [@candia2008magnetic; @candia2001comparative].
[*Modeling reversible and irreversible growth.*]{} We consider reversible and irreversible stochastic growth processes in the simplest limit, ignoring spatial degrees of freedom and resolving only the numbers of particles in the system. By ‘reversible’ we mean simply that particles may enter [*and*]{} leave the system, and we intentionally do not require that rates are derived from the principle of detailed balance. We consider growth of a system composed of two types of particle, labeled ‘red’ and ‘blue’. The state of the system is defined at any instant by the number of red particles $r$ and blue particles $b$ it contains, or equivalently by the system’s ‘size’ $N\equiv b+r$ and ‘magnetization’ $m \equiv (b-r)/(b+r)$. We add blue particles to the system with rate $\lb$, and red particles with rate $\lr$. We remove blue and red particles from the system with respective rates $\gb$ and $\gr$. For an irreversible process these latter two rates are zero. We allow rates to depend on the instantaneous magnetization of the system but not (directly) its size. We impose this requirement in order to model a notional growth process in which rates of particle addition and removal to a structure scale with the size of the interface between the structure and its environment. We then assume the limit of a large structure whose interfacial area does not change appreciably during the growth process, and we divide addition and removal rates by the (constant) surface area in order to obtain the rates stated above.
We studied this class of growth processes using a continuous-time Monte Carlo protocol [@gillespie2005general]. To interpret these simulations we derived a set of analytic expressions for averages over dynamic trajectories, in the limit of vanishing particle-number fluctuations (see Sections \[supp1\]–\[supp3\]). Under these conditions the net rates of addition of blue and red particles are $\Gb(m) = \lb-\gb$ and $\Gr(m) = \lr-\gr$. The time evolution of system size is $\dot{N} = \Gb+\Gr$. The requirement for equilibrium, by which we mean a zero-growth-rate stationary solution, is $\Gb(m_0) = 0 = \Gr(m_0)$, where $m_0$ is the magnetization of the system in equilibrium. These relations can be satisfied by a reversible process but not an irreversible one, except in the trivial limit of zero addition rate. Thus only a reversible process has an equilibrium for which $\dot{N}= 0 = \dot{m}$. However, both types of process admit a steady-state growth regime for which $\dot{N}>0$ and $\dot{m}=0$. The time evolution of magnetization is $\dot{m} = N^{-1} \left[ \Gb-\Gr-m(\Gb+\Gr) \right]$, which vanishes for $m=m_\star$ such that \[ss\] $$m_\star = \frac{\Gb(m_\star)-\Gr(m_\star)}{\Gb(m_\star)+\Gr(m_\star)}.$$ Thus there exist nullclines, at which $\dot{m}=0$, in the space of dynamic growth trajectories. The existence of such nullclines requires only that [*net*]{} rates of particle addition are positive, whether or not removal rates vanish, and so can be displayed by reversible [*and*]{} irreversible processes. We shall show that these nullclines can be attractors, stable with respect to perturbations, and so constitute fixed lines to which dynamic trajectories flow. Furthermore, these attractors undergo nonequilibrium phase transitions as model parameters are varied.
We now specialize the discussion to two models that might be regarded as reversibly- and irreversibly growing versions of the mean-field Ising model. The irreversible stochastic process we consider is a space-independent version of the magnetic Eden model [@eden1961two; @candia2008magnetic; @candia2001comparative], closely related to a model studied in Ref. [@morris2014growth]. Addition of red and blue particles occurs with rates that are Arrhenius-like in the energy of interaction between a single particle and the system, $\lr =\frac{1}{2} \e^{-m J-h}$ and $\lb = \frac{1}{2}\e^{m J+h}$. Here $J$ is the interparticle coupling and $h$ a magnetic field (we set $m=0$ when $N=0$). We allow no particle removals, setting $\gb=\gr=0$. There is therefore no equilibrium. The analytic evolution equations, averaged over trajectories, read $\dot{N} = \cosh(m J + h)$ and \[mdot\_eden\] $$\dot{m} = N^{-1} \left[ \sinh(m J + h)-m\cosh(m J + h) \right],$$ and admit the nullcline \[eos\_eden\] $$m_\star = \tanh(m_\star J + h).$$ This equation is equivalent to the well-known mean-field expression for Ising model thermodynamics [@chandler; @binney1992theory]. For $h=0$, Equations (\[mdot\_eden\]) and (\[eos\_eden\]) indicate that the nullcline $m_\star=0$ is an attractor for $J \leq J_{\rm c} = 1$ and a repeller for $J>1$. For $J>1$ two symmetric attractors emerge as ${m_\star}_\pm \sim \pm (J-1)^{1/2}$. In other words, these equations describe a continuous phase transition of dynamic trajectories that are ‘unmagnetized’ for $J<1$ and ‘magnetized’ for $J>1$, via a critical point at $J=1$. Thus, at mean-field level, nonequilibrium trajectories of the magnetic Eden model possess phase behavior identical to that of the equilibrium Ising model [^1]. This result provides an analytic connection between models supporting the findings of Refs. [@candia2008magnetic; @candia2001comparative], which demonstrated a numerical equivalence between phase transitions, in distinct spatial dimensions, of Eden and Ising models (see also [@mazzitello2015far]). This result also appears to be consistent with general arguments suggesting that probabilistic irreversible cellular automata with Ising-like symmetry lie in the universality class of the equilibrium Ising model [@grinstein1985statistical] (see also [@bar1990structure; @drossel1997model]).
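A minimal numerical illustration of the nullcline (\[eos\_eden\]) (our own sketch, not part of the original analysis): fixed-point iteration of $m \mapsto \tanh(Jm+h)$ converges to a stable attractor and exhibits the pitchfork at $J_{\rm c}=1$.

```python
import numpy as np

def eden_attractor(J, h=0.0, m0=0.5, iters=5000):
    """Fixed-point iteration of m -> tanh(J m + h); converges to a stable root of the nullcline."""
    m = m0
    for _ in range(iters):
        m = np.tanh(J * m + h)
    return m

for J in (0.5, 1.0, 1.5, 2.0):
    print(J, eden_attractor(J))   # ~0 for J <= 1 (convergence is slow exactly at J = 1), nonzero for J > 1
```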
![Phase diagrams of the stable dynamic attractors of the reversible (a) and irreversible (b) models of growth, for $h=0$. U and M indicate unmagnetized and magnetized regions, respectively, with the latter admitting two stable attractors. The solid line in (a) and dot in (b) denote continuous phase transitions. To the left of the dotted line in (a) we have no growth.[]{data-label="fig1"}](fig1_new){width="0.8\linewidth"}
![Dynamic growth trajectories in the space of magnetization $m$ and system size $N$ for reversible (top) and irreversible (bottom) models. We show trajectories in the magnetized regime (a,b) with attractors marked as dotted lines, and at a dynamic critical point (c,d) where the attractor lies at zero magnetization. Parameters: (a) $J=2.5$ and $c=2$, (b) $J=1.25$, (c) $J=2$ and $c=2$, (d) $J=1$.[]{data-label="fig2"}](figure2.pdf){width="\linewidth"}
The reversible model we consider is the stochastic process whose fluctuation-free limit is described in Refs. [@whitelam2014self; @whitelam2015minimal]. We assume constant rates of particle addition, $\lb = p c $ and $\lr = (1-p) c$, where $c$ is a notional ‘solution’ concentration and $p$ is the fraction of particles in solution that are blue. To make contact with Ising model nomenclature we introduce the magnetic field $h$ via $p \equiv \e^h/(2 \cosh h)$. Unbinding rates are Arrhenius-like, appropriate for particles escaping from a structure via thermal fluctuations, and are $\gb=\frac{1}{2} \e^{-m J} (1+m)$ and $\gr=\frac{1}{2} \e^{m J} (1-m)$ (supplemented by the restriction that particle numbers cannot be negative). Note that these rates are intentionally not designed to satisfy detailed balance with respect to a particular energy function; however, the process still possesses an equilibrium. The fluctuation-free evolution equations are $\dot{N} = c- \cosh(m J) +m \sinh (m J)$ and \[rev\_2\] $$\dot{m} = N^{-1}\left[ c\left(\tanh h - m\right) + \left(1-m^2\right)\sinh(m J) \right].$$ Equilibrium is achieved when $c_0^2 = \left( 1-m_0^2\right) \cosh^2 h$, with \[rev\_3\] $$m_0 = \tanh(m_0 J+h).$$ Thus the [*equilibrium*]{} phase behavior of this model is identical to the [*nonequilibrium*]{} phase behavior of the irreversible model, and to the equilibrium phase behavior of the mean-field Ising model.
Persistently-growing trajectories admit the nullcline \[eos\_rev\] $$c\left(m_\star-\tanh h\right)=\left( 1-(m_\star)^2\right) \sinh(m_\star J).$$ This nullcline is different in detail to that of the irreversible model, (\[eos\_eden\]). However, (\[eos\_eden\]) and (\[eos\_rev\]) describe, for $J < \sqrt{6}$, a similar nonequilibrium continuous phase transition between unmagnetized and magnetized trajectories. The ‘magnetic’ critical exponent is $1/2$, as for the irreversible case, i.e. magnetization emerges for $J>J_{\rm c}$ as ${m_\star}_\pm \sim \pm (J-J_{\rm c})^{1/2}$, where $J_{\rm c}=c$ is the critical point of the reversible process.
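For a concrete value, the magnetized attractor of (\[eos\_rev\]) at $h=0$ can be located by simple bisection; the sketch below (our own, assuming $J>c$ so that a magnetized root exists) recovers the attractor for $J=2.5$, $c=2$, the magnetized-regime parameters used in the trajectory plots.

```python
import numpy as np

def reversible_attractor(J, c, lo=1e-6, hi=1.0 - 1e-6, iters=200):
    """Bisection for the positive root m* of c*m = (1 - m^2)*sinh(J*m), i.e. the h = 0 nullcline."""
    f = lambda m: (1.0 - m * m) * np.sinh(J * m) - c * m
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(reversible_attractor(2.5, 2.0))   # ~0.71 for J = 2.5, c = 2 (magnetized regime)
```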
In we show the nonequilibrium phase diagrams derived from , , and . These diagrams indicate the nature of the dynamic attractors $m_\star$ in a space of model parameters: in regions U and M the stable attractors possess zero and nonzero magnetization, respectively. Both models exhibit phase transitions at which the nature of the attractors changes qualitatively.
Numerical simulations accommodating particle-number fluctuations confirm these analytic expectations, and provide additional insight into the nature of phase transitions of ensembles of trajectories. We began simulations (in general) with no particles present, and advanced time by an amount $1/(\gr+\gb+\lr+\lb)$ following every Monte Carlo move. In we show that dynamic trajectories feel the influence of the dynamic attractors predicted analytically. In the magnetized region trajectories ‘flow’ to one of the two stable magnetized attractors, while at the critical point the stable attractor is unmagnetized. [*Individual*]{} trajectories fluctuate increasingly slowly in $m$-space as $N$ increases (because, for large $N$, fluctuations of particle number change magnetization by an amount $\propto N^{-1}$), and so e.g. the likelihood of a magnetized trajectory spontaneously reversing its magnetization becomes vanishingly small (see Ref.morris2014growth). However, [*ensembles*]{} of trajectories show behavior characteristic of a phase transition. In (a,b) we show averages of $|m(t)|$ over $10^5$ dynamic trajectories generated at several different values of $J$. We define averages of an observable $A(t)$ as $\av{A(t)} \equiv K^{-1} \sum_{i=1}^K A_i(t)$, where $A_i(t)$ is the value of $A(t)$ for the $i$th of $K$ trajectories. Trajectory averages evolve as $t$ increases toward the attractor. This evolution is in general slow, because the mobility of individual trajectories is low: ignoring fluctuations we expect small departures $\av{\delta m(t)}$ from the attractor to decay – above, at, and just below the critical point – as $t^{-q}$, $(\ln t)^{-1/2}$, and $t^{2q}$, respectively, where $q=1-J$ for the irreversible process and $q=(c-J)/(c-1)$ for the reversible one.
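For concreteness, here is a small Python sketch of a single trajectory generated by the protocol just described, specialized to the irreversible model (so $\gb=\gr=0$); the function name, the fixed seed and the trajectory format are our own choices.

```python
import numpy as np

def eden_trajectory(J, h=0.0, steps=20000, seed=0):
    """One irreversible mean-field trajectory: choose a blue or red addition with probability
    proportional to its rate, then advance time by 1/(total rate), as in the text."""
    rng = np.random.default_rng(seed)
    b = r = 0
    t, history = 0.0, []
    for _ in range(steps):
        m = 0.0 if b + r == 0 else (b - r) / (b + r)
        lam_b = 0.5 * np.exp(J * m + h)
        lam_r = 0.5 * np.exp(-J * m - h)
        total = lam_b + lam_r
        if rng.random() < lam_b / total:
            b += 1
        else:
            r += 1
        t += 1.0 / total
        history.append((t, b + r, (b - r) / (b + r)))
    return history   # m(t) drifts towards an attractor m_star of the nullcline equation
```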
![Evolution of $\av{N}$ (a) and $\av{|m|}$ (b) for size-limited reversible growth shows that trajectories fall under the influence of the dynamic attractor while growth persists, and evolve to the equilibrium attractor when the size constraint is reached. Lines denote averages over 500 trajectories. Parameters: $J=1.5,c=2$. The line labeled ‘constrained’ in (a) shows the results of similarly constrained simulations for several values of $J$.[]{data-label="fig4"}](maxN.pdf){width="0.9\linewidth"}
Trajectory-to-trajectory [*fluctuations*]{}, which are neglected by our analytic expressions, also show behavior characteristic of a phase transition. In (c,d) we show the trajectory-to-trajectory fluctuations of magnetization, $\chi \equiv \av{N(t)} (\av{m^2(t)} - \av{|m(t)|^2})$ (the quantity ${\rm var}(|M|)/\av{N}$, where $M=mN$, behaves similarly). For both models $\chi$ displays at the critical point a peak that increases in height with average system size as $\av{N(t)}^{0.82(1)}$ over the time interval shown (see inset to (b)). While individual trajectories flow to stable attractors as time increases, thereby causing ${\rm var} (|m|)$ to decrease with time, the same trajectories also acquire more particles, and the combination $\av{N}\, {\rm var}(|m|)$ [*increases*]{} with time over the interval simulated. Such ‘sharpening’ of a phase transition with increasing time is reminiscent of behavior seen in glassy models that display phase transitions in space-timeḩedges2009dynamic. In the asymptotic limit (when $N\to \infty$ and $m =m_\star$ is constant) we expect the evolution of $M=mN$ to resemble that of a random walker, and so ${\rm var}(|M|) \propto t$. Thus ensembles of trajectories feel the effect of dynamic attractors, but can in certain regimes of phase space take considerable time to reach them.
In some limits the two types of process can be clearly distinguished. All growth processes must eventually end. A bacterial colony will run out of food, and a self-assembled structure will come to equilibrium with its environment. In this limit the difference between reversible and irreversible processes becomes apparent. In Fig. \[fig4\] we show dynamic simulations of the reversible model carried out in the presence of an additional rule that forbids any addition that would cause the system to contain more than $10^3$ particles. During the growth phase dynamic trajectories fall under the influence of the dynamic attractor, but when the system size limit is reached trajectories evolve to an attractor similar to that of the equilibrium one; see also the line labeled ‘constrained’ in (a). Trajectories of the irreversible model, in the presence of a size constraint, simply cease to evolve. The behavior of the reversible model gives insight into the behavior of the lattice models of growth of Refs. [@whitelam2014self; @whitelam2015minimal]. These models obey detailed balance, and so must eventually evolve to equilibrium, but during a period of growth they display nonequilibrium behavior consistent with that of a persistently-growing process. The present results indicate that one can consider dynamic trajectories of a reversible growth process to feel the effect of both nonequilibrium and equilibrium attractors, the relative influence of which varies over the lifetime of the trajectory.
[*Conclusions.*]{} We have shown at mean-field level that reversible and irreversible growth processes are similar in that both admit attractors in the space of dynamical trajectories. At these attractors growth proceeds without limit, but averaged intensive properties of the system are time-invariant. Attractors of both types of process can undergo similar nonequilibrium phase transitions. We have also established a connection at mean-field level between an irreversible model of growth (the magnetic Eden model) and the equilibrium Ising model, supporting the findings made by other authors using numerical simulations. There is sustained interest in nucleation and growth pathways of molecular [@sear2007nucleation], active [@redner2016classical] and living [@eden1961two] matter. Our results indicate that certain qualitative properties of nonequilibrium growth trajectories are insensitive to details of the microscopic transition rates that produce them, suggesting a unified way of describing growth processes of distinct microscopic entities.
[*Acknowledgements.*]{} We thank the organizers of the EPSRC workshop ‘Collective Behaviour in Growing Systems’, Bath University, Nov 2014, which motivated this work. KK acknowledges support from an NSF Graduate Research Fellowship. JPG was supported by EPSRC Grant No. EP/K01773X/1. This work was done as part of a User project at the Molecular Foundry at Lawrence Berkeley National Laboratory, supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02–05CH11231.
Stochastic models of growth {#supp1}
===========================
The stochastic processes described in the main text can be described using a master equation. Consider the probability $P(r,b;t)$ that a system at time $t$ contains $r \geq 0 $ red and $b \geq 0$ blue particles. For brevity we will write this probability as $P(r,b)$, with the time-dependence of the function being implicit. We add red particles to the system with rate $\lr(r,b)$ and blue particles with rate $\lb(r,b)$, and remove red and blue particles with respective rates $\gr(r,b)$ and $\gb(r,b)$. The master equation for this set of processes is \[master\] \_t P(r,b) &=& (r,b-1) P(r,b-1)-(r,b) P(r,b)\
&+& (r-1,b) P(r-1,b)-(r,b) P(r,b)\
&+& (r,b+1) P(r,b+1)-(r,b) P(r,b)\
&+& (r+1,b) P(r+1,b)-(r,b) P(r,b). We set $\gb(r,0) = 0 = \gr(0,b)$ so that we cannot have a negative number of particles of either color. By multiplying both sides of by the arbitrary function $U(r,b)$ and summing over $b$ and $r$ we obtain the evolution equation for the quantity $U$ averaged over dynamic trajectories, $\av{U(r,b)} \equiv \sum_{r,b=0}^\infty U(r,b) P(r,b)$: \[time\_evolution\_one\] \_t U(r,b) &=& (r,b)\
&+& (r,b)\
&+& (r,b)\
&+& (r,b) . Setting $U(r,b)=b$ gives the rate of change of the mean number of blue particles, \[blue\_ev\] \_t = -. The corresponding equation for red particles is \[red\_ev\] \_t = -. We can obtain closed-form equations for rates of change of particle number by making a mean-field approximation, replacing averages over functions $f$ of $r$ and $b$ with functions $f$ of the averages of $r$ and $b$, i.e. writing $\av{f(r,b)} = f(\av{r},\av{b})$. To simplify notation we then replace $\av{r} \to r$ and $\av{b} \to b$, so that and read \[one\] &=& (b,r)-(b,r);\
\[two\] &=& (b,r)-(b,r). The size of the system is $N = r+b$, and so its growth rate is \[growth\_rate\] &=& +\
&=& +--. In equilibrium we must have the vanishing of and , giving \[eos1\] =and \[eos2\] =. The rate of change of magnetization $m \equiv (b-r)/(b+r)$ is \[mdot\] &=&\
&=& . The condition $\dot{m}=0$ implies \[eos\_noneq\] m\_=, in which all rates are understood to be evaluated at $m=m_\star$.
Irreversible model of growth {#supp2}
============================
The irreversible model described in the main text allows no particle removal, $\gb(r,b) = 0 = \gr(r,b)$. Blue particles are added with an Arrhenius-like rate that assumes Ising-like interaction interaction energies between red and blue particles with coupling $J$ and magnetic field $h$ (we take $\kt = 1$), (b,r) &=& { J - J + h }\
&=& \^[m J +h]{}. Here the spatial mean-field approximation is apparent, i.e. particles ‘feel’ only the average magnetization of the whole system. We have absorbed the particle coordination number, assumed to be constant, into $J$. Similarly, red particles are added to the system with rate (b,r) &=& { J - J - h }\
&=& \^[-m J- h]{}, The averaged growth rate is \[eden1\] = (m J + h). The averaged time evolution of the system’s magnetization, , is \[eden2\] &=& N\^[-1]{} . This vanishes for \[eden3\] m\_= (m\_J + h). Equations and are equations and of the main text.
The temporal evolution to the attractor differs in different regimes of parameter space. Consider the case of vanishing magnetic field. For a small departure $\delta m$ from the attractor, $m(t) = m_\star + \delta m(t)$, we have from $N \approx \cosh (m_\star J)t$. Inserting this result into and using gives \_t m - { m (J m)+\[(m\_)\^2+m\_m-1\] (J m) }. Expanding this equation in powers of $\delta m$ gives \[expansion\] \_t m { m - J m\_(m)\^2 + (m)\^3 }. In the unmagnetized region $(J<1)$ we have $m_\star = 0$, and so $ \partial_t \delta m \approx (J-1) \delta m/t$. Thus temporal relaxation to the attractor is algebraic, with a continuously varying exponent: $\delta m \sim t^{J-1}$. In the magnetized region $(J>1,m_\star \neq 0)$ relaxation to steady-state is also algebraic, $\delta m \sim t^{J(1-(m_\star)^2)-1}$. Close to the critical point, where $J \approx 1$, we have from that $(m_\star)^2 \approx 3(J-1)/J^3$, and so $\delta m \sim t^{2(1-J)}$ to leading order in $J-1$. Thus the (moduli of) exponents either side of the critical point are distinct. At the critical point we have $J=1$ and $m_\star=0$, in which case the first two terms on the right-hand side of vanish. We then have $\partial_t \delta m \propto - (\delta m)^3/t$, and so $\delta m \sim (\ln t)^{-1/2}$.
Reversible model of growth {#supp3}
==========================
For the reversible model we have constant rates of particle addition, $\lb(r,b) = p c $ and $\lr(r,b) = (1-p) c$, where $c$ is a notional ‘solution’ concentration and $p$ is the fraction of particles in solution that are blue. To make contact with Ising model nomenclature we introduce the magnetic field $h$ via $p \equiv \e^h/(2 \cosh h)$. Particle unbinding rates are Arrhenius-like, appropriate for particles escaping from a structure via thermal fluctuations (we take $\kt = 1$): $$\gb(b,r) = \frac{b}{b+r}\exp\left\{ -\frac{b}{b+r}\,\es - \frac{r}{b+r}\,\ed \right\} \equiv \frac{b}{b+r}\,\e^{-\Sigma-m \Delta},$$ where the magnetization of the system is again $m\equiv (b-r)/(b+r)$. We assume that blue particles possess energy of interaction $-\es$ with blue particles and $-\ed$ with red particles (we have absorbed factors of coordination number, assumed to be constant, into these energetic parameters). We have defined the parameters $\Sigma \equiv \left(\es+\ed \right)/2$ and $\Delta \equiv \left(\es-\ed \right)/2$. The prefactor of the exponential ensures that blue particles leave the system with a rate proportional to their relative abundance in the system. For red particles we choose the unbinding rate $$\gr(b,r) = \frac{r}{b+r}\exp\left\{ -\frac{r}{b+r}\,\es - \frac{b}{b+r}\,\ed \right\} \equiv \frac{r}{b+r}\,\e^{-\Sigma+m \Delta}.$$ Note that because $m$ is not defined for $r=b=0$ we additionally require $\gb(0,0) = 0 = \gr(0,0)$.
Hence the evolution equations for $b$ and $r$ become $$\dot{b} = pc- \frac{b}{b+r}\,\e^{-\Sigma- m \Delta}, \qquad \dot{r} = (1-p)c- \frac{r}{b+r}\,\e^{-\Sigma+ m \Delta},$$ which, with $p=1/2$, are Equations (1) of Ref. [@whitelam2014self]. It is convenient to rescale time and concentration as $t \to \e^{\Sigma} t$ and $c \to \e^{-\Sigma} c$ to get the simpler equations $$\label{rate1} \dot{b} = pc- \frac{b}{b+r}\,\e^{- m \Delta},$$ $$\label{rate2} \dot{r} = (1-p)c- \frac{r}{b+r}\,\e^{ m \Delta}.$$ The growth rate is $$\label{ndot_supp} \dot{N} = c- \cosh(m \Delta) +m \sinh(m \Delta).$$ In this model there exists an equilibrium when rates of particle addition and removal balance. The associated equation of state follows from Eqs. \[rate1\] and \[rate2\], and is $$\label{eq1_supp} m_0 = \tanh(m_0 \Delta +h),$$ with the associated concentration $$\label{eq2} c_0^2 = \left( 1-m_0^2\right) \cosh^2 h.$$ Note that the equilibrium concentration for $h=0$ is the same for red $(m_0<0)$ and blue $(m_0>0)$ solutions, i.e. $c_0$ is unchanged upon setting $m_0 \to -m_0$.
The rate of magnetization evolution, $\dot{m}$, is $$\label{mdot2_supp} \dot{m} = \frac{c\left(\tanh h - m\right) + \left(1-m^2\right)\sinh(m\Delta)}{N},$$ which vanishes when $$\label{ss_m_supp} c\left(m_\star-\tanh h\right)=\left( 1-(m_\star)^2\right) \sinh(m_\star \Delta).$$
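The steady states implied by this condition are easy to tabulate numerically; the following short Python sketch (mine, for illustration only, with $\Delta$, $h$, and the concentration values chosen arbitrarily) brackets and refines the roots of Eq. \[ss\_m\_supp\] for a few concentrations.

```python
# Sketch: enumerate steady-state magnetizations m_star from
#   c*(m - tanh h) = (1 - m^2)*sinh(m*Delta)
# for several concentrations c (h = 0, Delta < sqrt(6)).
import numpy as np
from scipy.optimize import brentq

Delta, h = 1.5, 0.0

def f(m, c):
    return c * (m - np.tanh(h)) - (1.0 - m**2) * np.sinh(m * Delta)

grid = np.linspace(-0.999, 0.999, 4000)     # even count keeps m = 0 off the grid
for c in (2.0, 1.5, 1.2, 0.8):
    roots = [brentq(f, a, b, args=(c,))
             for a, b in zip(grid[:-1], grid[1:]) if f(a, c) * f(b, c) < 0]
    print(f"c = {c:4.2f}:  m_star = " + ", ".join(f"{m:+.3f}" for m in roots))
```

For $c>\Delta$ only the unmagnetized solution appears; for $c<\Delta$ the pair $m_\pm$ emerges as well, as discussed next.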
In the main text we assume an Ising-like hierarchy for the interaction energies of this model, in which case $\Delta = J$ and $\Sigma=0$. With these choices the equations above reduce to the corresponding equations of the main text.
Analysis of Eq. \[ss\_m\_supp\], for $h=0$, gives rise to panel (a) of the corresponding figure in the main text. The function on the right-hand side of Eq. \[ss\_m\_supp\] vanishes at $m = 0$ and at $m= \pm 1$, and has one turning point for positive $m$ and one for negative $m$. When $\Delta < \sqrt{6}$ this function takes its largest positive gradient, $\Delta$, at the origin. Therefore it intersects the function $c m$ on the left-hand side of Eq. \[ss\_m\_supp\] three times if $c< \Delta$ (with two nonzero solutions, $m_{\pm}$, stable to perturbations of $m$, and one, at $m=0$, unstable) and only once (at $m=0$) if $c> \Delta$. When $c=\Delta$ all solutions lie at $m=0$. The solutions $m_{\pm}$ move smoothly away from $m=0$ as $c$ is decreased below $\Delta$, and do so as $$\label{scaling} m_\pm\sim \left( \frac{\Delta-c}{\Delta}\right)^{1/2}.$$ Thus at the point $c=\Delta$ (for $h=0$ and $\Delta < \sqrt{6}$) we have a continuous nonequilibrium phase transition separating zero- and nonzero magnetization solutions to Eq. \[ss\_m\_supp\].
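The square-root onset can be checked directly; the snippet below (an illustrative sketch of mine, not the authors' analysis, with $\Delta=1.5$ assumed) solves the steady-state condition just below the transition and shows that $m_+^2$ grows linearly in $\Delta - c$.

```python
# Sketch: verify m_+ ~ (Delta - c)^(1/2) just below the transition (h = 0).
import numpy as np
from scipy.optimize import brentq

Delta = 1.5
f = lambda m, c: c * m - (1.0 - m**2) * np.sinh(m * Delta)

for eps in (1e-2, 1e-3, 1e-4):
    c = Delta - eps
    m_plus = brentq(f, 1e-8, 0.999, args=(c,))
    print(f"Delta - c = {eps:7.1e}:  m_+^2 / (Delta - c) = {m_plus**2 / eps:.3f}")
# the ratio approaches a constant (expanding the condition gives 6/[Delta*(6-Delta^2)])
```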
For $h=0$ and $\Delta \geq \sqrt{6}$ we can have zero, three or five solutions to Eq. \[ss\_m\_supp\], depending on the value of $c$, and respectively zero, two and three of those solutions are stable.
As for the irreversible model, temporal relaxation to the attractor $m_\star$ varies by parameter regime. Expanding Eqs. \[ndot\_supp\] and \[mdot2\_supp\] for $m(t)=m_\star+\delta m(t)$ gives, for $m_\star=0$, $$\partial_t \delta m \approx \frac{1}{(c-1)t}\left\{ (\Delta-c)\,\delta m - \frac{\Delta\left(6-\Delta^2\right)}{6} (\delta m)^3 \right\}.$$ In the unmagnetized region $c>\Delta$ we then have $\delta m \sim t^{\frac{\Delta-c}{c-1}}$. At criticality $(\Delta = c)$ we have $\delta m \sim (\ln t)^{-1/2}$. Expanding Eqs. \[ndot\_supp\] and \[mdot2\_supp\] for $m(t)=m_\star+\delta m(t)$ and using Eq. \[scaling\] reveals that in the magnetized region we have $\delta m \sim t^{\frac{2(c-\Delta)}{c-1}}$.
[^1]: The same conclusion follows from the variant of Eq. (14) of Ref. [@morris2014growth] that would be obtained by setting, in Eq. (5) of that reference, the spin-flip terms $f$ to zero and requiring the vanishing of the remaining term.
---
abstract: 'We use realistic pseudopotentials and a plane-wave basis to study the electronic structure of non-periodic, three-dimensional, 2000-atom (AlAs)$_{n}$/(GaAs)$_{m}$ (001) superlattices, where the individual layer thicknesses $n,m \in \{1,2,3\}$ are randomly selected. We find that while the band gap of the equivalent ($n = m = 2$) [*ordered*]{} superlattice is indirect, random fluctuations in layer thicknesses lead to a [*direct*]{} gap in the planar Brillouin zone, strong wavefunction localization along the growth direction, short radiative lifetimes, and a significant band-gap reduction, in agreement with experiments on such intentionally grown disordered superlattices.'
address: 'National Renewable Energy Laboratory, Golden, CO 80401'
author:
- 'Kurt A. Mäder, Lin-Wang Wang, and Alex Zunger'
date: 'Received: 5 October 1994'
title: 'Electronic structure of intentionally disordered AlAs/GaAs superlattices'
---
Ordered semiconductor superlattices—produced routinely by epitaxial crystal growth techniques—are widely recognized for their unique electronic and optical properties [@SL]. To tailor the electronic properties (e.g., through band folding) one aims at growing ordered superlattices (o-SL) with [*definite values*]{} of the thicknesses $n$ and $m$ in $(A)_{n}/(G)_{m}$. One could, however, deliberately grow a [*disordered*]{} superlattice (d-SL) [@Chomette86; @Sasaki89], where the individual layer thicknesses $n,m,n',m',n'',m'',\dots$ are selected at random according to a given probability distribution $p_{\alpha}(n)$ ($\alpha=A,G$). While the electronic structure of an o-SL is characterized by extended states and the formation of mini-bands, a truly one-dimensional disordered system (described, e.g., by the Anderson model) shows carrier localization and absence of dispersion [@Borland63; @Papa76]. Sasaki et al. [@Sasaki89] grew $\sim$1000 monolayer (ML) thick AlAs/GaAs d-SL’s (i.e., $A =$ AlAs, $G
=$ GaAs) with $n$ and $m$ chosen from a set of small integers $\{1,2,3\}$, viz., $p_{A} = p_{G} = p$, with $p(1)=p(2)=p(3)=\frac{1}{3}$. Since the average layer thickness is 2 monolayers, one can think of this d-SL as evolving from an $(A)_{2}/(G)_{2}$ o-SL by random substitution of $A_{2}$ and $G_{2}$ layers by $A_{1}$, $A_{3}$, $G_{1}$, and $G_{3}$ layers (“$\delta$-doping”). Such disordered superlattices have shown surprising and unique optical properties relative to their parent o-SL [@Sasaki89]: (a) strong and initially fast decaying (lifetime $\tau = 0.25$ ns at $T =$ 77 K) photoluminescence (PL) intensity even though the equivalent o-SL has an indirect band gap and thus emits both weakly and slowly, (b) a large red shift ($\sim$60 meV) of the PL peak with respect to the equivalent o-SL, and (c) an order of magnitude slower rate of reduction of the PL intensity with temperature. These unusual properties of d-SL’s appear very attractive for optoelectronic applications [@Sasaki89].
In modelling the electronic structure of a d-SL [@Chomette86; @Pavesi89; @EGWang94], one faces difficulties arising from the existence of two entirely different length scales: (i) The lack of translational symmetry requires the use of unit cells with a macroscopic length $N \approx 1000$ ML, equal to the [*total length*]{} of the d-SL ($N d \approx 300$ nm, where $d$ is the monolayer thickness). (ii) The spatial variations of the electron potential, however, occur on a microscopic length scale of about $0.1$ nm. While it is possible to rescale the microscopic length scale by replacing the periodic atomic potential by an external, rectangular potential [@Chomette86; @Pavesi89], this approach fails to describe the band structure (e.g., the indirect gap of the (AlAs)$_{2}$/(GaAs)$_{2}$ o-SL) in the present regime of rapid layer fluctuations. To overcome the problems arising from the existence of two disparate length scales, we extended a microscopic pseudopotential description of the electron structure to a macroscopic length scale necessitated by the absence of translational symmetry. We use [*fixed*]{} (screened) atomic pseudopotentials that were carefully fitted to bulk band structures, effective masses, deformation potentials, band offsets, and energy levels in superlattices [@Mader94]. The wavefunctions are expanded in about 30 plane waves per atom. For an $N$-monolayer superlattice along (001) with two atoms per monolayer the corresponding matrix dimension is therefore about $60N\times60N$. Standard techniques to solve the Schrödinger equation require orthogonalization of the states of interest (i.e., near the band gap) to all lower-lying states. This leads to an $N^{3}$ scaling of the effort and becomes impractical for structures with $N\approx100$ ML. We use instead the “folded-spectrum” method [@Wang94], where eigenstates are obtained directly in an energy window of interest (e.g., near the band gap), without having to solve for any of the $\sim 8N$ lower-lying eigenstates first, thus circumventing the need for orthogonalization. The effort scales linearly with $N$, allowing us to use the realistic, three-dimensional pseudopotentials, and to solve the Schrödinger equation in a highly flexible plane-wave basis even for $N = 1000$ ML.
Application to d-SL’s of AlAs/GaAs leads to an explanation of the large red shift, the enhanced oscillator strength, and the weak temperature dependence in terms of band-edge wavefunction localization. The source of localization is interesting: In truly one-dimensional disordered chains [*all*]{} states are in general localized [@Borland63; @Papa76]. However, the laboratory-grown [@Sasaki89] d-SL’s have extended layers in the $(x,y)$ plane (perpendicular to the disorder direction $z = [001]$), so these [*quasi*]{} one-dimensional systems retain in fact a three-dimensional character, and the states need not be localized by disorder. We will show that the localization in the d-SL originates mostly from the formation of impurity-like, localized bound states due to insertion of $\delta$-layers into the o-SL. In Fig. \[fig:psi\](a) we show the planar wavefunction average $|\bar\psi_{E}(z)|^{2} = \int d^{2}r_{\perp} |\psi_{E}(\bbox{r})|^2$ of a few occupied and unoccupied band-edge states of a d-SL with a 1000-monolayer unit cell. We see that these band-edge wavefunctions are sharply localized. We can quantify the effective localization length (in monolayer units) for wavefunction $\psi_{E}$ at energy $E$ as [@Papa76] $$\label{eq:Leff}
L_{\text{eff}}(E) = \frac{1}{h} \left(\sum_{i}
|\langle i|\bar\psi_{E}\rangle|^4 \right)^{-1},$$ where the sum extends over the grid points $i$ along $z$, and $h$ is the number of grid points per monolayer. For a truly extended state, $L_{\text{eff}}$ is of the order of $N$, while for a state localized on one site, $L_{\text{eff}}=1$. We find that $L_{\text{eff}}
\lesssim 15$ ML for the band-edge states. The asymptotic decay length $\gamma^{-1}$ can be quantified by $\langle |\bar\psi_{E}(z)|\rangle \propto e^{-\gamma z}$, where the angular brackets denote averaging over the fast oscillations of $|\bar\psi_{E}(z)|$ along $z$. Near the band edges, the calculated values are $\gamma\approx$ 0.2 ML$^{-1}$.
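To make the localization measure concrete, the following small Python sketch (my own illustration; the grid density and the exponential test profile are assumptions, not data from the paper) evaluates the effective localization length defined above for a planar-averaged wavefunction sampled along $z$.

```python
# Sketch: effective localization length L_eff = (1/h) / sum_i p_i^2, where
# p_i = |<i|psi_bar>|^2 is the normalized planar-averaged density on grid
# point i and h is the number of grid points per monolayer.
import numpy as np

def l_eff(psi2_planar, points_per_ml):
    p = np.asarray(psi2_planar, dtype=float)
    p = p / p.sum()                          # normalize: sum_i p_i = 1
    return 1.0 / (points_per_ml * np.sum(p**2))

# toy profile: |psi_bar(z)|^2 ~ exp(-2*gamma*|z - z0|) with gamma = 0.2/ML,
# the asymptotic decay rate quoted in the text for band-edge states
h = 8                                        # grid points per ML (assumed)
z = np.arange(1000 * h) / h                  # 1000-ML cell, z in monolayers
psi2 = np.exp(-2.0 * 0.2 * np.abs(z - 500.0))
print(f"L_eff = {l_eff(psi2, h):.1f} ML")    # -> 2/gamma = 10 ML for this profile
```

An extended state spread uniformly over the same grid would instead give $L_{\text{eff}}$ of the order of the cell length $N$.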
We now investigate two possible mechanisms for the wavefunction localization apparent in Fig. \[fig:psi\](a): (i) chemical binding to $\delta$-like “impurity layers” in an otherwise ordered $(A)_{2}/(G)_{2}$ host, and (ii) a continuous increase of localization (measured by $\gamma$ or by $L_{\text{eff}}^{-1}$) with increasing degree of disorder. Scenario (i) is motivated by the fact that in one dimension an attractive $\delta$-potential always has one bound state, whereas scenario (ii) is valid for the Anderson model with on-site disorder obeying a continuous probability distribution [@Papa76]. In our case of a discrete probability distribution $p(n)$, we connect the two pictures by starting with the reference o-SL $(A)_{2}/(G)_{2}$, and gradually substituting $A_{2}$ and $G_{2}$ layers by “wrong” layers with $n=1$ or $n=3$. We measure the degree of disorder by counting the relative frequency $R$ of these layers, i.e., $$\label{eq:R}
R = p(1)/p(2) = p(3)/p(2).$$ The fully disordered SL has $R=1$, the single $\delta$-layer in an ordered $(A)_{2}/(G)_{2}$ superlattice corresponds to $R \approx N^{-1}$, while the perfect o-SL has $R=0$.
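For readers who want to generate such geometries, a minimal sketch of the layer-thickness sampling is given below (my own construction; the random-number handling and the strict alternation of materials are assumptions about the growth protocol, not the authors' code).

```python
# Sketch: draw an A_n/G_m layer sequence with thicknesses n in {1,2,3} and
# relative frequency R = p(1)/p(2) = p(3)/p(2) of "wrong" (1- or 3-ML) layers.
import random

def layer_sequence(total_ml, R, seed=0):
    rng = random.Random(seed)
    weights = [R, 1.0, R]                    # p(1) : p(2) : p(3)
    seq, material, grown = [], 'A', 0
    while grown < total_ml:
        n = rng.choices([1, 2, 3], weights=weights)[0]
        seq.append((material, n))
        grown += n
        material = 'G' if material == 'A' else 'A'
    return seq

sl = layer_sequence(1000, R=1.0)             # fully disordered SL (R = 1)
counts = {n: sum(1 for _, k in sl if k == n) for n in (1, 2, 3)}
print(counts)                                # ~equal numbers of 1-, 2-, 3-ML layers
```

Setting `R=0` recovers the ordered $(A)_{2}/(G)_{2}$ superlattice, while $R\approx N^{-1}$ corresponds to a single $\delta$-layer.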
To understand the possibility of impurity-like localization, consider for example a $G_{3}$ $\delta$-layer embedded in the otherwise perfect o-SL $\cdots A_{2}G_{2}A_{2}G_{2}A_{2}G_{2}\cdots$, thus converting it into $\cdots
A_{2}G_{2}A_{2}G_{3}A_{2}G_{2}\cdots$. If the $G_{3}$ $\delta$-layer is attractive to electrons (holes) it will bind a state below the conduction-band minimum (CBM) \[above the valence-band maximum (VBM)\] of the o-SL [@Mader92]. We find that a (GaAs)$_{3}$ $\delta$-layer in the (AlAs)$_{2}$/(GaAs)$_{2}$ o-SL indeed binds an electron [*and*]{} a hole \[Fig. \[fig:psi\](b)\], while an (AlAs)$_{3}$ layer binds an electron but does not bind a hole \[Fig. \[fig:psi\](c)\]. The bound-state localization lengths $\gamma^{-1}$ and $L_{\text{eff}}$ obtained with a [*single*]{} $\delta$-layer are similar to those obtained in the fully disordered SL (Fig. \[fig:psi\]), suggesting that the same mechanism of localization could be at work in both cases. As one increases the concentration $R$ of randomly positioned $\delta$-layers, one finds more bound states which are eventually forming a quasi continuum inside the band gap [@Lifshits63]. This is illustrated in Fig. \[fig:gaps\], where the band-edge energies of d-SL’s with $N =$ 128 ML [@length] are plotted as a function of the degree of disorder $R$. As $R$ approaches zero, the band edges merge with the finite binding energies of an isolated $\delta$-layer \[scenario (i)\], and not with the unperturbed band edges of the o-SL \[scenario (ii)\]. This significant observation suggests that the localization energy in the d-SL comes mostly from impurity binding. Figure \[fig:gaps\] shows that for large degrees of disorder $R$, the band edges are pushed further into the gap. To isolate the effect of pure disorder from the effect of impurity-like localization, we also show in Fig. \[fig:gaps\] the band edges of an o-SL containing a [*periodic array*]{} of $\delta$-layers of concentration $R$, i.e., separated by a distance $\sim R^{-1}$. We see that even for an array of closely spaced $\delta$-layers ($R\to1$), the binding energy does not increase, indicating a negligible interaction between the neighboring, [*coherently arranged*]{} bound states. In contrast, in a d-SL the $\delta$-layer-like bound states are arranged [*incoherently*]{}, leading to a band tail. These studies show that the localization length of the band-edge states in a d-SL is decided already by the chemical, impurity-like binding of a single $\delta$-layer \[Figs. \[fig:psi\](b) and \[fig:psi\](c)\]. Furthermore, the energy position of the gap levels at small to intermediate degrees of disorder $R$ is also determined by the properties of non-interacting, [*periodically arranged*]{} $\delta$-layers (Fig. \[fig:gaps\]). For higher values of $R$, disorder shifts and broadens the gap levels further into the gap, without modifying their localization length.
To illustrate the dependence of the gap-level shifts on the particular form of disorder, we compare the results obtained for the d-SL with those found in a partially-ordered superlattice (po-SL): Arent et al.[@Arent94] grew po-SL’s with the same distribution function $p(n)$ for the $A_{n}$ layers as in a d-SL, but preserved long-range order by requiring that each $A_{n}$ layer be followed by a $G_{4-n}$ layer. Therefore, at positions 1,5,9,…there is always an $A$ layer, and at positions 4,8,12,…there is always a $G$ layer, while at the “sandwiched” positions, $A$ and $G$ layers are distributed randomly. We see in Fig. \[fig:gaps\] that the presence of long-range order leads to an even larger shift of gap levels than in the equivalent d-SL at the same $R$ value. The band-edge wavefunctions have the same characteristic $L_{\text{eff}}$ and $\gamma$ as in a d-SL. Thus, absence of long-range order (in the d-SL) is not essential for obtaining large band-edge shifts.
The calculated band-gap reduction of the d-SL and po-SL with respect to the o-SL (Fig. \[fig:gaps\]) is consistent with experiment [@Sasaki89; @Arent94]: We find band gaps of 2.09, 1.94, and 1.87 eV for the o-SL, d-SL, and po-SL, respectively, compared with the experimental PL emission lines at 2.02, 1.96 [@Sasaki89], and 1.87 eV [@Arent94], respectively.
To determine if the d-SL has a direct or indirect gap in the two-dimensional Brillouin zone, we show in Figure \[fig:disp\] the dispersion of the band-edge states of the d-SL (solid lines), o-SL (thin lines) and single $\delta$-layer (dotted lines) along the symmetry lines $\bar\Sigma$ and $\bar\Delta$, i.e., from $\bar\Gamma$ to $\bar M = \frac{1}{\sqrt{2}} (1,1)$ and from $\bar\Gamma$ to $\bar
X = \frac{1}{\sqrt{2}} (1,0)$, respectively. The thin horizontal lines denote the band edges of the underlying o-SL. We find that the conduction bands of the d-SL dip below these lines. The difference (“binding energy”) increases in the order $\bar M \to \bar\Gamma \to
\bar X$. [@L-repulsion] Thus, the large binding energy at $\bar\Gamma$ pulls the conduction-band edge below the one at $\bar M$ by 60 meV, [*making the d-SL a direct-gap material*]{} [@tb], even though the o-SL is indirect (with CBM at $\bar M$). This suggests strongly that the observed [@Sasaki89] strong PL intensity is due to the occurrence of a direct band gap, leading to efficient recombination of electrons and holes localized in the same spatial region along $z$ \[see, e.g., the same positions along the chain of states v2 and c3 in Fig. \[fig:psi\](a)\]. The localization along $z$ relaxes the $\bbox{k}_{\parallel}$-selection rule, thus further enhancing the oscillator strength. The enhanced oscillator strength is reflected by short radiative lifetimes $\tau$: we calculate $\tau =$ 1 ns for the v2$\to$c3 transition at energy 1.96 eV \[Fig. \[fig:psi\](a)\]. These radiative lifetimes are 1000$\times$ faster compared to those measured in indirect-gap o-SL’s ($\tau \approx 5.5$ $\mu$s at $T =$ 2 K) [@Ge94].
To understand why the PL intensity in a d-SL has a weaker temperature dependence than in an o-SL, consider vertical transport to non-radiative recombination centers. This channel will be inhibited, unless there are strongly overlapping electron states within an energy $\sim k T$ of each other. Strong overlap occurs when $\zeta(E,T) =
k T \rho(E_{T}) L_{\rm eff}(E_{T}) > 1$, where $\rho(E)$ is the one-dimensional density of states, and $E_{T}$ is a typical energy that is thermally occupied at temperature $T$. Figure \[fig:DOS\] shows $\rho(E)$, $L_{\rm eff}(E)$, and $\zeta(E,T)$ obtained by averaging over 100 realizations of $N=2000$ ML d-SL’s, calculated within the Kronig-Penney approximation with band offset fitted to our pseudopotential calculations. We find that: (i) the DOS exhibits a $\sim$200 meV wide band tail extending below the CBM of the o-SL (the zero of energy); (ii) $L_{\rm eff}$ is almost constant ($\sim$20 ML) in the band tail; (iii) at the thermally populated levels \[denoted by the arrows in Fig. \[fig:DOS\](b)\] $\zeta \ll 1$, thus vertical hopping is suppressed. Consequently, the d-SL will have a weaker temperature dependence of the PL decay, because higher temperatures will be needed to dissociate the electron-hole pairs in the vertical dimension.
Acknowledgment—Fruitful discussions with D. J. Arent and S.-H. Wei are gratefully acknowledged. This work was supported by the Office of Energy Research, Materials Science Division, U.S. Department of Energy, under grant No. DE-AC36-83CH10093.
J. L. Beeby et al. (editors), [*Condensed Systems of Low Dimensionality*]{}, NATO ASI Series B [**253**]{}, (Plenum Press, New York, 1991).
A. Chomette et al., Phys. Rev. Lett. [**57**]{}, 1464 (1986).
A. Sasaki et al., Jpn. J. Appl. Phys. [**28**]{}, L1249 (1989); J. Crystal Growth [**115**]{}, 490 (1991).
R. E. Borland, Proc. R. Soc. A [**274**]{}, 529 (1963).
C. Papatriantafillou and E. N. Economou, Phys. Rev. B [**13**]{}, 920 (1976).
L. Pavesi et al., Phys. Rev. B [**39**]{}, 7788 (1989).
E. G. Wang et al., Appl. Phys. Lett., 443 (1994).
K. A. Mäder and A. Zunger, Phys. Rev. B, BT5032, in press (1994).
L.-W. Wang and A. Zunger, J. Chem. Phys. [**100**]{}, 2394 (1994).
K. A. Mäder and A. Baldereschi, Mat. Res. Soc. Symp. Proc. [**240**]{}, 597 (1992).
I. M. Lifshits, Sov. Phys. JETP [**17**]{}, 1159 (1963).
Note that in order to correctly describe localized wavefunctions we need $N \gg L_{\text{eff}}$. We have confirmed that band-edge wavefunctions calculated in an $N=128$ d-SL agree very well with those shown in Fig. \[fig:psi\](a), which were calculated with $N=1000$.
D. J. Arent et al., Phys. Rev. B [**49**]{}, 11173 (1994).
The large binding energy at $\bar X$ is a consequence of the level repulsion of the folded $L_{\rm 1c}$ states, which is much stronger for odd values of the repeat period $n$ than for even $n$ \[See, for example, S.-H. Wei and A. Zunger, Appl. Phys. Lett. [**53**]{}, 2077 (1988)\]. In the d-SL the odd-even selection rule is broken, leading to a stronger level repulsion in the d-SL than in the $n=2$ o-SL.
In a recent tight-binding study of an $N = $ 10–20 ML model of a d-SL, Wang et al. [@EGWang94] found a nearly dispersionless conduction band along $\bar\Delta$ and $\bar\Sigma$, an [*indirect gap*]{} at $\bar M$, and a 2–4 ML localization length. The differences with respect to the present pseudopotential calculation may reflect a combination of the use of rather short SL's and the restricted variational flexibility of the tight-binding method used in Ref. [@EGWang94].
W. Ge et al., J. Luminesc. [**59**]{}, 163 (1994).
---
abstract: |
Numerical models of core-collapse supernova explosions powered by the neutrino-driven mechanism have matured considerably in recent years. Explosions at the low-mass end of the progenitor spectrum can routinely be simulated in 1D, 2D, and 3D today and already allow us to study supernova nucleosynthesis based on first-principle models. Results of nucleosynthesis calculations indicate that supernovae of the lowest masses could be important as contributors of some lighter neutron-rich elements beyond iron. The explosion mechanism of more massive stars is still under investigation, although first 3D models of neutrino-driven explosions employing multi-group neutrino transport have recently become available. Together with earlier 2D models and more simplified 3D simulations, these have elucidated the interplay between neutrino heating and multi-dimensional hydrodynamic instabilities in the post-shock region that is essential for shock revival. However, some physical ingredients may still need to be added or improved before simulations can robustly explain supernova explosions over a wide mass range. We explore possible issues that may affect the accuracy of numerical supernova simulations, and review some of the ideas that have recently been explored as avenues to robust explosions, including uncertainties in the neutrino rates, rapid rotation, and an external forcing of non-radial fluid motions by strong seed perturbations from convective shell burning. The “perturbation-aided” neutrino-driven mechanism and the implications of recent 3D simulations of shell burning in supernova progenitors are discussed in detail. The efficacy of the perturbation-aided mechanism in some progenitors is illustrated by the first successful multi-group neutrino hydrodynamics simulation of an $18
M_\odot$ progenitor with 3D initial conditions. We conclude with a few speculations about the potential impact of 3D effects on the structure of massive stars through convective boundary mixing.
author:
- 'B. Müller$^{1,2,3}$\'
bibliography:
- 'paper.bib'
title: 'The Status of Multi-Dimensional Core-Collapse Supernova Models'
---
supernovae: general – hydrodynamics – instabilities – neutrinos – stars: massive – stars: evolution
INTRODUCTION {#sec:intro}
============
The explosions of massive stars as *core-collapse supernovae* (CCSNe) constitute one of the most outstanding problems in modern astrophysics. This is in no small measure due to the critical role of supernova explosions in the history of the Universe. Core-collapse supernovae figure prominently in the chemical evolution of galaxies as the dominant producers, e.g., of elements between oxygen and the iron group [@arnett_96; @woosley_02], and supernova feedback is a key ingredient in the modern theory of star formation [@krumholz_14]. The properties of neutron stars and stellar-mass black holes (masses, spins, kicks; @oezel_10 [@oezel_12; @kiziltan_13; @antoniadis_16; @arzoumanian_02; @hobbs_05]) cannot be understood without addressing the origin of these compact objects in stellar explosions.
Why (some) massive stars explode is, however, a daunting problem in its own right regardless of the wider implications of supernova explosions: The connection of supernovae of massive stars with the gravitational collapse to a neutron star has been postulated more than eighty years ago [@baade_34b], and the best-explored mechanism for powering the explosion, the neutrino-driven mechanism, has gone through several stages of “moulting” in the fifty years after its conception by @colgate_66. Yet the problem of the supernova explosion mechanism still awaits a definitive solution. The rugged path towards an understanding of the explosion mechanism merely reflects that core-collapse supernovae are the epitome of a “multi-physics” problem that combines aspects of stellar structure and evolution, nuclear and neutrino physics, fluid dynamics, kinetic theory, and general relativity. We cannot recapitulate the history of the field here and instead refer the reader to the classical and modern reviews of @bethe_90, @arnett_96, @mezzacappa_05, @kotake_06, @janka_07, @burrows_07, @janka_12, and @burrows_13 as starting points.
The longevity of the supernova problem should not be misinterpreted: Despite the occasional detour, supernova theory has made steady progress, particularly so during the last few years, which have seen the emergence of mature – and increasingly successful – multi-dimensional first-principle simulations of the collapse and explosion of massive stars as well as conceptual advances in our understanding of the neutrino-driven explosion mechanism and its interplay with multi-dimensional hydrodynamic instabilities.
The Neutrino-driven Explosion Mechanism in its Modern Flavour
-------------------------------------------------------------
Before we review these recent advances, it is apposite to briefly recapitulate the basic idea of the neutrino-driven supernova mechanism in its modern guise. Stars with zero-age main sequence masses above $\gtrsim 8 M_\odot$ and with a helium core mass $\lesssim 65 M_\odot$ (the lower limit for non-pulsational pair-instability supernovae; [@heger_02; @heger_03]) develop iron cores that eventually become subject to gravitational instability and undergo collapse on a free-fall time-scale. For low-mass supernova progenitors with highly degenerate iron cores, collapse is triggered by the reduction of the electron degeneracy pressure due to electron captures; for more massive stars with higher core entropy and a strong contribution of radiation pressure, photo-disintegration of heavy nuclei also contributes to gravitational instability. Aside from these “iron core supernovae”, there may also be a route towards core collapse from super-AGB stars with O-Ne-Mg cores [@nomoto_84; @nomoto_87; @poelarends_08; @jones_13; @jones_14; @doherty_15], where rapid core contraction is triggered by electron captures on $^{20}\mathrm{Ne}$ and $^{24}\mathrm{Mg}$;[^1] hence this sub-class is designated as “electron-capture supernovae” (ECSNe).
According to modern shell-model calculations [@langanke_00; @langanke_03], the electron capture rate on heavy nuclei remains high even during the advanced stages of collapse [@langanke_03] when the composition of the core is dominated by increasingly neutron-rich and massive nuclei. Further deleptonisation during collapse thus reduces the lepton fraction $Y_\mathrm{lep}$ to about $0.3$ according to modern simulations [@marek_05; @sullivan_16] until neutrino trapping occurs at a density of $\mathord{\sim}10^{12} \, \mathrm{g} \, \mathrm{cm}^{-3}$. As a result, the homologously collapsing inner core shrinks [@yahil_83], and the shock forms at a small enclosed mass of $\mathord{\sim} 0.5 M_\odot$ [@langanke_03; @hix_03; @marek_05] after the core reaches supranuclear densities and rebounds (“bounces”). Due to photodisintegration of heavy nuclei in the infalling shells into free nucleons as well as rapid deleptonisation in the post-shock region once the shock breaks out of the neutrinosphere, the shock stalls a few milliseconds after bounce, i.e.it turns into an accretion shock with negative radial velocity downstream of the shock. Aided by a continuous reduction of the mass accretion rate onto the young proto-neutron star, the stalled accretion shock still propagates outward for $\mathord{\sim} 70 \, \mathrm{ms}$, however, and reaches a typical peak radius of $\mathord{\sim} 150 \, \mathrm{km}$ before it starts to recede again.
The point of maximum shock expansion is roughly coincident with several other important changes in the post-shock region: Photons and electron-positron pairs become the dominant source of pressure in the immediate post-shock region, deleptonisation behind the shock occurs more gradually, and the electron neutrino and antineutrino luminosities become similar. Most notably, a region of net neutrino heating (“gain region”) emerges behind the shock. In the “delayed neutrino-driven mechanism” as conceived by @bethe_85 and @wilson_85, the neutrino heating eventually leads to a sufficient increase of the post-shock pressure to “revive” the shock and make it re-expand, although the post-shock velocity initially remains negative. Since shock expansion increases the mass of the dissociated material exposed to strong neutrino heating, this is thought to be a self-sustaining runaway process that eventually pumps sufficient energy into the post-shock region to allow for the development of positive post-shock velocities and, further down the road, the expulsion of the stellar envelope.
Modern simulations of core-collapse supernovae that include energy-dependent neutrino transport, state-of-the art microphysics, and (to various degrees) general relativistic effects have demonstrated that the neutrino-driven mechanism is not viable in spherical symmetry [@rampp_00; @rampp_02; @liebendoerfer_00; @liebendoerfer_04; @liebendoerfer_05; @sumiyoshi_05; @buras_06a; @buras_06b; @mueller_10; @fischer_10; @lentz_12a; @lentz_12b], except for supernova progenitors of the lowest masses [@kitaura_06; @janka_08; @burrows_07b; @fischer_10], which will be discussed in Section \[sec:ecsn\].
In its modern guise, the paradigm of neutrino-driven explosions therefore relies on the joint action of neutrino heating and various hydrodynamic instabilities to achieve shock revival. As demonstrated by the first generation of multi-dimensional supernova models in the 1990s [@herant_94; @burrows_95; @janka_95; @janka_96], the gain region is subject to convective instability due to the negative entropy gradient established by neutrino heating. Convection can be suppressed if the accreted material is quickly advected from the shock to the gain radius [@foglizzo_06]. Under these conditions, the standing accretion shock instability (SASI; [@blondin_03; @blondin_06; @foglizzo_07; @laming_07; @yamasaki_07; @fernandez_09a; @fernandez_09b]) can still grow, which is mediated by an advective-acoustic cycle [@foglizzo_02; @foglizzo_07; @guilet_12] and manifests itself in the form of large-scale sloshing and spiral motions of the shock. The precise mechanism whereby these instabilities aid shock revival requires careful discussion (see Section \[sec:3deffects\]), but their net effect can be quantified using the concept of the “critical luminosity” [@burrows_93] for the transition from a steady-state accretion flow to runaway shock expansion: In effect, convection and/or the SASI reduce the critical luminosity in multi-D by $20 \ldots 30 \%$ [@murphy_08b; @nordhaus_10; @hanke_12; @fernandez_15] compared to the case of spherical symmetry (1D).
Current Questions and Structure of This Review
----------------------------------------------
We cannot hope to comprehensively review all aspects of the core-collapse supernova explosion problem, even if we limit ourselves to the neutrino-driven paradigm. Instead we shall focus on the following topics that immediately connect to the above overview of the neutrino-driven mechanism:
- The neutrino-driven explosion mechanism demonstrably works at the low-mass end of supernova progenitors. In Section \[sec:ecsn\], we shall discuss the specific explosion dynamics in the region around the mass limit for iron core formation, i.e. for ECSN progenitors and structurally similar iron core progenitors. We shall also consider the nucleosynthesis in these explosions; since they are robust, occur early after bounce, and can easily be simulated until the explosion energy has saturated, explosions of ECSN and ECSN-like progenitors currently offer the best opportunity to study core-collapse supernova nucleosynthesis based on first-principle explosion models.
- For more massive progenitors, it has yet to be demonstrated that the neutrino-driven mechanism can produce robust explosions in 3D with explosion properties (e.g. explosion energy, nickel mass, remnant mass) that are compatible with observations. In Section \[sec:3d\], we shall review the current status of 3D supernova simulations, highlighting the successes and problems of the current generation of models and detailing the recent progress towards a quantitative understanding of the interplay of neutrino heating and multi-dimensional fluid flow.
- In the wake of a rapid expansion of the field of core-collapse supernova modelling, a wide variety of methods have been employed to investigate the supernova problem with a continuum from a rigorous first-principle approach to parameterised models of limited applicability that are only suitable for attacking well-circumscribed problems. In Section \[sec:methods\], we present an overview of the different numerical approaches to simulations of neutrino-driven explosions and provide some guidance for assessing and comparing simulation results.
- The problem of shock revival by the neutrino-driven mechanism has not been conclusively solved. In Section \[sec:prog\], we shall review one of the promising ideas that could help explain supernova explosions over a wide range of progenitors, viz.the suggestion that shock revival may be facilitated by strong seed perturbations from prior convective shell burning in the infalling O or Si shells [@arnett_11; @couch_13; @mueller_15a; @couch_15; @mueller_16b]; and we shall also discuss some other perspectives opened up by current and future three-dimensional simulations of late burning stages in supernova progenitors.
Potential observational probes for multi-dimensional fluid flow in the supernova core during the first $\mathord{\sim} 1 \, \mathrm{s}$ exist in the form of the neutrino and gravitational wave signals, but we shall not touch these in any depth and instead point the reader to topical reviews (@ott_08b [@kotake_13] for gravitational wave emission; @mirizzi_16 for the neutrino signal) as well as some of the major publications of recent years (gravitational waves: @mueller_13 [@yakunin_16; @nakamura_16]; neutrinos: @tamborra_13 [@tamborra_14b; @mueller_14]). Neither do we address alternative explosion scenarios here; we refer the reader to @janka_12 for a broader discussion that covers, e.g. the magnetorotational mechanism as the most likely explanation for hypernovae with explosion energies of up to $\mathord{\sim} 10^{52} \, \mathrm{erg}$.
THE LOW-MASS END ELECTRON-CAPTURE SUPERNOVAE AND THEIR COUSINS {#sec:ecsn}
==============================================================
Stars with zero-age main sequence (ZAMS) masses in the range $\mathord{\sim} 8 \ldots 10 M_\odot$ exhibit structural peculiarities during their evolution that considerably affect the supernova explosion dynamics if they undergo core collapse. The classical path towards electron-capture supernovae (ECSNe; [@nomoto_84; @nomoto_87]), where electron captures on ${}^{24}
\mathrm{Mg}$ and ${}^{20} \mathrm{Ne}$ in a degenerate O-Ne-Mg core of $\mathord{\sim 1.37} M_\odot$ drive the core towards collapse, best exemplifies these peculiarities: Only a small C/O layer is present on top of the core, and the He layer has been effectively whittled down by dredge-up. The consequence is an extremely steep density gradient between the core and the high-entropy hydrogen envelope (Figure \[fig:threshold\]). While this particular scenario is beset with many uncertainties [@siess_07; @poelarends_08; @jones_13; @jones_14; @jones_16; @doherty_15; @schwab_15; @woosley_15], recent studies of stellar evolution in the mass range around $9M_\odot$ have demonstrated that there is a variety of paths towards core-collapse that result in a similar progenitor structure [@jones_13; @woosley_15], though there is some variation, e.g. in the mass of the remaining He shell due to a different history of dredge-up events. From the perspective of supernova explosion dynamics, the crucial features in the mass range around $9M_\odot$ are the small mass of the remaining C/O shell and the rapid drop of the density outside the core; both are shared by ECSN progenitors and the lowest iron-core progenitors. This is illustrated in Figure \[fig:threshold\] (see also Figure 7 in @jones_13 and Figure 4 in @woosley_15).
![Density profiles of several low-mass supernova progenitors illustrating the conditions for ECSN-like explosions. Profiles are shown for the $8.8 M_\odot$ ECSN-progenitor of @nomoto_84 [@nomoto_87] (N8.8, black), the 8.8 $M_\odot$ “failed massive star” of @jones_13 (J8.8, purple), low-mass iron core progenitors (A. Heger, private communication) of $9.6 M_\odot$ (z9.6, with Z=0, red) and $8.1 M_\odot$ (u8.1, with $Z=10^{-4}$, blue), and iron progenitors with $10.09 M_\odot$ and $15 M_\odot$ (s10.09 and s15, from @mueller_16a, yellow and cyan) and $11.2 M_\odot$ (s11.2 from @woosley_02, green). The thick dashed vertical line roughly denotes the location of the shell that reaches the shock $0.5 \, \mathrm{s}$ after the onset of collapse. Slanted dashed lines roughly demarcate the regime where the accretion rate onto the shock reaches $0.05 M_\odot \, \mathrm{s}^{-1}$ (thick dashed line), $5 \times 10^{-3} M_\odot \, \mathrm{s}^{-1}$ (thin), and $5 \times 10^{-4} M_\odot \, \mathrm{s}^{-1}$ (thin) (see Section \[sec:ecsn\_conditions\] for details and underlying assumptions). ECSN-like explosion dynamics is expected if the density profile intersects the grey region. \[fig:threshold\]](f1.pdf){width="\linewidth"}
Explosion Dynamics in ECSN-like Progenitors
-------------------------------------------
### Classical Electron-Capture Supernova Models {#sec:classical}
The steep density gradient outside the core in ECSN-like progenitors is immediately relevant for the dynamics of the ensuing supernova because it implies a rapid decline of the mass accretion rate $\dot{M}$ as the edge of the core reaches the stalled accretion shock. A rapid drop in $\dot{M}$ implies a decreasing ram pressure ahead of the shock and a continuously increasing shock radius (though the shock remains a stationary accretion shock for at least $\mathord{\sim} 50\, \mathrm{ms}$ after bounce and longer for some ECSN-like progenitor models). Under these conditions, neutrino heating can easily pump sufficient energy into the gain region to make the accreted material unbound and power runaway shock expansion. As a result, the neutrino-driven mechanism works for ECSN-like progenitors even under the assumption of spherical symmetry. Using modern multi-group neutrino transport, this was demonstrated by @kitaura_06 for the progenitor of @nomoto_84 [@nomoto_87] and confirmed in subsequent simulations by different groups [@janka_08; @burrows_07b; @fischer_10]. The explosions are characterised by a small explosion energy of $\mathord{\sim} 10^{50}
\, \mathrm{erg}$ [@kitaura_06; @janka_08] and a small nickel mass of a few $ 10^{-3}
M_\odot$ [@wanajo_09].
Even though multi-dimensional effects are not crucial for shock revival in these models, they are not completely negligible. Higher entropies at the bottom of the gain layer lead to convective overturn driven by Rayleigh-Taylor instability shortly after the explosion is initiated [@wanajo_11]. Simulations in axisymmetry (2D) showed that this leads to a modest increase of the explosion energy in @janka_08; an effect which is somewhat larger in more recent models (von Groote et al., in preparation). The effect of Rayleigh-Taylor overturn on the ejecta composition is, however, much more prominent (see Section \[sec:nucleosynthesis\]).
### Conditions for ECSN-like Explosion Dynamics {#sec:ecsn_conditions}
Not all of the newly available supernova progenitor models at the low-mass end [@jones_13; @jones_14; @woosley_15] exhibit a similarly extreme density profile as the model of @nomoto_84 [@nomoto_87]; in some of them the density gradient is considerably more shallow (Figure \[fig:threshold\]). This prompts the questions: How steep a density gradient is required outside the core to obtain an explosion that is triggered by a rapid drop of the accretion rate and works with no or little help from multi-D effects? In reality, there will obviously be a continuum between ECSN-like events and neutrino-driven explosions of more massive stars, in which multi-D effects are crucial for achieving shock revival. Nonetheless, a rough distinction between the two different regimes is still useful, and can be based on the concept of the critical neutrino luminosity of @burrows_93.
@burrows_93 showed that stationary accretion flow onto a proto-neutron star in spherical symmetry is no longer possible if the neutrino luminosity $L_\nu$ (which determines the amount of heating) exceeds a critical value $L_\mathrm{crit}(\dot{M})$ that is well approximated by a power law in $\dot{M}$ with a small exponent, or, equivalently, if $\dot{M}$ drops below a threshold value for a given luminosity. This concept has recently been generalised [@janka_12; @mueller_15a; @summa_16; @janka_16] to a critical relation for the (electron-flavour) neutrino luminosity $L_\nu$ and neutrino mean energy $E_\nu$ as a function of mass accretion rate $\dot{M}$ and proto-neutron star mass $M$ as well as additional correction factors, e.g., for shock expansion due to non-radial instabilities.
For low-mass progenitors with tenuous shells outside the core, $M$, $L_\nu$, and $E_\nu$ do not depend dramatically on the stellar structure outside the core during the early post-bounce: The proto-neutron star mass is inevitably $M\approx 1.4 M_\odot$, and since the neutrino emission is dominated by the diffusive neutrino flux from the core, the neutrino emission properties are bound to be similar to the progenitor of @nomoto_84, i.e. one has $L_\nu
\sim 5 \times 10^{52} \, \mathrm{erg} \, \mathrm{s}^{-1}$ and $E_\nu
\approx 11 \, \mathrm{MeV}$ [@huedepohl_10], with a steady decrease of the luminosity towards later times. Using calibrated relations for the “heating functional”[^2] $L_\nu E_\nu^2$ [@janka_16], this translates into a critical mass accretion rate of $\dot{M}_\mathrm{crit} \approx 0.07 M_\odot \, \mathrm{s}^{-1}$ for ECSN-like progenitors.
To obtain similarly rapid shock expansion as for the $8.8 M_\odot$ model of @nomoto_84, $\dot{M}$ must rapidly plummet *well below* this value. This can be translated into a condition for the density profile outside the core using analytic expressions for the infall time $t_\mathrm{infall}$ and accretion rate $\dot{M}$ for mass shell $m$, which are roughly given by [@woosley_12; @woosley_15b; @mueller_16a], $$t_\mathrm{infall}=\sqrt{\frac{\pi}{4 G \bar{\rho}}}
=\sqrt{\frac{\pi^2 r^3}{3 G m}},$$ and $$\label{eq:macc}
\dot{M}=\frac{2 m }{t_\mathrm{infall}} \frac{\rho}{\bar{\rho} - \rho},$$ where $\bar{\rho}$ is the average density inside the mass shell. For progenitors with little mass outside the core, we have $$\dot{M}\approx\frac{2 m }{t_\mathrm{infall}} \frac{\rho}{\bar{\rho}}
=\frac{8\rho }{3}\sqrt{3 G m r^3}.$$ Using $m=1.4 M_\odot$ and assuming that $\dot{M}$ needs to drop at least to $M_\mathrm{crit}=0.05 M_\odot \, \mathrm{s}^{-1}$ within $0.5 \, \mathrm{s}$ after the onset of collapse to obtain ECSN-like explosion dynamics, one finds that the density needs to drop to $$\rho \lesssim \frac{1}{8}\sqrt{\frac{3}{G m}}\dot{M}_\mathrm{crit}
r^{-3/2}$$ for a radius $r < 2230 \, \mathrm{km}$.
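The numbers entering this estimate are easy to reproduce; the short Python sketch below (my own, with round-number constants and the value of $\dot{M}_\mathrm{crit}$ assumed as quoted above) evaluates the infall time and the threshold density as a function of radius.

```python
# Sketch: infall time, accretion rate, and the density bound for ECSN-like
# explosion dynamics, using the approximate formulas quoted above (cgs units).
import numpy as np

G, M_SUN = 6.674e-8, 1.989e33
m = 1.4 * M_SUN                       # enclosed mass
Mdot_crit = 0.05 * M_SUN              # critical accretion rate in g/s

def t_infall(r):
    return np.sqrt(np.pi**2 * r**3 / (3.0 * G * m))

def mdot(rho, r):                     # valid for rho much below the mean density
    return (8.0 * rho / 3.0) * np.sqrt(3.0 * G * m * r**3)

def rho_threshold(r):
    return 0.125 * np.sqrt(3.0 / (G * m)) * Mdot_crit * r**-1.5

for r_km in (500.0, 1000.0, 2000.0):
    r = r_km * 1.0e5
    print(f"r = {r_km:6.0f} km:  t_infall = {t_infall(r):4.2f} s, "
          f"rho_th = {rho_threshold(r):.2e} g/cm^3, "
          f"Mdot(rho_th) = {mdot(rho_threshold(r), r) / M_SUN:.3f} Msun/s")
```

By construction, the accretion rate evaluated at the threshold density returns the assumed critical value, which serves as a quick self-check of the algebra.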
Figure \[fig:threshold\] illustrates that the density gradient at the edge of the core can be far less extreme than in the model of @nomoto_84 to fulfil this criterion. ECSN-like explosion dynamics is expected alike for the modern $8.8M_\odot$ ECSN progenitor of @jones_13 and low-mass iron cores (A. Heger, private communication) of $8.1 M_\odot$ (with metallicity $Z=10^{-4}$) and $9.6 M_\odot$ (Z=0), though the low-mass iron core progenitors are a somewhat marginal case.
### Low-mass Iron Core Progenitors
Simulations of these two low-mass iron progenitors with $8.1 M_\odot$ [@mueller_12b] and $9.6 M_\odot$ ([@janka_12b; @mueller_13] in 2D; [@melson_15a] in 3D) nonetheless demonstrated that the structure of these stars is sufficiently extreme to produce explosions reminiscent of ECSN models: Shock revival sets in early around $100 \, \mathrm{ms}$ after bounce, aided by the drop of the accretion rate associated with the infall of the thin O and C/O shells, and the explosion energy remains small ($5
\times 10^{49} \ldots 10^{50} \, \mathrm{erg}$).
As shown by @melson_15a, there are important differences to ECSNe, however: While shock revival also occurs in spherical symmetry, multi-dimensional effects significantly alter the explosion dynamics. In 1D, the shock propagates very slowly through the C/O shell after shock revival, and only accelerates significantly after reaching the He shell. Without the additional boost by convective overturn, the explosion energy is lower by a factor of $\mathord{\sim} 5$ compared to the multi-D case. Different from ECSNe, somewhat slower shock expansion provides time for the small-scale convective plumes to merge into large structures as shown for the $9.6 M_\odot$ model of @janka_12b in Figure \[fig:z96\_2d\].
Both for the $8.8 M_\odot$ model of @wanajo_11 and the low-mass iron-core explosion models, the dynamics of the Rayleigh-Taylor plumes developing after shock revival is nonetheless quite similar. The entropy of the rising plumes is roughly $\mathord{\sim} 15 \ldots 20
k_\mathrm{b}/\mathrm{nucleon}$ compared to $\mathord{\sim}10
k_\mathrm{b}/\mathrm{nucleon}$ in the ambient medium. For such an entropy contrast, balance between buoyancy and drag forces implies a limiting velocity of the order of the speed of sound. This limit appears to be reached relatively quickly in the simulations. Apart from the very early growth phase, the plume velocities should therefore not depend strongly on the initial seed perturbations; they are rather set by bulk parameters of the system, namely the post-shock entropy at a few hundred kilometres and the entropy close to the gain radius, which together determine the entropy contrast of the plumes. This will become relevant later in our discussion of the nucleosynthesis of ECSN-like explosions.
![Entropy $s$ (left half of plot) and electron fraction $Y_e$ (right half) in the $9.6 M_\odot$ explosion model of @janka_12b and @mueller_13 $280 \, \mathrm{ms}$ after bounce. Large convective plumes push neutron-rich material from close to the gain region out at high velocities. \[fig:z96\_2d\]](f2.png){width="\linewidth"}
![Binned distribution of the electron fraction $Y_e$ in the early ejecta for different explosion models of a $9.6 M_\odot$ star $270 \, \mathrm{ms}$ after bounce. The plots show the relative contribution $\Delta M_\mathrm{ej} /M_\mathrm{ej}$ to the total mass of (shocked) ejecta in bins with $\Delta Y_e=0.01$. The upper panel shows the $Y_e$-distribution for the 2D model of @janka_12b computed using the <span style="font-variant:small-caps;">Vertex-CoCoNuT</span> code [@mueller_10]. The bottom panel illustrates the effect of stochastic variations and dimensionality using several 2D models (thin lines) and a 3D model computed with the <span style="font-variant:small-caps;">CoCoNuT-FMT</span> code @mueller_15a (thick lines). Note that the *dispersion* in $Y_e$ in the early ejecta is similar for both codes, though the average $Y_e$ in the early ejecta is spuriously low when less accurate neutrino transport is used (<span style="font-variant:small-caps;">FMT</span> instead of <span style="font-variant:small-caps;">Vertex</span>). The bottom panel is therefore only intended to show *differential effects between different models*, and is not a prediction of the absolute value of $Y_e$. It suggests that i) stochastic variations do not strongly affect the distribution of $Y_e$ in the ejecta, and that ii) the resulting distribution of $Y_e$ in 2D and 3D is relatively similar. \[fig:hists\]](f3a.pdf "fig:"){width="\linewidth"} ![Binned distribution of the electron fraction $Y_e$ in the early ejecta for different explosion models of a $9.6 M_\odot$ star $270 \, \mathrm{ms}$ after bounce. The plots show the relative contribution $\Delta M_\mathrm{ej} /M_\mathrm{ej}$ to the total mass of (shocked) ejecta in bins with $\Delta Y_e=0.01$. The upper panel shows the $Y_e$-distribution for the 2D model of @janka_12b computed using the <span style="font-variant:small-caps;">Vertex-CoCoNuT</span> code [@mueller_10]. The bottom panel illustrates the effect of stochastic variations and dimensionality using several 2D models (thin lines) and a 3D model computed with the <span style="font-variant:small-caps;">CoCoNuT-FMT</span> code @mueller_15a (thick lines). Note that the *dispersion* in $Y_e$ in the early ejecta is similar for both codes, though the average $Y_e$ in the early ejecta is spuriously low when less accurate neutrino transport is used (<span style="font-variant:small-caps;">FMT</span> instead of <span style="font-variant:small-caps;">Vertex</span>). The bottom panel is therefore only intended to show *differential effects between different models*, and is not a prediction of the absolute value of $Y_e$. It suggests that i) stochastic variations do not strongly affect the distribution of $Y_e$ in the ejecta, and that ii) the resulting distribution of $Y_e$ in 2D and 3D is relatively similar. \[fig:hists\]](f3b.pdf "fig:"){width="\linewidth"}
Nucleosynthesis {#sec:nucleosynthesis}
---------------
### 1D Electron-Capture Supernovae Models – Early Ejecta
Nucleosynthesis calculations based on modern, spherically symmetric ECSN models were first performed by @hoffman_08 and @wanajo_09. The results of these calculations appeared to point to a severe conflict with observational constraints, showing a strong overproduction of $N=50$ nuclei, in particular ${}^{90}\mathrm{Zr}$, due to the ejection of slightly neutron-rich material (electron fraction $Y_e\gtrsim 0.46$) with relatively low entropy ($s \approx 18
k_b/\mathrm{nucleon}$) immediately after shock revival. @hoffman_08 inferred that such nucleosynthesis yields would only be compatible with chemogalactic evolution if ECSNe were rare events occurring at a rate no larger than once per 3,000 years.
The low $Y_e$-values in the early ejecta stem from the ejection of matter at relatively high velocities in the wake of the fast-expanding shock. In slow outflows, neutrino absorption on neutrons and protons drives $Y_e$ to an equilibrium value that is set by the electron neutrino and antineutrino luminosities $L_{\nu_e}$ and $L_{\bar{\nu}_e}$, the “effective” mean energies[^3] $\varepsilon_{\nu_e}$ and $\varepsilon_{{\bar\nu}_e}$, and the proton-neutron mass difference $\Delta =1.293 \, \mathrm{MeV}$ as follows [@qian_96], $$Y_e \approx \left[1+\frac{L_{\bar{\nu}_e} (\varepsilon_{\bar{\nu}_e}-2 \Delta)}{L_{{\nu}_e} (\varepsilon_{{\nu}_e}+2 \Delta)}\right]^{-1}.$$ For the relatively similar electron neutrino and antineutrino luminosities and a small difference in the mean energies of $2\ldots 3 \, \mathrm{MeV}$ in modern simulations, one typically finds an asymptotic value of $Y_e>0.5$, i.e. *proton-rich* conditions. To obtain low $Y_e<0.5$ in the ejecta, neutrino absorption reactions need to freeze out at a high density (small radius) when the equilibrium between the reactions $n(\nu_e,e^-)p$ and $p(\nu_e,e^+)n$ is still skewed towards low $Y_e$ due to electron captures $p (e^-,\nu_e) n$ on protons. Neglecting the difference between arithmetic, quadratic, and cubic neutrino mean energies and assuming a roughly equal contribution of $n(\nu_e,e^-)p$ and $p(\bar{\nu}_e,e^+)n$ to the neutrino heating, one can estimate that freeze-out roughly occurs when (cp. Eq. 81 in [@qian_96]), $$\frac{v_r}{r}
\approx \frac{2 m_N \dot{q}_\nu}{E_{\nu_e}+E_{\bar{\nu}_e}},$$ where $m_N$ is the nucleon mass, $\dot{q}_\nu$ is the mass-specific neutrino heating rate, $r$ is the radius and $v_r$ is the radial velocity. Since $\dot{q}_\nu \propto r^{-2}$, freeze-out will occur at smaller $r$, higher density, and smaller $Y_e$ for higher ejection velocity.
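To see how the sign of $Y_e - 0.5$ comes about, the following sketch (mine; the luminosities and mean energies are assumed, representative numbers rather than simulation output) evaluates the equilibrium electron fraction from the expression above.

```python
# Sketch: asymptotic electron fraction in slow outflows,
#   Y_e ~ [1 + L_anue*(eps_anue - 2*Delta) / (L_nue*(eps_nue + 2*Delta))]^-1,
# with Delta = 1.293 MeV the neutron-proton mass difference.
DELTA = 1.293  # MeV

def ye_equilibrium(L_nue, eps_nue, L_anue, eps_anue):
    return 1.0 / (1.0 + L_anue * (eps_anue - 2.0 * DELTA)
                        / (L_nue * (eps_nue + 2.0 * DELTA)))

# similar nu_e / anti-nu_e luminosities, antineutrino mean energies 2-3 MeV higher
for eps_nue, eps_anue in [(12.0, 14.0), (12.0, 15.0)]:
    print(f"eps_nue = {eps_nue} MeV, eps_anue = {eps_anue} MeV -> "
          f"Y_e = {ye_equilibrium(1.0, eps_nue, 1.0, eps_anue):.3f}")
```

With such representative values $Y_e$ settles slightly above $0.5$, i.e. proton-rich, which is why neutron-rich ejecta require freeze-out at small radii as described above.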
### Multi-D Effects and the Composition of the Early Ejecta {#sec:early_ejecta}
Since high ejection velocities translate into lower $Y_e$, the Rayleigh-Taylor plumes in 2D simulations of ECSNe (Figure 2 in [@wanajo_11]) and explosions of low-mass iron cores (Figure \[fig:z96\_2d\]) contain material with even lower $Y_e$ than found in 1D ECSN models. Values of $Y_e$ as low as $0.404$ are found in @wanajo_11.
Surprisingly, @wanajo_11 found that the neutron-rich plumes did not aggravate the problematic overproduction of $N=50$ nuclei in their 2D ECSN model. This is due to the fact that the entropy in the neutron-rich lumps is actually *smaller* than in 1D[^4] (but higher than in the ambient medium), which changes the character of the nucleosynthesis by reducing the $\alpha$-fraction at freeze-out from nuclear statistical equilibrium (NSE). The result is an interesting production of trans-iron elements between Zn and Zr for the progenitor of @nomoto_84 [@nomoto_87]; the production factors are consistent with current rate estimates for ECSNe of about $4\%$ of all supernovae [@poelarends_08]. Subsequent studies showed that neutron-rich lumps in the early ejecta of ECSNe could contribute a sizeable fraction to the live $^{60}\mathrm{Fe}$ in the Galaxy [@wanajo_13b], and might be production sites for some other rare isotopes of obscure origin, such as $^{48} \mathrm{Ca}$ [@wanajo_13a]. Due to the similar explosion dynamics, low-mass iron-core progenitors exhibit rather similar nucleosynthesis (Wanajo et al., in preparation; Harris et al., in preparation). The results of these nucleosynthesis calculations tally with the observed abundance trends in metal-poor stars that suggest a separate origin of elements like Sr, Y, and Zr from the heavy r-process elements (light element primary process; [@travaglio_04; @wanajo_06; @qian_08; @arcones_11; @hansen_12; @ting_12]).
Since $Y_e$ in the early ejecta of ECSNe and ECSN-like explosion is sensitive to the neutrino luminosities and mean energies and to the ejection velocity of the convective plumes (which may be different in 3D compared to 2D, or exhibit stochastic variations), @wanajo_11 also explored the effect of potential uncertainties in the minimum $Y_e$ in the ejecta on the nucleosynthesis. They found that a somewhat lower $Y_e$ of $\sim 0.3$ in the plumes might make ECSNe a site for a “weak r-process” that could explain the enhanced abundances of lighter r-process elements up to Ag and Pd in some metal-poor halo stars [@wanajo_06; @honda_06].
Whether the neutron-rich conditions required for a weak r-process can be achieved in ECSNe or low-mass iron-core supernovae remains to be determined. Figure \[fig:hists\] provides a tentative glimpse on the effects of stochasticity and dimensionality on the $Y_e$ in neutron-rich plumes based on several 2D and 3D explosion models of a $9.6 M_\odot$ low-mass iron core progenitor (A. Heger, private communication) conducted using the <span style="font-variant:small-caps;">FMT</span> transport scheme of @mueller_15a.[^5] Stochastic variations in 2D models due to different (random) initial perturbations shift the minimum $Y_e$ in the ejecta at most by $0.02$. This is due to the fact that the Rayleigh-Taylor plumes rapidly transition from the initial growth phase to a stage where buoyancy and drag balance each other and determine the velocity [@alon_95]. 3D effects do not change the distribution of $Y_e$ tremendously either, at best they tend to shift it to slightly higher values compared to 2D, which is consistent with a somewhat stronger braking of expanding bubbles in 3D as a result of the forward turbulent cascade [@melson_15a]. It thus appears unlikely that the dynamics of convective overturn is a major source of uncertainty for the nucleosynthesis in ECSN-like explosions, though confirmation with better neutrino transport is still needed.
If these events are indeed sites of a weak r-process, the missing ingredient is likely to be found elsewhere. Improvements in the neutrino opacities, such as the proper inclusion of nucleon potentials in the charged-current interaction rates [@martinez_12; @roberts_12c], or flavour oscillations involving sterile neutrinos [@wu_14] could lower $Y_e$ somewhat. @wu_14 found a significant reduction of $Y_e$ by up to $0.15$ in some of the ejecta, but these results may depend sensitively on the assumption that collective flavour oscillations are still suppressed during the phase in question. Moreover, @wu_14 pointed out that a reduction of $Y_e$ with the help of active-sterile flavour conversion might require delicate fine-tuning to avoid shutting off neutrino heating before the onset of the explosion due to the disappearance of $\nu_e$’s (which could be fatal to the explosion mechanism).
Moreover, whether ECSNe necessarily *need* to co-produce Ag and Pd with Sr, Y, and Zr is by no means clear. While observed abundance trends may suggest such a co-production, the abundance patterns of elements between Sr and Ag in metal-poor stars appear less robust [@hansen_14]; and the failure of unaltered models to produce Ag and Pd may not be indicative of a severe tension with observations.
### Other Nucleosynthesis Scenarios for Electron-Capture Supernovae
There are at least two other potentially interesting sites for nucleosynthesis in ECSN-like supernovae. For “classical” ECSN-progenitors with more extreme density profiles, it has been proposed that the rapid acceleration of the shock in the steep density gradient outside the core can lead to sufficiently high post-shock entropies ($s \sim 100 \,
k_b/\mathrm{nucleon}$) and short expansion time-scales ($\tau_\mathrm{exp} \sim 10^{-4} \, \mathrm{s}$) to allow r-process nucleosynthesis in the thin shells outside the core [@ning_07]. This has not been borne out by numerical simulations, however [@janka_08; @hoffman_08]. When the requisite high entropy is reached, the post-shock temperature has already dropped far too low to dissociate nuclei, and the expansion time-scale does not become sufficiently short for the scenario of @ning_07 to work. The proposed r-process in the rapidly expanding shocked shells would require significantly different explosion dynamics, e.g. a much higher explosion energy.
The neutrino-driven wind that is launched after accretion onto the proto-neutron star has completely subsided has long been discussed as a potential site of r-process nucleosynthesis in supernovae [@woosley_94; @takahashi_94; @qian_96; @cardall_97; @thompson_01; @arcones_07; @arcones_13]. ECSN-like explosions are in many respects the least favourable site for an r-process in the neutrino-driven wind since they produce low-mass neutron stars, which implies low wind entropies and long expansion time-scales [@qian_96], i.e. conditions that are detrimental to r-process nucleosynthesis. However, ECSNe are unique inasmuch as the neutrino-driven wind can be calculated self-consistently with Boltzmann neutrino transport [@huedepohl_10; @fischer_10] without the need to trigger an explosion artificially. These simulations revealed a neutrino-driven wind that is not only of moderate entropy ($s \lesssim 140
k_b/\mathrm{nucleon}$ even at late times), but also becomes increasingly *proton-rich* with time, in which case the $\nu
p$-process [@froehlich_06] could potentially operate. The most rigorous nucleosynthesis calculations for the neutrino-driven wind in ECSNe so far [@pllumbi_15] are based on simulations that properly account for nucleon interaction potentials in the neutrino opacities [@martinez_12; @roberts_12c] and have also explored the effects of collective flavour oscillations and active-sterile flavour conversion. @pllumbi_15 suggest that wind nucleosynthesis in ECSNe is rather mundane: Neither does the $\nu p$-process operate, nor can sufficiently neutron-rich conditions be restored for even a weak r-process. Instead, they find that wind nucleosynthesis mainly produces nuclei between Sc and Zn, but the production factors are low, implying that the role of neutrino-driven winds in ECSNe is negligible for this mass range from the perspective of chemogalactic evolution.
Electron-Capture Supernovae – Transients and Remnants
-----------------------------------------------------
Although ECSNe are in many respects the best understood of all core-collapse supernova types from the viewpoint of the explosion mechanism, unambiguously identifying observed transients as ECSNe has proved more difficult. It has long been proposed that SN 1054 was an ECSN [@nomoto_82] based on the properties of its remnant, the Crab nebula: The total mass of ejecta in the nebula is small ($\lesssim 5 M_\odot$; [@davidson_85; @macalpine_91; @fesen_97]), as is the oxygen abundance [@davidson_82; @henry_82; @henry_86], which is in line with the thin O-rich shells in ECSN progenitors. Moreover, the kinetic energy of the ejecta is only $\lesssim
10^{50} \, \mathrm{erg}$ [@fesen_97; @hester_08] as expected for an ECSN-like event. Whether the Crab originates from a classical ECSN or from something slightly different like a “failed massive star” of @jones_13 continues to be debated; @macalpine_08 have argued, for example, against the former interpretation based on a high abundance ratio of C vs. N and the detection of some ashes of oxygen burning (S, Ar) in the nebula.
It has been recognised in recent years that the (reconstructed) light curve of SN 1054 – a type IIP supernova with a relatively bright plateau – is also compatible with the low explosion energy of $\lesssim 10^{50} \, \mathrm{erg}$ predicted by recent numerical simulations. @smith_13 interpreted the bright plateau, which made SN 1054 visible in daylight for $\mathord{\sim} 3 \ \mathrm{weeks}$, as the result of interaction with the circumstellar medium (CSM). The scenario of @smith_13 requires significant mass loss ($0.1 M_\odot$ for about 30 years) shortly before the supernova, which may be difficult to achieve, although some channels towards ECSN-like explosions could involve dramatic mass loss events [@woosley_15]. Subsequent numerical calculations of ECSN light curves [@tominaga_13; @moriya_14] demonstrated, however, that less extreme assumptions for the mass loss are required to explain the optical signal of SN 1054; indeed, a very extended hydrogen envelope may be sufficient to explain the bright plateau, and CSM interaction with the progenitor wind may only be required to prevent the SN from fading too rapidly.
Several other transients have also been interpreted as ECSNe, e.g. faint type IIP supernovae such as SN 2008S [@botticella_09]. @smith_13 posits that ECSNe are observed as type IIn-P supernovae with circumstellar interaction like SN 1994W, i.e. with a bright plateau and a relatively sharp drop to a faint nickel-powered tail, but again the required amount of CSM is not easy to explain. All of these candidate events share low kinetic energies and small nickel masses as a common feature and are thus *prima facie* compatible with ECSN-like explosion dynamics. Variations in the envelope structure of ECSN-progenitors (e.g. envelope stripping in binaries) may account for the very different optical signatures [@moriya_14].
The peculiar nucleosynthesis in ECSN-like explosions may also leave observable fingerprints in the electromagnetic signatures. The slightly neutron-rich character of the early ejecta results in a strongly supersolar abundance ratio of Ni to Fe after $\beta$-decays are completed [@wanajo_11]. Such high Ni/Fe ratios are seen in the nebular spectra of some supernovae [@jerkstrand_15a; @jerkstrand_15b]. ECSNe can only explain some of these events, however; many of them exhibit explosion energies and nickel masses that are incompatible with an ECSN.
3D SUPERNOVA MODELS OF MASSIVE PROGENITORS {#sec:3d}
==========================================
In more massive progenitors with extended Si and O shells, the mass accretion rate onto the shock does not drop as rapidly as in ECSN-like explosions. Typically, one finds a relatively stable accretion rate of a few $0.1 M_\odot \, \mathrm{s}^{-1}$ during the infall of the O shell, which implies a high ram pressure ahead of the shock. Under these conditions, it is no longer trivial to demonstrate that neutrino heating can pump a sufficient amount of energy into the post-shock region to power runaway shock expansion. 1D simulations of the post-bounce phase using Boltzmann solvers for the neutrino transport convincingly demonstrated that neutrino-driven explosions cannot be obtained under such conditions in spherical symmetry [@liebendoerfer_00; @rampp_00; @thompson_00b]. Much of the work of recent years has therefore focused on better understanding and accurately modelling how multi-dimensional effects in supernovae facilitate neutrino-driven explosions – an undertaking first begun in the 1990s with axisymmetric (2D) simulations employing various approximations for neutrino heating and cooling [@herant_92; @yamada_93; @herant_94; @burrows_95; @janka_95; @janka_96]. 2D simulations have by now matured to the point that multi-group neutrino transport and the neutrino-matter interactions can be modelled with the same rigour as in spherical symmetry [@livne_04; @buras_06a; @mueller_10; @bruenn_13; @just_15; @skinner_15], or, still with acceptable accuracy for many purposes (see Section \[sec:methods\] for a more careful discussion), by using some approximations either in the transport treatment or the neutrino microphysics [@suwa_10; @mueller_15a; @pan_16; @oconnor_16; @roberts_16].
Prelude – First-principle 2D Models
-----------------------------------
The current generation of 2D supernova simulations with multi-group neutrino transport has gone a long way towards demonstrating that neutrino heating can bring about explosion in conjunction with convection or the SASI. Thanks to steadily growing computational resources, the range of successful neutrino-driven explosion models has grown from about a handful in mid-2012 [@buras_06b; @marek_09; @suwa_10; @mueller_12a] to a huge sample of explosion models with ZAMS masses between $10 M_\odot$ and $ 75
M_\odot$, different metallicities, and different choices for the supranuclear equation of state [@mueller_12b; @janka_12b; @suwa_13; @bruenn_13; @obergaulinger_14; @nakamura_15; @mueller_15b; @bruenn_16; @oconnor_16; @summa_16; @pan_16].
Many of the findings from these simulations remain important and valid after the advent of 3D modelling: The 2D models have established, among other things, the existence of distinct SASI- and convection-dominated regimes in the accretion phase, both of which can lead to successful explosion [@mueller_12b] in agreement with tunable, parameterised models [@scheck_08; @fernandez_14]. They have shown that “softer” nuclear equations of state that result in more compact neutron stars are generally favourable for shock revival [@janka_12; @suwa_13; @couch_12a]. The inclusion of general relativistic effects, whether by means of the conformally-flat approximation (CFC) or, less rigorously, an effective pseudo-relativistic potential for Newtonian hydrodynamics, was found to have a similarly beneficial effect (CFC: [@mueller_12a]; pseudo-Newtonian: [@oconnor_16]). Moreover, there are signs that the 2D models of some groups converge with each other; simulations of four different stellar models ($12,15,20,25 M_\odot$) of @woosley_07 by @summa_16 and @oconnor_16 have yielded quantitatively similar results.
Despite these successes, 2D models have, by and large, struggled to reproduce the typical explosion properties of supernovae. They are often characterised by a slow and unsteady growth of the explosion energy after shock revival. Usually the growth of the explosion energy cannot be followed beyond $2\ldots 4 \times 10^{50} \, \mathrm{erg}$ after simulating up to $\mathord{\sim} 1 \, \mathrm{s}$ of physical time [@janka_12b; @nakamura_15; @oconnor_16], i.e. below typical observed values of $5\ldots 9 \times 10^{50} \, \mathrm{erg}$ [@kasen_09; @pejcha_15c]. Only the models of @bruenn_16 reach significantly higher explosion energies. While the explosion energy often has not levelled out yet at the end of the simulations and may still grow significantly for several seconds [@mueller_15b], its continuing growth comes at the expense of long-lasting accretion onto the proto-neutron star. This may result in inordinately high remnant masses. Thus, while 2D models appeared to have solved the problem of shock revival, they faced an *energy problem* instead.
Status of 3D Core-Collapse Supernova Models
-------------------------------------------
Before 3D modelling began in earnest (leaving aside tentative sallies into 3D by @fryer_02), it was hoped that 3D effects might facilitate shock revival even at earlier times than in 2D, and that this might then also provide a solution to the energy problem, since more energy can be pumped into the neutrino-heated ejecta at early times when the mass in the gain region is larger. These hopes were dashed once several groups investigated the role of 3D effects in the explosion mechanism using a simple “light-bulb” approach, where the neutrino luminosity and mean energy during the accretion phase are prescribed and very simple approximations for the neutrino heating and cooling terms are employed. Although @nordhaus_10 initially claimed a significant reduction of the critical neutrino luminosity for shock revival in 3D compared to 2D based on such an approach, these results were affected by the gravity treatment [@burrows_12] and have not been confirmed by subsequent studies. Similar parameterised simulations have shown that the critical luminosity in 3D is roughly equal to that in 2D [@hanke_12; @couch_12b; @burrows_12; @dolence_13] and about $20\%$ lower than in 1D, though the studies disagree about the precise hierarchy between 2D and 3D.
Subsequent supernova models based on multi-group neutrino transport yielded even more unambiguous results: Shock revival in 3D was either not achieved for progenitors that explode in 2D [@hanke_13; @tamborra_14a], or was delayed significantly [@takiwaki_14; @melson_15b; @lentz_15]. These first disappointing results need to be interpreted carefully, however: A detailed analysis of the heating conditions in the non-exploding 3D models of $11.2 M_\odot$, $20 M_\odot$, and $27 M_\odot$ progenitors simulated by the Garching supernova group revealed that these are very close to shock revival [@hanke_13; @hanke_phd; @melson_15b]. Moreover, the 3D models of the Garching group are characterised by more optimistic heating conditions, larger average shock radii, and higher kinetic energies in non-spherical motions compared to 2D for extended periods of time; the same is true for the delayed (compared to 2D) 3D explosion of @lentz_15 of a $15 M_\odot$ progenitor. It is merely when it comes to sustaining shock expansion that the 3D models prove less resilient than their 2D counterparts, which transition into an explosive runaway more robustly.
The conclusion that 3D models are only slightly less prone to explosion is reinforced by the emergence of the first successful simulations of shock revival in $20 M_\odot$ [@melson_15b] and $15 M_\odot$ [@lentz_15] progenitors using rigorous multi-group neutrino transport and the best available neutrino interaction rates. There are also a number of 3D explosion models based on more simplified approaches to multi-group neutrino transport [@takiwaki_12; @takiwaki_14; @mueller_15b; @roberts_16].
How Do Multi-D Effects Facilitate Shock Revival? {#sec:3deffects}
------------------------------------------------
Despite these encouraging developments, several questions now need to be addressed to make further progress: What is the key to *robust* 3D explosion models across the entire progenitor mass range for which we observe explosions (i.e. at least up to $15 \ldots 18 M_\odot$; see [@smartt_09a] and [@smartt_15])? This question is tightly connected to another, more fundamental one, namely: What are the conditions for an explosive runaway, and how do multi-dimensional effects modify them?
### Conditions for Runaway Shock Expansion
Even without the complications of multi-D fluid flow, the physics of shock revival is subtle. In spherical symmetry, one can show that for a given mass accretion rate $\dot{M}$, there is a maximum (“critical”) electron-flavour luminosity $L_\nu$ at the neutrinosphere above which stationary accretion flow onto the proto-neutron star is no longer possible ([@burrows_93]; cp. Section \[sec:ecsn\]). This also holds true if the contribution of the accretion luminosity due to cooling outside the neutrinosphere is taken into account [@pejcha_12]. The limit for the existence of stationary solutions does not perfectly coincide with the onset of runaway shock expansion, however. Using 1D light-bulb simulations (i.e. neglecting the contribution of the accretion luminosity), @fernandez_12 and @gabay_15 showed that the accretion flow becomes unstable to oscillatory and non-oscillatory instability slightly below the limit of @burrows_93. Moreover, it is unclear whether the negative feedback of shock expansion on the accretion luminosity and hence on the neutrino heating could push models into a limit cycle (cp. Figure 28 of [@buras_06a]) even above the threshold for non-stationarity.
Since an *a priori* prediction of the critical luminosity $L_\nu
(\dot{M})$ is not feasible, heuristic criteria have been developed [@janka_98; @janka_01b; @thompson_00; @thompson_05; @buras_06b; @murphy_08b; @pejcha_12; @fernandez_12; @gabay_15; @murphy_15] to gauge the proximity of numerical supernova models to an explosive runaway (rather than for pinpointing the formal onset of the runaway after the fact, which is of less interest). The most commonly used criticality parameters are based on the ratio of two relevant time-scales for the gain region [@janka_98; @janka_01b; @thompson_00; @thompson_05; @buras_06b; @murphy_08b], namely the advection or dwell time $\tau_\mathrm{adv}$ that accreted material spends in the gain region, and the heating time-scale $\tau_\mathrm{heat}$ over which neutrino energy deposition changes the total or internal energy of the gain region appreciably. If $\tau_\mathrm{adv} > \tau_\mathrm{heat}$, neutrino heating can equalise the net binding energy of the accreted material before it is lost from the gain region, and one expects that the shock must expand significantly due to the concomitant increase in pressure. Since this expansion further increases $\tau_\mathrm{adv}$, an explosive runaway is likely to ensue.
The time-scale criterion $\tau_\mathrm{adv}/\tau_\mathrm{heat}>1$ has the virtue of being easy to evaluate since the two time-scales can be defined in terms of global quantities such as the total energy $E_\mathrm{tot,g}$ in the gain region, the volume-integrated neutrino heating rate $\dot{Q}_\nu$, and the mass $M_\mathrm{g}$ in the gain region (which can be used to define $\tau_\mathrm{adv}=M_\mathrm{gain}/\dot{M}$ under steady-state conditions). The significance of these global quantities for the problem of shock revival is immediately intuitive, though care must be taken to define the heating time-scale properly. @thompson_00, @thompson_05, @murphy_08b, and @pejcha_12 define $\tau_\mathrm{heat}$ as the time-scale for changes in the internal energy $E_\mathrm{int}$ in the gain region, $$\tau_\mathrm{heat}
=\frac{E_\mathrm{int}}{\dot{Q}_\nu},$$ based on the premise that shock expansion is regulated by the increase in pressure (and hence in internal energy). This definition yields unsatisfactory results, however. The criticality parameter can be spuriously low at shock revival if this definition is used ($\tau_\mathrm{adv}/\tau_\mathrm{heat}<0.4$).
By defining $\tau_\mathrm{heat}$ in terms of the total (internal+kinetic+potential) energy[^6] of the gain region [@buras_06b], $$\tau_\mathrm{heat}
=\frac{E_\mathrm{tot,g}}{\dot{Q}_\nu},$$ the criterion $\tau_\mathrm{adv}/\tau_\mathrm{heat}>1$ becomes a very accurate predictor for non-oscillatory instability [@fernandez_12; @gabay_15]. This indicates that the relevant energy scale for the quasi-hydrostatic stratification of the post-shock region is the total energy (or perhaps the total or stagnation enthalpy) of the gain region, and not the internal energy. This is consistent with the observation that runaway shock expansion occurs roughly once the total energy or the Bernoulli integral [@fernandez_12; @burrows_95] reaches positive values somewhere (*not* everywhere) in the post-shock region, which is essentially what the time-scale criterion estimates. What is crucial is that the density and pressure gradients between the gain radius and the shock (and hence the shock position) depend sensitively on the *ratio* of enthalpy $h$ (or the internal energy) and the gravitational potential, rather than on enthalpy alone. Under the (justified) assumption that terms quadratic in $v_r$ in the momentum and energy equation are sufficiently small to be neglected in the post-shock region, one can show (see Appendix \[sec:app\_gradient\]) that the logarithmic derivative of the density $\rho$ in the gain region is constrained by $$\frac{{\partial}\ln \rho}{{\partial}\ln r}
>-\frac{3GM}{rh},$$ where $M$ is the proto-neutron star mass. Once $h>GM/r$ or even $e_\mathrm{int}>GM/r$ (where $e_\mathrm{int}$ is the internal energy per unit mass), significant shock expansion must ensue due to the flattening of pressure and density gradients.
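To make the time-scale criterion concrete, the short Python sketch below evaluates $\tau_\mathrm{adv}/\tau_\mathrm{heat}$ from the global quantities introduced above. All input values are illustrative placeholders of roughly the right order of magnitude for the accretion phase; they are not taken from any particular simulation.

```python
M_sun = 1.989e33   # solar mass [g]

def criticality_parameter(M_gain, Mdot, E_tot_gain, Q_dot_nu):
    """Return tau_adv/tau_heat for the gain region.

    M_gain     : mass in the gain region [g]
    Mdot       : mass accretion rate through the shock [g/s]
    E_tot_gain : total (internal+kinetic+potential) energy of the gain region [erg]
    Q_dot_nu   : volume-integrated neutrino heating rate [erg/s]
    """
    tau_adv = M_gain / Mdot                 # dwell time of the accreted material
    tau_heat = abs(E_tot_gain) / Q_dot_nu   # time to neutralise the net binding energy
    return tau_adv / tau_heat

ratio = criticality_parameter(M_gain=5e-3 * M_sun,   # ~0.005 M_sun in the gain region
                              Mdot=0.3 * M_sun,      # ~0.3 M_sun/s accretion rate
                              E_tot_gain=-1e50,      # net binding energy of the gain region
                              Q_dot_nu=5e51)         # volume-integrated heating rate
print(f"tau_adv/tau_heat = {ratio:.2f}  (runaway shock expansion expected for values > ~1)")
```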
@janka_12, @mueller_15a and @summa_16 have also pointed out that the time-scale criterion can be converted into a scaling law for the critical electron-flavour luminosity $L_\nu$ and mean energy $E_\nu$ in terms of the proto-neutron star mass $M$, the accretion rate $\dot{M}$, and the gain radius $r_\mathrm{g}$, $$(L_\nu E_\nu^2)_\mathrm{crit} \propto (\dot{M} M)^{3/5}
r_\mathrm{g}^{-2/5}.$$ The concept of the critical luminosity, the time-scale criterion, and the condition of positive total energy or a positive Bernoulli parameter at the gain radius are thus intimately related and appear virtually interchangeable considering that they remain *approximate criteria* for runaway shock expansion anyway. This is also true for some other explosion criteria that have been proposed, e.g. the antesonic condition of @pejcha_12, which states that the sound speed $c_\mathrm{s}$ must exceed a certain fraction of the escape velocity $v_\mathrm{esc}$ for runaway shock expansion somewhere in the accretion flow, $$c_\mathrm{s}^2>3/16 v_\mathrm{esc}^2.$$ Approximating the equation of state as a radiation-dominated gas with an adiabatic index $\gamma=4/3$ and a pressure of $P=\rho e_\mathrm{int} /3= \rho h/4$, one finds that the antesonic condition roughly translates to $$\frac{c_\mathrm{s}^2}{3/16 v_\mathrm{esc}^2}
=\frac{4/3 P/\rho}{3/8 GM/r}
=\frac{32 e_\mathrm{int}}{27 GM/r}
=\frac{8h}{9GM/r}>1,$$ i.e. the internal energy and the enthalpy must be close to the gravitational binding energy (even if the precise critical values for $e_\mathrm{int}$ and $h$ may shift a bit for a realistic equation of state).[^7]
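The following sketch, with purely illustrative input values, shows how two of these closely related criteria can be evaluated in practice: the scaling of the critical heating functional with the accretion rate, and the antesonic ratio for an assumed sound speed and proto-neutron star mass.

```python
G = 6.674e-8       # gravitational constant [cgs]
M_sun = 1.989e33   # solar mass [g]

def critical_heating_functional(Mdot, M, r_gain):
    """Scaling (not an absolute value) of (L_nu E_nu^2)_crit ~ (Mdot M)^(3/5) r_gain^(-2/5);
    the unknown proportionality constant is omitted."""
    return (Mdot * M)**0.6 * r_gain**(-0.4)

def antesonic_ratio(c_s, r, M):
    """Antesonic parameter c_s^2 / (3/16 v_esc^2); runaway expected once it exceeds ~1."""
    v_esc_sq = 2.0 * G * M / r
    return c_s**2 / (3.0 / 16.0 * v_esc_sq)

# Relative drop of the critical heating functional when the accretion rate halves
# (e.g. at a shell interface), for fixed proto-neutron star mass and gain radius:
drop = (critical_heating_functional(0.15 * M_sun, 1.5 * M_sun, 8e6)
        / critical_heating_functional(0.30 * M_sun, 1.5 * M_sun, 8e6))
print(f"(L E^2)_crit changes by a factor {drop:.2f} when Mdot halves")

# Antesonic ratio for c_s = 2e9 cm/s at r = 150 km around a 1.5 M_sun proto-neutron star:
print(f"antesonic ratio = {antesonic_ratio(2e9, 1.5e7, 1.5 * M_sun):.2f}")
```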
### Impact of Multi-D Effects on the Heating Conditions {#sec:heat3d}
Why do multi-D effects bring models closer to shock revival, and how is this reflected in the aforementioned explosion criteria? Do these explosion criteria even remain applicable in multi-D in the first place?
The canonical interpretation has long been that the runaway condition $\tau_\mathrm{adv}>\tau_\mathrm{heat}$ remains the decisive criterion in multi-D, and that multi-D effects facilitate shock revival mainly by increasing the advection time-scale $\tau_\mathrm{adv}$ [@buras_06b; @murphy_08b]. Especially close to criticality, $\tau_\mathrm{heat}$ is also shortened due to feedback processes – better heating conditions imply that the net binding energy in the gain region and hence $\tau_\mathrm{heat}$ must decrease.
While simulations clearly show increased advection time-scales in multi-D compared to 1D [@buras_06b; @murphy_08b; @hanke_12] as a result of larger shock radii, the underlying cause for larger accretion shock radii in multi-D is more difficult to pinpoint. Ever since the first 2D simulations, both the transport of neutrino-heated high-entropy material from the gain radius out to the shock [@herant_94; @janka_96] as well as the “turbulent pressure” of convective bubbles colliding with the shock [@burrows_95] have been invoked to explain larger shock radii in multi-D. Both effects are plausible since they change the components $P$ (thermal pressure) and $\rho \mathbf{v} \otimes \mathbf{v}$ (where $\mathbf{v}$ is the velocity) of the momentum stress tensor that must balance the ram pressure upstream of the shock during stationary accretion.
That the turbulent pressure plays an important role follows already from the high turbulent Mach number $\mathord{\sim 0.5}$ in the post-shock region [@burrows_95; @mueller_12b] before the onset of shock revival, and has been demonstrated quantitatively by @murphy_12 and @couch_14 using spherical Reynolds decomposition to analyse parameterised 2D and 3D simulations. Using a simple estimate for the shock expansion due to turbulent pressure, @mueller_15a were even able to derive the reduction of the critical heating functional in multi-D compared to 1D in terms of the average squared turbulent Mach number $\langle
\mathrm{Ma}^2\rangle$ in the gain region, $$\begin{aligned}
\label{eq:lcrit_3d}
(L_\nu E_\nu^2)_\mathrm{crit,2D}
&\approx&
(L_\nu E_\nu^2)_\mathrm{crit,1D} \left(1+\frac{4\langle \mathrm{Ma}^2 \rangle}{3}\right)^{-3/5}\\
\nonumber &\propto& (\dot{M} M)^{3/5}
r_\mathrm{g}^{-2/5}
\left(1+\frac{4\langle \mathrm{Ma}^2 \rangle}{3}\right)^{-3/5},\end{aligned}$$ and then obtained $(L_\nu E_\nu^2)_\mathrm{crit,2D} \approx 0.75
(L_\nu E_\nu^2)_\mathrm{crit,1D}$ in rough agreement with simulations using a model for the saturation of non-radial fluid motions (see Section \[sec:saturation\]).
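Equation (\[eq:lcrit\_3d\]) is easy to evaluate numerically; the snippet below reproduces the quoted reduction of the critical heating functional to about $75\%$ of the 1D value for $\langle \mathrm{Ma}^2 \rangle \approx 0.46$ and shows how the reduction weakens for smaller turbulent Mach numbers.

```python
def multi_d_reduction(ma2):
    """Reduction factor (1 + 4/3 <Ma^2>)^(-3/5) of the critical heating functional
    due to turbulent stresses, cf. Equation (lcrit_3d)."""
    return (1.0 + 4.0 * ma2 / 3.0)**(-0.6)

for ma2 in (0.1, 0.2, 0.3, 0.4649):
    print(f"<Ma^2> = {ma2:.4f}: (L E^2)_crit,multi-D / (L E^2)_crit,1D = {multi_d_reduction(ma2):.3f}")
```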
Nonetheless, there is likely no monocausal explanation for better heating conditions in multi-D. @yamasaki_06 found, for example, that convective energy transport from the gain radius to the shock also reduces the critical luminosity (although they somewhat overestimated the effect by assuming constant entropy in the entire gain region). Convective energy transport reduces the slope of the pressure gradient between the gain radius (where the pressure is set by the neutrino luminosity and mean energy) and the shock, and thus pushes the shock out by increasing the thermal post-shock pressure. That this effect also plays a role alongside the turbulent pressure can be substantiated by an analysis of neutrino hydrodynamics simulations (Bollig et al. in preparation).
Only a detailed analysis of the properties of turbulence in the gain region [@murphy_11] combined with a model for the interaction of turbulence with a non-spherical accretion shock will reveal the precise combination of multi-D effects that conspire to increase the shock radius compared to 1D. This is no prerequisite for understanding the impact of multi-D effects on the runaway condition as encapsulated by a phenomenological correction factor in Equation (\[eq:lcrit\_3d\]), since effects like turbulent energy transport, turbulent bulk viscosity, etc. will also scale with the square of the turbulent Mach number in the post-shock region just like the turbulent pressure. They are effectively lumped together in the correction factor $(1+4 /3 \langle
\mathrm{Ma}^2 \rangle)^{-3/5}$. *The turbulent Mach number in the post-shock region is thus the crucial parameter for the reduction of the critical luminosity in multi-D*, although the coefficient of $\langle\mathrm{Ma}^2\rangle$ still needs to be calibrated against multi-D simulations (and may be different in 2D and 3D).
This does *not* imply, however, that the energetic requirements for runaway shock expansion in multi-D are fundamentally different from 1D: Runaway still occurs roughly once some material in the gain region first acquires positive total (internal+kinetic+potential) energy $e_\mathrm{tot}$; and the required energy input for this ultimately stems from neutrino heating.[^8]
![Comparison of the root-mean-square average $\delta v$ of non-radial velocity component in the gain region (black) with two phenomenological models for the saturation of non-radial instabilities in a SASI-dominated 3D model of an $18 M_\odot$ star using the <span style="font-variant:small-caps;">CoCoNuT-FMT</span> code. The red curve shows an estimate based on Equation (\[eq:vturb\]), which rests on the assumption of a balance between buoyant driving and turbulent dissipation [@murphy_12; @mueller_15a]. The blue curve shows the prediction of Equation (\[eq:vsasi\]), which assumes that saturation is regulated by a balance between the growth rate of the SASI and parasitic Kelvin-Helmholtz instabilities [@guilet_10]. Even though Equation (\[eq:vsasi\]) assumes a constant quality factor $|\mathcal{Q}|$ to estimate the SASI growth rate, it appears to provide a good estimate for the dynamics of the model. Interestingly, the saturation models for the SASI- and convection dominated regimes give similar results during later phases even though the mechanism behind the driving instability is completely different. \[fig:vturb3d\]](f4.pdf){width="\linewidth"}
### Saturation of Instabilities {#sec:saturation}
What complicates the role of multi-D effects in the neutrino-driven mechanism is that the turbulent Mach number in the gain region itself depends on the heating conditions, which modify the growth rates and saturation properties of convection and the SASI. Considerable progress has been made in recent years in understanding this feedback mechanism and the saturation properties of these two instabilities.
The *linear* phases of convection and the SASI are now rather well understood. The growth rates for buoyancy-driven convective instability are expected to be of the order of the Brunt-Väisälä frequency $\omega_\mathrm{BV}$, which can be expressed in terms of $P$, $\rho$, $c_\mathrm{s}$, and the local gravitational acceleration $g$ as[^9] $$\omega_\mathrm{BV}^2=g \left(\frac{1}{\rho } \frac{{\partial}\rho}{{\partial}r}-
\frac{1}{\rho c_\mathrm{s}^2}\frac{{\partial}P}{{\partial}r}\right),$$ which becomes positive in the gain region due to neutrino heating. A first-order estimate yields $$\omega_\mathrm{BV}^2 \sim
\frac{G M \dot{Q}_\nu}{4\dot{M} r_\mathrm{g} ^2 c_\mathrm{s}^2 \left(r_\mathrm{sh}-r_\mathrm{g}\right)}
\sim \frac{3\dot{Q}_\nu}{4\dot{M} r_\mathrm{g} \left(r_\mathrm{sh}-r_\mathrm{g}\right)},$$ using $c_\mathrm{s}^2 \approx GM/(3r_\mathrm{g})$ at the gain radius [cp. @mueller_15a]. An important subtlety is that advection can stabilise the flow so that $\omega_\mathrm{BV}^2>0$ is no longer sufficient for instability unless large seed perturbations in density are already present. Instability instead depends on the more restrictive criterion for the parameter $\chi$ [@foglizzo_06], $$\chi = \int\limits_{r_\mathrm{g}}^{r_\mathrm{sh}}\frac{\omega_\mathrm{BV}}{|v_r|}{\mathrm{d}}r,$$ with $\chi \gtrsim 3$ indicating convective instability.
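As a concrete illustration of the $\chi$ criterion, the sketch below evaluates the integral for a toy gain-region profile with a constant buoyancy growth rate and a power-law advection velocity; both the profile and the numbers are assumptions chosen only for illustration.

```python
import numpy as np

def chi_parameter(r, omega_bv, v_r):
    """Foglizzo chi parameter: integral of omega_BV/|v_r| across the gain region,
    evaluated with a simple trapezoidal rule (only radii with omega_BV^2 > 0
    should be included)."""
    integrand = omega_bv / np.abs(v_r)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

r        = np.linspace(8e6, 1.5e7, 200)      # gain radius 80 km to shock radius 150 km
omega_bv = 200.0 * np.ones_like(r)           # buoyancy growth rate ~ 200 rad/s
v_r      = -3e8 * (r / r[-1])**(-1.5)        # advection velocity of a few 10^8 cm/s

print(f"chi = {chi_parameter(r, omega_bv, v_r):.1f}  (convective instability expected for chi >~ 3)")
```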
The scaling of the linear growth rate $\omega_\mathrm{SASI}$ of SASI modes is more complicated, since it involves both the duration $\tau_\mathrm{cyc}$ of the underlying advective-acoustic cycle and a quality factor $\mathcal{Q}$ for the conversion of vorticity and entropy perturbations into acoustic perturbations in the deceleration region below the gain region and for the reverse process at the shock [@foglizzo_06; @foglizzo_07], $$\label{eq:om_sasi}
\omega_\mathrm{SASI}
\sim \frac{ \ln |\mathcal{Q}|} {\tau_\mathrm{cyc}}.$$ For realistic models with strong SASI, one finds $\ln |\mathcal{Q}|
\sim 2$ [@scheck_08; @mueller_12b]. SASI growth appears to be suppressed for $\chi \gtrsim 3$ probably because convection destroys the coherence of the waves involved in the advective-acoustic cycle [@guilet_10]. Interestingly, the demarcation line $\chi=3$ between the SASI- and convection-dominated regimes is also valid in the non-linear regime if $\chi$ is computed from the angle- and time-averaged mean flow [@fernandez_12]; and both the SASI and convection appear to drive $\chi$ close to this critical value [@fernandez_12].
Both in the SASI-dominated regime and the convection-dominated regime, large growth rates are observed in simulations. It only takes a few tens of milliseconds until the instabilities reach their saturation amplitudes. For this reason, the turbulent Mach number and the beneficial effect of multi-D effects on the heating conditions are typically more sensitive to the saturation mechanism than to initial conditions, so that the onset of shock revival is only subject to modest stochastic variations [@summa_16]. Exceptions apply when the heating conditions vary rapidly, e.g. due to the infall of a shell interface or extreme variations in shock radius (as in the light-bulb models of @cardall_15), and the runaway condition is only narrowly met or missed [@melson_15a; @roberts_16].
The saturation properties of convection were clarified by @murphy_12, who determined that the volume-integrated neutrino heating rate $\dot{Q}_\nu$ and the convective luminosity $L_\mathrm{conv}$ in the gain region roughly balance each other. This can be understood as the result of a self-adjustment process of the accretion flow, whereby a marginally stable, quasi-stationary stratification with $\chi \approx 3$ is established [@fernandez_12]. @mueller_15a showed that this can be translated into a scaling law that relates the average mass-specific neutrino heating rate $\dot{q}_\nu$ in the gain region to the root mean square average $\delta v$ of non-radial velocity fluctuations, $$\label{eq:vturb}
\delta v \sim \left[ \dot{q}_\nu (r_\mathrm{sh}-r_\mathrm{g}) \right]^{1/3}.$$
That a similar scaling should apply in the SASI-dominated regime is not immediately intuitive. @mueller_15a in fact tested Equation (\[eq:vturb\]) using a SASI-dominated 2D model and argued that self-adjustment of the flow to $\chi \approx 3$ will result in the same scaling law as for convection-dominated models. However, models suggest that a different mechanism may be at play in the SASI-dominated regime. Simulations are at least equally compatible with the mechanism proposed by @guilet_10, who suggested that saturation of the SASI is mediated by parasitic instabilities and occurs once the growth rate of the parasite equals the growth rate of the SASI: Assuming that the Kelvin-Helmholtz instability is the dominant parasite, a simple order-of-magnitude estimate for saturation can be obtained by equating $\omega_\mathrm{SASI}$ and the average shear rate, $$\omega_\mathrm{SASI}
\sim \frac{\delta v}{\Lambda}$$ where $\Lambda$ is the effective width of the shear layer. @kazeroni_16 find that the Kelvin-Helmholtz instability operates primarily in directions where the shock radius is larger, which suggests $\Lambda =r_\mathrm{sh,max}-r_\mathrm{g}$. This results in a scaling law that relates the velocity fluctuations to the average radial velocity $\langle v_r \rangle$ in the gain region, $$\label{eq:vsasi}
\delta v \sim \omega_\mathrm{SASI} \Lambda
\sim
\frac{\ln |\mathcal{Q}| (r_\mathrm{sh,max}-r_\mathrm{g})}{\tau_\mathrm{adv}}
\sim
\ln |\mathcal{Q}| \, |\langle v_r \rangle|,$$ where we assumed $\tau_\mathrm{cyc} \approx \tau_\mathrm{adv}$. The quality factor $\mathcal{Q}$ can in principle change significantly with time and between different models. Nonetheless, together with the assumption of a roughly constant quality factor, Equation (\[eq:vsasi\]) appears to capture the dynamics of the SASI in 3D quite well for a simulation of an $18 M_\odot$ progenitor with the <span style="font-variant:small-caps;">CoCoNuT-FMT</span> code [@mueller_15a] as illustrated in Figure \[fig:vturb3d\].
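For orientation, the sketch below evaluates the SASI growth rate of Equation (\[eq:om\_sasi\]) and the saturation amplitude of Equation (\[eq:vsasi\]) for illustrative gain-region parameters; the numbers are assumptions and are not taken from the model shown in Figure \[fig:vturb3d\].

```python
def sasi_growth_rate(ln_Q, tau_cyc):
    """Equation (om_sasi): omega_SASI ~ ln|Q| / tau_cyc."""
    return ln_Q / tau_cyc

def delta_v_sasi(ln_Q, tau_adv, r_sh_max, r_gain):
    """Equation (vsasi): delta_v ~ ln|Q| (r_sh,max - r_gain) / tau_adv."""
    return ln_Q * (r_sh_max - r_gain) / tau_adv

# Illustrative gain-region parameters (not from a specific simulation):
ln_Q     = 2.0      # quality factor for strong SASI, ln|Q| ~ 2
tau_adv  = 2e-2     # advection time ~ duration of the advective-acoustic cycle [s]
r_sh_max = 1.6e7    # maximum shock radius [cm]
r_gain   = 8e6      # gain radius [cm]

print(f"omega_SASI ~ {sasi_growth_rate(ln_Q, tau_adv):.0f} rad/s")
print(f"delta_v    ~ {delta_v_sasi(ln_Q, tau_adv, r_sh_max, r_gain)/1e8:.1f} x 10^8 cm/s")
```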
Equation (\[eq:vturb\]) for the convection-dominated regime and Equation (\[eq:vsasi\]) apparently predict turbulent Mach numbers in the same ballpark. This can be understood by expressing $\dot{q}_\nu$ in terms of the accretion efficiency $\eta_\mathrm{acc}
=L_\nu/(GM \dot{M}/r_\mathrm{g})$ and the heating efficiency $\eta_\mathrm{heat}
=\dot{Q}_\nu/L_\nu$, $$\begin{aligned}
\dot{q}_\nu &=&\frac{\dot{Q}_\nu}{M_\mathrm{g}}=
\eta_\mathrm{heat} \eta_\mathrm{acc}
\frac{GM \dot{M}} {r_\mathrm{g} M_\mathrm{g}}=
\eta_\mathrm{heat} \eta_\mathrm{acc}
\frac{GM} {r_\mathrm{g} \tau_\mathrm{adv}}
\\
\nonumber
&=&
\eta_\mathrm{heat} \eta_\mathrm{acc}
\frac{GM}{r_\mathrm{sh} \tau_\mathrm{adv}}\frac{r_\mathrm{sh}}{r_\mathrm{g}}.\end{aligned}$$ If we neglect the factor $r_\mathrm{sh}/r_\mathrm{g}$ (which is of order unity) and approximate the average post-shock velocity as $| \langle v_\mathrm{r} \rangle | \approx \beta^{-1}
\sqrt{GM/r_\mathrm{sh}}$ (where $\beta$ is the compression ratio in the shock), we obtain $$\dot{q}_\nu \sim
\eta_\mathrm{heat} \eta_\mathrm{acc}
\frac{\beta^2 |\langle v_r \rangle|^2 }{\tau_\mathrm{adv}},$$ and hence $$\delta v \sim (\eta_\mathrm{heat} \eta_\mathrm{acc} \beta^2)^{1/3}
|\langle v_r \rangle|.$$ For plausible values (e.g. $\eta_\mathrm{heat} = 0.05$, $\eta_\mathrm{acc} =2$, $\beta=10$), one finds $\delta v \sim 2 |\langle v_r \rangle|$, i.e. the turbulent Mach number at saturation is of the same order of magnitude in the convection- and SASI-dominated regimes (where at least $\ln |\mathcal{Q}| \mathord{\sim} 2$ can be reached).
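The prefactor quoted above is easily verified numerically; the snippet below also illustrates its weak (cube-root) sensitivity to the assumed heating efficiency.

```python
def saturation_prefactor(eta_heat, eta_acc, beta):
    """Prefactor in delta_v ~ (eta_heat * eta_acc * beta^2)^(1/3) |<v_r>|."""
    return (eta_heat * eta_acc * beta**2)**(1.0 / 3.0)

print(round(saturation_prefactor(0.05, 2.0, 10.0), 2))    # ~2.15, the value used in the text
for eta_heat in (0.02, 0.05, 0.10):                       # vary the heating efficiency
    print(eta_heat, round(saturation_prefactor(eta_heat, 2.0, 10.0), 2))
```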
Equations (\[eq:vturb\]) and (\[eq:vsasi\]) remain order-of-magnitude estimates, and either of the instabilities may be more efficient at pumping energy into non-radial turbulent motions in the gain region, as suggested by the light-bulb models of @fernandez_15 and @cardall_15. These authors find that the SASI can lower the critical luminosity in 3D considerably further than convection does. @fernandez_15 attributes this to the emergence of the spiral mode of the SASI [@blondin_07; @fernandez_10] in 3D, which can store more non-radial kinetic energy than the SASI sloshing mode in 2D, but this has yet to be borne out by self-consistent neutrino hydrodynamics simulations (see Section \[sec:outlook\] for further discussion).
### Why Do Models Explode More Easily in 2D Than in 3D?
How can one explain the different behaviour of 2D and 3D models in the light of our current understanding of the interplay between neutrino heating, convection, and the SASI? It seems fair to say that we can presently only offer a heuristic interpretation for the more pessimistic evolution of 3D models.
The most glaring difference between 2D and 3D models (especially in the convection-dominated regime) prior to shock revival lies in the typical scale of the turbulent structures, which are smaller in 3D [@hanke_12; @couch_12b; @couch_14], whereas the inverse turbulent cascade in 2D [@kraichnan_67] artificially channels turbulent kinetic energy to large scales. This implies that the effective dissipation length (and likewise the effective mixing length for energy transport) is smaller in 3D, so that smaller dimensionless coefficients $C$ appear in relations like Equation (\[eq:vturb\]), $$\label{eq:vturb1}
\delta v =C \left[ \dot{q}_\nu (r_\mathrm{sh}-r_\mathrm{g})
\right]^{1/3},$$ and the turbulent Mach number will be smaller for a given neutrino heating rate. Indeed, for the $18 M_\odot$ model shown in Figure \[fig:vturb3d\], we find $$\label{eq:vturb3}
\delta v =0.7 \left[ \dot{q}_\nu (r_\mathrm{sh}-r_\mathrm{g})
\right]^{1/3}$$ in 3D rather than what @mueller_15a inferred from 2D models (admittedly using a different progenitor), $$\label{eq:vturb2}
\delta v =\left[ \dot{q}_\nu (r_\mathrm{sh}-r_\mathrm{g})
\right]^{1/3}.$$ Following the arguments of @mueller_15a to infer the correction factor $\left(1+\frac{4\langle \mathrm{Ma}^2 \rangle}{3}\right)^{-3/5}$ for multi-D effects in Equation (\[eq:lcrit\_3d\]), one would then expect a *considerably* larger critical luminosity in 3D, i.e. $(L_\nu E_\nu^2)_\mathrm{crit,3D}
\approx 0.85 (L_\nu E_\nu^2)_\mathrm{crit,1D}$ instead of $(L_\nu E_\nu^2)_\mathrm{crit,2D}
\approx 0.75 (L_\nu E_\nu^2)_\mathrm{crit,1D}$ in 2D.
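One simple way to arrive at the numbers quoted above is to assume that, for given heating conditions, the turbulent Mach number scales linearly with the coefficient $C$ in Equation (\[eq:vturb1\]); the sketch below makes this assumption explicit and should be read as an illustration of the argument rather than as the full derivation of @mueller_15a.

```python
def reduction_factor(ma2):
    """(1 + 4/3 <Ma^2>)^(-3/5), cf. Equation (lcrit_3d)."""
    return (1.0 + 4.0 * ma2 / 3.0)**(-0.6)

ma2_crit_2d = 0.4649                # critical squared Mach number inferred from 2D models
for label, C in (("2D", 1.0), ("3D", 0.7)):
    ma2 = C**2 * ma2_crit_2d        # assume Ma scales linearly with the coefficient C
    print(f"{label}: C = {C:.1f}, <Ma^2> = {ma2:.3f}, "
          f"(L E^2)_crit / (L E^2)_crit,1D = {reduction_factor(ma2):.2f}")
```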
Such a large difference in the critical luminosity does not tally with the findings of light-bulb models that show that the critical luminosities in 2D and 3D are still very close to each other. This already indicates that more subtle effects may be at play in 3D that almost compensate the stronger effective dissipation of turbulent motions. The fact that simulations typically show transient phases of stronger shock expansion and more optimistic heating conditions in 3D than in 2D [@hanke_12; @melson_15b] also points in this direction.
Furthermore, light-bulb models [@handy_14] and multi-group neutrino hydrodynamics simulations [@melson_15a; @mueller_15b] have demonstrated that favourable 3D effects come into play *after shock revival*. These works showed that 3D effects can lead to a faster, more robust growth of the explosion energy provided that shock revival can be achieved in the first place.
The favourable 3D effects that are responsible for this may already counterbalance the adverse effect of stronger dissipation in the pre-explosion phase to some extent: Energy leakage from the gain region by the excitation of g-modes is suppressed in 3D because the forward turbulent cascade [@melson_15a] and (at high Mach number) the more efficient growth of the Kelvin-Helmholtz instability [@mueller_15b] brake the downflows before they penetrate the convectively stable cooling layer. Moreover, the non-linear growth of the Rayleigh-Taylor instability is faster for three-dimensional plume-like structures than for 2D structures with planar [@yabe_91; @hecht_95; @marinak_95] or toroidal geometry (as in the context of Rayleigh-Taylor mixing in the stellar envelope during the explosion phase; [@kane_00; @hammer_10]), which might explain why 3D models initially respond more strongly to sudden drops in the accretion rate at shell interfaces and exhibit better heating conditions than their 2D counterparts for brief periods. Finally, the difference in the effective dissipation length in 3D and 2D that is reflected by Equations (\[eq:vturb3\]) and (\[eq:vturb2\]) may not be universal and depend, e.g., on the heating conditions or the $\chi$-parameter; the results of @fernandez_15 in fact demonstrate that under appropriate circumstances more energy can be stored in non-radial motions in 3D than in 2D in the SASI-dominated regime.
Outlook: Classical Ideas for More Robust Explosions {#sec:outlook}
---------------------------------------------------
The existence of several competing – favourable and unfavourable – effects in 3D first-principle models does not change the fundamental fact that they remain more reluctant to explode than their 2D counterparts. This suggests that some important physical ingredients are still lacking in current simulations. Several avenues towards more robust explosion models have recently been explored. Some of the proposed solutions have a longer pedigree and revisit ideas (rapid rotation in supernova cores, enhanced neutrino luminosities) that have been investigated on and off in supernova theory even before the advent of 3D simulations. The more “radical” solution of invoking strong seed perturbations from convective shell burning to boost non-radial instabilities in the post-shock region will be discussed separately in Section \[sec:prog\].
### Rotation and Beyond
@nakamura_14 and @janka_16 pointed out that rapid progenitor rotation can facilitate explosions in 3D. @janka_16 ascribed this partly to the reduction of the pre-shock infall velocity due to centrifugal forces, which decreases the ram pressure ahead of the shock. Even more importantly, rotational support also decreases the net binding energy $|e_\mathrm{tot}|$ per unit mass in the gain region in their models. They derived an analytic correction factor for the critical luminosity in terms of the average specific angular momentum $j$ in the infalling shells, $$\label{eq:lcrit_rot}
(L_\nu E_\nu^2)_\mathrm{crit,rot} \approx
(L_\nu E_\nu^2)_\mathrm{crit} \times \left(1-\frac{j^2}{2 G M
r_\mathrm{sh}}\right)^{3/5}.$$ Assuming rapid rotation with $j \gg 10^{16} \, \mathrm{cm}^2\,
\mathrm{s}^{-1}$, one can obtain a significant reduction of the critical luminosity by several tens of percent, as @janka_16 tested in a simulation with a modified rotation profile.[^10] For very rapid rotation, other explosion mechanisms also become feasible, such as the magnetorotational mechanism [@akiyama_03; @burrows_07b; @winteler_12; @moesta_14], or explosions driven by the low-$T/W$ spiral instability [@takiwaki_16].
However, current stellar evolution models do not predict the required rapid rotation rates for these scenarios for the generic progenitors of type IIP supernovae. The typical specific angular momentum at a mass coordinate of $m=1.5 M_\odot$ is only of the order of $j \sim
10^{15} \, \mathrm{cm}^2 \, \mathrm{s}^{-1}$ in models [@heger_05] that include angular momentum transport by magnetic fields generated by the Tayler-Spruit dynamo [@spruit_02], and asteroseismic measurements of core rotation in evolved low-mass stars suggest that the spin-down of the cores may be even more efficient [@cantiello_14]. For such slow rotation, centrifugal forces are negligible; Equation (\[eq:lcrit\_rot\]) suggests a change of the critical luminosity on the per-mil level. Neither is rotation expected to affect the character of neutrino-driven convection appreciably because the angular velocity $\Omega$ in the gain region is too small. The Rossby number is well above unity, $$\mathrm{Ro}\sim \frac{|v_r|}{ (r_\mathrm{sh}-r_\mathrm{g}) \Omega}
\sim \frac{r_\mathrm{sh}^2}{\tau_\mathrm{adv} j}
\sim 10,$$ assuming typical values of $\tau_\mathrm{adv} \sim 10 \, \mathrm{ms}$ and $r_\mathrm{sh} \sim 100\, \mathrm{km}$.
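The impact of rotation can be gauged with the two estimates above; the sketch below evaluates the correction factor of Equation (\[eq:lcrit\_rot\]) and the Rossby number for a range of specific angular momenta. The assumed proto-neutron star mass, shock radius, and advection time are illustrative values only.

```python
G = 6.674e-8       # gravitational constant [cgs]
M_sun = 1.989e33   # solar mass [g]

def rotational_reduction(j, M, r_sh):
    """Correction factor (1 - j^2/(2 G M r_sh))^(3/5) from Equation (lcrit_rot)."""
    return (1.0 - j**2 / (2.0 * G * M * r_sh))**0.6

def rossby_number(r_sh, tau_adv, j):
    """Ro ~ r_sh^2 / (tau_adv j) for the gain region."""
    return r_sh**2 / (tau_adv * j)

M, r_sh, tau_adv = 1.5 * M_sun, 1e7, 1e-2    # 1.5 M_sun, 100 km, 10 ms
for j in (1e15, 1e16, 5e16):                 # specific angular momentum [cm^2/s]
    print(f"j = {j:.0e} cm^2/s: critical-luminosity factor = "
          f"{rotational_reduction(j, M, r_sh):.4f}, Ro ~ {rossby_number(r_sh, tau_adv, j):.1f}")
```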
Magnetic field amplification by a small-scale dynamo or the SASI [@endeve_10; @endeve_12] could also help to facilitate shock revival, with magnetic fields acting as a *subsidiary* to neutrino heating but without directly powering the explosion as in the magnetorotational mechanism. The 2D simulations of @obergaulinger_14 demonstrated that magnetic fields can help organise the flow into large-scale modes and thereby allow earlier explosions, though the required initial field strengths for this are higher ($\mathord{\sim} 10^{12} \, \mathrm{G}$) than the typical values predicted by stellar evolution models.
### Higher Neutrino Luminosities and Mean Energies?
Another possible solution for the problem of missing or delayed explosions in 3D lies in increasing the electron flavour luminosity and mean energy. This is intuitive from Equation (\[eq:lcrit\_3d\]), where a mere change of $\mathord{\sim}5\%$ in both $L_\nu$ and $E_\nu$ results in a net effect of $16\%$, which is almost on par with multi-D effects.
The neutrino luminosity is directly sensitive to the neutrino opacities, which necessitates precision modelling in order to capture shock propagation and heating correctly ([@lentz_12a; @lentz_12b; @mueller_12a]; see also Section \[sec:methods\]), as well as to other physical ingredients of the core-collapse supernova problem that influence the contraction of the proto-neutron star, such as general relativity and the nuclear equation of state [@janka_12; @mueller_12a; @couch_12a; @suwa_13; @oconnor_16]. Often such changes to the neutrino emission come with counterbalancing side effects (*Mazurek’s law*); e.g. stronger neutron star contraction will result in higher neutrino luminosities and mean energies, but will also result in a more tightly bound gain region, which necessitates stronger heating to achieve shock revival.
That the lingering uncertainties in the microphysics may nonetheless hold the key to more robust explosions has long been recognised in the case of the equation of state. @melson_15b pointed out that missing physics in our treatment of neutrino-matter interactions may equally well be an important part of the solution of the problem of shock revival. Exploring corrections to the neutral-current scattering cross section due to the “strangeness” of the nucleon, they found that changes in the neutrino cross section on the level of a few tens of percent were sufficient to tilt the balance in favour of explosion for a $20 M_\odot$ progenitor. While @melson_15b deliberately assumed a larger value for the contribution of strange quarks to the axial form factor of the nucleon than currently measured [@airapetian_07], the deeper significance of their result is that Mazurek’s law can sometimes be circumvented so that modest changes in the neutrino opacities still exert an appreciable effect on supernova dynamics. A re-investigation of the rates currently employed in the best supernova models for the (more uncertain) neutrino interaction processes that depend strongly on in-medium effects (charged-current absorption/emission, neutral-current scattering, bremsstrahlung; [@burrows_98; @burrows_99; @reddy_99; @hannestad_98]) may thus be worthwhile (see [@bartl_14; @rrapaj_15; @shen_14] for some recent efforts).
ASSESSMENT OF SIMULATION METHODOLOGY {#sec:methods}
====================================
Considering what has been pointed out in Section \[sec:3d\] – the crucial role of hydrodynamic instabilities and the delicate sensitivity of shock revival to the neutrino luminosities and mean energies – it is natural to ask: What are the requirements for modelling the interplay of the different ingredients of the neutrino-driven mechanism accurately? This question is all the more pertinent given that the enormous expansion of the field in recent years has sometimes produced contradictory results, debates about the relative importance of physical effects, and controversies about the appropriateness of certain simulation methodologies.
Ultimately, only the continuous evolution of the simulation codes, the inclusion of similar physics by different groups, and carefully designed cross-comparisons will eventually produce a “concordance model” of the neutrino-driven mechanism and confirm that simulation results are robust against uncertainties. For 1D neutrino hydrodynamics simulations, this has largely been achieved in the wake of the pioneering comparison paper of @liebendoerfer_05, which has served as reference for subsequent method papers and sensitivity studies in 1D [@mueller_10; @lentz_12a; @lentz_12b; @oconnor_15; @just_15; @summa_16]. Similar results of the Garching-QUB collaboration [@summa_16] and @oconnor_16 with multi-group neutrino transport indicate a trend to a similar convergence in 2D, and more detailed comparisons are underway (see, e.g., <https://www.authorea.com/users/1943/articles/97450/_show_article> for efforts coordinated by E. O’Connor). Along the road to convergence, it appears useful to provide a preliminary review of some issues concerning the accuracy and reliability of supernova simulations.
Hydrodynamics
-------------
Recently, the discussion of the fidelity of the simulations has strongly focused on the hydrodynamic side of the problem. As detailed in Section \[sec:3d\], multi-D effects play a crucial role in the explosion mechanism, and are regulated by a balance of driving (by neutrino heating through buoyancy, or by an inherent instability of the flow like the SASI) and dissipation.
### Turbulence in Supernova Simulations
This balance needs to be modelled with sufficient physical and numerical accuracy. On the numerical side, the challenge consists in the turbulent high-Reynolds number flow, and the question arises to what extent simulations with relatively coarse resolution can capture this turbulent flow accurately. Various authors [@handy_14; @abdikamalov_15; @radice_15; @roberts_16] have stressed that the regime of fully developed turbulence cannot be reached with the limited resolution affordable to cover the gain region ($\mathord{\sim} 100$ zones, or even less) in typical models, and @handy_14 thus prefer to speak of “perturbed laminar flow” in simulations. Attempts to quantify the effective Reynolds number of the flow using velocity structure functions and spectral properties of the post-shock turbulence [@handy_14; @abdikamalov_15; @radice_15] put it at a few hundred at best, and sometimes even below $100$.
This is in line with rule-of-thumb estimates based on the numerical diffusivity for the highest-wavenumber (odd-even) modes in Godunov-based schemes as used in many supernova codes. This diffusivity can be calculated analytically (Appendix D of @mueller_phd; see also @arnett_16 for a simpler estimate). For Riemann solvers that take all the wave families into account (e.g. [@colella_85; @toro_94; @mignone_05_a; @marquina_96]), the numerical kinematic viscosity $\nu_\mathrm{num}$ in the subsonic regime is roughly given in terms of the typical velocity jump per cell $\delta v_\mathrm{gs}$ and the cell width $\delta l$ as $\nu_\mathrm{num} \sim \delta l \, \delta v_\mathrm{gs}$. Relating $\delta v_\mathrm{gs}$ to the turbulent velocity $v$ and scale $l$ of the largest eddy as $\delta v_\mathrm{gs} \sim v (\delta l/l)^{1/3}$ (i.e. assuming Kolmogorov scaling) yields a numerical Reynolds number of $$\mathrm{Re} = \frac{v l}{\nu_\mathrm{num}} \sim
\left(\frac{l}{\delta l}\right)^{4/3}=N^{4/3},$$ where $N$ is the number of zones covering the largest eddy scale. For more diffusive solvers like HLLE [@einfeldt_88], one obtains $\nu_\mathrm{num} \sim \delta l \, c_\mathrm{s} \sim \delta l \, v\,
\mathrm{Ma}^{-1}$ instead and $$\mathrm{Re} \sim (l/\delta l)
\mathrm{Ma} \sim N \, \mathrm{Ma},$$ i.e. such solvers are strongly inferior for subsonic flow with low Mach number $\mathrm{Ma}$.
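These estimates are simple enough to tabulate; the snippet below contrasts the two scalings for a range of zone numbers covering the largest eddy scale (the Mach number of $0.3$ is an illustrative value for the post-shock flow).

```python
def reynolds_three_wave(N):
    """Re ~ N^(4/3) for Riemann solvers that resolve all wave families."""
    return N**(4.0 / 3.0)

def reynolds_hlle(N, mach):
    """Re ~ N Ma for the more diffusive HLLE solver."""
    return N * mach

for N in (50, 100, 200):
    print(f"N = {N:3d}: Re(three-wave) ~ {reynolds_three_wave(N):6.0f}, "
          f"Re(HLLE, Ma=0.3) ~ {reynolds_hlle(N, 0.3):5.0f}")
```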
Such coarse estimates are to be taken with caution, however. The numerical dissipation is non-linear and self-regulated, as is typical of implicit large-eddy simulations (ILES, [@boris_92; @grinstein_07]). In fact, the estimates already demonstrate that simply comparing the resolution in codes with different solvers and grid geometries can be misleading. Codes with three-wave solvers like <span style="font-variant:small-caps;">Vertex-Prometheus</span> [@rampp_02; @buras_06a] and <span style="font-variant:small-caps;">CoCoNuT-FMT</span> [@mueller_15a] of the MPA-QUB collaboration, <span style="font-variant:small-caps;">Flash</span> [@fryxell_00] as used in @couch_12a and subsequent work by S. Couch and E. O’Connor, and the VH-1 hydro module [@blondin_91] in the <span style="font-variant:small-caps;">Chimera</span> code of the Oak Ridge-Florida Atlantic-NC State collaboration, have less stringent resolution requirements than HLLE-based codes [@ott_12; @kuroda_12]. The reconstruction method, special tweaks for hydrostatic equilibrium (or the lack of such a treatment), as well as the grid geometry and grid-induced perturbations [@janka_16; @roberts_16] also affect the behaviour and resolution-dependence of the simulated turbulence.
### Resolution Requirements – A Critical Assessment
Regardless of the employed numerical schemes, the fact remains that the achievable numerical Reynolds number in supernova simulations is limited, and that the regime of fully developed turbulence ($\mathrm{Re} \gg 1000$) will not be achieved in the near future, as it would require $\gtrsim 512$ radial zones *in the gain region alone*. The question for supernova models, however, is not whether all the facets of turbulence in inviscid flow can be reproduced, but whether the flow properties that matter for the neutrino-driven mechanism are computed with sufficient accuracy. In fact, one cannot even hope that simply cranking up the numerical resolution with ILES methods would give the correct solution: In reality, non-ideal effects such as neutrino viscosity and drag [@van_den_horn_84; @burrows_88; @jedamzik_98; @guilet_15] come into play, and deviations of the turbulent Prandtl number from unity as well as MHD effects like a small-scale dynamo (see Section \[sec:outlook\]) can complicate the picture even for non-rotating, weakly magnetised supernova cores. These effects will likely not grossly alter the dynamics of convection and the SASI, but the physical reality may be slightly different from the limit of infinite resolution if these effects are not accounted for and inviscid flow is assumed instead.
At the end of the day, these additional complications and the finite resolution probably have a limited effect on supernova dynamics, since they only affect *a correction term* to the critical luminosity such as $(1+4 /3 \langle \mathrm{Ma}^2 \rangle)^{-3/5}$ in Equation (\[eq:lcrit\_3d\]) through the effective dissipation length that determines the non-dimensional coefficient in Equation (\[eq:vturb\]). If we repeat the analytic estimate for $L_\mathrm{crit}$ of @mueller_15a, but assume stronger dissipation and decrease their critical squared Mach number at shock revival, $\mathrm{Ma}_\mathrm{crit}^2=0.4649$, by $10\%$, then Equation (\[eq:lcrit\_3d\]) suggests an increase of the critical luminosity from $74.9\%$ of the 1D value to $76.6\%$ of the 1D value, which is a minute change. Modelling turbulent dissipation within $10 \%$ uncertainty thus seems wholly sufficient given that one can hardly hope to achieve $1\%$ accuracy for the neutrino luminosities and mean energies.
The turbulent dissipation does not change without bounds with increasing resolution, but eventually reaches an asymptotic limit at high Reynolds numbers. Although most supernova simulations may not fully reach this asymptotic regime, they do not fall far short of it: The works of @handy_14 and @radice_15 [@radice_16] suggest that this level of accuracy in the turbulent dissipation can be reached even with moderate resolution ($<100$ grid points per direction, $\sim 2^\circ$ resolution in angle in spherical polar coordinates) in the gain region with higher-order reconstruction methods and accurate Riemann solvers. Problems due to stringent resolution requirements may still lurk elsewhere, though, e.g. concerning SASI growth rates as already pointed out ten years ago by @sato_08. Resolution studies and cross-comparisons thus remain useful, though cross-comparisons are of course hampered by the different physical assumptions used in different codes and the feedback processes in the supernova core. For this reason, a direct comparison of, e.g., turbulent kinetic energies and Mach numbers between different models is not necessarily meaningful. The dimensionless coefficients governing the dynamics of non-radial instabilities, such as the proportionality constant $\eta_\mathrm{conv}=v_\mathrm{turb}/[\dot{q}_\nu (r_\mathrm{sh}-r_\mathrm{g})]^{1/3}$ in Equation (\[eq:vturb\]) or the quality factor $\mathcal{Q}$ in Equation (\[eq:om\_sasi\]), may be more useful metrics of comparison.
Neutrino Transport
------------------
The requirements on the treatment of neutrino heating and cooling are highly problem-dependent. The *physical principles* behind convection and the SASI can be studied with simple heating and cooling functions in a light-bulb approach, and such an approach is indeed often advantageous as it removes some of the feedback processes that complicate the analysis of full-scale supernova simulations. To model the fate and explosion properties of concrete progenitors in a predictive manner, some form of neutrino transport is required, and depending on the targeted level of accuracy, the requirements become more stringent; e.g. higher standards apply when it comes to predicting supernova nucleosynthesis. There is no perfect method for neutrino transport in supernovae as yet. Efforts toward a solution of the full 6-dimensional Boltzmann equation are underway [e.g. @cardall_13; @peres_13; @radice_13; @nagakura_14], but not yet ripe for real supernova simulations.
Neutrino transport algorithms (beyond fully parameterised light-bulb models) currently in use for 1D and multi-D models include:
- leakage schemes as, e.g., in @oconnor_10, @oconnor_11, @ott_13 and @couch_14b,
- the isotropic diffusion source approximation (IDSA) of @liebendoerfer_09,
- one-moment closure schemes employing prescribed flux factors [@scheck_06], flux-limited diffusion as in the <span style="font-variant:small-caps;">Vulcan</span> code [@livne_04; @walder_05], the <span style="font-variant:small-caps;">Chimera</span> code [@bruenn_85; @bruenn_13] and the <span style="font-variant:small-caps;">Castro</span> code [@zhang_13; @dolence_15], or a dynamic closure as in the <span style="font-variant:small-caps;">CoCoNuT-FMT</span> code,
- two-moment methods employing algebraic closures in 1D [@oconnor_15] and multi-D [@obergaulinger_11; @kuroda_12; @just_15; @skinner_15; @oconnor_16; @roberts_16; @kuroda_16] or variable Eddington factors from a model Boltzmann equation [@burrows_00; @rampp_02; @buras_06a; @mueller_10],
- discrete ordinate methods for the Boltzmann equation, mostly in 1D [@mezzacappa_93; @yamada_99; @liebendoerfer_04] or, at the expense of other simplifications, in multi-D ([@livne_04; @ott_08_a; @nagakura_16]; only for static configurations: [@sumiyoshi_15]).
This list should not be taken as a hierarchy of accuracy; it merely reflects, in a crude manner, the rigour in treating *one aspect* of the neutrino transport problem, i.e. the angle-dependence of the radiation field in phase space. When assessing neutrino transport methodologies, there are other, equally important factors that need to be taken into account when comparing different modelling approaches.
Most importantly, the sophistication of the microphysics varies drastically. On the level of one-moment and two-moment closure models, it is rather the neutrino microphysics that determines the quantitative accuracy. The 3D models of the MPA-QUB group [@melson_15a; @melson_15b; @janka_16] and the <span style="font-variant:small-caps;">Chimera</span> team [@lentz_15] currently represent the state-of-the-art in this respect, though other codes [@oconnor_15; @just_15; @skinner_15; @kuroda_16] come close.
Often, the neutrino physics is simplified considerably, however. Some simulations disregard heavy flavour neutrinos altogether [e.g. @suwa_10; @takiwaki_12], or only treat them by means of a leakage scheme [@takiwaki_14; @pan_16]. This affects the contraction of the proto-neutron star and thus indirectly alters the emission of electron flavour neutrinos and the effective inner boundary for the gain region as well.
Among multi-D codes, energy transfer due to inelastic neutrino-electron scattering (NES) is routinely taken into account only in the <span style="font-variant:small-caps;">Vertex</span> code [@rampp_02; @buras_06a; @mueller_10] of the MPA-QUB collaboration, the <span style="font-variant:small-caps;">Alcar</span> code [@just_15], the <span style="font-variant:small-caps;">Chimera</span> code of the <span style="font-variant:small-caps;">Chimera</span> team [@bruenn_85; @bruenn_13], and the <span style="font-variant:small-caps;">Fornax</span> code of the Princeton group [@skinner_15]. Without NES [@bruenn_85] and modern electron capture rates [@langanke_03], the core mass at bounce is larger and the shock propagates faster at early times [@lentz_12a; @lentz_12b]. In multi-D, this can lead to unduly strong prompt convection. Because of this problem, a closer look at the bounce dynamics is in order whenever explosions occur suspiciously early ($< 100 \ \,
\mathrm{ms}$ after bounce). Parameterising deleptonisation during collapse [@liebendoerfer_05_b] provides a workaround to some extent.
The recoil energy transfer in neutrino-nucleon scattering effectively reshuffles heavy flavour neutrino luminosity to electron flavour luminosity in the cooling region [@mueller_12a] and hence critically influences the heating conditions in the gain region. Among multi-D codes, only <span style="font-variant:small-caps;">Vertex</span> and <span style="font-variant:small-caps;">Chimera</span> currently take this into account, and the code <span style="font-variant:small-caps;">CoCoNuT-FMT</span> [@mueller_15a] uses an effective absorption opacity for heavy flavour neutrinos to mimic this phenomenon.
<span style="font-variant:small-caps;">Vertex</span> and <span style="font-variant:small-caps;">Chimera</span> are also the only multi-D codes to include the effect of nucleon-nucleon correlations [@burrows_98; @burrows_99; @reddy_99] on absorption and scattering opacities. Nucleon correlations have a huge impact during the cooling phase, which they shorten by a factor of several [@huedepohl_10]. Their role during the first second after bounce is not well explored. Considering that the explosion energetics are determined on a time-scale of seconds [@mueller_15b; @bruenn_16], it is plausible that the increased diffusion luminosity from the neutron star due to in-medium corrections to the opacities may influence the explosion energy to some extent.
Gray schemes [@fryer_02; @scheck_06; @kuroda_12] cannot model neutrino heating and cooling accurately; an energy-dependent treatment is needed because the emerging neutrino spectra are highly non-thermal with a pinched high-energy tail [@janka_89b; @keil_03].
Some multi-D codes use the ray-by-ray-plus approximation [@buras_06a], which exaggerates angular variations in the radiation field, and has been claimed to lead to spuriously early explosions in some cases in conjunction with artificially strong sloshing motions in 2D [@skinner_15]. Whether this is a serious problem is unclear in the light of similar results of @summa_16 for ray-by-ray-plus models and @oconnor_16 for fully two-dimensional two-moment transport. On the other hand, fully multi-dimensional flux limited diffusion approaches smear out angular variations in the radiation field too strongly [@ott_08_a].
Neglecting all or part of the velocity-dependent terms in the transport equations potentially has serious repercussions. Neglecting only observer corrections (Doppler shift, compression work, etc.) as, e.g., in @livne_04 can already have an appreciable impact on the dynamics [@buras_06a; @lentz_12a]. Disregarding even the co-advection of neutrinos with the fluid [@oconnor_15; @roberts_16] formally violates the diffusion limit and effectively results in an extra source term in the optically thick regime due to the equilibration of matter with lagging neutrinos, $$\dot{q}_\nu \approx \rho^{-1} \mathbf{v} \cdot \nabla E_\mathrm{eq}$$ where $E_\mathrm{eq}$ is the equilibrium neutrino energy density. Judging from the results of @oconnor_16 and @roberts_16, which are well in line with results obtained with other codes, the effect may not be too serious in practice, though. It should also be noted that (semi-)stationary approximations of the transport equation [@liebendoerfer_09; @mueller_15a] avoid this problem even if advection terms are not explicitly included. Leakage-based schemes as used, e.g., in @ott_12, @couch_14, @abdikamalov_15, and @couch_15 also manifestly fail to reproduce the diffusion limit. Here, however, the violation of the diffusion limit is unmistakable and can severely affect the stratification of the gain region and, in particular, the cooling region. Together with *ad hoc* choices for the flux factor for calculating the heating rate, this can result in inordinately high heating efficiencies immediately after bounce and a completely inverted hierarchy of neutrino mean energies. It compromises the dynamics of leakage models to such an extent that they can only be used for very qualitative studies of the multi-D flow in the supernova core.
There is in fact no easy lesson to be learned from the pitfalls and complications that we have outlined. In many contexts approximations for the neutrino transport are perfectly justified for a well-circumscribed problem, and feedback processes sometimes mitigate the effects of simplifying assumptions. It is crucial, though, to be aware of the impact that such approximations can potentially have, and our (incomplete) enumeration is meant to provide some guidance in this respect.
![Impact of pre-collapse asphericities on shock revival in 3D multi-group neutrino hydrodynamics simulations of an $18 M_\odot$ progenitor. The plot shows the minimum, maximum (solid lines) and average (dashed) shock radii for a model using 3D initial conditions (black) from the O shell burning simulation of @mueller_16b and a spherically averaged version of the same progenitor (red). The gain radius (dash-dotted) and the proto-neutron star radius (dotted, defined by a fiducial density of $10^{11} \, \mathrm{g} \,
\mathrm{cm}^{-3}$) are shown only for the model starting from 3D initial conditions; they are virtually identical for both models. A neutrino-driven explosion is triggered roughly $0.25 \, \mathrm{s}$ after bounce aided by the infall of the convectively perturbed oxygen shell in the model using 3D initial conditions. The simulation starting from the 1D progenitor model exhibits steady and strong SASI oscillations after $0.25 \, \mathrm{s}$, but does not explode at least for another $0.3 \, \mathrm{s}$. \[fig:shock\_s18\]](f5.pdf){width="\linewidth"}
FUTURE DIRECTIONS: MULTI-D EFFECTS IN SUPERNOVA PROGENITORS {#sec:prog}
===========================================================
Given the sophisticated simulation methodology employed in the best currently available supernova codes, one may be tempted to ask whether another missing ingredient for robust neutrino-driven explosions is to be sought elsewhere. One recent idea, first proposed by @couch_13, focuses on the progenitor models used in supernova simulations. The twist consists in an extra “forcing” of the non-radial motions in the gain region by large seed perturbations in the infalling shells. Such seed perturbations will arise naturally in active convective burning shells (O burning, and perhaps also Si burning) that reach the shock during the first few hundred milliseconds after bounce.
Role of Pre-Collapse Perturbations in the Neutrino-Driven Mechanism {#sec:lcrit_pert}
-------------------------------------------------------------------
In default of multi-D progenitor models, this new variation of the neutrino-driven mechanism was initially studied by imposing large initial perturbations by hand in leakage-based simulations [@couch_13; @couch_14] and multi-group neutrino hydrodynamics simulations [@mueller_15a]; the earlier light-bulb based models of @fernandez_12 also touched on parts of the problem. The results of these investigations were mixed, even though some of these calculations employed perturbations far in excess of what estimates based on mixing-length theory [@biermann_32; @boehm_58] suggest: For example, @couch_13 used transverse velocity perturbations with a peak Mach number of $\mathrm{Ma}=0.2$ in their 3D models, and found a small beneficial effect on shock revival, which, however, was tantamount to a change of the critical neutrino luminosity by only $\mathord{\sim} 2\%$. The more extensive 2D parameter study of different solenoidal and compressive velocity perturbations and density perturbations by @mueller_15a established that both significant perturbation velocities ($\mathrm{Ma} \gtrsim 0.1$) and large-scale angular structures (angular wavenumber $\ell \lesssim 4$) need to be present in active convective shells in order to reduce the critical luminosity appreciably, i.e. by $\gtrsim 10\%$.
These parametric studies already elucidated the physical mechanism whereby pre-collapse perturbations can facilitate shock revival. @mueller_15a highlighted the importance both of the infall phase and of the interaction of the perturbations with the shock. Linear perturbation theory shows that the initial perturbations are amplified during collapse [@lai_00; @takahashi_14]. This not only involves a strong growth of transverse velocity perturbations as $\delta v_t \propto r^{-1}$, but, even more importantly, a conversion of the initially dominating solenoidal velocity perturbations with Mach number $\mathrm{Ma}_\mathrm{conv}$ into density perturbations $\delta \rho/\rho \approx \mathrm{Ma}_\mathrm{conv}$ [@mueller_15a] during collapse, i.e. the relative density perturbations are much larger ahead of the shock than during quasi-stationary convection, where $\delta\rho /\rho \approx \mathrm{Ma}^2$.[^11]
Large density perturbations ahead of the shock imply a pronounced asymmetry in the pre-shock ram pressure and deform the shock, creating fast lateral flows as well as post-shock density and entropy perturbations that buoyancy then converts into turbulent kinetic energy. The direct injection of kinetic energy due to infalling turbulent motions may also play a role [@abdikamalov_16], though it appears to be subdominant [@mueller_15a; @mueller_16b]. A very crude account of the generation of additional turbulent kinetic energy by these different processes, and of turbulent damping in the post-shock region, has been used by @mueller_16b to estimate the reduction of the critical luminosity as, $$\label{eq:dlcrit}
(L_\nu E_\nu^2)_\mathrm{crit,pert}
\approx
(L_\nu E_\nu^2)_\mathrm{crit,3D}
\left(
1-
0.47 \frac{\mathrm{Ma_\mathrm{conv}}}{\ell \eta_\mathrm{acc} \eta_\mathrm{heat}}
\right),$$ in terms of the pre-collapse Mach number $\mathrm{Ma}_\mathrm{conv}$ of eddies from shell burning, their typical angular wavenumber $\ell$, and the accretion efficiency $\eta_\mathrm{acc}=L_\nu/(GM \dot{M} r_\mathrm{gain})$ and heating efficiency $\eta_\mathrm{heat}$ during the pre-explosion phase.
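To get a feeling for the magnitude of this correction, the short sketch below evaluates the bracket in Equation (\[eq:dlcrit\]) for a few parameter combinations. The specific values of $\mathrm{Ma}_\mathrm{conv}$, $\ell$, $\eta_\mathrm{acc}$ and $\eta_\mathrm{heat}$ are illustrative assumptions rather than results of any particular simulation; the first two cases roughly bracket the $12 \ldots 24\%$ reduction quoted for the $18 M_\odot$ model discussed below:

```python
def lcrit_reduction(ma_conv, ell, eta_acc, eta_heat):
    """Fractional reduction of (L_nu E_nu^2)_crit due to pre-collapse
    perturbations according to Eq. (dlcrit)."""
    return 0.47 * ma_conv / (ell * eta_acc * eta_heat)

# Illustrative parameter choices (assumed, not taken from a specific model)
cases = [
    (0.10, 2, 2.0, 0.05),   # vigorous, large-scale O shell convection
    (0.10, 2, 2.0, 0.10),   # same, but with a higher heating efficiency
    (0.02, 4, 2.0, 0.05),   # weak, smaller-scale convection
]
for ma_conv, ell, eta_acc, eta_heat in cases:
    red = lcrit_reduction(ma_conv, ell, eta_acc, eta_heat)
    print(f"Ma_conv = {ma_conv:4.2f}, l = {ell}: reduction = {100 * red:4.1f} %")
```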
A more rigorous understanding of the interaction between infalling perturbations, the shock, and non-radial motions in the post-shock region is currently emerging: @abdikamalov_16 studied the effect of upstream perturbations on the shock using the linear interaction approximation of @ribner_53 and argue, in line with @mueller_16b, that a reduction of the critical luminosity by $>10\%$ is plausible. Their estimate may, however, even be too pessimistic, as they neglect acoustic perturbations upstream of the shock. Different from @abdikamalov_16, the recent analysis of @takahashi_16 also takes into account that instabilities or stabilisation mechanisms operate in the post-shock flow, and studies the (linear) response of convective and SASI eigenmodes to forcing by infalling perturbations. A rigorous treatment along these lines that explains the saturation of convective and SASI modes as forced oscillators with *non-linear damping* remains desirable.
The Advent of 3D Supernova Progenitor Models
--------------------------------------------
The parametric studies of @couch_13 [@couch_14] and @mueller_15a still hinged on uncertain assumptions about the magnitude and scale of the seed perturbations left by O and Si shell burning. Various pioneering studies of advanced shell burning stages (O, Si, C burning) [@arnett_94; @bazan_94; @bazan_98; @asida_00; @kuhlen_03; @meakin_06; @meakin_07; @meakin_07_b; @arnett_11; @viallet_13; @chatzopoulos_14] merely indicated that convective Mach numbers of a few $10^{-2}$ and the formation of large-scale eddies are plausible, but did not permit a clear-cut judgement about whether pre-collapse perturbations play a dynamical role in the neutrino-driven mechanism.
The situation has changed recently with the advent of models of convective shell burning that have been evolved up to collapse. The idea here is to calculate the last few minutes prior to collapse to obtain multi-dimensional initial conditions, while ignoring potential long-term effects in 3D such as convective boundary mixing (which we discuss in Section \[sec:cbm\]). @couch_15 performed a 3D simulation of the last minutes of Si shell burning in a $15 M_\odot$ star. The simulation was limited to an octant, and nuclear quasi-equilibrium during Si burning was only treated with a small network. More importantly, the evolution towards collapse was artificially accelerated by increasing electron capture rates in the iron core. As pointed out by @mueller_16b, this can alter the shell evolution and the convective velocities considerably. Since the shell configuration and structure at collapse vary strongly in 1D models, such an exploratory approach is nonetheless still justified (see below).
@mueller_16b explored the more generic case where Si shell burning is extinguished before collapse and the O shell is the innermost active convective region. In their 3D simulation of the last five minutes of O shell burning in an $18
M_\odot$ progenitor, they circumvented the aforementioned problems by excising the non-convective Fe and Si core and contracting it in accordance with a 1D stellar evolution model. Moreover, @mueller_16b simulated the entire sphere using an overset Yin-Yang grid [@kageyama_04; @wongwathanarat_10a] as implemented (with some improvements) in the <span style="font-variant:small-caps;">Prometheus</span> supernova code [@melson_msc; @melson_15a].
The implications of these simulations for supernova modelling are mixed. The typical convective Mach number in @couch_15 was only $\mathord{\sim}0.02$, and while they found large-scale motions, the scale of the pre-collapse perturbations was still limited by the restriction to octant symmetry. Perturbations of such a magnitude are unlikely to reduce the critical luminosity considerably (Section \[sec:lcrit\_pert\]). Consequently, supernova simulations starting from 1D and 3D initial conditions using a leakage scheme performed by @couch_15 did not show a qualitative difference; both 1D and 3D initial conditions result in explosions, though the shock expands slightly faster in the latter case. The use of a leakage scheme and possible effects of stochasticity preclude definite conclusions from these first results.
The typical convective Mach number in the $18 M_\odot$ model of @mueller_16b is considerably larger ($\mathord{\sim}0.1$), and their simulation also showed the emergence of a bipolar ($\ell=2$) flow structure, which led them to predict a relatively large reduction of the critical luminosity by $12 \ldots 24\%$, which would accord a decisive role to 3D initial conditions in the neutrino-driven mechanism at least in some progenitors. A first 3D multi-group neutrino hydrodynamics simulation of their $18 M_\odot$ progenitor using the <span style="font-variant:small-caps;">CoCoNuT-FMT</span> code appears to bear this out (Müller et al. 2016, in preparation): Figure \[fig:shock\_s18\] shows the shock radius for two simulations using 3D and 1D initial conditions, respectively: In the former case, shock revival occurs around $250 \, \mathrm{ms}$ after bounce thanks to the infall of the convectively perturbed oxygen shell, whereas no explosion develops in the reference simulation by the end of the run more than $600 \, \mathrm{ms}$ after bounce. An analysis of the heating conditions indicates that the non-exploding reference model is clearly *not* a near miss at $250 \, \mathrm{ms}$. The effect of 3D initial conditions is thus unambiguously large and sufficient to change the evolution *qualitatively*. Moreover, the model indicates that realistic supernova explosion energies are within reach in 3D as well: The diagnostic explosion energy reaches $5 \times 10^{50} \, \mathrm{erg}$ and still continues to mount by the end of the simulation $1.43 \, \mathrm{s}$ after bounce. It is also interesting to note that the initial asymmetries are clearly reflected in the explosion geometry (Figure \[fig:s18\_3d\]) as speculated by @arnett_11. Incidentally, the model also shows that the accretion of convective regions does not lead to the formation of the “accretion belts” proposed by @gilkis_14 as an ingredient for their jittering-jet mechanism.
Whether 3D initial conditions generally play an important role in the neutrino-driven mechanism cannot be answered by studying just two progenitors, aside from the fact that the models of @couch_15 and @mueller_16b still suffer from limitations. The properties (width, nuclear energy generation rate) and the configuration of convective burning shells at collapse vary tremendously across different progenitors in 1D stellar evolution models as, e.g., the Kippenhahn diagrams in the literature [@heger_00; @chieffi_13; @sukhbold_14; @cristini_16] indicate. The interplay of convective burning, neutrino cooling, and the contraction/re-expansion of the core and the shells sometimes leaves inversions in the temperature stratification and a complicated layering of material at different nuclear processing stages. For this reason, 1D stellar evolution models sometimes show a highly dynamic behaviour immediately prior to collapse with shells of incompletely burnt material flaring up below the innermost active shell. This is illustrated by follow-up work to @mueller_16b shown in Figure \[fig:s12\_5\], where a partially processed layer with unburnt O becomes convective shortly before collapse due to violent burning and is about to merge with the overlying O/Ne shell before collapse intervenes.
The diverse shell configurations in supernova progenitors need to be thoroughly explored in 3D before a general verdict on the efficacy of convective seed perturbations in aiding shock revival can be given. Since the bulk properties of the flow (typical velocity, eddy scales) in the *interior* of the convective shells are apparently well captured by mixing-length theory [@arnett_09; @mueller_16b], the convective Mach numbers and eddy scales predicted from 1D stellar evolution models can provide guidance for exploring interesting spots in parameter space.
Convective Boundary Mixing – How Uncertain is the Structure of Supernova Progenitors? {#sec:cbm}
-------------------------------------------------------------------------------------
In what we discussed so far, we have considered multi-D effects in advanced convective burning stages merely because of their role in determining the initial conditions for stellar collapse. They could also have an important effect on the secular evolution of massive stars long before the supernova explosion, and thereby change critical structural properties of the progenitors, such as the compactness parameter [@oconnor_11]. While mixing-length theory [@biermann_32; @boehm_58] may adequately describe the mixing in the interior of convective zones,[^12] the mixing across convective boundaries is less well understood, and may play an important role in determining the pre-collapse structure of massive stars, along with other non-convective processes for mixing and angular momentum transport [e.g. @heger_00; @maeder_04; @heger_05; @young_05; @talon_05; @cantiello_14]. That some mixing beyond the formally unstable regions needs to be included has long been known [@kippenhahn]. Phenomenological recipes for this include extending the mixed region by a fraction of the local pressure scale height, or adding diffusive mixing in the formally stable regions with a calibrated functional dependence on the distance to the boundary [@freytag_96; @herwig_97].
The dominant mechanism for convective boundary mixing during advanced burning stages is entrainment [@fernando_91; @meakin_07; @viallet_15] due to the growth of the Kelvin-Helmholtz or Holmböe instability at the shell interfaces. For interfaces with a discontinuous density jump as often encountered in the interiors of evolved massive stars, the relevant dimensionless number for such shear-driven instabilities is the bulk Richardson number $\mathrm{Ri}_\mathrm{B}$. For entrainment driven by turbulent convection, one has $$\mathrm{Ri}_\mathrm{B}=\frac{g l\, \delta \rho/\rho }{v_\mathrm{conv}^2},$$ in terms of the local gravitational acceleration $g$, the density contrast $\delta\rho /\rho$ at the interface, the typical convective velocity $v_\mathrm{conv}$ in the convective region, and the integral scale $l$ of the convective eddies. Equating $l$ with the pressure scale height $l=P/\rho g$ allows us to re-express $ \mathrm{Ri}_\mathrm{B}$ in terms of the convective Mach number $ \mathrm{Ma}_\mathrm{conv}$ and the adiabatic exponent $\gamma$, $$\mathrm{Ri}_\mathrm{B}=
\frac{\delta \rho}{\rho} \frac{g l}{v_\mathrm{conv}^2}
=
\frac{\delta \rho}{\rho} \frac{P}{\rho v_\mathrm{conv}^2}
=
\frac{\delta \rho}{\rho} \frac{1}{\gamma \mathrm{Ma}_\mathrm{conv}^2}.$$ Deep in the stellar core, $\mathrm{Ma}_\mathrm{conv}$ is typically small during most evolutionary phases, and $ \mathrm{Ri}_\mathrm{B}$ is large so that the convective boundaries are usually very “stiff” [@cristini_16].
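As a concrete illustration, the following sketch evaluates $\mathrm{Ri}_\mathrm{B}$ for a few assumed combinations of density contrast and convective Mach number; the numbers are merely indicative of conditions during advanced burning stages and are not taken from a specific stellar model:

```python
def bulk_richardson(drho_rho, ma_conv, gamma=4.0 / 3.0):
    """Bulk Richardson number Ri_B = (delta rho/rho) / (gamma Ma_conv^2),
    assuming the integral scale equals the pressure scale height."""
    return drho_rho / (gamma * ma_conv ** 2)

for drho_rho, ma_conv in [(0.3, 0.02), (0.3, 0.1), (0.05, 0.1)]:
    ri_b = bulk_richardson(drho_rho, ma_conv)
    print(f"drho/rho = {drho_rho:4.2f}, Ma_conv = {ma_conv:5.3f}: Ri_B = {ri_b:7.1f}")
# Stiff boundaries (large Ri_B) obtain except for fast convection combined
# with a small density contrast.
```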
Various power laws for the entrainment rate have been proposed in the general fluid dynamics literature [@fernando_91; @strang_01] and astrophysical studies [@meakin_07] of interfacial mixing driven by turbulent convection on one side of the interface. In the astrophysical context, it is convenient to translate these into a power law for the mass flux $\dot{M}_\mathrm{entr}$ of entrained material into the convective region, $$\label{eq:mentr}
\dot{M}_\mathrm{entr}
=4 \pi r^2 \rho v_\mathrm{conv} A \, \mathrm{Ri}_\mathrm{B}^{-n},$$ with a proportionality constant $A$ and a power-law exponent $n$. Here $\rho$ is the density on the convective side of the interface.
A number of laboratory studies [@fernando_91; @strang_01] and astrophysical simulations [@meakin_07; @mueller_16b] suggest values of $A\sim 0.1$ and $n=1$. This can be understood heuristically by assuming that a layer of width $\delta l \sim A v_\mathrm{conv}^2/(g \, \delta\rho/\rho)$ always remains well mixed,[^13] and that a fraction $\delta l/l$ of the mass flux $\dot{M}_\mathrm{down} =2 \pi r^2 \rho v_\mathrm{conv}$ in the convective downdrafts comes from this mixed layer.
This estimate is essentially equivalent to another one proposed in a slightly different context (ingestion of unburnt He during core-He burning; [@constantino_16]) by @spruit_15, who related the ingestion (or entrainment) rate into a convective zone to the convective luminosity $L_\mathrm{conv}$. Spruit’s argument can be interpreted as one based on energy conservation; work is needed to pull material with positive buoyancy from an outer shell down into a deeper one, and the energy that is tapped for this purpose comes from convective motions. Since $L_\mathrm{conv} \sim 4\pi r^2 \rho v_\mathrm{conv}^3$, we can write Equation (\[eq:mentr\]) as $$\label{eq:mentr2}
\dot{M}_\mathrm{entr}
=A \times \frac{4 \pi r^2 \rho v_\mathrm{conv}^3 }{g l\,\delta \rho /\rho}
\approx A \times \frac{L_\mathrm{conv}}{g l \, \delta \rho /\rho },$$ which directly relates the entrainment rate to the ratio of $L_\mathrm{conv}$ and the potential energy of material with positive buoyancy after downward mixing over an eddy scale $l$. The entrainment law (\[eq:mentr\]), the argument of @spruit_15, and the proportionality of the entrainment rate with $L_\mathrm{conv}$ found in the recent work of @jones_16b on entrainment in highly-resolved idealised 3D simulation of O shell burning appear to be different sides of the same coin.
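A rough numerical sketch of the corresponding entrainment rate, using the entrainment law (\[eq:mentr\]) with $A=0.1$ and $n=1$, might look as follows; the chosen radius, density, convective velocity and $\mathrm{Ri}_\mathrm{B}$ are purely illustrative assumptions rather than values from a particular model:

```python
import math

def entrainment_rate(r, rho, v_conv, ri_b, A=0.1, n=1.0):
    """Mass entrainment rate Mdot_entr = 4 pi r^2 rho v_conv A Ri_B^(-n)
    from Eq. (mentr), with all quantities in cgs units."""
    return 4.0 * math.pi * r ** 2 * rho * v_conv * A * ri_b ** (-n)

# Illustrative O-shell values (assumed): r = 5000 km, rho = 5e5 g/cm^3,
# v_conv = 100 km/s, Ri_B = 20
mdot = entrainment_rate(5.0e8, 5.0e5, 1.0e7, 20.0)
msun = 1.989e33   # solar mass in g
print(f"Mdot_entr ~ {mdot / msun:.1e} Msun/s")
```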
Long-Term Effects of Entrainment on the Shell Structure? {#sec:long_term}
--------------------------------------------------------
How much will entrainment affect the shell structure of massive stars in the long term? First numerical experiments based on the entrainment law of @meakin_07 were performed by @staritsin_13 for massive stars on the main sequence [^14] and did not reveal dramatic differences in the size of the convective cores compared to more familiar, calibrated recipes for core overshooting.
Taking Equation (\[eq:mentr2\]) at face value allows some interesting speculations about the situation during advanced burning stages. Since the convective motions ultimately feed on the energy generated by nuclear burning $E_\mathrm{burn}$, we can formulate a time-integrated version of Equation (\[eq:mentr2\]) for the entrained mass $\Delta M_\mathrm{entr}$ over the life time of a convective shell, $$\begin{aligned}
\frac{GM}{r} \frac{\delta \rho}{\rho } \Delta M_\mathrm{entr}
&\lesssim &
A E_\mathrm{burn}, \\
\frac{GM}{r} \frac{\delta \rho}{\rho } \Delta M_\mathrm{entr}
&\lesssim &
A M_\mathrm{shell} \Delta Q,\end{aligned}$$ where $M_\mathrm{shell}$ is the (final) mass of the shell, and $\Delta Q$ is the nuclear energy release per unit mass. With $GM/r \sim 2 e_\mathrm{int}$ in stellar interiors, we can estimate $\Delta M_\mathrm{entr}$ in terms of $\Delta Q$ and the internal energy $e_\mathrm{int}$ at which the burning occurs,[^15] $$\Delta M_\mathrm{entr}
\lesssim A M_\mathrm{shell} \left(\frac{\delta \rho }{\rho}\right)^{-1} \frac{\Delta Q}{2 e_\mathrm{int} }.$$ For O burning at $\sim 2 \times 10^{9} \, \mathrm{K}$ and with $\Delta
Q \approx 0.5 \, \mathrm{MeV}/ \mathrm{nucleon}$, the factor $\Delta
Q/(2 e_\mathrm{int})$ is of order unity. Typically, the density contrast $\delta \rho/\rho$ between adjacent shells is also not too far below unity. Since $A\approx 0.1$, this suggests that the shell growth due to entrainment comes up to at most a few tens of percent during O shell burning unless $\delta \rho/\rho$ is rather small to begin with. Thus, a result of entrainment might be that convective zones may swallow thin, unburnt shells with a small density contrast before bounce, whereas the large entropy jumps between the major shells are maintained and even enhanced as a result of this cannibalisation.
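The order of magnitude of this bound is easy to evaluate. In the snippet below, the nuclear energy release for O burning is taken from the text, the internal energy is chosen such that $\Delta Q/(2 e_\mathrm{int})$ is of order unity as stated above, and the density contrasts are assumed values for illustration only:

```python
def entrained_fraction(A, drho_rho, dQ, e_int):
    """Upper bound on Delta M_entr / M_shell from the time-integrated
    entrainment estimate: A (rho/delta rho) dQ / (2 e_int)."""
    return A / drho_rho * dQ / (2.0 * e_int)

A = 0.1          # entrainment coefficient
dQ = 0.5         # nuclear energy release for O burning (MeV/nucleon)
e_int = 0.25     # internal energy (MeV/nucleon), assumed so that dQ/(2 e_int) ~ 1
for drho_rho in (1.0, 0.3, 0.1):
    frac = entrained_fraction(A, drho_rho, dQ, e_int)
    print(f"delta rho/rho = {drho_rho:3.1f}: Delta M_entr / M_shell <~ {frac:.2f}")
# Shell growth of order 10% for a large density contrast, but potentially of
# order unity if the density jump to the neighbouring shell is small.
```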
For C burning, the long-term effect of entrainment could be somewhat larger than for O burning due to the lower temperature threshold and the higher ratio $\Delta Q/2 e_\mathrm{int}$; for Si burning, the effect should be smaller. During earlier phases our estimates break down because the convective flux carries only a small fraction of the energy generation by nuclear burning. If this is taken into account, the additional growth of convective regions due to entrainment is again of a modest scale [@spruit_15].
Caveats
-------
The estimates for the long-term effect of entrainment on the growth of convective regions in Section \[sec:long\_term\] are to be taken with caution, however. Not only are they crude, time-integrated zeroth-order estimates; the entrainment law (\[eq:mentr2\]) itself is by no means set in stone. Current astrophysical 3D simulations only probe a limited range in the critical parameter $ \mathrm{Ri}_\mathrm{B}$, and tend to suffer from insufficient resolution for high $ \mathrm{Ri}_\mathrm{B}$, as shear instabilities develop on smaller and smaller scales.
As a result, it cannot be excluded that the entrainment law (\[eq:mentr\]) transitions to a steeper slope in the astrophysically relevant regime of high $
\mathrm{Ri}_\mathrm{B}$. Experiments also contend with the difficulties of a limited dynamic range in Reynolds, Prandtl, and Péclet number, and remain inconclusive about the regime of high $\mathrm{Ri}_\mathrm{B}$ that obtains in stellar interiors. Power-law exponents larger than $n=1$ (up to $n=7/4$) have also been reported in this regime [@fernando_91; @strang_01; @fedorovich_04]. A power-law exponent $n>1$ would imply a strong suppression of entrainment in stellar interiors under most circumstances, and the long-term effect of entrainment would be negligible. Moreover, magnetic fields will affect the shear-driven instabilities responsible for convective boundary mixing [@brueggen_01].
Finally, most of the current 3D simulations of convective boundary mixing suffer from another potential problem: the balance between nuclear energy generation and neutrino cooling that obtains during quasi-stationary shell burning stages is typically violated, or neutrino cooling is not modelled at all. @jones_16b pointed out that this may be problematic if neutrino cooling decelerates the buoyant convective plumes and reduces the shear velocity at the interfacial boundary. Only sufficiently long simulations will be able to clarify whether the strong entrainment seen in some numerical simulations is robust or (partly) specific to a transient adjustment phase.
Thus, it remains to be seen whether convective boundary mixing has significant effects on the structure of supernova progenitors. Even if it does, it is not clear whether it will qualitatively affect the landscape of supernova progenitors. The general picture of the evolution of massive stars may stay well within the bounds of the variations that have been explored already, albeit in a more parametric way [see, e.g., @sukhbold_14].
CONCLUSIONS
===========
It is evident that our understanding of the supernova explosion mechanism has progressed considerably over the last few years. While simulations of core-collapse supernovae have yet to demonstrate that they can correctly reproduce and explain the whole range of explosions that is observed in nature, there are plenty of ideas for solving the remaining problems. Some important milestones from the last few years have been discussed in this paper, and can be summarised as follows:
- ECSN-like explosions of supernova progenitors with the lowest masses ($8\ldots
10 M_\odot$) can be modelled successfully both in 2D and in 3D. Regardless of the precise evolutionary channel from which they originate, supernovae from the transition region between the super-AGB star channel and classical iron-core collapse supernovae share similar characteristics, i.e. low explosion energies of $\mathord{\sim}10^{50}
\, \mathrm{erg}$ and small nickel masses of a few $10^{-3} M_\odot$. Due to the ejection of slightly neutron-rich material in the early ejecta, they are an interesting source for the production of the lighter neutron-rich trans-iron elements (Sr, Y, Zr), and are potentially even a site for a weak r-process up to Ag and Pd [@wanajo_11]. An unambiguous identification of ECSN-like explosions among observed transients is still pending, although there are various candidate events.
- Though it has yet to be demonstrated that the neutrino-driven explosion mechanism can robustly account for the explosions of more massive progenitors, first successful 3D models employing multi-group neutrino transport have recently become available. The reluctance of the first 3D models to develop explosions due to the different nature of turbulence in 3D has proved to be no insurmountable setback, and even the unsuccessful 3D models computed so far appear to be close to explosion.
- Some of the recent 2D models produced by different groups [@summa_16; @oconnor_16] show similar results, which inspires some confidence that the simulations are now at a stage where modelling uncertainties due to different numerical methodologies are under reasonable control, though they have not been completely eliminated yet. We have addressed some of the sensitivities to the modelling assumptions in this paper, including possible effects of numerical resolution as well as various aspects of the neutrino transport treatment.
- Recent studies have helped to unravel how the interplay between neutrino heating and hydrodynamic instabilities works quantitatively, and they have clarified why neutrino-driven explosions can be obtained with a considerably smaller driving luminosity in multi-D.
- There are a number of ideas about missing physics that could make the neutrino-driven mechanism robust for a wider range of progenitors. These include rapid rotation ([@nakamura_14; @janka_16]; though stellar evolution makes this unlikely as a generic explanation), changes in the neutrino opacities [@melson_15b], and a stronger forcing of non-radial instabilities due to seed perturbations from convective shell burning [@couch_13; @couch_15; @mueller_15a; @mueller_16b].
- 3D initial conditions for supernova simulations have now become available [@couch_15; @mueller_16b], and promise to play a significant and beneficial role in the explosion mechanism. A first 3D multi-group simulation starting from a 3D initial model of an $18
M_\odot$ progenitor has been presented in this review. The model has already reached an explosion energy of $5 \times 10^{50} \, \mathrm{erg}$, and suggests that the observed range of explosion energies may be within reach of 3D simulations.
- Nonetheless, the study of 3D effects in supernova progenitors is still in its infancy. A thorough exploration of the parameter space is required in order to judge whether they are generically important for our understanding of supernova explosions. This is not only true with regard to the 3D pre-collapse perturbations from shell burning that are crucial to the “perturbation-aided” neutrino-driven mechanism. The role of convective boundary mixing in shaping the structure of supernova progenitors also deserves to be explored.
Many of these developments are encouraging, though there are also hints of new uncertainties that may plague supernova theory in the future. Whether the new ideas of recent years will prove sufficient to explain shock revival in core-collapse supernovae remains to be seen. The prospects are certainly good, but obviously a lot more remains to be done before simulations and theory can fully explain the diversity of core-collapse events in nature. There is no need to fear a shortage of fruitful scientific problems concerning the explosions of massive stars.
The author acknowledges fruitful discussions with R. Bollig, A. Burrows, S. Couch, E. Lentz, Th. Foglizzo, A. Heger, F. Herwig, W. R. Hix, H.-Th. Janka, S. Jones, T. Melson, R. Kotak, J. Murphy, K. Nomoto, E. O’Connor, L. Roberts, S. Smartt, H. Spruit, and M. Viallet. Particular thanks go to A. Heger, S. Jones, and K. Nomoto for providing density profiles of ECSN-like progenitors for Figure \[fig:threshold\], to H.-Th. Janka for critical reading, and to T. Melson and M. Viallet for long-term assistance with the development of the <span style="font-variant:small-caps;">Prometheus</span> code. Part of this work has been supported by the Australian Research Council through a Discovery Early Career Researcher Award (grant DE150101145). This research was undertaken with the assistance of resources from the National Computational Infrastructure (NCI), which is supported by the Australian Government. This work was also supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia, and by the National Science Foundation under Grant No. PHY-1430152 (JINA Center for the Evolution of the Elements). Computations were performed on the systems *raijin* (NCI) and *Magnus* (Pawsey), and also on the IBM iDataPlex system *hydra* at the Rechenzentrum of the Max-Planck Society (RZG) and at the Minnesota Supercomputing Institute.
The Density Gradient in the Post-Shock Region {#sec:app_gradient}
=============================================
Neglecting quadratic terms in the velocity and neglecting the self-gravity of the material in the gain region, one can write the momentum and energy equations for quasi-stationary accretion onto the proto-neutron star in the post-shock region as, $$\begin{aligned}
\frac{1}{\rho}\frac{{\partial}P}{{\partial}r}
&=&-\frac{GM}{r^2}, \\
\label{eq:a2}
\frac{{\partial}}{{\partial}r}\left(h-\frac{GM}{r}\right)
&=&\frac{\dot{q}_\nu}{v_r},\end{aligned}$$ in terms of the pressure $P$, the density $\rho$, the proto-neutron star mass $M$, the enthalpy $h$, the mass-specific net neutrino heating rate $\dot{q}_\nu$, and the radial velocity $v_r$. For a radiation-dominated gas, one has $h\approx 4P/\rho$, which implies, $$\label{eq:a3}
\frac{1}{4}
\frac{{\partial}h}{{\partial}r}
+\frac{h}{4}\frac{{\partial}\ln \rho}{{\partial}r}
=
-\frac{GM}{r^2},$$ and by taking ${\partial}h/{\partial}r$ from Equation (\[eq:a2\]), $$\label{eq:a4}
\frac{\dot{q}_\nu}{4v_r}+
\frac{h}{4}\frac{{\partial}\ln \rho}{{\partial}r}
=
-\frac{3GM}{4 r^2 }.$$ Solving for the local power-law slope $\alpha={\partial}\ln \rho /{\partial}\ln r$ of the density yields, $$\alpha
=
-\frac{3GM}{r h}-\frac{r \dot{q}_\nu}{v_r h}.$$ Since $\dot{q}_\nu>0$ and $v_r<0$ in the gain region before shock revival, this implies a power-law slope $\alpha$ that is no steeper than, $$\alpha
\geq
-\frac{3GM}{r h}.$$
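The algebra leading to this power-law slope can be verified symbolically. The sketch below merely re-derives $\alpha$ from Equations (\[eq:a2\]) and (\[eq:a3\]) under the same assumptions (radiation-dominated gas with $h \approx 4P/\rho$, quasi-stationary flow); the symbol names are arbitrary:

```python
import sympy as sp

r, G, M, h, qdot, vr = sp.symbols('r G M h qdot v_r')
dlnrho = sp.symbols('dlnrho_dr')

# Energy equation (a2): d/dr (h - GM/r) = qdot/v_r  =>  dh/dr = qdot/v_r - GM/r^2
dhdr = qdot / vr - G * M / r ** 2

# Momentum equation for a radiation-dominated gas (a3):
# (1/4) dh/dr + (h/4) dln(rho)/dr = -GM/r^2
eq_a3 = sp.Eq(sp.Rational(1, 4) * dhdr + h / 4 * dlnrho, -G * M / r ** 2)

# Local power-law slope alpha = dln(rho)/dln(r) = r dln(rho)/dr
alpha = sp.expand(r * sp.solve(eq_a3, dlnrho)[0])
print(alpha)   # -3*G*M/(h*r) - qdot*r/(h*v_r), as quoted above
```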
[^1]: Whether the core continues to collapse to a neutron star depends critically on the details of the subsequent initiation and propagation of the oxygen deflagration during the incipient collapse [@isern_91; @canal_92; @timmes_92; @schwab_15; @jones_16].
[^2]: This compact designation for $L_\nu E_\nu^2$ has been suggested to me by H.-Th. Janka.
[^3]: $\epsilon$ is given in terms of the mean-square $\langle E^2\rangle$ and the mean energy $\langle E \rangle$, as $\epsilon=\langle E^2\rangle/
\langle E \rangle$. @tamborra_12 can be consulted for the ratio of the different energy moments during various evolutionary phases.
[^4]: The dynamical reasons for this difference between 1D and multi-D models have yet to be investigated. Conceivably shorter exposure to neutrino heating in 2D due to faster expansion (which is responsible for the lower $Y_e$) also decreases the final entropy of the ejecta.
[^5]: The <span style="font-variant:small-caps;">FMT</span> neutrino transport scheme cannot be relied upon for precise predictions of the value of $Y_e$, but should be sufficiently accurate for exploring differential effects such as differences between plume expansion in 2D and 3D.
[^6]: Note that rest-mass contributions to the internal energy are excluded in this definition.
[^7]: This argument holds only for stationary 1D flow, however. In multi-D, the antesonic condition becomes sensitive to fluctuations in the sound speed, which limits its usefulness as diagnostic for the proximity to explosion. The fluctuations will be of order $\delta
c_\mathrm{s}/c_\mathrm{s} \sim \delta \rho/\rho$, i.e. of the order of the square of the turbulent Mach number. This explains why high values of $c_\mathrm{s}^2/v_\mathrm{esc}^2$ are encountered in multi-D even in non-exploding models [@mueller_12a]. A similar problem occurs if the shock starts to oscillate strongly in 1D close to the runaway threshold.
[^8]: This is not at odds with the findings of @murphy_08b and @couch_14, who noticed that the neutrino heating rate in light-bulb and leakage-based multi-D simulations at runaway is *smaller* than in 1D. Due to a considerably different pressure and density stratification (cf. Figure 3 in @couch_14, which shows a very steep pressure gradient behind the shock in the critical 1D model), the gain region needs to become much more massive in 1D than in multi-D before the runaway condition $\tau_\mathrm{adv}/\tau_\mathrm{heat}>1$ is met. Therefore *both* the neutrino heating rate $\dot{Q}_\nu$ and the binding energy $E_\mathrm{tot}$ of the gain region are higher around shock revival in 1D (as both scale with $M_\mathrm{gain}$).
[^9]: Note that different sign conventions for $\omega_\mathrm{BV}$ are used in the literature; here $\omega_\mathrm{BV}^2>0$ corresponds to instability.
[^10]: One should bear in mind, though, that rotation also decreases the neutrino luminosity and mean neutrino energy because it leads to larger neutron star radii [@marek_09].
[^11]: I am indebted to T. Foglizzo for pointing out that this conversion of velocity perturbations into density perturbations is another instance of advective-acoustic coupling [@foglizzo_01; @foglizzo_02], so that there is a deep, though not immediately obvious, connection with the physics of the SASI.
[^12]: The story may be different for angular momentum transport in convective zones, which deserves to be revisited (see @chatzopoulos_16 for a current study in the context of Si and O shell burning).
[^13]: The width of this region will be determined by the criterion that the gradient Richardson number is about $1/4$.
[^14]: It is doubtful whether entrainment operates efficiently for core H burning, though. Here diffusivity effects are not negligible for convective boundary mixing, which is thus likely to take on a different character [@viallet_15].
[^15]: $e_\mathrm{int}$ at the shell boundary may be the more relevant scale, but the convective luminosity typically decreases even more steeply with $r$ than $e_\mathrm{int}$, so our estimate is on the safe side for formulating an upper limit.
---
abstract: 'Considering a system composed of two different thermoelectric modules electrically and thermally connected in parallel, we demonstrate that the inhomogeneities of the thermoelectric properties of the materials may cause the appearance of an electrical current, which develops inside the system. We show that this current increases the effective thermal conductance of the whole system. We also discuss the significance of a recent finding concerning a reported new electrothermal effect in inhomogeneous bipolar semiconductors, in light of our results.'
author:
- 'Y. Apertet'
- 'H. Ouerdane'
- 'C. Goupil'
- 'Ph. Lecoeur'
title: Thermoelectric internal current loops inside inhomogeneous systems
---
Thermoelectric power generation is a promising way to achieve efficient waste energy harvesting. To ensure a high heat-to-electrical power conversion efficiency, the thermal conductances of the materials used for thermoelectric modules (TEM) have, in principle, to be as low as possible [@Shakouri2011]. Fu *et al* [@Fu2011] recently reported on an electrothermal process that can modify the effective thermopower of semiconductor devices. In particular, they claimed that the joint application of a temperature gradient and an electric field (perpendicular to each other) to a bipolar semiconductor structure induces steady current vortices (even in an open-circuit configuration), which in turn yield Joule heating whose effect is to lower the thermal conductivity. It is thus worthwhile to check whether this effect may be used to improve the so-called figure of merit $ZT$ of bipolar semiconductor structures to a significant degree.
The theoretical prediction of Fu *et al* [@Fu2011] that internal current vortices formed at a pn junction provide a way to reduce thermal conductivity in practical devices deserves closer inspection. In this Brief Report, using a macroscopic description of a two-leg TEM, we demonstrate that an internal current also gives rise to *advective thermal transport* which, unfortunately for practical applications, largely compensates the effect proposed by Fu *et al* [@Fu2011] and hence effectively lowers $ZT$. Studying the simple case of two thermoelectric modules connected in parallel both thermally and electrically, we suggest that internal currents caused by a temperature gradient are not directly linked to the transverse electrical field as supposed in Ref. [@Fu2011] but rather caused by thermoelectric inhomogeneities inside the materials.
To gain insight into the main features of internal current loops, let us consider two thermoelectric modules TEM$_1$ and TEM$_2$, and the equivalent module, denoted TEM$_{\rm eq}$, resulting from their association in parallel, both electrically and thermally as shown in Fig. \[fig:figure1\]. Each of them is characterized by its isothermal electrical conductance $G_{i}$, its thermal conductance under open electrical circuit condition $K_{0,i}$ and its Seebeck coefficient $\alpha_{i}$, where $i$ can be $1$,$2$ or ${\rm eq}$ as appropriate. All these coefficients are supposed constant. The whole system is subjected to a temperature difference $\Delta T=T_{\rm hot}-T_{\rm cold}$, and its average temperature is $T$.
![Coupled thermoelectric modules (top) and equivalent module (bottom).[]{data-label="fig:figure1"}](figure1){width="45.00000%"}
Using linear response theory, we can express each electrical current $I_i$ and thermal flux $I_{Q_{_{i}}}$ as functions of generalized forces related to the temperature difference $\Delta T$ and voltage difference $\Delta V$ to which the TEM is subjected. The differences $\Delta V$ and $\Delta T$ are the same for TEM$_1$ and TEM$_2$, and hence for TEM$_{\rm eq}$, by construction. The relation between the fluxes and the forces is given by [@Callen1948]:
$$\label{frcflx}
\left(
\begin{array}{c}
I_i\\
I_{Q_{_{i}}}\\
\end{array}
\right)
=
G_i
\left(
\begin{array}{cc}
1~ & ~\alpha_i\\
\alpha_i T~ & ~\alpha_i^2 T + K_{0,i}/G_i\\
\end{array}
\right)
\left(
\begin{array}{c}
\Delta V\\
\Delta T\\
\end{array}
\right),$$
from which we obtain a quite simple expression of the thermal flux:
$$\label{eq:IQI}
I_{Q_{_{i}}}=\alpha_i T I_i+K_{0,i}\Delta T,$$
This equation shows that two distinct processes contribute to the thermal transport: one is linked to thermal conduction by both phonons and electrons when there is no current flowing inside the structure (the term in $K_0\Delta T$), the other to electrical current flow (the term in $\alpha T I$). Since this second contribution is associated with a macroscopic displacement of electrons, it may be termed thermal transport by [*electronic advection*]{}, and the heat quantity transported by each electron [@Pottier2007] is given by $|\alpha| T e$, $e$ being the elementary electric charge. This notion of [*electronic advection*]{} is central to explaining the increase of thermal conductance when an internal current develops inside the structure. We stress that the additional term should not be confused with the electronic part of $K_0$, which is used, for example, in the Wiedemann-Franz law. In Fig. \[fig:figure1\], this is represented by the added thermal conductance parameterized by the electrical current.
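As a simple numerical illustration of the advective term, the snippet below evaluates the heat carried per electron and the corresponding advective heat flux; the values of $\alpha$, $T$ and $I$ are arbitrary but typical of a good thermoelectric material near room temperature:

```python
e = 1.602e-19    # elementary charge (C)
alpha = 200e-6   # Seebeck coefficient (V/K), assumed
T = 300.0        # mean temperature (K)
I = 1.0          # electrical current (A), assumed

q_per_carrier = abs(alpha) * T * e            # heat carried per electron (J)
print(f"|alpha| T e = {q_per_carrier:.2e} J = {abs(alpha) * T:.3f} eV per carrier")
print(f"advective heat flux alpha T I = {alpha * T * I:.3f} W")
```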
Besides the constitutive laws for each module given by Eq. (\[frcflx\]), there are additional relations linked to the parallel configuration that must be accounted for. First, we ensure electrical current conservation:
$$\label{eq:conservationI}
I_{\rm eq}=I_1+I_2$$
Next, we consider that the mean thermal flux flowing through TEM$_{\rm eq}$ is the sum of the two mean thermal fluxes flowing through TEM$_1$ and TEM$_2$; hence
$$\label{eq:conservationIQ}
I_{Q_{_{\rm eq}}}=I_{Q_{_{1}}}+I_{Q_{_{2}}}$$
Now, using Eqs. (\[frcflx\]), (\[eq:conservationI\]) and (\[eq:conservationIQ\]), we are free to choose the thermal and electrical configurations that permit an easy derivation of the equivalent parameters of the system as a whole. Under the isothermal condition $\Delta T=0$, Eq. (\[eq:conservationI\]) leads to:
$$\label{eq:conductanceeff}
G_{\rm eq}=G_1 +G_2,$$
Under the closed-circuit condition $\Delta V=0$, Eq. (\[eq:conservationI\]) leads to $G_{\rm eq}\alpha_{\rm eq}=G_{1}\alpha_{1}+G_{2}\alpha_{2}$, so that the equivalent thermopower $\alpha_{\rm eq}$ is defined as the weighted average of the two Seebeck coefficients $\alpha_{1}$ and $\alpha_{2}$:
$$\label{eq:alphaeff}
\alpha_{\rm eq}=\frac{G_1 \alpha_1 + G_2 \alpha_2}{G_1 + G_2} ,$$
This equation is the same as the one given by Hicks and Dresselhaus (Eq. (2) of Ref. [@Hicks1993]) for a semiconductor with two conduction bands. The correspondence between the two is explained by the fact that each conduction band can be associated with a medium where electrons flow in parallel. From this result we see that the effective Seebeck coefficient cannot exceed the larger of the two materials' coefficients.
The equivalent thermal conductance is now determined under open circuit condition, $I_{\rm eq}=0$. For nonzero values of $I_1$ and $I_2$, this condition is satisfied for $I_1=-I_2$; since the conservation of the thermal flux (\[eq:conservationIQ\]) remains valid, we obtain:
$$\label{eq:condthermeff0}
K_{0,{\rm eq}}=\frac{(\alpha_1 -\alpha_2) T I_1}{\Delta T}+K_{0,1}+K_{0,2},$$
using Eq. (\[eq:IQI\]). To proceed, we determine the intensity $I_1$ as follows. Under the open circuit condition $I_{\rm eq}=0$, the voltage reads $\Delta V=-\alpha_{\rm eq}\Delta T$, which we include in the expression of $I_1$ given by Eq. (\[frcflx\]) to obtain:
$$\label{eq:I1}
I_1=\frac{G_1G_2}{G_1+G_2}(\alpha_1 - \alpha_2)\Delta T$$
This equation shows that there is a non-zero electrical current flowing as long as the two Seebeck coefficients are different. Now, the substitution of the obtained expression of $I_1$ into Eq. (\[eq:condthermeff0\]), yields the equivalent thermal conductance at zero current:
$$\label{eq:condthermeff}
K_{0,{\rm eq}}=K_{0,1}+K_{0,2}+\frac{G_1G_2}{G_1+G_2}(\alpha_1 - \alpha_2)^2T$$
The above formula exhibits an additional term beyond the sum of the thermal conductances of the two modules. This term is related to the internal current that develops inside the structure when it is submitted to a temperature gradient. This current is proportional to the difference between the Seebeck coefficients of the two legs. The heat transported by this current is proportional to this difference too, so the increase in thermal conductance is proportional to $(\alpha_1-\alpha_2)^2$.
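These closed-form results are easy to check numerically. The sketch below uses arbitrary illustrative module parameters; it computes the equivalent coefficients from Equations (\[eq:conductanceeff\]), (\[eq:alphaeff\]) and (\[eq:condthermeff\]) and verifies them against a direct solution of the constitutive relations (\[frcflx\]) under the open-circuit condition, also evaluating the Joule power dissipated by the internal loop in anticipation of Eq. (\[eq:Pjoule\]) below:

```python
# Two thermoelectric modules in parallel (illustrative values, SI units)
G1, G2 = 2.0, 1.0            # isothermal electrical conductances (S)
K1, K2 = 0.010, 0.008        # open-circuit thermal conductances (W/K)
a1, a2 = 200e-6, 120e-6      # Seebeck coefficients (V/K)
T, dT = 300.0, 1.0           # mean temperature and temperature difference (K)

# Closed-form equivalent coefficients
G_eq = G1 + G2
a_eq = (G1 * a1 + G2 * a2) / G_eq
K_eq = K1 + K2 + G1 * G2 / G_eq * (a1 - a2) ** 2 * T

# Direct check: open circuit => dV = -a_eq dT and an internal loop I1 = -I2
dV = -a_eq * dT
I1 = G1 * (dV + a1 * dT)
I2 = G2 * (dV + a2 * dT)
IQ = a1 * T * I1 + K1 * dT + a2 * T * I2 + K2 * dT   # total thermal flux
P_joule = I1 ** 2 * (1.0 / G1 + 1.0 / G2)

print(f"I1 = {I1:.3e} A, I1 + I2 = {I1 + I2:.1e} A (open circuit)")
print(f"K_0,eq = {K_eq:.8f} W/K, from the fluxes: {IQ / dT:.8f} W/K")
print(f"Joule power of the internal loop: {P_joule:.3e} W")
```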
As an internal current is generated, energy is dissipated by the Joule effect. The total dissipated power is given by the sum of the power dissipated in each part of the whole system: $P_{\rm Joule} = I_{\rm int}^2(1/G_1+1/G_2)$, where $I_{\rm int}~(\equiv I_1)$ is given by Eq. (\[eq:I1\]); an explicit expression is,
$$\label{eq:Pjoule}
P_{\rm Joule}=\frac{G_1G_2}{G_1+G_2}(\alpha_1 - \alpha_2)^2\Delta T^2.$$
The dependence on the square of the difference of the Seebeck coefficients shows that the more dissimilar the two modules are in terms of thermopower, the larger the Joule dissipation. Here, there is no need to assume, as one would for the classical generator with two legs, that each module has a different doping type. We also note that the internal current is stronger for materials with high electrical conductance, hence metals are more sensitive to small inhomogeneities in the Seebeck coefficient. This explains why this effect is exploited in non-destructive testing to probe metallic inclusions in a host metal [@Nayfeh2002; @Kleber2005; @Carreon2003].
Let us remark that for inclusions at the nanoscale the previous statement about the increase of the thermal conductance no longer stands since these inclusions have a strong impact on thermal conduction by phonons. For example Kim *et al* [@Kim2006] demonstrated that nanoinclusions are efficient to lower the thermal conductivity in InGaAs.
We now turn to the analysis of the paper of Fu *et al* [@Fu2011] using our Eq. (\[eq:condthermeff\]), which allows us to explain the behavior of a pn junction submitted to a transverse thermal gradient under the open circuit condition. A typical Pisarenko plot [@Ioffe1957], i.e. the Seebeck coefficient plotted against the carrier concentration, shows that the Seebeck coefficient is higher for low carrier density. In a pn junction a depleted zone forms and develops on each side of the interface over a few hundreds of nanometers depending on the doping concentration: in these two regions, the Seebeck coefficients increase locally and thus become greater than those in the quasi-neutral regions. Since the carrier concentration inhomogeneity is transverse to the applied temperature gradient, this situation is similar to the simple one studied in the present paper, and we can expect internal current vortices to develop as described in [@Fu2011]. We believe that the electric field present in the space charge zone is a consequence of the carrier depletion and is not a cause *per se* of the internal current generation: in a structure where one can tune the Seebeck coefficient without changing the carrier density we expect to find the same behavior. However, for the particular case of a pn junction the Seebeck inhomogeneity and the transverse electric field are closely connected through the carrier depletion zone; therefore linking, in this case, the internal current either to thermoelectric inhomogeneities or to a transverse electric field essentially amounts to expressing two viewpoints on the same phenomenon. Besides, the appearance of two vortices, one on each side of the junction, is due to the potential barrier arising at the interfaces: each type of carrier is confined to its own side so that these two separate systems have no influence on each other except when one of the sides becomes thinner than the depleted zone.
We state further that a decrease of the thermal conductance is not possible in such circumstances: Fu and co-workers [@Fu2011] overlooked the advective part of the thermal flux. According to these authors, the reduction is due to the fact that half of the energy dissipated by the Joule effect is actually going back to the hot side. This heat quantity should be compared to the one transported by *electronic heat advection*: the ratio between these two quantities scales as $\Delta T/2T$. In the framework of linear theory, the temperature difference $\Delta T$ is assumed to be small, so the heat flowing back to the hot reservoir is small compared to the *advection* part: internal currents increase the thermal conductance; they do not decrease it. The generation of internal currents in spatially inhomogeneous systems such as multilayer structures was discussed twenty years ago by Saleh *et al* [@Saleh1991]; our analysis is consistent with their conclusions.
The specific case of TEMs with both electrical and thermal parallel configurations satisfies Bergman’s theorem, which states that the figure of merit $ZT$ of a composite material cannot be greater than the larger $ZT$ of the constituents [@Bergman1991]. We have shown that the Seebeck coefficient is lower than the larger one of the two, Eq. (\[eq:alphaeff\]). The effective electrical conductance of two conductances in parallel is always greater than the larger one, but, since $ZT\propto G/K_0$ for a given temperature $T$, this increase is counterbalanced by the fact that the thermal conductances behave similarly when no internal current develops. Moreover, considering a TEM designed with components that have different Seebeck coefficients, the increase in equivalent thermal conductance is even stronger. So, as expressed by Bergman and Levy [@Bergman1991], the figure of merit of the TEM$_{\rm eq}$ can only be lower than the $ZT$ of the more efficient TEM. Indeed, the addition of an internal current can only lower the figure of merit, since it adds a dissipative process to the system. A full generalization of this result to composite materials necessitates the derivation of an expression of the effective Seebeck coefficient for a configuration where TEMs are both thermally and electrically in series. Since a composite system may be viewed as a network of TEMs, a general formulation can, in principle, be obtained; however, we anticipate that such a derivation, which is beyond the scope of the present Brief Report, can be quite tricky. The simple example given in the present paper suffices to gain insight into the problem of internal currents and their properties.
To end this Brief Report, we make two additional remarks:
Assuming that the properties of the thermoelectric modules are temperature-dependent, it would be interesting to check if it is possible to create instabilities inside the module analogous to the Rayleigh-Bénard convection phenomenon which occurs in conventional fluids subjected to a thermal gradient.
The present work and the cited ones on current loops bring us back to the original interpretation of the thermoelectric effect made by Seebeck who thought that the temperature difference led to magnetism whereas it was the internal current of the structure that created the magnetic field, which deviated the compass needle [@Seebeck].\
#### Acknowledgments {#acknowledgments .unnumbered}
This work is part of the CERES 2 and ISIS projects funded by the Agence Nationale de la Recherche. Y. A. acknowledges financial support from the Ministère de l’Enseignement Supérieur et de la Recherche.
---
abstract: 'We point out that the observed decay mode of the pion and the Kaon decay puzzle are really imprints of discrete micro space-time.'
author:
- |
B.G. Sidharth\
Centre for Applicable Mathematics & Computer Sciences\
B.M. Birla Science Centre, Hyderabad 500 063
title: 'Imprints of Discrete Space Time - A Brief Note'
---
In recent years ideas of discrete space time have been revived through the work of several scholars and by the author within the context of Kerr-Newman Black Hole type formulation of the electron[@r1]-[@r5]. Further, even more recently this has been considered in the context of a stochastic underpinning[@r6; @r7]. Let us now consider two of the imprints that such discrete space time would have.\
First we consider the case of the neutral pion. Within the framework of the Kerr-Newman metric type formulation referred to above, it is possible to recover the usual picture of a pion as a quark-anti quark bound state [@r8; @r9], though equally well we could think of it as an electron-positron bound state also[@r4; @r10]. In this case we have, $$\frac{mv^2}{r} = \frac{e^2}{r^2}\label{e1}$$ Consistently with the above formulation, if we take $v = c$ from (\[e1\]) we get the correct Compton wavelength $l_\pi = r$ of the pion.\
However, this appears to go against the fact that there would be pair annihilation with the release of two photons. If we consider discrete space time, however, the situation is different. In this case the Schrodinger equation $$H \psi = E \psi\label{e2}$$ where $H$ contains the above Coulomb interaction, could be written, in terms of the space and time separated wave function components, as (Cf. also ref. [@r2]), $$H\psi = E \phi T = \phi \imath \hbar [\frac{T(t-\tau)-T}{\tau}]\label{e3}$$ where $\tau$ is the minimum time cut off which in the above work has been taken to be the Compton time (Cf. refs. [@r4] and [@r5]). If, as usual, we let $T = \exp (irt)$ we get $$E = -\frac{2\hbar}{\tau} \sin \frac{\tau r}{2}\label{e4}$$ Equation (\[e4\]) shows that if $$| E | < \frac{2\hbar}{\tau}\label{e5}$$ holds, then there are stable bound states. Indeed inequality (\[e5\]) holds good when $\tau$ is the Compton time and $E$ is the total energy $mc^2$. Even if inequality (\[e5\]) is reversed, there are decaying states which are relatively stable around the cut off energy $\frac{2\hbar}{\tau}$.\
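As a minimal numerical illustration, the Python sketch below takes $\tau$ to be the pion Compton time and checks inequality (\[e5\]) against the total energy $mc^2$; the neutral-pion rest energy used here is an approximate input value, not a quantity taken from the text.

```python
# Minimal check of inequality (e5) for the neutral pion, with the cutoff
# time tau taken to be the Compton time hbar/(m c^2).
hbar = 6.582e-22        # MeV s (reduced Planck constant)
m_pi_c2 = 135.0         # MeV, approximate neutral-pion rest energy

tau = hbar / m_pi_c2            # Compton time, ~4.9e-24 s
bound = 2.0 * hbar / tau        # right-hand side of inequality (e5), = 2 m c^2
E = m_pi_c2                     # total energy entering Eq. (e4)

print(f"tau = {tau:.3e} s, 2*hbar/tau = {bound:.0f} MeV, E = {E:.0f} MeV")
print("stable bound state condition |E| < 2*hbar/tau:", abs(E) < bound)
```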
This is the explanation for treating the pion as a bound state of an electron and a positron, as indeed is borne out by its decay mode. The situation is similar to the case of Bohr orbits: there also, the electrons would, according to classical ideas, have collapsed into the nucleus and the atoms would have disappeared. In this case it is the discrete nature of space time which enables the pion to be a bound state as described by (\[e1\]).\
Another imprint of discrete space time can be found in the Kaon decay puzzle, as pointed out by the author [@r11]. There also we have equations like (\[e2\]) and (\[e3\]) above, with the energy term being given by $E(1 + i)$, due to the fact that space time is quantized. Not only is the fact that the imaginary and real parts of the energy are of the same order borne out, but, as pointed out in [@r11], this also explains the recently observed [@r12] decay and violation of time reversal symmetry, about which, in the words of Penrose [@r13], “the tiny fact of an almost completely hidden time-asymmetry seems genuinely to be present in the $K^0$-decay. It is hard to believe that nature is not, so to speak, trying to tell something through the results of this delicate and beautiful experiment.”\
From an intuitive point of view, the above should not be surprising because time reversal symmetry is based on a space time continuum and is no longer obvious if space time were discrete.
[99]{} Bombelli, L., Lee, J., Meyer, D., Sorkin, R.D., Phys. Rev. Lett. 59, 1987, 521. Wolf C., Nuovo. Cim. B 109 (3), 1994, 213. Lee, T.D., Phys. Lett. 12 (2B), 1983, 217. Sidharth, B.G., Ind. J. Pure and Applied Phys., 35 (7), 1997, 456. Sidharth, B.G., IJMPA, 13 (15), 1998, 2599. Also xxx.lanl.gov quant-ph 9808031. Sidharth, B.G., Chaos, Solitons and Fractals, 11 (8), 2000, 1269-1278. Sidharth, B.G., Chaos, Solitons and Fractals, 11 (8), 2000, 1171-1174. Sidharth, B.G., and Lobanov, Yu Yu, Proceedings of Frontiers of Fundamental Physics, (Eds.)B.G. Sidharth and A. Burinskii, Universities Press, Hyderabad, 1999. B.G. Sidharth, Mod.Phys. Lett. A., Vol. 12 No.32, 1997, pp2469-2471. B.G. Sidharth, Mod.Phys. Lett. A., Vol. 14 No. 5, 1999, pp387-389. Sidharth, B.G., Chaos, Solitons and Fractals, 11, 2000, 1045-1046. Angelopoulos, A., et al., Phys. Lett. B 444, 1998, 43. Hawking, S., Israel, W., General Relativity: An Einstein Centenary Survey, Cambridge University Press, Cambridge, 1979.
---
abstract: 'Electric machines with very high power-to-weight ratios are indispensable for hybrid-electric aircraft applications. One potential technology that is very promising for achieving the required power-to-weight ratio for short-range aircraft is the use of superconductors for high current densities in the stator or high magnetic fields in the rotor. In this paper, we present an in-depth analysis of the potential of fully and partially superconducting electric machines that is based on an analytical approach taking into account all relevant physical domains such as electromagnetics, superconducting properties, thermal behavior as well as structural mechanics. For the requirements of the motors in the NASA N3-X concept aircraft, we find that fully superconducting machines could achieve a 3.5 times higher power-to-weight ratio than partially superconducting machines. Furthermore, our model can be used to calculate the relevant KPIs such as mass, efficiency and cryogenic cooling requirements for any other machine design.'
address:
- '^1^ Siemens AG, Corporate Technology, Postbox 32 20, 91050 Erlangen, Germany'
- '^2^ Siemens AG, eAircraft, Willy-Messerschmitt-Str. 1, 82024 Taufkirchen, Germany'
- '^3^ Institute of Technical Physics, Karlsruhe Institute of Technology, 76344 Eggenstein-Leopoldshafen, Germany'
- '\*Present address: Max-Planck-Institute of Quantum Optics, 85741 Garching, Germany'
author:
- 'Matthias Corduan^1^, Martin Boll^2^, Roman Bause^2^\*, Marijn P. Oomen^1^, Mykhaylo Filipenko^2^, Mathias Noe^3^'
bibliography:
- 'arXiv.bib'
title: 'Topology comparison of superconducting AC machines in hybrid-electric aircraft'
---
Introduction
============
Stringent emission targets have been set by the European Commission in the Flightpath 2050 vision, which demands a 75% reduction in CO~2~, a 90% reduction in NO~x~ and 65% lower perceived noise emissions, compared with a typical new aircraft in 2000 [@EuropaischeKommission.2011]. One possible way to achieve these goals is to replace the conventional drive train system by a hybrid-electric one. Detailed studies at the aircraft level suggest that remarkable reductions in emissions can be achieved. For instance, the N3-X concept aircraft - a turbo-electrically powered blended-wing body aircraft for roughly 300 PAX - could have 70% lower emissions than a comparable conventionally powered design without any additional restriction on cruise speed or payload. However, the technological feasibility of this concept highly depends on the availability of very light-weight electric machines and components. For the N3-X, electric machines with a power-to-weight ratio $PTW$ of at least $12.7\,\mathrm{kW\,kg^{-1}}$ are required [@Felder.2011].\
As the PTW of electric machines is proportional to the airgap magnetic field and the stator current loading ($PTW \sim B_{\mathrm{gap}} \cdot A$), superconducting materials offer great potential, providing rotor fields larger than those of permanent magnets and much higher current densities in the stator compared to copper. Thus, many authors claim that combining both advantages could result in machines with a very high power-to-weight ratio. However, as the AC losses in superconductors scale non-linearly with the magnetic field they are exposed to and with the electric frequency, several effects counteract each other and a detailed analysis at the machine design level is necessary to determine the optimal trade-off.\
In this paper, we outline such an analysis based on analytical models for the most relevant physical domains. We compare the potential of a fully superconducting machine (i.e. with superconducting rotor and stator) and a partially superconducting machine (i.e. only with a superconducting rotor) concerning their most relevant KPIs, such as mass and efficiency, for a given set of high-level requirements such as power and rotation speed. We present an analysis based on the motor requirements from the N3-X concept aircraft design. However, the model can easily be applied to any other set of high-level requirements. Further, the choice of a radial flux topology is motivated by symmetry considerations that make it easier to calculate the magnetic potential analytically. Nevertheless, it can be expected that our general findings would be similar for an axial flux topology.\
The paper is structured as follows: In Section 2 the details of the analytical calculation models are presented. In Section 3 the results for the N3-X requirements are presented followed by the conclusions in Section 4.
Machine model
=============
Superconducting synchronous radial flux machines can be generally categorized into partially and fully superconducting machines (SCM). Partially superconducting machines are in turn subdivided into machines with a superconducting DC rotor and a normal conducting AC stator or into machines with a normal conducting AC rotor and a superconducting DC stator. The former topology of the partial SCM is investigated in this paper. In contrast, the topology of the fully SCM uses superconducting DC coils in the rotor and superconducting AC coils in the stator. In superconducting DC rotor coils, only low losses occur, which have a minimal effect on the current-carrying capacity [@Barnes.2005]. The current-carrying capacity of DC coils is limited in general by their self field and mechanical limitations. However, in AC stator coils strong and fast-varying electromagnetic fields are present and a substantial amount of losses occurs. Due to the strongly temperature-dependent electric characteristics of superconductors, the electrical behavior has to be coupled to the thermal and fluid dynamics of the cooling process.\
To optimize a large number of parameters of a machine in a reasonable time, an analytical model is developed. Figure \[fig:rfm\_model\_cold\_airgap\] shows the general scheme.
![General scheme of a synchronous superconducting radial flux machine with a three-phase winding system marked , , . Geometry parameters are the inner rotor radius $r_{ri}$, the outer rotor radius $r_{ro}$, the inner stator radius $r_{si}$, the outer stator radius $r_{so}$, the inner yoke radius $r_{yi}$, the outer yoke radius $r_{yo}$ and the duty cycle of the stator coils $\alpha_{0}$.[]{data-label="fig:rfm_model_cold_airgap"}](Fig1.png)
The main features of the machine model are as follows. Firstly, the model supports the design of synchronous radial flux machines with air gap windings. Secondly, the rotor is free of iron and the stator is enclosed by a yoke of iron. Regardless of the machine topology, the stator iron is operated at ambient temperature. There are several reasons for this, as investigated in [@Liu.2018]. On the one hand, cryogenic operation would be advantageous for the relative permeability, which increases at cryogenic temperatures, but disadvantageous for the losses, which increase as well. On the other hand, the stator iron core is operated close to the saturation point of the yoke material, which leads to high iron losses that would burden the cryogenic system heavily. The iron loss is determined according to [@Bertotti.1985] and [@Bertotti.1985b]. Finally, the variability in geometry, rotor excitation, and materials allows the optimization of the machine KPIs with respect to a large number of model parameters. The varied internal parameters are summarized in Table \[tab:optimisation\_parameters\].\
Symbol Unit Explanation
-------------------------- ------------------------- ------------------------------------------
$p$ $\mathrm{-}$ Number of pole pairs
$f_{el}$ $\mathrm{Hz}$ Electrical frequency
$m$ $\mathrm{-}$ Number of phases
$I_{n,\mathrm{max}}$ $\mathrm{-}$ Maximum normalized current in the stator
$r_{ri}$ $\mathrm{m}$ Inner radius of the rotor
$\alpha_{0}$ $\mathrm{\%}$ Duty cycle of the stator coils
$d_{s,c}$ $\mathrm{m}$ Thickness of the stator coils
$B_{y,\mathrm{sat}}$ $\mathrm{T}$ Saturation flux density of the yoke
$f_{\mathrm{csr}}$ $\mathrm{-}$ Coil support ratio
$J_{r}$ $\mathrm{A \, mm^{-2}}$ Current density in rotor coils
$d_{m}$ $\mathrm{m}$ Thickness of rotor coils
$\alpha_{2}$ $\mathrm{^\circ}$ Inner angle of rotor coils
$\alpha_{3}$ $\mathrm{^\circ}$ Angle between adjacent rotor coils
$\theta_{\mathrm{load}}$ $\mathrm{^\circ}$ Load angle
: \[tab:optimisation\_parameters\]Main parameters of the machine design model
The calculation procedure is presented in Figure \[fig:calc\_procedure\_RFM\]. The main part of the machine design model is the determination of the machine geometry $GEO$ and its architecture. External requirements are the power $P$ and the rotation speed $n$. The electro-thermal behavior of a specific superconductor is calculated as a function of the electrical frequency $f_{el}$, the flux density in the stator coil area $B_s$, and the normalized current $I_n$ at the calculated working temperature $T$. This approach allows the design of a machine with a specific engineering current density $J_e$ and computes the required cryogenic cooling conditions. It is presented in more detail in Section \[electro\_thermal\_model\]. The magnetic field that penetrates the superconductors in the stator winding is mainly generated by the rotor and by the adjacent stator coils.\
The geometry forms the basis to calculate the magnetic field inside the machine, presented in Section \[electromagnetic\]. An exact calculation of the field distribution is essential to the machine design process because of the nonlinear electro-thermal behavior of the superconductor. Furthermore, the field information is indispensable in the calculation of the mean torque $\overline{T}$ and the cryogenic coolant consumption $\dot{M}_{co}$. These are computed by the local mass flow densities $\dot{m}_{co}$ and loss densities $p_v$ that are generated by the electro-thermal behavior. The target torque of the machine is reached by adjusting the effective machine length $l_{\mathrm{eff}}$ for a given geometry. Active parts, cryogenic and mechanical support structures as well as the housing are part of the geometry and are described in detail in Section \[mechanic\_cryogenic\]. These parts can consist of different materials depending on their respective functionality. The active mass $m_a$ and passive mass $m_p$ of the machine are calculated with this approach. Moreover, the entire geometry is adapted to the machine architecture.\
In the model, the following effects are neglected. Firstly, due to the two-dimensional static magnetic field calculation, end effects are not considered. Secondly, for the calculation of the field, the thickness and the relative permeability of the stator yoke are assumed to be infinite. Thereby its hysteresis behavior is not considered. To estimate the thickness of the stator yoke, the radial field component of the total field at the transition to the yoke is integrated over a pole and then divided by the saturation flux density $B_{y,sat}$. Finally, the current density distribution in the coils is assumed to be constant. Therefore, the skin effects in normal conductors and shielding in superconductors are neglected. This approach is reasonable because a coil consists of several windings with a constant current.\
The working point current density $J_{wp}$ of the stator coil is defined at equilibrium between the spatially highest AC losses and the cooling conditions. Therefore, the current density in the stator is varied until the AC losses in the presence of the total electromagnetic field meet this condition.
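The structure of this search can be sketched as follows in Python; `ac_loss_density` and `cooling_limit` are crude placeholders for the detailed loss and cooling models described in the later subsections, and all constants are invented for illustration only.

```python
# Sketch of the working-point search: the stator current density is lowered
# until the locally highest AC loss can be removed by the coolant.
def ac_loss_density(J, B, f_el):
    # placeholder loss model [W/m^3]; the paper uses Bean/Wilson/Oomen formulas
    return 2e-13 * J**2 * (1.0 + B) * (f_el / 600.0)

def cooling_limit():
    # placeholder for the local cooling capability of the coolant [W/m^3]
    return 2.0e4

def working_point_current(B_wp, f_el, J_start=300e6, step=0.98):
    J = J_start
    while J > 0 and ac_loss_density(J, B_wp, f_el) > cooling_limit():
        J *= step                 # reduce J until loss and cooling balance
    return J

J_wp = working_point_current(B_wp=0.64, f_el=600.0)
print(f"working-point current density ~ {J_wp/1e6:.0f} A/mm^2 (placeholder numbers)")
```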
![Calculation procedure of the analytical design model for synchronous radial flux machines with the requirements as an input on the left side and target parameter as an output on the right side.[]{data-label="fig:calc_procedure_RFM"}](Fig.2_1.png)
Electromagnetic design {#electromagnetic}
----------------------
The machine design requires the exact magnetic field information in the entire stator region. This is necessary to determine the local loss density accurately and with spatial resolution. Due to the cylindrical symmetry of the machine model, the calculation of the electromagnetic fields can be simplified and is done in the frequency domain [@Woodson.1966]. The fields are calculated for each coil system with identical phase angle or current density, and are subsequently superimposed.\
A vector potential $\Phi$ is used to calculate the field generated by the coils. This potential has only a $z$-direction, if the field distribution is solved in the $x$-$y$-plane and the current density $J$ has only a $z$-component. The Laplace equation (\[eq:laplace\_coils\]) allows the calculation of the magnetic field $H$ outside of the coil area in the regions $m$, shown in Figure \[fig:rfm\_model\_cold\_airgap\].
$$\frac{\partial^2 \Phi_{z,m}}{\partial r^2} + \frac{1}{r} \frac{\partial \Phi_{z,m}}{\partial r} + \frac{1}{r^2} \frac{\partial^2 \Phi_{z,m}}{\partial \theta^2} = 0
\label{eq:laplace_coils}$$
The Poisson equation (\[eq:poisson\_coils\]) is used to solve the field inside the coils with the current density $J_{z,m}$ and the vacuum permeability $\mu_0$.
$$\frac{\partial^2 \Phi_{z,4}}{\partial r^2} + \frac{1}{r} \frac{\partial \Phi_{z,4}}{\partial r} +\frac{1}{r^2} \frac{\partial^2 \Phi_{z,4}}{\partial \theta^2} = -\mu_{0} \, J_{z,m}
\label{eq:poisson_coils}$$
The Laplace equation and Poisson equation can be solved for the $n^{th}$ harmonic by the general solutions given in Equation (\[eq:gs\_coils\_laplace\_source\]) by Liu [@Liu.2018b], including the singularity at $np=2$. The Fourier coefficients are $A_{n,m}$, $C_{n,m}$ and $J_{n,m}$, if a current flows in the specific area.
$$\Phi_{z,m}\left(r,\theta\right) =
\begin{cases}
\!\begin{aligned}
& \sum_{n \, \mathrm{odd}} \left( A_{n,m} \, r^{np} + C_{n,m} \, r^{-np} \right. \\[5pt]
& \left. - \frac{J_{n,m} \, r^2}{4-(np)^2}\right) \cos(np\theta) \\[5pt]
\end{aligned} & \text{if } np \neq 2\\[5pt]
\!\begin{aligned}
& \sum_{n \, \mathrm{odd}} \left( A_{1,4} \, r^2 + C_{1,4} \, r^{-2} \right. \\[5pt]
& \left. - \frac{1}{4} \, J_{1,m} r^2 \ln(r)\right) \cos(2\theta) \\[5pt]
\end{aligned} & \text{if } np = 2
\end{cases}
\label{eq:gs_coils_laplace_source}$$
The magnetic flux density $\vec{B}$ is calculated as the curl of the vector potential. The Fourier component $J_{n,m}$ of the $n^{th}$ harmonic in the $m^{th}$ region is calculated as:
$$J_{n,m} =
\begin{cases}
\!\begin{aligned}
& J_{max} \left( \, \frac{-2 \sin(\alpha_3 n) + 2 \sin(n(\alpha_3 - \pi))}{n \, \pi} \right. \\[5pt]
& \left. + \frac{4 \cos(\alpha_2 n) \sin\left(\frac{n \pi}{2}\right)}{n \, \pi} \right)\\[5pt]
\end{aligned} & \text{if rotor}\\[5pt]
\!\begin{aligned}
2 \, \alpha_0 \, J_{max} \, \operatorname{sinc}\left( \frac{n \, \alpha_0}{2} \right) \\[5pt]
\end{aligned} & \text{if stator}
\end{cases}
\label{eq:fourier}$$
By setting boundary conditions for the tangential magnetic field and radial flux density at the coordinate origin, between adjacent regions and at the inner yoke radius, the Fourier coefficients $A_{n,m}$ and $C_{n,m}$ can be calculated.
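For illustration, the Python sketch below evaluates the source coefficients of Eq. (\[eq:fourier\]) for the first odd harmonics; the maximum current density and the angles are example inputs only, and `numpy.sinc` uses the normalized convention, hence the division by $\pi$.

```python
import numpy as np

# Fourier source coefficients J_{n,m} of Eq. (eq:fourier), rotor and stator
# branches; J_max and the angles are example values, not a machine design.
def J_rotor(n, J_max, alpha2, alpha3):
    return J_max * ((-2*np.sin(alpha3*n) + 2*np.sin(n*(alpha3 - np.pi))) / (n*np.pi)
                    + 4*np.cos(alpha2*n)*np.sin(n*np.pi/2) / (n*np.pi))

def J_stator(n, J_max, alpha0):
    # sin(x)/x with x = n*alpha0/2; note np.sinc(y) = sin(pi*y)/(pi*y)
    return 2.0 * alpha0 * J_max * np.sinc(n * alpha0 / (2.0 * np.pi))

J_max = 300e6                                        # A/m^2, example
alpha0 = 0.95                                        # stator duty cycle, example
alpha2, alpha3 = np.deg2rad(68.0), np.deg2rad(1.91)  # rotor coil angles, example
for n in (1, 3, 5):
    print(n, f"{J_rotor(n, J_max, alpha2, alpha3):.3e}",
          f"{J_stator(n, J_max, alpha0):.3e}")
```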
Mechanical and cryogenic design {#mechanic_cryogenic}
-------------------------------
The thickness of the magnetic air gap $d_{mag}$ influences the field distribution in the machine and thus also the torque and the losses in the stator windings.
Therefore, its radial thickness has to be determined accurately within the thermal and mechanical boundaries of the machine.\
The magnetic air gap consists of the sleeve and several cryo walls with the corresponding thicknesses $d_{sl}$ and $d_{cw}$, depending on the machine topology, shown in Figure \[fig:airgap\]. To take into account manufacturing tolerances and displacements caused by rotor dynamics, an additional air gap thickness of $d_{ag} = 1 \, \mathrm{mm}$ will be considered. Furthermore, a thickness of $d_{va} = 1 \, \mathrm{mm}$ is assumed for the vacuum to prevent heat conduction into the cryo system. The magnetic air gap excluding the stator coils is calculated as follows: $$r_{si}-r_{ro} = d_{mag} = \sum d_{cw} + d_{sl} + d_{ag} + d_{va}$$ The cryo walls labeled as rotor housing and stator housing provide thermal insulation. These can either be loaded under external or internal pressure and thus buckling should be excluded by a sufficient thickness of the cryo walls $d_{cw}$. The calculation of the thickness is done according to [@VerbandderTUVe.V.2000] and [@VerbandderTUVe.V.2006]. A special variant occurs in a partially superconducting machine. In the rotor, an additional centrifugal force occurs in the cryo wall, which is taken into account in its design.\
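A minimal numerical sketch of this radial stack-up is given below; the cryo-wall and sleeve thicknesses are placeholders (in the model they follow from the buckling and press-fit calculations cited here), whereas the $1\,\mathrm{mm}$ values for $d_{ag}$ and $d_{va}$ are the ones stated above.

```python
# Sketch of the magnetic air-gap stack-up d_mag = sum(d_cw) + d_sl + d_ag + d_va.
d_cw = [1.5e-3, 1.5e-3]   # rotor and stator housing cryo walls [m], placeholders
d_sl = 3.0e-3             # sleeve thickness from the press-fit model [m], placeholder
d_ag = 1.0e-3             # mechanical air gap (tolerances, rotor dynamics) [m]
d_va = 1.0e-3             # vacuum gap against heat conduction [m]

d_mag = sum(d_cw) + d_sl + d_ag + d_va
r_ro = 0.170              # outer rotor radius [m], example value
r_si = r_ro + d_mag       # inner stator radius consistent with the relation above
print(f"d_mag = {d_mag*1e3:.1f} mm, r_si = {r_si*1e3:.1f} mm")
```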
An important mechanical part is the sleeve which supports the rotor carrier and the rotor coils against the centrifugal forces. Its thickness is determined through an analytical press-fit model which is used to assess the static strength of the sleeve. The rotor is modeled as a compound of three adjacent hollow cylinders which represent the rotor carrier, the rotor coils, and the sleeve. Thus, the homogenized tangential stress in the sleeve is calculated as a result of its radial overclosure. The displacements due to the inner pressure, the centrifugal forces, and the thermal expansion are considered. The calculation for the stress components and the radial displacement of rotating hollow cylinders is based on [@Boresi.2003] and [@Eslami.2013]. Finally, the thickness of the sleeve $d_{sl}$ is calculated iteratively by varying both the sleeve thickness and the radial overclosure. It strongly depends on the rotation speed, the rotor radius as well as the coil thickness and was internally cross-checked by FEM.\
The thickness of the machine housing is designed such that the torque of the machine can be transmitted. By designing these mechanical and thermodynamical support structures, the passive mass of the machine is calculated. The active mass includes the yoke and the stator and rotor coils including the winding head. Both stator coils and rotor coils are assumed to be racetrack coils.
![The calculation procedure of the electro-thermal model is divided into the behavior of the superconductor (SC behavior) and the cooling procedure (Cooling).[]{data-label="fig:calc_procedure_ETM"}](Fig.4_1.png)
Electro-thermal model {#electro_thermal_model}
---------------------
At the beginning of the machine design, the current density in the stator coils is not known. Due to the amount of AC loss compared to the cooling capability of the liquid cooling, the electric and thermal behavior of the stator windings has to be treated in a coupled closed-loop model which is schematically depicted in Figure \[fig:calc\_procedure\_ETM\]. Initially, the computed engineering current density is independent of the machine design and it is calculated as a function of an alternating external field penetrating a wire that conducts a transport current. Such an electro-thermal model is developed for superconducting MgB~2~ wires and normally conducting copper litz wires.\
For the cooling of MgB~2~ wires, a two-phase flow is assumed. This cooling mechanism is modeled by empirical equations according to [@GesellschaftVerfahrenstechnikundChemieingenieurwesen.2013]. Furthermore, the two-phase cooling concept is required to keep the MgB~2~ temperature as low as possible in the case of high AC loss. The calculation procedure is shown in Figure \[fig:calc\_procedure\_ETM\] and is divided into two parts, the behavior of the superconductor (SC behavior) and cooling. Besides the wire geometry and the normalized current $I_n$ of the superconductor, the stator coil flux density $B_{s}$ and the electrical frequency $f_{el}$ are the input parameters of the model. The SC performance was determined by measurements of the critical current in a wide range of magnetic fields and temperatures. AC losses are divided into magnetization loss, eddy current loss, and coupling current loss. They are calculated according to [@Bean.1962], [@Wilson.1986] and [@Oomen.2000]. The magnetization and eddy current losses were cross-checked with Comsol and the results show that the loss we obtain analytically is slightly higher than the FEM result. The losses create local specific heat flux densities $\dot{q}$ that depend on the material and temperature in the different parts of the wire.
Parameter Unit Fully SCM Partially SCM
-------------------------- ------------------------- ------------------ -------------------
Stator winding $\mathrm{-}$ MgB~2~ Cu
Rotor winding $\mathrm{-}$ HTS HTS
Cooling liquid $\mathrm{-}$ LH~2~ Novec 7500
$p$ $\mathrm{-}$ $3$ - $10$ $3$ - $10$
$f_{el}$ $\mathrm{Hz}$ $225$ - $750$ $225$ - $750$
$m$ $\mathrm{-}$ $3$ $3$
$I_{n,\mathrm{max}}$ $\mathrm{-}$ $0.7$ -
$r_{ri}$ $\mathrm{m}$ $0.14$ - $0.18$ $0.12$ - $0.2$
$\alpha_{0}$ $\mathrm{\%}$ $95$ $95$
$d_{s,c}$ $\mathrm{mm}$ $6$ - $11$ $14$ - $24$
$B_{y,\mathrm{sat}}$ $\mathrm{T}$ $2.3$ $2.3$
$f_{\mathrm{csr}}$ $\mathrm{-}$ $1/3$ $1/3$
$J_{r}$ $\mathrm{A \, mm^{-2}}$ $300$ $300$
$d_{m}$ $\mathrm{mm}$ $12$ $12$, $24$
$\alpha_{2}$ $\mathrm{^\circ}$ $60$ - $70$ $40$ - $65$
$\alpha_{3}$ $\mathrm{^\circ}$ $0.04$ - $0.051$ $0.034$ - $0.057$
$\theta_{\mathrm{load}}$ $\mathrm{^\circ}$ $90$ $90$
Calculated SCM $\mathrm{-}$ $1440$ $2880$
: \[tab:optimisation\_parameters\_example\]Machine parameters for the hybrid-wing-body concept aircraft N3-X [@Felder.2011]
The basic assumption of two-phase cooling is that the heat emitted over the insulated cable surface must be equal to the heat removed by the evaporation of LH~2~. As a result, taking into account the pressure drop, the maximum temperature $T_{sc}$ inside the conductor can be calculated, whereby the critical current is adjusted. This calculation is performed in a loop until the difference between two iteration steps falls below $0.1 \, \mathrm{K}$ in order to determine the static temperature. The results are the engineering current density $J_e$, which is related to the sum of the conductor area and the cooling area, the mass flow density $\dot{m}_{co}$ and the loss density $p_v$.\
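The closed-loop character of this calculation is illustrated by the Python sketch below; `Jc`, the loss expression and the thermal model are crude placeholders, not the measured MgB~2~ data or the empirical two-phase correlations used in the paper, so the printed numbers only demonstrate the iteration.

```python
# Sketch of the coupled electro-thermal loop: the conductor temperature is
# updated from the heat balance until two iterations differ by less than 0.1 K.
def Jc(B, T):
    # placeholder critical current density surface [A/m^2]
    return max(0.0, 5e9 * (1.0 - T / 39.0) / (1.0 + 2.0 * B))

def conductor_temperature(q_dot, T_bath):
    # placeholder thermal model: temperature rise proportional to the heat flux
    return T_bath + 1e-7 * q_dot

def electro_thermal_point(B, f_el, I_n, T_bath=20.3):
    T = T_bath
    for _ in range(200):
        J_e = I_n * Jc(B, T)                        # transport current density
        q_dot = 2e-13 * J_e**2 * (1.0 + B) * f_el   # placeholder AC-loss density
        T_new = conductor_temperature(q_dot, T_bath)
        if abs(T_new - T) < 0.1:                    # 0.1 K convergence criterion
            break
        T = T_new
    return J_e, T_new

J_e, T_sc = electro_thermal_point(B=0.64, f_el=600.0, I_n=0.7)
print(f"J_e ~ {J_e/1e6:.0f} A/mm^2 at T ~ {T_sc:.1f} K (placeholder models)")
```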
A similar procedure exists in the electro-thermal calculation of a copper litz wire with a single-phase oil cooling. The aim of this calculation is to determine the maximum temperature $T_{max}$ at a given engineering current density $J_e$ inside the litz wire considering an inlet temperature $T_{in}$ of the cooling liquid. It is assumed that the litz wire is cast in resin and cooled by Novec 7500 [@3MDeutschlandGmbH.2014] in a channel from one side. The AC losses in the copper filaments are calculated according to [@Lammeraner.1966]; they comprise skin effect losses, proximity effect losses and, additionally, the ohmic loss. Subsequently, the fluid dynamics can be considered according to [@GesellschaftVerfahrenstechnikundChemieingenieurwesen.2013].\
Since the electromagnetic field in the stator winding is computed with spatial resolution, the loss density within the coil varies locally as well. Therefore, the electro-thermal model is evaluated in a mesh with a radial resolution of $1 \, \mathrm{mm}$ and a tangential resolution of $200$ steps per coil. The total AC loss $P_{v}$ is computed by the integration of these local loss densities. Furthermore, local temperature hot spots are visible and can be considered in the design of the cooling. The efficiency $\eta$ of the machine takes into account iron losses and stator losses.
Analysis for N3-X motor requirements
====================================
Description
-----------
We used our model to analyze the potential of a fully SCM and partially SCM for the following high-level requirements: P = $\mathrm{3 \, MW}$ and n = $\mathrm{4500 \, rpm}$. These had been derived from aircraft design considerations for the hybrid-wing-body concept aircraft N3-X [@Kim.2016] which is powered by a turboelectric distributed propulsion system incorporating 15 motors with the given power and speed.\
We assumed the following materials for the machine design. In the stator winding of the fully SCM we assume a 114-multifilament MgB~2~ wire [@Wan.2017] which is cooled with liquid hydrogen. The $J_c \left(B,T\right)$ characterization of this wire was carried out experimentally in a field range of $0$ to $2 \, \mathrm{T}$ and a temperature range of $20$ to $33 \, \mathrm{K}$. In the stator winding of the partially SCM we assume a copper litz wire with a filament diameter of $0.5 \, \mathrm{mm}$ which is cooled by the silicon oil Novec 7500. The maximum allowable temperature of the insulated wire is $180 \, \mathrm{^\circ C}$. For both topologies the rotor field is generated either by HTS single pancake coils $\left( d_m = 12 \, \mathrm{mm} \right)$ or HTS double pancake coils $\left( d_m = 24 \, \mathrm{mm}\right)$ with a rotor current density $J_r$ of $300 \, \mathrm{A \, mm^{-2}}$ at $25 \, \mathrm{K}$ [@Oomen.03.09.2019]. As the yoke sheet metal, the commercially available soft magnetic cobalt-iron alloy Vacodur [@Vacuumschmelze.2016] is assumed. A titanium alloy [@SpecialMetalsCorporation.2007] and an aluminum alloy [@ASMHandbook.1990] are assumed as the material of the sleeve and cryo walls, respectively. Both alloys combine high strength with insensitivity to hydrogen permeation [@Robertson.1977] [@Walter.1973]. The material of the housing is assumed to be titanium [@Donachie.2000].\
Besides the number of pole pairs, we varied the internal geometry parameters of the machines according to Table \[tab:optimisation\_parameters\_example\] to find the configurations with the highest power-to-weight ratios and efficiencies. As the models run very fast, we did not use a dedicated optimization algorithm but calculated all configurations that can be generated by permuting the parameters.
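The brute-force scan can be sketched as follows; `evaluate_design` is a stand-in for the full machine model of Section 2, its toy objective is invented, and the parameter lists are only illustrative subsets of the ranges in Table \[tab:optimisation\_parameters\_example\].

```python
from itertools import product

# Full-factorial parameter scan: evaluate every combination and keep the
# design with the highest power-to-weight ratio.
def evaluate_design(p, r_ri, d_sc, alpha2):
    # placeholder objective; the real model returns mass, losses, PTW, ...
    return {"p": p, "r_ri": r_ri, "d_sc": d_sc, "alpha2": alpha2,
            "PTW": 30.0 - abs(p - 8) - 100.0 * abs(r_ri - 0.15)}

grid = product(range(3, 11),                    # pole pairs p
               [0.14, 0.15, 0.16, 0.17, 0.18],  # inner rotor radius r_ri [m]
               [6e-3, 8e-3, 10e-3, 11e-3],      # stator coil thickness [m]
               [60, 62, 64, 66, 68, 70])        # inner rotor coil angle [deg]

designs = [evaluate_design(*x) for x in grid]
best = max(designs, key=lambda d: d["PTW"])
print(f"{len(designs)} configurations evaluated; best (toy objective): {best}")
```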
Results
-------
Varying the parameters as presented in Table \[tab:optimisation\_parameters\_example\] results in 4320 computed configurations which are shown in Figure \[fig:results\_PTW\_coil\]. The variation parameters include the number of pole pairs $p$, the inner radius of the rotor $r_{ri}$, the thickness of the stator coils $d_{s,c}$, the thickness of the rotor coils $d_m$ and the inner angle of the rotor coils $\alpha_{2}$.\
In Figure \[subfig:PTW\_Bwp\_all\_MgB2\] and \[subfig:PTW\_Bwp\_all\_Cu\], the power-to-weight ratio $PTW$ is shown as a function of the working point flux density $B_{wp}$ for different numbers of pole pairs for the fully SCM and partially SCM, respectively. The power-to-weight ratio $PTW$ includes the active and passive mass of the machines. For both topologies, it can be seen that the $PTW$ increases with increasing $p$ up to a certain number of pole pairs. While for fully SCM the lightest designs with $PTW > 30 \, \mathrm{kW \, kg^{-1}}$ can be found for pole pair numbers between 6 and 10, the lightest partially SC machines with $PTW > 10 \, \mathrm{kW \, kg^{-1}}$ can be found for pole pair numbers between 4 and 6. Taking a closer look at fully SC machine designs with pole pair number 8 reveals that their $PTW$ values are outstanding (compared to p = 7 and p = 9) with the highest value of $36.6 \, \mathrm{kW \, kg^{-1}}$.\
The optimum design range concerning the working point flux density and the pole pair number is considerably smaller for fully SCM than for partially SCM being even more pronounced at high $PTW$. This behavior can be attributed to the high sensitivity of the MgB~2~ wire to AC loss. The highest $PTW$ values are reached for working point flux densities in the range of $0.55 \, \mathrm{T}$ - $0.9 \, \mathrm{T}$ for fully SCM and in the range of $0.8 \, \mathrm{T}$ - $1.4 \, \mathrm{T}$ for partially SCM. In the case of a small number of pole pairs, the $PTW$ increases mainly due to the reduction of the yoke thickness and higher electric frequencies that are directly proportional to the pole pair number. Both effects saturate with an increasing number of pole pairs. The higher electric frequencies enhance the generation of AC loss and consequently, the engineering current density has to be reduced leading to more active material in the stator. Thus, an optimum of the $PTW$ can be found. In the case of the partially SC machine, the frequency-independent ohmic losses dominate the total losses in the copper litz wires. This leads to a lower sensitivity of $PTW$ of the partially SCM to the number of pole pairs and the magnetic field $B_{wp}$ at the working point of the stator.
Additionally, Figures \[subfig:PTW\_Jwp\_all\_MgB2\] and \[subfig:PTW\_Jwp\_all\_Cu\] show the torque-to-weight ratio $TTW$ as a function of the current density $J_{wp}$ at the working point. The MgB~2~ wire enables current densities up to 8 times higher than copper. The calculation of the current density includes the specific area required for cooling channels in both concepts. In each design $J_{wp}$ and $B_{wp}$ are linked via the electro-thermal behavior of the conductors, which is why the current density plots in Figures \[subfig:PTW\_Jwp\_all\_MgB2\] and \[subfig:PTW\_Jwp\_all\_Cu\] are inversely related to the flux density plots in Figures \[subfig:PTW\_Bwp\_all\_MgB2\] and \[subfig:PTW\_Bwp\_all\_Cu\], respectively.\
For the five designs with the highest $PTW$ at each pole pair number, the dependency between the efficiency $\eta$ and the $PTW$ is presented in Figure \[subfig:PTW\_EFF\_MgB2\] for the fully SCM and in Figure \[subfig:PTW\_EFF\_Cu\] for the partially SCM. For fully SCM the range of efficiencies is between $99.82 \, \%$ and $99.94 \, \%$; the corresponding losses are thus roughly two orders of magnitude lower than for the partially SCM with efficiencies of $91.8 \, \%$ to $97.1 \, \%$. This difference is linked to stator losses that are approximately two orders of magnitude smaller for the superconducting stator. The stator losses are shown in Figures \[subfig:PTW\_Pv\_rotor\_MgB2\] and \[subfig:PTW\_Pv\_rotor\_Cu\]. In the case of the copper stator approximately $90\%$ of the loss is ohmic, which sets the large offset of the x-axis. Interestingly, in the partially SCM case, the efficiency and $PTW$ are not inversely proportional, as is typical for synchronous machines, but rather directly proportional. This is due to the fact that higher pole pair numbers lead both to higher losses and consequently lower current densities, thus to lower $PTW$.\
Figure \[subfig:PTW\_r\_b\_rotor\] shows the bending radius of the stator coils $r_b$ as a function of $PTW$ for the 100 lightest machines of each number of pole pairs. If the number of pole pairs increases, the bending radius decreases and is smallest at $26 \, \mathrm{mm}$ for a machine with 10 pole pairs. The minimum bending radius depends on the superconducting wire design and leads to critical current degradation. A typical MgB~2~ wire has a minimum bending radius of about $40 \, \mathrm{mm}$ [@Kovac.2016b]. However, this effect strongly depends on the conductor design and was not taken into account in the model.\
The dependency between the machine's diameter-to-length aspect ratio $d/l$ and the $PTW$ is presented in Figure \[subfig:PTW\_dl\_rotor\]. In contrast to conventional machines, the $PTW$ is mostly independent of the aspect ratio.\
The machine parameters and results for the lightest fully and partially SCM machine are summarized in Table \[tab:results\_parameters\_example\] including the Esson coefficient $C_e$ and total machine length $l$.
Symbol Unit Fully SCM Partially SCM
-------------------- -------------------------------- ----------- ---------------
$PTW$ $\mathrm{kW \, kg^{-1}}$ $36.6$ $10.2$
$TTW$ $\mathrm{Nm \, kg^{-1}}$ $77.7$ $21.6$
$\eta$ $\mathrm{\%}$ $99.87$ $96$
$p$ $\mathrm{-}$ $8$ $5$
$f_{el}$ $\mathrm{Hz}$ $600$ $375$
$r_{ri}$ $\mathrm{m}$ $0.15$ $0.16$
$d_{s,c}$ $\mathrm{mm}$ $10$ $20$
$d_{m}$ $\mathrm{mm}$ $12$ $12$
$\alpha_{2}$ $\mathrm{^\circ}$ $68$ $55$
$\alpha_{3}$ $\mathrm{^\circ}$ $1.91$ $1.79$
$d_{mag}$ $\mathrm{mm}$ $8.4$ $11.7$
$m_a$ $\mathrm{kg}$ $36.8$ $160.4$
$m_p$ $\mathrm{kg}$ $45.1$ $134.5$
$B_{wp}$ $\mathrm{T}$ $0.64$ $1.08$
$J_{wp}$ $\mathrm{A \, mm^{-2}}$ $177.2$ $21.6$
$P_{v,y}$ $\mathrm{kW}$ $1.66$ $4.76$
$P_{v,s}$ $\mathrm{kW}$ $2.08$ $115.74$
$\dot{M}_{co}$ $\mathrm{g/s}$ $0.463$ $19426$
$T_{in}$ $\mathrm{K}$ $20$ $353$
$r_{b,s}$ $\mathrm{mm}$ $32.4$ $56.7$
$l_{eff}$ $\mathrm{mm}$ $225.3$ $407.5$
$l$ $\mathrm{mm}$ $292.4$ $522.0$
$C_e$ $\mathrm{kW \, min \, m^{-3}}$ $26.1$ $12.4$
$T_{\mathrm{cal}}$ $\mathrm{h}$ $21$ $11$
: \[tab:results\_parameters\_example\]Machine parameters and calculation results for the lightest fully and partially SC machine designs. The calculation time is referring to the calculation of all 4320 designs.
The calculation time $T_{\mathrm{cal}}$ of the fully SCM is longer than that of the partially SCM due to the non-linearity of the superconductor. Both designs show a higher $PTW$ with the single pancake coil compared to the double pancake coil. For partially SCM this is due to the higher necessary thickness of the sleeve which enlarges the magnetic airgap. For fully SCM the required excitation fields can be handled by a single layer coil.\
Figure \[fig:2DslotAClosses\] shows the results of the two-dimensional calculation of the AC loss of the stator coils, described in Section \[electro\_thermal\_model\], step by step for the fully SCM with machine parameters as listed in Table \[tab:results\_parameters\_example\].
The current density distribution of a 3-phase winding system is illustrated in Figure \[subfig:Jphases\_theta\] and the spatial distribution of the normalized current is shown in Figure \[subfig:In\_r\_theta\]. The current-free areas between the coils are visible. The normalized current distribution does not exceed its maximum value of $I_{n,max} = \mathrm{0.7}$ which is a requirement. A loss hot spot is detected at the inner stator radius $r_{si}$ in Figure \[subfig:Pv\_r\_theta\]. This hotspot does not necessarily have to occur in the coil which currently carries the highest current, because the loss is still strongly dependent on the magnetic field, shown in Figure \[subfig:B\_r\_theta\].
Conclusion
==========
We presented an analytical model for the design of superconducting radial flux machines. The electromagnetic design is treated by a two-dimensional approach which takes into account the mechanical as well as the thermodynamic limits of the parts in the air gap. Furthermore, the AC loss and the consumption of the coolant in the stator winding system are calculated. We find that the model can provide results very quickly and is therefore useful for large parameter scans in a preliminary machine design.\
The influence of the material and geometry parameters on the performance of a fully and a partially SCM was investigated as an example, based on requirements derived from the N3-X. We come to the conclusion that fully SCM could achieve maximum power-to-weight ratios of $36.6 \, \mathrm{kW \, kg^{-1}}$ at an efficiency of $99.88\%$, while partially SCM could achieve maximum power-to-weight ratios of $10.2 \, \mathrm{kW \, kg^{-1}}$ at an efficiency of $96\%$, i.e. the fully superconducting machine is roughly 3.5 times lighter. This points out the high potential of fully superconducting machines. To compare the masses of the different topologies fairly, the penalty masses of the required cooling systems need to be taken into account. To calculate their required size, the efficiency and cooling requirements of each machine need to be computed for every power requirement along the mission profile of the aircraft. Even if our model takes into account the mass of most passive components, further detailing which includes bearings, shaft, instrumentation, and high voltage connectors is desirable. This will reduce the $PTW$ a bit, but we assume that $PTW$ values larger than $30 \, kW \, kg^{-1}$ are realistic. Due to our coupled electric and thermal modeling, our approach makes it possible to accomplish this in various future studies of hybrid-electric aircraft. Our analysis concludes that the required $PTW$ of $12.7 \, \mathrm{kW \, kg^{-1}}$ for the N3-X concept aircraft is only feasible with fully superconducting machines.\
While partially superconducting machines reached a TTW value of up to $21.6 \, \mathrm{Nm \, kg^{-1}}$, which is comparable to non-superconducting machines, fully superconducting machines showed results beyond $75 \, \mathrm{Nm \, kg^{-1}}$. Since the stator losses of fully superconducting machines decrease when the electric frequency is lowered, we suggest that this machine type might be particularly interesting to study for even lower-speed direct drive applications, such as propellers or large fans [@Cameretti.2018].\
We also find that the best power-to-weight ratios come with designs with comparatively low magnetic flux densities in the airgap, in the range from 0.55T to 0.9T - values that could also be achieved with NdFeB magnets. This can be attributed to the high sensitivity of the current-carrying capacity of the MgB~2~ wire to external fields and frequencies. Consequently, we can conclude that material development should focus on improving the AC loss, current-carrying capacity and bending properties of superconducting wires rather than on achieving extreme magnetic fields in the rotor.\
Further, an investigation on partially superconducting machines with a superconducting stator and a rotor with Halbach-NdFeB magnets appears interesting. The mechanical effort in a rotating superconducting rotor is eliminated in this approach. However, Halbach magnets may have the potential to achieve high power-to-weight ratios due to the small magnetic air gap in such a machine.
Acknowledgments
===============
The authors acknowledge the financial support by the Federal Ministry for Economic Affairs and Energy of Germany in the framework of LuFoV-2 (project number 20Y1516C). We thank Stefan Moldenhauer, Stefan Biser, Joern Grundmann and Mabroor Ahmed for helpful discussions.
References {#references .unnumbered}
==========
---
abstract: 'We perform high-accuracy calculations of the critical exponent $\gamma$ and its subleading exponent for the $3D$ $O(N)$ Dyson’s hierarchical model for $N$ up to 20. We calculate the critical temperatures for the nonlinear sigma model measure $\delta(\vec{\phi} . \vec{\phi}-1)$. We discuss the possibility of extracting the first coefficients of the $1/N$ expansion from our numerical data. We show that the leading and subleading exponents agree with Polchinski equation and the equivalent Litim equation, in the local potential approximation, with at least 4 significant digits.'
author:
- 'J. J. Godina'
- 'L. Li'
- 'Y. Meurice'
- 'M. B. Oktay'
title: 'High-accuracy critical exponents of $O(N)$ hierarchical sigma models'
---
The large $N$ limit and the $1/N$ expansion [@stanley68; @ma73; @thooft74] appear prominently in recent developments in particle physics, condensed matter and string theory [@teper05; @narayanan05; @moshe03; @aharony99]. For sigma models, the basic gap equation can be obtained by using the method of steepest descent for the functional integral [@stanley68; @david85]. For $N$ large and negative, the maxima of the action dominate instead of the minima and the radius of convergence of the $1/N$ expansion should be zero. In order to turn a $1/N$ expansion into a [*quantitative*]{} tool, we need to: 1) understand the large order behavior of the series, 2) locate the singularities of the Borel transform and, 3) compare the accuracy of various procedures with numerical results for given values of $N$. Calculating the series or obtaining accurate numerical results at fixed $N$ are difficult tasks and we do not know any model where this program has been completed. For instance for the critical exponents in three dimensions, we are only aware of calculation up to order $1/N^2$ in Ref. [@okabe78; @gracey91; @pelissetto00b]. Several results related to the possibility (or impossibility) of resumming particular $1/N$ expansions are known [@dewit77; @avan83; @kneur01]. Overall, it seems that there is a rather pessimistic impression regarding the possibility of using the $1/N$ expansion for low values of $N$. For this reason, it would be interesting to discuss the three questions enumerated above for a model where we have good chances to obtain definite answers. Dyson’s hierarchical model [@dyson69; @baker72] is a good candidate for this purpose.
In this Brief Report, we provide high-accuracy numerical values for the critical exponent $\gamma$, the subleading exponent $\Delta$ and the critical parameter $\beta_c$ for the $3D$ $O(N)$ hierarchical nonlinear sigma models. These quantities appear in the magnetic susceptibility near $\beta_c$ in the symmetric phase as $$\chi= (\beta _c -\beta )^{-\gamma } (A_0 + A_1 (\beta _c -\beta)^{
\Delta }+\dots )\ . \label{eq:sus}$$
The method of calculation of the critical exponents used here is an extension of one of the methods described at length in the case of $N=1$ [@gam3] and will only be sketched briefly. On the other hand, the accuracy of the approximations used depends non-trivially on $N$ as we shall discuss later. The RG transformation can be constructed as a blockspin transformation followed by a rescaling of the field. For Dyson’s hierarchical model, the block spin transformation affects only the local measure. The RG transformation can be expressed conveniently in terms of the Fourier transform (denoted $R$ hereafter) of this local measure. In the following, we keep the $O(N)$ symmetry unbroken and the Fourier transform will depend only on $\vec{k} .\vec{k} \equiv u$. Here $\vec{k}$ is a source conjugate to the local field variable $\vec{\phi}$. Replacing $k$ by $u$ and the second derivative by the $N$-dimensional Laplacian in Eq. (2.5) of Ref. [@gam3], we obtain the RG transformation for the Fourier transform of the local measure: $$R_{n+1,N}(u)\propto
{\rm e}^{\left[ -\frac {1}{2} \beta \left( 4u
\frac{\partial^2 }{\partial u^2}+
2N \frac{\partial }{\partial u} \right)
\right]}\left( R_{n,N}\left( c u/4 \right) \right)
^2 \ , \label{eq:recursionN}$$ where $c=2^{1-2/D}$ in order to reproduce the scaling of a Gaussian massless field in $D$ dimensions. $D=3$ hereafter. We fix the normalization constant by imposing $R_{n,N}(0)=1$ so that $R_{n,N}(k)$ has a simple probabilistic interpretation [@gam3]. In the following, the calculations will be performed using polynomial approximations of degree $l_{\max}$: $$R_{n,N}(k)\simeq 1+a_{n,1}u+a_{n,2}u^2+\cdots+a_{n,l_{\max}}u^{l_{\max}}\ .$$ The finite volume susceptibility for $2^n$ sites is related to the first coefficient by the relation $\chi_n =-2a_{n,1}(2/c)^n$. The truncated recursion formula for the $a_{n,m}$ reads $$\label{eq:quad}
a_{n+1,m}=\frac{\sum_{l=m}^{2l_{\max}}
\left(\sum_{p+q=l}a_{n,p}a_{n,q}\right)B_{m,l}}
{\sum_{l=0}^{2l_{\max}}\left(\sum_{p+q=l}a_{n,p}a_{n,q}\right)B_{0,l}}\ ,$$ with $$B_{m,l}=\frac{\Gamma(l+1)\Gamma(l+N/2)}{\Gamma(m+1)\Gamma(m+N/2)}
\frac{1}{(l-m)!}(\frac{c}{4})^{l}(-2\beta)^{l-m}\ .$$ We emphasize that in the above formula and in our numerical calculations, no truncation is applied after squaring and so the sum in Eq. (\[eq:quad\]) does extend up to $2l_{max}$. Since the derivatives appear to arbitrarily large order in Eq. (\[eq:recursionN\]) and can lower the degree of a polynomial of order larger than $l_{max}$, this affects all the coefficients of order less than $l_{max}$. This procedure has been discussed and justified in Ref. [@scalingjsp].
The critical exponents appearing in Eq. (\[eq:sus\]) are obtained by calculating the eigenvalues $\lambda_1, \lambda_2, \dots$ of the matrix $\partial a_{n+1,l}/\partial a_{n,m}$ at the nontrivial fixed point. The exponents $\gamma$ and $\Delta$, can be expressed as $$\label{eq:exponents}
\gamma=\frac{\ln(2/c)}{\ln(\lambda_1)} {\rm \ \ , \ \ }
\Delta=\left|\frac{\ln(\lambda_2)}{\ln(\lambda_1)}\right|\ .$$
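As a quick numerical check of Eq. (\[eq:exponents\]), the snippet below uses the $N=1$ eigenvalues listed in Table \[table:one\] and reproduces the corresponding entries of Table \[table:two\].

```python
import math

# Exponents from the two leading eigenvalues of the linearized RG map (N = 1).
c = 2.0 ** (1.0 / 3.0)                   # c = 2^{1-2/D} with D = 3
lam1, lam2 = 1.427172478, 0.8594116492   # Table I, N = 1

gamma = math.log(2.0 / c) / math.log(lam1)
Delta = abs(math.log(lam2) / math.log(lam1))
print(f"gamma = {gamma:.8f}")            # ~ 1.29914073
print(f"Delta = {Delta:.9f}")            # ~ 0.425946859
```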
The critical exponents are universal and, within numerical errors, independent of the manner that we approach the nontrivial fixed point. In the following, we have mostly started with the local measure of the nonlinear sigma model $\delta (\vec{ \phi}. \vec{\phi}-1)$. The corresponding Fourier transform reads $$R_{0,N}(u)=\sum_{l=0}^{\infty} \frac{(-1)^l
u^{l}\Gamma(\frac{N}{2})}{2^{2l}l!\Gamma(\frac{N}{2}+l)} \ .$$ A motivation for this choice is that, as we will explain below, the value of $\beta_c$ can be calculated in the large $N$ limit. Other measures have also been used in order to check the universal values of the two exponents.
The asymptotic behavior of the ratio $a_{n+1,1}/a_{n,1}$ allows us to decide unambiguously if we are in the symmetric phase (where the ratio approaches $c/2 \simeq 0.63$) or in the broken phase (where the ratio approaches $c $). Using a binary search, one can determine the critical value of $\beta$ with great accuracy. As this critical value depends on $l_{max}$, we denote it $\beta_c(l_{max})$. When $l_{max}\rightarrow\infty $, $\beta_c(l_{max})\rightarrow \beta_c$. The rate at which this limit is reached depends on $N$. This is illustrated in Fig. \[fig:errorlmax\] where we see that in order to reach $\beta_c$ with a given accuracy, we need to increase $l_{max}$ when $N$ increases. In Fig. \[fig:erro\], we give the minimum $l_{max}$ necessary for $\beta_c(l_{max})$ to share 20 significant digits with $\beta_c$. $l_{max}\simeq 22+6.2N^{0.7}$ is a good fit for Fig. \[fig:erro\].
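To make the procedure concrete, the self-contained Python sketch below implements the truncated recursion of Eq. (\[eq:quad\]), the sigma-model initial coefficients, and the ratio criterion for $N=3$; the truncation $l_{\max}$ and the number of iterations are deliberately small, so the printed ratios are only indicative and the bisection itself is only indicated in a comment.

```python
import math

def rg_step(a, beta, N, c):
    """One truncated RG step, Eq. (eq:quad), acting on a = [1, a_1, ..., a_lmax]."""
    lmax = len(a) - 1
    # coefficients of the squared measure; the factor (c/4)^l sits in B_{m,l}
    S = [sum(a[p] * a[l - p] for p in range(max(0, l - lmax), min(l, lmax) + 1))
         for l in range(2 * lmax + 1)]

    def B(m, l):
        lg = (math.lgamma(l + 1) + math.lgamma(l + N / 2)
              - math.lgamma(m + 1) - math.lgamma(m + N / 2)
              - math.lgamma(l - m + 1))
        return math.exp(lg) * (c / 4.0) ** l * (-2.0 * beta) ** (l - m)

    denom = sum(S[l] * B(0, l) for l in range(2 * lmax + 1))
    return [sum(S[l] * B(m, l) for l in range(m, 2 * lmax + 1)) / denom
            for m in range(lmax + 1)]

def sigma_measure(N, lmax):
    """Fourier coefficients of the measure delta(phi.phi - 1) given in the text."""
    return [(-1) ** l * math.exp(math.lgamma(N / 2) - math.lgamma(N / 2 + l))
            / (4 ** l * math.factorial(l)) for l in range(lmax + 1)]

c = 2.0 ** (1.0 / 3.0)            # c = 2^{1-2/D} for D = 3
N, lmax, n_iter = 3, 20, 30
for beta in (3.70, 3.95):         # below and above beta_c(N=3) ~ 3.827
    a = sigma_measure(N, lmax)
    for _ in range(n_iter):
        a_new = rg_step(a, beta, N, c)
        ratio = a_new[1] / a[1]
        a = a_new
    print(f"beta = {beta}: ratio a_(n+1,1)/a_(n,1) -> {ratio:.3f} "
          f"(c/2 = {c/2:.3f} symmetric, c = {c:.3f} broken)")
# beta_c(lmax) is then located by a bisection on this asymptotic ratio.
```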
![$\log_{10}\frac
{|\beta(l_{\max})-\beta_c|}{\beta_c}$ calculated for $l_{\max}=40$ to $l_{\max}=60$ for $N=10$ (filled circles), $N=11$ (empty circles), $N=12$ (empty triangles) ...... up to $N=30$ (empty squares).\[fig:errorlmax\] ](error-lmax){width="40.00000%"}
![Minimal value of $l_{\max}$ in order to have $\log_{10}\frac {|\beta_c(l_{\max})-\beta_c(\infty))|}{\beta_c(\infty)}=-20$ versus $N$.\[fig:erro\]](delta-bet-neg20-N){width="40.00000%"}
The nontrivial fixed point [*for a given value of $l_{max}$*]{} can be constructed by iterating sufficiently many times the RG map at values sufficiently close to $\beta_c(l_{\max})$. In order to get an accuracy $\epsilon$ for the fixed point for that value of $l_{max}$, we need to iterate $n$ times the map until $$\lambda_2^n\sim \epsilon \ ,
\label{eq:irr}$$ in order to get rid of the irrelevant directions. At the same time, we want the growth in the relevant direction to be limited, in other words, $$|\beta -\beta_c(l_{\max})|\lambda _1 ^n <\epsilon\ .$$ Combining these two requirements together with Eq. (\[eq:exponents\]) we obtain $$|\beta -\beta_c(l_{\max})|\simeq \epsilon ^{1+1/\Delta}$$ This is an order magnitude estimate, however it works well except for $N$=1 where we need to pick $\beta$ slightly closer to the critical value. By “working well”, we mean that if we go closer to the critical value, changes smaller than $\epsilon$ are observed in the first two eigenvalues. The numerical results for $\epsilon =10^{-10}$ and $N$ up to 20, are given in the Tables \[table:one\] and \[table:two\] for the values of $l_{max}$ of Fig. \[fig:erro\]. Errors of 1 or less in the last printed digit should be understood in all the tables.
  $N$   $\beta_c$                 $\lambda_1$    $\lambda_2$
  ----- ------------------------- -------------- --------------
  1     1.1790301704462697325     1.427172478    0.8594116492
  2     2.4735265752919854000     1.385743490    0.8563409066
  3     3.8273820333573397671     1.354668326    0.8506945150
  4     5.2111615635533656165     1.332749866    0.8440522956
  5     6.6104153462855068435     1.317578283    0.8376436747
  6     8.0181114053706725941     1.306955396    0.8320345022
  7     9.4307096447427796882     1.299321025    0.8273378172
  8     10.846330737925124699     1.293666393    0.8234676785
  9     12.263918029354988652     1.289354227    0.8202833449
  10    13.682844072802585664     1.285978489    0.8176485461
  11    15.102717572108367579     1.283274741    0.8154492652
  12    16.523283812777939366     1.281066141    0.8135953137
  13    17.944370719047342283     1.279231192    0.8120168555
  14    19.365858255947423937     1.277684252    0.8106600963
  15    20.787660334686062513     1.276363511    0.8094834857
  16    22.209713705054412233     1.275223389    0.8084547150
  17    23.631970906283518487     1.274229622    0.8075484440
  18    25.054395659078177206     1.273356000    0.8067446107
  19    26.476959772907788848     1.272582158    0.8060271793
  20    27.899641020779716433     1.271892050    0.8053832116

  : \[table:one\]Values of $\beta_c$, $\lambda_1$ and $\lambda_2$ for $N=1$ to 20.
[||c|c|c|c||]{}
$N$ & $\gamma$ & $\Delta$ & $\beta_c/N$
1 & 1.29914073 & 0.425946859 & 1.179030170
2 & 1.41644996 & 0.475380831 & 1.236763288
3 & 1.52227970 & 0.532691965 & 1.275794011
4 & 1.60872817 & 0.590232008 & 1.302790391
5 & 1.67551051 & 0.642369187 & 1.322083069
6 & 1.72617703 & 0.686892637 & 1.336351901
7 & 1.76479863 & 0.723880426 & 1.347244235
8 & 1.79469274 & 0.754352622 & 1.355791342
9 & 1.81827105 & 0.779508505 & 1.362657559
10 & 1.83722291 & 0.800424484 & 1.368284407
11 & 1.85272636 & 0.817977695 & 1.372974325
12 & 1.86561092 & 0.832855522 & 1.376940318
13 & 1.87646998 & 0.845589221 & 1.380336209
14 & 1.88573562 & 0.856588705 & 1.383275590
15 & 1.89372812 & 0.866171682 & 1.385844022
16 & 1.90068903 & 0.874586271 & 1.388107107
17 & 1.90680338 & 0.882027998 & 1.390115936
18 & 1.91221507 & 0.888652409 & 1.391910870
19 & 1.91703752 & 0.894584429 & 1.393524199
20 & 1.92136121 & 0.899925325 & 1.394982051
$\infty$ & 2 & 1 & $\frac{2-c}{2(c-1)}=1.42366\ldots$
\[table:two\]
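As a consistency check between the two tables, the exponents can be recomputed from $(\lambda_1,\lambda_2)$; the relations used in the snippet below, $\gamma=2\ln c/\ln\lambda_1$ and $\Delta=-\ln\lambda_2/\ln\lambda_1$ with $c=2^{1/3}$, are assumed here (Eq. (\[eq:exponents\]) itself is not reproduced in this section) and reproduce Table \[table:two\] within the quoted accuracy.

```python
import math

c = 2.0 ** (1.0 / 3.0)
sample = {1: (1.427172478, 0.8594116492),   # N: (lambda_1, lambda_2) from Table [table:one]
          10: (1.285978489, 0.8176485461),
          20: (1.271892050, 0.8053832116)}

for N, (lam1, lam2) in sample.items():
    gamma = 2.0 * math.log(c) / math.log(lam1)   # assumed relation for gamma
    Delta = -math.log(lam2) / math.log(lam1)     # assumed relation for Delta
    print(N, round(gamma, 6), round(Delta, 6))
# e.g. N = 1 gives gamma ~ 1.299141 and Delta ~ 0.425947, as in Table [table:two]
```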
As $N$ increases, the values displayed in Table \[table:two\] seem to slowly approach asymptotic values. This is expected. Using the general formulation of Ref. [@ma73; @david85] together with the particular form of the propagator [@complexs] for the model considered here, one finds the leading terms $$\begin{aligned}
\label{eq:next}
\gamma &\simeq & 2+a_1/N+\dots \\ \nonumber
\Delta &\simeq & 1+b_1/N +\dots \\
\beta_c/N &\simeq & (2-c)/(2(c-1)) +c_1/N +\dots \ .\end{aligned}$$ The magnitude of the coefficients $a_1, \ b_1,\ c_1$ of the leading $1/N$ corrections can be estimated by subtracting the asymptotic value and multiplying by $N$. The results are shown in Table \[table:three\]. They indicate that $a_1 \simeq -1.6, \ b_1 \simeq -2.0,\ c_1 \simeq -0.57$. It seems possible to improve the accuracy by estimating the next-to-leading-order corrections and so on. However, the stability of this procedure is more delicate and remains to be studied with simpler examples.
[||c|c|c|c||]{}
$N$ & $N(2-\gamma )$ & $N(1-\Delta)$ & $N(\frac{2-c}{2(c-1)}-\frac{\beta_c}{N})$
1 & 0.7009 & 0.5741 & 0.2446
2 & 1.167 & 1.049 & 0.3738
3 & 1.433 & 1.402 & 0.4436
4 & 1.565 & 1.639 & 0.4835
5 & 1.622 & 1.788 & 0.5079
6 & 1.643 & 1.879 & 0.5239
7 & 1.646 & 1.933 & 0.5349
8 & 1.642 & 1.965 & 0.5430
9 & 1.636 & 1.984 & 0.5490
10 & 1.628 & 1.996 & 0.5538
11 & 1.620 & 2.002 & 0.5576
12 & 1.613 & 2.006 & 0.5606
13 & 1.606 & 2.007 & 0.5632
14 & 1.600 & 2.008 & 0.5654
15 & 1.594 & 2.007 & 0.5673
16 & 1.589 & 2.007 & 0.5689
17 & 1.584 & 2.006 & 0.5703
18 & 1.580 & 2.004 & 0.5715
19 & 1.576 & 2.003 & 0.5726
20 & 1.573 & 2.001 & 0.5736
\[table:three\]
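The subtract-and-multiply estimate can be reproduced directly from Table \[table:two\]; the snippet below (illustration only) regenerates the last row of Table \[table:three\].

```python
c = 2.0 ** (1.0 / 3.0)
beta_asym = (2.0 - c) / (2.0 * (c - 1.0))     # asymptotic value 1.42366... of beta_c/N
N, gamma, Delta, beta_over_N = 20, 1.92136121, 0.899925325, 1.394982051

print(N * (2.0 - gamma), N * (1.0 - Delta), N * (beta_asym - beta_over_N))
# -> roughly 1.573, 2.001, 0.574, consistent with a_1 ~ -1.6, b_1 ~ -2.0, c_1 ~ -0.57
```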
We now compare the exponents calculated here with those calculated with three other RG transformations [@comellas97; @gottker99; @litim02]. As we proceed to explain, the exponents should be the same in the four cases (including ours). The change of coordinates that relates the RG transformation considered here and the one studied in Ref. [@gottker99] is given in the introduction of [@koch91] (for $L=2^{1/3}$). The fact that the limit $L \rightarrow 1$ in the formulation of Ref. [@gottker99] yields the Polchinski equation in the local potential approximation studied in Ref. [@comellas97] is explained in Ref. [@felder87]. Consequently, these two RG transformations should be the same in the [*linear*]{} approximation. Finally, Litim [@litim00; @litim02] proposed an optimized version of the exact RG transformation and suggested [@litim05] that it was equivalent to the Polchinski equation in the local potential approximation. The equivalence was subsequently proved by Morris [@morris05].
To facilitate the comparison, we display $\nu =\gamma /2$ (since $\eta =0$ here) and $\omega = \Delta/\nu$ in Table \[table:four\]. Our results coincide with the four digits given in column (2) of Table 3 (for $\nu$) and 4 (for $\omega$) in [@comellas97]. They coincide with the six digits for $\nu$ given in the line $d=3$ of Table 8 of [@gottker99] for $N=1$, 2, 3, 5 and 10. However, we found discrepancies of order 1 in the fifth digit of $\nu$, and slightly larger ones for $\omega$, with the values found in Table 1 of [@litim02]. Our estimated errors are of order 1 in the ninth digit. For $N=1$, this is confirmed by an independent method [@gam3]. For $N=$ 2, 3, 5, and 10, this is confirmed up to the sixth digit [@gottker99]. Consequently, a discrepancy in the fifth digit cannot be explained by our numerical errors. Note also that for $N\geq 2$, $\alpha$ is more negative than for nearest-neighbor models [@pelissetto00b]. In summary, we have provided high-accuracy data for $\gamma$, $\Delta$ and $\beta_c$ for $N$ up to 20. It seems likely that a few terms of the $1/N$ expansion for these three quantities can be estimated from this data. Work is in progress to calculate these expansions independently by semi-analytical methods and to learn about the asymptotic behavior of the series and their accuracy. The discrepancy in the fifth digit with Ref. [@litim02] remains to be explained.
We thank G. ’t Hooft, J. Zinn-Justin and G. Parisi for valuable comments on related topics. This research was supported in part by the Department of Energy under Contract No. FG02-91ER40664. M.B. Oktay has been supported by SFI grant 04/BRG/P0275.
[||c|c|c|c||]{}
$N$ & $\nu =\gamma /2$ & $\omega = \Delta/\nu$ & $\alpha=2-3\nu$
1 & 0.649570 & 0.655736 & 0.051289
2 & 0.708225 & 0.671229 & -0.124675
3 & 0.761140 & 0.699861 & -0.283420
4 & 0.804364 & 0.733787 & -0.413092
5 & 0.837755 & 0.766774 & -0.513266
6 & 0.863089 & 0.795854 & -0.589266
7 & 0.882399 & 0.820355 & -0.647198
8 & 0.897346 & 0.840648 & -0.692039
9 & 0.909136 & 0.857417 & -0.727407
10 & 0.918611 & 0.871342 & -0.755834
\[table:four\]
---
abstract: 'It is shown that three-dimensional systems of coupled quantum wires support fractional topological phases composed of closed loops and open planes of two-dimensional fractional quantum Hall subsystems. These phases have topologically protected edge states, and are separated by exotic quantum phase transitions corresponding to a rearrangement of fractional quantum Hall edge modes. Some support for the existence of an extended exotic critical phase separating the bulk gapped fractional topological phases is given. Without electron-electron interactions, similar but unfractionalized bulk gapped phases based on coupled integer quantum Hall states exist. They are separated by an extended critical Weyl semimetal phase.'
author:
- Tobias Meng
title: 'Fractional topological phases in three-dimensional coupled-wire systems'
---
Introduction
============
Since the experimental discovery and theoretical explanation of the fractional quantum Hall effect,[@tsui_fqhe; @laughlin_83] fractionalization in interacting topological systems has been an important theme in condensed matter physics. While the understanding of fractionalized topological phases has impressively developed in two dimensions (2D), much less is known in three dimensions (3D). Slave-particle approaches have for instance allowed one to make progress for topological Mott insulators,[@pesin_10; @wk_10; @scheurer_15] and 3D fractional topological insulators.[@maciejko_10; @swingle_11; @maciejko_12; @lee_12] Some exactly solvable models have been reported,[@castelnovo_08; @levin_11] and the Kitaev honeycomb model[@kitaev_06] has been generalized to 3D lattices.[@si_08; @mandal_09; @ryu_09; @wu_09; @hermanns_14; @hermanns_15] Further examples for fractionalized 3D phases include spin ice,[@castelnovo_07; @morris_09; @fennel_09; @bramwell_09] and stacks of 2D fractional quantum Hall layers.[@balents_96] In the latter, inter layer couplings can stabilize many-layer versions of Halperin bilayer states,[@halperin_83; @qiu_89; @jaud_00] or exotic phases with fractionally charged fermionic 3D quasiparticles.[@levin_09] Other coupled-layer constructions [@wang_13; @jian_14] can also exhibit string-like excitations, which in general allow for non-trivial 3D braiding.[@wang_14; @jian_14; @wang_15; @Jiang_14; @moradi_15]
Instead of coupling extended layers, this work engineers 3D fractional topological states by connecting 2D building blocks of finite width along different directions. While coupled-layer physics can be recovered by dominantly coupling the building blocks along planes, the blocks may also connect along other geometries, such as closed loops. This gives rise to additional topological phases and phase transitions. For concreteness, the remainder studies narrow integer and fractional quantum Hall strips as building blocks. As a further difference to previous studies, the individual building blocks are constructed from coupled quantum wires containing spin-polarized electrons. Time-reversal symmetry is thus broken from the outset. Since each wire can be treated as a Luttinger liquid,[@sondhi_00] this approach is especially powerful for the analysis of interacting phases. Starting with the pioneering work of Kane *et al.*,[@kane_02; @teo_14] it has been shown that coupled-wire constructions allow for an analytically tractable description of integer and fractional, Abelian and non-Abelian topological 2D states.[@lu-12prb125119; @vaezi-14prl236804; @seroussi-14prb104523; @klinovaja-14epjb87; @meng-14epjb203; @sagi-14prb201102; @mong-14prx011036; @vaezi14prx031009; @klino_14; @klino_15; @santos_15; @meng_15; @goroh_15] Coupled-wire constructions have also led to qualitatively new results, including a classification of interacting topological phases,[@neupert-14prb205101] and the prediction of spontaneously time-reversal-symmetry-broken states towards which 2D fractional topological insulators can be unstable.[@meng-14prb235425]
The present work extends this list by adapting coupled-wire constructions as a tool for the analysis of novel interacting topological physics in 3D. The potential of this approach is exemplified by constructing a system that hosts several fractional topological phases. The coupled-wire construction provides simple illustrations of these phases in terms of closed loops (or cylinders) and open planes of integer and fractional quantum Hall subsystems. It also allows one to identify a regime in which an extended exotic critical phase may exist. At special Luther-Emery-type points, the low-energy theory of the system can furthermore be solved by an exact mapping to non-interacting fermions of fractional charge.
The plan of the paper is as follows. Section \[sec:blocks\] discusses the individual building blocks that are used to construct the 3D system. Section \[sec:non\_int\_array\] is devoted to the analysis of the non-interacting array, whose phase diagram is detailed in Sec. \[sec:non\_int\_phases\]. In Sec. \[sec:interactions\_ll\], I turn to the description of the interacting 3D system in terms of coupled Luttinger liquids. The interacting phase diagram is discussed in Sec. \[sec:interact\_phases\]. The results are finally summarized in Sec. \[sec:summary\].
Integer and fractional quantum Hall building blocks {#sec:blocks}
===================================================
The Abelian, but in general fractionalized, phases studied in the remainder are constructed from two fundamental building blocks A-X-B and C-Y-D shown in Fig. \[fig:blocks\]. The dispersions of the electron-type wires A and B, and the hole-type wires C and D, are asymmetrical with respect to zero momentum $k_x$ along the wires. For a given chemical potential $\mu$, the Fermi points reside at momentum $k_x=-k_{1,2}$ in wires A and C, and $k_x=+k_{1,2}$ in wires B and D. This can be realized in spinful wires with spin-orbit coupling, which are polarized by a magnetic field parallel to the spin-orbit direction. The central wire X is of hole-type, while Y is of electron-type. They have Fermi points at $k_x=\pm k_3$. In each building block, the close-by inner and outer wires are tunnel-coupled by a strong hopping term. A direct tunneling, albeit of reduced strength, also exists between the more distant outer wires.
![The building blocks A-X-B and C-Y-D, and the dispersions $E^{(\cdot)}(k_x)$ of the different wires. On the left, strong (weak) tunneling couplings are indicated by thick (thin) lines.[]{data-label="fig:blocks"}](blocks){width="\columnwidth"}
Reference showed that the building blocks form narrow integer quantum Hall strips of opposite chirality if $k_1=0$ and $k_2=k_3$. This can be motivated by noting that the dominant tunnelings between the inner and outer wires induce large gaps for the counterpropagating modes at $k_2=k_3$, and $-k_2=-k_3$, such that the central wire X (Y) forms the bulk of a three wire wide integer quantum Hall strip. A right-moving edge mode of momentum $k_x\approx k_1=0$ lives in wire A (D), while a left-moving edge mode is located in wire B (C). In an isolated building block, these edge modes acquire a small gap due to both their overlap across the central wire, and the direct tunneling between the outer wires.
With electron-electron interactions, and if the filling is reduced, the building blocks can enter fractional quantum Hall states.[@kane_02] A Laughlin state at an effective filling factor $\nu=1/(2m+1)$ with positive integer $m$ can for instance arise for $k_1=m k_3$ and $k_2 = (m+1) k_3$. While single-particle tunneling between counterpropagating modes is then forbidden by momentum conservation, correlated tunnelings can drive the building blocks into fractional quantum Hall states. The central wire X (Y) then constitutes the gapped bulk of a now fractional quantum Hall strip, while the outer wires again host chiral edge modes. A more detailed discussion of the physics within a given building block in terms of coupled Luttinger liquids is given in Sec. \[subsec:blocks\] below.
3D system of non-interacting wires {#sec:non_int_array}
==================================
To realize 3D phases, I consider the periodic array shown in Fig. \[fig:system\]. I require the tunnelings between the inner and outer wires in each building block to be the largest energy scale after the bandwidth of the wires, and the chemical potential. Additional tunnelings between wires X and C, and X and D within a unit cell, as well as between wires Y and A, and Y and B in neighboring unit cells, which compete with the dominant intra building block tunnelings, can then be neglected, and the low-energy physics of the array is fully described by the integer or fractional quantum Hall edge modes in wires A, B, C, and D, as well as the interactions and tunnelings between them.
Focussing first on the integer quantum Hall case $k_1=0$, $k_2=k_3$, the low-energy dispersions of the edge modes are well approximated by $\pm v_F k_x$, where $v_F$ denotes the Fermi velocity. Along $y$, I assume neighboring edge modes to be coupled by small alternating hoppings $t_{y1}$ and $t_{y2}$, whose strengths can be controlled by the inter wire distances. I take these hoppings to be shifted by one edge mode in the next $(x,y)$ layer. Along $z$, I connect neighboring edge modes by small tunnelings $t_{z1}$ within the unit cell, and $t_{z2}$ between two adjacent unit cells. All tunnel couplings are chosen to be positive.
![Section of the periodic 3D system (the dotted box shows the unit cell). Thick dotted diagonal lines depict the dominant hoppings in each A-X-B and C-Y-D building block, thin dotted diagonal lines indicate subleading hoppings between the X and C, D (Y and A, B) wires. The edge mode couplings are $t_{y1}$ along solid lines, $t_{y2}$ along dashed lines, $t_{z1}$ along dotted vertical lines, and $t_{z2}$ along dash-dotted lines.[]{data-label="fig:system"}](system){width="0.7\columnwidth"}
Labelling the unit cells by an index $p$ in $y$ direction and $q$ in $z$ direction, the array is described by the low-energy Hamiltonian $$\begin{aligned}
H&=\sum_{k_x}\sum_{p,q,p'q'}\Psi_{k_xpq}^\dagger\,\begin{pmatrix}\mathcal{H}_{11}\delta_{q,q'}&\mathcal{H}_{12}\delta_{p,p'}\\\mathcal{H}_{21}\delta_{p,p'}&\mathcal{H}_{22}\delta_{q,q'}\end{pmatrix}\,\Psi_{k_xp'q'}^{{\phantom{\dagger}}}~,\label{eq:ham}\\
\mathcal{H}_{11}&=\begin{pmatrix}v_F k_x\delta_{p,p'}&t_{y1}\delta_{p,p'}+t_{y2}\delta_{p,p'+1}\\t_{y1}\delta_{p,p'}+t_{y2}\delta_{p,p'-1}&-v_F k_x\delta_{p,p'}\end{pmatrix},\\
\mathcal{H}_{12}&=(t_{z1}\delta_{q,q'}+t_{z2}\delta_{q,q'-1})\,\mathds{1}_{2\times2}~,\\
\mathcal{H}_{21}&=(t_{z1}\delta_{q,q'}+t_{z2}\delta_{q,q'+1})\,\mathds{1}_{2\times2}~,\\
\mathcal{H}_{22}&=\begin{pmatrix}-v_F k_x\delta_{p,p'}&t_{y2}\delta_{p,p'}+t_{y1}\delta_{p,p'+1}\\t_{y2}\delta_{p,p'}+t_{y1}\delta_{p,p'-1}&v_F k_x\delta_{p,p'}\end{pmatrix},\end{aligned}$$ where $\Psi_{k_xpq}=(c_{k_xpq}^{(\rm{A})},c_{k_xpq}^{(\rm{B})},c_{k_xpq}^{(\rm{C})},c_{k_xpq}^{(\rm{D})})^T$ is the vector of annihilation operators for edge mode electrons with momentum $k_x$ in wire A, B, C, and D of unit cell $(p,q)$. This Hamiltonian is essentially identical to the low-energy description of stacked topological insulators analyzed by Burkov and Balents.[@burkov_balents_layers_11] Following their calculation, Eq. is Fourier transformed to momenta $-\pi/a_{y,z}\leq k_{y,z}<\pi/a_{y,z}$, where $a_{y}$ ($a_z$) is the unit cell distance in $y$ ($z$) direction. The Fourier transform of $\Psi_{k_xpq}^{{\phantom{\dagger}}}$ is $\Psi_{\bf{k}}^{{\phantom{\dagger}}}$, where ${\bf{k}}=(k_x,k_y,k_z)^T$ denotes the 3D momentum. Next, it is useful to perform the gauge transformations $c_{\bf{k}}^{(\rm{B})}\to e^{ik_ya_y/2}\,c_{\bf{k}}^{(\rm{B})}$, $c_{\bf{k}}^{(\rm{C})} \to e^{-ik_za_z/2}\, c_{\bf{k}}^{(\rm{C})}$, $c_{\bf{k}}^{(\rm{D})} \to e^{ik_ya_y/2}\,e^{-ik_za_z/2}\, c_{\bf{k}}^{(\rm{D})}$, and to introduce a pseudospin $\sigma$ within the A, B, and C, D subspaces, acted on by Pauli matrices $\sigma_{x,y,z}$, as well as a pseudospin $\tau$ between these two subspaces. The Hamiltonian can then be cast into the form $H=\sum_{\bf{k}}\Psi_{\bf{k}}^\dagger\,\mathcal{H}_{\bf k}\,\Psi_{\bf{k}}^{{\phantom{\dagger}}}$ with
$$\begin{aligned}
\mathcal{H}_{\bf{k}}=&v_F k_x\,\sigma_z\tau_z+(t_{y1}+t_{y2})\cos(k_ya_y/2)\,\sigma_x\nonumber
\\&-(t_{y1}-t_{y2})\sin(k_ya_y/2)\,\sigma_y\tau_z\nonumber\\
&+(t_{z1}+t_{z2})\cos(k_za_z/2)\,\tau_x\nonumber\\
&+(t_{z1}-t_{z2})\sin(k_za_z/2)\,\tau_y~.\label{eq:final_ham}\end{aligned}$$
After the canonical transformation $\sigma_y\to\sigma_y\tau_z$, $\sigma_z\to\sigma_z\tau_z$, $\tau_x\to\tau_x\sigma_x$, $\tau_y\to\tau_y\sigma_x$, the diagonalization of the $\tau$-sector yields $H=\sum_{\bf{k}}\Psi_{\bf{k}}^\dagger\,{\rm{diag}}(\mathcal{H}_{{\bf k},+},\mathcal{H}_{{\bf k},-})\,\Psi_{\bf{k}}^{{\phantom{\dagger}}}$ with
$$\begin{aligned}
\mathcal{H}_{\bf{k},\pm}=&v_F k_x\sigma_z-(t_{y1}- t_{y2})\,\sin(k_ya_y/2)\sigma_y+M_{\pm}(\textbf{k})\sigma_x~,\end{aligned}$$
where the momentum-dependent mass reads
$$\begin{aligned}
M_{\pm}(\textbf{k})=&\pm\sqrt{t_{z1}^2+t_{z2}^2+2t_{z1}t_{z2}\cos(k_z a_z)}\nonumber\\
&+(t_{y1}+ t_{y2})\,\cos(k_ya_y/2)~.\end{aligned}$$
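The block diagonalization can be cross-checked numerically; the snippet below (with arbitrary illustrative couplings) diagonalizes the $4\times4$ Bloch Hamiltonian of Eq. (\[eq:final\_ham\]) directly and compares the spectrum with $\pm\sqrt{v_F^2k_x^2+(t_{y1}-t_{y2})^2\sin^2(k_ya_y/2)+M_\pm^2}$; the tensor-factor ordering chosen in the code only permutes the basis and does not affect the eigenvalues.

```python
import numpy as np

vF, ay, az = 1.0, 1.0, 1.0
ty1, ty2, tz1, tz2 = 0.3, 0.5, 0.2, 0.6       # arbitrary illustrative couplings
kx, ky, kz = 0.1, 0.7, 2.0

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)

# Eq. (final_ham), written as kron(sigma-part, tau-part)
H = (vF * kx * np.kron(sz, sz)
     + (ty1 + ty2) * np.cos(ky * ay / 2) * np.kron(sx, s0)
     - (ty1 - ty2) * np.sin(ky * ay / 2) * np.kron(sy, sz)
     + (tz1 + tz2) * np.cos(kz * az / 2) * np.kron(s0, sx)
     + (tz1 - tz2) * np.sin(kz * az / 2) * np.kron(s0, sy))

M = (np.array([1.0, -1.0]) * np.sqrt(tz1**2 + tz2**2 + 2 * tz1 * tz2 * np.cos(kz * az))
     + (ty1 + ty2) * np.cos(ky * ay / 2))     # the two masses M_+ and M_-
analytic = np.sort(np.concatenate(
    [s * np.sqrt((vF * kx)**2 + ((ty1 - ty2) * np.sin(ky * ay / 2))**2 + M**2)
     for s in (1.0, -1.0)]))
print(np.allclose(np.sort(np.linalg.eigvalsh(H)), analytic))   # -> True
```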
Phase diagram of the non-interacting model {#sec:non_int_phases}
==========================================
Despite being described by the same low-energy Hamiltonian as stacked topological insulators, I find that the array of coupled wires can enter a single-surface quantum anomalous Hall (SSQAH) phase which is not present in Ref. . This phase is realized for $t_{y1}+t_{y2} < |t_{z1}-t_{z2}|$ and $t_{z1}<t_{z2}$, and has a single quantum Hall layer formed by the topmost (for $t_{y1}<t_{y2}$) or bottommost (for $t_{y1}>t_{y2}$) layer of wires, see below. Interestingly, the ratio $t_{y1}/t_{y2}$ thus provides an experimental knob (“quantum Hall switch”) selecting the surface on which the single quantum Hall layer, heralded by its gapless edge state, appears.
![Phase diagram in the non-interacting (left), and interacting (right) case with normal insulating (NI), (fractional) quantum anomalous Hall ((F)QAH), Weyl semimetal (WS), and single-surface (fractional) quantum anomalous Hall (SS(F)QAH) phases. In a lattice model, the filling would be 1/2 (1/6) in the non-interacting (interacting) case.[]{data-label="fig:phases"}](phase_diags){width="\columnwidth"}
All other phases of the array are, however, similar to Ref. . For $t_{y1}+t_{y2} < |t_{z1}-t_{z2}|$ and $t_{z1}>t_{z2}$, the system is a normal insulator (NI). If $|t_{z1}-t_{z2}|\leq t_{y1}+t_{y2} \leq t_{z1}+t_{z2}$, the array forms a Weyl semimetal (WS) with two gapless Weyl nodes of opposite chirality at $\textbf{k}_\pm=(0,0,\pi/a_z\pm k_{z0})^T$, where $k_{z0}=\arccos(1-[(t_{y1}+t_{y2})^2-(t_{z1}-t_{z2})^2]/2 t_{z1}t_{z2})/a_z$. Surfaces then have a Fermi arc between the projections of the Weyl nodes, and show a finite Hall conductivity proportional to the distance of the projected Weyl nodes. For $t_{y1}+t_{y2} > t_{z1}+t_{z2}$, finally, the system is in a quantum anomalous Hall (QAH) phase, and exhibits a Hall conductivity of $\sigma_{xy}=e^2/h$ per $(x,y)$ layer of unit cells on surfaces with a normal in the $(x,y)$ plane. The phase diagram of the array of wires is shown in Fig. \[fig:phases\]. I have checked that the bulk phase transitions as obtained from the effective low-energy model, and the presence of a Weyl semimetal phase qualitatively agree with a tight-binding calculation that includes all wires and tunnelings (also the subleading tunnelings between wires X, C, and D, as well as Y, A, and B).
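The phase boundaries quoted above translate into the small classification routine sketched below (arbitrary illustrative couplings; boundary cases are not treated separately), which also returns the Weyl-node positions in the WS phase.

```python
import numpy as np

def classify(ty1, ty2, tz1, tz2, az=1.0):
    """Classify the bulk phase of the non-interacting array from its edge-mode couplings."""
    ty, dz = ty1 + ty2, abs(tz1 - tz2)
    if ty < dz:
        return "NI" if tz1 > tz2 else "SSQAH"
    if ty <= tz1 + tz2:
        kz0 = np.arccos(1.0 - (ty**2 - dz**2) / (2.0 * tz1 * tz2)) / az
        return "WS, Weyl nodes at k_z = pi/a_z +/- %.3f" % kz0
    return "QAH"

print(classify(0.1, 0.2, 0.2, 0.6))   # -> SSQAH
print(classify(0.3, 0.4, 0.2, 0.6))   # -> WS with two Weyl nodes
print(classify(0.6, 0.7, 0.2, 0.3))   # -> QAH
```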
Besides a blueprint for the constructivist engineering of 3D topological states, the coupled-wire construction provides particularly simple visualizations of the bulk gapped phases based on their hierarchy of couplings. The subleading couplings, which compete with the respective dominant coupling, are irrelevant, and the system is adiabatically connected to an array in which only the leading coupling is present. Along the dominant tunnelings, the edge modes of the individual building blocks form closed loops and open planes of quantum Hall layers, see Fig. \[fig:visu\]. In the NI phase, all building blocks connect along closed loops, resulting in a full gap. The QAH phase, on the other hand, consists of open quantum Hall planes alternating with closed quantum Hall loops. Since each open plane has a gapless edge mode, the Hall conductivity is indeed $\sigma_{xy}=e^2/h$ per $(x,y)$ layer of units cells. In the SSQAH phase, all building blocks form trivial quantum Hall loops, except for the ones on the topmost or bottommost layer (depending on the ratio $t_{y1}/t_{y2}$). These form a single open quantum Hall plane. The WS, finally, occurs if the competing tunnelings along $y$ and $z$ are of similar strength. The system then enters a critical phase with gapless states of definite momentum (the Weyl nodes), which has no simple real-space picture.
![Visualization of the gapped phases for $t_{y1}<t_{y2}$. Thick solid lines depict the dominant couplings along which closed loops and open planes of integer or fractional quantum Hall layers form. Open planes are associated with chiral edge modes illustrated by thick lines with arrows.[]{data-label="fig:visu"}](phases_wires){width="1.0\columnwidth"}
Electron-electron interactions and fractionalization {#sec:interactions_ll}
====================================================
While the non-interacting array allowed the construction and analysis of the interesting SSQAH state, and gives simple physical pictures of the bulk gapped phases, the true power of coupled-wire constructions lies in the description of interacting systems. To tackle these, I linearize the spectrum of each wire $n=A,B,C,D,X,Y$ within a given unit cell around the Fermi points at $\pm k_{1,2,3}$, and decompose the electron operators into right ($R$) and left ($L$) moving modes as $c_n(x)=e^{ik_{FRn}x} R_n(x)+e^{ik_{FLn}x} L_n(x)$, where the respective Fermi momentum is $k_{Frn}$, and $r=R,L$. For $k_1=m k_3$ and $k_2 = (m+1) k_3$ with an integer $m>0$, momentum conservation forbids single-particle tunneling between counterpropagating modes. The combination of electron-electron interactions and tunneling, however, still generates momentum conserving correlated tunnelings between neighboring wires.
In the $2m$th order of a perturbation theory in a (contact) interaction $V$, the tunneling $t_{A-X}$ between wires $A$ and $X$ generates for instance a term proportional to
$$\begin{aligned}
t_{A-X}\,V^{2m}& \left(e^{-i k_{FRX} x}R_X^\dagger\,e^{i k_{FLA}x}L_A^{{\phantom{\dagger}}}\right)\nonumber\\
\times&\left(e^{-i k_{FRX} x}R_X^\dagger\,e^{i k_{FLX}x}L_X^{{\phantom{\dagger}}}\right)^m\nonumber\\
\times&\left(e^{-i k_{FRA} x}R_A^\dagger\,e^{i k_{FLA}x}L_A^{{\phantom{\dagger}}}\right)^m+\rm{H.c.}~.\end{aligned}$$
Connecting different Fermi points, this term trivially conserves energy. Momentum, on the other hand, is conserved for $(m+1)\,k_{FRX}+m\,k_{FRA}=(m+1)\,k_{FLA}+m\,k_{FLX}$, which is fulfilled for the considered $k_1=m k_3$ and $k_2 = (m+1) k_3$. Collecting all leading edge mode scatterings at this order in $V$ that conserve energy and momentum, I find that the bulk of the A-X-B (C-Y-D) building blocks can be fully gapped by
$$\begin{aligned}
R_{A(D)}^\dagger{}^mR_{X(Y)}^\dagger{}^{m+1}L_{X(Y)}^{{\phantom{\dagger}}}{}^mL_{A(D)}^{{\phantom{\dagger}}}{}^{m+1}+\rm{H.c.}~,\\
R_{X(Y)}^\dagger{}^mR_{B(C)}^\dagger{}^{m+1}L_{B(C)}^{{\phantom{\dagger}}}{}^mL_{X(Y)}^{{\phantom{\dagger}}}{}^{m+1}+\rm{H.c.}~.\end{aligned}$$
\[eq:block\_couplings\]
The leading couplings between the edge modes of the building blocks inside a unit cell, on the other hand, are
$$\begin{aligned}
R_B^\dagger{}^mR_A^\dagger{}^{m+1}L_A^{{\phantom{\dagger}}}{}^mL_B^{{\phantom{\dagger}}}{}^{m+1}+\rm{H.c.}~,\label{eq:coupling1}\\
R_C^\dagger{}^mR_D^\dagger{}^{m+1}L_D^{{\phantom{\dagger}}}{}^mL_C^{{\phantom{\dagger}}}{}^{m+1}+\rm{H.c.}~,\label{eq:coupling2}\\
R_C^\dagger{}^mR_A^\dagger{}^{m+1}L_A^{{\phantom{\dagger}}}{}^mL_C^{{\phantom{\dagger}}}{}^{m+1}+\rm{H.c.}~,\label{eq:coupling3}\\
R_B^\dagger{}^mR_D^\dagger{}^{m+1}L_D^{{\phantom{\dagger}}}{}^mL_B^{{\phantom{\dagger}}}{}^{m+1}+\rm{H.c.}\label{eq:coupling4}\end{aligned}$$
\[eq:edge\_couplings\]
Couplings between different unit cells will be treated in section \[subsec:coupling\].
Luttinger liquid description of an individual building block {#subsec:blocks}
-------------------------------------------------------------
To understand how these couplings can lead to gapped Laughlin states at filling $\nu=1/(2m+1)$ within each building block, it is most convenient to switch to a bosonized language.[@giamarchi_book] Since the A-X-B and C-Y-D blocks can be treated in an analogous fashion, I only detail the description of the former. The relevant low-energy degrees of freedom are thus the chiral modes close to the Fermi points at momentum $k_x=\pm k_{1,2,3}$ in wires $n=A,X,B$, which are bosonized as $$\begin{aligned}
r_{n}^{}(x)=
\frac{U_{rn}^{}}{\sqrt{2\pi\alpha}}\,e^{-i\Phi_{rn}(x)},\label{eq:bos_form}\end{aligned}$$ where $U_{rn}^{}$ is a Klein factor, while $\alpha^{-1}$ denotes a high momentum cutoff. The bosonized fields obey $$\begin{aligned}
\bigl[\Phi_{rn}(x),\Phi_{r'n'}(x')\bigr] =\delta_{rr'}\delta_{nn'}\,i\pi r\,\text{sgn}(x-x').\end{aligned}$$ It is furthermore helpful to also introduce the fields
$$\begin{aligned}
\widetilde{\Phi}_{Rn}^{(m)} &= (m+1)\Phi_{R n}-m\Phi_{Ln}~,\\
\widetilde{\Phi}_{Ln}^{(m)} &= (m+1)\Phi_{L n}-m\Phi_{R n}~.\end{aligned}$$
\[eq:basis\_trafo\]
These obey the commutator
$$\begin{aligned}
[\widetilde{\Phi}_{rn}^{(m)}(x),\widetilde{\Phi}_{r'n'}^{(m)}(x')]=\delta_{rr'}\delta_{nn'}(2m+1)\,i\pi r\,\text{sgn}(x-x')~.\label{eq:supp_comm}\end{aligned}$$
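The factor $(2m+1)$ follows from a one-line computation; the short symbolic check below merely re-derives it from the commutator quoted above, with $s$ standing for $i\pi\,\text{sgn}(x-x')$ and all cross commutators (different chirality or different wire) vanishing by the delta functions.

```python
import sympy as sp

m, s = sp.symbols('m s')
comm = {('R', 'R'): s, ('L', 'L'): -s, ('R', 'L'): 0, ('L', 'R'): 0}

tilde_R = {'R': m + 1, 'L': -m}      # tilde Phi_R = (m+1) Phi_R - m Phi_L
bracket = sum(tilde_R[r] * tilde_R[rp] * comm[(r, rp)]
              for r in ('R', 'L') for rp in ('R', 'L'))
print(sp.factor(bracket))            # -> s*(2*m + 1)
```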
Dropping the Klein factors for notational convenience, Eq. translates to sine-Gordon terms
$$\begin{aligned}
\cos\left(\widetilde{\Phi}_{RX}^{(m)}-\widetilde{\Phi}_{LA}^{(m)}\right)~,\\
\cos\left(\widetilde{\Phi}_{RB}^{(m)}-\widetilde{\Phi}_{LX}^{(m)}\right)~.\end{aligned}$$
\[eq:supp\_ham2\]
To show that these couplings can lead to gapped Laughlin states at filling $\nu=1/(2m+1)$ within each building block, it is most convenient to switch to a bosonized language.[@giamarchi_book] Since the A-X-B and C-Y-D blocks can be treated in an analogous fashion, I only detail the description of the former. The relevant low-energy degrees of freedom are thus the chiral modes close to the Fermi points at momentum $k_x=\pm k_{1,2,3}$ in wires $n=A,X,B$; to show that the couplings can stabilize a three-wire wide quantum Hall state, I follow Ref. , and note that the argument of each sine-Gordon term commutes with itself at different positions. These cosines can thus order individually by pinning their arguments to one of their minima. In addition, they can order simultaneously, since their arguments also commute with each other. Because one can always find a Hamiltonian that renders the different sine-Gordon terms relevant in the renormalization group (RG) sense,[@kane_02] it is always possible to reach this situation at low energies. The resulting bulk gapped state has quasiparticle excitations of charge $e/(2m+1)$, as can be inferred from kinks in the cosines,[@kane_02; @charge_remark] and chiral edge modes $\widetilde{\Phi}_{RA}$ and $\widetilde{\Phi}_{LB}$ that obey the commutator given in Eq. , as expected for a Laughlin state at filling $\nu=1/(2m+1)$.[@kane_02] It has furthermore been shown that the Chern-Simons term associated with a 2D (fractional) quantum Hall effect is also contained in the coupled-wire construction used for the individual building blocks.[@santos_15] All of these signatures combined allow one to positively identify the gapped phase resulting from the coupling in Eq. as an integer (for $m=0$) or fractional (for integer $m>0$) quantum Hall state. The integer $m$, which technically relates to the order in perturbation theory in the interaction $V$ that generates the required sine-Gordon term, thus physically indicates the effective filling fraction $\nu=1/(2m+1)$ of the individual building blocks.
In a larger, isolated building block, where there is neither overlap nor direct tunneling between the edge modes $\widetilde{\Phi}_{RA}^{(m)}$ and $\widetilde{\Phi}_{LB}^{(m)}$, these modes would be gapless (they simply do not appear in the sine-Gordon part of the Hamiltonian, and therefore only carry a kinetic energy $\sim \pm v_F k_x$). In the narrow building blocks considered here, both a finite overlap through the bulk (the wire X) and a direct tunneling between the edge modes exist, such that the edge modes are gapped. The leading edge mode tunneling within the $A-X-B$ building block that is compatible with the bulk sine-Gordon terms in Eq. is given in Eq. . Its bosonized form
$$\begin{aligned}
\cos\left(\widetilde{\Phi}_{RA}^{(m)}-\widetilde{\Phi}_{LB}^{(m)}\right)\end{aligned}$$
shows that this term can order, and is indeed compatible with the bulk cosines: its argument commutes with itself at different positions, and with the arguments of the cosines given in Eq. . The couplings between the edges of the $C-Y-D$ block and between the two building blocks of a unit cell, given in Eq. -, have the bosonized expressions
$$\begin{aligned}
\cos\left(\widetilde{\Phi}_{RD}^{(m)}-\widetilde{\Phi}_{LC}^{(m)}\right)~,\\
\cos\left(\widetilde{\Phi}_{RA}^{(m)}-\widetilde{\Phi}_{LC}^{(m)}\right)~,\\
\cos\left(\widetilde{\Phi}_{RD}^{(m)}-\widetilde{\Phi}_{LB}^{(m)}\right)~.\end{aligned}$$
The arguments of the individual cosines commute with themselves at different positions, and with the arguments of the bulk sine-Gordon terms. The arguments of different sine-Gordon terms involving the same edge mode do not, however, commute, such that these edge mode couplings cannot order simultaneously.
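The compatibility pattern stated above can be made explicit with a small symbolic bookkeeping of the commutator of Eq. (\[eq:supp\_comm\]) (an illustration only): arguments sharing an edge mode fail to commute, while arguments built from disjoint sets of edge modes commute.

```python
import sympy as sp

m, s = sp.symbols('m s')                       # s = i*pi*sgn(x - x')

def comm(fa, fb):
    """Commutator of two tilde fields, nonzero only for equal chirality and wire."""
    (r, n), (rp, nn) = fa, fb
    if r == rp and n == nn:
        return (2 * m + 1) * s if r == 'R' else -(2 * m + 1) * s
    return sp.Integer(0)

def arg_comm(a, b):
    return sp.factor(sum(ca * cb * comm(fa, fb) for fa, ca in a for fb, cb in b))

arg_AB = [(('R', 'A'), 1), (('L', 'B'), -1)]   # tilde Phi_RA - tilde Phi_LB
arg_AC = [(('R', 'A'), 1), (('L', 'C'), -1)]   # tilde Phi_RA - tilde Phi_LC
arg_DC = [(('R', 'D'), 1), (('L', 'C'), -1)]   # tilde Phi_RD - tilde Phi_LC
print(arg_comm(arg_AB, arg_AC))                # -> s*(2*m + 1): cannot order together
print(arg_comm(arg_AB, arg_DC))                # -> 0: compatible couplings
```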
Luttinger liquid description of the three-dimensional system {#subsec:coupling}
------------------------------------------------------------
As in the non-interacting case, I furthermore subject the edge modes of the building blocks in the 3D system of coupled quantum wires to inter block couplings. I find that the leading edge mode couplings compatible with the bulk couplings of the individual building blocks in Eq. read
$$\begin{aligned}
R_{Bpq}^\dagger{}^mR_{Ap'q'}^\dagger{}^{m+1}L_{Ap 'q'}^{{\phantom{\dagger}}}{}^mL_{Bpq}^{{\phantom{\dagger}}}{}^{m+1}+\rm{H.c.}~,\\
R_{Cpq}^\dagger{}^mR_{Dp'q'}^\dagger{}^{m+1}L_{Dp'q'}^{{\phantom{\dagger}}}{}^mL_{Cpq}^{{\phantom{\dagger}}}{}^{m+1}+\rm{H.c.}~,\\
R_{Cpq}^\dagger{}^mR_{Ap'q'}^\dagger{}^{m+1}L_{Ap'q'}^{{\phantom{\dagger}}}{}^mL_{Cpq}^{{\phantom{\dagger}}}{}^{m+1}+\rm{H.c.}~,\\
R_{Bpq}^\dagger{}^mR_{Dp'q'}^\dagger{}^{m+1}L_{Dp'q'}^{{\phantom{\dagger}}}{}^mL_{Bpq}^{{\phantom{\dagger}}}{}^{m+1}+\rm{H.c.}~,\end{aligned}$$
\[eq:supp\_edge\_couplings\]
where $(pq)$ and $(p'q')$ are the $(y,z)$ indices of neighboring unit cells. Like the intra unit cell couplings, these terms can be generated by a perturbation theory in an interaction $V$ and inter-wire tunneling (the intra unit cell couplings in Eq. can be recovered by setting $p=p'$ and $q=q'$). The couplings in Eq. conserve energy and momentum for the considered case of $k_1=m\, k_3$ and $k_2 = (m+1)\, k_3$. Their bosonized form is
$$\begin{aligned}
\cos\left(\widetilde{\Phi}_{RAp'q'}^{(m)}-\widetilde{\Phi}_{LBpq}^{(m)}\right)~,\\
\cos\left(\widetilde{\Phi}_{RDp'q'}^{(m)}-\widetilde{\Phi}_{LCpq}^{(m)}\right)~,\\
\cos\left(\widetilde{\Phi}_{RAp'q'}^{(m)}-\widetilde{\Phi}_{LCpq}^{(m)}\right)~,\\
\cos\left(\widetilde{\Phi}_{RDp'q'}^{(m)}-\widetilde{\Phi}_{LBpq}^{(m)}\right)~,\end{aligned}$$
\[eq:supp\_edge\_couplings\_bos\]
where $\widetilde{\Phi}_{rnpq}^{(m)}$ is the bosonized field associated with the edge mode $r=R,L$ in wire $n=A,B,C,D$ of unit cell $(pq)$ defined in analogy to Eq. as
$$\begin{aligned}
\widetilde{\Phi}_{Rnpq}^{(m)} &= (m+1)\Phi_{R npq}-m\Phi_{Lnpq}~,\\
\widetilde{\Phi}_{Lnpq}^{(m)} &= (m+1)\Phi_{L npq}-m\Phi_{R npq}~.\end{aligned}$$
\[eq:basis\_trafo\_app\]
These fields obey the commutator
$$\begin{aligned}
[\widetilde{\Phi}_{rnpq}^{(m)}(x),\widetilde{\Phi}_{r'n'p'q'}^{(m)}(x')]=&\delta_{rr'}\delta_{nn'}\delta_{pp'}\delta_{qq'}\,(2m+1)\nonumber\\&\times i\pi r\,\text{sgn}(x-x')~.\label{eq:supp_comm_2}\end{aligned}$$
The argument of each sine-Gordon term commutes with itself at different positions, and can thus order. Like within a unit cell, the arguments of different sine-Gordon terms involving the same edge mode do not commute, and these sine-Gordon terms cannot order simultaneously. They do, however, commute with the arguments of the sine-Gordon terms stabilizing the integer and fractional quantum Hall states within each building block. The couplings in Eq. are thus a set of competing edge mode couplings that leave the bulk gaps of the individual building blocks untouched.
Phase diagram of the interaction model {#sec:interact_phases}
======================================
Within each A-X-B and C-Y-D building block, where all sine-Gordon terms can order simultaneously, a bulk gap results from the pinning of the arguments of the cosines. For the competing edge mode couplings, however, the situation is more delicate: not all sine-Gordon terms can pin their arguments simultaneously. If the couplings have a clear hierarchy in strength, the leading sine-Gordon term orders, while the others are suppressed. The system is then in a phase that is adiabatically connected to the array being only subject to the leading coupling, while all other couplings vanish. If there is no clear hierarchy, the different sine-Gordon terms compete, and there is no simple description in terms of pinned fields.
Clear hierarchy of couplings
----------------------------
Like in the non-interacting case, I take the couplings of the inner and outer wires in each building block to be the largest correlated tunnelings, which puts each building block in a fractional quantum Hall state. To analyze the effect of the competing edge mode couplings, I study an array with the same relative coupling strengths as before (alternating couplings along $y$, shifted by one edge mode in the next $(x,y)$ layer, and alternating couplings along $z$). If the intra unit cell couplings generalizing the hoppings $t_{z1}$ to the fractional case are much stronger than the other edge mode couplings, the A-X-B and C-Y-D building blocks pair up within the unit cells, and the array is in a normal insulating state. To see this, I use that the array is then adiabatically connected to a system where the edge modes are only connected by this dominant hopping. As shown in the middle panel of Fig. \[fig:visu\], the fractional quantum Hall building blocks then form small loops, which indeed corresponds to a normal insulating state. If, however, the edge modes dominantly pair up along $z$ between neighboring unit cells, a single-surface fractional quantum anomalous Hall (SSFQAH) phase with a single fractional quantum Hall layer on either the top or bottom surface is realized, as can be seen in the left panel of Fig. \[fig:visu\]. If, finally, one of the couplings along $y$ is strongest, I obtain a fractional quantum anomalous Hall (FQAH) phase analogous to the above QAH phase, which has one open fractional quantum Hall plane per $(x,y)$ layer of unit cells, see the right panel of Fig. \[fig:visu\]. This phase shows a fractional Hall conductivity of $\sigma_{xy}=\nu\,e^2/h$ per $(x,y)$ layer of unit cells, where $\nu=1/(2m+1)$. All of these bulk gapped phases are somewhat similar to weak topological insulators in that the individual fractional quantum Hall loops and planes support gapped quasiparticles of fractional charge and statistics confined to their respective 2D subsystem. The phase diagram of the interacting array in Fig. \[fig:phases\] locates the discussed phases in the respective limits of the parameter space.
Solution of Luther-Emery type at special points in the parameter space {#sec:special_points}
----------------------------------------------------------------------
Most interesting is the fate of the Weyl semimetal. I argued that this phase emerges in the non-interacting case when several competing tunneling couplings are of comparable strength. While the Weyl semimetal phase can be identified straightforwardly in a fermionic language, as was done in Sec. \[sec:non\_int\_phases\], it corresponds to a complicated regime of competing sine-Gordon terms of comparable strength in the Luttinger liquid description, see Sec. \[subsec:coupling\] with $m=0$. In the interacting array, I find that the same competition can also exist between the now more complicated sine-Gordon terms. The striking mathematical analogy of the Weyl semimetal regime and the regime of competing edge mode couplings with comparable strength in the interacting case hints at the possible existence of an extended critical phase separating the bulk gapped fractional phases. Some support for the generic existence of critical phases in 3D comes from the superconducting version of the non-interacting Hamiltonian in Eq. , which has a gapless superconducting Weyl phase[@meng_12] similar to $^3$He-A.[@volovik_book] Also spin liquids can exhibit a Weyl phase.[@hermanns_15]
As a first step, it is instructive to analyze the special points in parameter space where only one of the competing sine-Gordon terms is present. For these situations, an exact solution of the low-energy Hamiltonian in the spirit of a Luther-Emery point is possible for special choices of the density-density interactions. For concreteness, let me take only the coupling $t_{z1}^{(m)}$ generalizing $t_{z_1}$ to be finite. The Hamiltonian then reads
$$\begin{aligned}
&H=H_{\rm quad}+H_{t_{z1}}^{(m)},\\
&H_{t_{z1}}^{(m)}=\sum_{pq}\int dx\frac{t_{z1}^{(m)}}{\pi\alpha}\cos\left(\widetilde{\Phi}_{RApq}^{(m)}(x)-\widetilde{\Phi}_{LCpq}^{(m)}(x)\right)\nonumber\\
&+\sum_{pq}\int dx\frac{t_{z1}^{(m)}}{\pi\alpha}\cos\left(\widetilde{\Phi}_{LBpq}^{(m)}(x)-\widetilde{\Phi}_{RDpq}^{(m)}(x)\right)\end{aligned}$$
Here, $H_{\rm quad}$ contains the gapless motion along the $x$ direction, and density-density interactions quadratic in the bosonized fields. $H_{\rm quad}$ is thus composed of terms proportional to $\int dx\, (\partial_x \widetilde{\Phi}_{rnpq}^{(m)}) (\partial_x \widetilde{\Phi}_{r'n'p'q'}^{(m)})$.[@giamarchi_book] I now introduce the new fields
$$\begin{aligned}
\bar{\Phi}_{RApq}^{(m)}(x)&=\frac{m+1}{2m+1}\widetilde{\Phi}_{RApq}^{(m)}(x)-\frac{m}{2m+1} \widetilde{\Phi}_{LCpq}^{(m)}(x)~,\\
\bar{\Phi}_{LCpq}^{(m)}(x)&=\frac{m+1}{2m+1}\widetilde{\Phi}_{LCpq}^{(m)}(x)-\frac{m}{2m+1} \widetilde{\Phi}_{RApq}^{(m)}(x)~,\\
\bar{\Phi}_{LBpq}^{(m)}(x)&=\frac{m+1}{2m+1}\widetilde{\Phi}_{LBpq}^{(m)}(x)-\frac{m}{2m+1} \widetilde{\Phi}_{RDpq}^{(m)}(x)~,\\
\bar{\Phi}_{RDpq}^{(m)}(x)&=\frac{m+1}{2m+1}\widetilde{\Phi}_{RDpq}^{(m)}(x)-\frac{m}{2m+1} \widetilde{\Phi}_{LBpq}^{(m)}(x)~.\end{aligned}$$
\[eq:full\_trafo\]
These obey the canonical commutator
$$\begin{aligned}
[\bar{\Phi}_{rnpq}^{(m)}(x),\bar{\Phi}_{r'n'p'q'}^{(m)}(x')]=&\delta_{rr'}\delta_{nn'}\delta_{pp'}\delta_{qq'}\, i\pi r\,\text{sgn}(x-x')~.\label{eq:supp_comm_3}\end{aligned}$$
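That the rescaling by $1/(2m+1)$ in Eq. (\[eq:full\_trafo\]) indeed restores a canonical commutator can be checked in one line (cross commutators vanish because the two tilde fields live on different wires):

```python
import sympy as sp

m, s = sp.symbols('m s')                                 # s = i*pi*sgn(x - x')
a, b = (m + 1) / (2 * m + 1), m / (2 * m + 1)            # coefficients in bar Phi_RA
bracket = a**2 * (2 * m + 1) * s - b**2 * (2 * m + 1) * s
print(sp.simplify(bracket))                              # -> s, i.e. the canonical value
```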
Rewriting the Hamiltonian in terms of these new degrees of freedom yields
$$\begin{aligned}
&H=\bar{H}_{\rm quad}+\bar{H}_{t_{z1}}^{(m)},\\
&\bar{H}_{t_{z1}}^{(m)}=\sum_{pq}\int dx\frac{t_{z1}^{(m)}}{\pi\alpha}\cos\left(\bar{\Phi}_{RApq}^{(m)}(x)-\bar{\Phi}_{LCpq}^{(m)}(x)\right)\nonumber\\
&+\sum_{pq}\int dx\frac{t_{z1}^{(m)}}{\pi\alpha}\cos\left(\bar{\Phi}_{LBpq}^{(m)}(x)-\bar{\Phi}_{RDpq}^{(m)}(x)\right)~,\end{aligned}$$
where $\bar{H}_{\rm quad}$ is still composed of products of linear derivatives of the new bosonized fields $\bar{\Phi}_{rnpq}^{(m)}$. Because the fields $\bar{\Phi}_{rnpq}^{(m)}$ obey the canonical commutator of the bosonized fields describing chiral fermions, I can now rewrite the Hamiltonian as
$$\begin{aligned}
&H=\bar{H}_{0}^{(m)}+\bar{H}_{\rm int},\label{eq:supp_ham_bos_full_special}\\
&\bar{H}_{0}^{(m)}=\int dx\sum_{p,q}\bar{\Psi}_{pq}^{(m)}{}^\dagger(x)\,\mathcal{H}^{(m)}_0(x)\,\bar{\Psi}_{pq}^{(m)}(x)^{{\phantom{\dagger}}}~,\\
&\mathcal{H}^{(m)}_0(x)=\begin{pmatrix}-iu \partial_x&0&t_{z_1}^{(m)}&0\\0&iu \partial_x&0&t_{z_1}^{(m)}\\t_{z_1}^{(m)}&0&iu \partial_x&0\\0&t_{z_1}^{(m)}&0&-iu \partial_x\end{pmatrix}~,\label{eq:h_cos}\\
&\bar{\Psi}_{pq}^{(m)} =\frac{1}{\sqrt{2\pi\alpha}} (e^{-i\bar{\Phi}_{RApq}^{(m)}},e^{-i\bar{\Phi}_{LBpq}^{(m)}},e^{-i\bar{\Phi}_{LCpq}^{(m)}},e^{-i\bar{\Phi}_{RDpq}^{(m)}})^T~,\label{eq:new_op_special}\end{aligned}$$
where $\bar{H}_{\rm int}$ describes only the density-density interactions between the different modes, and where $u$ is the velocity associated with the fields $\bar{\Phi}_{rnpq}^{(m)}$ (which is for simplicity assumed to be identical for all modes). I can now invert the bosonization procedure for the components of $\bar{\Psi}_{pq}^{(m)}$ for all $m$ by refermionization.[@giamarchi_book; @delf_schoeller_review] Diagonalizing the refermionized version of $\bar{H}_{0}^{(m)}$ as in Sec. \[sec:non\_int\_array\] then allows one to calculate its spectrum for all $m$. As follows from Sec. \[sec:non\_int\_array\] upon setting all tunnel couplings except $t_{z_1}$ to zero, the system is in a normal insulating state for $t_{z_1}^{(m)}\neq0$, and has a critical point for $t_{z_1}^{(m)}=0$.
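The statement about the spectrum can be verified directly from the matrix $\mathcal{H}^{(m)}_0(x)$ above; the snippet below (illustrative values of $u$ and $t_{z1}^{(m)}$) shows the doubly degenerate bands $\pm\sqrt{u^2k_x^2+(t_{z1}^{(m)})^2}$, gapped for a finite coupling and linearly dispersing at the critical point $t_{z1}^{(m)}=0$.

```python
import numpy as np

u, t = 1.0, 0.4                     # illustrative velocity and coupling t_z1^(m)
for kx in (0.0, 0.5):
    # momentum-space version of H_0^(m): -i u d/dx -> u k_x on the diagonal
    H = np.array([[ u * kx, 0.0,     t,       0.0    ],
                  [ 0.0,   -u * kx,  0.0,     t      ],
                  [ t,      0.0,    -u * kx,  0.0    ],
                  [ 0.0,    t,       0.0,     u * kx ]])
    print(kx, np.linalg.eigvalsh(H), np.sqrt((u * kx)**2 + t**2))
```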
It is interesting to also determine the charge associated with the operator $e^{i\bar{\Phi}_{rnpq}^{(m)}}$, which can be inferred from the bosonized expression of the total charge density operator,
$$\begin{aligned}
\rho_{\rm tot}&=\sum_{pq}\sum_{n=A,B,C,D}\frac{-e}{2\pi}\partial_x(\Phi_{Rnpq} -\Phi_{Lnpq})\nonumber\\
&=\sum_{pq}\sum_{n=A,B,C,D}\frac{-e}{2\pi(2m+1)}\partial_x(\widetilde{\Phi}_{Rnpq} -\widetilde{\Phi}_{Lnpq})\nonumber\\
&=\sum_{pq}\sum_{n=A,B,C,D}\frac{-e}{2\pi(2m+1)}\partial_x(\bar{\Phi}_{Rnpq} -\bar{\Phi}_{Lnpq})~,\label{eq:charge}\end{aligned}$$
where $e$ denotes the charge of an electron. Commuting $\rho_{\rm tot}$ and $e^{i\bar{\Phi}_{rnpq}^{(m)}}$, I find that these latter exponentials create quasiparticles of charge $e/(2m+1)$. This finding agrees with the observation that $2\pi$-kinks (the smallest possible kinks) in any of the bulk sine-Gordon terms given in Eq. carry charge $e/(2m+1)$,[@charge_remark] and that quasiparticle operators add local excitations above the ground state. Indeed, kinks in the bulk sine-Gordon terms precisely correspond to local excitations above the ground state. Since the analysis performed here for only $t_{z1}^{(m)}$ being finite can be directly transposed to any other coupling, I find that if only one of the competing tunnel couplings is present, the system can be described in terms of interacting fermionic quasiparticles of fractional charge $e/(2m+1)$ that have a critical point, at which they disperse linearly in $x$ direction. In the spirit of a Luther-Emery point, one can now consider fine-tuning the density-density interactions in the system to reach $\bar{H}_{\rm int}=0$. At this point, the non-interacting spectrum of $\bar{H}_{0}^{(m)}$ is the true spectrum of the system, and the strongly interacting electron system has been mapped exactly to free fermions of fractional charge. As for a one-dimensional Mott insulator at half-filling with a Luttinger liquid parameter $K=1/2$, these emerging fermions are complicated objects that are not to be confused with the original electrons in the system.[@giamarchi_book]
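For completeness, a brief sketch of the commutator computation behind this charge assignment (assuming the convention $r=\pm1$ for right and left movers in Eq. (\[eq:supp\_comm\_3\])): differentiating that commutator gives $[\partial_x\bar{\Phi}_{Rnpq}^{(m)}(x),\bar{\Phi}_{Rnpq}^{(m)}(x')]=2\pi i\,\delta(x-x')$, so that

$$\begin{aligned}
[\rho_{\rm tot}(x),e^{i\bar{\Phi}_{Rnpq}^{(m)}(x')}]&=\frac{e}{2m+1}\,\delta(x-x')\,e^{i\bar{\Phi}_{Rnpq}^{(m)}(x')}~,\end{aligned}$$

i.e. acting with $e^{i\bar{\Phi}_{Rnpq}^{(m)}}$ raises the total charge $\int dx\,\rho_{\rm tot}(x)$ by $e/(2m+1)$; the left-moving exponentials behave analogously.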
Competing couplings of comparable strength {#subsec:comp_strength}
------------------------------------------
When more than one of the competing sine-Gordon couplings is present, the transformation in Eq. is not sufficient to rewrite the Hamiltonian in terms of (free) fermions anymore. For the non-interacting case $m=0$, I showed in Sec. \[sec:non\_int\_phases\] that the effect of the additional couplings is to turn the critical point associated with $t_{z1}=0$ considered in the last subsection into an extended critical phase, the Weyl semimetal. Put in simple terms, the additional couplings allow the gapless quasiparticles to move in 3D. For finite $m$, one may suspect that the critical point associated with $t_{z_1}^{(m)}=0$ considered in the last subsection also turns into an extended critical 3D phase when the additional couplings are turned on. Similar to a discussion supporting fractional topological insulators in Ref. , this scenario can be supported by rewriting the Hamiltonian as
$$\begin{aligned}
&H=H_{\rm quad}+H_{\rm cos}~,\\
&H_{\rm cos}=\int dx\sum_{p,q,p'q'}\Psi_{pq}^{(m)}{}^\dagger(x)\,\mathcal{H}^{pp'qq'}(x)\,\Psi_{p'q'}^{(m)}(x)^{{\phantom{\dagger}}}~,\label{eq:supp_ham_bos_full}\\
&\mathcal{H}^{pp'qq'}=\begin{pmatrix}\mathcal{H}_{11}^{pp'qq'}(x)&\mathcal{H}_{12}^{pp'qq'}(x)\\\mathcal{H}_{21}^{pp'qq'}(x)&\mathcal{H}_{22}^{pp'qq'}(x)\end{pmatrix}~,\label{eq:h_cos}\\
&\Psi_{pq}^{(m)} =\frac{1}{\sqrt{2\pi\alpha}} (e^{-i\widetilde{\Phi}_{RApq}^{(m)}},e^{-i\widetilde{\Phi}_{LBpq}^{(m)}},e^{-i\widetilde{\Phi}_{LCpq}^{(m)}},e^{-i\widetilde{\Phi}_{RDpq}^{(m)}})^T~.\label{eq:new_op}\end{aligned}$$
Here, the $(2\times2)$ matrices $\mathcal{H}_{ij}^{pp'qq'}(x)$ contain the prefactors of the sine-Gordon terms. By adjusting the tunnel couplings and interaction strengths, $\mathcal{H}^{pp'qq'}$ can be tuned to be of the same form for all integers $m$, including the non-interacting case $m=0$. Consequently, the same transformations that diagonalize the Hamiltonian matrix $\mathcal{H}^{pp'qq'}$ for $m=0$ also do so for positive integers $m$. These transformations follow from Eq. and its discussion in Sec. \[sec:non\_int\_array\] upon setting $v_Fk_x\to0$, and replacing the non-interacting tunnelings $t_{y,z,1,2}$ by the prefactors of the corresponding sine-Gordon terms. As one important step in this diagonalization, I highlight the Fourier transformation along $y$ and $z$, which yields $H_{\rm cos}=\sum_{k_yk_z}\int dx\,\Psi_{k_yk_z}^{(m)}{}^\dagger(x)\,\mathcal{H}_{k_yk_z}(x)\,\Psi_{k_yk_z}^{(m)}(x)$ with $\Psi_{k_yk_z}^{(m)}(x) = \frac{1}{\sqrt{N_yN_z}}\sum_{pq} e^{-i (k_y a_y p+k_z a_z q)} \Psi_{pq}^{(m)}(x)$, where $N_{y(z)}$ is the number of unit cells in $y$ ($z$) direction.
Within the region of the phase diagram hosting the Weyl semimetal in the non-interacting case $m=0$, the interacting array also has excitations $\Psi_{k_yk_z}^{(m)}(x)$ associated with momentum $k_y=0$ and $k_z=\pi/a_z\pm k_{z0}$ that do not appear in the Hamiltonian $H_{\rm cos}$, but only in $H_{\rm quad}$ (which accounts for the possibly interaction-renormalized kinetic energy along the $x$ direction). For $m=0$, these are precisely the gapless electronic quasiparticles at the Weyl nodes. Outside the parameter regime hosting the Weyl semimetal in the non-interacting case, the Hamiltonian matrix $\mathcal{H}_{k_yk_z}(x)$ does not vanish for any $(k_{y},k_{z})$. As a consequence, all excitations create kinks of non-trivial energy in the sine-Gordon Hamiltonian $H_{\rm cos}$, and are thus gapped. This suggests that the topology of the phase diagram is the same in the interacting and non-interacting cases.
As discussed in the last subsection, the fermionic character of the operators $\Psi_{pq}^{(0)}$ for $m=0$ can most easily be derived by refermionizing its components $e^{-i\widetilde{\Phi}_{rnpq}^{(0)}}$. For $m>0$, however, the fields $\widetilde{\Phi}_{rnpq}^{(m)}$ do not obey the same commutator as the non-interacting fields $\widetilde{\Phi}_{rnpq}^{(0)}$. This implies that the refermionization procedure does not carry over to the interacting case, and that the simple operators $e^{-i\widetilde{\Phi}_{rnpq}^{(m)}}$ do, for general $m$, not satisfy the canonical fermionic commutation relations. This is in agreement with the observation that the operators
$$\begin{aligned}
e^{i\widetilde{\Phi}_{rnpq}^{(m)}}& =\left(e^{i\widetilde{\Phi}_{rnpq}^{(0)}}\right)^{m+1}\,\left(e^{-i\widetilde{\Phi}_{(-r)npq}^{(0)}}\right)^{m}\label{eq:decompose}\end{aligned}$$
correspond to products of $2m+1$ fermionic operators, rather than a single canonical fermion. The operators $\Psi_{k_yk_z}^{(m)}(x)$ with $m>0$ thus do not correspond to individual 3D quasiparticle excitations of the array of wires. Further extensions of the coupled wire construction are needed to determine if also the 3D quasiparticles at finite $m$ have linearly dispersing band touchings similar to Weyl nodes. Similarly, the properties of the interacting analog of the Weyl semimetal phase, such as its response to electro-magnetic fields, cannot be resolved by the present analysis based on a language of competing sine-Gordon terms.
Despite the fact that the present bosonized calculation does not allow one to access the true quasiparticles in the generalized Weyl semimetal regime for finite $m$, I judge the striking analogy between the interacting and non-interacting cases, and (for all $m$) the existence of operators that do not appear in $H_{\rm cos}$ in the regime hosting the Weyl semimetal for $m=0$, and only in this regime, to strongly support the existence of an extended critical phase separating the bulk gapped fractional phases emerging from the critical points discussed in Sec. \[sec:special\_points\], similar to the Weyl semimetal emerging from the analogous points for $m=0$. While one option for a possible critical phase would be a standard Weyl semimetal, I note that transitions between the bulk gapped phases correspond to a rearrangement of only the edge states of the individual fractional quantum Hall building blocks. Their bulk gaps in the X and Y wires, however, never close. I thus speculate that the quantum phase transitions of the interacting array, and the possible gapless critical phase, inherit some of the exotic properties of fractional quantum Hall edge modes. A critical phase could for instance be composed of fractionally charged fermionic 3D quasiparticles (somewhat related ideas are discussed in Refs. ). This scenario is supported by the quasiparticles at the Luther-Emery-type points considered in Sec. \[sec:special\_points\].
Summary and conclusions {#sec:summary}
=======================
In this work, the coupled-wire approach has been adapted for the analysis of interacting topological phases in 3D. More precisely, I analyzed the 3D system resulting from connecting narrow 2D integer and fractional quantum Hall building blocks along different directions. The coupled-wire language is thereby used for the microscopic description of the individual building blocks, see Sec. \[sec:blocks\], and the inter-block couplings. For the non-interacting array of wires detailed in Sec. \[sec:non\_int\_array\], I found that the phase diagram includes a normal insulating, a quantum anomalous Hall, a Weyl semimetal, and a single-surface quantum anomalous Hall phase, see Sec. \[sec:non\_int\_phases\]. The surface on which this latter phase appears can be controlled by the ratio of two tunnel couplings. The coupled-wire language offers simple real-space visualizations of the bulk gapped phases in terms of closed loops and open planes of 2D quantum Hall subsystems. With electron-electron interactions, I found that the array can be in analogous bulk gapped phases composed of closed loops and open planes of fractional quantum Hall states, see Sec. \[sec:interactions\_ll\] and Sec. \[sec:interact\_phases\]. Some of these phases have topologically protected fractional edge states. In analogy to the non-interacting Weyl semimetal, the bulk gapped phases are separated by a regime in which certain excitation operators do not appear in the sine-Gordon part of the Hamiltonian. This supports the existence of an exotic critical phase similar to a Weyl phase. The phase transitions of the array are of exotic nature, and correspond to a rearrangement of fractional quantum Hall edge modes. If only a single sine-Gordon term is present, the exotic nature of the phases and phase transitions can be revealed by an exact mapping, which identifies the quasiparticles at special Luther-Emery points as fermions of fractional charge $e/(2m+1)$. The properties of the critical regime outside these special points will be addressed in future work. Other interesting directions include the use of chiral spin liquids, topological superconductors, and states with a more complex edge structure (such as Moore-Read states) as 2D building blocks, as well as the coupled-wire analysis of non-Abelian 3D phases with string-like excitations. Experimentally, the proposed array could be realized with cold atoms, in which the necessary ingredients (fermions subject to spin-orbit coupling and magnetic field,[@soi_coldatoms_1; @soi_coldatoms_2] complex lattices,[@cold_atoms_lattices] and superlattices hosting for instance even resonating valence-bond states[@cold_atoms_unit_cell]) have been demonstrated.
*Note added.* Recently, a related preprint discussing coupled-wire constructions of 3D integer and fractional topological insulators appeared.[@sagi_preprint]
This work has been supported by the Helmholtz association through VI-521, and the DFG through SFB 1143. The author acknowledges stimulating discussions with E. Sela, S. Rachel, M. Vojta, A. Grushin, J. Bardarson and K. Shtengel.
[99]{}
D. C. Tsui, H. L. Stormer, and A. C. Gossard, Phys. Rev. Lett. **48**, 1559 (1982).
R. B. Laughlin, Phys. Rev. Lett. **50**, 1395 (1983).
D. Pesin and L. Balents, Nat. Phys. **6**, 376 (2010).
W. Witczak-Krempa, T. P. Choy, and Y. B. Kim, Phys. Rev. B **82**, 165122 (2010).
M. S. Scheurer, S. Rachel, and P. P. Orth, Sci. Rep. **5**, 8386 (2015).
J. Maciejko, X.-L. Qi, A. Karch, and S.-C. Zhang, Phys. Rev. Lett. **105**, 246809 (2010).
B. Swingle, M. Barkeshli, J. McGreevy, and T. Senthil, Phys. Rev. B **83**, 195139 (2011).
J. Maciejko, X.-L. Qi, A. Karch, and S.-C. Zhang, Phys. Rev. B **86**, 235128 (2012).
S. Bhattacharjee, Y. B. Kim, S.-S. Lee, and D.-H. Lee, Phys. Rev. B **85**, 224428 (2012).
C. Castelnovo and C. Chamon, Phys. Rev. B **78**, 155120 (2008).
M. Levin, F. J. Burnell, M. Koch-Janusz, and A. Stern, Phys. Rev. B **84**, 235145 (2011).
A. Kitaev, Annals of Physics **321**, 2 (2006).
T. Si and Y. Yu, Nuclear Physics B **803**, 428 (2008).
S. Mandal and N. Surendran, Phys. Rev. B **79**, 024426 (2009).
S. Ryu, Phys. Rev. B **79**, 075124 (2009).
C. Wu, D. Arovas, and H.-H. Hung, Phys. Rev. B **79**, 134427 (2009).
M. Hermanns and S. Trebst, Phys. Rev. B **89**, 235102 (2014).
M. Hermanns, K. O’Brien, and S. Trebst, Phys. Rev. Lett. **114**, 157202 (2015).
C. Castelnovo, R. Moessner, and S. Sondhi, Nature **451**, 42 (2007).
D. J. P. Morris *et al.*, Science **326**, 411 (2009).
T. Fennell, P. P. Deen, A. R. Wildes, K. Schmalzl, D. Prabhakaran, A. T. Boothroyd, R. J. Aldus, D. F. McMorrow, and S. T. Bramwell, Science **326**, 415 (2009).
S. T. Bramwell, S. R. Giblin, S. Calder, R. Aldus, D. Prabhakaran, and T. Fennell, Nature **461**, 956 (2009).
L. Balents and M. P. A. Fisher, Phys. Rev. Lett. **76**, 2782 (1996).
B. I. Halperin, Helv. Phys. Acta **56**, 75 (1983).
X. Qiu, R. Joynt, and A. H. MacDonald, Phys. Rev. B **40**, 11943 (1989).
J. D. Naud, L. P. Pryadko, and S. L. Sondhi, Phys. Rev. Lett. **85**, 5408 (2000).
M. Levin and M. P. A. Fisher, Phys. Rev. B **79**, 235315 (2009).
C. Wang and T. Senthil, Phys. Rev. B **87**, 235122 (2013).
C.-M. Jian and X.-L. Qi, Phys. Rev. X **4**, 041043 (2014).
C. Wang and M. Levin, Phys. Rev. Lett. **113**, 080403 (2014).
J. C. Wang and X.-G. Wen, Phys. Rev. B **91**, 035134 (2015).
S. Jiang, A. Mesaros, and Y. Ran, Phys. Rev. X **4**, 031048 (2014).
H. Moradi and X.-G. Wen, Phys. Rev. B **91**, 075114 (2015).
S. L. Sondhi and K. Yang, Phys. Rev. B **63**, 054430 (2001).
C. L. Kane, R. Mukhopadhyay, and T. C. Lubensky, Phys. Rev. Lett. **88**, 036401 (2002).
J. C. Y. Teo and C. L. Kane, Phys. Rev. B **89**, 085101 (2014).
Y.-M. Lu and A. Vishwanath, Phys. Rev. B **86**, 125119 (2012).

A. Vaezi and M. Barkeshli, Phys. Rev. Lett. **113**, 236804 (2014).

I. Seroussi, E. Berg, and Y. Oreg, Phys. Rev. B **89**, 104523 (2014).

J. Klinovaja and D. Loss, Eur. Phys. J. B **87**, 171 (2014).

T. Meng, P. Stano, J. Klinovaja, and D. Loss, Eur. Phys. J. B **87**, 203 (2014).

E. Sagi and Y. Oreg, Phys. Rev. B **90**, 201102 (2014).
J. Klinovaja and Y. Tserkovnyak, Phys. Rev. B **90**, 115426 (2014).
R. S. K. Mong, D. J. Clarke, J. Alicea, N. H. Lindner, P. Fendley, C. Nayak, Y. Oreg, A. Stern, E. Berg, K. Shtengel, and M. P. A. Fisher, Phys. Rev. X **4**, 011036 (2014).

A. Vaezi, Phys. Rev. X **4**, 031009 (2014).
J. Klinovaja, Y. Tserkovnyak, and D. Loss, Phys. Rev. B **91**, 085426 (2015).
R. A. Santos, C.-W. Huang, Y. Gefen, and D. B. Gutman, Phys. Rev. B **91**, 205141 (2015).
T. Meng, T. Neupert, M. Greiter, and R. Thomale, Phys. Rev. B **91**, 241106(R) (2015).
G. Gorohovsky, R. G. Pereira, and E. Sela, Phys. Rev. B **91**, 245139 (2015).
T. Neupert, C. Chamon, C. Mudry, and R. Thomale, Phys. Rev. B **90**, 205101 (2014).

T. Meng and E. Sela, Phys. Rev. B **90**, 235425 (2014).
A. A. Burkov and L. Balents, Phys. Rev. Lett. **107**, 127205 (2011).
T. Giamarchi, [*Quantum Physics in One Dimension*]{} (Oxford University Press, 2003).
To determine the charge associated with a $2\pi$-kink in one of the bulk sine-Gordon terms, one can simply integrate the charge density given in Eq. over the kink, thus obtaining the quasiparticle charge.
T. Meng and L. Balents, Phys. Rev. B **86**, 054504 (2012).
G. E. Volovik, *The Universe in a Helium Droplet* (Clarendon Press, Oxford, 2003).
J. von Delft and H. Schoeller, Annalen Phys. **7**, 225 (1998).
E. Sagi and Y. Oreg, arXiv:1506.02033.
Z. Wang, Physica B **475**, 80 (2015).
P. Wang, Z.-Q. Yu, Z. Fu, J. Miao, L. Huang, S. Chai, H. Zhai, and J. Zhang, Phys. Rev. Lett. **109**, 095301 (2012).
L. W. Cheuk, A. T. Sommer, Z. Hadzibabic, T. Yefsah, W. S. Bakr, and M. W. Zwierlein, Phys. Rev. Lett. **109**, 095302 (2012).
P. Windpassinger and K. Sengstock, Rep. Prog. Phys. **76**, 086401 (2013).
S. Nascimbène, Y.-A. Chen, M. Atala, M. Aidelsburger, S. Trotzky, B. Paredes, and I. Bloch, Phys. Rev. Lett. **108**, 205301 (2012).
---
abstract: 'In this paper, a multiple antenna wire-tap channel in the presence of a multi-antenna cooperative jammer is studied. In particular, the secure degrees of freedom (s.d.o.f.) of this channel is established, with $N_t$ antennas at the transmitter, $N_r$ antennas at the legitimate receiver, and $N_e$ antennas at the eavesdropper, for all possible values of the number of antennas, $N_c$, at the cooperative jammer. In establishing the result, several different ranges of $N_c$ need to be considered separately. The lower and upper bounds for these ranges of $N_c$ are derived, and are shown to be tight. The achievability techniques developed rely on a variety of signaling, beamforming, and alignment techniques which vary according to the (relative) number of antennas at each terminal and whether the s.d.o.f. is integer valued. Specifically, it is shown that, whenever the s.d.o.f. is integer valued, Gaussian signaling for both transmission and cooperative jamming, linear precoding at the transmitter and the cooperative jammer, and linear processing at the legitimate receiver, are sufficient for achieving the s.d.o.f. of the channel. By contrast, when the s.d.o.f. is not an integer, the achievable schemes need to rely on structured signaling at the transmitter and the cooperative jammer, and joint signal space and signal scale alignment. The converse is established by combining an upper bound which allows for full cooperation between the transmitter and the cooperative jammer, with another upper bound which exploits the secrecy and reliability constraints.'
author:
- Mohamed Nafea
- Aylin Yener
bibliography:
- 'MyLib.bib'
title: 'Secure Degrees of Freedom for the MIMO Wire-tap Channel with a Multi-antenna Cooperative Jammer [^1]'
---
Introduction {#Int}
============
Information theoretically secure message transmission in noisy communication channels was first considered in the seminal work by Wyner [@WTCWyner]. Reference [@CK] subsequently identified the secrecy capacity of a general discrete memoryless wire-tap channel. Reference [@leung1978gaussian] studied the Gaussian wire-tap channel and its secrecy capacity. More recently, an extensive body of work was devoted to studying a variety of network information theoretic models under secrecy constraint(s), see for example [@liu2008discrete; @liu2009secrecy; @ekrem2011secrecy2; @khandani2013secrecy; @tekin2005secure; @tekin2008general; @liang2008multiple; @bloch2013strong; @gopala2008secrecy; @khisti2008secure; @lai2008relay; @oohama2007capacity; @he2010cooperation; @ekrem2011secrecy1; @liang2009compound; @khisti2011interference; @he2009k; @he2014providing]. The secrecy capacity region for most multi-terminal models remains open despite significant progress on bounds and associated insights. Recent work thus includes efforts that concentrate on characterizing the more tractable high signal-to-noise ratio (SNR) scaling behavior of the secrecy capacity region for Gaussian multi-terminal models [@khisti2011interference; @he2009k; @he2014providing; @xie2012secure; @xie2013secure; @xie2014secure].
Among the multi-transmitter models studied, a recurrent theme in achievability is enlisting one or more terminals to transmit intentional interference with the specific goal of diminishing the reception capability of the eavesdropper, known as [*[cooperative jamming]{}*]{} [@Tekin2006]. For the Gaussian wire-tap channel, adding a cooperative jammer terminal transmitting Gaussian noise can improve the secrecy rate considerably [@tekin2008general], albeit not the scaling of the secrecy capacity with power at high SNR. Recently, reference [@he2014providing] has shown that, for the Gaussian wire-tap channel, adding a cooperative jammer and utilizing structured codes for message transmission and cooperative jamming provide an achievable secrecy rate scalable with power, i.e., a positive secure degrees of freedom (s.d.o.f.), an improvement from the zero degrees of freedom of the Gaussian wire-tap channel. More recently, reference [@xie2012secure] has proved that, for this channel, the s.d.o.f. of $\frac{1}{2}$, achievable by codebooks constructed from integer lattices along with real interference alignment, is tight. References [@xie2013secure; @xie2014secure] have subsequently identified the s.d.o.f. region for multi-terminal Gaussian wire-tap channel models.
While the above development is for single-antenna terminals, multiple antennas have also been utilized to improve secrecy rates and s.d.o.f. for several channel models, see for example [@khisti2010secure; @oggier2011secrecy; @shafiee2009towards; @liu2009secrecy; @ekrem2011secrecy2; @khandani2013secrecy; @khisti2011interference; @liu2009note; @he2014mimo; @he2013mimo]. The secrecy capacity of the multi-antenna (MIMO) wire-tap channel, identified in [@khisti2010secure], scales with power only when the legitimate transmitter has an advantage over the eavesdropper in the number of antennas. It then follows naturally to utilize a cooperative jamming terminal to improve the secrecy rate and scaling for multi-antenna wire-tap channels as well, which is the focus of this work.
In this paper, we study the multi-antenna wire-tap channel with a multi-antenna cooperative jammer. We characterize the high SNR scaling of the secrecy capacity, i.e., the s.d.o.f., of the channel with $N_c$ antennas at the cooperative jammer, $N_t$ antennas at the transmitter, $N_r$ antennas at the receiver, and $N_e$ antennas at the eavesdropper. The achievability and converse techniques are both developed methodically for ranges of the parameters, i.e., the number of antennas at each terminal. The upper and lower bounds for all parameter values are shown to match one another. The s.d.o.f. results in this paper match the achievability results derived in [@meGlobalSIP; @meAllerton], which are special cases for $\{N_t=N_r=1,N_c=N_e\}$, $\{N_t=N_r=N_e=N, N_c=2N\}$, $\{N_t=N_r=N_e=N, N_c=2N-1\}$, and real channel gains. The s.d.o.f. for the cases $\{N_t=N_r=N_e\}$ and $\{N_t=N_r\}$, for all possible values of $N_c$, were reported in [@meITW2014] and [@meICC2015], respectively.
The proposed achievable schemes for different ranges of the values for $N_c$, $N_t$, $N_r$, and $N_e$ all involve linear precoding and linear receiver processing. The common goal of all these schemes is to perfectly align the cooperative jamming signals over the information signals observed at the eavesdropper while simultaneously enabling information and cooperative jamming signal separation at the legitimate receiver. We show that whenever the s.d.o.f. of the channel is integer valued, Gaussian signaling both at the transmitter and the cooperative jammer suffices to achieve the s.d.o.f. By contrast, non-integer s.d.o.f. requires structured signaling along with joint signal space and signal scale alignment in the complex plane [@maddah2010degrees; @kleinbock2002baker]. The necessity of structured signaling follows from the fact that fractional s.d.o.f. indicates sharing at least one spatial dimension between information and cooperative jamming signals in the receiver’s signal space. In this case, sharing the same spatial dimension between Gaussian information and jamming signals, which have similar power scaling, does not provide positive degrees of freedom, and we need structured signals that can be separated over this single dimension at high SNR. The tools that enable the signal scale alignment are available in the field of transcendental number theory [@kleinbock2002baker; @Sprindzuk1; @Sprindzuk2], which we utilize.
The paper is organized as follows. Section \[ChannelModel\] introduces the channel model, and Section \[MainResult\] provides the main results. For clarity of exposition, we first present the converse and achievability for the MIMO wire-tap channel with $N_t=N_r=N$ in Sections \[Conv\_Proof\] and \[AchSchemes\]. Section \[Thm2\_Proof\] then extends the converse and achievability proofs for the case $N_t\neq N_r$. Section \[Discussion\] discusses the results of this work and Section \[Con\] concludes the paper.
Overall, this study determines the value in jointly utilizing signal scale and spatial interference alignment techniques for secrecy and quantifies the impact of a multi-antenna helper for the MIMO wire-tap channel by settling the question of the secrecy prelog for the $(N_t\times N_r\times N_e)$ MIMO wire-tap channel in the presence of an $N_c$-antenna cooperative jammer, for all possible values of $N_c$. In contrast with the single antenna case, where integer lattice codes and real interference alignment suffice to achieve the s.d.o.f. of the channel, in the MIMO setting, one needs to utilize a variety of signaling, beam-forming, and alignment techniques, in order to coordinate the transmitted and received signals for different values of $N_t,N_r,N_e$, and $N_c$.
Channel Model and Definitions
=============================
[\[ChannelModel\]]{} First, we remark the notation we use throughout the paper: Small letters denote scalars and capital letters denote random variables. Vectors are denoted by bold small letters, while matrices and random vectors are denoted by bold capital letters[^2]. Sets are denoted using calligraphic fonts. All logarithms are taken to be base $2$. The set of integers $\left\{-Q,\cdots,Q\right\}$ is denoted by $(-Q,Q)_{\mathbb{Z}}$. $\bold{0}_{m\times n}$ denotes an $m\times n$ matrix of zeros, and $\bold{I}_{n}$ denotes an $n\times n$ identity matrix. For matrix ${\bold{A}}$, $\mathcal{N}({\bold{A}})$ denotes its null space, ${\rm{det}}({\bold{A}})$ denotes its determinant, and $||{\bold{A}}||$ denotes its [*[induced]{}*]{} norm. For vector $\bold{V}$, $||\bold{V}||$ denotes its Euclidean norm, and $\bold{V}_i^j$ denotes the $i$th to $j$th components in $\bold{V}$. We use $\bold{V}^n$ to denote the $n$-letter extension of the random vector $\bold{V}$, i.e., $\bold{V}^{n}=\left[\bold{V}(1)\;\cdots\bold{V}(n)\right]$. The operators $^T$, $^H$, and $^\dagger$ denote the transpose, Hermitian, and pseudo inverse operations. We use $\mathbb{R},\mathbb{C}$, $\mathbb{Q}$, and $\mathbb{Z}$, to denote the sets of real, complex, rational, and integer numbers, respectively. $\mathbb{Z}[j]$ denotes the set of Gaussian (complex) integers. A circularly symmetric Gaussian random vector with zero mean and covariance matrix $\bold{K}$ is denoted by $\mathcal{CN}({\bold{0}},\bold{K})$.
As the channel model, we consider the MIMO wire-tap channel with an $N_t$-antenna transmitter, $N_r$-antenna receiver, $N_e$-antenna eavesdropper, and an $N_c$-antenna cooperative jammer as depicted in Fig. \[fig:sysmodel\].
![$(N_t\times N_r\times N_e)$ multiple antenna wire-tap channel with an $N_c$-antenna cooperative jammer.[]{data-label="fig:sysmodel"}](Fig1.eps){width="13.5cm" height="7.5cm"}
The received signals at the receiver and eavesdropper, at the $n$th channel use, are given by $$\begin{aligned}
\label{eq:Yr1}
{\bold{Y}_r}(n)&={\bold{H}_t}{\bold{X}_t}(n)+{\bold{H}_c}{\bold{X}_c}(n)+{\bold{Z}_r}(n)\\
\label{eq:Ye1}
{\bold{Y}_e}(n)&={\bold{G}_t}{\bold{X}_t}(n)+{\bold{G}_c}{\bold{X}_c}(n)+{\bold{Z}_e}(n),\end{aligned}$$ where ${\bold{X}_t}(n)$ and ${\bold{X}_c}(n)$ are the transmitted signals from the transmitter and the cooperative jammer at the $n$th channel use. ${\bold{H}_t}\in\mathbb{C}^{N_r\times N_t}$, ${\bold{H}_c}\in \mathbb{C}^{N_r\times N_c}$ are the channel gain matrices from the transmitter and the cooperative jammer to the receiver, while ${\bold{G}_t}\in\mathbb{C}^{N_e\times N_t}$, ${\bold{G}_c}\in\mathbb{C}^{N_e\times N_c}$ are the channel gain matrices from the transmitter and the cooperative jammer to the eavesdropper. It is assumed that the channel gains are static, independently drawn from a [*[complex-valued]{}*]{} continuous distribution, and known at all terminals. ${\bold{Z}_r}(n)$ and ${\bold{Z}_e}(n)$ are the complex Gaussian noise at the receiver and eavesdropper at the $n$th channel use, where ${\bold{Z}_r}(n)\sim\mathcal{CN}(\bold{0},\bold{I}_{N_r})$ and ${\bold{Z}_e}(n)\sim\mathcal{CN}(\bold{0},\bold{I}_{N_e})$ for all $n$. ${\bold{Z}_r}(n)$ is independent from ${\bold{Z}_e}(n)$ and both are independent and identically distributed (i.i.d.) across the time index[^3] $n$. The power constraints on the transmitted signals at the transmitter and the cooperative jammer are ${{\rm{E}}}\left\{{\bold{X}_t^H}{\bold{X}_t}\right\},{{\rm{E}}}\left\{{\bold{X}_c^H}{\bold{X}_c}\right\}\leq P$.
The transmitter aims to send a message $W$ to the receiver, and keep it secret from the external eavesdropper. A stochastic encoder, which maps the message $W$ to the transmitted signal ${\bold{X}_t^n}\in{\mathcal{X}_t^n}$, is used at the transmitter. The receiver uses its observation, ${\bold{Y}_r^n}\in\mathcal{Y}_r^n$, to obtain an estimate $\hat{W}$ of the transmitted message. Secrecy rate $R_s$ is achievable if for any $\epsilon>0$, there is a channel code $(2^{nR_s},n)$ satisfying[^4] $$\begin{aligned}
\label{eq:reliab_const}
&P_e={\rm{Pr}}\left\{\hat{W}\neq W\right\}\leq \epsilon,\\
\label{eq:sec_const}
&\frac{1}{n}H(W|{\bold{Y}_e^n})\geq \frac{1}{n}H(W)-\epsilon.\end{aligned}$$ The secrecy capacity of a channel, $C_s$, is defined as the closure of all its achievable secrecy rates. For a channel with complex-valued coefficients, the achievable secure degrees of freedom (s.d.o.f.), for a given secrecy rate $R_s$, is defined as $$\begin{aligned}
\label{eq:sdof}
D_s=\underset{P\rightarrow\infty}\lim \frac{R_s}{\log P}.\end{aligned}$$
The cooperative jammer transmits the signal ${\bold{X}_c^n}\in\mathcal{X}_c^n$ in order to reduce the reception capability of the eavesdropper. However, this transmission affects the receiver as well, as interference. The jamming signal, ${\bold{X}_c^n}$, does not carry any information. Additionally, there is no shared secret between the transmitter and the cooperative jammer.
Main Result {#MainResult}
===========
We first state the s.d.o.f. results for $N_t=N_r=N$.
\[Thm1\] The s.d.o.f. of the MIMO wire-tap channel with an $N_c$-antenna cooperative jammer, $N$ antennas at each of the transmitter and receiver, and $N_e$ antennas at the eavesdropper is given by $$\begin{aligned}
\label{eq:thm1}
D_s=\begin{cases}
[N+N_c-N_e]^+,\qquad \text{for}\;\; 0\leq N_c\leq N_e-\frac{\min\{N,N_e\}}{2}\\
N-\frac{\min\{N,N_e\}}{2},\qquad \text{for}\;\; N_e-\frac{\min\{N,N_e\}}{2}< N_c \leq \max\{N,N_e\}\\
\frac{N+N_c-N_e}{2},\qquad \text{for}\;\; \max\{N,N_e\} < N_c\leq N+N_e.
\end{cases}\end{aligned}$$
The proof for Theorem \[Thm1\] is provided in Sections \[Conv\_Proof\] and \[AchSchemes\].
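To make the three regimes in (\[eq:thm1\]) concrete, it is instructive to evaluate the expression at one parameter choice (the numbers below are an illustrative example only). For $N=4$ and $N_e=2$, so that $\min\{N,N_e\}=2$, $$\begin{aligned}
D_s=\begin{cases}
2+N_c,\qquad \text{for}\;\; 0\leq N_c\leq 1\\
3,\qquad\qquad\; \text{for}\;\; 1< N_c \leq 4\\
\frac{N_c+2}{2},\qquad \text{for}\;\; 4 < N_c\leq 6,
\end{cases}\end{aligned}$$ i.e., the s.d.o.f. grows with unit slope up to $N_c=\frac{N_e}{2}=1$, stays flat at $N-\frac{N_e}{2}=3$ up to $N_c=N=4$, and then grows with slope $\frac{1}{2}$ until it reaches $N=4$ at $N_c=N+N_e=6$.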
Next, in Theorem \[Thm2\] below, we generalize the result in Theorem \[Thm1\] to $N_t\neq N_r$.
\[Thm2\] The s.d.o.f. of the MIMO wire-tap channel with an $N_c$-antenna cooperative jammer, $N_t$-antenna transmitter, $N_r$-antenna receiver, and $N_e$-antenna eavesdropper is given by $$\begin{aligned}
\label{eq:thm2}
D_s=\begin{cases}
\min\left\{N_r,[N_c+N_t-N_e]^+\right\},\qquad \text{for}\;\; 0\leq N_c\leq N_1\\
\min\left\{N_t,N_r,\frac{N_r+\left[N_t-N_e\right]^+}{2}\right\},\qquad \text{for}\;\; N_1< N_c \leq N_2\\
\min\left\{N_t,N_r,\frac{N_c+N_t-N_e}{2}\right\},\qquad \text{for}\;\; N_2 < N_c\leq N_3,
\end{cases}\end{aligned}$$ where, $$\begin{aligned}
&N_1=\min\left\{N_e,\left[\frac{N_r}{2}+\frac{N_e-N_t}{2-1_{N_e>N_t}}\right]^+\right\},\quad 1_{N_e>N_t}=\begin{cases}
1,\qquad\text{if}\;\; N_e>N_t\\
0,\qquad\text{if}\;\;N_e\leq N_t
\end{cases}\\
&N_2=N_r+\left[N_e-N_t\right]^+,\quad N_3=\max\left\{N_2,2\min\left\{N_t,N_r\right\}+N_e-N_t\right\}.\end{aligned}$$
The proof for Theorem \[Thm2\] is provided in Section \[Thm2\_Proof\].
[**[Remark 1]{}**]{} Theorem \[Thm2\] provides a complete characterization for the s.d.o.f. of the channel. The s.d.o.f. at $N_c=N_3$ is equal to $\min\{N_t,N_r\}$, which is equal to the d.o.f. of the $(N_t\times N_r)$ point-to-point MIMO Gaussian channel. Thus, increasing the number of antennas at the cooperative jammer, $N_c$, over $N_3$ cannot increase the s.d.o.f. over $\min\{N_t,N_r\}$.
[**[Remark 2]{}**]{} For $N_t\geq N_r+N_e$, the s.d.o.f. of the channel is equal to $N_r$ at $N_c=0$, i.e., the maximum s.d.o.f. of the channel is achieved without the help of the cooperative jammer.
[**[Remark 3]{}**]{} The converse proof for Theorem \[Thm2\] involves combining two upper bounds for the s.d.o.f. derived for two different ranges of $N_c$. These two bounds are a straightforward generalization of those derived for the symmetric case in Theorem \[Thm1\]. However, combining them is more tedious since more cases of the number of antennas at the different terminals must be handled carefully. Achievability for Theorem \[Thm2\] utilizes similar techniques to those used for Theorem \[Thm1\] as well, where handling more cases is required. For clarity of exposition, we derive the s.d.o.f. for the symmetric case first in order to present the main ideas, and then utilize these ideas and generalize the result to the asymmetric case of Theorem \[Thm2\].
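As a numerical illustration of Theorem \[Thm2\] (the parameter values are chosen only for concreteness), consider $N_t=2$, $N_r=3$, and $N_e=4$. Then $1_{N_e>N_t}=1$, so $N_1=\min\{4,\frac{3}{2}+2\}=\frac{7}{2}$, $N_2=3+2=5$, and $N_3=\max\{5,2\cdot 2+4-2\}=6$, and (\[eq:thm2\]) evaluates to $$\begin{aligned}
D_s=\begin{cases}
\min\left\{3,[N_c-2]^+\right\},\qquad \text{for}\;\; 0\leq N_c\leq \frac{7}{2}\\
\frac{3}{2},\qquad\qquad\qquad\qquad\;\; \text{for}\;\; \frac{7}{2}< N_c \leq 5\\
\frac{N_c-2}{2},\qquad\qquad\qquad\; \text{for}\;\; 5 < N_c\leq 6,
\end{cases}\end{aligned}$$ which reaches $\min\{N_t,N_r\}=2$ at $N_c=N_3=6$, in agreement with Remark 1.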
For illustration purposes, the s.d.o.f. for $N_t=N_r=N_e=N$, with $N_c$ varying from $0$ to $2N$, is depicted in Fig. \[fig:sdof\]. We provide the discussion of the results of this work in Section \[Discussion\].
![Secure degrees of freedom for a MIMO wire-tap channel, with $N$ antennas at each of its nodes, and a cooperative jammer with $N_c$ antennas, where $N_c$ varies from $0$ to $2N$.[]{data-label="fig:sdof"}](Fig2.eps){width="13cm" height="7cm"}
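For readers who wish to reproduce Fig. \[fig:sdof\] or tabulate (\[eq:thm2\]) for other antenna configurations, a minimal Python transcription of the piecewise expression is sketched below. The function name and the saturation at $\min\{N_t,N_r\}$ beyond $N_3$ (taken from Remark 1) are choices of presentation, not part of the analysis.

```python
def sdof(Nt, Nr, Ne, Nc):
    """Evaluate the s.d.o.f. expression of Theorem 2 for the given antenna numbers."""
    ind = 1 if Ne > Nt else 0                      # the indicator 1_{Ne > Nt}
    N1 = min(Ne, max(0.0, Nr / 2 + (Ne - Nt) / (2 - ind)))
    N2 = Nr + max(0, Ne - Nt)
    N3 = max(N2, 2 * min(Nt, Nr) + Ne - Nt)
    if Nc <= N1:
        return min(Nr, max(0, Nc + Nt - Ne))
    if Nc <= N2:
        return min(Nt, Nr, (Nr + max(0, Nt - Ne)) / 2)
    if Nc <= N3:
        return min(Nt, Nr, (Nc + Nt - Ne) / 2)
    return min(Nt, Nr)                             # saturation for Nc > N3 (Remark 1)

# Example: N_t = N_r = N_e = N = 2 reproduces the staircase of Fig. 2.
print([sdof(2, 2, 2, Nc) for Nc in range(0, 5)])   # [0, 1, 1.0, 1.5, 2.0]
```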
Converse for $N_t=N_r=N$
========================
[\[Conv\_Proof\]]{} In Section \[Conv\_Proof\_1\], we derive the upper bound for the s.d.o.f. for $0\leq N_c\leq N_e$. In Section \[Conv\_Proof\_2\], we derive the upper bound for $\max\{N,N_e\}\leq N_c\leq N+N_e$. The two bounds are combined in Section \[Conv\_Proof\_3\] to provide the desired upper bound in (\[eq:thm1\]).
$0 \leq N_c\leq N_e$
--------------------
[\[Conv\_Proof\_1\]]{} Allow for full cooperation between the transmitter and the cooperative jammer. This cooperation cannot decrease the s.d.o.f. of the channel, and yields a MIMO wire-tap channel with an $(N+N_c)$-antenna transmitter, an $N$-antenna receiver, and an $N_e$-antenna eavesdropper. It has been shown in [@khisti2010secure] that, at high SNR, i.e., $P\rightarrow \infty$, the secrecy capacity of this channel, $C_s$, takes the asymptotic form $$\begin{aligned}
\label{eq:conv_1_1}
C_s(P)=\log {\rm{det}}\left({\bold{I}}_{N}+\frac{P}{p}{{\bold{H}}}{{\bold{G}}^{\sharp}}{{\bold{H}}^H}\right)+ o(\log P),\end{aligned}$$ where $\underset{P\rightarrow \infty}\lim\frac{o(\log P)}{\log P}=0$, ${{\bold{H}}}\in\mathbb{C}^{N\times (N+N_c)}$ and ${{\bold{G}}}\in\mathbb{C}^{N_e\times (N+N_c)}$ are the channel gains from the combined transmitter to the receiver and eavesdropper, and ${{\bold{G}}^{\sharp}}$ is the projection matrix onto the null space of ${{\bold{G}}}$, $\mathcal{N}({{\bold{G}}})$. $p={\rm{dim}}\left\{\mathcal{N}({{\bold{H}}})^{\perp}\cap\mathcal{N}({{\bold{G}}})\right\}$, where $\mathcal{N}({{\bold{H}}})^{\perp}$ is the space orthogonal to the null space of ${{\bold{H}}}$. Due to the randomly generated channel gains, if a vector $\bold{x}\in\mathcal{N}({{\bold{G}}})$, then $\bold{x}\in\mathcal{N}({{\bold{H}}})^{\perp}$ almost surely (a.s.), for all $0\leq N_c\leq N_e$. Thus, $p={\rm{dim}}(\mathcal{N}({{\bold{G}}}))=[N+N_c-N_e]^+$.
${{\bold{H}}}{{\bold{G}}^{\sharp}}{{\bold{H}}^H}$ can be decomposed as $$\begin{aligned}
\label{eq:conv_1_3}
{{\bold{H}}}{{\bold{G}}^{\sharp}}{{\bold{H}}^H}=\bold{\Psi}\begin{bmatrix}\bold{0}_{(N-p)\times (N-p)}&\bold{0}_{(N-p)\times p}\\\bold{0}_{p\times (N-p)}&\bold{\Omega}\end{bmatrix}\bold{\Psi}^H,\end{aligned}$$ where $\bold{\Psi}\in \mathbb{C}^{N\times N}$ is a unitary matrix and $\bold{\Omega}\in\mathbb{C}^{p\times p}$ is a non-singular matrix [@khisti2010secure]. Let $\bold{\Psi}=\left[\bold{\Psi}_1\;\bold{\Psi}_2\right]$, where $\bold{\Psi}_1\in\mathbb{C}^{N\times (N-p)}$ and $\bold{\Psi}_2\in\mathbb{C}^{N\times p}$. Substituting (\[eq:conv\_1\_3\]) in (\[eq:conv\_1\_1\]) yields $$\begin{aligned}
\label{eq:conv_1_4}
C_{s}(P)&= {\log{\rm{det}}}\left({\bold{I}}_{N}+\frac{P}{p}\bold{\Psi}_2 \bold{\Omega}\bold{\Psi}_2^H\right)+o(\log P)\\
\label{eq:conv_1_5}
&=\log {\rm{det}}\left({\bold{I}}_{p}+\frac{P}{p}\bold{\Omega}\bold{\Psi}_2^H\bold{\Psi}_2\right)+o(\log P)\\
\label{eq:conv_1_6}
&=\log P^p {\rm{det}}\left(\frac{1}{P}{\bold{I}}_{p}+\frac{1}{p}\bold{\Omega}\right)+o(\log P)\\
\label{eq:conv_1_7}
&=p\log P + o(\log P),\end{aligned}$$ where (\[eq:conv\_1\_5\]) follows from Sylvester’s determinant identity and (\[eq:conv\_1\_6\]) follows from $\bold{\Psi}$ being unitary.
The achievable secrecy rate of the original channel, $R_s$, is upper bounded by $C_s(P)$. Thus, the s.d.o.f. of the original channel, for $0\leq N_c\leq N_e$, is upper bounded as $$\begin{aligned}
\label{eq:conv_1_9}
D_s&=\underset{P\rightarrow\infty}\lim \frac{R_s}{\log P}\leq \underset{P\rightarrow\infty}\lim \frac{p\log P+ o(\log P)}{\log P}\\
\label{eq:conv_1_11}
&=[N+N_c-N_e]^+.\end{aligned}$$
$\max\{N,N_e\}< N_c\leq N+N_e$
------------------------------
[\[Conv\_Proof\_2\]]{} The upper bound we derive here is inspired by the converse of the single antenna Gaussian wire-tap channel with a single antenna cooperative jammer derived in [@xie2012secure], though as we will see shortly, the vector channel extension resulting from multiple antennas does require care. Let $\phi_i$, for $i=1,2,\cdots,10$, denote constants which do not depend on the power $P$.
The secrecy rate $R_s$ can be upper bounded as follows $$\begin{aligned}
\label{eq:conv_2_1}
nR_s&= H(W)\\
\label{eq:conv_2_2}
&=H(W)-H(W|{\bold{Y}_e^n})+H(W|{\bold{Y}_e^n})-H(W|{\bold{Y}_r^n})+H(W|{\bold{Y}_r^n})\\
\label{eq:conv_2_3}
&\leq n\epsilon+ H(W|{\bold{Y}_e^n})-H(W|{\bold{Y}_r^n},{\bold{Y}_e^n})+n\delta\\
\label{eq:conv_2_4}
&=I(W;{\bold{Y}_r^n}|{\bold{Y}_e^n})+n\phi_1\\
\label{eq:conv_2_5}
&=h({\bold{Y}_r^n}|{\bold{Y}_e^n})-h({\bold{Y}_r^n}|W,{\bold{Y}_e^n})+n\phi_1\\
\label{eq:conv_2_6}
&\leq h({\bold{Y}_r^n}|{\bold{Y}_e^n})-h({\bold{Y}_r^n}|W,{\bold{Y}_e^n},{\bold{X}_t^n},{\bold{X}_c^n})+n\phi_1\\
\label{eq:conv_2_7}
&=h({\bold{Y}_r^n},{\bold{Y}_e^n})-h({\bold{Y}_e^n})-h({\bold{Z}_r^n})+n\phi_1,\end{aligned}$$ where (\[eq:conv\_2\_3\]) follows since $H(W)-H(W|{\bold{Y}_e^n})\leq n\epsilon$ by the secrecy constraint in (\[eq:sec\_const\]), $H(W|{\bold{Y}_r^n})\leq n\delta$ by Fano’s inequality, and $H(W|{\bold{Y}_r^n})\geq H(W|{\bold{Y}_r^n},{\bold{Y}_e^n})$ by the fact that conditioning does not increase entropy, (\[eq:conv\_2\_7\]) follows since ${\bold{Z}_r^n}$ is independent from $\{W,{\bold{Y}_e^n},{\bold{X}_t^n},{\bold{X}_c^n}\}$, and $\phi_1=\epsilon+\delta$.
Let ${\tilde{\bold{X}}_t}={\bold{X}_t}+{\tilde{\bold{Z}}_t}$ and ${\tilde{\bold{X}}_c}={\bold{X}_c}+{\tilde{\bold{Z}}_c}$, where ${\tilde{\bold{Z}}_t}\sim\mathcal{CN}({\bold{0}},{\bold{K}_t})$ and ${\tilde{\bold{Z}}_c}\sim\mathcal{CN}(\bold{0},{\bold{K}_c})$. Note that ${\tilde{\bold{X}}_t}$ and ${\tilde{\bold{X}}_c}$ are noisy versions of the transmitted signals ${\bold{X}_t}$ and ${\bold{X}_c}$, respectively. ${\tilde{\bold{Z}}_t}$ is independent from ${\tilde{\bold{Z}}_c}$ and both are independent from $\{{\bold{X}_t},{\bold{X}_c},{\bold{Z}_r},{\bold{Z}_e}\}$. ${\tilde{\bold{Z}}_t^n}$ and ${\tilde{\bold{Z}}_c^n}$ are i.i.d. sequences of the random vectors ${\tilde{\bold{Z}}_t}$ and ${\tilde{\bold{Z}}_c}$. In addition, let ${\tilde{\bold{Z}}_1}=-{\bold{H}_t}{\tilde{\bold{Z}}_t}-{\bold{H}_c}{\tilde{\bold{Z}}_c}+{\bold{Z}_r}$ and ${\tilde{\bold{Z}}_2}=-{\bold{G}_t}{\tilde{\bold{Z}}_t}-{\bold{G}_c}{\tilde{\bold{Z}}_c}+{\bold{Z}_e}$. Note that ${\tilde{\bold{Z}}_1}\sim\mathcal{CN}({\bold{0}},\bold{\Sigma}_{{\tilde{\bold{Z}}_1}})$ and ${\tilde{\bold{Z}}_2}\sim\mathcal{CN}({\bold{0}},\bold{\Sigma}_{{\tilde{\bold{Z}}_2}})$, where $\bold{\Sigma}_{{\tilde{\bold{Z}}_1}}={\bold{H}_t}{\bold{K}_t}{\bold{H}_t^H}+{\bold{H}_c}{\bold{K}_c}{\bold{H}_c^H}+{\bold{I}}_{N}$ and $\bold{\Sigma}_{{\tilde{\bold{Z}}_2}}={\bold{G}_t}{\bold{K}_t}{\bold{G}_t^H}+{\bold{G}_c}{\bold{K}_c}{\bold{G}_c^H}+{\bold{I}}_{N_e}$. ${\tilde{\bold{Z}}_1^n}$ and ${\tilde{\bold{Z}}_2^n}$ are i.i.d. sequences of ${\tilde{\bold{Z}}_1}$ and ${\tilde{\bold{Z}}_2}$, since each of ${\bold{Z}_r^n},{\bold{Z}_e^n},{\tilde{\bold{Z}}_t^n},{\tilde{\bold{Z}}_c^n}$ is i.i.d. across time. The covariance matrices, ${\bold{K}_t}$ and ${\bold{K}_c}$, are chosen as ${\bold{K}_t}=\rho^2{\bold{I}}_{N}$ and ${\bold{K}_c}=\rho^2{\bold{I}}_{N_c}$, where $0<\rho\leq1/\max\left\{||{\bold{H}_c^H}||,\sqrt{||{\bold{G}_t^H}||^2+||{\bold{G}_c^H}||^2}\right\}$. This choice of ${\bold{K}_t}$ and ${\bold{K}_c}$ guarantees the finiteness of $h({\tilde{\bold{Z}}_t}),h({\tilde{\bold{Z}}_c}),h({\tilde{\bold{Z}}_1})$, and $h({\tilde{\bold{Z}}_2})$, as shown in Appendix A. Starting from (\[eq:conv\_2\_7\]), we have $$\begin{aligned}
\label{eq:conv_2_8}
n&R_s\leq h({\bold{Y}_r^n},{\bold{Y}_e^n})-h({\bold{Y}_e^n})+n\phi_2\\
\label{eq:conv_2_9}
&=h({\bold{Y}_r^n},{\bold{Y}_e^n},{\tilde{\bold{X}}_t^n},{\tilde{\bold{X}}_c^n})-h({\tilde{\bold{X}}_t^n},{\tilde{\bold{X}}_c^n}|{\bold{Y}_r^n},{\bold{Y}_e^n})-h({\bold{Y}_e^n})+n\phi_2\\
\label{eq:conv_2_10}
&\leq h({\tilde{\bold{X}}_t^n},{\tilde{\bold{X}}_c^n})+h({\bold{Y}_r^n},{\bold{Y}_e^n}|{\tilde{\bold{X}}_t^n},{\tilde{\bold{X}}_c^n})-h({\tilde{\bold{X}}_t^n},{\tilde{\bold{X}}_c^n}|{\bold{Y}_r^n},{\bold{Y}_e^n},{\bold{X}_t^n},{\bold{X}_c^n})-h({\bold{Y}_e^n})+n\phi_2\\
\label{eq:conv_2_11}
&\leq h({\tilde{\bold{X}}_t^n})+h({\tilde{\bold{X}}_c^n})+h({\bold{Y}_r^n}|{\tilde{\bold{X}}_t^n},{\tilde{\bold{X}}_c^n})+h({\bold{Y}_e^n}|{\tilde{\bold{X}}_t^n},{\tilde{\bold{X}}_c^n})-h({\tilde{\bold{Z}}_t^n},{\tilde{\bold{Z}}_c^n})-h({\bold{Y}_e^n})+n\phi_2\\
\label{eq:conv_2_12}
&=h({\tilde{\bold{X}}_t^n})+h({\tilde{\bold{X}}_c^n})+h({\tilde{\bold{Z}}_1^n}|{\tilde{\bold{X}}_t^n},{\tilde{\bold{X}}_c^n})+h({\tilde{\bold{Z}}_2^n}|{\tilde{\bold{X}}_t^n},{\tilde{\bold{X}}_c^n})-h({\bold{Y}_e^n})+n\phi_3\\
\label{eq:conv_2_13}
&\leq h({\tilde{\bold{X}}_t^n})+h({\tilde{\bold{X}}_c^n})+h({\tilde{\bold{Z}}_1^n})+h({\tilde{\bold{Z}}_2^n})-h({\bold{Y}_e^n})+n\phi_3\\
\label{eq:conv_2_14}
&=h({\tilde{\bold{X}}_t^n})+h({\tilde{\bold{X}}_c^n})-h({\bold{Y}_e^n})+n\phi_4,\end{aligned}$$ where (\[eq:conv\_2\_11\]) follows since ${\tilde{\bold{Z}}_t^n}$ and ${\tilde{\bold{Z}}_c^n}$ are independent from $\{{\bold{X}_t^n},{\bold{X}_c^n},{\bold{Y}_r^n},{\bold{Y}_e^n}\}$, $\phi_2=\phi_1-h({\bold{Z}_r})$, $\phi_3=\phi_2-h({\tilde{\bold{Z}}_t})-h({\tilde{\bold{Z}}_c})$, and $\phi_4=\phi_3+h({\tilde{\bold{Z}}_1})+h({\tilde{\bold{Z}}_2})$. We now consider the following two cases.
[**[Case 1: $N_e\leq N$]{}**]{}
We first lower bound $h({\bold{Y}_e^n})$ in (\[eq:conv\_2\_14\]) as follows. Using the infinite divisibility of the Gaussian distribution, we can express a stochastically equivalent form of ${\bold{Z}_e}$, denoted by $\bold{Z}'_e$, as $$\begin{aligned}
\label{eq:conv_2_15}
\bold{Z}'_e={\bold{G}_t}{\tilde{\bold{Z}}_t}+{\tilde{\bold{Z}}_e}.\end{aligned}$$ where[^5] ${\tilde{\bold{Z}}_e}\sim\mathcal{CN}({\bold{0}},{\bold{I}}_{N_e}-{\bold{G}_t}{\bold{K}_t}{\bold{G}_t^H})$ is independent from $\{{\tilde{\bold{Z}}_t},{\tilde{\bold{Z}}_c},{\bold{X}_t},{\bold{X}_c},{\bold{Z}_r}\}$. ${\tilde{\bold{Z}}_e^n}$ is an i.i.d. sequence of the random vectors ${\tilde{\bold{Z}}_e}$. Using (\[eq:conv\_2\_15\]), a stochastically equivalent form of ${\bold{Y}_e^n}$ is $$\begin{aligned}
\label{eq:conv_2_16}
{\bold{Y}_{e}'}^n={\bold{G}_t}{\tilde{\bold{X}}_t^n}+{\bold{G}_c}{\bold{X}_c^n}+{\tilde{\bold{Z}}_e^n}.\end{aligned}$$
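For clarity, the stochastic equivalence claimed here can be verified in one line: since ${\tilde{\bold{Z}}_t}$ and ${\tilde{\bold{Z}}_e}$ are independent zero-mean circularly symmetric Gaussian vectors, $$\begin{aligned}
{\rm{Cov}}({\bold{G}_t}{\tilde{\bold{Z}}_t}+{\tilde{\bold{Z}}_e})={\bold{G}_t}{\bold{K}_t}{\bold{G}_t^H}+\left({\bold{I}}_{N_e}-{\bold{G}_t}{\bold{K}_t}{\bold{G}_t^H}\right)={\bold{I}}_{N_e},\end{aligned}$$ so $\bold{Z}'_e$ in (\[eq:conv\_2\_15\]) has the same distribution as ${\bold{Z}_e}$ and is independent from $\{{\bold{X}_t},{\bold{X}_c}\}$; hence ${\bold{Y}_{e}'}^n$ has the same joint distribution with the transmitted signals as ${\bold{Y}_e^n}$.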
Let ${\bold{X}_t}=\left[X_{t,1}\cdots X_{t,N}\right]^T$, ${\tilde{\bold{Z}}_t}=[\tilde{Z}_{t,1}\cdots\tilde{Z}_{t,N}]^T$, and ${\tilde{\bold{X}}_t}=[{\tilde{\bold{X}}_{t_1}}^T\;{\tilde{\bold{X}}_{t_2}}^T]^T$, where ${\tilde{\bold{X}}_{t_1}}=[\tilde{X}_{t,1}\cdots\tilde{X}_{t,{N_e}}]^T$, ${\tilde{\bold{X}}_{t_2}}=[\tilde{X}_{t,{N_e+1}}\cdots\tilde{X}_{t,N}]^T$, and $\tilde{X}_{t,k}=X_{t,k}+\tilde{Z}_{t,k}$, $k=1,2,\cdots,N$. In addition, let ${\bold{G}_t}=\left[{\bold{G}_{t_1}}\;{\bold{G}_{t_2}}\right]$, where ${\bold{G}_{t_1}}\in\mathbb{C}^{N_e\times N_e}$ and ${\bold{G}_{t_2}}\in\mathbb{C}^{N_e\times (N-N_e)}$. Using (\[eq:conv\_2\_16\]), we have $$\begin{aligned}
\label{eq:conv_2_17}
h&({\bold{Y}_e^n})=h({\bold{Y}_{e}'}^n)=h({\bold{G}_t}{\tilde{\bold{X}}_t^n}+{\bold{G}_c}{\bold{X}_c^n}+{\tilde{\bold{Z}}_e^n})\\
\label{eq:conv_2_18}
&\geq h({\bold{G}_t}{\tilde{\bold{X}}_t^n})=h({\bold{G}_{t_1}}{\tilde{\bold{X}}_{t_1}^n}+{\bold{G}_{t_2}}{\tilde{\bold{X}}_{t_2}^n})\\
\label{eq:conv_2_19}
&\geq h({\bold{G}_{t_1}}{\tilde{\bold{X}}_{t_1}^n}+{\bold{G}_{t_2}}{\tilde{\bold{X}}_{t_2}^n}|{\tilde{\bold{X}}_{t_2}^n})=h({\bold{G}_{t_1}}{\tilde{\bold{X}}_{t_1}^n}|{\tilde{\bold{X}}_{t_2}^n})\\
\label{eq:conv_2_20}
&=h({\tilde{\bold{X}}_{t_1}^n}|{\tilde{\bold{X}}_{t_2}^n})+n\log|\det({\bold{G}_{t_1}})|.\end{aligned}$$ where the inequality in (\[eq:conv\_2\_18\]) follows since $\{{\bold{G}_t}{\tilde{\bold{X}}_t^n}\}$ and $\{{\bold{G}_c}{\bold{X}_c^n}+{\tilde{\bold{Z}}_e^n}\}$ are independent, as for two independent random vectors $\bold{X}$ and $\bold{Y}$, we have $h(\bold{X}+\bold{Y})\geq h(\bold{X})$.
Substituting (\[eq:conv\_2\_20\]) in (\[eq:conv\_2\_14\]) results in $$\begin{aligned}
\label{eq:conv_2_21}
nR_s&\leq h({\tilde{\bold{X}}_{t_1}^n},{\tilde{\bold{X}}_{t_2}^n})+h({\tilde{\bold{X}}_c^n})-h({\tilde{\bold{X}}_{t_1}^n}|{\tilde{\bold{X}}_{t_2}^n})-n\log|\det({\bold{G}_{t_1}})|+n\phi_4\\
\label{eq:conv_2_22}
&=h({\tilde{\bold{X}}_{t_2}^n})+h({\tilde{\bold{X}}_c^n})+n\phi_5,\end{aligned}$$ where $\phi_5=\phi_4-\log|\det({\bold{G}_{t_1}})|$.
We now exploit the reliability constraint in (\[eq:reliab\_const\]) to derive another upper bound for $R_s$, which we combine with the bound in (\[eq:conv\_2\_22\]) in order to obtain the desired bound for the s.d.o.f. when $N_e<N$ and $N\leq N_c\leq N+N_e$. The reliability constraint in (\[eq:reliab\_const\]) can be achieved only if [@cover2006elements] $$\begin{aligned}
\label{eq:conv_2_23}
nR_s&\leq I({\bold{X}_t^n};{\bold{Y}_r^n})=h({\bold{Y}_r^n})-h({\bold{Y}_r^n}|{\bold{X}_t^n})\\
\label{eq:conv_2_24}
&=h({\bold{Y}_r^n})-h({\bold{H}_c}{\bold{X}_c^n}+{\bold{Z}_r^n}).\end{aligned}$$ Similar to (\[eq:conv\_2\_15\]), a stochastically equivalent form of ${\bold{Z}_r}$ is given by $$\begin{aligned}
\label{eq:conv_2_25}
{\bold{Z}_r}'={\bold{H}_c}{\tilde{\bold{Z}}_c}+{\tilde{\bold{Z}}_r},\end{aligned}$$ where[^6] ${\tilde{\bold{Z}}_r}\sim\mathcal{CN}({\bold{0}},{\bold{I}}_{N}-{\bold{H}_c}{\bold{K}_c}{\bold{H}_c^H})$ is independent from $\{{\tilde{\bold{Z}}_t},{\tilde{\bold{Z}}_c},{\bold{X}_t},{\bold{X}_c},{\bold{Z}_e}\}$. ${\tilde{\bold{Z}}_r^n}$ is an i.i.d. sequence of the random vectors ${\tilde{\bold{Z}}_r}$.
Let ${\bold{X}_c}=\left[X_{c,1}\cdots X_{c,N_c}\right]^T$, ${\tilde{\bold{Z}}_c}=[\tilde{Z}_{c,1}\cdots\tilde{Z}_{c,N_c}]^T$, and ${\tilde{\bold{X}}_c}=[{\tilde{\bold{X}}_{c_1}}^T\;{\tilde{\bold{X}}_{c_2}}^T]^T$, where ${\tilde{\bold{X}}_{c_1}}=[\tilde{X}_{c,1}\;\dots\;\tilde{X}_{c,N}]^T$, ${\tilde{\bold{X}}_{c_2}}=[\tilde{X}_{c,N+1}\;\cdots\;\tilde{X}_{c,N_c}]^T$, and $\tilde{X}_{c,k}=X_{c,k}+\tilde{Z}_{c,k}$, $k=1,2,\cdots,N_c$. In addition, let ${\bold{H}_c}=\left[{\bold{H}_{c_1}}\;{\bold{H}_{c_2}}\right]$, where ${\bold{H}_{c_1}}\in\mathbb{C}^{N\times N}$ and ${\bold{H}_{c_2}}\in\mathbb{C}^{N\times (N_c-N)}$. Using (\[eq:conv\_2\_25\]), we have $$\begin{aligned}
\label{eq:conv_2_26}
h(&{\bold{H}_c}{\bold{X}_c^n}+{\bold{Z}_r^n})=h({\bold{H}_c}{\bold{X}_c^n}+{\bold{Z}_r'}^n)=h({\bold{H}_c}{\tilde{\bold{X}}_c^n}+{\tilde{\bold{Z}}_r^n})\\
\label{eq:conv_2_27}
&\geq h({\bold{H}_c}{\tilde{\bold{X}}_c^n})=h({\bold{H}_{c_1}}{\tilde{\bold{X}}_{c_1}^n}+{\bold{H}_{c_2}}{\tilde{\bold{X}}_{c_2}^n})\\
\label{eq:conv_2_28}
&\geq h({\bold{H}_{c_1}}{\tilde{\bold{X}}_{c_1}^n}|{\tilde{\bold{X}}_{c_2}^n})\\
\label{eq:conv_2_29}
&=h({\tilde{\bold{X}}_{c_1}^n}|{\tilde{\bold{X}}_{c_2}^n})+n\log|\det({\bold{H}_{c_1}})|.\end{aligned}$$
Substituting (\[eq:conv\_2\_29\]) in (\[eq:conv\_2\_24\]) yields $$\begin{aligned}
\label{eq:conv_2_30}
nR_s\leq h({\bold{Y}_r^n})-h({\tilde{\bold{X}}_{c_1}^n}|{\tilde{\bold{X}}_{c_2}^n})-n\log|\det({\bold{H}_{c_1}})|.\end{aligned}$$ Let ${\bold{Y}_r}=\left[Y_{r,1}\;\cdots\;Y_{r,N}\right]^T$. Summing (\[eq:conv\_2\_22\]) and (\[eq:conv\_2\_30\]) results in $$\begin{aligned}
\label{eq:conv_2_31}
nR_s&\leq\frac{1}{2}\left\{h({\bold{Y}_r^n})+h({\tilde{\bold{X}}_{t_2}^n})+h({\tilde{\bold{X}}_{c_2}^n})\right\}+n\phi_6\\
\label{eq:conv_2_32}
&\leq\frac{1}{2}\sum_{i=1}^n\left\{\sum_{k=1}^N h(Y_{r,k}(i))+\sum_{k=N_e+1}^N h(\tilde{X}_{t,k}(i))+\sum_{k=N+1}^{N_c} h(\tilde{X}_{c,k}(i))\right\}+n\phi_6,\end{aligned}$$ where $\phi_6=\frac{1}{2}\left(\phi_5-\log|\det({\bold{H}_{c_1}})|\right)$.
In Appendix B, we show, for $i=1,\cdots,n$, $k=1,\cdots,N$, and $j=1,\cdots,N_c$, that $$\begin{aligned}
\label{eq:conv_2_33}
&h(Y_{r,k}(i))\leq \log 2\pi e+\log (1+h^2 P)\\
\label{eq:conv_2_34}
&h(\tilde{X}_{t,k}(i)), h(\tilde{X}_{c,j}(i))\leq \log 2\pi e+\log (\rho^2+P),\end{aligned}$$ where $h^2=\underset{k}{\max}\;\left(||{\bold{h}}_{t,k}^{r}||^2+||{\bold{h}}_{c,k}^{r}||^2\right)$; ${\bold{h}}_{t,k}^{r}$ and ${\bold{h}}_{c,k}^{r}$ denote the transpose of the $k$th row vectors of ${\bold{H}_t}$ and ${\bold{H}_c}$, respectively. Using (\[eq:conv\_2\_32\]), (\[eq:conv\_2\_33\]), and (\[eq:conv\_2\_34\]), we have $$\begin{aligned}
\label{eq:conv_2_35}
R_s\leq \frac{N}{2}\log (1+h^2 P)+\frac{N_c-N_e}{2}\log(\rho^2+P)+\phi_7,\end{aligned}$$ where $\phi_7=\phi_6+\frac{N+N_c-N_e}{2}\log 2\pi e$. Using (\[eq:sdof\]), we get $$\begin{aligned}
\label{eq:conv_2_36}
D_s&\leq \underset{P\rightarrow\infty}\lim \frac{\frac{N}{2}\log (1+h^2 P)+\frac{N_c-N_e}{2}\log(\rho^2+P)+\phi_7}{\log P}\\
\label{eq:conv_2_37}
&=\frac{N+N_c-N_e}{2}.\end{aligned}$$ Thus, the s.d.o.f. for $N_e\leq N$ and $N\leq N_c\leq N+N_e$, is upper bounded by $\frac{N+N_c-N_e}{2}$.
[**[Case 2: $N_e>N$]{}**]{}\
Another stochastically equivalent form of ${\bold{Z}_e}$ is $$\begin{aligned}
\label{eq:conv_2_38}
\bold{Z}''_e={\bold{G}_t}{\tilde{\bold{Z}}_t}+{\bold{G}_c}{\tilde{\bold{Z}}_c}+{\tilde{\bold{Z}}'_e}.\end{aligned}$$ where[^7] ${\tilde{\bold{Z}}'_e}\sim\mathcal{CN}({\bold{0}},{\bold{I}}_{N_e}-{\bold{G}_t}{\bold{K}_t}{\bold{G}_t^H}-{\bold{G}_c}{\bold{K}_c}{\bold{G}_c^H})$ is independent from $\{{\tilde{\bold{Z}}_t},{\tilde{\bold{Z}}_c},{\bold{X}_t},{\bold{X}_c},{\bold{Z}_r}\}$. ${\tilde{\bold{Z}}_e'^n}$ is an i.i.d. sequence of the random vectors ${\tilde{\bold{Z}}'_e}$. Using (\[eq:conv\_2\_38\]), another stochastically equivalent form of ${\bold{Y}_e^n}$ is given by $$\begin{aligned}
\label{eq:conv_2_39}
{{\bold{Y}}''_{e}}^n={\bold{G}_t}{\tilde{\bold{X}}_t}+{\bold{G}_c}{\tilde{\bold{X}}_c^n}+{\tilde{\bold{Z}}_e'^n}.\end{aligned}$$
Let us rewrite ${\tilde{\bold{X}}_c}$ and ${\bold{H}_c}$ as follows. ${\tilde{\bold{X}}_c}=[{{\tilde{\bold{X}}_{c_1}}'}^T\;{{\tilde{\bold{X}}_{c_2}}'}^T]^T$, where ${{\tilde{\bold{X}}_{c_1}}'}=[\tilde{X}_{c,1}\cdots\tilde{X}_{c,{N_e-N}}]^T$, ${{\tilde{\bold{X}}_{c_2}}'}=[{{\tilde{\bold{X}}_{c_{21}}}'}^T\;{{\tilde{\bold{X}}_{c_{22}}}'}^T]^T$, ${{\tilde{\bold{X}}_{c_{21}}}'}=[\tilde{X}_{c,{N_e-N+1}}\cdots\tilde{X}_{c,{N_e}}]^T$, and ${{\tilde{\bold{X}}_{c_{22}}}'}=[\tilde{X}_{c,{N_e+1}}\cdots\tilde{X}_{c,{N_c}}]^T$. ${\bold{H}_c}=[{\bold{H}_{c_1}}'\;{\bold{H}_{c_2}}']$, where ${\bold{H}_{c_1}}'\in\mathbb{C}^{N\times {(N_e-N)}}$, ${\bold{H}_{c_2}}'=[{\bold{H}'_{c_{21}}}\;{\bold{H}_{c_{22}}}]$, ${\bold{H}'_{c_{21}}}\in\mathbb{C}^{N\times N}$, and ${\bold{H}_{c_{22}}}\in\mathbb{C}^{N\times (N_c-N_e)}$. Let ${\bold{G}_c}=\left[{\bold{G}_{c_1}}\;{\bold{G}_{c_2}}\right]$, where ${\bold{G}_{c_1}}\in\mathbb{C}^{N_e\times(N_e-N)}$ and ${\bold{G}_{c_2}}\in\mathbb{C}^{N_e\times (N+N_c-N_e)}$. Using (\[eq:conv\_2\_39\]), we have $$\begin{aligned}
\label{eq:conv_2_40}
h({\bold{Y}_e^n})&=h({{\bold{Y}}''_{e}}^n)=h([{\bold{G}_t}\;{\bold{G}_{c_1}}]\begin{bmatrix}{\tilde{\bold{X}}_t^n}\\{{\tilde{\bold{X}}_{c_1}}'^n}\\\end{bmatrix}+{\bold{G}_{c_2}}{{\tilde{\bold{X}}_{c_2}}'^n}+{\tilde{\bold{Z}}_e'^n})\\
\label{eq:conv_2_41}
&\geq h({\tilde{\bold{X}}_t^n},{{\tilde{\bold{X}}_{c_1}}'^n}|{{\tilde{\bold{X}}_{c_2}}'^n})+n\log|\det[{\bold{G}_t}\;{\bold{G}_{c_1}}]|\\
\label{eq:conv_2_42}
&\geq h({\tilde{\bold{X}}_t^n})+h({{\tilde{\bold{X}}_{c_1}}'^n}|{{\tilde{\bold{X}}_{c_2}}'^n})+n\log|\det[{\bold{G}_t}\;{\bold{G}_{c_1}}]|,\end{aligned}$$ where (\[eq:conv\_2\_42\]) follows since ${\tilde{\bold{X}}_t^n}$ and ${{\tilde{\bold{X}}_{c_2}}'^n}$ are independent. Substituting (\[eq:conv\_2\_42\]) in (\[eq:conv\_2\_14\]) gives $$\begin{aligned}
\label{eq:conv_2_43}
nR_s\leq h({{\tilde{\bold{X}}_{c_2}}'^n})+n\phi_8,\end{aligned}$$ where $\phi_8=\phi_4-\log|\det[{\bold{G}_t}\;{\bold{G}_{c_1}}]|$.
In order to obtain another upper bound for $R_s$, which we combine with (\[eq:conv\_2\_43\]) to obtain the desired bound for $N_e>N$ and $N_e\leq N_c\leq N+N_e$, we proceed as follows. Consider a modified channel where the first $N_e-N$ antennas at the cooperative jammer are removed, i.e., the cooperative jammer uses only the last $N+N_c-N_e$ out of its $N_c$ antennas. The transmitted signals in the modified channel are ${\bold{X}_t^n}$ and $\bold{X}_{c_2}'^{n}$, and hence, the legitimate receiver receives $$\begin{aligned}
\label{eq:modified_channel}
{\bold{\bar{Y}}_r^n}={\bold{H}_t}{\bold{X}_t^n}+{\bold{H}_{c_2}}'\bold{X}_{c_2}'^{n}+{\bold{Z}_r^n}. \end{aligned}$$ Since the cooperative jamming signal is additive interference for the legitimate receiver, the reliable communication rate of this modified channel, $\bar{R}$, is an upper bound for that of the original channel, $R$. Since $R_s$ satisfies the reliability and secrecy constraints in (\[eq:reliab\_const\]) and (\[eq:sec\_const\]), we have that $$\begin{aligned}
\label{eq:conv_2_44}
nR_s&\leq nR\leq n\bar{R}\leq I({\bold{X}_t^n};{\bold{\bar{Y}}_r^n})= h({\bold{\bar{Y}}_r^n})-h({\bold{H}_{c_2}}'\bold{X}_{c_2}'^{n}+{\bold{Z}_r^n}).\end{aligned}$$ Let ${\tilde{\bold{Z}}_{c_2}}=[\tilde{Z}_{c,{N_e-N+1}}\cdots \tilde{Z}_{c,{N_c}}]^T\sim\mathcal{CN}({\bold{0}},{\bold{K}_c}')$, where ${\bold{K}_c}'=\rho^2{\bold{I}}_{N+N_c-N_e}$. Another stochastically equivalent form of ${\bold{Z}_r}$ is ${\bold{Z}_r}''={\bold{H}_{c_2}}'{\tilde{\bold{Z}}_{c_2}}+{\tilde{\bold{Z}}'_r}$, where[^8] ${\tilde{\bold{Z}}'_r}\sim\mathcal{CN}({\bold{0}},{\bold{I}}_{N}-{\bold{H}_{c_2}}'{\bold{K}_c}'{\bold{H}_{c_2}'^H})$ is independent from $\{{\tilde{\bold{Z}}_t},{\tilde{\bold{Z}}_c},{\bold{X}_t},{\bold{X}_c},{\bold{Z}_e}\}$, and ${\tilde{\bold{Z}}_r'^n}$ is an i.i.d. sequence of ${\tilde{\bold{Z}}'_r}$. Thus, using (\[eq:conv\_2\_44\]), we have $$\begin{aligned}
\label{eq:conv_2_44_1}
nR_s&\leq h({\bold{\bar{Y}}_r^n})-h({\bold{H}_{c_2}}'{{\tilde{\bold{X}}_{c_2}}'^n}+{\tilde{\bold{Z}}_r'^n})\leq h({\bold{\bar{Y}}_r^n})-h({\bold{H}_{c_2}}'{{\tilde{\bold{X}}_{c_2}}'^n})\\
\label{eq:conv_2_45}
&\leq h({\bold{\bar{Y}}_r^n})-h({\tilde{\bold{X}}_{c_{21}}'^n}|{\tilde{\bold{X}}_{c_{22}}'^n})-n\log|\det(\bold{H}_{c_{21}}')|.\end{aligned}$$
Let $\bold{\bar{Y}}_r=[\bar{Y}_{r,1}\cdots \bar{Y}_{r,N}]^T$. Summing (\[eq:conv\_2\_43\]) and (\[eq:conv\_2\_45\]) yields $$\begin{aligned}
\label{eq:conv_2_46}
nR_s&\leq \frac{1}{2}\left\{h(\bold{\bar{Y}}_r^n)+h({\tilde{\bold{X}}_{c_{22}}'^n})\right\}+n\phi_9\\
\label{eq:conv_2_47}
&\leq \frac{1}{2}\sum_{i=1}^n\left\{\sum_{k=1}^N h(\bar{Y}_{r,k}(i))+\sum_{k=N_e+1}^{N_c} h(\tilde{X}_{c,k}(i))\right\}+n\phi_9,\end{aligned}$$ where $\phi_9=\frac{1}{2}\{\phi_8-\log|\det({\bold{H}'_{c_{21}}})|\}$. In Appendix. B, we also show that $$\begin{aligned}
\label{eq:conv_2_48}
h(\bar{Y}_{r,k}(i))\leq \log 2\pi e+\log (1+\bar{h}^2 P),\end{aligned}$$ where $\bar{h}^2=\underset{k}{\max}\;\left(||{\bold{h}}_{t,k}^{r}||^2+||{\bold{h}}_{c,k}'^{r}||^2\right)$; ${\bold{h}}_{c,k}'^{r}$ denotes the transpose of the $k$th row vector of ${\bold{H}_{c_2}}'$.
Similar to case $1$, using (\[eq:conv\_2\_47\]), (\[eq:conv\_2\_48\]), and (\[eq:conv\_2\_34\]), the secrecy rate is bounded as $$\begin{aligned}
\label{eq:conv_2_49}
R_s\leq \frac{N}{2}\log(1+\bar{h}^2 P)+\frac{N_c-N_e}{2}\log(\rho^2+P)+n\phi_{10},\end{aligned}$$ where $\phi_{10}=\phi_9+\frac{N+N_c-N_e}{2}\log 2\pi e$. Thus, the s.d.o.f., for $N_e>N$ and $N_e\leq N_c\leq N+N_e$, is upper bounded as $$\begin{aligned}
\label{eq:conv_2_50}
D_s\leq \frac{N+N_c-N_e}{2}.\end{aligned}$$
Obtaining the Upper Bound
-------------------------
[\[Conv\_Proof\_3\]]{} For $N_e\leq N$, the upper bound for the s.d.o.f. derived in Section \[Conv\_Proof\_1\] is equal to $N+N_c-N_e$ for all $0\leq N_c\leq N_e$, while the upper bound derived in Section \[Conv\_Proof\_2\], at $N_c=N$, is equal to $N-\frac{N_e}{2}$, cf. equations (\[eq:conv\_1\_11\]) and (\[eq:conv\_2\_37\]). As the former upper bound is greater than the latter for all $\frac{N_e}{2}<N_c\leq N$, the s.d.o.f. is upper bounded by $N-\frac{N_e}{2}$ for all $\frac{N_e}{2}<N_c\leq N$. Combining these statements, we have the following upper bound for the s.d.o.f. for $N_e\leq N$: $$\begin{aligned}
\label{eq:conv_3_1}
D_s\leq \begin{cases}
N+N_c-N_e,\qquad \text{for}\;\; 0\leq N_c\leq \frac{N_e}{2}\\
N-\frac{N_e}{2},\qquad \text{for}\;\; \frac{N_e}{2}< N_c \leq N\\
\frac{N+N_c-N_e}{2},\qquad \text{for}\;\; N< N_c\leq N+N_e.
\end{cases}\end{aligned}$$
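The comparison invoked above is elementary, but it is worth recording explicitly. For $N_e\leq N$, $$\begin{aligned}
N+N_c-N_e\geq N-\frac{N_e}{2}\;\Longleftrightarrow\; N_c\geq \frac{N_e}{2},\qquad\quad
N-\frac{N_e}{2}\leq \frac{N+N_c-N_e}{2}\;\Longleftrightarrow\; N_c\geq N,\end{aligned}$$ so the three pieces of (\[eq:conv\_3\_1\]) coincide at the breakpoints $N_c=\frac{N_e}{2}$ and $N_c=N$, and the combined bound is continuous in $N_c$.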
Similarly, when $N_e>N$ and for all $N_e-\frac{N}{2}<N_c\leq N_e$, the upper bound derived for $0\leq N_c\leq N_e$ in Section \[Conv\_Proof\_1\] is greater than the upper bound derived in Section \[Conv\_Proof\_2\] at $N_c=N_e$. Thus, the s.d.o.f. for $N_e-\frac{N}{2}<N_c\leq N_e$ is upper bounded by $\frac{N}{2}$. In addition, the upper bound in (\[eq:conv\_1\_11\]) is equal to zero for all $0\leq N_c\leq N_e-N$. Thus, the s.d.o.f. for $N_e>N$ is upper bounded as: $$\begin{aligned}
\label{eq:conv_3_2}
D_s\leq \begin{cases}
0,\qquad\qquad\qquad \;\;\text{for}\;\; 0\leq N_c\leq N_e-N\\
N+N_c-N_e,\qquad \text{for}\; N_e-N<N_c\leq N_e-\frac{N}{2}\\
\frac{N}{2},\qquad\qquad\qquad\; \text{for}\;\;N_e-\frac{N}{2}< N_c \leq N_e\\
\frac{N+N_c-N_e}{2},\qquad\qquad \text{for}\;\; N_e< N_c\leq N+N_e.
\end{cases}\end{aligned}$$
By combining the bounds for $N_e\leq N$ in (\[eq:conv\_3\_1\]) and for $N_e>N$ in (\[eq:conv\_3\_2\]), we obtain the upper bound for the s.d.o.f. in (\[eq:thm1\]). In the next Section, we will show the achievability of (\[eq:thm1\]).
Achievablility for $N_t=N_r=N$ {#AchSchemes}
==============================
In this section, we provide the achievability proof for Theorem \[Thm1\] by showing the achievability of (\[eq:conv\_3\_1\]) when $N_e\leq N$, and the achievability of (\[eq:conv\_3\_2\]) when $N_e>N$. For both $N_e\leq N$ and $N_e>N$, we divide the range of the number of antennas at the cooperative jammer, $N_c$, into five ranges and propose an achievable scheme for each range. For all the achievable schemes in this section, we have the $n$-letter signals, ${\bold{X}_t^n}$ and ${\bold{X}_c^n}$, as i.i.d. sequences. Since ${\bold{X}_c^n}$ is independent from ${\bold{X}_t^n}$, and each of them is i.i.d. across time, we have in effect a memoryless wire-tap channel and the secrecy rate $$\begin{aligned}
\label{eq:AchSecRate}
R_s=[I({\bold{X}_t};{\bold{Y}_r})-I({\bold{X}_t};{\bold{Y}_e})]^{+},\end{aligned}$$ is achievable by [*[stochastic encoding]{}*]{} at the transmitter [@CK].
The transmitted signals at the transmitter and the cooperative jammer, for each of the following schemes, are $$\begin{aligned}
\label{eq:Xt1_Xc1}
{\bold{X}_t}&={\bold{P}_t}{\bold{U}_t},\quad {\bold{X}_c}={\bold{P}_c}{\bold{V}_c},\end{aligned}$$ where ${\bold{U}_t}=\left[U_1\cdots U_d\right]^T$ and ${\bold{V}_c}=\left[V_1\cdots V_l\right]^T$ are the information and cooperative jamming streams, respectively. ${\bold{P}_t}=\left[{\bold{p}}_{t,1}\;\cdots{\bold{p}}_{t,d}\right]\in\mathbb{C}^{N\times d}$ and ${\bold{P}_c}=\left[{\bold{p}}_{c,1}\cdots{\bold{p}}_{c,l}\right]\in\mathbb{C}^{N_c\times l}$ are the precoding matrices at the transmitter and the cooperative jammer.
Signaling, precoding, and decoding techniques utilized in this proof vary according to the relative number of antennas at the different terminals and whether the s.d.o.f. of the channel is integer valued or not. In particular, we show that Gaussian signaling both for transmission and cooperative jamming is sufficient to achieve the integer valued s.d.o.f., while achieving non-integer s.d.o.f. requires structured signaling and cooperative jamming along with a combination of linear receiver processing and the complex field equivalent of real interference alignment [@maddah2010degrees; @kleinbock2002baker]. Additionally, the linear precoding at the transmitter and the cooperative jammer depends on whether $N_e$ is equal to, smaller than, or larger than $N$, and whether the number of antennas at the cooperative jammer, $N_c$, results in an s.d.o.f. for the channel that is before, after, or at the flat s.d.o.f. range in the s.d.o.f. plot versus $N_c$. This leads to an achievability proof that involves $10$ distinct achievable schemes, which differ from each other in the type of signals used (Gaussian or structured), and/or precoding at the transmitter and cooperative jammer, and/or decoding at the legitimate receiver.
In order to extend real interference alignment to complex channels, we need to utilize different results than those used for real channels. For real channels, to analyze the decoder performance, reference [@motahari2009real2] proposed utilizing the convergence part of the Khintchine-Groshev theorem in the field of Diophantine approximation [@schmidt1980diophantine], which deals with the approximation of real numbers by rational numbers. For complex channels, transforming the channel into a real channel with twice the dimensions, as is usually the convention, is not sufficient here, since real interference alignment relies on the linear independence over rational numbers of the channel gains, which does not continue to hold after such channel transformation. Luckily, we can utilize a result in the field of classification of transcendental complex numbers, which provides a bound on the absolute value of a complex algebraic number with rational coefficients in terms of its height, i.e., the maximum coefficient [@Sprindzuk1; @Sprindzuk2; @kleinbock2002baker]. For complex channel coefficients, this result ends up playing the same role as the Khintchine-Groshev theorem for real coefficients.
Before continuing with the achievability proof for the different cases, we state the following lemma, which is utilized to show the linear independence between the directions of the received streams at the legitimate receiver.
\[lemma1\] Consider two matrices $\bold{E}_1\in\mathbb{C}^{N\times K}$ and $\bold{E}_2\in\mathbb{C}^{K\times M}$, where $N,M<K$. If the matrix $\bold{E}_2$ is full column rank and the matrix $\bold{E}_1$ has all of its entries independently and randomly drawn according to a continuous distribution, then ${\rm{rank}}(\bold{E}_1\bold{E}_2)=\min(N,M)$ a.s.
The proof of Lemma \[lemma1\] is given in Appendix C.
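Although the proof is deferred to Appendix C, the statement of Lemma \[lemma1\] is easy to probe numerically. The following short sketch (with arbitrarily chosen dimensions, included only as a sanity check and not as part of the proof) draws a random complex $\bold{E}_1$ and a full column rank $\bold{E}_2$ and reports ${\rm{rank}}(\bold{E}_1\bold{E}_2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 3, 6, 4                      # N, M < K, as required by Lemma 1
cplx = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

E1 = cplx(N, K)                        # entries drawn from a continuous distribution
E2 = cplx(K, M)                        # a random K x M matrix is full column rank a.s.
assert np.linalg.matrix_rank(E2) == M

print(np.linalg.matrix_rank(E1 @ E2))  # prints min(N, M) = 3 almost surely
```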
Case 1: $N_e\leq N$ and $0\leq N_c\leq \frac{N_e}{2}$ {#AchScheme1}
-----------------------------------------------------
The s.d.o.f. for this case is equal to $N+N_c-N_e$, i.e., integer valued, for which we utilize Gaussian signaling and cooperative jamming. Since $N_e\leq N$, the transmitter exploits this advantage by sending a part of its signal invisible to the eavesdropper. There is no need for linear precoding at the cooperative jammer for this case. Increasing the number of the cooperative jammer antennas, $N_c$, increases the s.d.o.f. of the channel.
The transmitted signals, ${\bold{X}_t}$ and ${\bold{X}_c}$, are given by (\[eq:Xt1\_Xc1\]) with $d=N+N_c-N_e$, $l=N_c$, ${\bold{U}_t}\sim \mathcal{CN}(\bold{0},\bar{P}\bold{I}_d)$, ${\bold{V}_c}\sim\mathcal{CN}(\bold{0},\bar{P}\bold{I}_l)$, ${\bold{P}_c}={\bold{I}}_{l}$, and $$\begin{aligned}
\label{eq:Pt}
{\bold{P}_t}=\left[{\bold{P}_{t,a}}\;\;{\bold{P}_{t,n}}\right]\in\mathbb{C}^{N\times d},\end{aligned}$$ where ${\bold{P}_{t,a}}={\bold{G}_t^{\dagger}}{\bold{G}_c}$ in order to align the information streams over the cooperative jamming streams at the eavesdropper, and the $N-N_e$ columns of ${\bold{P}_{t,n}}$ are chosen to span $\mathcal{N}({\bold{G}_t})$. $\bar{P}=\frac{1}{\alpha} P$, in accordance with the power constraints on the transmitted signals at the transmitter and the cooperative jammer, where $\alpha=\max\left\{l,\sum_{i=1}^d ||{\bold{p}}_{t,i}||^2\right\}$ is a constant which does not depend on the power $P$.
Since $N_c\leq \frac{N_e}{2}$, the total number of superposed received streams at the receiver, $2N_c+N-N_e$, is less than or equal to the number of its available spatial dimensions, $N$. Thus, the receiver can decode all the information and cooperative jamming streams at high SNR. Using (\[eq:Yr1\]), (\[eq:Ye1\]), and (\[eq:Xt1\_Xc1\]), the received signals at the receiver and the eavesdropper are $$\begin{aligned}
\label{eq:Yr3_2}
{\bold{Y}_r}&=\begin{bmatrix}{\bold{H}_t}{\bold{P}_t}&{\bold{H}_c}\\\end{bmatrix}\begin{bmatrix}{\bold{U}_t}\\{\bold{V}_c}\\\end{bmatrix}+{\bold{Z}_r},\\
\label{eq:Ye3_2}
{\bold{Y}_e}&=\begin{bmatrix}{\bold{G}_t}{\bold{G}_t^{\dagger}}{\bold{G}_c}&{\bold{0}}_{N_e\times (N-N_e)}\\\end{bmatrix}\begin{bmatrix}{{\bold{U}_t}_1^l}\\{{\bold{U}_t}_{l+1}^d}\\\end{bmatrix}+{\bold{G}_c}{\bold{V}_c}+{\bold{Z}_e}\\
\label{eq:Ye3_3}
&={\bold{G}_c}({{\bold{U}_t}_1^l}+{\bold{V}_c})+{\bold{Z}_e}.\end{aligned}$$ We lower bound the secrecy rate in (\[eq:AchSecRate\]) as follows. First, in order to compute $I({\bold{X}_t};{\bold{Y}_r})$, we show that the matrix $\left[{\bold{H}_t}{\bold{P}_t}\;\;{\bold{H}_c}\right]\in\mathbb{C}^{N\times (d+l)}$ in (\[eq:Yr3\_2\]) is full column-rank a.s.
The columns of ${\bold{P}_{t,a}}={\bold{G}_t^{\dagger}}{\bold{G}_c}$ are linearly independent a.s. due to the randomly generated channel gains, and the $N-N_e$ columns of ${\bold{P}_{t,n}}$ are linearly independent as well, since they span an $N-N_e$-dimensional subspace. In addition, each of the columns of ${\bold{P}_{t,a}}$ is linearly independent from the columns of ${\bold{P}_{t,n}}$ a.s. since ${\bold{G}_t}{\bold{P}_{t,a}}={\bold{G}_c}$, and hence ${\bold{G}_t}{\bold{p}}_{t_i}\neq {\bold{0}}$ for all $i=1,2,\cdots,l$. Thus ${\bold{P}_t}=[{\bold{P}_{t,a}}\;{\bold{P}_{t,n}}]$ is full column rank a.s. The matrix $\left[{\bold{H}_t}{\bold{P}_t}\;\;{\bold{H}_c}\right]$ can be written as $$\begin{aligned}
\label{eq:Ach_1_1}
\begin{bmatrix}{\bold{H}_t}{\bold{P}_t}&{\bold{H}_c}\\\end{bmatrix}=\begin{bmatrix}{\bold{H}_t}&{\bold{H}_c}\\\end{bmatrix}\begin{bmatrix}{\bold{P}_t}&{\bold{0}}_{N\times l}\\{\bold{0}}_{l\times d}&{\bold{I}}_{l}\\\end{bmatrix}.\end{aligned}$$ The matrix $\left[{\bold{H}_t}\;\;{\bold{H}_c}\right]$ has all of its entries independently and randomly drawn according to a continuous distribution, while the second matrix on the right hand side (RHS) of (\[eq:Ach\_1\_1\]) is full column rank a.s. By applying Lemma \[lemma1\] to (\[eq:Ach\_1\_1\]), we have that the matrix $\left[{\bold{H}_t}{\bold{P}_t}\;\;{\bold{H}_c}\right]$ is full column rank a.s. Thus, using (\[eq:Yr3\_2\]), we obtain the lower bound $$\begin{aligned}
\label{eq:Ach_1_2}
I({\bold{X}_t};{\bold{Y}_r})\geq d\log P+o(\log P).\end{aligned}$$
Next, using (\[eq:Ye3\_3\]), we upper bound $I({\bold{X}_t};{\bold{Y}_e})$ as follows: $$\begin{aligned}
\label{eq:Ach_1_3}
I({\bold{X}_t};{\bold{Y}_e})&=h({\bold{Y}_e})-h({\bold{Y}_e}|{\bold{X}_t})\\
\label{eq:Ach_1_4}
&=h({\bold{G}_c}({{\bold{U}_t}_1^l}+{\bold{V}_c})+{\bold{Z}_e})-h({\bold{G}_c}{\bold{V}_c}+{\bold{Z}_e})\\
\label{eq:Ach_1_5}
&=\log\frac{{\rm{det}}({\bold{I}}_{N_e}+2\bar{P}{\bold{G}_c}{\bold{G}_c^H})}{{\rm{det}}({\bold{I}}_{N_e}+\bar{P}{\bold{G}_c}{\bold{G}_c^H})}\\
\label{eq:Ach_1_6}
&=\log\frac{{\rm{det}}({\bold{I}}_{l}+2\bar{P}{\bold{G}_c^H}{\bold{G}_c})}{{\rm{det}}({\bold{I}}_{l}+\bar{P}{\bold{G}_c^H}{\bold{G}_c})}\\
\label{eq:Ach_1_7}
&=\log\frac{2^{l}{\rm{det}}(\frac{1}{2}{\bold{I}}_{l}+\bar{P}{\bold{G}_c^H}{\bold{G}_c})}{{\rm{det}}({\bold{I}}_{l}+\bar{P}{\bold{G}_c^H}{\bold{G}_c})}\\
\label{eq:Ach_1_8}
&\leq l.\end{aligned}$$
where (\[eq:Ach\_1\_8\]) follows since $\frac{1}{2}{\bold{I}}_{l}+\bar{P}{\bold{G}_c^H}{\bold{G}_c}\preceq {\bold{I}}_{l}+\bar{P}{\bold{G}_c^H}{\bold{G}_c}$, and hence the determinant ratio in (\[eq:Ach\_1\_7\]) is at most one. Substituting (\[eq:Ach\_1\_2\]) and (\[eq:Ach\_1\_8\]) in (\[eq:AchSecRate\]), we have $$\begin{aligned}
\label{eq:Ach_1_9}
R_s&\geq d\log P+o(\log P)-l\\
\label{eq:Ach_1_10}
&=(N+N_c-N_e)\log P+o(\log P)-N_c,\end{aligned}$$ and hence, using (\[eq:sdof\]), we conclude that the achievable s.d.o.f. is $D_s\geq N+N_c-N_e$.
Case 2: $N_e\leq N, \frac{N_e}{2}<N_c\leq N$, and $N_e$ is even
---------------------------------------------------------------
[\[AchScheme2\]]{} Unlike case $1$, the s.d.o.f. for this case does not increase by increasing $N_c$. For all $N_c$ in this case, the transmitter sends the same number of information streams, while the cooperative jammer utilizes a linear precoder which allows for discarding any unnecessary antennas. The s.d.o.f. here is integer valued, and we use Gaussian signaling for transmission and cooperative jamming.
In particular, for $N_e$ even, $N_c=\frac{N_e}{2}$, and $N_e\leq N$, the achievable s.d.o.f., using the scheme in Section \[AchScheme1\], is equal to $N-\frac{N_e}{2}$. However, from (\[eq:conv\_3\_1\]), we observe that the s.d.o.f. is upper bounded by $N-\frac{N_e}{2}$ for all $\frac{N_e}{2}<N_c\leq {N}$. Thus, when $N_e\leq N$ and $N_e$ is even, the scheme for $N_c=\frac{N_e}{2}$ in Section \[AchScheme1\] can be used to achieve the s.d.o.f. for all $\frac{N_e}{2}<N_c\leq N$, where the cooperative jammer uses the precoder $$\begin{aligned}
\label{eq:Ach_2_1}
{\bold{P}_c}=\begin{bmatrix}{\bold{I}}_{l}\\{\bold{0}}_{(N_c-l)\times l}\\ \end{bmatrix},\end{aligned}$$ with $l=\frac{N_e}{2}$, to utilize only $\frac{N_e}{2}$ out of its $N_c$ antennas, and the transmitter utilizes $$\begin{aligned}
\label{eq:Ach_2_2}
{\bold{P}_t}=\left[{\bold{P}_{t,a}}\;{\bold{P}_{t,n}}\right],\end{aligned}$$ where ${\bold{P}_{t,a}}={\bold{G}_t^{\dagger}}{\bold{G}_c}{\bold{P}_c}\in\mathbb{C}^{N\times l}$ and ${\bold{P}_{t,n}}\in\mathbb{C}^{N\times (N-N_e)}$ is defined as in (\[eq:Pt\]), in order to send $d=N-\frac{N_e}{2}$ Gaussian information streams. Following the same analysis as in the previous case, the achievable s.d.o.f. is $N-\frac{N_e}{2}$ for all $\frac{N_e}{2}<N_c\leq N$, where $N_e$ is even and $N_e\leq N$.
Case 3: $N_e\leq N$, $\frac{N_e}{2}< N_c \leq N$, and $N_e$ is odd
------------------------------------------------------------------
[\[AchScheme3\]]{} The s.d.o.f. for this case is equal to $N-\frac{N_e}{2}$, which is not an integer. As Gaussian signaling cannot achieve a fractional s.d.o.f. for the channel, we utilize structured signaling for both transmission and cooperative jamming in this case. In particular, we propose utilizing [*[joint]{}*]{} signal space alignment and the complex field equivalent of real interference alignment [@maddah2010degrees; @kleinbock2002baker].
The decoding scheme at the receiver is as follows. The receiver projects its received signal onto a direction that is orthogonal to all but one information stream and one cooperative jamming stream. Then, the receiver decodes these two streams from the projection using the complex field analogue of real interference alignment. Finally, the receiver removes the decoded information and cooperative jamming streams from its received signal, leaving $N-1$ spatial dimensions for the remaining $N-\frac{N_e+1}{2}$ information and $\frac{N_e-1}{2}$ cooperative jamming streams.
The transmitted signals are given by (\[eq:Xt1\_Xc1\]), with $d=N-\frac{N_e-1}{2}$ and $l=\frac{N_e+1}{2}$, where ${\bold{P}_c}$ and ${\bold{P}_t}$ are defined as in (\[eq:Ach\_2\_1\]) and (\[eq:Ach\_2\_2\]), and $U_i={U_{i,{{\rm{Re}}}}}+j{U_{i,{{\rm{Im}}}}}$, $V_k={V_{k,{{\rm{Re}}}}}+j{V_{k,{{\rm{Im}}}}}$, for $i=2,3,\cdots,d$ and $k=2,3,\cdots,l$. The random variables $U_1$, $V_1$, $\{{U_{i,{{\rm{Re}}}}}\}_{i=2}^{d}$, $\{{U_{i,{{\rm{Im}}}}}\}_{i=2}^{d}$, $\{{V_{i,{{\rm{Re}}}}}\}_{i=2}^{l}$, and $\{{V_{i,{{\rm{Im}}}}}\}_{i=2}^{l}$ are i.i.d. uniform over the set $\left\{a(-Q,Q)_{\mathbb{Z}}\right\}$. The values for $a$ and the integer $Q$ are chosen as $$\begin{aligned}
\label{eq:Q}
Q&=\left\lfloor P^{\frac{1-\epsilon}{2+\epsilon}}\right\rfloor =P^{\frac{1-\epsilon}{2+\epsilon}}-\nu\\
\label{eq:a}
a&=\gamma P^{\frac{3\epsilon}{2(2+\epsilon)}},\end{aligned}$$ in order to satisfy the power constraints, where $\epsilon$ is an arbitrarily small positive number, and $\nu,\gamma$ are constants that do not depend on the power $P$. Justification for the choice of $a$ and $Q$ is provided in Appendix D.
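As a brief sanity check (ours; the detailed justification is in Appendix D), the choices in (\[eq:Q\]) and (\[eq:a\]) are consistent with the power constraint at high SNR: since $0\leq\nu<1$, $$a^2Q^2=\gamma^2 P^{\frac{3\epsilon}{2+\epsilon}}\left(P^{\frac{1-\epsilon}{2+\epsilon}}-\nu\right)^2\leq \gamma^2 P^{\frac{3\epsilon}{2+\epsilon}}P^{\frac{2(1-\epsilon)}{2+\epsilon}}=\gamma^2 P,$$ so the peak power of each real signaling dimension is at most $\gamma^2 P$, and $\gamma$ is chosen in each case to account for the precoder gains and the number of streams so that the total transmit and jamming powers do not exceed $P$, while $Q$, and hence the rate carried by each stream, still grows polynomially in $P$.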
The received signal at the eavesdropper is $$\begin{aligned}
\label{eq:Ach_3_3}
{\bold{Y}_e}&={\tilde{\bold{G}}_c}({{\bold{U}_t}_1^l}+{\bold{V}_c})+{\bold{Z}_e},\end{aligned}$$ where ${\tilde{\bold{G}}_c}={\bold{G}_c}{\bold{P}_c}$. We upper bound the second term in (\[eq:AchSecRate\]), $I({\bold{X}_t};{\bold{Y}_e})$, as follows: $$\begin{aligned}
\label{eq:Ach_3_4}
I({\bold{X}_t};{\bold{Y}_e})&\leq I({\bold{X}_t};{\bold{Y}_e},{\bold{Z}_e})\\
\label{eq:Ach_3_5}
&=I({\bold{X}_t};{\bold{Y}_e}|{\bold{Z}_e})\\
\label{eq:Ach_3_6}
&=H({\bold{Y}_e}|{\bold{Z}_e})-H({\bold{Y}_e}|{\bold{Z}_e},{\bold{X}_t})\\
\label{eq:Ach_3_7}
&=H\left({\tilde{\bold{G}}_c}({{\bold{U}_t}_1^l}+{\bold{V}_c})\right)-H\left({\tilde{\bold{G}}_c}{\bold{V}_c}\right)\\
\label{eq:Ach_3_8}
&=H({{\bold{U}_t}_1^l}+{\bold{V}_c})-H({\bold{V}_c})\\
\label{eq:Ach_3_9}
&\nonumber =H\left(U_1+V_1,{U_{2,{{\rm{Re}}}}}+{V_{2,{{\rm{Re}}}}},{U_{2,{{\rm{Im}}}}}+{V_{2,{{\rm{Im}}}}},\cdots,{U_{l,{{\rm{Re}}}}}+{V_{l,{{\rm{Re}}}}},{U_{l,{{\rm{Im}}}}}+{V_{l,{{\rm{Im}}}}}\right)\\
&\qquad\qquad\qquad-H\left(V_1,{V_{2,{{\rm{Re}}}}},{V_{2,{{\rm{Im}}}}},\cdots,{V_{l,{{\rm{Re}}}}},{V_{l,{{\rm{Im}}}}}\right)\\
\label{eq:Ach_3_10}
&\leq \log (4Q+1)^{2l-1}-\log(2Q+1)^{2l-1}\\
\label{eq:Ach_3_11}
&=(2l-1)\log\frac{4Q+1}{2Q+1}\\
\label{eq:Ach_3_12}
&\leq 2l-1,\end{aligned}$$ where (\[eq:Ach\_3\_5\]) follows since ${\bold{X}_t}$ and ${\bold{Z}_e}$ are independent, and (\[eq:Ach\_3\_10\]) follows since the entropy of a uniform random variable over the set $\left\{a(-2Q,2Q)_{\mathbb{Z}}\right\}$ upper bounds the entropy of each of $U_1+V_1,{U_{2,{{\rm{Re}}}}}+{V_{2,{{\rm{Re}}}}},{U_{2,{{\rm{Im}}}}}+{V_{2,{{\rm{Im}}}}},\cdots, {U_{l,{{\rm{Im}}}}}+{V_{l,{{\rm{Im}}}}}$. Equation (\[eq:Ach\_3\_8\]) follows since the mappings ${{\bold{U}_t}_1^l}+{\bold{V}_c}\mapsto{\tilde{\bold{G}}_c}({{\bold{U}_t}_1^l}+{\bold{V}_c})$ and ${\bold{V}_c}\mapsto{\tilde{\bold{G}}_c}{\bold{V}_c}$ are bijective. The reason for this is that the entries of ${\tilde{\bold{G}}_c}$ are [*[rationally independent]{}*]{}, and that $({{\bold{U}_t}_1^l}+{\bold{V}_c})$, ${\bold{V}_c}$ belong to $\mathbb{Z}^l[j]$.
\[definition2\] A set of complex numbers $\{c_1,c_2,\cdots,c_L\}$ is rationally independent, i.e., linearly independent over $\mathbb{Q}$, if there is no set of rational numbers $\{r_i\}_{i=1}^L$, not all zero, such that $\sum_{i=1}^L r_i c_i=0$.
Next, we derive a lower bound for $I({\bold{X}_t};{\bold{Y}_r})$. The received signal at the legitimate receiver is given by $$\begin{aligned}
\label{eq:Ach_3_14}
{\bold{Y}_r}&={\bold{A}}{\bold{U}_t}+{\bold{H}'_c}{\bold{V}_c}+{\bold{Z}_r},\end{aligned}$$ where ${\bold{A}}={\bold{H}_t}{\bold{P}_t}=\left[{\bold{a}}_1\;{\bold{a}}_2\;\cdots\;{\bold{a}}_{d}\right]$ and ${\bold{H}'_c}={\bold{H}_c}{\bold{P}_c}=\left[{\bold{h}}_{c,1}\;{\bold{h}}_{c,2}\;\cdots\;\bold{h}_{c,l}\right]$. The receiver chooses ${\bold{b}}\in\mathbb{C}^{N}$ such that ${\bold{b}}\perp{\rm{span}}\left\{{\bold{a}}_2,\cdots,{\bold{a}}_d,{\bold{h}}_{c,2},\cdots,\bold{h}_{c,l}\right\}$ and obtains $$\begin{aligned}
{\widetilde{\bold{Y}}_r}=\bold{D}{\bold{Y}_r}\end{aligned}$$ where $$\begin{aligned}
\label{eq:D}
\bold{D}=\begin{bmatrix}
\qquad\quad{\bold{b}^H}\\\bold{0}_{(N-1)\times 1}&\bold{I}_{N-1}
\end{bmatrix}.\end{aligned}$$
Due to the fact that channel gains are continuous and randomly generated, ${\bold{a}}_1$ and ${\bold{h}}_{c,1}$ are linearly independent from ${\rm{span}}\left\{{\bold{a}}_2,\cdots,{\bold{a}}_d,{\bold{h}}_{c,2},\cdots,\bold{h}_{c,l}\right\}$, and hence, ${\bold{b}}$ is not orthogonal to ${\bold{a}}_1$ and ${\bold{h}}_{c,1}$ a.s. Thus, we have $$\begin{aligned}
\label{eq:Ach_3_15}
{\widetilde{\bold{Y}}_r}=\begin{bmatrix}{\widetilde{Y}_{r_1}}\\{\widetilde{\bold{Y}}_{r_2}^N}\\\end{bmatrix}
=\begin{bmatrix}{\bold{b}^H}{\bold{a}}_1\;\;{\bold{0}}_{1\times (d-1)}\\{\widetilde{\bold{A}}}\\\end{bmatrix}\begin{bmatrix}U_1\\{{\bold{U}_t}_2^d}\\\end{bmatrix}+
\begin{bmatrix}{\bold{b}^H}{\bold{h}}_{c,1}\;\;{\bold{0}}_{1\times (l-1)}\\{\widetilde{\bold{H}}_c}\\\end{bmatrix}\begin{bmatrix}V_1\\{{\bold{V}_c}_2^l}\\\end{bmatrix}+
\begin{bmatrix}{\bold{b}^H}{\bold{Z}_r}\\{{\bold{Z}_r}_2^N}\\\end{bmatrix},\end{aligned}$$ where ${\widetilde{\bold{A}}}=\left[{\tilde{\bold{a}}}_1\;{\tilde{\bold{a}}}_2\;\cdots\;{\tilde{\bold{a}}}_d\right]\in\mathbb{C}^{(N-1)\times d}$, ${\tilde{\bold{a}}}_i={{\bold{a}}_{i}}_2^N$ for all $i=1,2,\cdots,d$. Similarly, ${\widetilde{\bold{H}}_c}=[{\tilde{\bold{h}}}_{c,1}\;{\tilde{\bold{h}}}_{c,2}\;\cdots\;{\tilde{\bold{h}}}_{c,{l}}]\in\mathbb{C}^{(N-1)\times l}$, where ${\tilde{\bold{h}}}_{c,i}={{\bold{h}}_{c,i}}_2^N$ for all $i=1,2,\cdots,l$.
Next, the receiver uses ${\widetilde{Y}_{r_1}}$ to decode the information stream $U_1$ and the cooperative jamming stream $V_1$ as follows. Let $Z'={\bold{b}^H}{\bold{Z}_r}\sim\mathcal{CN}(0,||{\bold{b}}||^2)$, $f_1={\bold{b}^H}{\bold{a}}_1$, and $f_2={\bold{b}^H}{\bold{h}}_{c,1}$. Thus, ${\widetilde{Y}_{r_1}}$ is given by $$\begin{aligned}
\label{eq:Ach_3_16}
{\widetilde{Y}_{r_1}}=f_1U_1+f_2V_1+Z'.\end{aligned}$$ Once again, with randomly generated channel gains, $f_1={\bold{b}^H}{\bold{a}}_1$ and $f_2={\bold{b}^H}{\bold{h}}_{c,1}$ are rationally independent a.s. Thus, the mapping $(U_1,V_1)\mapsto f_1U_1+f_2V_1$ is invertible [@motahari2009real2]. The receiver employs a hard decision decoder which maps ${\widetilde{Y}_{r_1}}\in\tilde{\mathcal{Y}}_{r_1}$ to the nearest point in the constellation $\mathcal{R}_1=f_1\mathcal{U}_1+f_2\mathcal{V}_1$, where $\mathcal{U}_1,\mathcal{V}_1=\left\{a(-Q,Q)_{\mathbb{Z}}\right\}$. Then, the receiver passes the output of the hard decision decoder through the bijective mapping $f_1U_1+f_2V_1\mapsto(U_1,V_1)$ in order to decode both $U_1$ and $V_1$.
The receiver can now use $$\begin{aligned}
\label{eq:Ach_3_17}
{\bar{\bar{\bold{Y}}}_r}&={\widetilde{\bold{Y}}_{r_2}^N}-{\tilde{\bold{a}}}_1 U_1-{\tilde{\bold{h}}}_{c,1} V_1\\
\label{eq:Ach_3_18}
&=\begin{bmatrix}{\tilde{\bold{a}}}_2&\cdots&{\tilde{\bold{a}}}_{d}\\\end{bmatrix}{{\bold{U}_t}_2^d}+\begin{bmatrix}{\tilde{\bold{h}}}_{c,2}&\cdots&{\tilde{\bold{h}}}_{c,l}\\\end{bmatrix}{{\bold{V}_c}_2^l}+{{\bold{Z}_r}_2^N}\\
\label{eq:Ach_3_19}
&=\bold{B} \begin{bmatrix}{{\bold{U}_t}_2^d}\\{{\bold{V}_c}_2^l}\\\end{bmatrix}+{{\bold{Z}_r}_2^N},\end{aligned}$$ to decode $U_2,\cdots,U_d$, where, $$\begin{aligned}
\label{eq:Ach_3_20}
\bold{B}=\left[{\tilde{\bold{a}}}_2\;\cdots\;{\tilde{\bold{a}}}_{d}\;\;{\tilde{\bold{h}}}_{c,2}\;\cdots\;{\tilde{\bold{h}}}_{c,l}\right]\in\mathbb{C}^{(N-1)\times (N-1)},\end{aligned}$$ is full rank a.s. To show that $\bold{B}$ is full rank a.s., let ${\bar{\bold{H}}_t}$ and ${\bar{\bold{H}}_c}$ be generated by removing the first row from ${\bold{H}_t}$ and ${\bold{H}_c}$, and let ${\bar{\bold{P}}_t}$ and ${\bar{\bold{P}}_c}$ be generated by removing the first column from ${\bold{P}_t}$ and ${\bold{P}_c}$, respectively. $\bold{B}$ can be rewritten as $$\begin{aligned}
\label{eq:Ach_3_20_1}
\bold{B}=\begin{bmatrix}{\bar{\bold{H}}_t}&{\bar{\bold{H}}_c}\\\end{bmatrix}\begin{bmatrix}{\bar{\bold{P}}_t}&{\bold{0}}_{N\times (l-1)}\\{\bold{0}}_{N_c\times (d-1)}&{\bar{\bold{P}}_c}\\\end{bmatrix}.\end{aligned}$$ Note that $\left[{\bar{\bold{H}}_t}\;{\bar{\bold{H}}_c}\right]$ has all of its entries independently and randomly drawn from a continuous distribution, and the second matrix in the RHS of (\[eq:Ach\_3\_20\_1\]) is full column rank. Using Lemma \[lemma1\], the matrix $\bold{B}$ is full rank a.s.
Hence, by zero forcing, the receiver obtains $$\begin{aligned}
\label{eq:Ach_3_21}
{\widehat{\bold{Y}}_r}&=\bold{B}^{-1}{\bar{\bar{\bold{Y}}}_r}=\begin{bmatrix}{{\bold{U}_t}_2^d}\\{{\bold{V}_c}_2^l}\\\end{bmatrix}+\bar{\bold{Z}}_r,\end{aligned}$$ where $\bar{\bold{Z}}_r=\bold{B}^{-1}{{\bold{Z}_r}_2^N}\sim\mathcal{CN}\left({\bold{0}},\bold{B}^{-1}\bold{B}^{-H}\right)$. Thus, at high SNR, the receiver can decode the other information streams, $U_2,\cdots,U_d$, from ${\widehat{\bold{Y}}_r}$.
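The following sketch (ours, purely illustrative: the effective matrices ${\bold{H}_t}{\bold{P}_t}$ and ${\bold{H}_c}{\bold{P}_c}$ are replaced by random matrices, the constellation size is tiny, and a fixed small noise level stands in for the actual power scaling) walks through the receiver processing described above: projection along ${\bold{b}}$, hard-decision decoding of $(U_1,V_1)$ over the constellation $f_1\mathcal{U}_1+f_2\mathcal{V}_1$, cancellation, and zero-forcing of the remaining streams.

```python
import numpy as np
import itertools

rng = np.random.default_rng(1)
N, Ne = 5, 3                                   # Case 3 example: Ne odd, Ne <= N
d, l = N - (Ne - 1) // 2, (Ne + 1) // 2        # 4 information and 2 jamming streams
Q, a = 3, 1.0                                  # tiny constellation for illustration

def crandn(m, n):
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

A, Hc_eff = crandn(N, d), crandn(N, l)         # stand-ins for Ht @ Pt and Hc @ Pc

# b orthogonal to a_2..a_d and h_{c,2}..h_{c,l}
S = np.hstack([A[:, 1:], Hc_eff[:, 1:]])       # N x (N-1)
b = np.linalg.svd(S)[0][:, -1]                 # spans the orthogonal complement of range(S)
f1, f2 = b.conj() @ A[:, 0], b.conj() @ Hc_eff[:, 0]

# draw the streams: U1, V1 real PAM; the remaining streams complex
pam = a * np.arange(-Q, Q + 1)
U = np.concatenate([[rng.choice(pam)], rng.choice(pam, d - 1) + 1j * rng.choice(pam, d - 1)])
V = np.concatenate([[rng.choice(pam)], rng.choice(pam, l - 1) + 1j * rng.choice(pam, l - 1)])
y = A @ U + Hc_eff @ V + 1e-4 * crandn(N, 1).ravel()

# projection and hard-decision decoding of (U1, V1): the map (U1,V1) -> f1*U1+f2*V1 is invertible
y1 = b.conj() @ y
u1, v1 = min(itertools.product(pam, pam), key=lambda p: abs(y1 - f1 * p[0] - f2 * p[1]))

# cancellation of the decoded streams, then zero-forcing of the remaining N-1 streams
B = np.hstack([A[1:, 1:], Hc_eff[1:, 1:]])     # (N-1) x (N-1), full rank a.s.
rest = np.linalg.solve(B, y[1:] - A[1:, 0] * u1 - Hc_eff[1:, 0] * v1)
U_hat = np.concatenate([[u1], np.round(rest[:d - 1].real) + 1j * np.round(rest[:d - 1].imag)])
print("all information streams decoded correctly:", np.allclose(U_hat, U))
```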
The mutual information between the transmitter and receiver is lower bounded as follows: $$\begin{aligned}
\label{eq:Ach_3_22}
I({\bold{X}_t};{\bold{Y}_r})&\geq I({\bold{U}_t};{\widetilde{\bold{Y}}_r})\\
\label{eq:Ach_3_23}
&=I(U_1,{{\bold{U}_t}_2^d};{\widetilde{Y}_{r_1}},{\widetilde{\bold{Y}}_{r_2}^N})\\
\label{eq:Ach_3_24}
&=I(U_1,{{\bold{U}_t}_2^d};{\widetilde{Y}_{r_1}})+I(U_1,{{\bold{U}_t}_2^d};{\widetilde{\bold{Y}}_{r_2}^N}|{\widetilde{Y}_{r_1}})\\
\label{eq:Ach_3_25}
&=I(U_1;{\widetilde{Y}_{r_1}})+I({{\bold{U}_t}_2^d};{\widetilde{Y}_{r_1}}|U_1)+I(U_1;{\widetilde{\bold{Y}}_{r_2}^N}|{\widetilde{Y}_{r_1}})+I({{\bold{U}_t}_2^d};{\widetilde{\bold{Y}}_{r_2}^N}|U_1,{\widetilde{Y}_{r_1}})\\
\label{eq:Ach_3_26}
&\geq I(U_1;{\widetilde{Y}_{r_1}})+I({{\bold{U}_t}_2^d};{\widetilde{\bold{Y}}_{r_2}^N}|U_1,{\widetilde{Y}_{r_1}}),\end{aligned}$$ where (\[eq:Ach\_3\_22\]) follows since ${\bold{U}_t}-{\bold{X}_t}-{\bold{Y}_r}-{\widetilde{\bold{Y}}_r}$ forms a Markov chain. We next lower bound each term in the RHS of (\[eq:Ach\_3\_26\]).
We lower bound the first term, $I(U_1;{\widetilde{Y}_{r_1}})$, as follows (see also [@motahari2009real2; @maddah2010degrees]). Let $P_{e_1}$ denote the probability of error in decoding $U_1$ at the receiver, i.e., $P_{e_1}={{\rm{Pr}}}\left\{\hat{U}_1\neq U_1\right\}$, where $\hat{U}_i$, $i=1,2,\cdots,d$, is the estimate of $U_i$ at the legitimate receiver. Thus, using Fano’s inequality, we have $$\begin{aligned}
\label{eq:Ach_3_27}
I(U_1;{\widetilde{Y}_{r_1}})&=H(U_1)-H(U_1|{\widetilde{Y}_{r_1}})\\
\label{eq:Ach_3_28}
& \geq H(U_1)-1-P_{e_1}\log |\mathcal{U}_1|\\
\label{eq:Ach_3_29}
&=\left(1-P_{e_1}\right)\log(2Q+1)-1.\end{aligned}$$ From (\[eq:Ach\_3\_16\]), since the mapping $(U_1,V_1)\mapsto f_1 U_1+f_2V_1$ is invertible, the only source of error in decoding $U_1$ from ${\widetilde{Y}_{r_1}}$ is the additive Gaussian noise $Z'$. Note that, since $Z'\sim\mathcal{CN}(0,||{\bold{b}}||^2)$, ${{\rm{Re}}}\{Z'\}$ and ${{\rm{Im}}}\{Z'\}$ are i.i.d. with $\mathcal{N}\left(0,\frac{||{\bold{b}}||^2}{2}\right)$ distribution, and $|Z'|\sim {\rm{Rayleigh}}\left(\frac{||{\bold{b}}||}{\sqrt{2}}\right)$. Thus, we have $$\begin{aligned}
\label{eq:Ach_3_30}
P_{e_1}&={{\rm{Pr}}}\left\{\hat{U}_1\neq U_1\right\}\\
\label{eq:Ach_3_31}
&\leq {{\rm{Pr}}}\left\{(\hat{U}_1,\hat{V}_1)\neq (U_1,V_1)\right\}\\
\label{eq:Ach_3_32}
&\leq {{\rm{Pr}}}\left\{|Z'|\geq \frac{{d_{\rm{min}}}}{2}\right\}\\
\label{eq:Ach_3_33}
&=\exp\left(\frac{-{d_{\rm{min}}}^2}{4||{\bold{b}}||^2}\right),\end{aligned}$$ where ${d_{\rm{min}}}$ is the minimum distance between the points in the constellation $\mathcal{R}_1=f_1\mathcal{U}_1+f_2\mathcal{V}_1$.
In order to upper bound $P_{e_1}$, we lower bound ${d_{\rm{min}}}$. To do so, similar to [@maddah2010degrees], we extend real interference alignment [@motahari2009real2] to complex channels. In particular, we utilize the following results from number theory:
\[definition1\][@kleinbock2002baker] The Diophantine exponent $\omega(\bold{z})$ of $\bold{z}\in\mathbb{C}^n$ is defined as $$\begin{aligned}
\label{eq:Ach_3_34}
\omega(\bold{z})=\sup\left\{v:|p+\bold{z}.\bold{q}|\leq (||\bold{q}||_{\infty})^{-v} \text{ for infinitely many } \bold{q}\in\mathbb{Z}^n, p\in\mathbb{Z}\right\},\end{aligned}$$ where $\bold{q}=[q_1\;q_2\;\cdots\;q_n]^T$ and $||\bold{q}||_\infty=\underset{i}\max|q_i|$.
\[lemma2\][@kleinbock2002baker] For almost all $\bold{z}\in\mathbb{C}^n$, the Diophantine exponent $\omega(\bold{z})$ is equal to $\frac{n-1}{2}$.
Lemma \[lemma2\] implies the following:
\[Corollary1\] For almost all $\bold{z}\in\mathbb{C}^n$ and for all $\epsilon>0$, $$\begin{aligned}
\label{eq:Ach_3_35}
|p+\bold{z}.\bold{q}|>(\underset{i}\max|q_i|)^{-\frac{(n-1+\epsilon)}{2}},\end{aligned}$$ holds for all $\bold{q}\in\mathbb{Z}^n$ and $p\in\mathbb{Z}$ except for finitely many of them.
Since the number of integers that violate the inequality in (\[eq:Ach\_3\_35\]) is finite, there exists a constant $\kappa$ such that, for almost all $\bold{z}\in\mathbb{C}^n$ and all $\epsilon>0$, the inequality $$\begin{aligned}
\label{eq:Ach_3_35_1}
|p+\bold{z}.\bold{q}|>\kappa(\underset{i}\max|q_i|)^{-\frac{(n-1+\epsilon)}{2}},\end{aligned}$$ holds for all $\bold{q}\in\mathbb{Z}^n$ and $p\in\mathbb{Z}$.
Thus, for almost all channel gains, the minimum distance ${d_{\rm{min}}}$ is lower bounded as follows: $$\begin{aligned}
\label{eq:Ach_3_36}
{d_{\rm{min}}}&=\inf_{\substack{Y'_{r_1}, Y''_{r_1}\in\mathcal{R}_1\\ Y'_{r_1}\neq Y''_{r_1}}} |Y'_{r_1}-Y''_{r_1}|\\
\label{eq:Ach_3_37}
&=\inf_{\substack{U_1, V_1\in\left\{a(-2Q,2Q)_{\mathbb{Z}}\right\}\\ (U_1,V_1)\neq(0,0)}} |f_1 U_1+f_2 V_1|\\
\label{eq:Ach_3_38}
&=\inf_{\substack{U_1, V_1\in(-2Q,2Q)_{\mathbb{Z}}\\ (U_1,V_1)\neq(0,0)}} a |f_1| \left|U_1+\frac{f_2}{f_1} V_1\right|\\
\label{eq:Ach_3_39}
&\geq \kappa\frac{a |f_1|}{(2Q)^{\frac{\epsilon}{2}}}\\
\label{eq:Ach_3_40}
&\geq \kappa\gamma |f_1| 2^{-\frac{\epsilon}{2}} P^{\frac{\epsilon}{2}},\end{aligned}$$ where (\[eq:Ach\_3\_39\]) follows from (\[eq:Ach\_3\_35\_1\]), applied with $n=1$, $\bold{z}=\frac{f_2}{f_1}$, $p=U_1$, and $\bold{q}=V_1$, and (\[eq:Ach\_3\_40\]) follows by substituting (\[eq:Q\]) and (\[eq:a\]) in (\[eq:Ach\_3\_39\]).
\label{eq:Ach_3_41}
P_{e_1}\leq \exp(-\mu P^{\epsilon}),\end{aligned}$$ where $\mu=\frac{\kappa^2\gamma^2 |f_1|^2 2^{-\epsilon}}{4||{\bold{b}}||^2}$ is a constant which does not depend on the power $P$. Thus, using (\[eq:Ach\_3\_29\]) and (\[eq:Ach\_3\_41\]), we have $$\begin{aligned}
\label{eq:Ach_3_42}
I(U_1;{\widetilde{Y}_{r_1}})\geq \left(1-\exp(-\mu P^{\epsilon})\right)\log(2Q+1)-1.\end{aligned}$$
Next, we lower bound the second term in the RHS of (\[eq:Ach\_3\_26\]), $I({{\bold{U}_t}_2^d};{\widetilde{\bold{Y}}_{r_2}^N}|U_1,{\widetilde{Y}_{r_1}})$. Let $\widetilde{\bold{B}}=\begin{bmatrix}{\bold{0}}_{(N-1) \times 1}&{\bold{I}}_{N-1}\\\end{bmatrix}-\frac{1}{f_2}{\tilde{\bold{h}}}_{c,1}{\bold{b}^H}$, and $$\begin{aligned}
\label{eq:Ach_3_43}
{\bar{\bar{\bold{Y}}}'_{r}}&=\bold{B}\begin{bmatrix}{{\bold{U}_t}_2^d}\\{{\bold{V}_c}_2^l}\\\end{bmatrix}+\widetilde{\bold{B}}{\bold{Z}_r}\\
\label{eq:Ach_3_44}
{\widehat{\bold{Y}}'_r}&=\bold{B}^{-1}{\bar{\bar{\bold{Y}}}'_{r}}= \begin{bmatrix}{{\bold{U}_t}_2^d}\\{{\bold{V}_c}_2^l}\\\end{bmatrix}+\bold{B}^{-1}\widetilde{\bold{B}}{\bold{Z}_r},\end{aligned}$$ where $\bold{B}$ is defined as in (\[eq:Ach\_3\_20\]). Thus, we have $$\begin{aligned}
\label{eq:Ach_3_45}
I\left({{\bold{U}_t}_2^d};{\widetilde{\bold{Y}}_{r_2}^N}|U_1,{\widetilde{Y}_{r_1}}\right)&=I\left({{\bold{U}_t}_2^d};{\widetilde{\bold{A}}}{\bold{U}_t}+{\widetilde{\bold{H}}_c}{\bold{V}_c}+{{\bold{Z}_r}_2^N}\big|U_1,f_2 V_1+Z'\right)\\
\label{eq:Ach_3_46}
&=I\left({{\bold{U}_t}_2^d};\bold{B}\begin{bmatrix}{{\bold{U}_t}_2^d}\\{{\bold{V}_c}_2^l}\\\end{bmatrix}+{{\bold{Z}_r}_2^N}-\frac{1}{f_2}{\tilde{\bold{h}}}_{c,1}{\bold{b}^H}{\bold{Z}_r}\bigg|f_2 V_1+Z'\right)\\
\label{eq:Ach_3_47}
&=I({{\bold{U}_t}_2^d};{\bar{\bar{\bold{Y}}}'_{r}}|f_2 V_1+Z')\\
\label{eq:Ach_3_48}
&\geq I({{\bold{U}_t}_2^d};{\bar{\bar{\bold{Y}}}'_{r}})\\
\label{eq:Ach_3_49}
&\geq I({{\bold{U}_t}_2^d};{\widehat{\bold{Y}}'_r})\\
\label{eq:Ach_3_50}
&=H({{\bold{U}_t}_2^d})-H({{\bold{U}_t}_2^d}|{\widehat{\bold{Y}}'_r})\\
\label{eq:Ach_3_51}
&\geq H({{\bold{U}_t}_2^d})-P_{e_2}^d\log(2Q+1)^{2(d-1)}-1\\
\label{eq:Ach_3_52}
&= 2(d-1)\left(1-P_{e_2}^d\right)\log(2Q+1)-1,\end{aligned}$$ where $P_{e_2}^d={{\rm{Pr}}}\{(\hat{U}_2,\hat{U}_3,\cdots,\hat{U}_d)\neq (U_2,U_3,\cdots,U_d)\}$, (\[eq:Ach\_3\_45\]) follows from (\[eq:Ach\_3\_15\]), (\[eq:Ach\_3\_48\]) follows since ${{\bold{U}_t}_2^d}$ and $f_2 V_1+Z'$ are independent, (\[eq:Ach\_3\_49\]) follows since ${{\bold{U}_t}_2^d}-{\bar{\bar{\bold{Y}}}'_{r}}-{\widehat{\bold{Y}}'_r}$ forms a Markov chain, and (\[eq:Ach\_3\_51\]) follows from Fano’s inequality.
Let $\widehat{\bold{Z}}_r=\bold{\Theta}{\bold{Z}_r}=[\hat{Z}_{r_2}\;\cdots\;\hat{Z}_{r_N}]^T$, where $\bold{\Theta}=\bold{B}^{-1}\widetilde{\bold{B}}$. Thus, $\widehat{\bold{Z}}_r\sim\mathcal{CN}({\bold{0}},\bold{\Theta}\bold{\Theta}^H)$ and $|\hat{Z}_{r_i}|\sim {\rm{Rayleigh}}(\sigma_i)$, where $\sigma_i^2=\bold{\Theta}\bold{\Theta}^H(i,i)$, $i=2,3,\cdots,N$. Using the union bound, we have $$\begin{aligned}
\label{eq:Ach_3_53}
P_{e_2}^d&={{\rm{Pr}}}\left\{(\hat{U}_2,\hat{U}_3,\cdots,\hat{U}_d)\neq (U_2,U_3,\cdots,U_d)\right\}\\
\label{eq:Ach_3_54}
&\leq \sum_{i=2}^{d}{{\rm{Pr}}}\left\{\hat{U}_i\neq U_i\right\}\\
\label{eq:Ach_3_55}
&\leq\sum_{i=2}^d {{\rm{Pr}}}\left\{|\hat{Z}_{r_i}|\geq \frac{a}{2}\right\}\\
\label{eq:Ach_3_56}
&=\sum_{i=2}^d \exp\left(-\frac{a^2}{8\sigma_i^2}\right)\\
\label{eq:Ach_3_57}
&\leq (d-1)\exp\left(-\frac{\gamma^2}{8\sigma_{\rm{max}}^2}P^{\frac{3\epsilon}{2+\epsilon}}\right)\\
\label{eq:Ach_3_58}
&=\left(d-1\right)\exp(-\mu' P^{\epsilon'}),\end{aligned}$$ where $\sigma_{\rm{max}}=\underset{i}\max\;\sigma_i$, $\mu'=\frac{\gamma^2}{8\sigma_{\rm{max}}^2}$, $\epsilon'=\frac{3\epsilon}{2+\epsilon}$, and (\[eq:Ach\_3\_57\]) follows by substituting (\[eq:a\]) in (\[eq:Ach\_3\_56\]).
Substituting (\[eq:Ach\_3\_58\]) in (\[eq:Ach\_3\_52\]) yields $$\begin{aligned}
\label{eq:Ach_3_59}
I\left({{\bold{U}_t}_2^d};{\widetilde{\bold{Y}}_{r_2}^N}|U_1,{\widetilde{Y}_{r_1}}\right) &\geq \left(2d-2-2(d-1)^2\exp(-\mu' P^{\epsilon'})\right)\log(2Q+1)-1.\end{aligned}$$ Using (\[eq:Q\]), (\[eq:Ach\_3\_26\]), (\[eq:Ach\_3\_42\]), and (\[eq:Ach\_3\_59\]), we have $$\begin{aligned}
\label{eq:Ach_3_60}
I({\bold{X}_t}&;{\bold{Y}_r})\geq \left[2d-1-\exp(-\mu P^{\epsilon})-2(d-1)^2\exp(-\mu' P^{\epsilon'})\right]\log(2P^{\frac{1-\epsilon}{2+\epsilon}}-2\nu+1)-2\\
\label{eq:Ach_3_61}
&=\frac{1-\epsilon}{2+\epsilon}\left[2d-1-\exp(-\mu P^{\epsilon})-2(d-1)^2\exp(-\mu' P^{\epsilon'})\right]\log P+o(\log P).\end{aligned}$$
Using the upper bound in (\[eq:Ach\_3\_12\]) and the lower bound in (\[eq:Ach\_3\_61\]), we get $$\begin{aligned}
\label{eq:Ach_3_62}
&R_s \geq \frac{1-\epsilon}{2+\epsilon}\left[2d-1-\exp(-\mu P^{\epsilon})-2(d-1)^2\exp(-\mu' P^{\epsilon'})\right]\log P+o(\log P)-(2l-1)\\
&=\frac{1-\epsilon}{2+\epsilon}\left[2N-N_e-\exp(-\mu P^{\epsilon})-\frac{1}{2}(2N-N_e-1)^2\exp(-\mu' P^{\epsilon'})\right]\log P+o(\log P)-N_e.\end{aligned}$$ Thus, it follows that the s.d.o.f. is lower bounded as $$\begin{aligned}
\label{eq:Ach_3_63}
D_s \geq \frac{(1-\epsilon)(2N-N_e)}{2+\epsilon}. \end{aligned}$$ Since $\epsilon>0$ can be chosen arbitrarily small, we can achieve s.d.o.f. of $N-\frac{N_e}{2}$.
Case 4: $N_e\leq N$, $N<N_c\leq N+N_e$, and $N+N_c-N_e$ is even
---------------------------------------------------------------
[\[AchScheme4\]]{} Since $N_c>N$ for this case, the cooperative jammer, unlike the previous three cases, chooses its precoder such that $N_c-N$ of its jamming streams are sent invisible to the receiver, in order to allow for more space for the information streams at the receiver. The s.d.o.f. for this case is integer valued, which we can achieve using Gaussian information and cooperative jamming streams.
The transmitted signals are given by (\[eq:Xt1\_Xc1\]), with $d=\frac{N+N_c-N_e}{2}$, $l=\frac{N_c+N_e-N}{2}$, ${\bold{U}_t}\sim\mathcal{CN}\left({\bold{0}},\bar{P}{\bold{I}}_d\right)$, ${\bold{V}_c}\sim\mathcal{CN}\left({\bold{0}},\bar{P}{\bold{I}}_l\right)$, $$\begin{aligned}
\label{eq:Ach_4_0}
{\bold{P}_c}=[{\bold{P}_{c,{\rm{I}}}}\;{\bold{P}_{c,n}}],\end{aligned}$$ where ${\bold{P}_{c,{\rm{I}}}}$ is given by $$\begin{aligned}
\label{eq:Ach_4_1}
{\bold{P}_{c,{\rm{I}}}}=\begin{bmatrix}{\bold{I}}_{g}\\{\bold{0}}_{(N_c-g)\times g}\\\end{bmatrix},\end{aligned}$$ with $g=\frac{N_e+N-N_c}{2}$, and ${\bold{P}_{c,n}}\in\mathbb{C}^{N_c\times (N_c-N)}$ is a matrix whose columns span $\mathcal{N}({\bold{H}_c})$. ${\bold{P}_t}$ is defined as in Section \[AchScheme2\], and $\bar{P}=\frac{1}{\alpha'}P$, where $\alpha'=\max\left\{\sum_{i=1}^d||{\bold{p}}_{t,i}||^2, g+\sum_{i=g+1}^{l}||{\bold{p}}_{c,i}||^2\right\}$. At high SNR, the receiver can decode the $d$ information and the $g$ cooperative jamming streams, where $d+g=N$.
The received signals at the legitimate receiver and the eavesdropper are given by $$\begin{aligned}
\label{eq:Yr_4_1}
{\bold{Y}_r}&={\bold{H}_t}{\bold{P}_t}{\bold{U}_t}+\begin{bmatrix}{\bold{H}_c}{\bold{P}_{c,{\rm{I}}}}&{\bold{0}}_{N\times (N_c-N)}\\\end{bmatrix}\begin{bmatrix}{{\bold{V}_c}_1^g}\\{{\bold{V}_c}_{g+1}^l}\\\end{bmatrix}+{\bold{Z}_r}\\
\label{eq:Yr_4_2}
&=\begin{bmatrix}{\bold{H}_t}{\bold{P}_t}&{\bold{H}_c}{\bold{P}_{c,{\rm{I}}}}\\\end{bmatrix}\begin{bmatrix}{\bold{U}_t}\\{{\bold{V}_c}_1^g}\\\end{bmatrix}+{\bold{Z}_r}\\
\label{eq:Ye_4_1}
{\bold{Y}_e}&={\tilde{\bold{G}}_c}({{\bold{U}_t}_1^l}+{\bold{V}_c})+{\bold{Z}_e},\end{aligned}$$ where ${\tilde{\bold{G}}_c}={\bold{G}_c}{\bold{P}_c}$.
The matrix $\left[{\bold{H}_t}{\bold{P}_t}\;\;{\bold{H}_c}{\bold{P}_{c,{\rm{I}}}}\right]\in\mathbb{C}^{N\times N}$ in (\[eq:Yr\_4\_2\]) can be rewritten as $$\begin{aligned}
\label{eq:Ach_4_2}
\begin{bmatrix}{\bold{H}_t}{\bold{P}_t}&{\bold{H}_c}{\bold{P}_{c,{\rm{I}}}}\\\end{bmatrix}=\begin{bmatrix}{\bold{H}_t}&{\bold{H}_c}\\\end{bmatrix}\begin{bmatrix}{\bold{P}_t}&{\bold{0}}_{N\times g}\\{\bold{0}}_{N_c\times d}&{\bold{P}_{c,{\rm{I}}}}\\\end{bmatrix}.\end{aligned}$$ By applying Lemma \[lemma1\] on (\[eq:Ach\_4\_2\]), the matrix $\left[{\bold{H}_t}{\bold{P}_t}\;\;{\bold{H}_c}{\bold{P}_{c,{\rm{I}}}}\right]$ is full rank a.s. Thus, $$\begin{aligned}
I({\bold{X}_t};{\bold{Y}_r})\geq d\log P+o(\log P).\end{aligned}$$
Using similar steps as from (\[eq:Ach\_1\_3\]) to (\[eq:Ach\_1\_8\]), we can show that $$\begin{aligned}
\label{eq:Ach_4_4}
I({\bold{X}_t};{\bold{Y}_e})&= \log\frac{\det({\bold{I}}_l+2\bar{P}{\tilde{\bold{G}}_c^H}{\tilde{\bold{G}}_c})}{\det({\bold{I}}_l+\bar{P}{\tilde{\bold{G}}_c^H}{\tilde{\bold{G}}_c})}\leq l.\end{aligned}$$ Thus, the achievable secrecy rate in (\[eq:AchSecRate\]) is lower bounded as $$\begin{aligned}
\label{eq:Ach_4_4_1}
R_s&\geq d\log P+o(\log P)-l\\
\label{eq:Ach_4_5}
&=\frac{N+N_c-N_e}{2}\log P+o(\log P)-\frac{N_c+N_e-N}{2},\end{aligned}$$ and, using (\[eq:sdof\]), the s.d.o.f. is lower bounded as $$\begin{aligned}
\label{eq:Ach_4_4}
D_s\geq \frac{N+N_c-N_e}{2}. \end{aligned}$$
Case 5: $N_e\leq N$, $N<N_c\leq N+N_e$, and $N+N_c-N_e$ is odd
--------------------------------------------------------------
[\[AchScheme5\]]{} As in case $3$, the s.d.o.f. for this case is not an integer, and as in case $4$, we have $N_c>N$, which allows the cooperative jammer to send some signals invisible to the receiver. Consequently, the achievable scheme for this case combines the techniques used in Sections \[AchScheme3\] and \[AchScheme4\].
The transmitted signals are given by (\[eq:Xt1\_Xc1\]) with $d=\frac{N+N_c-N_e+1}{2}$ and $l=\frac{N_c+N_e-N+1}{2}$, where ${\bold{P}_t}$ and ${\bold{P}_c}$ are defined as in Section \[AchScheme4\] with $g=\frac{N_e+N-N_c+1}{2}$, and ${\bold{U}_t}$, ${\bold{V}_c}$ are defined as in Section \[AchScheme3\]. Similar to the proof in Appendix D, the values of $Q$ and $a$ are chosen as in (\[eq:Q\]) and (\[eq:a\]), with $$\begin{aligned}
\gamma=\frac{1}{\sqrt{\max\left\{||{\bold{p}}_{t,1}||^2+2\sum_{i=2}^d||{\bold{p}}_{t,i}||^2, 2g-1+2\sum_{i=g+1}^l||{\bold{p}}_{c,i}||^2\right\}}},\end{aligned}$$ and $\nu$ a constant that does not depend on the power $P$.
The legitimate receiver uses the projection and cancellation technique described in Section \[AchScheme3\] in order to decode the information streams. The received signal at the eavesdropper is the same as in (\[eq:Ye\_4\_1\]), with $l=\frac{N_c+N_e-N+1}{2}$. Similar to the derivation from (\[eq:Ach\_3\_4\]) to (\[eq:Ach\_3\_12\]), we have $$\begin{aligned}
\label{eq;Ach_5_1}
I({\bold{X}_t};{\bold{Y}_e})\leq 2l-1.\end{aligned}$$
Let ${\bold{A}}={\bold{H}_t}{\bold{P}_t}=\left[{\bold{a}}_1\cdots{\bold{a}}_d\right]$, and ${\bold{H}'_c}={\bold{H}_c}{\bold{P}_{c,{\rm{I}}}}=[{\bold{h}}_{c,1}\cdots{\bold{h}}_{c,g}]$. The received signal at the legitimate receiver is $$\begin{aligned}
\label{eq:Yr_5_1}
{\bold{Y}_r}=\begin{bmatrix}{\bold{A}}&{\bold{H}'_c}\\\end{bmatrix}\begin{bmatrix}{\bold{U}_t}\\{{\bold{V}_c}_1^g}\\\end{bmatrix}+{\bold{Z}_r}.\end{aligned}$$ The receiver chooses ${\bold{b}}\perp{\rm{span}}\left\{{\bold{a}}_2,\cdots,{\bold{a}}_d,{\bold{h}}_{c,2},\cdots,{\bold{h}}_{c,g}\right\}$ and multiplies its received signal by the matrix $\bold{D}$ given in (\[eq:D\]) to obtain ${\widetilde{\bold{Y}}_r}=\left[{\widetilde{Y}_{r_1}}\;({\widetilde{\bold{Y}}_{r_2}^N})^T\right]^T$, where $$\begin{aligned}
\label{eq;Ach_5_2}
{\widetilde{Y}_{r_1}}&=f_1 U_1+f_2 V_1+Z',\\
\label{eq;Ach_5_3}
{\widetilde{\bold{Y}}_{r_2}^N}&={\widetilde{\bold{A}}}{\bold{U}_t}+{\widetilde{\bold{H}}_c}{{\bold{V}_c}_1^g}+{{\bold{Z}_r}_2^N},\end{aligned}$$ $f_1$, $f_2$, $Z'$, ${\widetilde{\bold{A}}}$, and ${\widetilde{\bold{H}}_c}$, are defined as in Section \[AchScheme3\]. In order to decode $U_1$ and $V_1$, the receiver passes ${\widetilde{Y}_{r_1}}$ through a hard decision decoder, ${\widetilde{Y}_{r_1}}\mapsto f_1 U_1+f_2 V_1$, and passes the output of the hard decision decoder through the bijective map $f_1 U_1+f_2 V_1\mapsto (U_1,V_1)$, where $f_1$ and $f_2$ are rationally independent.
Using similar steps to the derivation from (\[eq:Ach\_3\_22\]) to (\[eq:Ach\_3\_61\]) in Section \[AchScheme3\], we obtain $$\begin{aligned}
\label{eq;Ach_5_6}
I({\bold{X}_t};{\bold{Y}_r})\geq \frac{1-\epsilon}{2+\epsilon}\left[2d-1-\exp\left(-\mu P^{\epsilon}\right)-2(d-1)^2\exp(-\mu' P^{\epsilon'})\right]\log P+o(\log P),\end{aligned}$$ where $\epsilon>0$ is arbitrarily small, $\epsilon'=\frac{3\epsilon}{2+\epsilon}$, and $\mu,\mu'$ are constants which do not depend on $P$.
Thus, the achievable secrecy rate in (\[eq:AchSecRate\]) is lower bounded as $$\begin{aligned}
\label{eq;Ach_5_7_1}
&R_s\geq \frac{1-\epsilon}{2+\epsilon}\left[2d-1-\exp(-\mu P^{\epsilon})-2(d-1)^2\exp(-\mu' P^{\epsilon'})\right]\log P+o(\log P)-(2l-1)\\
\label{eq;Ach_5_7}
\nonumber &= \frac{1-\epsilon}{2+\epsilon}\left[N+N_c-N_e-\exp(-\mu P^{\epsilon})-\frac{1}{2}(N+N_c-N_e-1)^2\exp(-\mu' P^{\epsilon'})\right]\log P\\
&\qquad\qquad\qquad +o(\log P)-(N_c+N_e-N),\end{aligned}$$ and hence the s.d.o.f is lower bounded as $$\begin{aligned}
\label{eq:Ach_5_8}
D_s \geq \frac{(1-\epsilon)(N+N_c-N_e)}{2+\epsilon}.\end{aligned}$$ Since $\epsilon>0$ can be chosen arbitrarily small, $D_s=\frac{N+N_c-N_e}{2}$ is achievable for this case, which completes the achievability of (\[eq:conv\_3\_1\]). Next, we show the achievability of (\[eq:conv\_3\_2\]), where $N_e>N$, i.e., the eavesdropper has more antennas than the legitimate receiver.
Case 6: $N_e>N$ and $N_e-N<N_c\leq N_e-\frac{N}{2}$
---------------------------------------------------
[\[AchScheme6\]]{} Unlike the previous five cases, since $N_e>N$, no information streams can be sent invisible to the eavesdropper. In fact, the precoder at the transmitter alone is not adequate for achieving the alignment of the information and cooperative jamming streams at the eavesdropper; the precoders at the transmitter and the cooperative jammer need to be designed jointly to achieve the alignment condition. The s.d.o.f. here is integer valued, and hence we can utilize Gaussian streams.
The transmitted signals are given by (\[eq:Xt1\_Xc1\]), with $d=l=N+N_c-N_e$, and ${\bold{U}_t},{\bold{V}_c}\sim\mathcal{CN}\left({\bold{0}},\bar{P}{\bold{I}}_{d}\right)$. The matrices ${\bold{P}_t}$ and ${\bold{P}_c}$ are chosen as follows. Let $\bold{G}=\left[{\bold{G}_t}\;\;-{\bold{G}_c}\right]\in\mathbb{C}^{N_e\times (N+N_c)}$, and let $\bold{Q}\in\mathbb{C}^{(N+N_c)\times d}$ be a matrix whose columns are randomly[^9] chosen to span $\mathcal{N}(\bold{G})$. Write the matrix $\bold{Q}$ as $\bold{Q}=\left[\bold{Q}_1^T\;\;\bold{Q}_2^T\right]^T$, where $\bold{Q}_1\in\mathbb{C}^{N\times d}$ and $\bold{Q}_2\in\mathbb{C}^{N_c\times d}$. Set ${\bold{P}_t}=\bold{Q}_1$ and ${\bold{P}_c}=\bold{Q}_2$. $\bar{P}=\frac{1}{\alpha'' }P$, where $\alpha''=\max\left\{\sum_{i=1}^d ||{\bold{p}}_{t,i}||^2,\sum_{i=1}^d||{\bold{p}}_{c,i}||^2\right\}$.
The choice of ${\bold{P}_t}$ and ${\bold{P}_c}$ results in ${\bold{G}_t}{\bold{P}_t}={\bold{G}_c}{\bold{P}_c}$. Thus, the eavesdropper receives $$\begin{aligned}
\label{eq:Ach_6_1}
{\bold{Y}_e}={\bold{G}_c}{\bold{P}_c}({\bold{U}_t}+{\bold{V}_c})+{\bold{Z}_e}.\end{aligned}$$ Similar to going from (\[eq:Ach\_1\_3\]) to (\[eq:Ach\_1\_8\]), it follows that we have $$\begin{aligned}
\label{eq:Ach_6_2}
I({\bold{X}_t};{\bold{Y}_e})\leq N+N_c-N_e.\end{aligned}$$
The received signal at the receiver in turn is given by $$\begin{aligned}
\label{eq:Ach_6_3}
{\bold{Y}_r}=\begin{bmatrix}{\bold{H}_t}{\bold{P}_t}&{\bold{H}_c}{\bold{P}_c}\\\end{bmatrix}\begin{bmatrix}{\bold{U}_t}\\{\bold{V}_c}\\\end{bmatrix}+{\bold{Z}_r}.\end{aligned}$$ Note that, without conditioning on ${\bold{G}_t}$ and ${\bold{G}_c}$, the matrix $\bold{Q}$ has all of its entries independently and randomly drawn according to a continuous distribution. Thus, each of ${\bold{P}_t}$ and ${\bold{P}_c}$ is full column rank a.s. Thus, by using Lemma \[lemma1\], we can show that the matrix $\left[{\bold{H}_t}{\bold{P}_t}\;\;{\bold{H}_c}{\bold{P}_c}\right]$ is full column rank a.s. Using (\[eq:Ach\_6\_3\]), we have $$\begin{aligned}
\label{eq:Ach_6_4}
I({\bold{X}_t};{\bold{Y}_r})\geq (N+N_c-N_e)\log P+o(\log P).\end{aligned}$$ Hence, using (\[eq:Ach\_6\_2\]), (\[eq:Ach\_6\_4\]), (\[eq:AchSecRate\]), and (\[eq:sdof\]), the s.d.o.f. is lower bounded as $D_s\geq N+N_c-N_e$.
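As with the earlier cases, the joint null-space construction can be checked numerically. The sketch below (ours; a deterministic orthonormal basis of $\mathcal{N}(\bold{G})$ is used in place of randomly chosen columns, which suffices for the properties being checked) verifies the alignment ${\bold{G}_t}{\bold{P}_t}={\bold{G}_c}{\bold{P}_c}$ and the full column rank of $\left[{\bold{H}_t}{\bold{P}_t}\;\;{\bold{H}_c}{\bold{P}_c}\right]$.

```python
import numpy as np

rng = np.random.default_rng(3)
N, Ne, Nc = 4, 6, 4                    # Case 6 example: Ne > N, Ne - N < Nc <= Ne - N/2
d = N + Nc - Ne                        # = l, number of information = jamming streams

def crandn(m, n):
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

def null_basis(A, tol=1e-10):
    _, s, Vh = np.linalg.svd(A)
    return Vh[int(np.sum(s > tol)):].conj().T

Ht, Hc = crandn(N, N), crandn(N, Nc)
Gt, Gc = crandn(Ne, N), crandn(Ne, Nc)

G = np.hstack([Gt, -Gc])               # Ne x (N + Nc)
Qmat = null_basis(G)                   # (N + Nc) x d, since dim N(G) = N + Nc - Ne a.s.
Pt, Pc = Qmat[:N, :], Qmat[N:, :]      # split into the transmitter / jammer precoders

assert np.allclose(Gt @ Pt, Gc @ Pc)   # information and jamming aligned at the eavesdropper
M = np.hstack([Ht @ Pt, Hc @ Pc])      # N x 2d effective matrix at the receiver
assert np.linalg.matrix_rank(M) == 2 * d
print("Case 6 precoder checks passed")
```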
Case 7: $N_e>N$, $N_e-\frac{N}{2}<N_c\leq N_e$, and $N$ is even
---------------------------------------------------------------
[\[AchScheme7\]]{} The s.d.o.f. for this case does not increase with $N_c$. The scheme in Section \[AchScheme6\] for $N_c=N_e-\frac{N}{2}$, i.e., $d=\frac{N}{2}$, can be used to achieve the s.d.o.f. for all $N_e-\frac{N}{2}<N_c\leq N_e$, when $N_e>N$ and $N$ is even. However, since $\dim(\mathcal{N}(\bold{G}))=N+N_c-N_e>\frac{N}{2}$, the $d=\frac{N}{2}$ columns of the matrix $\bold{Q}$ are randomly chosen as linearly independent vectors from $\mathcal{N}(\bold{G})$. Following the same analysis as in Section \[AchScheme6\], we can show that the s.d.o.f. is lower bounded as $D_s\geq \frac{N}{2}$.
Case 8: $N_e>N$, $N_e-\frac{N}{2}<N_c\leq N_e$, and $N$ is odd
--------------------------------------------------------------
[\[AchScheme8\]]{} The difference here from Section \[AchScheme7\] is that the s.d.o.f. is not an integer, and hence structured signaling for transmission and cooperative jamming is needed. The difference from Section \[AchScheme3\] is that $N_e>N$, and hence both precoders, at the transmitter and the cooperative jammer, have to participate in achieving the alignment condition at the eavesdropper.
The transmitted signals are given by (\[eq:Xt1\_Xc1\]), with $d=l=\frac{N+1}{2}$, where ${\bold{U}_t}$ and ${\bold{V}_c}$ are defined as in Section \[AchScheme3\], and the values for $Q$ and $a$ are chosen as in (\[eq:Q\]) and (\[eq:a\]), with $$\begin{aligned}
\label{eq:Ach_8_1}
\gamma=\frac{1}{\sqrt{\max\left\{||{\bold{p}}_{t,1}||^2+2\sum_{i=2}^d||{\bold{p}}_{t,i}||^2, ||{\bold{p}}_{c,1}||^2+2\sum_{i=2}^d||{\bold{p}}_{c,i}||^2\right\}}},\end{aligned}$$ and $\nu$ a constant which does not depend on $P$. ${\bold{P}_t},{\bold{P}_c}$ are chosen as in Section \[AchScheme7\], with $d=\frac{N+1}{2}$. The eavesdropper’s received signal is the same as in (\[eq:Ach\_6\_1\]). Similar to (\[eq:Ach\_3\_4\])-(\[eq:Ach\_3\_12\]), we have $$\begin{aligned}
\label{eq:Ach_8_2}
I({\bold{X}_t};{\bold{Y}_e})\leq N.\end{aligned}$$
The receiver employs the decoding scheme in Sections \[AchScheme3\] and \[AchScheme5\]. Following similar steps as in Sections \[AchScheme3\] and \[AchScheme5\], we have $$\begin{aligned}
\label{eq:Ach_8_3}
I({\bold{X}_t};{\bold{Y}_r})\geq \frac{(1-\epsilon)N}{2+\epsilon}\log P+o(\log P).\end{aligned}$$ Using (\[eq:Ach\_8\_2\]), (\[eq:Ach\_8\_3\]), (\[eq:AchSecRate\]), and (\[eq:sdof\]), the s.d.o.f. is lower bounded as $D_s\geq \frac{(1-\epsilon)N}{2+\epsilon}$, and since $\epsilon>0$ is arbitrarily small, the s.d.o.f. of $\frac{N}{2}$ is achievable for this case.
Case 9: $N_e>N$, $N_e<N_c\leq N+N_e$, and $N+N_c-N_e$ is even
-------------------------------------------------------------
[\[AchScheme9\]]{} In Sections \[AchScheme7\] and \[AchScheme8\], we observe that the flat s.d.o.f. range extends to $N_c=N_e$, and not to $N_c=N$ as in Sections \[AchScheme2\] and \[AchScheme3\]. The reason is that maintaining the alignment of information and cooperative jamming at the eavesdropper while sending some jamming signals invisible to the legitimate receiver requires $N_c>N_e$. For this case, in addition to choosing its precoding matrix jointly with the transmitter to satisfy the alignment condition, the cooperative jammer chooses its precoder to send $N_c-N_e$ jamming streams invisible to the receiver. The s.d.o.f. here is integer valued, for which we utilize Gaussian streams.
The transmitted signals are given by (\[eq:Xt1\_Xc1\]) with $d=l=\frac{N+N_c-N_e}{2}$, and ${\bold{U}_t},{\bold{V}_c}$ are defined as in Section \[AchScheme6\]. Let ${\bold{P}_t}=\left[{\bold{P}_{t,1}}\;{\bold{P}_{t,2}}\right]$, and ${\bold{P}_c}=\left[{\bold{P}_{c,1}}\;{\bold{P}_{c,2}}\right]$, where ${\bold{P}_{t,1}}\in\mathbb{C}^{N\times g}$, ${\bold{P}_{t,2}}\in\mathbb{C}^{N\times (N_c-N_e)}$, ${\bold{P}_{c,1}}\in\mathbb{C}^{N_c\times g}$, ${\bold{P}_{c,2}}\in\mathbb{C}^{N_c\times (N_c-N_e)}$, and $g=\frac{N_e+N-N_c}{2}$. The matrices ${\bold{P}_t}$ and ${\bold{P}_c}$ are chosen as follows. Let $\bold{G}=\left[{\bold{G}_t}\;-{\bold{G}_c}\right]\in\mathbb{C}^{N_e\times (N+N_c)}$, and let ${\bold{G}}'\in\mathbb{C}^{(N_e+N)\times (N+N_c)}$ be expressed as $$\begin{aligned}
\label{eq:Ach_9_1}
{\bold{G}}'=\begin{bmatrix}{\bold{G}_t}&-{\bold{G}_c}\\\;{\bold{0}}_{N\times N}&{\bold{H}_c}\\\end{bmatrix}.\end{aligned}$$ Let $\bold{Q}'\in\mathbb{C}^{(N+N_c)\times (N_c-N_e)}$ be randomly chosen such that its columns span $\mathcal{N}(\bold{G}')$, and let the columns of the matrix $\bold{Q}\in\mathbb{C}^{(N+N_c)\times g}$ be randomly chosen as linearly independent vectors in $\mathcal{N}(\bold{G})$, and not in $\mathcal{N}(\bold{G}')$. Write the matrix $\bold{Q}$ as $\bold{Q}=\left[\bold{Q}_1^T\;\bold{Q}_2^T\right]^T$, and the matrix $\bold{Q}'$ as $\bold{Q}'=\left[\bold{Q}_1'^T\;\bold{Q}_2'^T\right]^T$, where $\bold{Q}_1\in\mathbb{C}^{N\times g}$, $\bold{Q}_2\in\mathbb{C}^{N_c\times g}$, $\bold{Q}'_1\in\mathbb{C}^{N\times (N_c-N_e)}$, and $\bold{Q}'_2\in\mathbb{C}^{N_c\times (N_c-N_e)}$. Set ${\bold{P}_{t,1}}=\bold{Q}_1$, ${\bold{P}_{t,2}}=\bold{Q}'_1$, ${\bold{P}_{c,1}}=\bold{Q}_2$, and ${\bold{P}_{c,2}}=\bold{Q}'_2$.
This choice of ${\bold{P}_t}$ and ${\bold{P}_c}$ results in ${\bold{G}_t}{\bold{P}_t}={\bold{G}_c}{\bold{P}_c}$ and ${\bold{H}_c}{\bold{P}_{c,2}}={\bold{0}}_{N\times (N_c-N_e)}$. Thus, the received signals at the receiver and eavesdropper are given by $$\begin{aligned}
\label{eq:Yr_9_1}
{\bold{Y}_r}&=\begin{bmatrix}{\bold{H}_t}{\bold{P}_t}&{\bold{H}_c}{\bold{P}_{c,1}}\\\end{bmatrix}\begin{bmatrix}{\bold{U}_t}\\{{\bold{V}_c}_1^g}\\\end{bmatrix}+{\bold{Z}_r}\\
\label{eq:Ye_9_1}
{\bold{Y}_e}&={\bold{G}_c}{\bold{P}_c}({\bold{U}_t}+{\bold{V}_c})+{\bold{Z}_e}.\end{aligned}$$ Using (\[eq:Ye\_9\_1\]), and similar to going from (\[eq:Ach\_1\_3\]) to (\[eq:Ach\_1\_8\]), we have $$\begin{aligned}
\label{eq:Ach_9_2}
I({\bold{X}_t};{\bold{Y}_e})\leq \frac{N+N_c-N_e}{2}.\end{aligned}$$
Because of the assumption of randomly generated channel gains, each of ${\bold{P}_t}$ and ${\bold{P}_c}$ is full column rank a.s. Using Lemma \[lemma1\], the matrix $\left[{\bold{H}_t}{\bold{P}_t}\;\;{\bold{H}_c}{\bold{P}_{c,1}}\right]$ is full column rank a.s., and hence, using (\[eq:Yr\_9\_1\]), we have $$\begin{aligned}
\label{eq:Ach_9_3}
I({\bold{X}_t};{\bold{Y}_r})\geq \frac{N+N_c-N_e}{2}\log P+o(\log P).\end{aligned}$$ Thus, using (\[eq:Ach\_9\_2\]), (\[eq:Ach\_9\_3\]), (\[eq:AchSecRate\]), and (\[eq:sdof\]), the s.d.o.f. is lower bounded as $D_s\geq \frac{N+N_c-N_e}{2}$.
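The more intricate construction of this case can be checked in the same way. In the sketch below (ours; the $g$ columns in $\mathcal{N}(\bold{G})$ are generated as random combinations of a basis of $\mathcal{N}(\bold{G})$, which a.s. places them outside the lower-dimensional subspace $\mathcal{N}(\bold{G}')$), we verify the alignment at the eavesdropper, the invisibility of the last $N_c-N_e$ jamming streams at the receiver, and the rank condition at the receiver.

```python
import numpy as np

rng = np.random.default_rng(4)
N, Ne, Nc = 4, 6, 8                    # Case 9 example: Ne > N, Ne < Nc <= N + Ne, N+Nc-Ne even
d = (N + Nc - Ne) // 2                 # = l
g = (Ne + N - Nc) // 2                 # jamming streams that remain visible to the receiver

def crandn(m, n):
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

def null_basis(A, tol=1e-10):
    _, s, Vh = np.linalg.svd(A)
    return Vh[int(np.sum(s > tol)):].conj().T

Ht, Hc = crandn(N, N), crandn(N, Nc)
Gt, Gc = crandn(Ne, N), crandn(Ne, Nc)

G = np.hstack([Gt, -Gc])                                        # Ne x (N+Nc)
Gp = np.vstack([G, np.hstack([np.zeros((N, N)), Hc])])          # G' of (eq:Ach_9_1)
Qp = null_basis(Gp)                                             # (N+Nc) x (Nc-Ne)
Q = null_basis(G) @ crandn(N + Nc - Ne, g)                      # g random vectors in N(G)

Pt = np.hstack([Q[:N, :], Qp[:N, :]])                           # N  x d
Pc = np.hstack([Q[N:, :], Qp[N:, :]])                           # Nc x d
Pc1, Pc2 = Pc[:, :g], Pc[:, g:]

assert np.allclose(Gt @ Pt, Gc @ Pc)                            # alignment at the eavesdropper
assert np.allclose(Hc @ Pc2, 0)                                 # Nc - Ne jamming streams nulled at the receiver
M = np.hstack([Ht @ Pt, Hc @ Pc1])                              # N x (d + g) = N x N
assert np.linalg.matrix_rank(M) == N
print("Case 9 precoder checks passed")
```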
Case 10: $N_e>N$, $N_e<N_c\leq N+N_e$, and $N+N_c-N_e$ is odd
-------------------------------------------------------------
[\[AchScheme10\]]{} The s.d.o.f. for this case is not an integer, and we have $N_c>N_e$. Hence, we utilize the precoding of Section \[AchScheme9\] together with the signaling and decoding scheme of Section \[AchScheme8\]: ${\bold{U}_t},{\bold{V}_c}$ are defined as in Section \[AchScheme8\], and ${\bold{P}_t},{\bold{P}_c}$ are chosen as in Section \[AchScheme9\], with $d=\frac{N+N_c-N_e+1}{2}$ and $g=\frac{N_e+N-N_c+1}{2}$. Using the same decoding scheme as in Section \[AchScheme8\], we obtain that the s.d.o.f. is lower bounded as $D_s\geq \frac{N+N_c-N_e}{2}$ for this case, which completes the achievability of (\[eq:conv\_3\_2\]). Thus, we have completed the proof of Theorem \[Thm1\].
Extending to the General Case: Theorem \[Thm2\] {#Thm2_Proof}
===============================================
Converse {#Thm2_Proof_Conv}
--------
The converse proof for Theorem \[Thm2\] follows the same steps as in Section \[Conv\_Proof\]. In particular, we derive the following two upper bounds which hold for two different ranges of $N_c$.
### $0\leq N_c\leq N_e$
Similar to Section \[Conv\_Proof\_1\], we have $$\begin{aligned}
\label{eq:thm1proof_1}
R_s\leq C_s(P)=\rho\log P+o(\log P),\end{aligned}$$ where, for $0\leq N_c\leq \left[N_e-[N_t-N_r]^+\right]^+$, $\rho=[N_c+N_t-N_e]^+$. For the remaining values of $N_c$, i.e., when $N_t\geq N_r$ and $N_e-N_t+N_r\leq N_c\leq N_e$, we have $N_r\leq[N_c+N_t-N_e]^+$, while $D_s\leq N_r$ holds trivially. Thus, for $0\leq N_c\leq N_e$, $$\begin{aligned}
\label{eq:thm1proof_2}
D_s \leq \min\{N_r,[N_c+N_t-N_e]^+\}.\end{aligned}$$
### $N_r+[N_e-N_t]^+ < N_c\leq 2\min\{N_t,N_r\}+N_e-N_t$
Following the same steps as in Section \[Conv\_Proof\_2\], where the two cases we consider here are $N_e\leq N_t$ and $N_e>N_t$, the s.d.o.f. for this range of $N_c$ is upper bounded as $$\begin{aligned}
\label{eq:thm1proof_3}
D_s\leq\frac{N_c+N_t-N_e}{2}.\end{aligned}$$ Note that, when $N_e>N_t$, this bound holds for $N_c>N_r+N_e-N_t$ so that the number of antennas at the cooperative jammer in the modified channel, cf. (\[eq:modified\_channel\]), is larger than $N_r$, i.e., $N_c+N_t-N_e>N_r$.
### Obtaining the upper bound {#Obtain_bound_thm2}
For each of the following cases, we use the two bounds in (\[eq:thm1proof\_2\]) and (\[eq:thm1proof\_3\]) to obtain the upper bound for the s.d.o.f.
i) $N_t\geq N_r+N_e$\
For this case, we use the trivial bound for the s.d.o.f., $D_s\leq N_r$ for all the values of $N_c$.
ii) $N_r\geq N_t\geq N_e$ and $N_r\geq N_t+N_e$\
Using the bound in (\[eq:thm1proof\_2\]), we have $$\begin{aligned}
D_s\leq N_c+N_t-N_e, {\text{ for }}0\leq N_c\leq N_e,\end{aligned}$$ where at $N_c=N_e$, we have $D_s\leq N_t$, which is the maximum achievable s.d.o.f. for this case.
iii) $N_t\geq N_e$ and $N_t-N_e<N_r< N_t+N_e$\
Combining the bounds in (\[eq:thm1proof\_2\]) and (\[eq:thm1proof\_3\]), as in Section \[Conv\_Proof\_3\], yields $$\begin{aligned}
D_s\leq \begin{cases}
N_c+N_t-N_e,\;\;\; 0\leq N_c\leq \frac{N_r+N_e-N_t}{2}\\
\frac{N_r+N_t-N_e}{2},\;\;\;\frac{N_r+N_e-N_t}{2}\leq N_c\leq N_r\\
\frac{N_c+N_t-N_e}{2},\;\;\; N_r\leq N_c\leq 2\min\{N_t,N_r\}+N_e-N_t.
\end{cases}\end{aligned}$$
iv) $N_e> N_t$ and $N_r\geq 2N_t$\
Using the bound in (\[eq:thm1proof\_2\]), we have $$\begin{aligned}
D_s\leq [N_c+N_t-N_e]^+, {\text{ for }}0\leq N_c\leq N_e.\end{aligned}$$
v) $N_e> N_t$ and $N_r<2N_t$\
By combining the bounds in (\[eq:thm1proof\_2\]) and (\[eq:thm1proof\_3\]), we have $$\begin{aligned}
D_s\leq \begin{cases}
[N_c+N_t-N_e]^+,\;\;\; 0\leq N_c\leq \frac{N_r}{2}+N_e-N_t\\
\frac{N_r}{2},\quad\;\;\;\frac{N_r}{2}+N_e-N_t\leq N_c\leq N_r+N_e-N_t\\
\frac{N_c+N_t-N_e}{2},\;\;\; N_r+N_e-N_t\leq N_c\leq 2\min\{N_t,N_r\}+N_e-N_t.
\end{cases}\end{aligned}$$
One can easily verify that the cases listed above cover all possible combinations of the numbers of antennas at the various terminals. By merging the upper bounds for these cases into one expression, we obtain (\[eq:thm2\]) as the upper bound for the s.d.o.f. of the channel.
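As a small consistency check (ours, not part of the converse argument), the piecewise bounds written above for cases (iii) and (v) can be encoded directly and seen to agree at their breakpoints:

```python
def ds_bound_case_iii(Nt, Nr, Ne, Nc):
    """Upper bound of case (iii): Nt >= Ne and Nt - Ne < Nr < Nt + Ne."""
    if Nc <= (Nr + Ne - Nt) / 2:
        return Nc + Nt - Ne
    if Nc <= Nr:
        return (Nr + Nt - Ne) / 2
    return (Nc + Nt - Ne) / 2              # for Nr <= Nc <= 2*min(Nt, Nr) + Ne - Nt

def ds_bound_case_v(Nt, Nr, Ne, Nc):
    """Upper bound of case (v): Ne > Nt and Nr < 2*Nt."""
    if Nc <= Nr / 2 + Ne - Nt:
        return max(Nc + Nt - Ne, 0)
    if Nc <= Nr + Ne - Nt:
        return Nr / 2
    return (Nc + Nt - Ne) / 2              # for Nr+Ne-Nt <= Nc <= 2*min(Nt, Nr) + Ne - Nt

# the pieces agree at the breakpoints, so the bound is continuous in Nc
Nt, Nr, Ne = 5, 4, 3                       # an example satisfying the conditions of case (iii)
assert ds_bound_case_iii(Nt, Nr, Ne, (Nr + Ne - Nt) / 2) == (Nr + Nt - Ne) / 2
assert ds_bound_case_iii(Nt, Nr, Ne, Nr) == (Nr + Nt - Ne) / 2
Nt, Nr, Ne = 3, 4, 5                       # an example satisfying the conditions of case (v)
assert ds_bound_case_v(Nt, Nr, Ne, Nr / 2 + Ne - Nt) == Nr / 2
assert ds_bound_case_v(Nt, Nr, Ne, Nr + Ne - Nt) == Nr / 2
print("piecewise bounds are continuous at the breakpoints")
```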
Achievability {#Thm2_Proof_Ach}
-------------
The s.d.o.f. of the channel when $N_t$ is not equal to $N_r$, given in (\[eq:thm2\]), is achieved using techniques similar to those presented in Section \[AchSchemes\]. There are a few cases of the numbers of antennas where the achievability is straightforward. One such case is when $N_t\geq N_r+N_e$, where the transmitter can send $N_r$ Gaussian information streams invisible to the eavesdropper, and the maximum possible s.d.o.f. of the channel, i.e., $N_r$, is achieved without the help of the cooperative jammer, i.e., $N_c=0$. Another case is when $N_r\geq N_t+\min\{N_t,N_e\}$, where the receiver’s signal space is sufficient for decoding the information and jamming streams, at high SNR, for all $0\leq N_c\leq N_e$, arriving at the s.d.o.f. of $N_t$ (the maximum possible s.d.o.f.) at $N_c=N_e$. Thus, there is no constant range in the s.d.o.f. characterization for this case; the s.d.o.f. keeps increasing with $N_c$, and Gaussian signaling and cooperative jamming are sufficient to achieve the s.d.o.f. of the channel.
We consider the five cases of the numbers of antennas at the different terminals listed in Section \[Obtain\_bound\_thm2\]. In the following, we summarize the achievable schemes for these cases. Let $d$ and $l$ denote the numbers of information and cooperative jamming streams, respectively, and let ${\bold{P}_t},{\bold{P}_c}$ denote the precoding matrices at the transmitter and the cooperative jammer, respectively.
i) $N_t\geq N_r+N_e$\
The transmitter sends $N_r$ Gaussian information streams over $\mathcal{N}({\bold{G}_t})$. $D_s=N_r$ is achievable at $N_c=0$.
ii) $N_r\geq N_t\geq N_e$ and $N_r\geq N_t+N_e$\
For $0\leq N_c\leq N_e$, $d=N_c+N_t-N_e$ and $l=N_c$ Gaussian streams are transmitted. Choose ${\bold{P}_t}$ to send $N_t-N_e$ information streams over $\mathcal{N}({\bold{G}_t})$ and align the remaining information streams over cooperative jamming streams at the eavesdropper. $D_s=N_c+N_t-N_e$.
iii) $N_t\geq N_e$ and $N_t-N_e<N_r< N_t+N_e$:
1) For $0\leq N_c\leq \frac{N_r+N_e-N_t}{2}$:\
The same scheme as in case (ii) is utilized. $D_s=N_c+N_t-N_e$.
2) For $\frac{N_r+N_e-N_t}{2}< N_c\leq N_r$ and $N_r+N_t-N_e$ is even:\
The same scheme as in case (iii-1), with $d=\frac{N_r+N_t-N_e }{2}$ and $l=\frac{N_r+N_e-N_t}{2}$, is utilized. The cooperative jammer uses only $\frac{N_r+N_e-N_t}{2}$ of its $N_c$ antennas. $D_s=\frac{N_r+N_t-N_e}{2}$.
3) For $\frac{N_r+N_e-N_t}{2}<N_c\leq N_r$ and $N_r+N_t-N_e$ is odd:\
$d=\frac{N_r+N_t-N_e+1}{2}$ and $l=\frac{N_r+N_e-N_t+1}{2}$ structured streams, as defined in Section \[AchScheme3\], are transmitted. The cooperative jammer uses only $\frac{N_r+N_e-N_t+1}{2}$ of its $N_c$ antennas. ${\bold{P}_t}$ is chosen as in case (ii). The legitimate receiver uses the projection and cancellation technique, as in Section \[AchScheme3\]. $D_s=\frac{N_r+N_t-N_e}{2}$.
4) For $N_r< N_c\leq 2\min\{N_t,N_r\}+N_e-N_t$ and $N_c+N_t-N_e$ is even:\
$d=\frac{N_c+N_t-N_e}{2}$ and $l=\frac{N_c+N_e-N_t}{2}$ Gaussian streams are transmitted. The cooperative jammer chooses ${\bold{P}_c}$ to send $N_c-N_r$ cooperative jamming streams over $\mathcal{N}({\bold{H}_c})$. ${\bold{P}_t}$ is chosen as in case (ii). $D_s=\frac{N_c+N_t-N_e}{2}$.
5) For $N_r< N_c\leq 2\min\{N_t,N_r\}+N_e-N_t$ and $N_c+N_t-N_e$ is odd:\
$d=\frac{N_c+N_t-N_e+1}{2}$ and $l=\frac{N_c+N_e-N_t+1}{2}$ structured streams are transmitted. ${\bold{P}_t},{\bold{P}_c}$ are chosen as in case (iii-4). The legitimate receiver uses the projection and cancellation technique. $D_s=\frac{N_c+N_t-N_e}{2}$.
iv) $N_e> N_t$ and $N_r\geq 2N_t$\
For $0\leq N_c\leq N_e$, $d=l=[N_c+N_t-N_e]^+$ Gaussian streams are transmitted. Both ${\bold{P}_t},{\bold{P}_c}$ are chosen to align the information streams over the cooperative jamming streams at the eavesdropper as in Section \[AchScheme6\]. $D_s=[N_c+N_t-N_e]^+$.
v) $N_e>N_t$ and $N_r<2N_t$:
1) For $0\leq N_c\leq \frac{N_r}{2}+N_e-N_t$:\
The same scheme as in case (iv) is utilized. $D_s=[N_c+N_t-N_e]^+$.
2) For $\frac{N_r}{2}+N_e-N_t< N_c \leq N_r+N_e-N_t$ and $N_r$ is even:\
$d=l=\frac{N_r}{2}$ Gaussian streams are transmitted. ${\bold{P}_t},{\bold{P}_c}$ are chosen as in case (iv). $D_s=\frac{N_r}{2}$.
3) For $\frac{N_r}{2}+N_e-N_t< N_c \leq N_r+N_e-N_t$ and $N_r$ is odd:\
$d=l=\frac{N_r+1}{2}$ structured streams are transmitted. ${\bold{P}_t},{\bold{P}_c}$ are as in case (iv). The legitimate receiver uses the projection and cancellation technique. $D_s=\frac{N_r}{2}$.
4) For $N_r+N_e-N_t< N_c\leq 2\min\{N_t,N_r\}+N_e-N_t$ and $N_c+N_t-N_e$ is even:\
$d=l=\frac{N_c+N_t-N_e}{2}$ Gaussian streams are transmitted. Both ${\bold{P}_t},{\bold{P}_c}$ are chosen to align the information and the cooperative jamming streams at the eavesdropper. ${\bold{P}_c}$ is also chosen to send $N_c-N_r$ cooperative jamming streams over $\mathcal{N}({\bold{H}_c})$ as in Section \[AchScheme9\]. $N_c>N_r+N_e-N_t$ achieves the above two conditions. $D_s=\frac{N_c+N_t-N_e}{2}$.
5) For $N_r+N_e-N_t< N_c\leq 2\min\{N_t,N_r\}+N_e-N_t$ and $N_c+N_t-N_e$ is odd:\
$d=l=\frac{N_c+N_t-N_e+1}{2}$ structured streams are transmitted. ${\bold{P}_t},{\bold{P}_c}$ are chosen as in case (v-4). The receiver uses the projection and cancellation technique. $D_s=\frac{N_c+N_t-N_e}{2}$.
Using the achievable schemes described above for the different cases of the numbers of antennas, and their analysis as in Section \[AchSchemes\], we have (\[eq:thm2\]) as the achievable s.d.o.f., which completes the proof of Theorem \[Thm2\].
Discussion {#Discussion}
==========
At this point, it is useful to discuss the results and the implications of this work. Theorem \[Thm1\], cf. (\[eq:thm1\]), shows the behavior of the s.d.o.f., for an $(N\times N\times N_e)$ multi-antenna Gaussian wire-tap channel with an $N_c$-antenna cooperative jammer, as $N_c$ increases from $0$ to $N+N_e$. The s.d.o.f. first increases linearly with $N_c$ from $0$ to $N_e-\lceil\frac{\min\{N,N_e\}}{2}\rceil$, that is, each additional antenna at the cooperative jammer provides the system with one additional degree of freedom. The s.d.o.f. remains constant in the $N_c$ range of $N_e-\lfloor\frac{\min\{N,N_e\}}{2}\rfloor$ to $\max\{N,N_e\}$, and starts to increase again for $N_c$ from $\max\{N,N_e\}$ to $N+N_e$, until the s.d.o.f. arrives at its maximum value, $N$, at $N_c=N+N_e$. This behavior occurs whether the eavesdropper has fewer or more antennas than the legitimate receiver.
The reason for the flat s.d.o.f. range is as follows: At high SNR, achieving the secrecy constraint requires i) the entropy of the cooperative jamming signal, ${\bold{X}_c^n}$, to be greater than or equal to that of the information signal visible to the eavesdropper, and ii) ${\bold{X}_c^n}$ to completely cover the information signal, ${\bold{X}_t^n}$, at the eavesdropper. For $N_e\leq N$, part of ${\bold{X}_t^n}$ can be sent invisible to the eavesdropper, and the information signal visible to the eavesdropper can be covered by jamming for all $N_c$. For $0\leq N_c\leq \frac{N_e}{2}$, the spatial resources at the receiver are sufficient, at high SNR, for decoding information and jamming signals which satisfy the above constraints. Thus, increasing the possible entropy of ${\bold{X}_c^n}$ by increasing $N_c$ from $0$ to $\left\lfloor \frac{N_e}{2}\right\rfloor$ allows for increasing the entropy of ${\bold{X}_t^n}$, and hence, the achievable secrecy rate and the s.d.o.f. increase. At $N_c=\left\lceil\frac{N_e}{2}\right\rceil$, the possible entropy of ${\bold{X}_c^n}$ and, correspondingly, the maximum possible entropy of ${\bold{X}_t^n}$, result in information and jamming signals which completely occupy the receiver’s signal space. Thus, increasing the possible uncertainty of ${\bold{X}_c^n}$ by increasing $N_c$ from $\left\lceil\frac{N_e}{2}\right\rceil$ to $N$ is useless, since, in this range, ${\bold{X}_c^n}$ is totally observed by the receiver which has its signal space already full at $N_c=\left\lceil\frac{N_e}{2}\right\rceil$.
Increasing $N_c$ over $N$ increases the possible entropy of ${\bold{X}_c^n}$ and simultaneously increases the part of ${\bold{X}_c^n}$ that can be transmitted invisible to the receiver, leaving more space for ${\bold{X}_t^n}$ at the receiver. This allows for increasing the secrecy rate, and hence, the s.d.o.f. starts to increase again. For $N_e>N$, the s.d.o.f. is equal to zero for all $0\leq N_c\leq N_e-N$, since in this case ${\bold{X}_c^n}$ cannot cover the information signal at the eavesdropper. The s.d.o.f. starts to increase again, after the flat range, at $N_c>N_e$, since sending jamming signals invisible to the receiver while satisfying the covering condition at the eavesdropper requires that $N_c>N_e$.
The difference in the slope for the increase in the s.d.o.f. in the ranges before and after the flat range, for both $N_e\leq N$ and $N_e>N$, can be explained as follows. For $0\leq N_c\leq N_e-\frac{\min\{N,N_e\}}{2}$, each additional antenna at the cooperative jammer allows for utilizing two more spatial dimensions at the receiver; one spatial dimension is used for the jamming signal and the other is used for the information signal. By contrast, for $\max\{N,N_e\}< N_c\leq N+N_e$, each additional antenna at the cooperative jammer sets one spatial dimension at the receiver free from jamming, and this spatial dimension is shared between the extra cooperative jamming and information streams.
It is important to note that the observation that increasing the number of cooperative jammer antennas is not useful in the range $N_e-\frac{\min\{N,N_e\}}{2}<N_c\leq \max\{N,N_e\}$ applies only to the prelog of the secrecy capacity, i.e., it is specific to the high SNR behavior. This should not be taken to mean that additional antennas do not improve the secrecy rate, but only that they do not improve the scaling of the secrecy rate with power at high SNR.
Theorem \[Thm2\] generalizes the results above to the case where the number of transmit antennas at the transmitter, $N_t$, is not equal to the number of receive antennas at the legitimate receiver, $N_r$. Although the maximum possible s.d.o.f. of the channel for this case is limited to $\min\{N_t,N_r\}=N_d$, increasing $N_t$ over $N_r$, or increasing $N_r$ over $N_t$, does change the behavior of the s.d.o.f. as $N_c$ increases, until the maximum possible s.d.o.f. is reached. Let us start at $N_t=N_r=N_d$. For $N_t\geq N_e$, increasing $N_t$ over $N_d=N_r$ increases the number of information streams that can be sent invisible to the eavesdropper, and hence the s.d.o.f. without the help of the CJ, i.e., at $N_c=0$, increases. This in turn enlarges the range of $N_c$ over which the s.d.o.f. remains constant, since the receiver’s signal space gets full at a smaller $N_c$ and remains full until $N_c$ is larger than $N_d=N_r$. In addition, increasing $N_t$ over $N_d$, when $N_t\geq N_e$, decreases the value of $N_c$ at which the maximum s.d.o.f. of the channel, $N_d$, is achievable, arriving at $N_t\geq N_r+N_e$, where the s.d.o.f. of $N_d$ is achievable without the help of the CJ. When $N_e>N_t$, increasing $N_t$ over $N_d$ decreases the value of $N_c$ at which the s.d.o.f. becomes positive, and decreases the value of $N_c$ at which the s.d.o.f. of $N_d$ is achievable, arriving at $N_t>N_e$, where the channel reduces to the previous case. For both the cases $N_t\geq N_e$ and $N_t<N_e$, increasing $N_r$ over $N_d=N_t$ increases the available space in the receiver’s signal space, and hence the constant s.d.o.f. range shrinks, arriving at $N_r\geq N_t+N_e$ when $N_t\geq N_e$, or at $N_r\geq 2N_t$ when $N_e> N_t$, where the constant range vanishes.
Conclusion {#Con}
==========
In this paper, we have studied the multi-antenna wire-tap channel with an $N_c$-antenna cooperative jammer, $N_t$-antenna transmitter, $N_r$-antenna receiver, and $N_e$-antenna eavesdropper. We have completely characterized the s.d.o.f. for this channel for all possible values of the number of antennas at the cooperative jammer, $N_c$. We have shown that when the s.d.o.f. of the channel is integer valued, it can be achieved by linear precoding at the transmitter and cooperative jammer, Gaussian signaling both for transmission and jamming, and linear processing at the legitimate receiver. By contrast, when the s.d.o.f. is not an integer, we have shown that a scheme which employs structured signaling both at the transmitter and cooperative jammer, along with joint signal space and signal scale alignment, achieves the s.d.o.f. of the channel. We have seen that, when $N_t\geq N_e$, the transmitter uses its precoder to send a part of its information signal invisible to the eavesdropper, and to align the remaining part over jamming at the eavesdropper, while the cooperative jammer uses its precoder to send a part of its jamming signal invisible to the receiver, whenever possible. When $N_e>N_t$, more intricate precoding at the transmitter and cooperative jammer is required, where both the transmitter and cooperative jammer choose their precoders to achieve the alignment of information and jamming at the eavesdropper, and simultaneously, the cooperative jammer designs its precoder, whenever possible, to send a part of the jamming signal invisible to the receiver. The converse was established by allowing for full cooperation between the transmitter and cooperative jammer for a certain range of $N_c$, and by incorporating both the secrecy and reliability constraints, for the other values of $N_c$. We note that while this paper settles the s.d.o.f. of this channel, its secrecy capacity is still open. Additionally, while the model considered here assumes the channels to be known, universal secrecy as in [@he2014mimo] should be considered in the future.
Appendix A\
Choice of ${\bold{K}_t}$ and ${\bold{K}_c}$ {#App:AppendixA .unnumbered}
===========================================
The covariance matrices ${\bold{K}_t}$ and ${\bold{K}_c}$ are chosen so that they are [*[positive definite]{}*]{}, i.e., ${\bold{K}_t},{\bold{K}_c}\succ {\bold{0}}$, and hence non-singular, in order to guarantee the finiteness of $h({\tilde{\bold{Z}}_t})$ and $h({\tilde{\bold{Z}}_c})$ in (\[eq:conv\_2\_11\]). In addition, positive definite ${\bold{K}_t}$ and ${\bold{K}_c}$ result in positive definite $\bold{\Sigma}_{{\tilde{\bold{Z}}_1}}$ and $\bold{\Sigma}_{{\tilde{\bold{Z}}_2}}$, and hence, $h({\tilde{\bold{Z}}_1})$ and $h({\tilde{\bold{Z}}_2})$ in (\[eq:conv\_2\_13\]) are also finite.
For ${\bold{I}}_{N_e}-{\bold{G}_t}{\bold{K}_t}{\bold{G}_t^H}$ to be a valid covariance matrix for ${\tilde{\bold{Z}}_e}$ in (\[eq:conv\_2\_15\]), ${\bold{K}_t}$ has to satisfy ${\bold{G}_t}{\bold{K}_t}{\bold{G}_t^H}\preceq {\bold{I}}_{N_e}$, which is equivalent to $$\begin{aligned}
\label{eq:AppendixA_2}
||{\bold{K}_t}^{\frac{1}{2}}{\bold{G}_t^H}||\leq 1.\end{aligned}$$ Recall that $||{\bold{K}_t}^{\frac{1}{2}}{\bold{G}_t^H}||$ is the induced norm for the matrix ${\bold{K}_t}^{\frac{1}{2}}{\bold{G}_t^H}$.
Similarly, for ${\bold{I}}_N-{\bold{H}_c}{\bold{K}_c}{\bold{H}_c^H}$, ${\bold{I}}_{N_e}-{\bold{G}_t}{\bold{K}_t}{\bold{G}_t^H}-{\bold{G}_c}{\bold{K}_c}{\bold{G}_c^H}$, and ${\bold{I}}_N-{\bold{H}_{c_2}}'{\bold{K}_c}'{\bold{H}_{c_2}'^H}$ to be valid covariance matrices for ${\tilde{\bold{Z}}_r},{\tilde{\bold{Z}}'_e}$, and ${\tilde{\bold{Z}}'_r}$, in (\[eq:conv\_2\_25\]), (\[eq:conv\_2\_38\]), (\[eq:conv\_2\_44\_1\]), ${\bold{K}_t},{\bold{K}_c},{\bold{K}_c}'$ have to satisfy $$\begin{aligned}
\label{eq:AppendixA_3}
||{\bold{K}_c}^{\frac{1}{2}}{\bold{H}_c^H}||\leq 1,\quad ||{\bold{K}_t}^{\frac{1}{2}}{\bold{G}_t^H}||^2+||{\bold{K}_c}^{\frac{1}{2}}{\bold{G}_c^H}||^2&\leq 1,\quad \text{and } ||{\bold{K}_c}'^{\frac{1}{2}}{\bold{H}_{c_2}'^H}||\leq 1. \end{aligned}$$ In order to satisfy the conditions (\[eq:AppendixA\_2\]) and (\[eq:AppendixA\_3\]), we choose ${\bold{K}_t}=\rho^2 {\bold{I}}_{N}$, ${\bold{K}_c}=\rho^2 {\bold{I}}_{K}$, where $$\begin{aligned}
0<\rho&\leq 1/\max\left\{||{\bold{G}_t^H}||,||{\bold{H}_c^H}||,\sqrt{||{\bold{G}_t^H}||^2+||{\bold{G}_c^H}||^2},||{\bold{H}_{c_2}'^H}||\right\}\\
&=1/\max\left\{||{\bold{H}_c^H}||,\sqrt{||{\bold{G}_t^H}||^2+||{\bold{G}_c^H}||^2}\right\}.\end{aligned}$$
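As a quick numerical sanity check (not part of the converse argument; the antenna numbers and channel matrices below are randomly drawn placeholders), one can verify that the above choice of $\rho$ satisfies (\[eq:AppendixA\_2\]) and (\[eq:AppendixA\_3\]). Since the random stand-in for ${\bold{H}_{c_2}}'$ is not constructed from ${\bold{H}_c}$, the maximum is taken over all four norms from the first line above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, Ne = 4, 6, 3                 # placeholder antenna numbers, for illustration only

def cmat(rows, cols):
    """Random complex matrix standing in for a channel matrix."""
    return rng.normal(size=(rows, cols)) + 1j * rng.normal(size=(rows, cols))

Hc, Gt, Gc, Hc2p = cmat(N, K), cmat(Ne, N), cmat(Ne, K), cmat(N, K)

nrm = lambda A: np.linalg.norm(A, 2)   # induced (spectral) norm
rho = 1.0 / max(nrm(Gt.conj().T), nrm(Hc.conj().T),
                np.sqrt(nrm(Gt.conj().T)**2 + nrm(Gc.conj().T)**2),
                nrm(Hc2p.conj().T))

tol = 1e-12                            # allow for floating-point rounding
print(rho * nrm(Gt.conj().T) <= 1 + tol)
print(rho * nrm(Hc.conj().T) <= 1 + tol)
print((rho * nrm(Gt.conj().T))**2 + (rho * nrm(Gc.conj().T))**2 <= 1 + tol)
print(rho * nrm(Hc2p.conj().T) <= 1 + tol)
```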
Appendix B\
Derivation of (\[eq:conv\_2\_33\]), (\[eq:conv\_2\_34\]), and (\[eq:conv\_2\_48\]) {#App:AppendixB .unnumbered}
==================================================================================
In order to upper bound $h({Y_{r,k}}(i))$, for all $i=1,2,\cdots,n$ and $k=1,2,\cdots,N$, we first upper bound the variance of ${Y_{r,k}}(i)$, denoted by ${{\rm{Var}}}\{{Y_{r,k}}(i)\}$. Let ${\bold{h}}_{t,k}^{r}$ and ${\bold{h}}_{c,k}^{r}$ denote the transpose of the $k$th row vectors of ${\bold{H}_t}$ and ${\bold{H}_c}$, respectively. Let ${\bold{Z}_r}(i)=\left[Z_{r,1}(i)\cdots Z_{r,N}(i)\right]^T$. Using (\[eq:Yr1\]), ${Y_{r,k}}(i)$ is expressed as $$\begin{aligned}
\label{eq:AppendixB_1}
{Y_{r,k}}(i)={\bold{h}}_{t,k}^{r^T}{\bold{X}_t}(i)+{\bold{h}}_{c,k}^{r^T}{\bold{X}_c}(i)+Z_{r,k}(i).\end{aligned}$$ Thus, ${{\rm{Var}}}\{{Y_{r,k}}(i)\}$ can be bounded as $$\begin{aligned}
\label{eq:AppendixB_2}
{{\rm{Var}}}\left\{{Y_{r,k}}(i)\right\}&\leq {{\rm{E}}}\left\{{Y_{r,k}}(i){Y_{r,k}^{*}}(i)\right\}\\
\label{eq:AppendixB_3}
&={{\rm{E}}}\left\{|{\bold{h}}_{t,k}^{r^T}{\bold{X}_t}(i)|^2\right\}+{{\rm{E}}}\left\{|{\bold{h}}_{c,k}^{r^T}{\bold{X}_c}(i)|^2\right\}+{{\rm{E}}}\left\{|Z_{r,k}(i)|^2\right\}\\
\label{eq:AppendixB_4}
&\leq ||{\bold{h}}_{t,k}^{r}||^2 \;{{\rm{E}}}\left\{||{\bold{X}_t}(i)||^2\right\}+||{\bold{h}}_{c,k}^{r}||^2\;{{\rm{E}}}\left\{||{\bold{X}_c}(i)||^2\right\}+1\\
\label{eq:AppendixB_5}
&\leq 1+\left(||{\bold{h}}_{t,k}^{r}||^2+||{\bold{h}}_{c,k}^{r}||^2\right) P,\end{aligned}$$ where (\[eq:AppendixB\_4\]) follows from Cauchy-Schwarz inequality and monotonicity of expectation, and (\[eq:AppendixB\_5\]) follows from the power constraints at the transmitter and cooperative jammer.
Define $h^2=\underset{k}{\max}\;\left(||{\bold{h}}_{t,k}^{r}||^2+||{\bold{h}}_{c,k}^{r}||^2\right)$. Since $h(Y_{r,k}(i))$ is upper bounded by the entropy of a complex Gaussian random variable with the same variance, we have, for all $i=1,2,\cdots,n$ and $k=1,2,\cdots,N$, $$\begin{aligned}
\label{eq:AppendixB_6}
h({Y_{r,k}}(i))&\leq \log 2\pi e \left(1+\left(||{\bold{h}}_{t,k}^{r}||^2+||{\bold{h}}_{c,k}^{r}||^2\right) P\right)\\
\label{eq:AppendixB_7}
&\leq \log 2\pi e +\log (1+h^2P).\end{aligned}$$
Similarly, we have $$\begin{aligned}
\bar{Y}_{r,k}(i)={\bold{h}}_{t,k}^{r^T}{\bold{X}_t}(i)+{\bold{h}}_{c,k}'^{r^T}{\bold{X}_{c_2}}'(i)+Z_{r,k}(i),\end{aligned}$$ where ${\bold{h}}_{c,k}'^r$ is the transpose of the $k$-th row vector of ${\bold{H}_{c_2}}'$. Thus, we have, $$\begin{aligned}
h(\bar{Y}_{r,k}(i))\leq \log 2\pi e +\log (1+\bar{h}^2 P),\end{aligned}$$ where $\bar{h}^2=\underset{k}{\max}\;\left(||{\bold{h}}_{t,k}^r||^2+||{\bold{h}}_{c,k}'^r||^2\right)$.
Next, we upper bound $h(\tilde{X}_{t,k}(i))$. The power constraint at the transmitter, for $i=1,2,\cdots,n$, is ${{\rm{E}}}\left\{{\bold{X}_t^H}(i)\;{\bold{X}_t}(i)\right\}=\sum_{k=1}^N {{\rm{E}}}\left\{|X_{t,k}(i)|^2\right\}\leq P$. Thus, ${{\rm{E}}}\left\{|X_{t,k}(i)|^2\right\}\leq P$ for all $i=1,2,\cdots,n$, and $k=1,2,\cdots,N$ . Recall that $\tilde{X}_{t,k}(i)=X_{t,k}(i)+\tilde{Z}_{t,k}(i)$, where $X_{t,k}(i)$ and $\tilde{Z}_{t,k}(i)$ are independent, and the covariance matrix of ${\tilde{\bold{Z}}_t}$ is ${\bold{K}_t}=\rho^2{\bold{I}}_{N}$, where $0<\rho\leq \min\left\{\frac{1}{||{\bold{H}_c^H}||},\frac{1}{\sqrt{||{\bold{G}_t^H}||^2+||{\bold{G}_c^H}||^2}}\right\}$. Thus, ${{\rm{Var}}}\{\tilde{X}_{t,k}(i)\}$ is upper bounded as $$\begin{aligned}
\label{eq:AppendixB_8}
{{\rm{Var}}}\{\tilde{X}_{t,k}(i)\}&={{\rm{Var}}}\{X_{t,k}(i)\}+{{\rm{Var}}}\{\tilde{Z}_{t,k}(i)\}\\
\label{eq:AppendixB_10}
&\leq{{\rm{E}}}\left\{|X_{t,k}(i)|^2\right\}+\rho^2\leq P+\rho^2.\end{aligned}$$ Thus, for $i=1,2,\cdots,n$ and $k=1,2,\cdots,N$, we have $$\begin{aligned}
\label{eq:AppendixB_11}
h(\tilde{X}_{t,k}(i))\leq \log 2\pi e+\log (\rho^2+P).\end{aligned}$$ Similarly, using the power constraint at the cooperative jammer, we have, for $i=1,\cdots,n$ and $j=1,\cdots,K$, $$\begin{aligned}
\label{eq:AppendixB_12}
h(\tilde{X}_{c,j}(i))\leq \log 2\pi e+\log (\rho^2+P).\end{aligned}$$
Appendix C\
Proof of Lemma \[lemma1\] {#App:AppendixC .unnumbered}
=========================
Consider two matrices $\bold{Q}\in\mathbb{C}^{M\times K}$ and $\bold{W}\in\mathbb{C}^{K\times N}$ such that $\bold{Q}$ is full row-rank and $\bold{W}$ has all of its entries independently drawn from a continuous distribution, where $K>N,M$. Let $L=\min\{N,M\}$. We show that $\bold{Q}\bold{W}$ has rank $L$ a.s. The matrices $\bold{Q}$ and $\bold{W}$ can be written as $$\begin{aligned}
\bold{Q}&=\begin{bmatrix}\bold{q}_1&\bold{q}_2&\cdots&\bold{q}_K\\\end{bmatrix},\\ \bold{W}&=\begin{bmatrix}\bold{w}_1&\bold{w}_2&\cdots&\bold{w}_N\\\end{bmatrix},\end{aligned}$$ where $\bold{q}_1,\bold{q}_2,\cdots,\bold{q}_{K}$ are the $K$ length-$M$ column vectors of $\bold{Q}$, and $\bold{w}_1,\bold{w}_2,\cdots,\bold{w}_{N}$ are the $N$ length-$K$ column vectors of $\bold{W}$.
Let $w_{j,i}$ denote the entry in the $j$th row and $i$th column of $\bold{W}$. Let $\bold{Q}\bold{W}=[\bold{s}_1\;\bold{s}_2\;\cdots\;\bold{s}_N]$, where $\bold{s}_i$ is a length-$M$ column vector, $i=1,2,\cdots,N$. When $M\geq N$, $\bold{QW}=[\bold{s}_1\;\bold{s}_2\;\cdots\;\bold{s}_L]$, and when $M<N$, $\{\bold{s}_1,\bold{s}_2,\cdots,\bold{s}_L\}$ are the first $L$ columns of $\bold{QW}$. In order to show that the matrix $\bold{QW}$ has rank $L$, we show that, in either case, $\{\bold{s}_1,\bold{s}_2,\cdots,\bold{s}_L\}$ are a.s. linearly independent, i.e., $$\begin{aligned}
\label{eq:LinIndepCond}
\sum_{i=1}^L \lambda_i \bold{s}_i={\bold{0}}_{M\times 1}\end{aligned}$$ if and only if $\lambda_i=0$ for all $i=1,2,\cdots,L$.
Each $\bold{s}_i$, for $i=1,2,\cdots,L$, can be viewed as a linear combination of the $K$ columns of $\bold{Q}$ with coefficients that are the entries of the $i$th column of $\bold{W}$, i.e., $$\begin{aligned}
\bold{s}_i=\sum_{j=1}^{K} w_{j,i}\bold{q}_j.
\label{eq:gi}\end{aligned}$$
Using (\[eq:gi\]), we can rewrite (\[eq:LinIndepCond\]) as $$\begin{aligned}
\sum_{j=1}^{K} \varphi_j \bold{q}_j={\bold{0}}_{M\times 1}
\label{eq:LinIndepCond1}\end{aligned}$$ where, for $j=1,2,\cdots,K$, $$\begin{aligned}
\varphi_j=\sum_{i=1}^{L} \lambda_i w_{j,i}.
\label{eq:mj}\end{aligned}$$ The $K$ columns of $\bold{Q}$ are linearly dependent since each of them is of length $M$ and $K>M$. Thus, equation (\[eq:LinIndepCond1\]) has infinitely many solutions for $\left\{\varphi_j\right\}_{j=1}^{K}$.
Each of these solutions for $\varphi_j$’s constitutes a system of $K$ linear equations $\big\{\varphi_j=\sum_{i=1}^{L} \lambda_i w_{j,i},\\j=1,2,\cdots,K\big\}$. The number of unknowns in this system, i.e., the $\lambda_i$’s, is $L$. Since the number of equations in this system, $K$, is greater than the number of unknowns, $L$, this system has a non-zero solution for $\left\{\lambda_i\right\}_{i=1}^{L}$ only if the elements $\{w_{j,i}: j=1,2,\cdots,K,\; \text{and } i=1,2,\cdots,L\}$ are dependent. Since the entries of $\bold{W}$ are all randomly and independently drawn from some continuous distribution, the probability that these entries are dependent is zero.
Moreover, consider the set with infinite cardinality, where each element in this set is a structured $\bold{W}$ that causes the system of equations in (\[eq:mj\]) to have a solution for $\{\lambda_i\}_{i=1}^L$ for one of the infinitely many solutions of $\{\varphi_j\}_{j=1}^K$ to (\[eq:LinIndepCond1\]). This set with infinite cardinality has a measure zero in the space $\mathbb{C}^{K\times L}$, since this set is a subspace of $\mathbb{C}^{K\times L}$ with a dimension strictly less than $K\times L$. We conclude that (\[eq:LinIndepCond\]) a.s. has no non-zero solution for $\{\lambda_i\}_{i=1}^L$. Thus, $\bold{Q}\bold{W}$ has rank $L$ a.s.
If $\bold{Q}\bold{W}$ has rank $L$ a.s. , then so does $(\bold{Q}\bold{W})^T=\bold{W}^T\bold{Q}^T$. Setting $\bold{E}_1=\bold{W}^T$ and $\bold{E}_2=\bold{Q}^T$, we have $\bold{E}_1\in\mathbb{C}^{N\times K}$ has all of its entries independently drawn from some continuous distribution, $\bold{E}_2\in\mathbb{C}^{K\times M}$ is full column-rank, $K>N,M$, and $\bold{E}_1\bold{E}_2$ has rank $L=\min\{N,M\}$ a.s. Thus, Lemma \[lemma1\] is proved.
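A short numerical illustration of Lemma \[lemma1\] (not a substitute for the proof; the dimensions and the randomly drawn matrices below are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, N = 3, 7, 5                      # example dimensions with K > N, M
L = min(N, M)

Q = rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))   # generic, hence full row-rank a.s.
W = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))   # i.i.d. entries from a continuous distribution

assert np.linalg.matrix_rank(Q) == M       # Q is full row-rank
print(np.linalg.matrix_rank(Q @ W) == L)   # rank of QW equals min(N, M) almost surely
```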
Appendix D\
Derivation of (\[eq:Q\]) and (\[eq:a\]) {#App:AppendixD .unnumbered}
=======================================
The power constraints at the transmitter and cooperative jammer are ${{\rm{E}}}\left\{{\bold{X}_t^H}{\bold{X}_t}\right\}\leq P$ and ${{\rm{E}}}\left\{{\bold{X}_c^H}{\bold{X}_c}\right\}\leq P$. Using (\[eq:Xt1\_Xc1\]), we have $$\begin{aligned}
\label{eq:AppendixD_1}
{{\rm{E}}}\left\{{\bold{X}_t^H}{\bold{X}_t}\right\}&={{\rm{E}}}\left\{{\bold{U}_t^H}{\bold{P}_t^H}{\bold{P}_t}{\bold{U}_t}\right\}\\
\label{eq:AppendixD_3}
&=\sum_{i=1}^{d}\sum_{j=1}^{d}{\bold{p}}_{t,j}^H{\bold{p}}_{t,i}{{\rm{E}}}\left\{U_j^{*}U_i\right\}\\
\label{eq:AppendixD_4}
&=\sum_{i=1}^{d}||{\bold{p}}_{t,i}||^2{{\rm{E}}}\left\{|U_i|^2\right\}\\
\label{eq:AppendixD_5}
&=||{\bold{p}}_{t,1}||^2{{\rm{E}}}\left\{|U_1|^2\right\}+\sum_{i=2}^{d}||{\bold{p}}_{t,i}||^2 \left({{\rm{E}}}\left\{{U_{i,{{\rm{Re}}}}}^2\right\}+{{\rm{E}}}\left\{{U_{i,{{\rm{Im}}}}}^2\right\}\right)\\
\label{eq:AppendixD_6}
&\leq \left(||{\bold{p}}_{t,1}||^2+2\sum_{i=2}^{d}||{\bold{p}}_{t,i}||^2\right) a^2 Q^2,\end{aligned}$$ where (\[eq:AppendixD\_4\]) follows since $U_i$ and $U_j$, for $i\neq j$, are independent, and (\[eq:AppendixD\_6\]) follows since ${{\rm{E}}}\left\{U_1^2\right\}, {{\rm{E}}}\left\{{U_{i,{{\rm{Re}}}}}^2\right\}, {{\rm{E}}}\left\{{U_{i,{{\rm{Im}}}}}^2\right\}\leq a^2 Q^2$, for $i=2,3,\cdots,d$.
Similarly, using (\[eq:Xt1\_Xc1\]) and (\[eq:Ach\_2\_1\]), we have $$\begin{aligned}
\label{eq:AppendixD_7}
{{\rm{E}}}\left\{{\bold{X}_c^H}{\bold{X}_c}\right\}&={{\rm{E}}}\left\{{\bold{V}_c^H}{\bold{P}_c^H}{\bold{P}_c}{\bold{V}_c}\right\}=\sum_{i=1}^{l}{{\rm{E}}}\left\{|V_i|^2\right\}\\
\label{eq:AppendixD_9}
&={{\rm{E}}}\left\{V_1^2\right\}+\sum_{i=2}^{l}\left({{\rm{E}}}\left\{{V_{i,{{\rm{Re}}}}}^2\right\}+{{\rm{E}}}\left\{{V_{i,{{\rm{Im}}}}}^2\right\}\right)\\
\label{eq:AppendixD_10}
&\leq (2l-1) a^2Q^2. \end{aligned}$$
From (\[eq:AppendixD\_6\]) and (\[eq:AppendixD\_10\]), in order to satisfy the power constraints, we need that $$\begin{aligned}
\label{eq:AppendixD_11}
a^2Q^2\leq \gamma^2 P,\end{aligned}$$ where, $$\begin{aligned}
\label{eq:AppendixD_12}
\gamma^2=\frac{1}{\max\left\{2l-1,||{\bold{p}}_{t,1}||^2+2\sum_{i=2}^{d}||{\bold{p}}_{t,i}||^2\right\}}.\end{aligned}$$ Let us choose the integer $Q$ as $$\begin{aligned}
\label{eq:AppendixD_13}
Q=\left\lfloor P^{\frac{1-\epsilon}{2+\epsilon}}\right\rfloor =P^{\frac{1-\epsilon}{2+\epsilon}}-\nu,\end{aligned}$$ where $\nu\in[0,1)$ is the fractional part of $P^{\frac{1-\epsilon}{2+\epsilon}}$ and is bounded independently of the power $P$. Thus, $$\begin{aligned}
\label{eq:AppendixD_14}
a=\gamma P^{\frac{3\epsilon}{2(2+\epsilon)}},\end{aligned}$$ satisfies the condition in (\[eq:AppendixD\_11\]). Thus, the power constraints at the transmitter and cooperative jammer are satisfied.
[^1]: This paper was presented in part at the 2014 IEEE Information Theory Workshop, and the 2015 IEEE International Conference on Communications. This work was supported by NSF Grants CCF 09-64362, 13-19338 and CNS 13-14719.
[^2]: The distinction between matrices and random vectors is clear from the context.
[^3]: Throughout the paper, we omit the index $n$ whenever possible.
[^4]: We consider weak secrecy throughout this paper.
[^5]: The choice of ${\bold{K}_t}$ guarantees that ${\bold{I}}_{N_e}-{\bold{G}_t}{\bold{K}_t}{\bold{G}_t^H}$ is a valid covariance matrix.
[^6]: The choice of ${\bold{K}_c}$ guarantees that ${\bold{I}}_N-{\bold{H}_c}{\bold{K}_c}{\bold{H}_c^H}$ is a valid covariance matrix.
[^7]: The choice of ${\bold{K}_t}$ and ${\bold{K}_c}$ guarantees that ${\bold{I}}_{N_e}-{\bold{G}_t}{\bold{K}_t}{\bold{G}_t^H}-{\bold{G}_c}{\bold{K}_c}{\bold{G}_c^H}$ is a valid covariance matrix.
[^8]: The choice of ${\bold{K}_c}$ guarantees that ${\bold{I}}_N-{\bold{H}_{c_2}}'{\bold{K}_c}'{\bold{H}_{c_2}'^H}$ is a valid covariance matrix.
[^9]: Out of all possible sets of $d=N+N_c-N_e$ linearly independent vectors which span $\mathcal{N}(\bold{G})$, the columns of $\bold{Q}$ are the elements of one randomly chosen set.
---
abstract: 'We calculate the superfluid weight and the polarization amplitude for the one-dimensional bosonic Hubbard model focusing on the strong-coupling regime. Other than analytic calculations we apply two methods: variational Monte Carlo based on the Baeriswyl wave function and exact diagonalization. The former gives zero superfluid response at integer filling, while the latter gives a superfluid response at finite hopping. From the polarization amplitude we derive the variance and the associated size scaling exponent. Again, the variational study does not produce a finite superfluid weight at integer filling (size scaling exponent is near one), but the Fourier transform of the polarization amplitude behaves in a similar way to the result of exact diagonalization, with a peak at small hopping, and suddenly decreasing at the insulator-superfluid transition. On the other hand, exact diagonalization studies result in a finite spread of the total position which increases with the size of the system. In the superfluid phase the size scaling exponent is two as expected. Importantly, our work addresses the ambiguities that arise in the calculation of the superfluid weight in variational calculations, and we comment on the prediction of Anderson about the superfluid response of the model at integer filling.'
author:
- 'B. Hetényi$^{1,2}$, L. M. Martelo$^{3,4}$, and B. Tanatar$^1$'
title: 'Superfluid weight and polarization amplitude in the one-dimensional bosonic Hubbard model'
---
Introduction
============
The bosonic Hubbard model (BHM) was introduced by Gersch and Knollman [@Gersch63] to study the condensation of repulsive interacting bosons. The phase diagram of the superfluid-insulator transition was described by Fisher [*et al.*]{} [@Fisher89]. Since then the phase diagram has been calculated and refined by a variety of means, including mean-field theory [@Fisher89; @Rokhsar91], perturbative expansion [@Freericks96], quantum Monte Carlo [@Batrouni90; @Krauth91; @Scalettar05; @Rousseau06; @Kashurnikov96b; @Rombouts06] (QMC), density matrix renormalization group [@Kuhner00; @Ejima11; @Zakrzewski08; @Carrasquilla13; @Kuhner98], and exact diagonalization [@Kashurnikov96a]. For a review see Ref. [@Pollet12]. Due to the experimental realization of the model [@Jaksch98; @Greiner02] as ultracold atoms in an optical lattice, the model has gained renewed interest.
The BHM is often applied to model Bose solids, e.g. solid $^4$He [@Anderson12]. One question of interest in these systems is under what conditions a superfluid type response is exhibited [@Reatto88; @Vitiello88]. Some experimental [@Kim06] and some theoretical [@Galli05] results suggest that solid helium becomes supersolid at low temperatures. The experimental conclusions, some of which are based on torsional oscillator measurements, have since been challenged by the suggestion that other effects may be responsible for the observed drop in rotational inertia, such as quantum plasticity [@Beamish10], moreover the role played by defects still awaits clarification (see Ref. [@Hallock15] for an overview). For the BHM Anderson [@Anderson12] has suggested that the ground state at integer filling is a supersolid.
In this work we apply a variational Monte Carlo method we developed [@Hetenyi16] for strongly correlated bosonic models based on the Baeriswyl wavefunction [@Baeriswyl86; @Baeriswyl00; @Dzierzawa97; @Hetenyi10; @Dora15; @Yahyavi18] (BWF), together with exact diagonalization (ED), to study the superfluid response and the polarization amplitude [@Resta98; @Resta99; @Aligia99; @Souza00; @Nakamura02; @Yahyavi17; @Kobayashi18; @Hetenyi19; @Nakamura19; @Furuya19] of the one-dimensional BHM. We derive an expression for the superfluid weight which is valid in a variational context. We also emphasize the role of interference between the Peierls phases of different particles in calculating the superfluid weight. The polarization amplitude can be viewed as the characteristic function [@Yahyavi17; @Hetenyi19] of the polarization distribution, from which gauge invariant cumulants of the polarization [@Souza00; @Resta98; @Resta99; @Yahyavi17] can be derived. Recently there has been increasing interest in understanding topological systems [@Nakamura19; @Furuya19] via these quantities, and higher order cumulants can be connected [@Patankar18] to non-linear response.
For the phase diagram, our ED results show excellent agreement with the QMC results of Rousseau et al. [@Rousseau06; @Rousseau08a; @Rousseau08b]. In the case of integer filling, the superfluid weight obtained from ED and that obtained by our Monte Carlo method based on the BWF (BWF-MC) are in agreement for values of the hopping strength relative to the interaction ($J/U$) up to $J/U \sim 0.2$, while they differ for larger values of $J/U$ due to the inherent limitations of the methods. At half-filling the two sets of results are in reasonable agreement. At integer filling the ED shows the insulator-superfluid transition, while the BWF remains an insulator in the strong-coupling regime, a result expected [@Dzierzawa97] from this type of variational wave function. The Fourier transform of the polarization amplitude shows a peak at small hopping strength for both the ED and the BWF-MC, which in both cases decreases as the hopping strength is increased. This similarity is, however, only apparent: the insulator-superfluid transition is picked up only in ED, where the scaling exponent of the variance of the polarization indicates gap closure.
Our paper is organized as follows. In the next section we present the model and the two methods used in this work, ED and the BWF-MC method. Subsequently, we discuss the superfluid weight. In section \[sec:strong\] the strong-coupling expansion based on the BWF is presented. In section \[sec:polarization\] the polarization amplitude and cumulants are described. In section \[sec:results\] our results are presented. In the penultimate section we comment on the calculation of the superfluid weight by Anderson, subsequently we conclude our work.
![Phase diagram calculated by exact diagonalization for different system sizes, compared to results of Rousseau [*et al.*]{} [@Rousseau06] recalculated by the stochastic quantum Monte Carlo method [@Rousseau08a; @Rousseau08b]. The exact diagonalization results for the first Mott lobe are done for different system sizes ($L=8,10,12,14$), for the second and third Mott lobes (black solid lines) the system sizes are $L=8$ and $L=6$, respectively.[]{data-label="fig:pdED"}](./pdED_new.eps){width="8cm"}
Model and methods {#sec:modelmethods}
=================
The one-dimensional bosonic Hubbard model
-----------------------------------------
We study the BHM with nearest neighbor hopping in one dimension at fixed particle number. The Hamiltonian is $$\label{eqn:hml}
H = -J\sum_{i=1}^L \left(\hat{c}^\dagger_{i+1} \hat{c}_i + \hat{c}^\dagger_{i} \hat{c}_{i+1}\right) + U \sum_{i=1}^L \hat{n}_i (\hat{n}_i-1),$$ where $L$ denotes the number of sites, $J$ and $U>0$ are the hopping and repulsive on-site interaction parameters, respectively, $\hat{c}^\dagger_i$($\hat{c}_i$) denotes a bosonic creation(annihilation) operator at site $i$, and $\hat{n}_i$ denotes the density operator at site $i$.
Variational Monte Carlo based on the Baeriswyl wave function
------------------------------------------------------------
While the method we developed is described in detail elsewhere [@Hetenyi16], here we describe aspects that are relevant to this study. The BWF has the form $$\label{eqn:PsiBWF}
|\Psi_B\rangle = \exp(-\alpha\hat{T})|\infty\rangle,$$ where $\alpha$ denotes the variational parameter, and $|\infty\rangle$ is the wavefunction at $U=\infty$. $\hat{T}$ denotes the hopping operator (first term in Eq. (\[eqn:hml\])). The expectation value of some operator $\hat{O}$ can be written, $$\langle \hat{O} \rangle = \langle \Psi_B|\hat{O} |\Psi_B\rangle =
\frac{\langle \infty |e^{-\alpha\hat{T}}
\hat{O}e^{-\alpha\hat{T}}|\infty\rangle}{\langle \infty |e^{-2\alpha\hat{T}}|\infty\rangle}.$$ We take the operator $\hat{O}$ to be diagonal in the coordinate representation. Inserting coordinate identities, $\sum_{\bf x} |{\bf x}\rangle
\langle {\bf x}| = 1,$ results in $$\langle \hat{O} \rangle = \sum_{{\bf x}_L,{\bf x}_C, {\bf x}_R}
P({\bf x}_L,{\bf x}_C, {\bf x}_R) O({\bf x}_C),$$ where the probability distribution $P({\bf x}_L,{\bf x}_C, {\bf x}_R)$ is $$\begin{aligned}
P({\bf x}_L,{\bf x}_C, {\bf x}_R) =
\frac{1}{\tilde{Q}} \langle \infty |{\bf x}_L\rangle \langle {\bf x}_L
| \exp(-\alpha\hat{T}) | {\bf x}_C \rangle \\ \nonumber \langle {\bf x}_C |
\exp(-\alpha\hat{T})| {\bf x}_R \rangle \langle {\bf x}_R | \infty\rangle,\end{aligned}$$ with $\tilde{Q}$ the normalization. The vector ${\bf x}_{L/C/R}$ denotes the particle positions, each component being the position of one particle $x_{i,L/C/R}$, with $i=1,...,N$. Each particle is represented by three coordinates which we call the “left” ($L$), “center” ($C$), and “right” ($R$). Operators diagonal in the coordinate representation can be evaluated using the center coordinate, others cannot always be evaluated easily. An important exception is the kinetic energy, for which an estimator can be constructed by taking the derivative of the normalization with respect to $\alpha$. The crucial step in the construction of our method is that the kinetic energy propagator can be represented in real space as, $$\label{eqn:prpt}
\langle x | \exp(-\alpha\hat{T}) | x' \rangle =
\frac{1}{L}\sum_k e^{-\alpha \epsilon_k + i k (x-x')},$$ where $\epsilon_k = -2J\cos(k)$. Note that the full probability consists of a [*product*]{} of single particle propagators. We can write $$\langle {\bf x}_S| \exp(-\alpha\hat{T}) | {\bf x}_{S'} \rangle = \prod_{i=1}^N
\langle x_{i,S} | \exp(-\alpha\hat{T}) | x_{i,S'} \rangle,$$ where $S,S' = L,C,R$, where each member of the product consists of a propagator like Eq. (\[eqn:prpt\]). Bosonic exchanges have to be implemented by exchanging the positions of a pair of particles, either among the $L$ or $R$ coordinates and accepting or rejecting such exchange moves.
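To make the construction concrete, the following minimal Python sketch (illustrative only; it is not our production BWF-MC code) evaluates the single-particle propagator of Eq. (\[eqn:prpt\]), which is the building block of the weight $P({\bf x}_L,{\bf x}_C, {\bf x}_R)$.

```python
import numpy as np

def kinetic_propagator(L, alpha, J=1.0):
    """<x| exp(-alpha T) |x'> on a periodic L-site chain; depends only on (x - x') mod L."""
    k = 2.0 * np.pi * np.arange(L) / L          # allowed momenta
    eps = -2.0 * J * np.cos(k)                  # single-particle dispersion epsilon_k
    d = np.arange(L)                            # possible values of x - x' (mod L)
    g = (np.exp(-alpha * eps)[None, :] * np.exp(1j * np.outer(d, k))).sum(axis=1) / L
    g = g.real                                  # the exact result is real by k -> -k symmetry
    x = np.arange(L)
    return g[(x[:, None] - x[None, :]) % L]

# alpha -> 0 recovers the identity <x|x'>; the Monte Carlo weight of (x_L, x_C, x_R)
# is the product over particles of the propagators from x_L to x_C and from x_C to x_R,
# times the overlaps <infty|x_L> and <x_R|infty>.
print(np.allclose(kinetic_propagator(8, 0.0), np.eye(8)))
```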
![Superfluid weight based on exact diagonalization calculations and BWF-MC as a function of $J/U$ for different system sizes for fillings of one-half and one. The inset compares the energy per particle for integer filling for the Baeriswyl wave function based variational Monte Carlo method, exact diagonalization, and the strong coupling expansion based on the Baeriswyl wave function ($-2J^2/U$).[]{data-label="fig:ns_ED"}](./nsED_inset.eps){width="8cm"}
In our previous study [@Hetenyi16], we found excellent agreement between the results of our method and those of state-of-the-art QMC [@Rousseau06; @Rousseau08a; @Rousseau08b] calculations for the phase diagram of the one-dimensional BHM.
For the fermionic case the BWF already has a long history [@Baeriswyl86; @Baeriswyl00; @Dzierzawa97]. In particular, it was applied to study the Mott-Hubbard transition in infinite-dimensional hypercubic and hyperdiamond lattices [@Dzierzawa97], as well as in a monolayer graphene sheet [@Martelo97]. For a model of interacting spinless fermions the BWF produces excellent results for the ground state energy [@Dora15]. There are also other variational approaches to the BHM. The most well-known variational wavefunction for the fermionic Hubbard model is the Gutzwiller wavefunction [@Gutzwiller63; @Gutzwiller65], which in its original form becomes equivalent to mean-field theory [@Rokhsar91] when extended to the bosonic model, but extensions [@Rokhsar91; @Capello07] thereof go beyond the mean-field treatment.
Exact diagonalization
---------------------
For smaller systems, we diagonalize Eq. (\[eqn:hml\]) by the well-known iterative Lánczos scheme. An important aspect of the method we use is the indexing of the states, which is based on the Lehmer combinatorial code [@Lehmer60; @Laisant88], a way to order permutations. This procedure is also called Ponomarev ordering [@Ponomarev09; @Ponomarev10; @Ponomarev11] and has been implemented in the BHM by Raventos [*et al.*]{} [@Raventos17]. For a system with $N$ particles on $L$ sites the total number of states is $$\mathbb{N}_s= \frac{(N+L-1)!}{N!(L-1)!},$$ a number which grows exponentially with system size for fixed filling. In an exact diagonalization scheme the states need to be indexed, in other words, a scheme is needed which can associate a particular configuration of particles with an ordering integer. In our ED method this is achieved by first mapping the lattice of size $L$ with $N$ bosons onto an auxiliary lattice system with $N+L-1$ sites and $N$ particles, but one in which only a single particle can occupy a particular site. To connect the two systems, consider the following $L=4$ site, $N=3$ particle example. For a particular state, let the occupations of the bosonic system be $0,2,1,0$. The analogous occupation in the $6$-site auxiliary system is $011010$. In the auxiliary system occupation, ones which are neighboring each other indicate particles on the same site in the original system, whereas the zeros of the auxiliary occupation separate the different sites of the original lattice.
![Upper panel: Second cumulant of the polarization as a function of $J/U$ based on exact diagonalization calculations for different system sizes. The inset shows the size scaling exponent. Lower panel: Second cumulant of the polarization as a function of $J/U$ based on variational Monte Carlo calculations for different system sizes. The inset shows the size scaling exponent. []{data-label="fig:sigX2"}](./sigX2_ED.eps "fig:"){width="8cm"} ![Upper panel: Second cumulant of the polarization as a function of $J/U$ based on exact diagonalization calculations for different system sizes. The inset shows the size scaling exponent. Lower panel: Second cumulant of the polarization as a function of $J/U$ based on variational Monte Carlo calculations for different system sizes. The inset shows the size scaling exponent. []{data-label="fig:sigX2"}](./sigX2_BWFMC.eps "fig:"){width="8cm"}
The Lehmer code is a way to index the auxiliary system, which, incidentally, is the basis of a fermionic system with $N$ particles and $N+L-1$ sites. The index of a state is given by $$\mathbb{I} = 1 + \sum_{I=1}^N \binom{S_I - 1}{I},$$ where $I$ denotes the particle number, and $S_I$ the position of particle $I$ on the auxiliary lattice in the configuration corresponding to the state. One can easily generate the index $\mathbb{I}$ of a given configuration, but what is important is that the reverse procedure can also be implemented without much effort. Given a number $\mathbb{I}$ one first searches for the position of particle $N$ by finding the largest binomial coefficient of the form $\binom{S_N - 1}{N}<\mathbb{I}$. The number $S_N$ is the position of particle $N$. One then subtracts $\binom{S_N - 1}{N}$ from $\mathbb{I}$ and repeats the procedure for particle $N-1$, and so on until particle $1$.
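A minimal sketch of this indexing (an assumed implementation following the description above, not the production code) is the following; it reproduces the $L=4$, $N=3$ example and inverts the index.

```python
from math import comb

def occ_to_positions(occ):
    """Bosonic occupations (n_1,...,n_L) -> 1-based positions S_1 < ... < S_N on the auxiliary lattice."""
    bits = []
    for site, n in enumerate(occ):
        bits.extend([1] * n)
        if site < len(occ) - 1:
            bits.append(0)                      # a zero separates consecutive sites
    return [i + 1 for i, b in enumerate(bits) if b == 1]

def index_of(occ):
    """I = 1 + sum_I binom(S_I - 1, I)."""
    S = occ_to_positions(occ)
    return 1 + sum(comb(S[I - 1] - 1, I) for I in range(1, len(S) + 1))

def positions_of(index, N):
    """Invert the code: recover S_N, S_{N-1}, ..., S_1 from the index."""
    r = index - 1
    S = []
    for I in range(N, 0, -1):
        s = I
        while comb(s, I) <= r:                  # largest S_I with binom(S_I - 1, I) <= index - 1
            s += 1
        S.append(s)
        r -= comb(s - 1, I)
    return S[::-1]

occ = (0, 2, 1, 0)                              # the example from the text: auxiliary string 011010
print(occ_to_positions(occ))                    # [2, 3, 5]
print(index_of(occ))                            # an integer between 1 and binom(6, 3) = 20
print(positions_of(index_of(occ), 3) == occ_to_positions(occ))   # True
```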
Superfluid weight {#sec:SFW}
=================
In this section we derive the expression for the superfluid weight valid in variational calculations. We also place emphasis on allowing for interference between the Peierls phases of the particles of the system.
It is not immediately obvious that there is an issue with this quantity when it is considered specifically in a variational context. Usually the second derivative of the variational ground state energy with respect to the flux at zero Peierls flux is calculated [@Millis91] (see Eq. (\[eqn:hml\_Phi\]) for how the Peierls phase enters). One way to elucidate the issue [@Hetenyi14] is to consider that a variational estimate of the ground state energy is a weighted average of exact energy eigenvalues, $$\label{eqn:PE}
E_{var} = \sum_i P_i(\Phi) E_i(\Phi),$$ where $P_i = |\langle \Psi(\Phi) | \psi_i(\Phi) \rangle|^2$ ($|\Psi(\Phi)\rangle$ denotes the variational wave function, $|\psi_i(\Phi) \rangle$ denotes the $i$th eigenstate of the Hamiltonian), and $E_i(\Phi)$ denotes the $i$th energy eigenvalue. We argue here that the correct superfluid weight, when calculated in a variational calculation, is given by the expression $$\label{eqn:n_S_v}
n_S = \frac{1}{L}\left.\sum_i P_i(\Phi) \frac{\partial^2 E_i (\Phi)}{\partial \Phi^2}\right|_{\Phi=0},$$ in other words, the derivative of $P_i(\Phi)$ with respect to the flux need not be taken, in spite of its $\Phi$-dependence. However, in actual variational calculations, often neither $E_i(\Phi)$, nor $P_i(\Phi)$ are available, so we give an alternative, but equivalent expression for $n_S$, applicable in numerical settings.
Let us briefly recall the arguments of Pollock and Ceperley [@Pollock87]. In their work a continuous system of interacting atoms is considered, at finite temperature. Below we modify their steps to account for our lattice model. Pollock and Ceperley [@Pollock87] considered a thought experiment, aimed to mimic the rotating bucket experiments on superfluids. In this setup, the sample is rotated and the rotational inertia is measured. Below the critical temperature, where the superfluid fraction ceases to rotate with the container, the rotational inertia takes a non-classical value [@Fisher73; @Rousseau14], different from the rotational inertia calculated from the amount of fluid present in the container. In the context of supersolidity Leggett suggested [@Leggett70] that torsional oscillator experiments can access the rotational inertia, and therefore the superfluid weight.
In Ref. [@Pollock87] the sample is enclosed between two circular walls, one of radius $R$, the other of radius $R+d$. When $R\gg d$, the system becomes equivalent to the sample between two parallel planes. In the experiment the walls are moved by an outside agent with velocity $v$. It is expected that the normal component of the fluid will move with the walls, due to friction, while the superfluid component will remain stationary in the laboratory frame.
The density matrix of the system is $$\hat{\rho}_v = \exp(-\beta \hat{H}_v),$$ where $\beta$ denotes the inverse temperature, and $$\hat{H}_v = \sum_{j=1}^N \frac{(\hat{p}_j - m v)^2}{2m} + \hat{V},$$ where $\hat{p}_j$ denotes the momentum of an individual particle, $m$ denotes the mass of a particle, and $\hat{V}$ denotes an interaction. The total momentum of the system (the momentum of the normal component) is given by $$\label{eqn:rho_n}
\frac{\rho_n}{\rho} N m v = \frac{\mbox{Tr}\{\hat{\rho}_v \hat{P}\}}{\mbox{Tr}\{\hat{\rho}_v \}},$$ where $\rho_n$ denotes the density of the normal phase, $\rho$ denotes the total density, and $$\label{eqn:P}
\hat{P} = \left.\frac{\partial{\hat{H}_v}}{\partial v}\right|_{v=0} = \sum_{j=1}^N \hat{p}_j.$$ We can write the free energy of the system as $$\exp(-\beta F_v) = \mbox{Tr} \{ \exp(-\beta \hat{H}_v)\}.$$ Taking the first derivative of $F_v$ with respect to $v$ results in $$\label{eqn:delFv}
\frac{\partial F_v}{\partial v} = N m v \left( 1 - \frac{\rho_n}{\rho}\right),$$ resulting in the superfluid weight of $$\label{eqn:rho_s}
\frac{\rho_s}{\rho} = \frac{1}{N m} \left.\frac{\partial^2 F_v}{\partial v^2}\right|_{v=0}.$$
In a lattice model the analog of the velocity $v$ is the Peierls phase, which we introduce into the Hamiltonian as $$\label{eqn:hml_Phi}
H(\Phi) = -J\sum_{i=1}^L \left(e^{i \Phi}\hat{c}^\dagger_{i+1}
\hat{c}_i + e^{-i \Phi}\hat{c}^\dagger_{i} \hat{c}_{i+1}\right) + U
\sum_{i=1}^L \hat{n}_i (\hat{n}_i-1).$$ In exact diagonalization calculations, the analog of Eq. (\[eqn:rho\_s\]), the superfluid weight, is obtained by calculating the response of the system to the Peierls phase (or boundary twist) as, $$\label{eqn:ns}
n_S = \frac{1}{L}\left[ \frac{\partial^2 E_g(\Phi)}{\partial \Phi^2}
\right]_{\Phi=0}.$$ Here $E_g(\Phi)$ denotes the ground state energy of the system, since our lattice system is studied at zero temperature.
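For concreteness, the following minimal sketch (full diagonalization of a very small system threaded by the Peierls phase of Eq. (\[eqn:hml\_Phi\]), with a small but finite twist for the numerical second derivative; it is not the Lanczos code used for the results below, and the parameter values are only examples) evaluates Eq. (\[eqn:ns\]).

```python
import numpy as np
from itertools import product

def basis(L, N):
    """All occupation-number configurations of N bosons on L sites."""
    return [occ for occ in product(range(N + 1), repeat=L) if sum(occ) == N]

def hamiltonian(L, N, J, U, phi):
    """Bosonic Hubbard Hamiltonian with a Peierls phase phi on every bond."""
    states = basis(L, N)
    idx = {occ: a for a, occ in enumerate(states)}
    H = np.zeros((len(states), len(states)), dtype=complex)
    for a, occ in enumerate(states):
        H[a, a] = U * sum(n * (n - 1) for n in occ)
        for i in range(L):                     # hop i -> i+1 (periodic)
            j = (i + 1) % L
            if occ[i] > 0:
                new = list(occ)
                new[i] -= 1
                new[j] += 1
                b = idx[tuple(new)]
                amp = -J * np.exp(1j * phi) * np.sqrt(occ[i] * (occ[j] + 1))
                H[b, a] += amp                 # e^{+i phi} c^dag_{i+1} c_i
                H[a, b] += np.conj(amp)        # Hermitian conjugate term
    return H

def ground_energy(L, N, J, U, phi):
    return np.linalg.eigvalsh(hamiltonian(L, N, J, U, phi))[0]

def superfluid_weight(L, N, J, U, dphi=1e-3):
    """n_S = (1/L) d^2 E_g / dPhi^2 at Phi = 0, by a central finite difference."""
    e0 = ground_energy(L, N, J, U, 0.0)
    return (ground_energy(L, N, J, U, dphi) - 2 * e0 + ground_energy(L, N, J, U, -dphi)) / (L * dphi**2)

print(superfluid_weight(L=6, N=3, J=1.0, U=4.0))   # half filling: a finite n_S is expected
print(superfluid_weight(L=5, N=5, J=0.1, U=1.0))   # filling one, small J/U: expected to be very small
```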
In the BWF-MC method we use, where the exact eigenenergies are not available, it is not possible to calculate the superfluid weight directly (via Eq. (\[eqn:ns\])). Instead we derive the superfluid weight in a variational context based on the reasoning of Pollock and Ceperley [@Pollock87]. Our starting point is the normalization of the Baeriswyl wavefunction, $$\label{eqn:NPhi}
\tilde{Q}_\Phi = \langle \infty | \exp \left( - 2 \alpha
\hat{T}(\Phi) \right) | \infty \rangle = \exp(-2\alpha \tilde{F}_\Phi),$$ which is analogous to the partition function in statistical mechanics. We also defined $\tilde{F}_\Phi$, the “free energy” in the variational context. The analog of Eq. (\[eqn:delFv\]) is $$\frac{\partial \tilde{F}_\Phi}{\partial \Phi} = \cos \Phi J_\Phi -
\sin \Phi T_\Phi,$$ where $$\label{eqn:J_Phi}
J_\Phi = \langle \infty | \exp \left( - \alpha
\hat{T}(\Phi) \right)\hat{J}\exp \left( - \alpha
\hat{T}(\Phi) \right)
| \infty \rangle,$$ with the current operator $\hat{J}$ being $$\label{eqn:J}
\hat{J} = \left.\frac{\partial H(\Phi)}{\partial \Phi} \right|_{\Phi=0} = -2J
\sum_k \sin(k) n_k,$$ and $$\label{eqn:T_Phi}
T_\Phi = \langle \infty | \exp \left( - \alpha
\hat{T}(\Phi) \right)\hat{T}\exp \left( - \alpha
\hat{T}(\Phi) \right)
| \infty \rangle,$$ with $\hat{T}$ indicating the hopping energy operator at zero flux. Note that $\hat{J}$ is the lattice analog of the total momentum operator (compare Eq.(\[eqn:J\]) and Eq. (\[eqn:P\])).
From Eq. (\[eqn:delFv\]), we derive our proposed expression for the superfluid weight for variational calculations, $$\label{eqn:n_S}
n_S = \frac{1}{L} \left.\frac{\partial^2 \tilde{F}_\Phi}{\partial \Phi^2}\right|_{\Phi=0} = \frac{1}{L}\left(
\frac{\partial J_0}{\partial \Phi} - T_0\right).$$ In the case of an exact eigenstate, this expression corresponds to Eq. (\[eqn:ns\]), but in the case of a weighted average of the type in Eq. (\[eqn:PE\]) it corresponds to Eq. (\[eqn:n\_S\_v\]). To see this, we first write $T_0$ as $$T_0 = \sum_i P_i T_i$$ where $T_i$ denotes the expectation value of the kinetic energy in eigenstate $i$, and write $J_\Phi$ as $$J_\Phi = \sum_i P_i(\Phi) J_i(\Phi),$$ where $J_i(\Phi)$ denotes the expectation value of the current operator ($\hat{J}$) in the eigenstate $|\psi_i(\Phi)\rangle$, and take the derivative with respect to $\Phi$, resulting in $$\frac{\partial J_\Phi}{\partial \Phi} = \sum_i \left[
\frac{\partial P_i(\Phi)}{\partial \Phi} J_i(\Phi)
+
P_i(\Phi) \frac{\partial J_i(\Phi)}{\partial \Phi}
\right].$$ Setting $\Phi=0$, we can use the fact that $J_i(0)=0$, leaving us with $$n_S = \frac{1}{L} \sum_i P_i \left( \frac{\partial J_i(0)}{\partial \Phi}
- T_i\right),$$ which is exactly Eq. (\[eqn:n\_S\_v\]).
The second derivative in Eq. (\[eqn:n\_S\]) involves two limits, $\Phi \rightarrow 0$ and $L \rightarrow \infty$. In one dimension the order of limits is inconsequential; the superfluid weight is obtained in both cases [@Scalapino92; @Scalapino93]. Below we evaluate the second derivative in Eq. (\[eqn:n\_S\]) by way of a finite difference on the grid $\Phi = m 2\pi/L$, with $m$ integer.
Some care needs to be exercised in applying the derivative with respect to $\Phi$. It appears highly tempting to implement the Peierls phase as a shift in $k$, as $\epsilon_k \rightarrow
\epsilon_{k+\Phi}$ in each single particle propagator (Eq. (\[eqn:prpt\])). This approach leads to a finite superfluid weight even at integer filling. However, in this case the interference between the particles is neglected.
To elaborate, let us write $\tilde{Q}_\Phi$ in the coordinate representation as $$\tilde{Q}_\Phi = \sum_{\bf x_L,x_R}\langle \infty | {\bf x_L} \rangle
\langle {\bf x_L} | \exp \left( -2 \alpha \hat{T}(\Phi) \right) |{\bf x_R} \rangle \langle {\bf x_R} | \infty \rangle,$$ where ${\bf x_{L/R}} = x_{L/R,1},...,x_{L/R,N}$, indicating particle positions in the “left” or “right” coordinate bases. For the moment, let us consider a one-particle propagator, $$\langle x | \exp \left( -2 \alpha \hat{T}(\Phi) \right) |x' \rangle =
\langle x | (1 - 2 \alpha \hat{T}(\Phi) + 2 \alpha^2 \hat{T}(\Phi)\hat{T}(\Phi) - \cdots) |x' \rangle$$ A term in which the difference between $x'$ and $x$ is a fixed number will involve all different paths which go from $x$ to $x'$. The different paths may have a different number of hoppings, left and right, but the difference between $x'$ and $x$ is the same. Each right hopping contributes a phase of $\Phi$, while each left hopping contributes a phase of $-\Phi$. Thus, the net change in phase is determined solely by $x'-x$, and we can write $$\begin{aligned}
\langle x | \exp \left( -2 \alpha \hat{T}(\Phi) \right) |x' \rangle = &
\exp\left[ i \Phi (x'-x) \right] \times \\
\nonumber & \langle x | \exp \left( -2 \alpha \hat{T}(0) \right) |x' \rangle.\end{aligned}$$ We used the fact that for a periodic system $\langle x | x' \rangle =
\delta_{x,x'+L}$. Armed with this, we can rewrite $\tilde{Q}(\Phi)$ as $$\begin{aligned}
\tilde{Q}_\Phi = \sum_{\bf x_L,x_R}
\exp\left[ i \Phi \sum_{i=1}^N (x_{R,i} - x_{L,i}) \right] \times \\ \nonumber
\hspace{.5cm} \langle \infty | {\bf x_L} \rangle
\langle {\bf x_L} | \exp \left( -2 \alpha \hat{T}(0) \right) |{\bf x_R} \rangle \langle {\bf x_R} | \infty \rangle.\end{aligned}$$ We note that the operator appearing in the exponential $\sum_{i=1}^N
(x_{R,i} - x_{L,i})$ is a sum of differences between single particle positions. The contribution from different particles are now added, and they can cancel. The position operator is undefined in a periodic system, but its exponential, provided that $\Phi = m 2 \pi/L$ is well-defined. Such an operator is used in the many-body analog of the modern theory of polarization [@Resta98; @Resta99], also to express a second derivative in the momentum shift [@Hetenyi19]. Following the same steps, we can write $$n_S = - \lim_{L \rightarrow \infty} \left[ \frac{L^2}{4 \alpha \pi^2} \right] \mbox{Re} \ln \tilde{Q}_{2 \pi / L}$$ If the system has integer filling, it always holds that $$\sum_{i=1}^N (x_{R,i} - x_{L,i}) = 0,$$ leading to a superfluid weight of zero. According to this derivation a finite superfluid weight can only arise for non-integer fillings. This coincides with previous results on the Drude weight of the Baeriswyl wave function in fermionic systems [@Dzierzawa97].
The derivation we provided above is specific to the BWF, in which the hopping energy appears explicitly in the projector. In variational wave functions where that is not the case, one has to apply a Baeriswyl projector, threaded by a Peierls flux, and take the limit of $\alpha \rightarrow 0$ in the final expression for $n_S$.
Strong coupling expansion based on the Baeriswyl wave function {#sec:strong}
==============================================================
The BHM has been studied via a number of strong coupling expansions [@Freericks96; @Elstner99; @Damski06; @Krutitsky08; @Heil12]. Such expansions in the case of the BHM are usually more difficult than in the case of the fermionic Hubbard model, where the Heisenberg model is obtained as the limiting case. The domain for which this approach is valid can be extended by augmenting [@Hetenyi08] the relevant canonical transformation with a variational parameter.
In the atomic limit, the 1D BHM ground state at filling one is given by a fully localized boson state $$|\infty\rangle =\prod_{i=1}^L {\hat c}_i^\dagger|0\rangle.$$
E_B(\alpha,\Phi) =
\frac{\langle \Psi_B(\Phi)| {\hat H}(\Phi) | \Psi_B(\Phi) \rangle}
{ \langle \Psi_B(\Phi)|\Psi_B(\Phi)\rangle},$$ by performing a strong-coupling expansion ($J/U \ll 1$) via expanding the kinetic projector and keeping the leading order terms in $J/U$ in all relevant quantities. An important point is that successive applications of the kinetic operator on $| \infty \rangle$ (a state with all sites occupied by one particle) in expectation values of the type in Eq. (\[eqn:EBalphaPhi\]) should be such that one returns to the state $\langle \infty|$.
Up to first order in $J/U$ the BWF is given by $$\begin{aligned}
\label{eqn:PsiSC}
|\Psi_B (\Phi) \rangle & = |\infty\rangle + \alpha J
|\Psi_d(\Phi)\rangle\end{aligned}$$ where $\vert \Psi_d (\Phi)\rangle = \sum_{i=1}^L \left( e^{i\Phi}
\hat{c}^\dagger_{i+1} \hat{c}_i + e^{-i\Phi} \hat{c}^\dagger_{i}
\hat{c}_{i+1}\right) |\infty\rangle$, a state which is a superposition of states with occupation number $n_i$ being $0$ and $2$ for a pair of nearest neighboring sites and $1$ for all other sites.
The pair occupation, needed for the potential energy, within our approximation is given by $$\label{nd}
n_p = \langle \Psi_B(\Phi)| {\hat n}_i ( {\hat n}_i-1) |\Psi_B(\Phi)\rangle
= 8 \alpha^2 J^2$$ and the average of the potential energy by $$\label{EpalphaPhiL}
\langle \Psi_B (\Phi)|{\hat U}|\Psi_B (\Phi)\rangle = 8 \alpha^2 J^2 U L$$ where ${\hat U}$ is the second term in the r.h.s. of Eq. (\[eqn:hml\]). The boson momentum distribution reads $$n(k,\Phi) = \langle \Psi_B(\Phi)| {\hat n}_k|\Psi_B(\Phi)\rangle = 1 + 8 \alpha J \cos(k+\Phi)\\
\label{nkalphaPhiL}$$ where ${\hat n}_k = {\hat c}_k^\dagger {\hat c}_k$ and $\hat{c}^\dagger_{k} = \frac{1}{\sqrt{L}} \sum_{i=1}^{L} e^{-ik x_i}\hat{c}^\dagger_{i}$. The above result is obtained by performing the calculations in momentum space, and the exponentials $e^{\pm i(k+\Phi)}$ cause the momentum distribution to depend on $k+\Phi$. The average of the kinetic energy is $$\langle \Psi_B (\Phi)|{\hat T} (\Phi)|\Psi_B (\Phi)\rangle
= \sum_{k} \epsilon_{k+\Phi}n(k,\Phi) = -8 \alpha J^2 L
\label{EkalphaPhiL}$$ Note that both $\epsilon_{k+\Phi}$ and $n(k,\Phi)$ depend on $k+\Phi$; therefore the kinetic energy is [*[not flux dependent]{}*]{}, as expected, since, at filling one, the product $\epsilon_{k+\Phi}n(k,\Phi)$ just represents an overall shift in the Brillouin zone relative to the system without flux. Optimizing the total energy at any flux leads to $\alpha^* = \frac{1}{2U}$. The optimal variational wavefunction is given by $$\label{eqn:PsiSC1}
|\Psi_B (\Phi,\alpha^*)\rangle = |\infty\rangle + \frac{J}{2U} |\Psi_d(\Phi)\rangle$$ and the ground state energy per site by $$E_B(\alpha^*) = - \frac{2 J^2}{U},$$ where we dropped the $\Phi$ dependence from the notation. The inset in Fig. \[fig:ns\_ED\] compares this approximation to the results of exact diagonalization for fourteen sites, and BWF-MC results for a system of $L=160$.
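The minimization behind $\alpha^*$ can be checked symbolically; the short sketch below uses the per-site energy $E(\alpha)=-8\alpha J^2+8\alpha^2 J^2 U$ obtained from Eqs. (\[EpalphaPhiL\]) and (\[EkalphaPhiL\]).

```python
import sympy as sp

alpha, J, U = sp.symbols('alpha J U', positive=True)
E = -8 * alpha * J**2 + 8 * alpha**2 * J**2 * U        # energy per site to leading order
alpha_star = sp.solve(sp.diff(E, alpha), alpha)[0]
print(alpha_star, sp.simplify(E.subs(alpha, alpha_star)))   # 1/(2*U) and -2*J**2/U
```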
As for the superfluid weight (Eq. (\[eqn:n\_S\])), we first express the total current operator under a flux $\Phi$ within our approximation as $$\begin{aligned}
{\hat J} (\Phi) & = & \frac{ \partial {\hat T}(\Phi)}{\partial \Phi} =
2J \sum_{k} \sin(k+\Phi)\hat{c}^\dagger_{k} \hat{c}_{k},\end{aligned}$$ and its average over $|\Psi_B (\Phi)\rangle$ is zero $$\begin{aligned}
J_{\Phi}(\Phi) = 2J \sum_{k} \sin(k+\Phi) n(k,\Phi) = 0
\label{JkalphaPhiL}\end{aligned}$$ indicating the insulating character of $|\Psi_B (\Phi)\rangle$ at filling one. The average of the current operator expressed by Eq. (\[eqn:J\_Phi\]) reads $$\begin{aligned}
J_\Phi & = & 2J \sum_k \sin(k)n(k,\Phi) = -\frac{4J^2}{U}
\sin\Phi L \end{aligned}$$ while the average of the kinetic energy, expressed by Eq. (\[eqn:T\_Phi\]) reads $$\begin{aligned}
T_\Phi = \sum_k \epsilon(k) n(k,\Phi) = -\frac{4J^2}{U} \cos\Phi L\end{aligned}$$ Using Eq. (\[eqn:n\_S\]) $n_S=0$ for the BHM at filling one in the strong coupling expansion, in agreement with the previous ED and BWF-MC results.
For the 1D BHM at filling one, up to order $(J/U)^2$, the kinetic and potential energy averages per site (at the optimal variational parameter) read $\langle{\hat T}\rangle = -\frac{4J^2}{U}$ and $\langle {\hat U}\rangle = \frac{2J^2}{U}$, respectively. Physically, this means that, through the kinetic exchange mechanism, the kinetic energy gain is larger than the on-site potential energy cost, in perfect analogy with the Fermionic Hubbard Model (FHM) in the strong coupling limit [@Fazekas99]. Along the same lines, the pair occupancy, Eq. (\[nd\]), and the momentum distribution for $\Phi=0$, Eq. (\[nkalphaPhiL\]), read $$\begin{aligned}
n_p & = & \frac{2J^2}{U^2}\end{aligned}$$ and $$\label{eq:nkSCBWF2}
n(k) = 1 - \frac{2 \epsilon_k}{U}$$ also in perfect analogy with the FHM in the strong coupling limit, except, obviously, for the absence of the spin-spin term [@Fazekas99]. It is also worth mentioning that the momentum distribution for $\Phi=0$, Eq. (\[eq:nkSCBWF2\]), is in agreement with that of Ref. [@trivedi2009].
Polarization amplitude {#sec:polarization}
======================
For a periodic lattice system the total position operator is ill-defined. In a many-body system the standard approach [@Resta98] is to calculate the expectation value of the twist operator [@Lieb61], also known as the polarization amplitude [@Kobayashi18], $$Z_q = \langle \Psi |\exp \left( i \frac{2 \pi q }{L} \hat{X}\right)|
\Psi \rangle,$$ where $\hat{X} = \sum_{i=1}^L x_i\hat{n}_i$, with $x_i$ the coordinate of site $i$. This quantity can be interpreted as a characteristic function, and derivatives with respect to $q$ give the average of $\hat{X}$, the variance of $\hat{X}$, or higher order cumulants. The second cumulant, $\sigma^2_N$, which measures the spread of the center of mass, can be used to determine whether a system is a conductor or an insulator [@Kohn64; @Resta98; @Resta99; @Souza00; @Yahyavi17]. In a finite system the definition of the spread is $$\label{eqn:totX}
\sigma^2_N = -\frac{L^2}{4\pi^2} \left. \frac{ \Delta^2 \ln
Z_q}{\Delta q^2}\right|_{q=0},$$ where $\frac{\Delta G}{\Delta q}$ indicates a discrete derivative of $G$ with respect to $q$. The commonly used expression of Resta and Sorella [@Resta98; @Resta99] for the variance of the total position is $\sigma^2 = -\frac{L^2}{2\pi^2} \mbox{Re} \ln Z_1$, consistent with Eq. (\[eqn:totX\]); however, for small system sizes scaling exponents are difficult to obtain from Eq. (\[eqn:totX\]).
\sigma^2_N = -\frac{L^2}{4\pi^2}
\left[\left(
\left. \frac{ \Delta^2 Z_q}{\Delta q^2}\right|_{q=0}\right)
-
\left(\left. \frac{ \Delta Z_q}{\Delta q}\right|_{q=0}\right)^2
\right].$$ In our calculations below we will use Eq. (\[eqn:totX2\]) to calculate the variance and the size scaling exponent $\gamma$ from an assumed scaling form of $$\sigma^2_N(L)=aL^\gamma.$$ We also calculate the discrete Fourier transform of $Z_q$, which we define as $$\tilde{Z}_x = \sum_{q=1}^L \exp\left(i \frac{2 \pi q x}{L}\right) Z_q,$$ where $x = 1,...,L$.
![$\tilde{Z}_x$, the Fourier transform of the function $Z_q$, as a function of $x$ (the variable conjugate to $q$) and $J/U$ for a system of filling one. Upper panel: exact diagonalization ($L=14$), lower panel: BWF-MC ($L=80$). []{data-label="fig:heatmap"}](./heatmap.eps "fig:"){width="8cm"} ![$\tilde{Z}_x$, the Fourier transform of the function $Z_q$, as a function of $x$ (the variable conjugate to $q$) and $J/U$ for a system of filling one. Upper panel: exact diagonalization ($L=14$), lower panel: BWF-MC ($L=80$). []{data-label="fig:heatmap"}](./bwf_heatmap.eps "fig:"){width="8cm"}
Numerical Results {#sec:results}
=================
In Fig. \[fig:pdED\] we show Mott lobes of the phase diagram calculated by ED compared to the QMC results of Rousseau et al. [@Rousseau06; @Rousseau08a; @Rousseau08b]. The agreement between the ED results and the QMC ones is reasonable, especially considering that the system sizes treated by ED are quite small, up to $L=14$ for the first Mott lobe, and $L=8$ and $L=6$ for the second and third, respectively. As $J/U$ increases, size dependence of the phase lines is visible, a result also found by Raventos [*et al.*]{} [@Raventos17]. There is also good agreement between ED and our previous BWF-MC up to $J/U\approx 0.4$ (cf. Fig. 6 of Ref. [@Hetenyi16]). Fig. \[fig:ns\_ED\] shows the superfluid weight for different system sizes at filling one-half and one. The half-filled system is always superfluid. The BWF-MC and ED results are reasonably close in the case of half filling. For integer filling only the ED results are shown, since the BWF-MC results are zero, as shown above. The ED results show no superfluid response for small hopping values up to $J/U \sim 0.2$. At that point the superfluid weight starts to grow for the smallest system size. Larger system sizes persist in an insulating state until larger values of $J/U$, but in all of them the superfluid response is noticeable by $J/U \approx 0.3$. In the half-filled system the energy as a function of $\Phi$ was found to be discontinuous at a finite value of $\Phi$, indicating a level crossing. No level crossing was found for filling one. We note that at filling one the gap closure (corresponding to the Kosterlitz-Thouless transition) occurs [@Kuhner00; @Ejima11; @Zakrzewski08; @Kashurnikov96a] at $(J/U)_{KT}\approx 0.6$. In particular, the ED calculation extended by a renormalization group analysis by Kashurnikov and Svistunov [@Kashurnikov96a] gives $J_{KT}=0.608(4)$, nearly twice the value where the superfluid weight begins to grow in Fig. \[fig:ns\_ED\]. Actually, for hopping values $J/U \sim 0.3-0.5$, ED indicates a non-zero superfluid weight, but this is not a reliable result, since we are approaching a quantum transition, $(J/U)_{KT} \approx 0.6$, and the smallness of the system sizes, up to $L=14$, becomes highly manifest.
![Standard deviation of the position (divided by the system size) vs. system size, for a system with $J/U=0.25$ at filling one. Upper panel: total position for a bosonic system, lower panel: single-particle position with bosonic exchange moves turned off.[]{data-label="fig:x"}](./x_ins.eps){width="6cm"}
In Fig. \[fig:sigX2\] the variance of the total position $\sigma^2_N$ and its size scaling exponent $\gamma$ are shown. The size scaling exponent $\gamma$ is shown in the insets. The size scaling exponent $\gamma$ is known [@Hetenyi19] to take the value two in a closed gap (conducting) system, and the value one for an insulating system. We notice that in the upper panel of Fig. \[fig:sigX2\], the exact diagonalization calculations bear out these expectations, notwithstanding the limitations of small system sizes. The scaling exponent $\gamma$ is near the value one when $J/U$ is small, and increases to two around $J/U \sim 0.5$, in reasonable agreement with the results of Kashurnikov and Svistunov [@Kashurnikov96a]. For the BWF-MC calculations (lower panel of Fig. \[fig:sigX2\]), the size scaling exponent is near one, in agreement with what is expected for an insulating phase. As argued in section \[sec:SFW\] the superfluid weight is zero for the Baeriswyl wave function at integer filling. This conclusion is corroborated by our variational results shown in the inset of the lower panel of Fig. \[fig:sigX2\].
The upper panel of Fig. \[fig:heatmap\] shows $\tilde{Z}_x$, the discrete Fourier transform of $Z_q$, in the form of a color map ($J/U$ is the variable on the $x$-axis, and $x$, the variable conjugate to $q$, is on the $y$-axis). The system is the unit filled one. The interesting result in this figure is the peak, which occurs at $x=7$, half of the supercell, which is where it should be for a half-filled system with no spontaneous polarization. The peak is relatively constant until $J/U \approx 0.3$, then starts to decrease. Again, this occurs at $J/U$ where the superfluid weight increases in Fig. \[fig:ns\_ED\]. The lower panel of Fig. \[fig:heatmap\] shows $\tilde{Z}_x$ as a function of $x$ and $J/U$ based on our variational Monte Carlo results. The similarity between the upper and lower panels is striking: a sharp peak half-way through the supercell exists near $J = 0$, which decreases around $J/U \approx 0.3$. An important difference between the two results is that in the exact diagonalization, the distribution flattens out as $J$ becomes large, whereas a small and broad peak persists in the variational results.
![Upper panel: position of one particle on the lattice, lower panel: total position modulo the size of the system, for a system with $J/U=0.25$, $N=320$ particles, and size $L=320$ lattice sites.[]{data-label="fig:xx1"}](./x1.eps){width="6cm"}
In order to analyze the behavior of the superfluid weight (next section), we present two more sets of results in Figs. \[fig:x\] and \[fig:xx1\]. The former shows the spread of the position as a function of system size at $J/U=0.25$ for two cases, both at filling one. In the upper panel the total position is shown, indicating convergence with system size, or insulating behavior. In the lower panel, we show the variance in the position of one particle, when bosonic exchange moves are turned off (in other words, in a system of distinguishable particles). Fig. \[fig:xx1\] shows two time series in the course of the MC simulation, the upper panel for the position of a single particle (with bosonic exchange moves), the lower panel for the total position. We see that even though the single particle delocalizes over the whole lattice, the total position fluctuates around the center of the supercell. Thus, bosonic exchange delocalizes single particles, but leaves the center of mass localized, fluctuating around the center of the supercell. Similarly, a superfluid weight calculation which neglects the interference of particles would lead to a finite superfluid response (we checked this).
Discussion
==========
Anderson has argued [@Anderson12], based on a strong coupling expansion, that the superfluid response of the BHM is finite at integer filling. Here, we show that his formalism is similar to our strong-coupling expansion, and that his way of implementing the Peierls flux does not allow for interference between particles. The finite superfluid weight is an artifact of this implementation.
We follow the steps of [@Anderson12], but we consider a system threaded by a flux $\Phi$ at the outset. For filling one, in the strong coupling limit $U\gg J$ and up to leading order in $J/U$, the many-body ground state is written as the product of non-orthogonal bosons, the so-called “eigenbosons”, $$\label{eqn:gs-anderson-phi}
|\Psi_A(\Phi)\rangle = \prod_{i=1}^L{{\hat b}^\dagger_i(\Phi)} |0\rangle$$ where the “eigenbosons” are given by, $$\begin{aligned}
\label{eqn:bc-transf-phi-L}
{\hat b}^\dagger_i(\Phi) & = \frac{1}{M} \left[{\hat c}^\dagger_i + e^{-i\Phi} \frac{J}{2U}
{\hat c}^\dagger_{i+1} + e^{i\Phi} \frac{J}{2U}{\hat c}^\dagger_{i-1} \right]\end{aligned}$$ with $M=\sqrt{1+2(J/2U)^2}$. Let us write down the first few terms of Eq. (\[eqn:gs-anderson-phi\]). The zeroth order term is $$\prod_i \hat{c}_i^\dagger | 0 \rangle$$ which is equal to $|\infty\rangle$, the term zeroth order in $J/U$ in Eq. (\[eqn:PsiSC1\]). The term first order in $J/U$ is $$\frac{J}{2U}\sum_i [\exp(i\Phi) \hat{c}^\dagger_{i+1} + \exp(-i\Phi) \hat{c}^\dagger_{i-1}]
\prod_{j\neq i} c_j^\dagger | 0 \rangle,$$ which is exactly the first order term in Eq. (\[eqn:PsiSC1\]). For this system we calculated the ground state energy, and showed that the superfluid weight is zero.
The particular steps of Anderson, leading to the superfluid weight, are as follows. The bosonic states are Fourier transformed as $$\begin{aligned}
\label{eqn:nonorthbosFT}
\hat{b}_0 &=& \hat{c}_0 + \left(\frac{J e^{-i\Phi}}{2U}\hat{c}_1 +
\frac{J e^{i\Phi}}{2U}\hat{c}_{-1} \right) \\ &=&
\frac{1}{\sqrt{L}}\sum_k \left[1 + (J/U) \cos(k +
\Phi) \right] \hat{c}_k. \nonumber\end{aligned}$$ The kinetic energy is written $$\label{eqn:K}
K = -\frac{J}{N}\sum_k \cos (k+\Phi)[1 + 2 (J/U) \cos \Phi \cos k + (J/U)^2 \cos^2 k].$$ The second derivative of $K$ with respect to $\Phi$ gives a finite contribution. The issue is that in the Fourier transform of Eq. (\[eqn:nonorthbosFT\]) the Peierls phase is separated from the rest of the system (in the words of Anderson, a pair of lattice sites is “embedded” in the system), and not allowed to interfere with the Peierls phases of other hoppings. When, in our BWF-MC calculations, a Peierls phase is implemented in each of the propagators [*independently,*]{} without interference, a finite superfluid weight also results. Moreover, at small $J/U$ the two results give the same scaling with $J/U$.
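This statement is easy to check numerically. The sketch below (our own illustration; the lattice size and couplings are arbitrary, and $N=L$ is assumed for unit filling) evaluates Eq. (\[eqn:K\]) on a finite $k$-grid and takes the second derivative with respect to $\Phi$ by a central difference:

```python
import numpy as np

L = 100                       # number of lattice sites; N = L at unit filling (assumption)
J, U = 0.1, 1.0               # illustrative couplings
k = 2 * np.pi * np.arange(L) / L

def K(phi):
    """Kinetic energy of Eq. (K) for a given Peierls phase phi."""
    return -(J / L) * np.sum(np.cos(k + phi) *
                             (1.0 + 2.0 * (J / U) * np.cos(phi) * np.cos(k)
                              + (J / U) ** 2 * np.cos(k) ** 2))

h = 1e-4
d2K = (K(h) - 2.0 * K(0.0) + K(-h)) / h ** 2   # second derivative at Phi = 0
print(d2K)   # nonzero (~ 2 J^2 / U at small J/U): a finite "superfluid weight"
```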
Conclusion {#sec:conclusion}
==========
In the context of solid $^4$He, supersolidity was suggested by Kim and Chan [@Kim06] based on torsional oscillator experiments. It is important to note that while the bosonic Hubbard model may give useful hints into the behavior of solid helium, some aspects, for example vacancies and dislocations, are not accounted for [@Chan13]. The experiments of Kim and Chan were also brought into question by the suggestion [@Beamish10] that their results could be due to quantum plasticity. Quantum plasticity is a phenomenon which can only be defined in more than one dimension.
We have studied the bosonic Hubbard model in one dimension using a variational Monte Carlo method based on the Baeriswyl wave function, and compared our results with exact diagonalization calculations. We showed that at integer filling this wave function cannot produce a finite superfluid weight. Interestingly, the Fourier transform of the polarization amplitude behaves very similarly to its counterpart calculated by exact diagonalization. Closer inspection, however, for example the calculation of the size scaling exponent, still indicates an insulating system. Therefore, the BWF is reliable for values of $J/U$ up to $J/U\approx 0.4$, not only at the level of the energy but also for studying the polarization amplitude in the insulating phase.
We also stressed the proper way to calculate the superfluid weight in a variational context and commented on a calculation of Anderson based on which a finite superfluid response was found for the integer filled bosonic Hubbard model. Our central result is that the “usual” formula of the superfluid weight, the second derivative with respect to a momentum shifting Peierls phase, is not directly applicable in a variational context.
Acknowledgments {#acknowledgments .unnumbered}
===============
We are grateful to V. G. Rousseau for providing the results of the stochastic Green's function calculation in Fig. \[fig:pdED\]. We acknowledge financial support from TUBITAK under grant nos. 113F334 and 112T176. BH was also supported by the National Research, Development and Innovation Fund of Hungary within the Quantum Technology National Excellence Program (Project Nr. 2017-1.2.1-NKP-2017-00001). BT acknowledges support from TUBA. LMM acknowledges J. Lopes dos Santos and J. V. Lopes for illuminating and stimulating discussions.
[9]{}
H. Gersch and G. Knollmann [*Phys. Rev.*]{}, [ **129**]{} 959 (1963).
M. P. A. Fisher, P. B. Weichman, G. Grinstein, and D. S. Fisher, [*Phys. Rev. B*]{}, [**40**]{} 546 (1989).
D. S. Rokhsar and B. G. Kotliar [*Phys. Rev. B*]{}, [**44**]{} 10328 (1991).
J. K. Freericks and H. Monien [*Phys. Rev. B*]{}, [ **53**]{} 2691 (1996).
G.G. Batrouni, R. T. Scalettar, and G. T. Zimányi, [ *Phys. Rev. Lett.*]{}, [**65**]{} 1765 (1990).
W. Krauth, and N. Trivedi, [*Europhys. Lett.*]{} [**14**]{} 627 (1991).
R. T. Scalettar, G. Batrouni, P.J.H. Denteneer, F. Hébert, A. Muramatsu, M. Rigol, V. G. Rousseau, and M. Troyer, [*J. Low Temp. Phys.*]{}, [**140**]{} 315 (2005).
V. G. Rousseau, D. P. Arovas, M. Rigol, F. Hébert, G. G. Batrouni, and R. T. Scalettar, [*Phys. Rev. B*]{}, [**73**]{} 174516 (2006).
V. A. Kashurnikov, A. V. Kravasin, B. V. Svistunov, [*JETP Lett.* ]{} [**64**]{} 99 (1996).
S. M. A. Rombouts, K. Van Houcke, and L. Pollet, [*Phys. Rev. Lett.* ]{} [**96**]{} 180603 (2006).
T. D. Kühner, S. R. White, and H. Monien, [ *Phys. Rev. B*]{}, [**61**]{} 12474 (2000).
S. Ejima, H. Fehske, and F. Gebhard, [*EPL*]{} [**93**]{} 30002 (2011).
J. Zakrzewski and D. Delande, [*AIP Conf. Proc.* ]{} [**1076**]{} 292 (2008).
J. Carrasquilla, S. R. Manmana, and M. Rigol, [ *Phys. Rev. A*]{} [**87**]{} 043606 (2013).
T. D. Kühner, and H. Monien, [*Phys. Rev. B*]{}, [**58**]{} R14741 (1998).
V. A. Kashurnikov and B. V. Svistunov, [ *Phys. Rev. B*]{} [**53**]{} 11776 (1996).
L. Pollet, [*Rep. Prog. Phys.*]{}, [**75**]{} 094501 (2012).
D. Jaksch, C. Bruder, J. I. Cirac, C. W. Gardiner, P. Zoller, [*Phys. Rev. Lett.*]{}, [**81**]{} 3108 (1998).
M. Greiner, O. Mandel, T. Esslinger, T. W. Hänsch, I. Bloch, [*Nature*]{}, [**415**]{} 39 (2002).
P. W. Anderson, [*J. Low Temp. Phys.*]{} [**169**]{} 124 (2012); arXiv:1102.4797.
L. Reatto and G. L. Masserini, [*Phys. Rev. B*]{}, [ **38**]{} 4516 (1988)
S. Vitiello, K. Runge, and M. H. Kalos, [*Phys. Rev. Lett.*]{}, [**60**]{} 1970 (1988).
E. Kim and M. H. W. Chan, [*Phys. Rev. Lett.*]{}, [**97**]{} 115302 (2006).
D. E. Galli, M. Rossi, and L. Reatto, [ *Phys. Rev. B*]{}, [**71**]{} 140506(R) (2005).
J. R. Beamish, [*Physics*]{} [**3**]{} 51 (2010).
R. Hallock, [*Physics Today*]{}, [**68**]{} 30 (2015).
B. Hetényi, B. Tanatar, and L. M. Martelo, [*Phys. Rev. B*]{} [**93**]{} 174518 (2016).
D. Baeriswyl in [*Nonlinearity in Condensed Matter*]{}, Ed. A. R. Bishop, D. K. Campbell, D. Kumar, and S. E. Trullinger, Springer-Verlag (1986).
D. Baeriswyl, [*Found. Physics*]{}, [**30**]{} 2033 (2000).
M. Dzierzawa, D. Baeriswyl, and L. M. Martelo, [ *Helv. Phys. Acta*]{}, [**70**]{} 124 (1997).
L. M. Martelo, M. Dzierzawa, L.Siffert, and D. Baeriswyl, [*Z. Phys. B*]{}, [**103**]{}, 335 (1997).
B. Hetényi, [*Phys. Rev. B*]{}, [**82**]{} 115104 (2010).
B. Dóra, M. Haque, F. Pollmann and B. Hetényi , [*Phys. Rev. B*]{} [**93**]{} 115124 (2016).
M. Yahyavi, L. Saleem, and B. Hetényi , [*J. Phys. Cond. Mat.*]{} [**30**]{} 445602 (2018).
R. Resta, [*Phys. Rev. Lett.*]{} [**80**]{} 1800 (1998).
R. Resta and S. Sorella, [*Phys. Rev. Lett.*]{} [**82**]{} 370 (1999).
A. A. Aligia and G. Ortiz, [*Phys. Rev. Lett.*]{} [**82**]{} 2560 (1999).
I. Souza, T. Wilkens, and R. M. Martin, [*Phys. Rev. B*]{} [**62**]{} 1666 (2000).
M. Nakamura and J. Voit, [*Phys. Rev. B*]{} [**65**]{} 153110 (2002).
M. Yahyavi and B. Hetényi, [*Phys. Rev. A*]{} [**95**]{} 062104 (2017).
R. Kobayashi, Y. O. Nakagawa, Y. Fukusumi, M. Oshikawa, [*Phys. Rev. B*]{} [**97**]{} 165133 (2018).
B. Hetényi and B. Dóra, [*Phys. Rev. B*]{} [**99**]{} 085126 (2019).
M. Nakamura and S. C. Furuya, [*Phys. Rev. B*]{} [**99**]{} 075128 (2019).
S. C. Furuya and M. Nakamura, [*Phys. Rev. B*]{} [**99**]{} 144426 (2019).
S. Patankar, L. Wu, M. Rai, J. D. Tran, T. Morimoto, D. E. Parker, A. G. Grushin, N. L. Nair, J. G. Analytis, J. E. Moore, J. Orenstein, D. H. Torchinsky, [*Phys. Rev. B*]{} [**98**]{} 165113 (2018).
V. G. Rousseau, [*Phys. Rev. E*]{}, [**77**]{} 056705 (2008).
V. G. Rousseau, [*Phys. Rev. E*]{}, [**78**]{} 056707 (2008).
M. C. Gutzwiller, [*Phys. Rev. Lett.*]{}, [**10**]{} 159 (1963).
M. C. Gutzwiller, [*Phys. Rev.*]{}, [**137**]{} A1726 (1965).
M. Capello, F. Becca, M. Fabrizio, and S. Sorella, [ *Phys. Rev. Lett.*]{}, [**99**]{} 056402 (2007).
N. Elstner and H. Monien, [*Phys. Rev. B*]{} [ **59**]{} 12184 (1999).
B. Damski and J. Zakrzewski, [*Phys. Rev. A*]{} [**74**]{} 043609 (2006).
K. V. Krutitsky, M. Thorwart, R. Egger, and R. Graham, [*Phys. Rev. A*]{} [**77**]{} 053609 (2008).
C. Heil and W. von der Linden, [*J. Phys.: Cond. Mat.* ]{} [**24**]{} 295601 (2012).
B. Hetényi and H. G. Evertz, [*Phys. Rev. B* ]{} [**78**]{} 033110 (2008).
D. H. Lehmer, [ *Proc. Sympos. Appl. Math. Combinatorial Analysis, Amer. Math. Soc.*]{} [**10**]{} 179 (1960).
C.-A. Laisant, [*Bulletin de la Société Mathématique de France*]{} [**16**]{} 176 (1888).
A. V. Ponomarev, S. Denisov, and P. Hänggi, [*Phys. Rev. Lett.*]{} [**102**]{} 230601 (2009).
A. V. Ponomarev, S. Denisov, and P. Hänggi, [*Phys. Rev. A*]{} [**81**]{} 043615 (2010).
A. V. Ponomarev, S. Denisov, and P. Hänggi, [*Phys. Rev. Lett.*]{} [**106**]{} 010405 (2011).
D. Raventós, T. Graß, M. Lewenstein, and B. Juliá-Díaz, [*J. Phys. B: At. Mol. Opt. Phys.*]{} [**50**]{} 113001 (2017).
A. J. Millis and S. N. Coppersmith, [ *Phys. Rev. B*]{}, [**43**]{} 13770 (1991).
M. E. Fisher, M. N. Barber, and D. Jasnow, [ *Phys. Rev. A*]{} [**8**]{} 1111 (1973).
V. G. Rousseau, [*Phys. Rev. B*]{} [**90**]{} 134503 (2014).
A. J. Leggett, [*Phys. Rev. Lett.*]{} [**25**]{} 2543 (1970).
D. J. Scalapino, S. R. White, and S. C. Zhang, [*Phys. Rev. Lett.*]{} [**68**]{} 2830 (1992).
D. J. Scalapino, S. R. White, and S. C. Zhang, [*Phys. Rev. B*]{} [**47**]{} 7995 (1993).
B. Hetényi [*J. Phys. Soc. Jpn.*]{} [**83**]{} 034711 (2014).
E. L. Pollock and D. M. Ceperley, [*Phys. Rev. B*]{} [**36**]{} 8343 (1987).
See P. Fazekas, [*Lecture Notes on Electron Correlation and Magnetism*]{}, World Scientific (1999) and references therein.
J. K. Freericks, H. R. Krishnamurthy, Y. Kato, N. Kawashima, and N. Trivedi, [*Phys. Rev. A*]{}, [**79**]{} 053631 (2009). (Note that in this work $U/2$ corresponds to our $U$.)
E. Lieb, T. Schultz, and D. Mattis, [*Ann. Phys.*]{} [**16**]{} 407 (1961).
W. Kohn, [*Phys. Rev.*]{} [**133**]{} A171 (1964).
M. H. W. Chan, R. B. Hallock, and L. Reatto, [*J. Low Temp. Phys.*]{} [**172**]{} 317 (2013).
---
abstract: 'Transformation properties of the Dirac equation correspond to the Spin(3,1) representation of the Lorentz group SO(3,1), but the group GL(4,${\mathbb{R}}$) of general relativity does not admit a similar construction with Dirac spinors. On the other hand, it is possible to look for a representation of GL(4,${\mathbb{R}}$) in some bigger space, where Dirac spinors are formally situated as a “subsystem.” This paper describes the construction of such a representation, using the Clifford and Grassmann algebras of 4D space.'
author:
- 'Alexander Yu. Vlasov'
title: 'Dirac Spinors and Representations of GL(4) Group in GR'
---
Introduction
============
Let us consider the Dirac equation [@DauIV; @ClDir] ($x_0=ct$, $\hbar=1$, $c=1$) $${\mathcal{D}}\psi = m \psi, \quad {\mathcal{D}}\equiv \sum_{\mu=0}^3 \gamma_\mu {\frac{\partial}{\partial{x_\mu}}},
\label{Dir}$$ where $\psi \in {\mathbb{C}}^4$ and $\gamma_\mu \in Mat(4,{\mathbb{C}})$, $\mu = 0,\cdots,3$ are four Dirac matrices with usual property $$\gamma_\mu \gamma_\nu + \gamma_\nu \gamma_\mu = 2 g_{\mu\nu} {\mathbf 1},
\label{acom}$$ where $g_{\mu\nu}$ is Minkowski metric.
More generally, [Eq. (\[acom\])]{} corresponds to the definition of the Clifford algebra ${\mathfrak{Cl}}(g)$ for an arbitrary quadratic form $g_{\mu\nu}$ and dimension. The Dirac matrices are the four generators of a (complexified) representation of ${\mathfrak{Cl}}(3,1)$ in the space of $4 \times 4$ matrices. For a general universal Clifford algebra with $n$ generators, $\dim{\mathfrak{Cl}}(n)=2^n$.
It is possible to write the Dirac equation for an arbitrary metric, but a covariant transformation between two solutions $\psi$ exists only for coordinate isometries $A$: $g(Ax,Ay)=g(x,y)$ for a given fixed metric $g$. Such isometries produce a subgroup of GL(4,${\mathbb{R}}$) isomorphic to the Lorentz group (for the diagonal form, i.e. the Minkowski metric, it is the usual Lorentz group). This is briefly recalled in Sec. \[Sec:Spin\]. A matrix extension of the Dirac equation is considered in Sec. \[Sec:Ext\]. It is analyzed using the Clifford (Sec. \[Sec:Cliff\]) and Grassmann (Sec. \[Sec:Grass\]) representations of this equation. The main proposition about the representation of GL(4,${\mathbb{R}}$) is formulated at the end of Sec. \[Sec:Grass\] and discussed in more detail in Sec. \[Sec:Expl\].
Spinor Representation of Lorentz Group {#Sec:Spin}
======================================
Any transformation $v'= Av$ of the coordinate system corresponds to a new set of gamma matrices: $$\gamma_\mu' = \sum_{\nu=0}^3 A_\mu^\nu \gamma_\nu,
\label{Agam}$$ but only for isometries $A$ can [Eq. (\[Agam\])]{} be rewritten as an internal isomorphism of the algebra: $$\gamma_\mu' = {\mathit\Sigma}_{(A,g)} \gamma_\mu {\mathit\Sigma}_{(A,g)}^{-1},
\quad \mbox{(there is no sum on $\mu$)}
\label{LgL}$$ where the matrix ${\mathit\Sigma}_{(A,g)}$ is the same for all $\gamma_\mu$ and depends only on the transformation $A$ and the metric $g$. For the Minkowski metric $g$, this is the usual form of the $2\to 1$ spinor representation of the Lorentz group.
As in the general case, the spin group is implemented here as a subset of the Clifford algebra. More precisely, it is a subset of ${\mathfrak{Cl}}_e$, [*i.e.*]{} the even subalgebra of the Clifford algebra generated by all possible products of an even number of generators [@ClDir]. In the particular case of the Lorentz group SO(3,1), it is the group Spin(3,1), isomorphic to SL(2,${\mathbb{C}}$), the group of $2\times 2$ complex matrices with unit determinant.
To demonstrate the transformation properties of the spinor $\psi$, it is enough to rewrite [Eq. (\[Dir\])]{} as $$({\mathit\Sigma}{\mathcal{D}}{\mathit\Sigma}^{-1}){\mathit\Sigma}\psi = m {\mathit\Sigma}\psi.
\label{LDirL}$$ [Eq. (\[LDirL\])]{} also shows why [Eq. (\[Agam\])]{} with a general $A \in \rm GL(4,{\mathbb{R}})$ cannot correspond to a covariant transformation of the kind $\psi(x) \mapsto {\mathit\Sigma}_A \psi(A x)$.
An Extension of Spinor Space {#Sec:Ext}
============================
One way to overcome the problem with the representation of general coordinate transformations of the GL(4,${\mathbb{R}}$) group may be an extension of the linear space ${\mathbb{C}}^4$ of Dirac spinors.[^1]
There is also a more technical way to explain the idea of the extension. It is not possible to build a specific symmetry between two solutions of the Dirac equation with different metrics, but maybe it is possible to construct a new one using [*a few*]{} solutions?
Let us consider, instead of $\psi \in {\mathbb{C}}^4$, some $4 \times 4$ complex matrix ${\mathsf{\Psi}}$ and write the equation $${\mathcal{D}}{\mathsf{\Psi}}= m {\mathsf{\Psi}}, \quad {\mathcal{D}}\equiv \sum_{\mu=0}^3 \gamma_\mu {\frac{\partial}{\partial{x_\mu}}},
\quad {\mathsf{\Psi}}\in Mat(4,{\mathbb{C}}).
\label{nDir}$$ Formally it may be considered as a set of four usual Dirac equations, one for each column of the matrix ${\mathsf{\Psi}}$.
Maybe such an extension from a 4D to a 16D space is not minimal, but it is appropriate for the purpose of the present paper, [*i.e.*]{} for representation of the transformation properties of such an equation with respect to the GL(4,${\mathbb{R}}$) group of coordinate transformations. For a 16D space all possible linear transformations may be represented by a 256D space, but the use of algebraic language may simplify the consideration. The construction uses both the Clifford and Grassmann algebras of 4D space and is described below.
Clifford Algebra {#Sec:Cliff}
================
The complexified Clifford algebra of the Minkowski quadratic form is isomorphic to the space of $4 \times 4$ complex matrices. So it is reasonable to try to consider [Eq. (\[nDir\])]{} as an equation on the Clifford algebra, $\gamma_\mu,{\mathsf{\Psi}}\in {\mathfrak{Cl}}(3,1)$. Actually, as will be shown below, a complete and rigorous solution of the discussed problem with the GL(4,${\mathbb{R}}$) representation cannot be based [*only*]{} on the Dirac equation on the Clifford algebra, but this construction is discussed here because it provides an essential step.
If [Eq. (\[nDir\])]{} is considered as an equation on the Clifford algebra, then the initial Dirac equation may first be compared with the restriction of such an equation to a [*left ideal*]{} of the Clifford algebra.
A left ideal of an algebra $\mathcal A$ is by definition [@SLang] a linear subspace $\mathcal I \subset \mathcal A$ with the property $\mathcal{A I} \subset \mathcal I$, [*i.e.*]{} any element of the algebra multiplied by an element of the ideal produces again an element of the ideal. The simplest example of a left ideal in a matrix algebra is the set of matrices $\mathsf M_\psi$ with only one nonzero column $\psi$, and it provides the reason for considering the Dirac equation on such an ideal as an analogue of the usual case [Eq. (\[Dir\])]{}.
It was already discussed above that the spin group may be naturally implemented as a subspace of the Clifford algebra, [*e.g.*]{} transformations of ${\mathsf{\Psi}}$ are also elements of the Clifford algebra, ${\mathit\Sigma}\in {\mathfrak{Cl}}(3,1)$. On the other hand, the construction with ideals of the Clifford algebra has some problems with the interpretation of the symmetries of the Dirac equation needed for the purposes of the present paper. Even for Lorentz transformations, simply represented as isomorphisms of the Clifford algebra [Eq. (\[LgL\])]{}, instead of [Eq. (\[LDirL\])]{} we must have [*the same*]{} transformation law for all elements of the algebra $$({\mathit\Sigma}{\mathcal{D}}{\mathit\Sigma}^{-1}){\mathit\Sigma}{\mathsf{\Psi}}{\mathit\Sigma}^{-1} =
m {\mathit\Sigma}{\mathsf{\Psi}}{\mathit\Sigma}^{-1}.
\label{LPsiL}$$
This really makes the consideration a bit more difficult, but does not change it much. The resolution of the problem is an additional symmetry of [Eq. (\[nDir\])]{}: if some ${\mathsf{\Psi}}$ is a solution of [Eq. (\[nDir\])]{}, then ${\mathsf{\Psi}}R$ is also a solution, for an arbitrary element $R$ of the Clifford algebra.
In such a case right multiplication by ${\mathit\Sigma}^{-1}$ does not change anything and may be ignored. The same property of the equation requires considering not only an element $\mathsf M_\psi$ of some left ideal, but also all $\mathsf M_\psi R$, [*i.e.*]{} matrices with columns proportional to the same vector.
If $\psi \in {\mathbb{C}}^4$ is the initial vector (spinor, solution of the Dirac equation), and $\alpha \in {\mathbb{C}}^4$ is an arbitrary vector of coefficients, then any matrix with proportional columns may be expressed as $M_{ij}=\psi_i\alpha_j$.
So instead of the left ideals discussed above it is necessary to consider matrices $$M_{ij}=\psi_i\alpha_j,
\quad M = \psi \alpha^{T} \equiv \psi \otimes \alpha.
\label{bipsi}$$ For arbitrary matrices $L,R$ $$LMR = L\,(\psi \otimes \alpha)\,R = (L\psi) \otimes (R^{T} \alpha),
\label{LMR}$$ so multiplication preserves “the product structure,” but this is not an ideal, because the space of such matrices is not a [*linear subspace*]{}, [*e.g.*]{} a sum of elements cannot necessarily be presented as a tensor product of two vectors as in [Eq. (\[bipsi\])]{}. In fact, the linear span of this “singular” space coincides with the whole algebra.
On the other hand, a fixed $\psi$ corresponds to a linear subspace, the [*right ideal*]{} $\mathcal R_{\psi}$ of the algebra. This is similar to the interpretation of a physical solution of the usual Dirac equation [Eq. (\[Dir\])]{} as a ray in Hilbert space.
But such a construction still does not produce a covariant transformation of the Dirac equation in matrix form [Eq. (\[nDir\])]{} with respect to a general element of GL(4,${\mathbb{R}}$). It is necessary to use a slightly different construction with the Grassmann algebra, described in the next section.
Grassmann Algebra {#Sec:Grass}
=================
Formally, the Grassmann (or exterior) algebra ${\Lambda}_n$ is defined by $n$ generators $d_i$, an associative operation denoted $\wedge$, and the property $$d_\mu \wedge d_\nu + d_\nu \wedge d_\mu = 0.
\label{gras}$$ So $\dim{\Lambda}_n=2^n$, as for the Clifford algebra. The linear subspace of the Grassmann algebra generated by $\wedge$-products of $k$ different elements $d_i$ is usually denoted ${\Lambda}_n^k$ (“$k$-forms”). For convenience, the complex Grassmann algebra is used here.
On the other hand, the Clifford algebra, the Dirac operator and the spin group may also be expressed using the Grassmann algebra [@ClDir]. Let us consider the Grassmann algebra ${\Lambda}_n$ of an $n$-dimensional vector space $V$ and a metric $g$ on $V$. The algebra of linear transformations of the Grassmann algebra is denoted here as $\mathcal L({\Lambda}_n)$, and for any vector $v\in V$ it is possible to construct linear transformations [@ClDir] $\delta_v,\delta^\star_v \in \mathcal L({\Lambda}_n)$: $$\delta_v : v_1\wedge\cdots\wedge v_k
\mapsto v \wedge v_1\wedge\cdots\wedge v_k,
\label{bnd}$$ $$\delta^\star_v : v_1\wedge\cdots\wedge v_k \mapsto
\sum_{l=1}^k (-1)^l g(v,v_l)\, v_1\wedge\cdots \not\!v_l\cdots\wedge v_k,
\label{cobnd}$$ where ${\not\!v_l}$ means that the term $v_l$ must be omitted. Let $v_i$ be a basis of $V$; then the operators $$\hat\gamma_i = \delta_i + \delta^\star_i\qquad
(\delta_i \equiv\delta_{v_i},\ \delta_i^\star \equiv\delta_{v_i}^\star)
\label{gamLGr}$$ satisfy the usual anticommutation relations [Eq. (\[acom\])]{} for $n$ generators of the Clifford algebra with quadratic form $g$, and so ${\mathfrak{Cl}}(g)$ may be represented as a subspace of $\mathcal L({\Lambda}_n)$. Let us denote this representation $${\mathfrak{C}_\mathcal{L}}: {\mathfrak{Cl}}(n) \to \mathcal L({\Lambda}_n).
\label{ClGr}$$
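As an aside, this representation is small enough to be checked explicitly. The sketch below (our own illustration; the overall sign of the contraction is fixed here by the convention that reproduces [Eq. (\[acom\])]{}) builds $\delta_i$ and $\delta^\star_i$ as $16\times 16$ matrices for $n=4$ and verifies the anticommutation relations:

```python
import numpy as np
from itertools import combinations

n = 4
g = np.diag([1.0, -1.0, -1.0, -1.0])        # Minkowski metric (signature choice is ours)
basis = [frozenset(S) for k in range(n + 1) for S in combinations(range(n), k)]
index = {S: a for a, S in enumerate(basis)}  # basis monomials of Lambda_n <-> subsets
dim = len(basis)                             # = 2^n = 16

def sign(i, S):
    # Sign from moving d_i past the generators of S with smaller index.
    return (-1) ** sum(1 for s in S if s < i)

gamma = []
for i in range(n):
    D = np.zeros((dim, dim))    # delta_i : exterior multiplication by d_i
    Ds = np.zeros((dim, dim))   # delta*_i: contraction with the metric g
    for S in basis:
        if i not in S:
            D[index[S | {i}], index[S]] = sign(i, S)
        else:
            Ds[index[S - {i}], index[S]] = sign(i, S - {i}) * g[i, i]
    gamma.append(D + Ds)        # gamma_i = delta_i + delta*_i, Eq. (gamLGr)

for i in range(n):
    for j in range(n):
        acom = gamma[i] @ gamma[j] + gamma[j] @ gamma[i]
        assert np.allclose(acom, 2.0 * g[i, j] * np.eye(dim))
print("anticommutation relations of Eq. (acom) hold on the 16-dimensional space")
```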
Let us also consider the canonical isomorphism of the Grassmann and Clifford algebras ${\mathtt{\Lambda}_\mathfrak{C}}: {\Lambda}_n \to {\mathfrak{Cl}}(n)$ [*as linear spaces*]{}, defined on the basis as $${\mathtt{\Lambda}_\mathfrak{C}}: d_{i_1} \wedge \cdots \wedge d_{i_k}
\mapsto \gamma_{i_1} \cdots \gamma_{i_k},
\quad i_1< i_2 < \cdots < i_k,
\label{ExtToCl}$$ together with its inverse ${\mathtt{\Lambda}_\mathfrak{C}}^{-1}$.
The basic property of the formal constructions above can be expressed as $${\mathtt{\Lambda}_\mathfrak{C}}^{-1}(\mathsf{LM}) = {\mathfrak{C}_\mathcal{L}}(\mathsf L)\bigl({\mathtt{\Lambda}_\mathfrak{C}}^{-1}(\mathsf M)\bigr),
\quad \mathsf{L,M} \in {\mathfrak{Cl}}.$$
It should be mentioned that, using the Hodge operator $\star : {\Lambda}_n^k \to {\Lambda}_n^{n-k}$, it is possible to write $\delta^\star_v = \star\,\delta_v \star$. Using the dual Grassmann operation $\vee = \star \wedge \star$, it is possible to simplify [Eq. (\[bnd\])]{} and [Eq. (\[cobnd\])]{}: $$\delta_v (\omega) = v \wedge \omega,
\quad \delta^\star_v (\omega) = v \vee \omega, \quad \omega \in {\Lambda}_n.$$
Similarly, it is possible to introduce right actions $${\overleftarrow{\delta}}_v (\omega) = \omega \wedge v,
\quad {\overleftarrow{\delta}}^\star_v (\omega) = \omega \vee v, \quad
{\overleftarrow{\hat\gamma}}_i = {\overleftarrow{\delta}}_i + {\overleftarrow{\delta}}^\star_i,$$ and right representation ${\mathfrak{C}_\mathcal{R}}$ of Clifford algebra in $\mathcal L({\Lambda}_n)$ with property $${\mathtt{\Lambda}_\mathfrak{C}}^{-1}(\mathsf{LMR}) =
\Bigl({\mathfrak{C}_\mathcal{L}}(\mathsf L)\circ{\mathfrak{C}_\mathcal{R}}(\mathsf R) \Bigr)
\bigl({\mathtt{\Lambda}_\mathfrak{C}}^{-1}(\mathsf M)\bigr),
\quad \mathsf{L,M,R} \in {\mathfrak{Cl}}.$$
The Dirac equation also has a natural representation here [@ClDir]. It is possible to express the Dirac operator using the exterior differential of forms $d$ and its Hodge dual $d^\star$: $$d = \sum_{i=1}^n \delta_i {\frac{\partial}{\partial{x_i}}},\quad
d^\star = \sum_{i=1}^n \delta_i^\star {\frac{\partial}{\partial{x_i}}},\quad
{\mathcal{D}}= d + d^\star.
\label{HodgeDir}$$
It is also possible to use the standard representation of the spin group via the left representation ${\mathfrak{C}_\mathcal{L}}$ of the Clifford algebra [Eq. (\[ClGr\])]{}. Usually it is restricted to the spaces of odd and even forms [@ClDir].
On the other hand, unlike the Clifford space, the Grassmann space also accepts an external GL(4,${\mathbb{R}}$) isomorphism induced by extending the transformation $$d_\mu' = \sum_{\nu=0}^3 A_\mu^\nu d_\nu.
\label{Adiff}$$ to a general form $d_{i_1} \wedge \cdots \wedge d_{i_k}$. The existence of such an isomorphism is a common property of antisymmetric forms, and the Grassmann algebra may be represented as the exterior algebra of these forms.
The representation of the Dirac operator in the form [Eq. (\[HodgeDir\])]{} ensures the proper transformation property. This is clear for the exterior differential $d$, and it also holds for $d^\star$, because the terms with $g(v,v_l)$ in [Eq. (\[cobnd\])]{} are transformed into $g(Av,Av_l)$, [*i.e.*]{} in agreement with the change of metric $g'=A^T g A$; this also demonstrates the desired property of ${\mathcal{D}}= d + d^\star$ with respect to the map ${\mathfrak{C}_\mathcal{L}}$ and [Eq. (\[Agam\])]{}.
So the result of the present paper may be formulated as:
[**Proposition:**]{} [*The transformation properties of a solution of the Dirac equation in matrix form [Eq. (\[nDir\])]{} with respect to the group [GL(4,${\mathbb{R}}$)]{}, due to the isomorphism $${\mathtt{\Lambda}_\mathfrak{C}}: {\Lambda}_4 \longrightarrow {\mathfrak{Cl}}(3,1) \cong Mat(4),$$ correspond to the standard transformation properties of the exterior algebra of antisymmetric forms on the tangent space with respect to general linear coordinate transformations from [GL(4,${\mathbb{R}}$)]{}.*]{}
Further Discussion {#Sec:Expl}
==================
The main proposition needs some explanation. The transformation of the space of differential forms induced by a general linear coordinate transformation is indeed a valid representation of GL(4,${\mathbb{R}}$), but what is its relation with the usual spinor representation in the case of restriction to SO(3,1) $\subset$ GL(4,${\mathbb{R}}$)? For example, the exterior space with respect to GL(4,${\mathbb{R}}$) has five irreducible subspaces corresponding to the spaces ${\Lambda}^k_4$, $k=0,\ldots,4$.
On the other hand, from a naïve point of view Dirac spinors in such a classification should correspond to some (fictitious) index like $k=1/2$, because an exterior form with index $k=1$ corresponds to a (co)vector, [*i.e.*]{} “spin one.”
To justify the proposition, let us first show that for the Minkowski metric and the Lorentz group SO(3,1) $\subset$ GL(4,${\mathbb{R}}$) the suggested transformation corresponds to $${\mathsf{\Psi}}\mapsto {\mathit\Sigma}{\mathsf{\Psi}}{\mathit\Sigma}^{-1}
\label{IntIs}$$ used to describe the transformation property [Eq. (\[LPsiL\])]{} of the Dirac equation in the matrix representation [Eq. (\[nDir\])]{}.
Let us consider the subspaces ${\mathfrak{Cl}}_k(n) \subset {\mathfrak{Cl}}(n)$, $k=0,\ldots,n$, produced by products of $k$ matrices $\gamma_\mu$. Only for an element ${\mathit\Sigma}$ of the spin group does the internal isomorphism [Eq. (\[IntIs\])]{} of the Clifford algebra map each subspace ${\mathfrak{Cl}}_k(n)$ to itself. On the other hand, such an isomorphism may be expressed using a substitution of generators like [Eq. (\[Agam\])]{} with an isometry $A$, [*i.e.*]{} the Lorentz group in the particular case under consideration.
For the Lorentz group the transformation law for ${\mathfrak{Cl}}_k(3,1)$ due to [Eq. (\[Agam\])]{} is the same as that for ${\Lambda}_4^k$ and [Eq. (\[Adiff\])]{}, respectively. More formally, in such a case it is possible to express the transformation suggested in the proposition as $$\bigl({\mathfrak{C}_\mathcal{L}}({\mathit\Sigma}_A)\circ{\mathfrak{C}_\mathcal{R}}({\mathit\Sigma}_A^{-1})\bigr) (M),
\quad M \in {\Lambda}_4,~ {\mathit\Sigma}_A \in {\rm Spin}(3,1),~A \in {\rm SO}(3,1).$$
There is also another way to represent the constructions suggested above. Suppose we have the Dirac algebra of $4 \times 4$ complex matrices and some fixed basis of Dirac matrices $\gamma_\mu$. Let us now, together with the usual matrix multiplication, introduce another associative and distributive operation “$\wedge$” induced by the structure of the Grassmann algebra via the linear map ${\mathtt{\Lambda}_\mathfrak{C}}$ [Eq. (\[ExtToCl\])]{}. Formally, it would be necessary to write $16^2=256$ products for the elements of the basis, but in fact the operation “$\wedge$” is uniquely defined by the expression with two matrices $$\gamma_\mu \wedge \gamma_\mu \equiv 0,\quad
\gamma_\mu \wedge \gamma_\nu \equiv -\gamma_\nu \wedge \gamma_\mu
\equiv \gamma_\mu\gamma_\nu \quad (\mu < \nu),
\label{gamgrass}$$ associativity and property $\gamma_{i_1} \wedge \cdots \wedge \gamma_{i_k}
\equiv \gamma_{i_1} \cdots \gamma_{i_k}$ for $i_1< i_2 < \cdots < i_k$.
Now the linear map [Eq. (\[Agam\])]{} with arbitrary $A \in \rm GL(4,{\mathbb{R}})$ may be extended to a representation of GL(4,${\mathbb{R}}$) on the full 16D space using “$\wedge$” products of generators. It is also valid for $A \in \rm SO(3,1)$, but in such a case the difference between “$\wedge$” and the usual matrix (Clifford) product does not matter, because due to the properties of the spin group the usual product does not produce “junk” terms with $g_{\mu\nu}{\mathbf 1}$ (the unit of the algebra multiplied by some coefficient of the metric). The other property of $A \in \rm SO(3,1)$ is the possibility to express the considered representation as an internal isomorphism with respect to the usual product, ${\mathsf{\Psi}}\mapsto {\mathit\Sigma}_A {\mathsf{\Psi}}{\mathit\Sigma}_A^{-1}$, [Eq. (\[IntIs\])]{}.
It was already discussed earlier how [Eq. (\[IntIs\])]{} corresponds to the usual spinor representation. Because of [Eq. (\[bipsi\])]{} the matrix function ${\mathsf{\Psi}}$ may be associated with the tensor product of a usual spinor $\psi$ with some “auxiliary spinor $\alpha$,” and then, due to [Eq. (\[LMR\])]{}, it is possible for the Lorentz group to rewrite [Eq. (\[IntIs\])]{} as $$\psi\otimes\alpha \mapsto ({\mathit\Sigma}\psi) \otimes ({{\mathit\Sigma}^{-1}}^T \alpha),
\label{LocLor}$$ but the state of the auxiliary system in such a product does not matter, due to the possibility of applying an arbitrary transformation $R : \alpha \mapsto R^T \alpha$ (see Sec. \[Sec:Cliff\]). So it is possible to take into account only the transformation of the first term $\psi \mapsto {\mathit\Sigma}\psi$ in the tensor product, [*i.e.*]{} the spinor representation of the Lorentz group.
On the other hand, a general GL(4,${\mathbb{R}}$) transformation does not correspond to a map between “product states” like $\psi\otimes\alpha$. Using some quantum mechanical jargon, it could be said that a general GL(4,${\mathbb{R}}$) transformation “entangles” the state $\psi$ and the auxiliary state $\alpha$, and the transformation $R$ on the second state used above cannot improve the situation (“disentangle” the states), because it is “local.” By the same analogy, the relation between ${\mathsf{\Psi}}$ and the usual Dirac spinor $\psi$ may be compared with the concept of a [*subsystem*]{} in quantum mechanics. But such jargon should really be considered only as a hint, because a more detailed consideration may also use the real representation of the Clifford algebra, the constructions used above may correspond to real or even quaternionic matrices and tensor products, and, after all, the transformations used here correspond to finite-dimensional unitary representations only for the SO(3) subgroup of GL(4,${\mathbb{R}}$).
[9]{} Landau and Lifschitz, [*Course of Theoretical Physics*]{}, [**Vol. IV**]{} [*Quantum Electrodynamics*]{}, (Moscow, Nauka, 1988). J. E. Gilbert and M. A. M. Murray, [*Clifford algebras and Dirac operators in harmonic analysis*]{}, (Cambridge University Press, Cambridge 1991). S. Lang, [*Algebra*]{}, (Addison-Wesley, 1965).
[^1]: For example, two-component spinors cannot represent $P$-inversion, but the extension from ${\mathbb{C}}^2$ to ${\mathbb{C}}^4$, from Pauli to Dirac spinors, resolves this problem [@DauIV].
---
abstract: 'In this paper, we propose a Jacobi-type algorithm to solve the low rank orthogonal approximation problem of symmetric tensors. This algorithm includes as a special case the well-known Jacobi CoM2 algorithm for the approximate orthogonal diagonalization problem of symmetric tensors. We first prove the weak convergence of this algorithm, *i.e.* any accumulation point is a stationary point. Then we study the global convergence of this algorithm under a gradient based ordering for a special case: the best rank-2 orthogonal approximation of 3rd order symmetric tensors, and prove that an accumulation point is the unique limit point under some conditions. Numerical experiments are presented to show the efficiency of this algorithm.'
address:
- 'Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen, China'
- 'Université de Lorraine, CNRS, CRAN, Nancy, France'
- 'Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France'
author:
- 'Jianze Li, Konstantin Usevich, Pierre Comon'
bibliography:
- 'JLROA.bib'
title: 'Jacobi-type algorithm for low rank orthogonal approximation of symmetric tensors and its convergence analysis'
---
[^1]
Introduction
============
As the higher order analogue of vectors and matrices, tensors have, in the last two decades, been attracting more and more attention from various fields, including signal processing, numerical linear algebra and machine learning [@Cichocki15:review; @comon2014tensors; @Como10:book; @kolda2009tensor; @sidiropoulos2017tensor; @Anan14:latent]. One reason is that more and more real data are naturally represented in tensor form, e.g. *hyperspectral images*, *brain fMRI images*, or *social networks*. The other reason is that, compared with the matrix case, tensor based techniques can capture higher order and more complicated relationships, e.g. *Independent Component Analysis* (ICA) based on the cumulant tensor [@Como94:sp], and *multilinear subspace learning* methods [@lu2013multilinear].
Low rank approximation of higher order tensors is a very important problem and has been applied in various areas [@Como10:book; @de1997signal; @smildemulti]. However, it is much more difficult than the matrix case, since it is ill-posed for many ranks, and this ill-posedness is not rare for 3rd order tensors [@de2008tensor].
**Notation.** Let ${\mathbb{R}}^{n_1\times\cdots\times n_d}{\stackrel{\sf def}{=}}{\mathbb{R}}^{n_1}\otimes\cdots\otimes{\mathbb{R}}^{n_d}$ be the linear space of $d$th order real tensors and $\text{symm}({\mathbb{R}}^{n\times\cdots\times n})\subseteq{\mathbb{R}}^{n\times\cdots\times n}$ be the set of symmetric ones [@Comon08:symmetric; @qi2017tensor], whose entries do not change under any permutation of indices. The identity matrix of size $n$ is denoted by ${\boldsymbol{I}}_n$. Let $\text{St}(p,n)\subseteq{\mathbb{R}}^{n\times p}$ be the Stiefel manifold with $1\leq p\leq n$. Let ${{\mathscr{O}}_{n}}\subseteq{\mathbb{R}}^{n\times n}$ be the orthogonal group, *i.e.* ${{\mathscr{O}}_{n}}=\text{St}(n,n)$. Let ${{\mathscr{SO}}_{n}}\subseteq{\mathbb{R}}^{n\times n}$ be the special orthogonal group, *i.e.* the set of orthogonal matrices with determinant 1. We denote by $\|\cdot\|$ the Frobenius norm of a tensor or a matrix, or the Euclidean norm of a vector. Tensor arrays, matrices, and vectors, will be respectively denoted by bold calligraphic letters, e.g. ${\boldsymbol{\mathcal{A}}}$, with bold uppercase letters, e.g. ${\boldsymbol{M}}$, and with bold lowercase letters, e.g. ${\boldsymbol{u}}$; corresponding entries will be denoted by ${\mathcal{A}}_{ijk}$, $M_{ij}$, and $u_i$. Operator ${\mathop{\bullet_{p}}}$ denotes contraction on the $p$th index of a tensor; when contracted with a matrix, it is understood that summation is always performed on the second index of the matrix. For instance, $({\boldsymbol{\mathcal{A}}}{\mathop{\bullet_{1}}}{\boldsymbol{M}})_{ijk}=\sum_\ell {\mathcal{A}}_{\ell jk} M_{i\ell}$. We denote $${\boldsymbol{\mathcal{A}}}({\boldsymbol{M}}) {\stackrel{\sf def}{=}}{\boldsymbol{\mathcal{A}}} {\mathop{\bullet_{1}}} {\boldsymbol{M}}^{{{\sf T}}} {\mathop{\bullet_{2}}} \cdots {\mathop{\bullet_{d}}} {\boldsymbol{M}}^{{{\sf T}}}$$ for convenience in this paper. For ${\boldsymbol{\mathcal{A}}}\in{\mathbb{R}}^{n\times\cdots\times n}$ and a fixed set of indices $1\leq k_1<k_2<\cdots<k_m\leq n$, we denote by ${\boldsymbol{\mathcal{A}}}^{(k_1,k_2,\cdots,k_m)}$ the $m$-dimensional subtensor obtained from ${\boldsymbol{\mathcal{A}}}$ by allowing its indices to vary in $\{k_1,k_2,\cdots,k_m\}$ only.
**Problem statement.** Let ${\boldsymbol{\mathcal{A}}} \in \text{symm}({\mathbb{R}}^{n\times\cdots\times n})$ and $1 \leq p \leq n$. In this paper, we study the *best rank-$p$ orthogonal approximation problem*, which is to find $$\label{pro-ortho-approxi}
{\boldsymbol{\mathcal{C}}}^{*} {\stackrel{\sf def}{=}}\sum\limits_{k=1}^{p}\sigma_{k}^{*}u_{k}^{*}\otimes\cdots\otimes u_{k}^{*}
= {\mathop{\operator@font argmin}}\|{\boldsymbol{\mathcal{A}}} - \sum_{k=1}^{p}\sigma_{k}u_{k}\otimes\cdots\otimes u_{k}\|,$$ where $[u_{1},\cdots,u_{p}]\in \text{St}(p,n)$ and $\sigma_{k}\in{\mathbb{R}}$ for $1\leq k\leq p$. If $p=1$, then is the *best rank-1 approximation* problem [@Lathauwer00:rank-1approximation; @kofidis2002best; @kolda2011shifted; @zhang2001rank; @SilvCA16:spl] of symmetric tensors, which is equivalent to the *cubic spherical optimization* problem [@qi2009z; @zhang2012best; @Zhang12:MC]. If $p=n$, by [@chen2009tensor Proposition 5.1] and [@LUC2018 Proposition 5.2], we see that is closely related to the *approximate orthogonal diagonalization problem* for 3rd and 4th order cumulant tensors, which is in the core of *Independent Component Analysis* (ICA) [@Como92:elsevier; @Como94:sp; @comon1994tensor], and finds many applications [@Como10:book].
To our knowledge, the orthogonal tensor decomposition was first tackled in [@Como92:elsevier], but appeared more formally in [@Kolda01:Orthogonal], in which many examples were presented to illustrate the difficulties of this type of decomposition. In [@chen2009tensor], the existence of ${\boldsymbol{\mathcal{C}}}^{*}$ in problem was proved, and the *low rank orthogonal approximation of tensors* (LROAT) algorithm and *symmetric LROAT* (SLROAT) were developed to solve this problem based on the polar decomposition. These two algorithms boil down to the *higher order power method* (HOPM) and *symmetric HOPM* (SHOPM) algorithm [@Lathauwer00:rank-1approximation; @kofidis2002best; @zhang2001rank] when $p=1$. More recently, also based on the polar decomposition, a similar algorithm was developed in [@pan2018symmetric] to solve problem , and this algorithm was applied to the image reconstruction task.
**Contribution.** In this paper, we propose a Jacobi-type algorithm to solve problem . This algorithm is exactly the well-known Jacobi CoM2 algorithm [@Como10:book] when $p=n$, and the same as the Jacobi-type algorithm in [@IshtAV13:simax] when $p=1$. We first prove the weak convergence[^2] of this algorithm under the cyclic ordering based on a decomposition property of the identity matrix. Then, under the gradient based ordering defined in [@IshtAV13:simax; @LUC2017globally; @ULC2019], we prove the global convergence[^3] of this algorithm for 3rd order tensors of rank $p=2$ under some conditions. By making some numerical experiments and comparisons, we show that the Jacobi-type algorithm proposed in this paper is efficient and stable.
**Organization.** The paper is organized as follows. In \[geometric\], we show that two optimization problems on Riemannian manifold are both equivalent to , and then calculate their Riemannian gradients. In \[section-algorithm\], we propose a Jacobi-type algorithm to solve . This algorithm includes the well-known Jacobi CoM2 algorithm as a special case. In \[sec-weak-conver\], we prove the weak convergence of this algorithm under the cyclic ordering. In \[sect-Jacobi-G\], we study the global convergence of this algorithm under the gradient based ordering for the 3rd order tensor and $p=2$ case. In \[sect-experiment\], we report some numerical experiments showing the efficiency of this algorithm.
Geometric properties {#geometric}
====================
Equivalent problems
-------------------
Let ${\boldsymbol{\mathcal{A}}} \in \text{symm}({\mathbb{R}}^{n\times\cdots\times n})$ and $1 \leq p \leq n$. Let ${\boldsymbol{X}}\in \text{St}(p,n)$ and $\widetilde{{\boldsymbol{\mathcal{W}}}} = {\boldsymbol{\mathcal{A}}}({\boldsymbol{X}}).$ One problem equivalent to is to find $$\label{pro-stefiel}
{\boldsymbol{X}}_{*} = \operatorname*{\arg\!\max}_{{\boldsymbol{X}}\in \text{St}(p,n)}\tilde{f}({\boldsymbol{X}}),$$ where $$\label{eq-cost-func-1}
\tilde{f}({\boldsymbol{X}}) {\stackrel{\sf def}{=}}\sum\limits_{i=1}^{p}\widetilde{{\mathcal{W}}}^2_{i\cdots i}.$$
([@chen2009tensor Proposition 5.1]) Let ${\boldsymbol{\mathcal{C}}}^{*}$ be as in . Then $$\label{eq-orthogonal-A-T}
\langle{\boldsymbol{\mathcal{A}}}-{\boldsymbol{\mathcal{C}}}^{*}, u_{k}^{*}\otimes\cdots\otimes u_{k}^{*}\rangle = 0\ \ \text{and}\ \
\sigma_{k}^{*} = \langle{\boldsymbol{\mathcal{A}}}, u_{k}^{*}\otimes\cdots\otimes u_{k}^{*}\rangle\notag$$ for $1\leq k\leq p$. Moreover, it holds that $$\label{eq-relation-problem}
\|{\boldsymbol{\mathcal{A}}}-{\boldsymbol{\mathcal{C}}}^{*}\|^2 = \|{\boldsymbol{\mathcal{A}}}\|^2 - \|{\boldsymbol{\mathcal{C}}}^{*}\|^2 = \|{\boldsymbol{\mathcal{A}}}\|^2 - \sum\limits_{k=1}^{p}(\sigma_{k}^{*})^2.$$
\[remark-equivalence\] (i) Let ${\boldsymbol{\mathcal{C}}}^{*}$ be as in and ${\boldsymbol{X}}_{*}$ be as in . We see from that $${\boldsymbol{X}}_{*} = [u_{1}^{*},\cdots,u_{p}^{*}]\ \ \text{and}\ \
\|{\boldsymbol{\mathcal{A}}}-{\boldsymbol{\mathcal{C}}}^{*}\|^2 = \|{\boldsymbol{\mathcal{A}}}\|^2 - \tilde{f}({\boldsymbol{X}}_{*}).$$ In other words, to solve , it is enough for us to solve , which is an optimization problem on $\text{St}(p,n)$.\
(ii) If $p=1$, then is the *cubic spherical optimization problem* [@qi2009z; @zhang2012best; @Zhang12:MC]. If $p=n$, then is the *approximate orthogonal tensor diagonalization problem* [@Como94:sp; @comon1994tensor; @Como10:book; @LUC2017globally].
Let ${\boldsymbol{Q}}\in{{\mathscr{O}}_{n}}$ and ${\boldsymbol{\mathcal{W}}} = {\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}})$. Another problem, equivalent to , is to find $$\label{pro-orthogonal}
{\boldsymbol{Q}}_{*} = \operatorname*{\arg\!\max}_{{\boldsymbol{Q}}\in{{\mathscr{O}}_{n}}}f({\boldsymbol{Q}}),$$ where $$\label{eq-cost-func-2}
f({\boldsymbol{Q}}){\stackrel{\sf def}{=}}\sum\limits_{i=1}^{p}{\mathcal{W}}^2_{i\cdots i}.$$ In fact, if ${\boldsymbol{X}}\in \text{St}(p,n)$ and ${\boldsymbol{Q}}=[{\boldsymbol{X}},{\boldsymbol{Y}}]\in{{\mathscr{O}}_{n}}$, then ${\mathcal{W}}_{i_1\cdots i_d}=\widetilde{{\mathcal{W}}}_{i_1\cdots i_d}$ for any $1\leq i_1,\cdots,i_d\leq p$. The equivalence between and follows from the fact that $f({\boldsymbol{Q}}) = \tilde{f}({\boldsymbol{X}})$.
Let ${\boldsymbol{\mathcal{W}}}\in \text{symm}({\mathbb{R}}^{n\times n\times n})$ and $1\leq p\leq n$. Let $\widetilde{{\boldsymbol{\mathcal{W}}}}={\boldsymbol{\mathcal{W}}}^{(1,2,\cdots,p)}$. Then the objective used in [@IshtAV13:simax (3.1)] is the sum of squares of all the elements in $\widetilde{{\boldsymbol{\mathcal{W}}}}$, while is the sum of squares of the diagonal elements in $\widetilde{{\boldsymbol{\mathcal{W}}}}$. They are the same if $p=1$.
Riemannian gradient
-------------------
Let ${\boldsymbol{\mathcal{A}}} \in \text{symm}({\mathbb{R}}^{n\times\cdots\times n})$ and $1\leq i<j\leq n$. Define $$\begin{aligned}
\sigma_{i,j}({\boldsymbol{\mathcal{A}}}){\stackrel{\sf def}{=}}{\mathcal{A}}_{ii\ldots i}{\mathcal{A}}_{ji\ldots i},\quad
d_{i,j}({\boldsymbol{\mathcal{A}}}){\stackrel{\sf def}{=}}\sigma_{i,j}({\boldsymbol{\mathcal{A}}})-\sigma_{j,i}({\boldsymbol{\mathcal{A}}}) = {\mathcal{A}}_{ii\ldots i}{\mathcal{A}}_{ji\ldots i}-{\mathcal{A}}_{ij\ldots j}{\mathcal{A}}_{jj\ldots j}.\end{aligned}$$
\[RiemanGrad-thm\]The Riemannian gradient of at ${\boldsymbol{Q}}$ is $$\label{eq-Riemannian-gradient}
{\mathop{{\operator@font Proj} \nabla}f({\boldsymbol{Q}})}= {\boldsymbol{Q}}\,\Lambda({\boldsymbol{Q}}),$$ where $$\begin{aligned}
\label{eq-gradient-On}
\Lambda({\boldsymbol{Q}}){\stackrel{\sf def}{=}}d\cdot
\left[\begin{smallmatrix}
0 & -d_{1,2}({\boldsymbol{\mathcal{W}}}) &
\ldots & -d_{1,p}({\boldsymbol{\mathcal{W}}})
& -\sigma_{1,p+1}({\boldsymbol{\mathcal{W}}})
& \cdots & -\sigma_{1,n}({\boldsymbol{\mathcal{W}}})\\ \\
d_{1,2}({\boldsymbol{\mathcal{W}}}) & 0 &
\ldots & -d_{2,p}({\boldsymbol{\mathcal{W}}})
& -\sigma_{2,p+1}({\boldsymbol{\mathcal{W}}})
& \cdots & -\sigma_{2,n}({\boldsymbol{\mathcal{W}}})\\ \\
\ldots&\ldots&\ldots&\ldots&\cdots&\cdots&\cdots\\ \\
d_{1,p}({\boldsymbol{\mathcal{W}}}) &
d_{2,p}({\boldsymbol{\mathcal{W}}}) & \ldots & 0
& -\sigma_{p,p+1}({\boldsymbol{\mathcal{W}}})
& \cdots & -\sigma_{p,n}({\boldsymbol{\mathcal{W}}})\\ \\
\sigma_{1,p+1}({\boldsymbol{\mathcal{W}}}) & \sigma_{2,p+1}({\boldsymbol{\mathcal{W}}})
& \cdots &\sigma_{p,p+1}({\boldsymbol{\mathcal{W}}}) & 0 &\cdots & 0\\ \\
\ldots&\ldots&\ldots&\ldots&\ldots&\cdots & \ldots\\ \\
\sigma_{1,n}({\boldsymbol{\mathcal{W}}}) &
\sigma_{2,n}({\boldsymbol{\mathcal{W}}}) & \ldots & \sigma_{p,n}({\boldsymbol{\mathcal{W}}}) &0
&\cdots& 0
\end{smallmatrix}\right].\end{aligned}$$
Note that $$f({\boldsymbol{Q}}) = \sum\limits_{j=1}^p {\mathcal{W}}^2_{jj\ldots j}
=\sum\limits_{j=1}^p(\sum\limits_{i_1,i_2,\ldots,i_d}{\mathcal{A}}_{i_1,i_2,\ldots,i_d}Q_{i_1,j}Q_{i_2,j}\ldots Q_{i_d,j})^2.$$ Let ${\boldsymbol{\mathcal{V}}} = {\boldsymbol{\mathcal{A}}} {\mathop{\bullet_{2}}} {\boldsymbol{Q}}^{{{\sf T}}} \cdots {\mathop{\bullet_{d}}} {\boldsymbol{Q}}^{{{\sf T}}}$. Fix $1\leq i\leq n$ and $1\leq j\leq p$. Then $$\begin{aligned}
\frac{\partial f}{\partial {Q}_{i,j}}
=2d{\mathcal{W}}_{jj\ldots j} {\mathcal{V}}_{ij\ldots j}\end{aligned}$$ by methods similar to [@LUC2017globally Section 4.1]. Note that ${\boldsymbol{\mathcal{W}}} = {\boldsymbol{\mathcal{V}}}{\mathop{\bullet_{1}}} {\boldsymbol{Q}}^{{{\sf T}}}$. We get the Euclidean gradient of at ${\boldsymbol{Q}}$ as follows: $$\begin{aligned}
\nabla f({\boldsymbol{Q}})
= 2d
{\boldsymbol{Q}}
\begin{bmatrix}
{\mathcal{W}}_{11\ldots 1} & {\mathcal{W}}_{12\ldots 2} & \ldots & {\mathcal{W}}_{1p\ldots p}& 0 &\cdots&0\\
{\mathcal{W}}_{21\ldots 1} & {\mathcal{W}}_{22\ldots 2} & \ldots & {\mathcal{W}}_{2p\ldots p}& 0 &\cdots&0\\
\ldots&\ldots&\ldots&\ldots& \cdots &\cdots&\cdots\\ \\
{\mathcal{W}}_{n1\ldots 1} & {\mathcal{W}}_{n2\ldots 2} & \ldots & {\mathcal{W}}_{np\ldots p}& 0 &\cdots&0
\end{bmatrix}
\begin{bmatrix}
{\mathcal{W}}_{1\ldots 1} & \cdots & 0&\cdots &0\\
\vdots&\ddots&\vdots&\cdots &0\\
0 & \cdots & {\mathcal{W}}_{p\cdots p}&\cdots &0\\
\vdots&\cdots&\cdots&\ddots &\vdots\\
0&\cdots&0&\cdots &0
\end{bmatrix}.\end{aligned}$$ By [@Absil08:Optimization (3.35)], we get that $$\label{eq-Riemannian-gradient}
{\mathop{{\operator@font Proj} \nabla}f({\boldsymbol{Q}})}=
\frac{1}{2}{\boldsymbol{Q}}({\boldsymbol{Q}}^{{{\sf T}}}\nabla f({\boldsymbol{Q}})-\nabla f({\boldsymbol{Q}})^{{{\sf T}}}{\boldsymbol{Q}})
={\boldsymbol{Q}}\,\Lambda({\boldsymbol{Q}}).$$ Then the proof is complete.
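For concreteness, the matrix $\Lambda({\boldsymbol{Q}})$ of can be assembled directly from the entries of ${\boldsymbol{\mathcal{W}}} = {\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}})$; a small sketch for the 3rd order case ($d=3$, 0-based indices, our own illustration) reads:

```python
import numpy as np

def Lambda(A, Q, p):
    """Assemble Lambda(Q) of Eq. (eq-gradient-On) for a 3rd order symmetric A."""
    d, n = A.ndim, Q.shape[0]
    W = np.einsum('abc,ai,bj,ck->ijk', A, Q, Q, Q)   # W = A(Q)
    L = np.zeros((n, n))
    for i in range(p):
        for j in range(i + 1, n):
            if j < p:   # pair in C_1: entry d_{i,j}(W)
                L[j, i] = W[i, i, i] * W[j, i, i] - W[i, j, j] * W[j, j, j]
            else:       # pair in C_2: entry sigma_{i,j}(W)
                L[j, i] = W[i, i, i] * W[j, i, i]
            L[i, j] = -L[j, i]
    return d * L

# Riemannian gradient of f at Q:  Proj grad f(Q) = Q @ Lambda(A, Q, p)
```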
\(i) If $p=1$, we see that $\Lambda({\boldsymbol{Q}})=0$ if and only if $${\mathcal{W}}_{21\ldots 1} = {\mathcal{W}}_{31\ldots 1} = \cdots = {\mathcal{W}}_{n1\ldots 1} = 0,$$ which means that the first column of ${\boldsymbol{Q}}$ satisfies the condition in [@qi2009z (2)].\
(ii) The definition of $\Lambda({\boldsymbol{Q}})$ in can be seen as an extension of [@LUC2017globally (12)].
The Riemannian gradient of at ${\boldsymbol{X}}$ satisfies $$\begin{aligned}
\label{Remannian-gradient-stp-2}
{\boldsymbol{X}}^{{{\sf T}}}{\mathop{{\operator@font Proj} \nabla}\tilde{f}({\boldsymbol{X}})} = d\cdot
\left[\begin{smallmatrix}
0 & -d_{1,2}(\widetilde{{\boldsymbol{\mathcal{W}}}}) &
\ldots & -d_{1,p}(\widetilde{{\boldsymbol{\mathcal{W}}}})\\ \\
d_{1,2}(\widetilde{{\boldsymbol{\mathcal{W}}}}) & 0 &
\ldots & -d_{2,p}(\widetilde{{\boldsymbol{\mathcal{W}}}})\\ \\
\ldots&\ldots&\ldots&\ldots\\ \\
d_{1,p}(\widetilde{{\boldsymbol{\mathcal{W}}}}) &
d_{2,p}(\widetilde{{\boldsymbol{\mathcal{W}}}}) & \ldots & 0
\end{smallmatrix}\right].\end{aligned}$$
The proof goes along the same lines as for \[RiemanGrad-thm\]. Note that $$\tilde{f}({\boldsymbol{X}}) = \sum\limits_{j=1}^p \widetilde{{\mathcal{W}}}^2_{jj\ldots j}
=\sum\limits_{j=1}^p(\sum\limits_{i_1,i_2,\ldots,i_d}{\mathcal{A}}_{i_1,i_2,\ldots,i_d}X_{i_1,j}X_{i_2,j}\ldots X_{i_d,j})^2.$$ Let $\widetilde{{\boldsymbol{\mathcal{V}}}} = {\boldsymbol{\mathcal{A}}} {\mathop{\bullet_{2}}} {\boldsymbol{X}}^{{{\sf T}}} \cdots {\mathop{\bullet_{d}}} {\boldsymbol{X}}^{{{\sf T}}}$. Fix $1\leq i\leq n$ and $1\leq j\leq p$. Then $$\begin{aligned}
\frac{\partial \tilde{f}}{\partial X_{i,j}}
=2d\widetilde{{\mathcal{W}}}_{jj\ldots j} \widetilde{{\mathcal{V}}}_{ij\ldots j}\end{aligned}$$ by the similar methods in [@LUC2017globally Section 4.1]. Note that $\widetilde{{\boldsymbol{\mathcal{W}}}} = \widetilde{{\boldsymbol{\mathcal{V}}}}{\mathop{\bullet_{1}}} {\boldsymbol{X}}^{{{\sf T}}}$. We get the Euclidean gradient of at ${\boldsymbol{X}}$ as follows: $$\begin{aligned}
\nabla \tilde{f}({\boldsymbol{X}})&= 2d
\begin{bmatrix}
\widetilde{{\mathcal{V}}}_{11\ldots 1} & \widetilde{{\mathcal{V}}}_{12\ldots 2} & \cdots & \widetilde{{\mathcal{V}}}_{1p\ldots p} \\
\widetilde{{\mathcal{V}}}_{21\ldots 1} & \widetilde{{\mathcal{V}}}_{22\ldots 2} & \cdots & \widetilde{{\mathcal{V}}}_{2p\ldots p} \\
\cdots&\cdots&\cdots&\cdots\\
\widetilde{{\mathcal{V}}}_{n1\ldots 1} & \widetilde{{\mathcal{V}}}_{n2\ldots 2} & \cdots & \widetilde{{\mathcal{V}}}_{np\ldots p}
\end{bmatrix}
\begin{bmatrix}
\widetilde{{\mathcal{W}}}_{1\ldots 1} & \cdots & 0\\
\vdots & \ddots & \vdots\\
0 & \cdots & \widetilde{{\mathcal{W}}}_{p\cdots p}
\end{bmatrix}.\end{aligned}$$ It follows by [@Absil08:Optimization (3.35)] that $$\begin{aligned}
\label{Remannian-gradient-stp-1}
{\mathop{{\operator@font Proj} \nabla}\tilde{f}({\boldsymbol{X}})}
= ({\boldsymbol{I}}_{n}-{\boldsymbol{X}}{\boldsymbol{X}}^{{{\sf T}}})\nabla \tilde{f}({\boldsymbol{X}})
+ d{\boldsymbol{X}}\cdot
\left[\begin{smallmatrix}
0 & -d_{1,2}(\widetilde{{\boldsymbol{\mathcal{W}}}}) &
\ldots & -d_{1,p}(\widetilde{{\boldsymbol{\mathcal{W}}}})\\ \\
d_{1,2}(\widetilde{{\boldsymbol{\mathcal{W}}}}) & 0 &
\ldots & -d_{2,p}(\widetilde{{\boldsymbol{\mathcal{W}}}})\\ \\
\ldots&\ldots&\ldots&\ldots\\ \\
d_{1,p}(\widetilde{{\boldsymbol{\mathcal{W}}}}) &
d_{2,p}(\widetilde{{\boldsymbol{\mathcal{W}}}}) & \ldots & 0
\end{smallmatrix}\right],\end{aligned}$$ and the proof is completed.
\[pro-equiva-sationary\] Let ${\boldsymbol{\mathcal{A}}} \in \text{symm}({\mathbb{R}}^{n\times\cdots\times n})$ and $1 \leq p \leq n$. Let ${\boldsymbol{X}}_{*}\in \text{St}(p,n)$ and ${\boldsymbol{Q}}_{*} = [{\boldsymbol{X}}_{*},{\boldsymbol{Y}}_{*}]\in {{\mathscr{O}}_{n}}.$ Suppose that $\tilde{f}$ is as in and $f$ is as in . Then $${\mathop{{\operator@font Proj} \nabla}\tilde{f}({\boldsymbol{X}}_{*})}=0\ \Leftrightarrow \ {\mathop{{\operator@font Proj} \nabla}f({\boldsymbol{Q}}_{*})}=0.$$
Let $\widetilde{{\boldsymbol{\mathcal{W}}}}_{*} = {\boldsymbol{\mathcal{A}}}({\boldsymbol{X}}_{*})$ and ${\boldsymbol{\mathcal{W}}}_{*} = {\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}}_{*})$.\
($\Rightarrow$). By , we see that $d_{i,j}({\boldsymbol{\mathcal{W}}}_{*}) = d_{i,j}(\widetilde{{\boldsymbol{\mathcal{W}}}}_{*}) = 0$ for any $1\leq i<j \leq p$. It follows by that $${\boldsymbol{Y}}_{*}{\boldsymbol{Y}}_{*}^{{{\sf T}}}\nabla \tilde{f}({\boldsymbol{X}}_{*})
= ({\boldsymbol{I}}_{n}-{\boldsymbol{X}}_{*}{\boldsymbol{X}}_{*}^{{{\sf T}}})\nabla \tilde{f}({\boldsymbol{X}}_{*}) = 0,$$ and thus $${\boldsymbol{Y}}_{*}^{{{\sf T}}}\nabla \tilde{f}({\boldsymbol{X}}_{*})
= {\boldsymbol{Y}}_{*}^{{{\sf T}}}{\boldsymbol{Y}}_{*}{\boldsymbol{Y}}_{*}^{{{\sf T}}}\nabla \tilde{f}({\boldsymbol{X}}_{*}) = 0.$$ Then $\sigma_{i,j}({\boldsymbol{\mathcal{W}}}_{*}) = 0$ for any $1\leq i\leq p<j\leq n$, and thus ${\mathop{{\operator@font Proj} \nabla}f({\boldsymbol{Q}}_{*})}=0$ by .\
($\Leftarrow$). By , we see that $d_{i,j}(\widetilde{{\boldsymbol{\mathcal{W}}}}_{*}) = d_{i,j}({\boldsymbol{\mathcal{W}}}_{*}) = 0$ for any $1\leq i<j \leq p$. Note that $\sigma_{i,j}({\boldsymbol{\mathcal{W}}}_{*}) = 0$ for any $1\leq i\leq p< j\leq n$. It follows that ${\boldsymbol{Y}}_{*}^{{{\sf T}}}\nabla \tilde{f}({\boldsymbol{X}}_{*}) = 0,$ and thus $$({\boldsymbol{I}}_{n}-{\boldsymbol{X}}_{*}{\boldsymbol{X}}_{*}^{{{\sf T}}})\nabla \tilde{f}({\boldsymbol{X}}_{*})
= {\boldsymbol{Y}}_{*}{\boldsymbol{Y}}_{*}^{{{\sf T}}}\nabla \tilde{f}({\boldsymbol{X}}_{*}) = 0.$$ Then ${\mathop{{\operator@font Proj} \nabla}\tilde{f}({\boldsymbol{X}}_{*})}=0$ by .
Jacobi low rank orthogonal approximation algorithm {#section-algorithm}
==================================================
Algorithm description
---------------------
Let $1\leq p\leq n$ and ${\mathscr{C}}=\{(i,j), 1\leq i< j\leq n, i\leq p\}$. We divide ${\mathscr{C}}$ to be two different subsets $$\begin{aligned}
\mathcal{C}_{1} {\stackrel{\sf def}{=}}\{(i,j),\ 1\leq i<j\leq p\}\ \text{and}\ \
\mathcal{C}_{2} {\stackrel{\sf def}{=}}\{(i,j),\ 1\leq i\leq p<j\leq n\}.\end{aligned}$$ Denote by ${{\boldsymbol{G}}^{(i,j,\theta)}}$ the *Givens rotation* matrix, as defined *e.g.* in [@LUC2017globally Section 2.2]. Now we formulate the [*Jacobi low rank orthogonal approximation*]{} (JLROA) algorithm for problem as follows.
\[al-JLROA\](JLROA algorithm)\
[**Input:**]{} ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{n\times \cdots\times n})$, $1 \leq p \leq n$, a starting point ${\boldsymbol{Q}}_{0}$.\
[**Output:**]{} Sequence of iterations ${\boldsymbol{Q}}_{k}$.
- [**For**]{} $k=1,2,\ldots$ until a stopping criterion is satisfied do
- Choose the pair $(i_k,j_k)\in{\mathscr{C}}$ in the following cyclic ordering: $$\label{partial-cyclic-1}
\begin{split}
&(1,2) \to (1,3) \to \cdots \to (1,n) \to \\
& (2,3) \to \cdots \to (2,n) \to \\
& \cdots \to (p,p+1) \to \cdots \to (p,n) \to \\
&(1,2) \to (1,3) \to \cdots.
\end{split}$$
- Solve $\theta_k^{*}$ that maximizes $h_k(\theta){\stackrel{\sf def}{=}}\textit{f}({\boldsymbol{Q}}_{k-1}{{\boldsymbol{G}}^{(i_k,j_k,\theta)}})$.
- Set ${\boldsymbol{U}}_k {\stackrel{\sf def}{=}}{{\boldsymbol{G}}^{(i_k,j_k,\theta^{*}_k)}}$, and update ${\boldsymbol{Q}}_k = {\boldsymbol{Q}}_{k-1} {\boldsymbol{U}}_k$.
- [**End for**]{}
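The overall structure of \[al-JLROA\] is summarized by the following sketch for the 3rd order case (0-based indices, our own illustration; the elementary maximization of $h_k$ is left as a user-supplied routine `best_angle`, solved in closed form in the examples below, and the sign convention of the Givens rotation is one possible choice):

```python
import numpy as np

def jlroa(A, p, Q0, best_angle, sweeps=50, tol=1e-12):
    """Jacobi sweeps over the pairs in C, following the cyclic ordering (partial-cyclic-1)."""
    n = Q0.shape[0]
    Q = Q0.copy()
    pairs = [(i, j) for i in range(p) for j in range(i + 1, n)]
    for _ in range(sweeps):
        progress = 0.0
        for (i, j) in pairs:
            W = np.einsum('abc,ai,bj,ck->ijk', A, Q, Q, Q)   # W = A(Q), recomputed for clarity
            theta = best_angle(W, i, j)                      # argmax of h_k(theta)
            G = np.eye(n)                                    # Givens rotation G^{(i,j,theta)}
            G[i, i] = G[j, j] = np.cos(theta)
            G[i, j], G[j, i] = -np.sin(theta), np.sin(theta)
            Q = Q @ G
            progress = max(progress, abs(theta))
        if progress < tol:                                   # stop when no pair moves anymore
            break
    return Q
```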
Elementary rotation
-------------------
Let ${\boldsymbol{\mathcal{W}}} = {\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}}_{k-1})$ and ${\boldsymbol{\mathcal{T}}} = {\boldsymbol{\mathcal{W}}}({{\boldsymbol{G}}^{(i_k,j_k,\theta)}})$. As in \[al-JLROA\], we define $$\begin{aligned}
\label{definition-h}
\textit{h}_k:\ [-\frac{\pi}{2},\frac{\pi}{2}]\longrightarrow {\mathbb{R}}^+,
\ \theta \longmapsto \textit{f}\ ({\boldsymbol{Q}}_{k-1}{{\boldsymbol{G}}^{(i_k,j_k,\theta)}})=\sum_{i=1}^{p}{\mathcal{T}}_{i\cdots i}^2\end{aligned}$$ where $f$ is as in . Note that ${{\boldsymbol{G}}^{(i_k,j_k,\theta)}}={{\boldsymbol{G}}^{(i_k,j_k,\theta+2\pi)}}$ and ${\mathcal{T}}_{i\cdots i}^2(\theta)={\mathcal{T}}_{i\cdots i}^2(\theta+\pi)$ for any $\theta\in{\mathbb{R}}$ and $1\leq i\leq p$. We see that $\textit{h}_k$ has the same image as the function defined on ${\mathbb{R}}$. So it is sufficient to determine $\theta_k^{*}\in [-\pi/2, \pi/2]$ such that $\textit{h}_k(\theta_k^{*})=\max\limits_{\theta} \textit{h}_k(\theta)$, and we choose $\theta_k^{*}$ with the smallest absolute value if there is more than one choice.
Denote by $\overline{{\mathbb{R}}}={\mathbb{R}}\cup\{\pm\infty\}$. Define $$\begin{aligned}
\tau_k:\ \overline{{\mathbb{R}}}\longrightarrow {\mathbb{R}}^+,
\ \ x \longmapsto \textit{h}_k(\arctan(x)).\end{aligned}$$ Let $x=\tan(\theta)\in\overline{{\mathbb{R}}}$ and $x^{*}_{k}=\tan(\theta_k^{*})$. Then $$\begin{aligned}
\tau_k(x) - \tau_k(0) = \textit{h}_k(\theta) - \textit{h}_k(0)
= \sum_{i=1}^{p}{\mathcal{T}}_{i\cdots i}^2 - \sum_{i=1}^{p}{\mathcal{W}}_{i\cdots i}^2.\end{aligned}$$
\[lemma-derivative-h\]Let $\textit{h}_k$ be as in . Then $\textit{h}_k^{'}(\theta)= -2\Lambda({\boldsymbol{Q}}_{k-1}{{\boldsymbol{G}}^{(i_k,j_k,\theta)}})_{i_k,j_k}.$
We denote by ${\boldsymbol{G}}(\theta)={{\boldsymbol{G}}^{(i_k,j_k,\theta)}}$ for convenience. Then it follows from and the methods similar to [@LUC2017globally Lemma 5.7] that $$\begin{aligned}
\textit{h}_k^{'}(\theta)&=\langle{\mathop{{\operator@font Proj} \nabla}\textit{f}({\boldsymbol{Q}}_{k-1}{\boldsymbol{G}}(\theta))}, {\boldsymbol{Q}}_{k-1}{\boldsymbol{G}}^{'}(\theta)\rangle
=\langle {\boldsymbol{Q}}_{k-1}{\boldsymbol{G}}(\theta)\Lambda({\boldsymbol{Q}}_{k-1}{\boldsymbol{G}}(\theta)), {\boldsymbol{Q}}_{k-1}{\boldsymbol{G}}^{'}(\theta)\rangle\\
&=\langle\Lambda({\boldsymbol{Q}}_{k-1}{\boldsymbol{G}}(\theta)), {{\boldsymbol{G}}(\theta)}^{{{\sf T}}}{\boldsymbol{G}}^{'}(\theta)\rangle
= -2\Lambda({\boldsymbol{Q}}_{k-1}{\boldsymbol{G}}(\theta))_{i_k,j_k}.\end{aligned}$$
Let $(i_k,j_k)\in\mathcal{C}_{1}$. Then $h_k (\theta)$ in also has a period $\pi/2$ by [@LUC2017globally Section 4.3]. In other words, we can choose $\theta^{*}_k\in [-\pi/4, \pi/4]$ to maximize $h_k (\theta)$. Equivalently, we can choose $x^{*}_{k}\in[-1,1]$ to maximize $\tau_k(x)$.
Examples
--------
Let ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{n\times \cdots\times n})$ be of 3rd or 4th order. Now we show the details of how to solve $\theta_k^{*}$ in \[al-JLROA\]. In fact, the methods in \[exam-3rd-order\](i) and \[exam-4th-order\](i) were first formulated in [@comon1994tensor], and can also be found in [@LUC2017globally Section 6.2]. We present them here for convenience.
\[exam-3rd-order\] (For 3rd order symmetric tensors)\
(i) $\mathbf{Case}$ 1: $(i_k,j_k)\in \mathcal{C}_{1}$. Take $p=2$ and the pair $(1,2)$ for example. Let $$\begin{aligned}
a &= 6({\mathcal{W}}_{111}{\mathcal{W}}_{112}-{\mathcal{W}}_{122}{\mathcal{W}}_{222}),\\
b &= 6({\mathcal{W}}_{111}^2+{\mathcal{W}}_{222}^2-3{\mathcal{W}}_{112}^2-3{\mathcal{W}}_{122}^2
-2{\mathcal{W}}_{111}{\mathcal{W}}_{122}-2{\mathcal{W}}_{112}{\mathcal{W}}_{222}).\end{aligned}$$ Then we have that $$\begin{aligned}
\tau_k(x)-\tau_k(0)&=\frac{1}{(1+x^2)^2}(a(x-x^3)-\frac{b}{2}x^2),\label{eq-inc-3}\\
\tau_k^{'}(x) &= \frac{1}{(1+x^2)^3}(a(1-6x^2+x^4)-b(x-x^3)).\notag\end{aligned}$$ Denote by $\xi = x - 1/x$. Then $\tau_k^{'}(x)=0$ if and only if $$\begin{aligned}
\Omega(\xi) {\stackrel{\sf def}{=}}a\xi^2+b\xi-4a = 0.\end{aligned}$$ Solve $\Omega(\xi) = 0$ for all the real roots $\xi_\ell$. Then solve $x^2-\xi_\ell x-1=0$ for all $\ell$ and take the best real root as $x_k^{*}$.\
(ii) $\mathbf{Case}$ 2: $(i_k,j_k)\in \mathcal{C}_{2}$. Take $p=2$ and the pair (1,$3$) for example. It holds that $$\begin{aligned}
\tau_k(x)-\tau_k(0) &
= {\mathcal{T}}_{111}^{2}-{\mathcal{W}}_{111}^2
=\frac{1}{(1+x^2)^3}[
({\mathcal{W}}_{333}^2-{\mathcal{W}}_{111}^2)x^6+(6{\mathcal{W}}_{133}{\mathcal{W}}_{333})x^5\notag\\
&+(-3{\mathcal{W}}_{111}^2+9{\mathcal{W}}_{133}^2+6{\mathcal{W}}_{113}{\mathcal{W}}_{333})x^4
+(18{\mathcal{W}}_{113}{\mathcal{W}}_{133}+2{\mathcal{W}}_{111}{\mathcal{W}}_{333})x^3\notag\\
&+(-3{\mathcal{W}}_{111}^2+6{\mathcal{W}}_{133}{\mathcal{W}}_{111}+9{\mathcal{W}}_{113}^2)x^2
+(6{\mathcal{W}}_{111}{\mathcal{W}}_{113})x],\label{eq-increasement-order3-2}\\
\tau_k^{'}(x)
&= \frac{6{\mathcal{T}}_{111}(x)}{(1+x^2)^{5/2}}[-{\mathcal{W}}_{133}x^3+({\mathcal{W}}_{333}-2{\mathcal{W}}_{113})x^2
+(2{\mathcal{W}}_{133}-{\mathcal{W}}_{111})x+{\mathcal{W}}_{113}].\notag\end{aligned}$$ Then we solve $$\label{eq-order-3-station}
-{\mathcal{W}}_{133}x^3+({\mathcal{W}}_{333}-2{\mathcal{W}}_{113})x^2+(2{\mathcal{W}}_{133}-{\mathcal{W}}_{111})x+{\mathcal{W}}_{113} = 0,$$ and take $x^{*}_{k}$ to be the best point among these real roots and $\pm\infty$.
is similar to equations in [@Lathauwer00:rank-1approximation Section 3.5], which is for the best rank-1 approximation of a tensor in $\text{symm}({\mathbb{R}}^{2\times 2\times 2})$.
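
For illustration, a Python sketch of the two elementary-rotation solves above is given below. The argument names simply list the relevant entries of ${\boldsymbol{\mathcal{W}}}$, the tolerances are arbitrary, and degenerate cases are handled only coarsely; it is a sketch of the procedure described in \[exam-3rd-order\], not a tuned implementation.

```python
import numpy as np

def theta_case1_order3(W111, W112, W122, W222):
    """Rotation angle for a pair (i,j) in C_1 (3rd order); the arguments are the
    entries W_iii, W_iij, W_ijj, W_jjj of the current tensor W = A(Q_{k-1})."""
    a = 6.0 * (W111 * W112 - W122 * W222)
    b = 6.0 * (W111**2 + W222**2 - 3*W112**2 - 3*W122**2
               - 2*W111*W122 - 2*W112*W222)
    def incr(x):                                   # tau_k(x) - tau_k(0)
        return (a * (x - x**3) - 0.5 * b * x**2) / (1 + x**2)**2
    cands = [0.0]
    if abs(a) > 1e-14 or abs(b) > 1e-14:
        for xi in np.roots([a, b, -4.0 * a]):      # Omega(xi) = 0
            if abs(xi.imag) < 1e-10:
                cands += list(np.roots([1.0, -xi.real, -1.0]).real)   # x^2 - xi*x - 1 = 0
    return np.arctan(max(cands, key=incr))

def theta_case2_order3(W111, W113, W133, W333):
    """Rotation angle for a pair (i,j) in C_2 (3rd order); the arguments are the
    entries W_iii, W_iij, W_ijj, W_jjj, with j the index larger than p."""
    def T111_sq(x):                                # T_iii^2 after rotating by arctan(x)
        if np.isinf(x):
            return W333**2
        c, s = 1.0 / np.sqrt(1 + x**2), x / np.sqrt(1 + x**2)
        return (c**3*W111 + 3*c**2*s*W113 + 3*c*s**2*W133 + s**3*W333)**2
    coeffs = [-W133, W333 - 2*W113, 2*W133 - W111, W113]   # stationary-point cubic
    cands = [0.0, np.inf, -np.inf]
    if any(abs(co) > 1e-14 for co in coeffs):
        cands += [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-10]
    return np.arctan(max(cands, key=T111_sq))
```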
\[exam-4th-order\] (For 4th order symmetric tensors)\
(i) $\mathbf{Case}$ 1: $(i_k,j_k)\in \mathcal{C}_{1}$. Take $p=2$ and the pair $(1,2)$ for example. It holds that $$\begin{aligned}
&\tau_k(x)-\tau_k(0)
= {\mathcal{T}}_{1111}^{2}+{\mathcal{T}}_{2222}^{2}-{\mathcal{W}}_{1111}^2-{\mathcal{W}}_{2222}^2\\
&=\frac{1}{(1+x^2)^4}((8{\mathcal{W}}_{1111}{\mathcal{W}}_{1112}-8{\mathcal{W}}_{1222}{\mathcal{W}}_{2222})(x-x^7)\\
&+(-4{\mathcal{W}}_{1111}^2 + 12{\mathcal{W}}_{1122}{\mathcal{W}}_{1111} + 16{\mathcal{W}}_{1112}^2 + 16{\mathcal{W}}_{1222}^2
- 4{\mathcal{W}}_{2222}^2 + 12{\mathcal{W}}_{1122}{\mathcal{W}}_{2222})(x^2+x^6)\\
&+(48{\mathcal{W}}_{1112}{\mathcal{W}}_{1122} + 8{\mathcal{W}}_{1111}{\mathcal{W}}_{1222}
- 48{\mathcal{W}}_{1122}{\mathcal{W}}_{1222} - 8{\mathcal{W}}_{1112}{\mathcal{W}}_{2222})(x^3-x^5)\\
&+(- 6{\mathcal{W}}_{1111}^2 + 4{\mathcal{W}}_{1111}{\mathcal{W}}_{2222} + 72{\mathcal{W}}_{1122}^2 - 6{\mathcal{W}}_{2222}^2 + 64{\mathcal{W}}_{1112}{\mathcal{W}}_{1222})x^4).\end{aligned}$$ Denote by $$\begin{aligned}
a &= 8({\mathcal{W}}_{1111}{\mathcal{W}}_{1112}-{\mathcal{W}}_{1222}{\mathcal{W}}_{2222});\\
b &=8({\mathcal{W}}_{1111}^2-3{\mathcal{W}}_{1122}{\mathcal{W}}_{1111}-4{\mathcal{W}}_{1112}^2
-4{\mathcal{W}}_{1222}^2+{\mathcal{W}}_{2222}^2-3{\mathcal{W}}_{1122}{\mathcal{W}}_{2222});\\
c &= 8(18{\mathcal{W}}_{1112}{\mathcal{W}}_{1122}-7{\mathcal{W}}_{1111}{\mathcal{W}}_{1112}+3{\mathcal{W}}_{1111}{\mathcal{W}}_{1222}\\
&-18{\mathcal{W}}_{1122}{\mathcal{W}}_{1222}-3{\mathcal{W}}_{1112}{\mathcal{W}}_{2222}+7{\mathcal{W}}_{1222}{\mathcal{W}}_{2222});\\
d &= 8(9{\mathcal{W}}_{1111}{\mathcal{W}}_{1122}-32{\mathcal{W}}_{1112}{\mathcal{W}}_{1222}-2{\mathcal{W}}_{1111}{\mathcal{W}}_{2222}\\
&+9{\mathcal{W}}_{1122}{\mathcal{W}}_{2222}+12{\mathcal{W}}_{1112}^2-36{\mathcal{W}}_{1122}^2+12{\mathcal{W}}_{1222}^2);\\
e &= 80(6{\mathcal{W}}_{1122}{\mathcal{W}}_{1222}-{\mathcal{W}}_{1111}{\mathcal{W}}_{1222}-6{\mathcal{W}}_{1112}{\mathcal{W}}_{1122}+{\mathcal{W}}_{1112}{\mathcal{W}}_{2222}).\end{aligned}$$ Then $$\begin{aligned}
&\tau_k'(x) = \frac{1}{(1+x^2)^5}[a(1+x^8)+b(x^7-x)+c(x^6+x^2)+d(x^5-x^3)+ex^4].\end{aligned}$$ Denote by $\xi = x - 1/x$. It follows that $\tau_k^{'}(x)=0$ if and only if $$\begin{aligned}
\Omega(\xi) {\stackrel{\sf def}{=}}a\xi^4+b\xi^3+(4a + c)\xi^2+(3b + d)\xi+2a+2c+e = 0.\end{aligned}$$ Solve $\Omega(\xi) = 0$ for all the real roots $\xi_\ell$. Then solve $x^2-\xi_\ell x-1=0$ for all $\ell$ and take the best real root as $x_k^{*}$.\
(ii) $\mathbf{Case}$ 2: $(i_k,j_k)\in \mathcal{C}_{2}$. Take $p=2$ and the pair (1,$3$) for example. It holds that $$\begin{aligned}
\tau_k(x)-\tau_k(0) &=\frac{1}{(1+x^2)^4}[
({\mathcal{W}}_{3333}^2-{\mathcal{W}}_{1111}^2)x^8
+ (8{\mathcal{W}}_{1333}{\mathcal{W}}_{3333})x^7\\
&+ (- 4{\mathcal{W}}_{1111}^2 + 16{\mathcal{W}}_{1333}^2 + 12{\mathcal{W}}_{1133}{\mathcal{W}}_{3333})x^6
+ (48{\mathcal{W}}_{1133}{\mathcal{W}}_{1333} + 8{\mathcal{W}}_{1113}{\mathcal{W}}_{3333})x^5\\
&+ (- 6{\mathcal{W}}_{1111}^2 + 2{\mathcal{W}}_{3333}{\mathcal{W}}_{1111} + 36{\mathcal{W}}_{1133}^2 + 32{\mathcal{W}}_{1113}{\mathcal{W}}_{1333})x^4\\
&+ (48{\mathcal{W}}_{1113}{\mathcal{W}}_{1133} + 8{\mathcal{W}}_{1111}{\mathcal{W}}_{1333})x^3\\
&+ (- 4{\mathcal{W}}_{1111}^2 + 12{\mathcal{W}}_{1133}{\mathcal{W}}_{1111} + 16{\mathcal{W}}_{1113}^2)x^2
+ (8{\mathcal{W}}_{1111}{\mathcal{W}}_{1113})x],\\
\tau_k^{'}(x)
&= \frac{-8{\mathcal{T}}_{1111}}{(1+x^2)^3}
[{\mathcal{W}}_{1333}x^4 +(3{\mathcal{W}}_{1133}-{\mathcal{W}}_{3333})x^3
+ (3{\mathcal{W}}_{1113}-3{\mathcal{W}}_{1333})x^2\\
&+ ({\mathcal{W}}_{1111}-3{\mathcal{W}}_{1133})x-{\mathcal{W}}_{1113}].\end{aligned}$$ Then we solve $${\mathcal{W}}_{1333}x^4 +(3{\mathcal{W}}_{1133}-{\mathcal{W}}_{3333})x^3
+ (3{\mathcal{W}}_{1113}-3{\mathcal{W}}_{1333})x^2
+ ({\mathcal{W}}_{1111}-3{\mathcal{W}}_{1133})x-{\mathcal{W}}_{1113} = 0$$ and take $x^{*}_{k}$ to be the best point among these real roots and $\pm\infty$.
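
Since the coefficient expressions above are lengthy, the following Python sketch checks the 4th-order Case 1 increment formula numerically against a direct evaluation of the rotated tensor (the Givens sign convention, the random seed and the test point $x$ are our own choices):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2, 2, 2))
W = sum(A.transpose(p) for p in permutations(range(4))) / 24.0   # symmetrise

x = 0.7                                        # arbitrary test point
c, s = 1 / np.sqrt(1 + x**2), x / np.sqrt(1 + x**2)
G = np.array([[c, -s], [s, c]])                # Givens rotation with theta = arctan(x)
T = np.einsum('abcd,ai,bj,ck,dl->ijkl', W, G, G, G, G)

lhs = T[0,0,0,0]**2 + T[1,1,1,1]**2 - W[0,0,0,0]**2 - W[1,1,1,1]**2

W1111, W1112, W1122 = W[0,0,0,0], W[0,0,0,1], W[0,0,1,1]
W1222, W2222 = W[0,1,1,1], W[1,1,1,1]
rhs = ((8*W1111*W1112 - 8*W1222*W2222) * (x - x**7)
       + (-4*W1111**2 + 12*W1122*W1111 + 16*W1112**2 + 16*W1222**2
          - 4*W2222**2 + 12*W1122*W2222) * (x**2 + x**6)
       + (48*W1112*W1122 + 8*W1111*W1222
          - 48*W1122*W1222 - 8*W1112*W2222) * (x**3 - x**5)
       + (-6*W1111**2 + 4*W1111*W2222 + 72*W1122**2
          - 6*W2222**2 + 64*W1112*W1222) * x**4) / (1 + x**2)**4

print(abs(lhs - rhs))                          # ~1e-16: the two expressions agree
```

An analogous check applies to the Case 2 expression.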
Weak convergence to stationary points {#sec-weak-conver}
=====================================
Let $N=p(2n-p-1)/2$ be the number of elements in ${\mathscr{C}}$. We denote by $\Sigma$ the set of all the *ordered sets* ${\mathscr{P}}$ of index pairs in ${\mathscr{C}}$, that is, $$\begin{aligned}
{\mathscr{P}} = \{(i_1,j_1), (i_2,j_2), \cdots, (i_N,j_N)\}\in\Sigma.\end{aligned}$$ We denote by $\Sigma_{0}\subseteq\Sigma$ the subset consisting of those ordered sets $$\begin{aligned}
{\mathscr{P}}^{*} = \{(i^{*}_1,j^{*}_1), (i^{*}_2,j^{*}_2), \cdots, (i^{*}_N,j^{*}_N)\},\end{aligned}$$ for which the first $n-1$ pairs $\{(i^{*}_1,j^{*}_1), \cdots, (i^{*}_{n-1},j^{*}_{n-1})\}$ have one common index, the next $n-2$ pairs $\{(i^{*}_n,j^{*}_n), \cdots, (i^{*}_{2n-3},j^{*}_{2n-3})\}$ have one common index, the next $n-3$ pairs $\{(i^{*}_{2n-2},j^{*}_{2n-2}), \cdots, (i^{*}_{3n-6},j^{*}_{3n-6})\}$ have one common index, and so on, until the last $n-p$ pairs $\{(i^{*}_{N-n+p+1},j^{*}_{N-n+p+1}), \cdots, (i^{*}_{N},j^{*}_{N})\}$ have one common index.
\[def-equivalent-order\]Let ${\mathscr{P}}_1, {\mathscr{P}}_2\in\Sigma$. We say that ${\mathscr{P}}_1$ is *equivalent* to ${\mathscr{P}}_2$ if we can obtain ${\mathscr{P}}_2$ from ${\mathscr{P}}_1$ only by\
(i) exchanging the positions of $(i_l,j_l)$ and $(i_{l+1},j_{l+1})$ when $\{i_l,j_l\}\cap \{i_{l+1},j_{l+1}\}=\emptyset$;\
(ii) moving the first element to the position after the last one;\
(iii) moving the last element to the position before the first one;\
(iv) reversing the positions of all the elements.
\(i) Let $n=p=4$. Let ${\mathscr{P}}=\{(1,3), (2,3), (2,4), (1,4), (3,4), (1,2)\}$. We can see that ${\mathscr{P}}$ is equivalent to $$\{(2,4), (1,4), (3,4), (1,2), (1,3), (2,3)\}\ \ \text{and}\ \
\{(3,4), (1,3), (2,3), (2,4), (1,4), (1,2)\},$$ which are both in $\Sigma_{0}$. On the other hand, it is not difficult to see that $$\begin{aligned}
\label{example-dim-4}
\{(1,2), (1,4), (2,3), (2,4), (1,3), (3,4)\}\end{aligned}$$ is not equivalent to any ${\mathscr{P}}^{*}\in\Sigma_{0}$.\
(ii) Let $n=p$. We can verify that there always exists such ${\mathscr{P}}\in\Sigma$ as in when $n$ is odd and $n\geq 5$. In fact, in this case, we can construct a graph by taking the indices $1,\ldots,n$ as vertices and the index pairs in ${\mathscr{C}}$ as edges. Then, by Euler’s Theorem, there always exists an Eulerian circuit, which corresponds to a ${\mathscr{P}}\in\Sigma$ not equivalent to any ${\mathscr{P}}^{*}\in\Sigma_{0}$. When $n=5$, one such ${\mathscr{P}}$ is $$\begin{aligned}
\{ (1,2), (2,3), (3,4), (4,5), (3,5), (1,3), (1,4), (2,4), (2,5), (1,5) \}.\end{aligned}$$
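
A small sketch of the Eulerian-circuit construction in (ii), based on the networkx package, is given below; the particular circuit returned depends on the traversal order of the library and need not coincide with the ordering displayed above, and the non-equivalence to $\Sigma_{0}$ is not re-verified here.

```python
import networkx as nx

def eulerian_ordering(n):
    """For odd n >= 5 and p = n, order all pairs in C along an Eulerian circuit of
    the complete graph K_n (which exists since every vertex degree n-1 is even)."""
    circuit = nx.eulerian_circuit(nx.complete_graph(n))
    return [tuple(sorted((u + 1, v + 1))) for u, v in circuit]   # 1-based index pairs

print(eulerian_ordering(5))   # ten pairs, consecutive ones sharing exactly one index
```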
\[al-general\](General algorithm)\
[**Input:**]{} ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{n\times \cdots\times n})$, $1 \leq p \leq n$, a starting point ${\boldsymbol{Q}}_{0}$, an ordered set ${\mathscr{P}}\in\Sigma$.\
[**Output:**]{} Sequence of iterations ${\boldsymbol{Q}}_{k}$.
- [**For**]{} $k=1,2,\ldots$ until a stopping criterion is satisfied do
- Choose the pair $(i_k,j_k)\in{\mathscr{C}}$ according to ${\mathscr{P}}$.
- Solve $\theta^{*}_{k}$ that maximizes $h_k(\theta)$ defined as in .
- Set ${\boldsymbol{U}}_k {\stackrel{\sf def}{=}}{{\boldsymbol{G}}^{(i_k,j_k,\theta^{*}_k)}}$, and update ${\boldsymbol{Q}}_k = {\boldsymbol{Q}}_{k-1} {\boldsymbol{U}}_k$.
- [**End for**]{}
Let ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{n\times \cdots\times n})$ and ${\boldsymbol{Q}}\in{{\mathscr{O}}_{n}}$. Let ${\boldsymbol{\mathcal{W}}} = {\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}})$ and $(i,j)\in{\mathscr{C}}$. Suppose that $\theta_{*}$ is a maximal point of the function $$\begin{aligned}
\textit{h}:\ [-\frac{\pi}{2},\frac{\pi}{2}]\longrightarrow {\mathbb{R}}^+, \ \
\theta \longmapsto \textit{f}\ ({\boldsymbol{Q}}{{\boldsymbol{G}}^{(i,j,\theta)}})\end{aligned}$$ as in . We define the operators $\Phi_{i,j}$ by sending ${\boldsymbol{Q}}$ to ${\boldsymbol{Q}}{{\boldsymbol{G}}^{(i,j,\theta_{*})}}.$ Then the iterations in the $t$-th loop[^4] of \[al-general\] are in fact generated as follows: $$\begin{aligned}
&\cdots\xrightarrow{\Phi_{i_N,j_N}} {\boldsymbol{Q}}_{(t-1)N}
\xrightarrow{\Phi_{i_1,j_1}}
{\boldsymbol{Q}}_{(t-1)N+1} \xrightarrow{\Phi_{i_2,j_2}} {\boldsymbol{Q}}_{(t-1)N+2}\\
&\xrightarrow{\Phi_{i_3,j_3}} \cdots \xrightarrow{\Phi_{i_N,j_N}} {\boldsymbol{Q}}_{tN} \xrightarrow{\Phi_{i_1,j_1}}
{\boldsymbol{Q}}_{tN+1}\xrightarrow{\Phi_{i_2,j_2}}\cdots.\end{aligned}$$ We define ${\boldsymbol{Q}}^{(t)}={\boldsymbol{Q}}_{tN}$ and $\Phi = \Phi_{i_N,j_N}\circ \cdots\circ\Phi_{i_2,j_2}\circ\Phi_{i_1,j_1}$. It is clear that $\Phi_{i,j}$ is continuous for all $(i,j)\in{\mathscr{C}}$. Therefore, $\Phi$ is also continuous. Now we rewrite [@chen2009tensor Lemma 5.5] as follows.
\[lem-fixed-point\]Let $\Phi:{{\mathscr{O}}_{n}}\rightarrow{{\mathscr{O}}_{n}}$ be a continuous operator and the sequence $\{{\boldsymbol{Q}}^{(t)}\}_{t=1}^{\infty}\subseteq{{\mathscr{O}}_{n}}$ satisfy ${\boldsymbol{Q}}^{(t+1)}=\Phi({\boldsymbol{Q}}^{(t)})$. If a continuous function $f:{{\mathscr{O}}_{n}}\rightarrow{\mathbb{R}}$ satisfies that\
(i) the sequence $\{f({\boldsymbol{Q}}^{(t)})\}_{t=1}^{\infty}$ converges, and\
(ii) if $f(\Phi({\boldsymbol{Q}})) = f({\boldsymbol{Q}})$, then $\Phi({\boldsymbol{Q}})={\boldsymbol{Q}}$,\
then every accumulation point ${\boldsymbol{Q}}_{*}$ of $\{{\boldsymbol{Q}}^{(t)}\}_{t=1}^{\infty}$ satisfies that $\Phi({\boldsymbol{Q}}_{*})={\boldsymbol{Q}}_{*}$.
\[lem-iden-decom\]Suppose that ${\mathscr{P}}$ is equivalent to a ${\mathscr{P}}^{*}\in\Sigma_{0}$ and $$\label{eq-indentity-equality}
{{\boldsymbol{G}}^{(i_1,j_1,\theta_{1})}}{{\boldsymbol{G}}^{(i_2,j_2,\theta_{2})}}\cdots{{\boldsymbol{G}}^{(i_N,j_N,\theta_{N})}} = {\boldsymbol{I}}_{n},$$ where $\theta_k\in[-\pi/2,\pi/2]$. Then ${{\boldsymbol{G}}^{(i_k,j_k,\theta_{k})}} = {\boldsymbol{I}}_{n}$ for $1\leq k\leq N$.
Note that ${\mathscr{P}}$ is equivalent to ${\mathscr{P}}^{*}\in\Sigma_{0}$ and the position changes in \[def-equivalent-order\] preserve . After a finite number of such position changes, there exist $\theta_{k}^{*}\in[-\pi/2,\pi/2]$ such that $$\begin{aligned}
\label{eq-ordered-star}
{{\boldsymbol{G}}^{(i^{*}_1,j^{*}_1,\theta^{*}_{1})}}{{\boldsymbol{G}}^{(i^{*}_2,j^{*}_2,\theta^{*}_{2})}}\cdots{{\boldsymbol{G}}^{(i^{*}_N,j^{*}_N,\theta^{*}_{N})}} = {\boldsymbol{I}}_{n}.\end{aligned}$$ Without loss of generality, we can suppose that $$\begin{aligned}
{\mathscr{P}}^{*} = \{(1,2), (1,3), \cdots, (1,n), (2,3), \cdots, (2,n), (3,4), \cdots, (p,n)\},\end{aligned}$$ as in . Then is ${{\boldsymbol{G}}^{(1,2,\theta^{*}_{1})}}{{\boldsymbol{G}}^{(1,3,\theta^{*}_{2})}}\cdots{{\boldsymbol{G}}^{(p,n,\theta^{*}_{N})}} = {\boldsymbol{I}}_{n}.$ It follows that $${{\boldsymbol{G}}^{(1,3,\theta^{*}_{2})}}\cdots{{\boldsymbol{G}}^{(p,n,\theta^{*}_{N})}} = {{\boldsymbol{G}}^{(1,2,-\theta^{*}_{1})}}.$$ It is not difficult to verify that $({{\boldsymbol{G}}^{(1,3,\theta^{*}_{2})}}\cdots{{\boldsymbol{G}}^{(p,n,\theta^{*}_{N})}})_{12}=0$. Then $\theta^{*}_{1}=0$. Similarly, by ${{\boldsymbol{G}}^{(1,4,\theta^{*}_{3})}}\cdots{{\boldsymbol{G}}^{(p,n,\theta^{*}_{N})}} = {{\boldsymbol{G}}^{(1,3,-\theta^{*}_{2})}},$ we get $\theta^{*}_{2}=0$. After repeating this process $N-1$ times, we complete the proof.
It may be interesting to ask whether \[lem-iden-decom\] holds for any ${\mathscr{P}}\in\Sigma$. In fact, when $p=n=4$, a counterexample is $$\begin{aligned}
{{\boldsymbol{G}}^{(1,2,\pi/2)}}{{\boldsymbol{G}}^{(1,4,\pi/2)}}{{\boldsymbol{G}}^{(2,3,-\pi/2)}}{{\boldsymbol{G}}^{(2,4,-\pi/2)}}{{\boldsymbol{G}}^{(1,3,-\pi/2)}}{{\boldsymbol{G}}^{(3,4,-\pi/2)}}= {\boldsymbol{I}}_{4}.\end{aligned}$$
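
This identity is straightforward to confirm numerically; the sketch below does so under the sign convention $({{\boldsymbol{G}}^{(i,j,\theta)}})_{ij}=-\sin\theta$ and $({{\boldsymbol{G}}^{(i,j,\theta)}})_{ji}=\sin\theta$ for $i<j$, which is the convention consistent with the elementary-rotation formulas used above.

```python
import numpy as np

def givens(n, i, j, theta):
    """Givens rotation (1-based i < j) with G[i,i]=G[j,j]=cos, G[i,j]=-sin, G[j,i]=sin."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i-1, i-1] = G[j-1, j-1] = c
    G[i-1, j-1], G[j-1, i-1] = -s, s
    return G

h = np.pi / 2
P = (givens(4, 1, 2, h) @ givens(4, 1, 4, h) @ givens(4, 2, 3, -h)
     @ givens(4, 2, 4, -h) @ givens(4, 1, 3, -h) @ givens(4, 3, 4, -h))
print(np.allclose(P, np.eye(4)))   # True: the nontrivial product collapses to I_4
```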
In \[al-general\], if ${\mathscr{P}}$ is equivalent to a ${\mathscr{P}}^{*}\in\Sigma_{0}$, then every accumulation point is a stationary point.
Suppose that ${\boldsymbol{Q}}_{*}$ is an accumulation point of $\{{\boldsymbol{Q}}_k, k\in{\mathbb{N}}\}$. Then there exists $1\leq\ell_{*}\leq N$ such that ${\boldsymbol{Q}}_{*}$ is an accumulation point of $\{{\boldsymbol{Q}}_{tN+\ell_{*}}, t\in{\mathbb{N}}\}$.\
Case I: If $\ell_{*}= N$, by \[lem-fixed-point\], we see that $\Phi({\boldsymbol{Q}}_{*})={\boldsymbol{Q}}_{*}$. It follows by \[lem-iden-decom\] that $\Phi_{i,j}({\boldsymbol{Q}}_{*})={\boldsymbol{Q}}_{*}$ for all $(i,j)\in{\mathscr{C}}$. Then ${\boldsymbol{Q}}_{*}$ is a stationary point by \[RiemanGrad-thm\].\
Case II: If $\ell_{*}< N$, we can set the starting point as ${\boldsymbol{Q}}_{\ell_{*}}$. Let ${\mathscr{P}}^{'}\in\Sigma$ be obtained by applying manipulation (ii) of \[def-equivalent-order\] to ${\mathscr{P}}$ successively $\ell_{*}$ times. Let $\Phi^{'}$ be the composition corresponding to ${\mathscr{P}}^{'}$. Similarly to Case I, we see that $\Phi^{'}({\boldsymbol{Q}}_{*})={\boldsymbol{Q}}_{*}$. Note that ${\mathscr{P}}^{'}$ is also equivalent to ${\mathscr{P}}^{*}\in\Sigma_{0}$. By the same reduction as in Case I, we complete the proof.
\(i) In \[al-JLROA\], every accumulation point is a stationary point.\
(ii) In Jacobi CoM2 algorithm, every accumulation point is a stationary point.
Jacobi-G algorithm and its convergence {#sect-Jacobi-G}
======================================
Jacobi-G algorithm
------------------
Different from the cyclic ordering in \[al-JLROA\] or the fixed ordering ${\mathscr{P}}$ in \[al-general\], another pair selection rule of Jacobi-type algorithm based on the Riemannian gradient was proposed in [@IshtAV13:simax]. In this sense, the pair $(i_k,j_k)$ at each iteration is chosen such that $$\label{eq:pair_selection_gradient}
|h_{k}^{'}(0)| = 2|({\boldsymbol{Q}}_{k-1}^{\intercal}{\mathop{{\operator@font Proj} \nabla}f({\boldsymbol{Q}}_{k-1})})_{i_k,j_k}| \ge \varepsilon \|{\mathop{{\operator@font Proj} \nabla}f({\boldsymbol{Q}}_{k-1})}\|,$$ where $0<\varepsilon\leq2/n$ is fixed. By [@IshtAV13:simax Lemma 5.2] and [@LUC2017globally Lemma 3.1], we see that it is always possible to find such a pair if $f$ is differentiable.
(Jacobi-G algorithm)\[alg:jacobi-G\]\
[**Input:**]{} ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{n\times \cdots\times n})$, $1\leq p\leq n$, $0<\varepsilon\leq 2/n$, a starting point ${\boldsymbol{Q}}_{0}$.\
[**Output:**]{} Sequence of iterations $\{{\boldsymbol{Q}}_{k}\}_{k\ge1}$.
- [**For**]{} $k=1,2,\ldots$ until a stopping criterion is satisfied do
- Choose a pair $(i_k,j_k)$ satisfying at ${\boldsymbol{Q}}_{k-1}$.
- Solve $\theta^{*}_{k}$ that maximizes $h_k(\theta)$ defined as in .
- Set ${\boldsymbol{U}}_k {\stackrel{\sf def}{=}}{{\boldsymbol{G}}^{(i_k,j_k,\theta^{*}_k)}}$, and update ${\boldsymbol{Q}}_k = {\boldsymbol{Q}}_{k-1} {\boldsymbol{U}}_k$.
- [**End for**]{}
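
For 3rd-order tensors, the gradient-based pair selection can be sketched as follows. The derivative entries follow from the elementary-rotation formulas of \[exam-3rd-order\], the pair labels are 1-based, and the norm convention used for $\|{\mathop{{\operator@font Proj} \nabla}f({\boldsymbol{Q}}_{k-1})}\|$ is an assumption of this sketch, so the returned flag should be read as indicative only.

```python
import numpy as np

def pair_derivatives(W, p):
    """h'_k(0) for every pair in C, from the entries of W = A(Q_{k-1}) (3rd order):
    6*(W_iii*W_iij - W_jjj*W_ijj) for pairs in C_1, and 6*W_iii*W_iij for pairs in C_2."""
    n = W.shape[0]
    derivs = {}
    for i in range(p):
        for j in range(i + 1, n):
            val = 6.0 * W[i, i, i] * W[i, i, j]
            if j < p:                                   # both indices at most p: pair in C_1
                val -= 6.0 * W[j, j, j] * W[i, j, j]
            derivs[(i + 1, j + 1)] = val                # 1-based pair labels
    return derivs

def select_pair(W, p, eps):
    """Return the pair with the largest |h'(0)| and whether it meets the eps-condition."""
    derivs = pair_derivatives(W, p)
    grad_norm = np.sqrt(sum(v * v for v in derivs.values()) / 2.0)   # assumed norm convention
    pair = max(derivs, key=lambda q: abs(derivs[q]))
    return pair, abs(derivs[pair]) >= eps * grad_norm
```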
\[remark-local-conv\] (i) By [@IshtAV13:simax Theorem 5.4] and [@LUC2017globally Theorem 3.3], we see that every accumulation point of the iterations in \[alg:jacobi-G\] is a stationary point of $f$.\
(ii) Let ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{n\times n\times n})$ and $p=1$. Then \[alg:jacobi-G\] is the same as the Jacobi-type algorithm in [@IshtAV13:simax], which was developed to find the best low multilinear rank approximation of symmetric tensors.
In this section, we mainly prove the following result for \[alg:jacobi-G\]. The proof is postponed to \[subsec-main-proof\].
\[theorem-main-covergence\] Let ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{n\times n\times n})$ with $n\geq 3$. Suppose that $p=2$ and ${\boldsymbol{Q}}_{\ast}$ is an accumulation point of \[alg:jacobi-G\] satisfying $$\begin{aligned}
&{\mathcal{A}}({\boldsymbol{Q}}_{\ast})_{112}^2+{\mathcal{A}}({\boldsymbol{Q}}_{\ast})_{122}^2\neq 0,\label{eq-condition-not-zero}\\
&{\mathcal{A}}({\boldsymbol{Q}}_{\ast})_{333}{\mathcal{A}}({\boldsymbol{Q}}_{\ast})_{444}\cdots{\mathcal{A}}({\boldsymbol{Q}}_{\ast})_{nnn}\neq 0.\label{eq-condition-not-zero-2}\end{aligned}$$ Then either ${\boldsymbol{Q}}_{\ast}$ is the unique limit point, or there exist an infinite number of accumulation points.
Some lemmas
-----------
\[lemma-double-derivative\] Let ${\boldsymbol{\mathcal{W}}}\in\text{symm}({\mathbb{R}}^{2\times 2\times 2})$ and ${\boldsymbol{\mathcal{T}}}={\boldsymbol{\mathcal{W}}}({{\boldsymbol{G}}^{(1,2,\arctan x)}})$ with $x\in\overline{{\mathbb{R}}}$. Define $\tau:\ \overline{{\mathbb{R}}}\rightarrow {\mathbb{R}}^+$ sending $x$ to ${\mathcal{T}}_{111}^2.$ Suppose that ${\mathcal{W}}_{222}\neq0$ and $\tau(0)=\max\limits_{x\in\overline{{\mathbb{R}}}} \tau(x).$ Then\
(i) ${\mathcal{W}}_{111}\neq0$, ${\mathcal{W}}_{112}=0$,\
(ii) ${\mathcal{W}}_{111}(2{\mathcal{W}}_{122}-{\mathcal{W}}_{111})<0$.
\(i) It is clear that $|{\mathcal{W}}_{222}|\leq|{\mathcal{W}}_{111}|$ since $\tau(0)\geq\tau(\pm\infty)$. Then ${\mathcal{W}}_{111}\neq0$. Let $\theta = \arctan x$. We have that $$\frac{d{\mathcal{T}}_{111}}{d\theta}=3{\mathcal{T}}_{112},\ \
\frac{d{\mathcal{T}}_{112}}{d\theta}=2{\mathcal{T}}_{122}-{\mathcal{T}}_{111}$$ by straightforward differentiation [@LUC2017globally Page 10]. It follows that $$\begin{aligned}
\tau'(x)&=2{\mathcal{T}}_{111}\frac{d{\mathcal{T}}_{111}}{d\theta}\frac{d\theta}{dx}=\frac{6{\mathcal{T}}_{111}{\mathcal{T}}_{112}}{1+x^2},\label{eq-tau-derivative}\\
\tau''(x)
&=\frac{6}{(1+x^2)^2}(3{\mathcal{T}}_{112}^2+2{\mathcal{T}}_{111}{\mathcal{T}}_{122}
-{\mathcal{T}}_{111}^2-2{\mathcal{T}}_{111}{\mathcal{T}}_{112}x).\label{eq-tau-2-derivative}\end{aligned}$$ Note that $\tau'(0)=0$. We have ${\mathcal{W}}_{112}=0$ by .\
(ii) Note that $\tau''(0)\leq0$. We have $2{\mathcal{W}}_{111}{\mathcal{W}}_{122}-{\mathcal{W}}_{111}^2\leq0$ by . To complete the proof, we only need to prove that $\tau(0)<\max\limits_{x\in\overline{{\mathbb{R}}}} \tau(x)$ if ${\mathcal{W}}_{111}=1$, ${\mathcal{W}}_{122}=1/2$ and ${\mathcal{W}}_{222}=\beta\neq0$ without loss of generality. In fact, it can be verified that $$\tau(x) = \frac{(1+\frac{3}{2}x^2+\beta x^3)^2}{(1+x^2)^3}$$ in this case, and $$\max\limits_{x\in\overline{{\mathbb{R}}}} \tau(x)\geq\tau(2\beta)=\frac{(1+6\beta^2+8\beta^4)^2}{(1+4\beta^2)^3}>\tau(0)=1.$$
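
The final computation is easy to check symbolically; a short sympy sketch (the printed factorization makes the positivity for $\beta\neq0$ explicit) is:

```python
import sympy as sp

beta, x = sp.symbols('beta x', real=True)
tau = (1 + sp.Rational(3, 2)*x**2 + beta*x**3)**2 / (1 + x**2)**3   # W111=1, W122=1/2, W222=beta

# tau(2*beta) equals the displayed rational function ...
print(sp.simplify(tau.subs(x, 2*beta)
                  - (1 + 6*beta**2 + 8*beta**4)**2 / (1 + 4*beta**2)**3))   # 0
# ... and exceeds tau(0) = 1 whenever beta != 0:
print(sp.factor(sp.expand((1 + 6*beta**2 + 8*beta**4)**2 - (1 + 4*beta**2)**3)))
# -> 4*beta**4*(4*beta**2 + 1)**2
```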
\[re-defi-index\]([@LUC2018 Definition 3.11]) Let ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{n\times n\times n})$ and $1\le i<j\le n$. Suppose that ${\mathcal{A}}_{iii}{\mathcal{A}}_{iij}={\mathcal{A}}_{ijj}{\mathcal{A}}_{jjj}$. The *stationary diagonal ratio*, denoted by $\gamma_{ij}({\boldsymbol{\mathcal{A}}})$, is defined as follows. $$\gamma_{ij}({\boldsymbol{\mathcal{A}}}) {\stackrel{\sf def}{=}}\begin{cases}
0, & \text{if}\ {\boldsymbol{\mathcal{A}}}^{(i,j)}= \mathbf{0};\\
\infty, & \text{if}\ {\mathcal{A}}_{iii}={\mathcal{A}}_{jjj}=0\quad\text{and}\quad{\mathcal{A}}^2_{ijj} +{\mathcal{A}}^2_{iij}\neq0;\\
\end{cases}$$ otherwise, $\gamma_{ij}({\boldsymbol{\mathcal{A}}})$ is the [(unique)]{} number such that $$\begin{pmatrix}{\mathcal{A}}_{ijj} \\ {\mathcal{A}}_{iij} \end{pmatrix} = \gamma_{ij}({\boldsymbol{\mathcal{A}}})\begin{pmatrix}{\mathcal{A}}_{iii}\\{\mathcal{A}}_{jjj}\end{pmatrix}.$$
\[lemma-extreme-state-02\]Let ${\boldsymbol{\mathcal{W}}}\in\text{symm}({\mathbb{R}}^{2\times 2\times 2})$ and ${\boldsymbol{\mathcal{T}}}={\boldsymbol{\mathcal{W}}}({{\boldsymbol{G}}^{(1,2,\arctan x)}})$ with $x\in{\mathbb{R}}$ and $x\neq0$. Suppose that $\|{\mathop{\operator@font diag}\{{\boldsymbol{\mathcal{W}}}\}}\|=\|{\mathop{\operator@font diag}\{{\boldsymbol{\mathcal{T}}}\}}\|\neq0$ and $${\mathcal{W}}_{111}{\mathcal{W}}_{112}={\mathcal{W}}_{122}{\mathcal{W}}_{222},\ \ {\mathcal{T}}_{111}{\mathcal{T}}_{112}={\mathcal{T}}_{122}{\mathcal{T}}_{222}.$$ Then $\gamma_{12}({\boldsymbol{\mathcal{W}}}) = \gamma_{12}({\boldsymbol{\mathcal{T}}}) = -1$ or $1/3$.
Note that $\|{\mathop{\operator@font diag}\{{\boldsymbol{\mathcal{W}}}\}}\|=\|{\mathop{\operator@font diag}\{{\boldsymbol{\mathcal{T}}}\}}\|$ and $\|{\boldsymbol{\mathcal{W}}}\| = \|{\boldsymbol{\mathcal{T}}}\|$. We see that $|\gamma_{12}({\boldsymbol{\mathcal{W}}})| = |\gamma_{12}({\boldsymbol{\mathcal{T}}})|$. Let ${\boldsymbol{\mathcal{T}}} ={\boldsymbol{\mathcal{W}}}({{\boldsymbol{G}}^{(1,2,\arctan x)}})$. Define $$\begin{aligned}
\tau:\ {\mathbb{R}}\longrightarrow {\mathbb{R}}^+,
\ x \longmapsto \|{\mathop{\operator@font diag}\{{\boldsymbol{\mathcal{T}}}\}}\|^2 = {\mathcal{T}}_{111}^2+{\mathcal{T}}_{222}^2.\end{aligned}$$ Then $\tau(x)=\tau(0)$ by the condition. It follows by that $$\label{eq-0-double-derivative}
{\mathcal{W}}_{111}^2+{\mathcal{W}}_{222}^2-3{\mathcal{W}}_{112}^2-3{\mathcal{W}}_{122}^2
-2{\mathcal{W}}_{111}{\mathcal{W}}_{122}-2{\mathcal{W}}_{112}{\mathcal{W}}_{222}=0.$$ After the substitution of ${\mathcal{W}}_{122}=\gamma_{12}({\boldsymbol{\mathcal{W}}}){\mathcal{W}}_{111}$ and ${\mathcal{W}}_{112}=\gamma_{12}({\boldsymbol{\mathcal{W}}}){\mathcal{W}}_{222}$ to , we get that $\gamma_{12}({\boldsymbol{\mathcal{W}}})=-1$ or $1/3$. Note that ${\boldsymbol{\mathcal{W}}}={\boldsymbol{\mathcal{T}}}(({{\boldsymbol{G}}^{(1,2,\arctan x)}})^\intercal)$. We can similarly get that $\gamma_{12}({\boldsymbol{\mathcal{T}}})=-1$ or $1/3$.
\[lemma-3-dimension-p-2\] Let ${\boldsymbol{\mathcal{W}}}\in\text{symm}({\mathbb{R}}^{3\times 3\times 3})$ and ${\boldsymbol{\mathcal{T}}}={\boldsymbol{\mathcal{W}}}({{\boldsymbol{G}}^{(1,3,\arctan x)}})$ with $x\in\overline{{\mathbb{R}}}$ and $x\neq0$. Suppose that $|{\mathcal{W}}_{111}|=|{\mathcal{T}}_{111}|>0$ and $${\mathcal{W}}_{111}{\mathcal{W}}_{112}={\mathcal{W}}_{122}{\mathcal{W}}_{222},\ \
{\mathcal{T}}_{111}{\mathcal{T}}_{112}={\mathcal{T}}_{122}{\mathcal{T}}_{222},\ \
{\mathcal{W}}_{113}={\mathcal{W}}_{223}={\mathcal{T}}_{113}={\mathcal{T}}_{223}=0.$$ Then ${\mathcal{W}}_{112}={\mathcal{W}}_{122}={\mathcal{T}}_{112}={\mathcal{T}}_{122}=0.$
It can be verified that $$-\frac{x}{\sqrt{1+x^2}}{\mathcal{W}}_{122} = {\mathcal{T}}_{223} = 0,$$ and thus ${\mathcal{W}}_{122}=0$. It follows by the condition that ${\mathcal{W}}_{112}=0$. Note that ${\boldsymbol{\mathcal{W}}}={\boldsymbol{\mathcal{T}}}(({{\boldsymbol{G}}^{(1,3,\arctan x)}})^{\intercal}).$ We can similarly get that ${\mathcal{T}}_{112}={\mathcal{T}}_{122}=0$.
Proof of \[theorem-main-covergence\] {#subsec-main-proof}
------------------------------------
\[lemma-weak-inequality\] Let ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{n\times n\times n})$. Let $h_{k}(\theta)$ be as in for $k\in{\mathbb{N}}$. Then there exists $\delta>0$ such that $$\label{eq-lemma-weak-ineuqality}
h_{k}(\theta_k^{*})-h_{k}(0)\geq \delta |h_{k}'(0)|^2$$ for any $k\in{\mathbb{N}}$ with $(i_k,j_k)\in \mathcal{C}_{2}$ in \[alg:jacobi-G\].
Let ${\boldsymbol{\mathcal{W}}} = {\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}}_{k-1})$ and ${\boldsymbol{\mathcal{T}}} = {\boldsymbol{\mathcal{W}}}({{\boldsymbol{G}}^{(i_k,j_k,\theta)}})$. Let $(i,j)=(i_k,j_k)$. It is clear that ${\mathcal{T}}_{iii}(\theta)$ is a trigonometric polynomial with a finite degree $n_{0}$ for all the iterations in $\mathcal{C}_{2}$. By [@bell2015bernstein Theorem 1], we see that $${\mathcal{T}}_{iii}'(0)^2 \leq n_{0}^2(\|{\mathcal{T}}_{iii}\|_{\infty}^2-{\mathcal{T}}_{iii}^2(0)) = n_{0}^2(h_{k}(\theta_k^{*})-h_{k}(0)),$$ when $\theta=0$. Note that $h_{k}'(0)=2{\mathcal{T}}_{iii}(0){\mathcal{T}}_{iii}'(0)$. Let $M>0$ such that $|4 n_{0}^2{\mathcal{T}}_{iii}^2(0)|<M$ for all the iterations in $\mathcal{C}_{2}$. Then $$|h_{k}'(0)|^2 \leq 4 n_{0}^2{\mathcal{T}}^2_{iii}(0)(h_{k}(\theta_k^{*})-h_{k}(0))
<M(h_{k}(\theta_k^{*})-h_{k}(0)).$$ The proof is completed if we set $\delta=1/M$.
Let ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{n\times n\times n\times n})$ be of 4th order. By similar methods, we can also prove for pairs in $\mathcal{C}_{1}$, or pairs in $\mathcal{C}_{2}$.
Now we need a result in [@LUC2017globally], which is the direct consequence of [@SU15:pro Theorem 2.3].
\[theorem-convegence-general\]([[@LUC2017globally Corollary 5.4]]{}) Let $f$ be a real analytic function from ${{\mathscr{O}}_{n}}$ to ${\mathbb{R}}$. Suppose that $\{{\boldsymbol{Q}}_k:k\in{\mathbb{N}}\}\subseteq{{\mathscr{O}}_{n}}$ and, for large enough k,\
(i) there exists $\sigma>0$ such that $$\label{condition-coro-KL}
|{f}({\boldsymbol{Q}}_{k})-{f}({\boldsymbol{Q}}_{k-1})|\geq \sigma\|{\mathop{{\operator@font Proj} \nabla}f({\boldsymbol{Q}}_{k-1})}\| \|{\boldsymbol{Q}}_{k}-{\boldsymbol{Q}}_{k-1}\|,$$ (ii) ${\mathop{{\operator@font Proj} \nabla}f({\boldsymbol{Q}}_{k-1})}=0$ implies that ${\boldsymbol{Q}}_{k}={\boldsymbol{Q}}_{k-1}$.\
Then the iterations $\{{\boldsymbol{Q}}_k:k\in{\mathbb{N}}\}$ converge to a point ${\boldsymbol{Q}}_*\in{{\mathscr{O}}_{n}}$.
Assume that there exist a finite number of accumulation points, denoted by ${\boldsymbol{Q}}^{(\ell)}(1\leq\ell\leq N)$. Then any accumulation point is a stationary point by \[remark-local-conv\](i). In other words, it holds that $\Lambda({\boldsymbol{Q}}^{(\ell)})=0$ for all $1\leq\ell\leq N$ by . Let ${\boldsymbol{Q}}_{\ast}={\boldsymbol{Q}}^{(1)}$. Now we prove that ${\boldsymbol{Q}}_{\ast}$ is the unique limit point.\
**Step 1.** We first prove that all the accumulation points satisfy and if ${\boldsymbol{Q}}_{\ast}$ satisfies them. Note that the number of accumulation points is finite. We can see that any two different accumulation points can be connected by a finite combination of the following two possible paths.\
(a) Take the pair $(1,2)\in\mathcal{C}_{1}$. If $\{x_k^{\ast},(i_k,j_k)=(1,2)\}$ is finite or converges to 0, this path doesn’t appear and we skip it. Otherwise, this set has a nonzero accumulation point $\zeta$ and a subsequence converges to it. We assume that $$\{x_k^{\ast},(i_k,j_k)=(1,2)\}\rightarrow\zeta\neq0$$ without loss of generality. Note that $\{{\boldsymbol{Q}}_{k-1},(i_k,j_k)=(1,2)\}$ has an accumulation point. We assume that $$\{{\boldsymbol{Q}}_{k-1},(i_k,j_k)=(1,2)\}\rightarrow{\boldsymbol{Q}}^{(\ell_1)}$$ without loss of generality. Then ${\boldsymbol{Q}}^{(\ell_2)}={\boldsymbol{Q}}^{(\ell_1)}{\boldsymbol{G}}^{(1,2,\arctan\zeta)}$ is another different accumulation point. It is clear that ${\mathcal{A}}({\boldsymbol{Q}}^{(\ell_1)})_{iii}={\mathcal{A}}({\boldsymbol{Q}}^{(\ell_2)})_{iii}$ for $3\leq i\leq n$. Note that ${\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}}^{(\ell_1)})^{(1,2)}$ and ${\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}}^{(\ell_2)})^{(1,2)}$ satisfy the conditions in \[lemma-extreme-state-02\]. We see that $${\mathcal{A}}({\boldsymbol{Q}}^{(\ell_1)})_{112}^2+{\mathcal{A}}({\boldsymbol{Q}}^{(\ell_1)})_{122}^2\neq0,\
{\mathcal{A}}({\boldsymbol{Q}}^{(\ell_2)})_{112}^2+{\mathcal{A}}({\boldsymbol{Q}}^{(\ell_2)})_{122}^2\neq0.$$ (b) Take the pair $(1,3)\in\mathcal{C}_{2}$ for example. Other pairs in $\mathcal{C}_{2}$ are similar. If $\{x_k^{\ast},(i_k,j_k)=(1,3)\}$ is finite or converges to 0, this path doesn’t appear and we skip it. Otherwise, this set has a nonzero accumulation point $\zeta$ and a subsequence converges to it. We assume that $$\{x_k^{\ast},(i_k,j_k)=(1,3)\}\rightarrow\zeta\neq0$$ without loss of generality. Note that $\{{\boldsymbol{Q}}_{k-1},(i_k,j_k)=(1,3)\}$ has an accumulation point. We assume that $$\{{\boldsymbol{Q}}_{k-1},(i_k,j_k)=(1,3)\}\rightarrow{\boldsymbol{Q}}^{(\ell_1)}$$ without loss of generality. Then ${\boldsymbol{Q}}^{(\ell_2)}={\boldsymbol{Q}}^{(\ell_1)}{\boldsymbol{G}}^{(1,3,\arctan\zeta)}$ is another different accumulation point. Note that ${\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}}^{(\ell_1)})^{(1,2,3)}$ and ${\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}}^{(\ell_2)})^{(1,2,3)}$ satisfy the conditions in \[lemma-3-dimension-p-2\]. We see that $${\mathcal{A}}({\boldsymbol{Q}}^{(\ell_1)})_{112}={\mathcal{A}}({\boldsymbol{Q}}^{(\ell_1)})_{122}=
{\mathcal{A}}({\boldsymbol{Q}}^{(\ell_2)})_{112}={\mathcal{A}}({\boldsymbol{Q}}^{(\ell_2)})_{122}=0.$$
Since ${\boldsymbol{Q}}_{\ast}$ satisfies , we see that path (a) is the only possible path. Then all the accumulation points satisfy . Note that ${\boldsymbol{Q}}_{\ast}$ satisfies and ${\mathcal{A}}({\boldsymbol{Q}}^{(\ell_1)})_{iii}={\mathcal{A}}({\boldsymbol{Q}}^{(\ell_2)})_{iii}$ for $3\leq i\leq n$ in path (a). All the accumulation points satisfy .\
**Step 2.** Since path (b) in Step 1 doesn’t appear, we get that $$\label{eq-proof-tends-0}
\{x_k^{\ast},(i_k,j_k)\in\mathcal{C}_{2}\}\rightarrow 0$$ in \[alg:jacobi-G\]. Let $\mathcal{N}({\boldsymbol{Q}}_{\ast},\eta)$ be the neighborhood of ${\boldsymbol{Q}}_{\ast}={\boldsymbol{Q}}^{(1)}$ in ${{\mathscr{O}}_{n}}$ with radius $\eta>0$ such that there exist no other accumulation points in this neighborhood. If pair $(i,j)\in\mathcal{C}_2$ satisfies that $$\label{eq-condition-infinite}
\{{\boldsymbol{Q}}_{k-1}\in\mathcal{N}({\boldsymbol{Q}}_{\ast},\eta),(i_k,j_k)=(i,j)\}\
\text{is infinite},$$ then ${\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}}_{\ast})^{(i,j)}$ satisfies the conditions of \[lemma-double-derivative\] by condition . Then, by \[lemma-double-derivative\](ii), ${\mathcal{A}}({\boldsymbol{Q}}_{\ast})_{iii}({\mathcal{A}}({\boldsymbol{Q}}_{\ast})_{iii}-2{\mathcal{A}}({\boldsymbol{Q}}_{\ast})_{ijj})\neq0$. Let $$\rho_1{\stackrel{\sf def}{=}}\min|{\mathcal{A}}({\boldsymbol{Q}}_{\ast})_{iii}({\mathcal{A}}({\boldsymbol{Q}}_{\ast})_{iii}-2{\mathcal{A}}({\boldsymbol{Q}}_{\ast})_{ijj}) |$$
\rho{\stackrel{\sf def}{=}}\min\rho_\ell>0.$$ **Step 3.** Now we show that there exists $\kappa>0$ such that $$\begin{aligned}
\label{eq-inequality-C2}
|h_{k}(\theta_k^{*})-h_{k}(0)|\geq \kappa |h_{k}'(0)||\theta_k^{*}|\end{aligned}$$ for all $(i_k,j_k)\in\mathcal{C}_{2}$. Let ${\boldsymbol{\mathcal{W}}} = {\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}}_{k-1})$. Denote $(i,j) = (i_{k},j_{k})$. Note that $|x_k^{\ast}|<+\infty$ when $k$ is large enough by . Then by and , we have that $$\begin{aligned}
\frac{h_{k}'(0)}{x_k^{\ast}}=
\frac{6{\mathcal{W}}_{iii}{\mathcal{W}}_{iij}}{x_k^{\ast}}
=-6{\mathcal{W}}_{iii}[(2{\mathcal{W}}_{ijj}-{\mathcal{W}}_{iii})+({\mathcal{W}}_{jjj}-2{\mathcal{W}}_{iij})x_k^{\ast}-{\mathcal{W}}_{ijj}{x_k^{\ast}}^2]\end{aligned}$$ have accumulation points in the set $$\{-6{\mathcal{A}}({\boldsymbol{Q}}^{(\ell)})_{iii}(2{\mathcal{A}}({\boldsymbol{Q}}^{(\ell)})_{ijj}
-{\mathcal{A}}({\boldsymbol{Q}}^{(\ell)})_{iii}),\ \text{pair $(i,j)$ satisfies \eqref{eq-condition-infinite}}, \ 1\leq\ell\leq N\}$$ when $k\in{\mathbb{N}}$ with $(i_{k},j_{k})\in\mathcal{C}_2$. It follows from that there exists $\upsilon>0$ such that $|h_{k}'(0)|\geq\upsilon|x_{k}^{\ast}|$ when $k$ is large enough with $(i_{k},j_{k})\in\mathcal{C}_2$. Then we get by \[lemma-weak-inequality\].\
**Step 4.** If $\{x_k^{\ast},(i_k,j_k)=(1,2)\in\mathcal{C}_1\}$ is finite, we skip it. Otherwise, by [@LUC2017globally (27)], we know that $$\label{eq:lemma-G-inequality-3}
|h_{k}(\theta_k^{*})-h_{k}(0)| = |\frac{x_k^{*}h^{'}_{k}(0)}{2(1-{x_k^{*}}^2)}| \geq \frac{1}{2}|h_{k}'(0)||\theta_k^{*}|$$ for all $(i_k,j_k)\in\mathcal{C}_{1}$. Let $\omega=\min\{\kappa,1/2\}>0$. By and , we get that $$|h_{k}(\theta_k^{*})-h_{k}(0)| \geq \omega|h_{k}'(0)||\theta_{\ast}| \geq \frac{\sqrt{2}}{2}\omega\varepsilon\|{\mathop{{\operator@font Proj} \nabla}{f}({\boldsymbol{Q}}_{k-1})}\|\|{\boldsymbol{Q}}_{k}-{\boldsymbol{Q}}_{k-1}\|,$$ for all $k\in{\mathbb{N}}$. Then ${\boldsymbol{Q}}_{\ast}$ is the unique limit point by \[theorem-convegence-general\].
Numerical experiments {#sect-experiment}
=====================
In this section, we carry out some experiments to compare the performance of the JLROA algorithm with the LROAT and SLROAT algorithms in [@chen2009tensor], and the Trust region algorithm from the *Manopt Toolbox* in [@JMLR:v15:boumal14a]. When $p=1$, LROAT and SLROAT are exactly the HOPM and SHOPM algorithms in [@Lathauwer00:rank-1approximation; @kofidis2002best], respectively. We use the cyclic ordering of the JLROA algorithm in \[al-JLROA\] for simplicity, except in \[example-4\] and \[example-5\]. The LROAT and SLROAT algorithms are both initialized via HOSVD [@Lathauwer00:TensorSVD], because we find they generally have better performance in this case.
\[example-1\]We randomly generate 1000 tensors in $\text{symm}({\mathbb{R}}^{10\times 10\times 10})$, and run JLROA and SLROAT algorithms for them. Denote by $\textsc{JVal}$ and $\textsc{SVal}$ the final value of obtained by JLROA and SLROAT, respectively. Set the following notations.\
(i) $\textsc{NumG}:$ the number of cases that $\textsc{JVal}$ is greater than $\textsc{SVal}$;\
(ii) $\textsc{NumS}:$ the number of cases that $\textsc{JVal}$ is smaller than $\textsc{SVal}$;\
(iii) $\textsc{NumE}:$ the number of cases that $\textsc{JVal}$ is equal[^5] to $\textsc{SVal}$;\
(iv) $\textsc{RatioG}:$ the average of $\textsc{JVal}/\textsc{SVal}$ when $\textsc{JVal}$ is greater than $\textsc{SVal}$;\
(v) $\textsc{RatioS}:$ the average of $\textsc{JVal}/\textsc{SVal}$ when $\textsc{JVal}$ is smaller than $\textsc{SVal}$.\
The results are shown in \[table-example-1\] and \[figure-example-1\]. It can be seen that the JLROA algorithm has better performance when $p>2$. The two algorithms always obtain the same result when $p=1$.
\[example-2\] Let ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{3\times 3\times 3\times 3})$ such that $$\begin{aligned}
&{\mathcal{A}}_{1111} = 0.2883,\ \
{\mathcal{A}}_{1122} = -0.2485,\ \
{\mathcal{A}}_{1222} = 0.2972,\ \
{\mathcal{A}}_{1333} = -0.3619,\\
&{\mathcal{A}}_{2233} = 0.2127,\ \
{\mathcal{A}}_{1112} = -0.0031,\ \
{\mathcal{A}}_{1123} = -0.2939,\ \
{\mathcal{A}}_{1223} = 0.1862,\\
&{\mathcal{A}}_{2222} = 0.1241,\ \
{\mathcal{A}}_{2333} = 0.2727,\ \
{\mathcal{A}}_{1113} = 0.1973,\ \
{\mathcal{A}}_{1133} = 0.3847,\\
&{\mathcal{A}}_{1233} = 0.0919,\ \
{\mathcal{A}}_{2223} = -0.3420,\ \
{\mathcal{A}}_{3333} = -0.3054,\ \\end{aligned}$$ as in [@kofidis2002best Example 1] and [@chen2009tensor Section 6.1]. It has been shown in [@kofidis2002best; @chen2009tensor] that SHOPM ($p=1$) and SLROAT ($p=2$) fail to converge for ${\boldsymbol{\mathcal{A}}}$. We now examine the convergence behaviour of the JLROA algorithm. The results of the JLROA, SLROAT and LROAT algorithms are shown in \[figure-example-2\]. It can be seen that the performance of JLROA is always better than or equal to that of SLROAT and LROAT.
\[example-3\] We randomly generate 1000 tensors in $\text{symm}({\mathbb{R}}^{10\times 10\times 10})$, and run JLROA and Trust region algorithms for them. Denote by $\textsc{JVal}$ and $\textsc{TVal}$ the final value of obtained by JLROA and Trust region, respectively. Set the following notations.\
(i) $\textsc{NumG}:$ the number of cases that $\textsc{JVal}$ is greater than $\textsc{TVal}$;\
(ii) $\textsc{NumS}:$ the number of cases that $\textsc{JVal}$ is smaller than $\textsc{TVal}$;\
(iii) $\textsc{NumE}:$ the number of cases that $\textsc{JVal}$ is equal[^6] to $\textsc{TVal}$;\
(iv) $\textsc{RatioG}:$ the average of $\textsc{JVal}/\textsc{TVal}$ when $\textsc{JVal}$ is greater than $\textsc{TVal}$;\
(v) $\textsc{RatioS}:$ the average of $\textsc{JVal}/\textsc{TVal}$ when $\textsc{JVal}$ is smaller than $\textsc{TVal}$.\
The results are shown in \[table-example-3\] and \[figure-example-3\]. It can be seen that $\textsc{RatioG}$ is very large when $p=1,2$, which means that the Trust region algorithm is not as stable as JLROA in these two cases. Correspondingly, the Trust region algorithm generally has better performance when $p>2$.
\[example-4\]In this example, we show the influence of the choice of ${\mathscr{P}}\in\Sigma$ on the final results. Fix $1\leq p\leq 10$ and randomly generate a ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{10\times 10\times 10})$. We first choose the cyclic ordering , and then randomly choose ${\mathscr{P}}\in\Sigma$ $200$ times to run \[al-general\]. The results are shown in \[figure-example-4\]. It can be seen that all the ${\mathscr{P}}\in\Sigma$ yield almost the same result when $p=1$. However, when $p=2$, these ${\mathscr{P}}\in\Sigma$ are separated into different groups corresponding to different results. It may be interesting to study how to determine the ${\mathscr{P}}\in\Sigma$ with the best result.
\[example-5\]Let ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{10\times 10\times 10})$ and $p=2$. Suppose that ${\boldsymbol{Q}}_{\ast}$ is an accumulation point of \[alg:jacobi-G\]. To check the frequency of conditions and being satisfied, we define $$\omega = \min\{|{\mathcal{W}}_{112}|, |{\mathcal{W}}_{122}|, |{\mathcal{W}}_{333}|, \cdots, |{\mathcal{W}}_{nnn}|\},$$ where ${\boldsymbol{\mathcal{W}}}={\boldsymbol{\mathcal{A}}}({\boldsymbol{Q}}_{\ast})$. We choose the iteration ${\boldsymbol{Q}}_{K}$ as the approximation of an accumulation point when $K$ is large enough ($K=500$ in this experiment). We randomly generate ${\boldsymbol{\mathcal{A}}}\in\text{symm}({\mathbb{R}}^{10\times 10\times 10})$ for $1000$ times, and run \[alg:jacobi-G\] to see the frequency that $\omega>0$ (greater than 0.0001). The results are shown in \[figure-example-5\], where $\omega>0$ for $991$ times. It can be seen that the conditions and are satisfied in most cases.
![Results of \[example-5\]. Blue points mean that $\omega>0$, while red points mean that $\omega=0$.[]{data-label="figure-example-5"}](figures/Example-5-time-1){width="70.00000%"}
Acknowledgements {#acknowledgements .unnumbered}
----------------
The authors would like to thank Xiao Chen for his valuable discussions in \[sec-weak-conver\].
[^1]: This work was supported in part by the National Natural Science Foundation of China (No. 11601371, 11701327).
[^2]: any accumulation point is a stationary point.
[^3]: the iterations converge to a unique limit point for any starting point.
[^4]: each loop contains $N$ successive iterations.
[^5]: the difference is smaller than 0.0001.
[^6]: the difference is smaller than 0.0001.
---
abstract: 'The temperature-dependence of the in-plane optical properties of (CaFe$_{1-x}$Pt$_{x}$As)$_{10}$Pt$_{3}$As$_{8}$ have been investigated for the undoped ($x=0$) parent compound, and the optimally-doped ($x= 0.1$) superconducting material ($T_{c}\simeq 12$ K) over a wide frequency range. The optical conductivity has been described using two free-carrier (Drude) components, in combination with oscillators to describe interband transitions. At room temperature, the parent compound may be described by a strong, broad Drude term, as well as a narrow, weaker Drude component. Below the structural and magnetic transitions at $\simeq 96$ and 83 K, respectively, strength is transferred from the free-carrier components into a bound excitation at $\simeq 1000$ cm$^{-1}$, and the material exhibits semiconducting-like behavior. In the optimally-doped sample, at room temperature the optical properties are again described by narrow and broad Drude responses comparable to the parent compound; however, below $T^\ast \simeq 100$ K, strength from the narrow Drude is transferred into a newly-emergent low-energy peak at $\simeq 120$ cm$^{-1}$, which arises from a localization process, resulting in semiconducting-like behavior. Interestingly, below $T_{c}$, this peak also contributes to the superfluid weight, indicating that some localized electrons condense into Cooper pairs; this observation may provide insight into the pairing mechanism in iron-based superconductors.'
author:
- Run Yang
- Yaomin Dai
- Jia Yu
- Qiangtao Sui
- Yongqing Cai
- Zhian Ren
- Jungseek Hwang
- Hong Xiao
- Xingjiang Zhou
- Xianggang Qiu
- 'Christopher C. Homes'
date: '; version 6'
title: 'Unravelling the mechanism of the semiconducting-like behavior and its relation to superconductivity in (CaFe$_\mathbf{1-x}$Pt$_\mathbf{x}$As)$_\mathbf{10}$Pt$_\mathbf{3}$As$_\mathbf{8}$'
---
Introduction
============
The discovery of iron-based superconductors has prompted an intensive investigation of this class of materials in the hope of discovering new compounds with high superconducting critical temperatures ($T_c$’s) [@Johnston2010; @Paglione2010; @Si2016]. In both iron-based superconductors (FeSCs) and cuprates, a variety of unusual normal-state phenomena are observed that are believed to have an important connection to the superconductivity (SC) [@Lee2006; @Wang2016]. In the optimally-doped cuprates the resistivity often shows a peculiar non-saturating linear temperature dependence that at high temperature may violate the Mott-Ioffe-Regel limit [@Hussey04], leading it to be described as a marginal Fermi liquid [@Varma89]. A pseudogap develops in the underdoped regime well above the critical temperature ($T_{c}$) [@Timusk1999], which has been interpreted as evidence for preformed Cooper pairs without global phase coherence [@Yang2008; @Wang2005; @Xu2000]; on the other hand, competing orders, such as charge-ordered states, have also been proposed as the origin of this feature [@Valla2006]. In FeSCs, one of the most interesting phenomena in the normal state is the emergence of nematicity, or rotational symmetry breaking of the electronic states [@Fernandes2012]; however, its origin and relation to the superconductivity in these materials is still uncertain [@Wang2016].
The (CaFe$_{1-x}$Pt$_{x}$As)$_{10}$Pt$_{3}$As$_{8}$ (Ca 10-3-8) materials exhibit some rather interesting properties. The unit cell of the undoped parent compound is shown in Fig. \[fig:resis\](a); the conducting Fe–As layers are separated by Ca atoms and insulating Pt$_3$As$_8$ layers, resulting in an inter-layer distance as large as 10.6 [Å]{}. Transport measurements indicate this material is highly two dimensional (2D) [@Ni2011]. The phase diagram \[Fig. 4(a) of Ref. \] indicates that the parent compound is an antiferromagnetic (AFM) semiconductor; the resistivity and other experimental probes [@Cho2012; @Zhou2013; @Sturzer2013; @Sapkota2014] indicate that this material undergoes structural and magnetic transitions at $T_s \simeq 96$ K and $T_N \simeq 83$ K, respectively. Through the application of pressure, or by doping Pt on the Fe site (electron doping), the AFM order is suppressed, and superconductivity emerges with a maximum $T_c\simeq 12$ K in the optimally-doped material [@Xiang2012]. However, the semiconducting-like behavior still remains (resistivity increases upon cooling) above the AFM and SC dome [@Gao2014; @Ni2011], which is reminiscent of the pseudogap-like behavior in cuprates [@Yang2008]. Investigating the origin of such distinct behavior and how it evolves into a superconductor may provide insight into the pairing mechanism in iron-based superconductors.
![(a) The triclinic unit cell of (CaFe$_{1-x}$Pt$_{x}$As)$_{10}\-$Pt$_{3}$As$_{8}$ showing the Fe–As layers separated by Ca and Pt$_3$As$_8$ sheets. (b) The in-plane resistivity of the $x=0$ (undoped) sample showing a semiconducting-like response at low temperature. Inset: $d\rho_{ab}/dT$ showing local minima at $T_s$ and $T_N$. (c) The resistivity for the $x=0.1$ (optimally-doped) sample, again showing a semiconducting response at low temperature just above $T_c$. The circles denoting $\sigma_{1}(\omega\rightarrow 0) \equiv \sigma_{dc}$ are in good agreement with the transport data.[]{data-label="fig:resis"}](Figure1.pdf){width="2.7in"}
In this work the temperature dependence of the in-plane optical properties of (CaFe$_{1-x}$Pt$_{x}$As)$_{10}$Pt$_{3}$As$_{8}$ for undoped ($x=0$) and optimally-doped ($x=0.1$) samples is investigated. The real part of the optical conductivity is particularly useful as it yields information about the free-carrier response and interband transitions; in the zero-frequency limit, the dc conductivity is recovered, allowing comparisons to be made with transport data. The optical properties suggest that the semiconducting-like behavior in the parent compound likely originates from AFM order that leads to a reconstruction of the Fermi surface and a decrease in the carrier concentration. In the optimally-doped sample, torque magnetometry indicates superconducting fluctuations well above $T_c$, suggesting that this material may not be homogeneous. Along with the semiconducting-like behavior, at low temperature we observe the emergence of a peak in the optical conductivity in the far-infrared region, which is attributed to localization driven by either scattering from impurities or AFM spin fluctuations. Interestingly, below $T_{c}$, this peak also contributes to the superfluid weight, indicating that there is likely a relationship between magnetism and superconductivity.
Experiment
==========
High-quality single crystals of (CaFe$_{1-x}$Pt$_{x}$As)$_{10}\-$Pt$_{3}$As$_{8}$ with good cleavage planes (001) were synthesized using self-flux method [@Ni2013]. The temperature dependence of the in-plane resistivity for the undoped and optimally-doped materials is shown in Figs. \[fig:resis\](b) and \[fig:resis\](c), respectively. At room temperature, the resistivity of both materials is comparable. In the undoped material, $\rho_{ab}$ exhibits relatively little temperature dependence until $\simeq 150$ K, below which it exhibits a semiconducting response, increasing gradually, with inflection points at $T_N\simeq 83$ and $T_s\simeq 96$ K, shown in the inset of Fig. \[fig:resis\](b). The resistivity of the doped material initially decreases as the temperature is reduced and then undergoes a slight upturn resulting in a broad minimum at about 100 K; below $T_c \simeq 12\,$K the resistivity abruptly drops to zero. The reflectance from freshly-cleaved surfaces has been measured over a wide temperature ($\sim 5$ to 300 K) and frequency range ($\sim 2$ meV to about 5 eV) at a near-normal angle of incidence for light polarized in the *a-b* planes using an *in situ* evaporation technique [@Homes93]. The complex optical properties have been determined from a Kramers-Kronig analysis of the reflectivity. The reflectivity is shown in Supplementary Figs. S1(a) and S1(b); the details of the Kramers-Kronig analysis are described in the Supplementary Material [@Suplmt]. Magnetic torque measurements have also been performed on the optimally-doped material.
![The temperature dependence of the real part of the optical conductivity in the *a-b* planes of (CaFe$_{1-x}$Pt$_{x}$As)$_{10}\-$Pt$_{3}$As$_{8}$ for (a) $x=0$, and (b) $x=0.1$; the insets show the optical conductivity at several temperatures over a wide frequency range.[]{data-label="fig:sigma"}](Figure2.pdf){width="2.7in"}
Results and Discussion
======================
The temperature dependence of the real part of the in-plane optical conductivity \[$\sigma_{1}(\omega)$\] is shown in the infrared region for the undoped and doped compounds in Figs. \[fig:sigma\](a) and \[fig:sigma\](b), respectively; the conductivity is shown over a much broader frequency range in the insets. Several sharp features in the conductivity are observed at $\simeq 150$ and 250 cm$^{-1}$, which are attributed to infrared-active lattice vibrations. The extrapolated values for the dc resistivity \[$\sigma_{1}(\omega\rightarrow 0) \equiv \sigma_{dc}$, circles in Figs. \[fig:resis\](b) and \[fig:resis\](c)\] are essentially identical to the resistivity, indicating the excellent agreement between optics and transport measurements. The FeSCs are multiband materials; a minimal description consists of two electronic subsystems using the so-called two-Drude model [@Wu2010] with the complex dielectric function $\tilde\epsilon=\epsilon_1+i\epsilon_2$, $$\tilde\epsilon(\omega) = \epsilon_\infty - \sum_{j=1}^2 {{\omega_{p,D;j}^2}\over{\omega^2+i\omega/\tau_{D,j}}}
+ \sum_k {{\Omega_k^2}\over{\omega_k^2 - \omega^2 - i\omega\gamma_k}},
\label{eq:eps}$$ where $\epsilon_\infty$ is the real part at high frequency. In the first sum $\omega_{p,D;j}^2 = 4\pi n_je^2/m^\ast_j$ and $1/\tau_{D,j}$ are the square of the plasma frequency and scattering rate for the delocalized (Drude) carriers in the $j$th band, respectively, and $n_j$ and $m^\ast_j$ are the carrier concentration and effective mass. In the second summation, $\omega_k$, $\gamma_k$ and $\Omega_k$ are the position, width, and strength of the $k$th vibration or bound excitation. The complex conductivity is $\tilde\sigma(\omega) = \sigma_1 +i\sigma_2 = -2\pi i \omega [\tilde\epsilon(\omega) -
\epsilon_\infty ]/Z_0$ (in units of $\Omega^{-1}$cm$^{-1}$); $Z_0\simeq 377$ $\Omega$ is the impedance of free space. The model is fit to the real and imaginary parts of the optical conductivity simultaneously using a non-linear least-squares technique.
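
For reference, a minimal Python sketch of the model conductivity defined above is given below; the parameter values are illustrative order-of-magnitude numbers rather than the fit results, and the function names are ours.

```python
import numpy as np

Z0 = 376.73  # impedance of free space (ohms)

def model_sigma(w, drude, lorentz, eps_inf=1.0):
    """Complex conductivity (ohm^-1 cm^-1) of the two-Drude + Lorentz model.
    All frequencies, plasma frequencies and widths are in cm^-1.
    drude   = [(wp, 1/tau), ...]   (two entries for the two-Drude model)
    lorentz = [(omega_k, gamma_k, Omega_k), ...]"""
    w = np.asarray(w, dtype=float)
    eps = np.full_like(w, eps_inf, dtype=complex)
    for wp, g in drude:
        eps -= wp**2 / (w**2 + 1j * w * g)
    for w0, g, W in lorentz:
        eps += W**2 / (w0**2 - w**2 - 1j * w * g)
    return -2j * np.pi * w * (eps - eps_inf) / Z0

# illustrative parameters only (not the fitted values)
w = np.linspace(5, 2000, 400)
sig = model_sigma(w, drude=[(5000, 100), (20000, 1500)],
                  lorentz=[(1000, 800, 8000)])
print(sig.real[:3])   # sigma_1(omega)
```

In an actual fit, the real and imaginary parts returned by such a routine would be compared with the measured $\sigma_1(\omega)$ and $\sigma_2(\omega)$ simultaneously by a non-linear least-squares solver.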
Parent compound ($\mathbf{x=0}$)
--------------------------------
At room temperature the optical conductivity in Fig. \[fig:sigma\](a) has a metallic character, with a Drude-like free carrier response superimposed on a flat, nearly incoherent background. The results of the two-Drude model fit at 150 K is shown in Fig. \[fig:parent\](a); the conductivity may be described by a free-carrier response consisting of a narrow Drude term that reflects the coherent response, and a much stronger, broad Drude component that corresponds to a nearly incoherent background; several Lorentzian oscillators are included to describe the bound excitations (interband transitions) at higher energies \[Fig. \[fig:sigma\](a)\]. As the temperature is lowered, the low-frequency conductivity is suppressed and a broad peak develops in the mid-infrared region. The results of the model fit at 10 K are shown in Fig. \[fig:parent\](b); while the broad Drude has been reduced slightly in strength, the narrow Drude is strongly suppressed and a strong peak centered at $\simeq 1000$ cm$^{-1}$ has emerged.
The temperature dependencies of the scattering rates and the plasma frequencies of the narrow and broad Drude terms are shown in Figs. \[fig:parent\](c) and \[fig:parent\](d), respectively. As the temperature is reduced, the scattering rate for the broad Drude term shows a weak temperature dependence; however, for $T\lesssim T_s, T_N$, it undergoes a dramatic reduction from about $\simeq 1600$ to $\simeq 1000$ cm$^{-1}$. In contrast, the scattering rate for the narrow Drude term has a strong temperature dependence, decreasing from $\simeq 660$ cm$^{-1}$ at room temperature to $\simeq 180$ cm$^{-1}$ at 100 K, below which it decreases rapidly to $\simeq 80$ cm$^{-1}$ at low temperature. The temperature dependence of the plasma frequencies tells a similar story. As the temperature is reduced, the plasma frequency of the broad Drude term is essentially constant; however, for $T\lesssim T_s, T_N$ it undergoes a dramatic reduction, losing roughly 50% of its strength ($\propto \omega_{p,D;i}^2$). At the same time, a broad peak of roughly equal strength appears in the mid-infrared region; as the temperature is further reduced, the strength of both features remains unchanged. The narrow Drude initially shows little temperature dependence, but below about 200 K it begins to decrease uniformly with temperature, showing only a slight discontinuity at the structural and magnetic transitions, ultimately losing over 90% of its original strength at low temperature. Even though the coherent component is losing strength with decreasing temperature, the commensurate decrease in the scattering rate results in a slight decrease in the resistivity \[Fig. \[fig:resis\](b)\], until $\simeq 150\,$K, below which the resistivity begins to increase.
These trends may also be observed in the behavior of the spectral weight. The spectral weight is defined here as $$W(T) = \frac{Z_0}{\pi^2} \int_{\omega_a}^{\omega_b} \sigma_{1}(\omega,\,{T})\, d\omega ,$$ over the $\omega_a - \omega_b$ interval. As the temperature is lowered, spectral weight is transferred from high to low frequency as the scattering rates decrease, shown in Fig. \[fig:weight\](a). This trend is gradually reversed below $\simeq 150$ K with spectral weight now transferred from below 600 cm$^{-1}$ to a broad peak centered at $\sim 1000$ cm$^{-1}$.
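
Numerically, $W(T)$ is obtained by direct quadrature of the measured conductivity; a minimal sketch, using a single synthetic Drude term as input, is given below and illustrates that the integral over all frequencies recovers $\omega_p^2$, which is the content of the $f$-sum rule invoked next (the grid and parameter values are illustrative only):

```python
import numpy as np

Z0 = 376.73   # impedance of free space (ohms)

def spectral_weight(w, sigma1, wa, wb):
    """W = (Z0/pi^2) * integral of sigma_1(omega) over [wa, wb], omega in cm^-1."""
    m = (w >= wa) & (w <= wb)
    return Z0 / np.pi**2 * np.trapz(sigma1[m], w[m])

# sanity check with a synthetic Drude term (wp = 5000 cm^-1, 1/tau = 100 cm^-1):
w = np.linspace(0.1, 2e5, 400000)
sigma1 = (2 * np.pi / Z0) * 5000**2 * 100 / (w**2 + 100**2)
print(spectral_weight(w, sigma1, 0.0, 2e5))    # ~ wp^2 = 2.5e7, i.e. the f-sum rule
```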
The $f$-sum rule requires that the sum of the squares of the plasma frequencies, $\omega_{p,tot}^2=\omega_{p,D1}^2+\omega_{p,D2}^2+\Omega_{peak}^2$, should remain constant. This is indeed the case, as shown in Fig. \[fig:parent\](d), so it may be inferred that below $T_s$ and $T_N$, strength is transferred from both the coherent and incoherent bands into the mid-infrared excitation. This type of behavior is widely observed in most parent compounds of the FeSCs [@Nakajima2014; @Homes2016] and is attributed to the formation of a spin-density-wave-like (SDW) gap [@Hu2008] and subsequent reconstruction of the Fermi surface [@Yin11] resulting in low-energy interband transitions that lie in the infrared region. Interestingly, the coherent component begins to lose strength well above the structural and magnetic transitions \[Figs.\[fig:parent\](d) and \[fig:weight\](a)\]. The parent material is highly 2D [@Yuan2015], the interlayer magnetic coupling is very weak, and is easily destroyed by doping [@Xiang2012]. Before three dimensional (3D) long-range AFM order can be established, intralayer 2D short-range AFM fluctuations are present [@Dai2012; @Xu2016]. As a result, the semiconducting-like behavior above $T_{N}$ may be regarded as the precursor to AFM order.
Optimally-doped compound ($\mathbf{x=0.1}$)
-------------------------------------------
In the optimally-doped sample, shown in Fig. \[fig:sigma\](b), the optical conductivity exhibits metallic behavior above $\sim 100$ K, although its temperature dependence is rather weak, with spectral weight being gradually transferred from high to low frequency. Below 100 K, the spectral weight below 100 cm$^{-1}$ is gradually suppressed while the spectral weight below 600 cm$^{-1}$ remains constant, suggesting that the missing weight is being transferred to the $100 - 600$ cm$^{-1}$ region \[Fig. \[fig:weight\](b)\], with the formation of a new absorption peak at $\simeq 120$ cm$^{-1}$. At the same time, the low-frequency conductivity is also decreasing, corresponding to the semiconducting-like response below 100 K \[Figs. \[fig:resis\](c) and \[fig:sigma\](b)\]. Upon entry into the superconducting state, the optical conductivity in the low-energy region is almost completely suppressed, with $\sigma_{1}(\omega) \simeq 0$ below $\sim 20$ cm$^{-1}$, signalling the opening of a nodeless superconducting energy gap [@Yang2017].
![The temperature dependence of the normalized spectral weight in the Ca 10-3-8 for the (a) undoped parent compound and (b) optimally-doped material over several different frequency intervals. []{data-label="fig:weight"}](Figure4.pdf){width="2.5in"}
### Normal state
The results of the fits using the two-Drude model to the conductivity of the optimally-doped sample at 150 and 15 K are summarized in Figs. \[fig:doped\](a) and \[fig:doped\](b), respectively. As observed in the parent compound, at room temperature the conductivity may be described by narrow and broad Drude terms; several Lorentzian oscillators are included to describe the bound excitations (interband transitions) at higher energies \[Fig. \[fig:sigma\](b)\]. Below 100 K, the decrease in intensity of the narrow Drude component is accompanied by the formation of a peak at $\simeq 120$ cm$^{-1}$ \[Fig. \[fig:doped\](b)\], which has been fit using a Lorentzian line shape. The temperature dependencies of the scattering rates and plasma frequencies are shown in Figs. \[fig:doped\](c) and \[fig:doped\](d), respectively. By tracking the strengths of the plasma frequencies of the Drude terms, we note that below $T^\ast\simeq 100$ K the plasma frequency of the narrow Drude is suppressed while the strength of the new peak is gradually enhanced \[Fig. \[fig:doped\](d)\]; the broad Drude term displays little temperature dependence above or below $T^\ast$. The conservation of spectral weight again requires that $\omega_{p,tot}^2 =
\omega_{p,D1}^2+\omega_{p,D2}^2+\Omega_{peak}^2$ should remain constant, which is indeed the case in Fig. \[fig:doped\](d). Thus, below $T^\ast$, some of the coherent response (narrow Drude) is transferred to the new peak in the optical conductivity, resulting in a reduced $\sigma_{dc}$ and semiconducting-like behavior.
{width="5.75in"}
The evolution of the low-energy peak may also be described using a simple classical generalization of the Drude (CGD) formula in which a fraction of the carriers' velocity is retained after a collision [@Smith2001]. While many collisions may be considered, in the single-scattering approximation the complex conductivity is written as $$\tilde\sigma(\omega)= \left(\frac{2\pi}{Z_0}\right) \frac{\omega_p^2\tau}{(1-i\omega\tau)}
\left[ 1+\frac{c}{(1-i\omega\tau)} \right],
\label{eq:smith}$$ where $c$ is the persistence of velocity that is retained for a single collision. This model has the interesting attribute that for $c=0$ a simple Drude is recovered, while for $c=-1$ the carriers are completely localized in the form of a Lorentzian oscillator with a peak at $\omega\tau=1$, width $2/\tau$, and an oscillator strength that is identical to the plasma frequency (Supplemental Material). The real and imaginary parts of the optical conductivity have been fit between 15 and 125 K using the two-Drude model with the provision that the narrow Drude component is replaced by the expression in Eq. (\[eq:smith\]). At 125 K, the fit is identical to that of the two-Drude model, returning $c=0$ (pure Drude). Fits below 100 K reveal an increasingly negative value for $c=-0.26$, -0.49, -0.41, -0.51, and $-0.61\pm 0.05$ at 100, 75, 50, 30, and 15 K, respectively. Interestingly, the plasma frequency for the narrow Drude is now roughly constant, with $\omega_{p,D1}\simeq
5200\pm 500$ cm$^{-1}$, which is essentially identical to the values returned from the two-Drude model at and above 125 K \[Fig. \[fig:doped\](d)\]; this indicates that the response of both the localized and free carriers is now incorporated into a single plasma frequency. The scattering rate is also slightly lower at low temperatures than the values obtained using the two-Drude model \[Fig. \[fig:doped\](c)\]. The values for the broad Drude term are unchanged. While the overall quality of the fits is indistinguishable from the two-Drude model, it is remarkable that the introduction of a single new parameter to the narrow Drude band allows both the position and strength of the low-energy peak to be described quite well, indicating that this peak likely arises from carrier localization due to scattering. In addition, the value of $c\simeq -0.6$ at 15 K is consistent with the results of the two-Drude with a Lorentzian that indicate that the spectral weight of the narrow Drude component is split more or less equally between free and localized carriers at low temperature. These results are in good agreement with recent angle-resolved photoemission spectroscopy (ARPES) measurements, which show a clear decrease in the size of the hole pockets below $T^\ast$ (Supplementary Fig. S2).
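A short numerical sketch of Eq. (\[eq:smith\]) illustrates the two limits discussed above: $c=0$ recovers a simple Drude response, while $c=-1$ localizes the carriers and produces a peak in $\sigma_1$ at $\omega\tau = 1$. The plasma frequency and scattering rate are values quoted in this section; they serve only to illustrate the model.

```python
import numpy as np

Z0 = 376.73  # impedance of free space (ohm)

def cgd_sigma(omega, omega_p, inv_tau, c):
    """Complex conductivity of the single-scattering CGD (Drude-Smith) model, Eq. (smith)."""
    tau = 1.0 / inv_tau
    drude = (2.0 * np.pi / Z0) * omega_p**2 * tau / (1.0 - 1j * omega * tau)
    return drude * (1.0 + c / (1.0 - 1j * omega * tau))

omega = np.linspace(1.0, 1000.0, 2000)                    # cm^-1
free = cgd_sigma(omega, 5200.0, 220.0, 0.0)               # pure Drude (c = 0)
localized = cgd_sigma(omega, 5200.0, 220.0, -1.0)         # fully localized (c = -1)

print(omega[np.argmax(localized.real)] / 220.0)           # peak of sigma_1 sits near omega*tau = 1
```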
### Superconducting state
Although there is a semiconducting-like response in the normal state, below $T_{c}
\simeq 12$ K, the resistivity drops to zero; such a semiconducting-like to superconducting transition is unusual in iron-based superconductors. A clear signature of the superconducting transition is observed in the reflectivity \[Supplementary Fig. S1(b)\]. In the real part of the optical conductivity, the spectral weight below $\simeq 20$ cm$^{-1}$ is totally suppressed, indicating the opening of a nodeless superconducting energy gap. The Mattis-Bardeen formalism is used to describe the gapping of the spectrum of excitations in the superconducting state [@Zimmerman91; @Dressel-Book]. The real part of the optical conductivity is shown just above $T_c$ by the dotted line in Fig. \[fig:super\](a); this curve is described by narrow ($\omega_{p,D1}\simeq 3580$ cm$^{-1}$, $1/\tau_{D1}\simeq 220$ cm$^{-1}$) and broad ($\omega_{p,D2}\simeq 10\,150$ cm$^{-1}$, $1/\tau_{D2}\simeq 1500$ cm$^{-1}$) Drude components, as well as a low-energy bound excitation ($\omega_0\simeq 120$ cm$^{-1}$, $\gamma_0\simeq 285$ cm$^{-1}$, and $\Omega_0\simeq 3660$ cm$^{-1}$). The data below $T_c$ at $\simeq 5\,$K is shown by the solid line; despite the presence of multiple bands and a weak shoulder at $\simeq 30$ cm$^{-1}$, we have chosen to simplify the analysis and model the data with a single superconducting energy scale for both bands, $2\Delta_1 = 2\Delta_2 \simeq 20\pm 4$ cm$^{-1}$ ($\simeq 2.5\pm 0.4$ meV). Note that $1/\tau_{D1} > 2\Delta_{1,2}$, and $1/\tau_{D2}\gg 2\Delta_{1,2}$, placing this material in the dirty limit [@Homes2015]. The gapped spectra of excitations for the two Drude bands, as well as the contribution from the emergent peak, are shown by the dashed lines in Fig. \[fig:super\](a). The linear combination of all three contributions reproduces the data quite well, except in the region of the peak. It should be noted that the normal-state values are not refined to fit the data below $T_c$. The 5 K data may be more accurately reproduced by decreasing the intensity of the peak by about 20%, suggesting that some of the spectral weight of this feature has collapsed into the condensate. The gap ratio ${2\Delta_{1,2}}/{k_{\rm B}T_{c}} \simeq 2.4$ falls below the BCS value of 3.5, placing this material in the BCS weak-coupling limit.
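The quoted gap ratio follows from a simple unit conversion (1 cm$^{-1}$ $\simeq 0.124$ meV and $k_{\rm B} \simeq 0.695$ cm$^{-1}$/K); a short check with the values given above:

```python
k_B = 0.6950          # Boltzmann constant in cm^-1 per kelvin
cm1_to_meV = 0.12398  # conversion factor
two_delta, T_c = 20.0, 12.0  # cm^-1 and K, values quoted in the text

print(two_delta * cm1_to_meV)    # ~2.5 meV
print(two_delta / (k_B * T_c))   # ~2.4, below the BCS weak-coupling value of 3.5
```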
![(a) The real part of the optical conductivity of optimally-doped Ca 10-3-8 just above (dotted line) and below (solid line) $T_{c}$. The short dashed lines are the gapped spectra of excitations for the narrow and broad Drude bands with $2\Delta_{1,2} \simeq 20$ cm$^{-1}$; the long dashed line is the contribution of the emergent peak. The linear combination of these contributions (dashed-dot line) reproduces the data quite well, except in the region of the peak. (b) The superfluid weight (solid black line) obtained from the imaginary part of $\omega\sigma_{2}(\omega)$ (see Ref. for details). The red and blue dashed lines are obtained from the FGT sum rule \[Eq. (\[eq:FGT\])\].[]{data-label="fig:super"}](Figure6.pdf){width="2.5in"}
The formation of superconducting energy gap(s) below $T_c$ results in the loss of low-frequency spectral weight that collapses into the superfluid condensate; the strength of the condensate may be estimated in one of two ways. The complex conductivity for the superfluid response may be expressed as [@Dordevic2002; @Hwang2007] $$\tilde\sigma_{s}(\omega)= \sigma_{s1}+\emph{i}\,\sigma_{s2}(\omega) =
\frac{\pi^2}{Z_{0}}\omega_{ps}^{2}\delta(0)+\frac{\emph{i} 2\pi\omega_{ps}^{2}}{Z_{0}\omega},
\label{eq:sigma}$$ where $\omega_{ps}^{2}=4\pi n_{s}e^{2}/m^\ast$ represents the superconducting plasma frequency, $n_{s}$ is the superconducting carrier density, and $m^\ast$ is an effective mass. Thus, from the imaginary part $2\pi \omega_{ps}^2 \simeq Z_0\omega\sigma_{s2}(\omega)$. Alternatively, the difference between the low-frequency optical conductivity just above and below $T_c$, the so-called “missing area”, can be analyzed using the Ferrel-Glover-Tinkham (FGT) sum rule [@Ferrell1958; @Tinkham1959]: $$\frac{Z_{0}}{\pi^{2}}\int_{0^{+}}^{\omega}[\sigma_{1}(\omega^\prime,T\gtrsim T_{c}) -
\sigma_{1}(\omega^\prime,T\ll T_{c})] d\omega^\prime = \omega_{ps}^{2},
\label{eq:FGT}$$ where the cutoff frequency $\omega$ is chosen so that the integral converges smoothly. Both methods yield similar values of $\omega_{ps}\simeq 2\,110\pm 200$ cm$^{-1}$, shown in Fig. \[fig:super\](b), resulting in a penetration depth of $\lambda_{0}= 7\,500\pm 600$ [Å]{}, in agreement with previous $\mu$SR measurements [@Surmach2015]. While about half of the spectral weight of the narrow Drude component has been transferred to the far-infrared absorption peak \[Fig. \[fig:doped\](d)\], as Fig. \[fig:super\](a) indicates, this peak is also suppressed below $T_{c}$. A key question is: What becomes of these localized electrons? To address this question, we have applied the FGT sum rule by taking the difference in the optical conductivity between 15 and 5 K, and 100 and 5 K. From the results shown in Fig. \[fig:super\](b), we notice that the superfluid stiffness calculated with respect to 15 and 5 K converge at $\omega \simeq 400$ cm$^{-1}$ ($\sim 50$ meV). However, between 100 and 5 K the integral converges much more quickly ($\omega\lesssim 200$ cm$^{-1}$), a very unusual situation. If only the Drude components condense into the superfluid, the results calculated between 15 and 5 K would converge more quickly, because the Drude component is narrower at low temperature \[Figs. \[fig:doped\](c) and \[fig:doped\](d)\]. Such anomalous behavior suggests that there is an extra component in the $100- 400$ cm$^{-1}$ region that contributes to the superfluid below $T_c$. This implies that the newly-formed peak below 100 K contributes to the superfluid condensate. Understanding the origin of this peak may provide insight into the unconventional pairing in this material.
In order to further understand the relation between the semiconducting-like behavior and superconductivity, we performed a magnetic torque measurement on the optimally-doped sample (details are provided in the Supplementary Material). In Fig. \[fig:torque\](a), we observe that, below $T^{*}$, the torque $\tau_{0}$ starts to deviate from the high-temperature *T*-linear behavior and $|\chi_{c}-\chi_{ab}|$ increases with decreasing $H$ \[Fig. \[fig:torque\](b)\]. Both types of behavior indicate a non-linear susceptibility [@Xiao2014]. Approaching $T_{c}$, this non-linearity appears to diverge in the zero-field limit, suggesting that this behavior may be related to superconducting fluctuations [@Kasahara2016]. Inelastic neutron scattering and nuclear magnetic resonance have both observed evidence for preformed Cooper pairs in Ca 10-3-8 [@Surmach2015], and the minimum in the temperature-dependent Seebeck coefficient of Pt-doped material can be understood in terms of either preformed pairs or the phonon drag effect [@Ni2011]. A difficulty with the notion of preformed pairs is that they are typically observed in strong-coupling systems, in which the coherence length is comparable with the inter-particle distance [@Kasahara2016]; however, optimally-doped Ca 10-3-8 is in the BCS weak-coupling limit. Furthermore, in cuprates and FeSe, superconducting fluctuations always result in a decrease in the resistance [@Popcevic2018]; the semiconducting-like behavior is anomalous.
![(a) The temperature-dependent out-of-plane torque $\tau_{0} =
\frac{1}{2}\mu_{0}(\chi_{c}-\chi_{ab})H^{2}$ for the optimally-doped Ca 10-3-8 with fixed magnetic field (9 T); $\chi_{c}$ and $\chi_{ab}$ are the magnetic susceptibilities along the *c* and *a* axes, respectively. The dashed line is a linear fit to the high-temperature data. (b) The field-dependent $|\chi_{c}-\chi_{ab}|$ at different temperatures.[]{data-label="fig:torque"}](Figure7.pdf){width="2.55in"}
Localization and magnetism
--------------------------
The substitution of Pt for Fe into the Fe–As planes of (CaFe$_{1-x}$Pt$_{x}$As)$_{10}$Pt$_{3}$As$_{8}$ induces superconductivity; however, it also results in the introduction of disorder sites that can lead to strong scattering and the localization of free carriers. In systems that display activated behavior, the interplay between localization and superconductivity is of considerable interest [@Ma1985; @Valles1989]. In YBa$_2$(Cu$_{1-x}$Zn$_x$)$_4$O$_8$, the substitution of Zn for Cu led to the dramatic reduction of $T_c$ and the appearance of a peak in the optical conductivity at $\simeq 120$ cm$^{-1}$ that was attributed to quasiparticle localization [@Basov1998]. In the cuprates, non-magnetic Zn is thought to act as a magnetic impurity. A low-energy peak in the optical conductivity was also observed when the magnetic impurities Mn and Cr were substituted for Fe in BaFe$_2$As$_2$ [@Kobayashi2016]. However, in (CaFe$_{1-x}$Pt$_{x}$As)$_{10}$Pt$_{3}$As$_{8}$, the Pt$^{4+}$ atoms in the Fe–As layers are nonmagnetic, and a low-energy peak has never been observed in iron-based superconductors with nonmagnetic impurities [@Nakajima2010]. We note that the semiconducting-like behavior is not unique to Pt-doped Ca 10-3-8; under pressure, the stoichiometric parent compound also shows semiconducting-like behavior, right up to the point at which it becomes a superconductor. The similarity between the phase diagrams of Pt-doped Ca 10-3-8 and Ca 10-3-8 under pressure indicates the intrinsic nature of the semiconducting-like behavior. Moreover, the low-energy peak observed in Pt-doped Ca 10-3-8 has also been seen in La-doped Ca$_{8.5}$La$_{1.5}$(Pt$_3$As$_8$)(Fe$_2$As$_2$)$_5$ (out-of-plane doping) [@Seo2017], and in stoichiometric (CaFeAs)$_{10}$Pt$_4$As$_8$ [@Seo2018]. The 2D nature of these materials will greatly increase the importance of spin fluctuations, suggesting that they may be playing a prominent role in the in-plane transport properties. It is therefore likely that the localization peak observed in this work arises from strong scattering due to AFM fluctuations rather than impurity scattering.
Although AFM order competes with superconductivity, spin fluctuations have been proposed as a possible pairing mechanism in the high-temperature superconductors [@Moriya2000]. The torque magnetometry results \[Fig. \[fig:torque\](a)\] indicate that the onset for superconducting fluctuations occur well above $T_c$. This type of behavior has been observed in many high-temperature superconductors [@Li2010; @Cyr2018], and it has been suggested that these fluctuations may be attributed to the inhomogeneous nature (either structural or electronic) of these materials [@Pelc2018]. Thus, it may be the case that the putative normal-state is an effective medium in which the superconducting regions are embedded in a poorly-conducting matrix with either strong spin fluctuations or magnetic order (i.e., incommensurate SDW); below $T_c$, phase coherence is established across the different superconducting regions and a bulk superconducting transition is observed. The global onset of superconductivity would naturally suppress the magnetic fluctuations (order) and the scattering attributed to it, leading to a reduction in the size of the localization peak. The carriers that are no longer localized due to strong scattering would then be allowed to collapse into the condensate, a result that is consistent with the observed transfer of spectral weight from the peak into the condensate below $T_c$.
Summary
=======
To conclude, the temperature dependence of the in-plane optical properties of (CaFe$_{1-x}$Pt$_{x}$As)$_{10}$Pt$_{3}$As$_{8}$ has been examined for the undoped ($x=0$) parent compound with $T_s\simeq 96$ K and $T_N\simeq 83$ K, and the optimally-doped ($x=0.1$) superconducting material, $T_c\simeq 12$ K. At room temperature, the optical conductivity of both materials may be described by the two-Drude model. In the parent compound, below $T_s$ and $T_N$ the broad Drude component narrows and decreases dramatically in strength, behavior which is also observed in the narrow Drude component. The missing spectral weight is transferred to a broad peak at $\simeq 1000$ cm$^{-1}$, which is attributed to a low-energy interband transition that originates from the Fermi surface reconstruction driven by the structural and magnetic transitions. The semiconducting-like behavior originates from short-range magnetic fluctuations that could be regarded as the precursor to AFM order. In the optimally-doped material, the broad Drude term shows little temperature dependence, but the scattering rate in the narrow Drude component has a weak temperature dependence. Below $T^\ast \simeq 100$ K, the narrow Drude loses strength at the same time a localization peak at $\simeq 120$ cm$^{-1}$ emerges. A classical generalization of the Drude model reproduces the position and strength of the low-energy peak, indicating that it originates via a localization process. Torque magnetometry detects a diamagnetic signal well above $T_c$, which is attributed to SC fluctuations. Below $T_c$ magnetic fluctuations (order) are suppressed, resulting in a decrease in localization, allowing spectral weight from this peak to be transferred into the superconducting condensate. These results indicate an intimate relationship between magnetism and superconductivity in this material.
We thank Liling Sun, Jimin Zhao, Weiguo Yin, Yilin Wang, Hu Miao, and Peter Johnson for useful discussions. Work at the Chinese Academy of Sciences was supported by NSFC (Project No. 11374345 and No. 91421304) and MOST (Project No. 2015CB921303 and No. 2015CB921102). J.H. acknowledges the financial support from the National Research Foundation of Korea (NRFK Grant No. 2017R1A2B4007387). Work at Brookhaven National Laboratory was supported by the Office of Science, U.S. Department of Energy under Contract No. DE-SC0012704.
[1] doi:10.1080/00018732.2010.513480
[2] doi:10.1038/nphys1759
[3] doi:10.1038/natrevmats.2016.17
[4] doi:10.1103/RevModPhys.78.17
[5] doi:10.1038/nmat4492
[6] doi:10.1080/14786430410001716944
[7] doi:10.1103/PhysRevLett.63.1996
[8] doi:10.1088/0034-4885/62/1/002
[9] doi:10.1038/Nature07400
[10] doi:10.1103/PhysRevLett.95.247002
[11] doi:10.1038/35020016
[12] doi:10.1126/science.1134742
[13] doi:10.1103/PhysRevB.85.024534
[14] doi:10.1073/pnas.1110563108
[15] doi:10.1002/adma.201305154
[16] doi:10.1103/PhysRevB.85.020504
[17] http://stacks.iop.org/0953-8984/25/i=12/a=122201
[18] http://stacks.iop.org/0953-8984/25/i=12/a=122203
[19] doi:10.1103/PhysRevB.90.100504
[20] doi:10.1103/PhysRevB.85.224527
[21] doi:10.1103/PhysRevB.87.060507
[22] doi:10.1364/AO.32.002976
[23] (no identifier recovered)
[24] doi:10.1103/PhysRevB.81.100512
[25] doi:10.7566/JPSJ.83.104703
[26] doi:10.1103/PhysRevB.94.195142
[27] doi:10.1103/PhysRevLett.101.257005
[28] doi:10.1038/nphys1923
[29] doi:10.1063/1.4926486
[30] doi:10.1038/nphys2438
[31] doi:10.1103/PhysRevB.94.085147
[32] doi:10.1103/PhysRevB.95.064506
[33] doi:10.1103/PhysRevB.64.155106
[34] doi:10.1016/0921-4534(91)90771-P
[35] (book; no identifier recovered)
[36] doi:10.1103/PhysRevB.91.144503
[37] doi:10.1103/PhysRevB.69.024514
[38] doi:10.1103/PhysRevB.65.134511
[39] doi:10.1088/0953-8984/19/12/125208
[40] doi:10.1103/PhysRev.109.1398
[41] doi:10.1103/PhysRevLett.2.331
[42] doi:10.1103/PhysRevB.91.104515
[43] doi:10.1103/PhysRevB.90.214511
[44] doi:10.1038/ncomms12843
[45] doi:10.1038/s41535-018-0115-2
[46] doi:10.1103/PhysRevB.32.5658
[47] doi:10.1103/PhysRevB.39.11599
[48] doi:10.1103/PhysRevLett.81.2132
[49] doi:10.1103/PhysRevB.94.224516
[50] doi:10.1103/PhysRevB.81.104528
[51] doi:10.1103/PhysRevB.95.094510
[52] doi:10.1038/s41598-019-40528-3
[53] doi:10.1080/000187300412248
[54] doi:10.1103/PhysRevB.81.054510
[55] doi:10.1103/PhysRevB.97.064502
[56] doi:10.1038/s41467-018-06707-y
---
abstract: 'We show that the Cuntz splice induces stably isomorphic graph $C^*$-algebras.'
address:
- 'Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen, Denmark'
- 'Department of Science and Technology, University of the Faroe Islands, Nóatún 3, FO-100 Tórshavn, the Faroe Islands'
- 'Department of Mathematics, University of Hawaii, Hilo, 200 W. Kawili St., Hilo, Hawaii, 96720-4091 USA'
- 'Department of Mathematics, University of Oslo, PO BOX 1053 Blindern, N-0316 Oslo, Norway'
author:
- Søren Eilers
- Gunnar Restorff
- Efren Ruiz
- 'Adam P. W. Sørensen'
title: Invariance of the Cuntz splice
---
Introduction
============
Cuntz and Krieger introduced the Cuntz-Krieger algebras in [@MR561974], and Cuntz showed in [@MR608527] that if we restrict to the matrices satisfying the modest condition (II), then the stabilized Cuntz-Krieger algebras are an invariant of shifts of finite type up to flow equivalence. Shortly after Franks had made a successful classification of irreducible shifts of finite type up to flow equivalence ([@MR758893]), Cuntz raised the question of whether this invariant or the $K_0$-group alone classifies simple Cuntz-Krieger algebras up to stable isomorphism. He sketched in [@MR866492] that it was enough to answer whether $\mathcal{O}_2$ and $\mathcal{O}_{2_-}$ are isomorphic, where $\mathcal{O}_2$ and $\mathcal{O}_{2_-}$ are the Cuntz-Krieger algebras associated with the matrices $$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\quad\text{and}\quad\begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \end{pmatrix},$$ respectively. This question remained open until Rørdam in [@MR1340839] showed that $\mathcal{O}_2$ and $\mathcal{O}_{2_-}$ are in fact isomorphic and elaborated on the arguments of Cuntz to show that the $K_0$-group is a complete invariant of the stabilized simple Cuntz-Krieger algebras.
This procedure of gluing the graph corresponding to the former matrix above onto another graph has since been known as *Cuntz splicing* a graph at a certain vertex. Knowing that when we Cuntz splice a graph (on a vertex that supports two return paths), we get stably isomorphic Cuntz-Krieger algebras, has been important for classifying Cuntz-Krieger algebras ([@MR1340839; @MR1329907; @MR2270572]), as well as understanding the connection between the dynamics of the underlying shift spaces and the Cuntz-Krieger algebras. With the recent work on the relation between move equivalence of graphs and stable isomorphism of the corresponding graph $C^*$-algebras, the question of whether Cuntz splicing yields stably isomorphic $C^*$-algebras has become of great interest. Bentmann has shown that this is in fact the case for purely infinite graph $C^*$-algebras with finitely many ideals ([@arXiv:1510.06757v2]), while Gabe recently has generalized this to also cover general purely infinite graph $C^*$-algebras ([@GabePrivate2016]). Their methods depend heavily on the result of Kirchberg on lifting invertible ideal-related $\operatorname{KK}$-elements to equivariant isomorphisms for strongly purely infinite $C^*$-algebras ([@MR1796912]).
In this paper we show in general that Cuntz splicing a vertex that supports two distinct return paths yields stably isomorphic graph $C^*$-algebras, assuming only that the graph is countable. The results, the proofs and methods of this paper are important for recent developments in the geometric classification of general Cuntz-Krieger algebras and of unital graph $C^*$-algebras ([@Eilers-Restorff-Ruiz-Sorensen-1], [@Eilers-Restorff-Ruiz-Sorensen-2]) as well as for the question of strong classification of general Cuntz-Krieger algebras and of unital graph $C^*$-algebras ([@Carlsen-Restorff-Ruiz], [@Eilers-Restorff-Ruiz-Sorensen-2]).
We proved invariance of the Cuntz splice in the special case of unital graph $C^*$-algebras in an arXiv preprint (1505.06773) posted in May 2015. Bentmann’s recent paper showed us how to reduce the general question to the row-finite case, and we proceeded to discover that our arguments applied with only minor changes to that case. Since most of the results of our preprint have since been superseded by other forthcoming work, we do not intend to publish it, whereas this work is intended for publication.
Preliminaries
=============
A graph $E$ is a quadruple $E = (E^0 , E^1 , r, s)$ where $E^0$ and $E^1$ are sets, and $r$ and $s$ are maps from $E^1$ to $E^0$. The elements of $E^0$ are called *vertices*, the elements of $E^1$ are called *edges*, the map $r$ is called the *range map*, and the map $s$ is called the *source map*.
When working with several graphs at the same time, to avoid confusion, we will denote the range map and source map of a graph $E$ by $r_E$ and $s_E$ respectively.
All graphs considered will be *countable*, [*i.e.*]{}, there are countably many vertices and edges.
A *loop* is an edge with the same range and source.
A *path* $\mu$ in a graph is a finite sequence $\mu = e_1 e_2 \cdots e_n$ of edges satisfying $r(e_i)=s(e_{i+1})$, for all $i=1,2,\ldots, n-1$, and we say that the *length* of $\mu$ is $n$. We extend the range and source maps to paths by letting $s(\mu) = s(e_1)$ and $r(\mu) = r(e_n)$. Vertices in $E$ are regarded as *paths of length $0$* (also called empty paths).
A *cycle* is a nonempty path $\mu$ such that $s(\mu) = r(\mu)$. We call a cycle $e_1e_2\cdots e_n$ a *vertex-simple cycle* if $r(e_i)\neq r(e_j)$ for all $i\neq j$. A vertex-simple cycle $e_1e_2\cdots e_n$ is said to have an *exit* if there exists an edge $f$ such that $s(f)=s(e_k)$ for some $k=1,2,\ldots,n$ with $e_k\neq f$. A *return path* is a cycle $\mu = e_1 e_2 \cdots e_n$ such that $r(e_i) \neq r(\mu)$ for $i < n$.
For a loop, cycle or return path, we say that it is *based* at the source vertex of its path. We also say that a vertex *supports* a certain loop, cycle or return path if it is based at that vertex.
Note that in [@MR1988256; @MR1914564], the authors use the term *loop* where we use *cycle*.
A vertex $v$ in $E$ is called *regular* if $s^{-1}(v)$ is finite and nonempty. We denote the set of regular vertices by $E_{\mathrm{reg}}^0$.
A vertex $v$ in $E$ is called a *sink* if $s^{-1}(v)=\emptyset$. A graph $E$ is called *row-finite* if for each $v \in E^0$, $v$ is either a sink or a regular vertex.
It is essential for our approach to graph $C^*$-algebras to be able to shift between a graph and its adjacency matrix. In what follows, we let $\mathbb{N}$ denote the set of positive integers, while ${\ensuremath{\mathbb{N}}\xspace}_0$ denotes the set of nonnegative integers.
Let $E = (E^0 , E^1 , r, s)$ be a graph. We define its *adjacency matrix* ${\mathsf{A}}_E$ as a $E^0\times E^0$ matrix with the $(u,v)$’th entry being $$\left\vert{\left\{ e\in E^1 \;\middle|\; s(e)=u, r(e)=v \right\}}\right\vert.$$ As we only consider countable graphs, ${\mathsf{A}}_E$ will be a finite matrix or a countably infinite matrix, and it will have entries from ${\ensuremath{\mathbb{N}}\xspace}_0\sqcup\{\infty\}$.
Let $X$ be a set. If $A$ is an $X \times X$ matrix with entries from ${\ensuremath{\mathbb{N}}\xspace}_0\sqcup\{\infty\}$, we let ${\mathsf{E}}_{A}$ be the graph with vertex set $X$ such that between two vertices $x,x' \in X$ we have $A(x,x')$ edges.
It will be convenient for us to alter the adjacency matrix of a graph in a very specific way, subtracting the identity, so we introduce notation for this.
Let $E$ be a graph and ${\mathsf{A}}_E$ its adjacency matrix. Let ${\mathsf{B}}_E$ denote the matrix ${\mathsf{A}}_{E} - I$.
Graph C\*-algebras
------------------
We follow the notation and definition for graph $C^*$-algebras in [@MR1670363]; this is not the convention used in Raeburn’s monograph [@MR2135030].
\[def:graphca\] Let $E = (E^0,E^1,r,s)$ be a graph. The *graph $C^*$-algebra* $C^*(E)$ is defined as the universal $C^*$-algebra generated by a set of mutually orthogonal projections ${\left\{ p_v \;\middle|\; v \in E^0 \right\}}$ and a set ${\left\{ s_e \;\middle|\; e \in E^1 \right\}}$ of partial isometries satisfying the relations
- $s_e^* s_f = 0$ if $e,f \in E^1$ and $e \neq f$,
- $s_e^* s_e = p_{r(e)}$ for all $e \in E^1$,
- $s_e s_e^* \leq p_{s(e)}$ for all $e \in E^1$, and,
- $p_v = \sum_{e \in s^{-1}(v)} s_e s_e^*$ for all $v \in E^0$ with $0 < |s^{-1}(v)| < \infty$.
Whenever we have a set of mutually orthogonal projections ${\left\{ p_v \;\middle|\; v \in E^0 \right\}}$ and a set ${\left\{ s_e \;\middle|\; e \in E^1 \right\}}$ of partial isometries in a $C^*$-algebra satisfying the relations, then we call these elements a *Cuntz-Krieger $E$-family*.
We will also need moves on graphs as defined in [@MR3082546]. In the case of graphs with finitely many vertices the basic moves are outsplitting (Move ), insplitting (Move ), reduction (Move ), and removal of a regular source (Move ). It turns out that in the general setting, move must be replaced by the following
\[def:collapse\] Let $E = (E^0 , E^1 , r , s )$ be a graph and let $v$ be a regular vertex in $E$ that does not support a loop. Define a graph $E_{COL}$ by $$\begin{aligned}
E_{COL}^0 &= E^0 \setminus \{v\}, \\
E_{COL}^1 &= E^1 \setminus (r^{-1}(v) \cup s^{-1}(v))
\sqcup {\left\{ [ef ] \;\middle|\; e \in r^{-1}(v)\text{ and }f \in s^{-1}(v) \right\}},\end{aligned}$$ the range and source maps extends those of $E$, and satisfy $r_{E_{COL}}([ef ]) = r (f )$ and $s_{E_{COL}} ([ef ]) = s (e)$.
Move was defined in [@MR3082546 Theorem 5.2] for graphs with finitely many vertices as an auxiliary move, and proved there to be realized by moves , and .
The equivalence relation generated by the moves , , , together with graph isomorphism is called *move equivalence*, and denoted [$\sim_{M\negthinspace E}$]{}.
Let $X$ be a set and let $A$ and $A'$ be $X \times X$ matrices with entries from ${\ensuremath{\mathbb{N}}\xspace}_0\sqcup\{\infty\}$. If ${\mathsf{E}}_A {\ensuremath{\sim_{M\negthinspace E}}\xspace}{\mathsf{E}}_{A'}$, then we say that $A$ and $A'$ are *move equivalent*, and we write $A {\ensuremath{\sim_{M\negthinspace E}}\xspace}A'$.
By [@MR3082546 Theorem 5.2], the above definition is equivalent to the definition in [@MR3082546 Section 4] for graphs with finitely many vertices.
These moves have been considered by other authors, and were previously noted to preserve the Morita equivalence class of the associated graph $C^*$-algebra. The moves and induce stably isomorphic $C^*$-algebras due to the results in [@MR2054048], and by [@MR2215769], moves , , preserve the Morita equivalence class of the associated graph $C^*$-algebras (see also [@MR3082546 Propositions 3.1, 3.2 and 3.3 and Theorem 3.5]). Therefore, we get the following theorem.
\[thm:moveimpliesstableisomorphism\] Let $E_1$ and $E_2$ be graphs such that $E_1{\ensuremath{\sim_{M\negthinspace E}}\xspace}E_2$. Then $C\sp*(E_1)\otimes {\ensuremath{\mathbb{K}}\xspace}\cong C\sp*(E_2)\otimes {\ensuremath{\mathbb{K}}\xspace}$.
We now recall the definition of the Cuntz splice (see Notation \[notation:OnceAndTwice-new\] and Example \[example:cuntz-splice-new\] for illustrations).
\[def:cuntzsplice\] Let $E = (E^0 , E^1 , r , s )$ be a graph and let $v \in E^0$ be a regular vertex that supports at least two return paths. Let ${E_{v,-}}$ denote the graph $(E_{v,-}^0 , E_{v,-}^1 , r_{v,-}, s_{v,-})$ defined by $$\begin{aligned}
{E^0_{v,-}} &:= E^0\sqcup\{u_1 , u_2 \} \\
{E^1_{v,-}} &:= E^1\sqcup\{e_1 , e_2 , f_1 , f_2 , h_1 , h_2 \},\end{aligned}$$ where $r_{v,-}$ and $s_{v,-}$ extend $r$ and $s$, respectively, and satisfy $$s_{v,-} (e_1 ) = v,\quad s_{v,-} (e_2 ) = u_1 ,\quad s_{v,-} (f_i ) = u_1 ,\quad s_{v,-} (h_i ) = u_2 ,$$ and $$r_{v,-} (e_1 ) = u_1 ,\quad r_{v,-} (e_2 ) = v,\quad r_{v,-} (f_i ) = u_i ,\quad r_{v,-} (h_i ) = u_i .$$ We call ${E_{v,-}}$ the *graph obtained by Cuntz splicing $E$ at $v$*, and say ${E_{v,-}}$ is formed by performing Move to $E$.
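In terms of adjacency matrices, the move can be summarized as follows; this is merely a restatement of the definition above. Ordering the vertices of ${E_{v,-}}$ as $E^0$ followed by $u_1, u_2$, and writing $\chi_v$ for the vector over $E^0$ that is $1$ in the $v$'th entry and $0$ elsewhere (a notation used only here), we have $${\mathsf{A}}_{{E_{v,-}}} =
\begin{pmatrix}
{\mathsf{A}}_E & \chi_v & 0 \\
\chi_v^{t} & 1 & 1 \\
0 & 1 & 1
\end{pmatrix}.$$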
The aim of this paper is to prove that $C\sp*(E)\otimes {\ensuremath{\mathbb{K}}\xspace}\cong C\sp*({E_{v,-}})\otimes {\ensuremath{\mathbb{K}}\xspace}$ for any graph $E$. In fact, we prove slightly more, since our proof allows for Cuntz splicing also at infinite emitters supporting at least two return paths.
Elementary matrix operations preserving move equivalence {#addmoves}
========================================================
In this section we perform row and column additions on ${\mathsf{B}}_E$ without changing the move equivalence class of the associated graphs. Our setup is slightly different from what was considered in [@MR3082546 Section 7], so we redo the proofs from there in our setting. There are no substantial changes in the proof techniques, which essentially go back to [@MR758893].
\[lem:oneStepColumnAdd\] Let $E = (E^0, E^1, r_E,s_E)$ be a graph. Let $u,v \in E^0$ be distinct vertices. Suppose the $(u,v)$’th entry of ${\mathsf{B}}_E$ is nonzero ([*i.e.*]{}, there is an edge from $u$ to $v$), and that the sum of the entries in the $u$’th row of ${\mathsf{B}}_E$ is strictly greater than 0 ([*i.e.*]{}, $u$ emits at least two edges). If $B'$ is the matrix formed from ${\mathsf{B}}_E$ by adding the $u$’th column into the $v$’th column, then $${\mathsf{A}}_E {\ensuremath{\sim_{M\negthinspace E}}\xspace}B' + I.$$
Fix an edge $f$ from $u$ to $v$. Form a graph $G$ from $E$ by removing $f$ but adding for each edge $e \in r_E^{-1}(u)$ an edge $\bar{e}$ with $s_G(\bar{e}) = s_E(e)$ and $r_G(\bar{e}) = v$. We claim that $B' = {\mathsf{B}}_G$. At any entry other than the $(u,v)$’th entry the two matrices have the same values, since we in both cases add entries into the $v$’th column that are exactly equal to the number of edges in $E$. At the $(u,v)$’th entry of ${\mathsf{B}}_G$ we have $$\begin{aligned}
(|s_E^{-1}(u) \cap r_E^{-1}(v)| - 1) + |s_E^{-1}(u) \cap r_E^{-1}(u)| &= {\mathsf{B}}_E(u,v) + {\mathsf{B}}_E(u,u) = B'(u,v).\end{aligned}$$ Thus to prove this lemma it suffices to show $E {\ensuremath{\sim_{M\negthinspace E}}\xspace}G$.
Partition $s_E^{-1}(u)$ as $\mathcal{E}_1 = \{ f \}$ and $\mathcal{E}_2 = s_E^{-1}(u) \setminus \{ f \}$. By assumption $\mathcal{E}_2$ is not empty, so we can use Move . Doing so yields a graph just as $E$ but where $u$ is replaced by two vertices, $u_1$ and $u_2$. The vertex $u_1$ receives a copy of everything $u$ did and it emits only one edge. That edge has range $v$. The vertex $u_2$ also receives a copy of everything $u$ did, and it emits everything $u$ did, except $f$. Since $u_1$ is regular and not the base of a loop, we can collapse it. The resulting graph is $G$ (after we relabel $u_2$ as $u$), so $G {\ensuremath{\sim_{M\negthinspace E}}\xspace}E$.
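As a small illustration of the lemma (not needed in the sequel), let $E$ be the graph with two vertices $u$ and $v$, a loop at each vertex, and one edge in each direction between them, so that, with the vertices ordered $(u,v)$, $${\mathsf{A}}_E = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad {\mathsf{B}}_E = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$ There is an edge from $u$ to $v$ and $u$ emits two edges, so the lemma applies; adding the $u$'th column of ${\mathsf{B}}_E$ into the $v$'th column gives $B' = \left(\begin{smallmatrix} 0 & 1 \\ 1 & 1 \end{smallmatrix}\right)$, and hence ${\mathsf{A}}_E {\ensuremath{\sim_{M\negthinspace E}}\xspace}B' + I = \left(\begin{smallmatrix} 1 & 1 \\ 1 & 2 \end{smallmatrix}\right)$.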
We can also add columns along a path.
\[prop:columnAdd\] Let $E = (E^0, E^1, r_E,s_E)$ be a graph and let $u,v \in E^0$ be distinct vertices with a path from $u$ to $v$ going through distinct vertices $u = u_0, u_1, u_2, \ldots, u_n = v$ (labelled so there is an edge from $u_i$ to $u_{i+1}$ for $i=0,1,2 \ldots,n-1$). Suppose further that $u$ supports a loop. If $B'$ is the matrix formed from ${\mathsf{B}}_E$ by adding the $u$’th column into the $v$’th column, then $${\mathsf{A}}_E {\ensuremath{\sim_{M\negthinspace E}}\xspace}B' + I.$$
That $u$ supports a loop guarantees that $B'+I$ is the adjacency matrix of a graph $E'={\mathsf{E}}_{B'+I}$.
The vertex $u_i$ emits exactly one edge in $E$ if and only if it emits exactly one edge in $E'$, for $i=1,\ldots,n-1$. So by collapsing all regular vertices $u_i$, $i=1,2,\ldots,n-1$ emitting exactly one edge both in $E$ and in $E'$, we get two new graphs $E_1{\ensuremath{\sim_{M\negthinspace E}}\xspace}E$ and $E_1'{\ensuremath{\sim_{M\negthinspace E}}\xspace}E'$. In $E_1$, there is a path from $u$ to $v$ through vertices that all emit at least two edges. Moreover, ${\mathsf{B}}_{E_1'}$ is obtained from ${\mathsf{B}}_{E_1}$ by adding the $u$’th column into the $v$’th column. Therefore, we may without loss of generality assume that all the vertices $u_i$, $i=0,1,2,\ldots,n-1$ emit at least two edges.
By repeated applications of Lemma \[lem:oneStepColumnAdd\], we first add the $u_{n-1}$’th column into the $u_n$’th column of ${\mathsf{B}}_E$, which we can since there is an edge from $u_{n-1}$ to $u_n$. Then we add the $u_{n-2}$’th column into the $u_n$’th column, which we can since there now is an edge from $u_{n-2}$ to $u_n$. Continuing this way, we end up with a matrix $C$ which is formed from ${\mathsf{B}}_E$ by adding all the columns $u_i$, for $i = 0,1,2,\ldots, n-1$, into the $u_n$’th column. We have that ${\mathsf{A}}_E {\ensuremath{\sim_{M\negthinspace E}}\xspace}C + I$.
Now consider the matrix $B'={\mathsf{B}}_{E'}$. By repeated applications of Lemma \[lem:oneStepColumnAdd\], we first add the $u_{n-1}$’th column into the $u_n$’th column of $B'={\mathsf{B}}_{E'}$, which we can since there is an edge from $u_{n-1}$ to $u_n$. Then we add the $u_{n-2}$’th column into the $u_n$’th column, which we can since there now is an edge from $u_{n-2}$ to $u_n$. Continuing this way, we end up with a matrix $D$ which is formed from $B'={\mathsf{B}}_{E'}$ by adding all the columns $u_i$, for $i = 1,2,\ldots, n-1$, into the $u_n$’th column. We have that $B'+I={\mathsf{A}}_{E'} {\ensuremath{\sim_{M\negthinspace E}}\xspace}D + I$.
But it is clear from the construction that $C=D$.
\[rmk:columnAdd\] Similar to how we used Lemma \[lem:oneStepColumnAdd\] in the above proof, we can use Proposition \[prop:columnAdd\] “backwards” to subtract columns in ${\mathsf{B}}_E$ as long as the addition that undoes the subtraction would be legal.
We now turn to row additions.
\[lem:oneStepRowAdd\] Let $E = (E^0, E^1, r_E,s_E)$ be a graph. Let $u,v \in E^0$ be distinct vertices. Suppose the $(v,u)$’th entry of ${\mathsf{B}}_E$ is nonzero ([*i.e.*]{}, there is an edge from $v$ to $u$), and that $u$ is a regular vertex. If $B'$ is the matrix formed from ${\mathsf{B}}_E$ by adding the $u$’th row into the $v$’th row, then $${\mathsf{A}}_E {\ensuremath{\sim_{M\negthinspace E}}\xspace}B' + I.$$
Let $E'={\mathsf{E}}_{B'+I}$ denote the graph with adjacency matrix $B'+I$.
First assume that $u$ only receives one edge in $E$ (which necessarily is the edge from $v$). Then $u$ is a regular vertex not supporting a loop, so we can collapse it obtaining a graph $E''$. Note that the vertex $u$ is a regular source in $E'$, so we may remove it. It is clear that the resulting graph is exactly $E''$.
Now assume instead that $u$ receives at least two edges. Fix an edge $f$ from $v$ to $u$. Form a graph $G$ from $E$ by removing $f$ but adding for each edge $e \in s_E^{-1}(u)$ an edge $\bar{e}$ with $s_G(\bar{e}) = v$ and $r_G(\bar{e}) = r_E(e)$. We claim that $E {\ensuremath{\sim_{M\negthinspace E}}\xspace}G$. Arguing as in the proof of Lemma \[lem:oneStepColumnAdd\] we see that this is equivalent to proving ${\mathsf{A}}_E {\ensuremath{\sim_{M\negthinspace E}}\xspace}B' + I$.
Partition $r_E^{-1}(u)$ as $\mathcal{E}_1 = \{ f \}$ and $\mathcal{E}_2 = r_E^{-1}(u) \setminus \{ f \}$. By our assumptions on $u$, $\mathcal{E}_2$ is nonempty, and $u$ is regular, so we can use Move . Doing so replaces $u$ with two new vertices, $u_1$ and $u_2$. The vertex $u_1$ only receives one edge, and that edge comes from $v$, the vertex $u_2$ receives the edges $u$ received except $f$. Since $u_1$ is regular and not the base of a loop we can collapse it. The resulting graph is $G$ (after we relabel $u_2$ as $u$), so $G {\ensuremath{\sim_{M\negthinspace E}}\xspace}E$.
We can also add rows along a path of vertices.
\[prop:rowAdd\] Let $E = (E^0, E^1, r_E,s_E)$ be a graph and let $u,v \in E^0$ be distinct vertices with a path from $v$ to $u$ going through distinct vertices $v = v_0, v_1, v_2, \ldots, v_n = u$ (labelled so there is an edge from $v_i$ to $v_{i+1}$ for $i=0,1,2,\ldots,n-1$). Suppose further that the vertex $u$ is regular and supports at least one loop. If $B'$ is the matrix formed from ${\mathsf{B}}_E$ by adding the $u$’th row into the $v$’th row, then $${\mathsf{A}}_E {\ensuremath{\sim_{M\negthinspace E}}\xspace}B' + I.$$
That $u$ supports a loop guarantees that $B'+I$ is the adjacency matrix of a graph $E'={\mathsf{E}}_{B'+I}$.
First we prove the special case where all the vertices $v_1,\ldots,v_n$ are regular. By repeated applications of Lemma \[lem:oneStepRowAdd\], we first add the $v_{1}$’st row into the $v_0$’th row of ${\mathsf{B}}_E$, which we can since there is an edge from $v_{0}$ to $v_1$ and $v_1$ is regular. Then we add the $v_{2}$’nd row into the $v_0$’th row, which we can since there now is an edge from $v_{0}$ to $v_2$ and $v_2$ is regular. Continuing this way, we end up with a matrix $C$ which is formed from ${\mathsf{B}}_E$ by adding all the rows $v_i$, for $i = 1,2,\ldots, n$, into the $v_0$’th row. We have that ${\mathsf{A}}_E {\ensuremath{\sim_{M\negthinspace E}}\xspace}C + I$.
Now consider the matrix $B'={\mathsf{B}}_{E'}$. By repeated applications of Lemma \[lem:oneStepRowAdd\], we first add the $v_{1}$’st row into the $v_0$’th row of $B'={\mathsf{B}}_{E'}$, which we can since there is an edge from $v_{0}$ to $v_1$. Then we add the $v_{2}$’nd row into the $v_0$’th row, which we can since there now is an edge from $v_{0}$ to $v_2$. Continuing this way, we end up with a matrix $D$ which is formed from $B'={\mathsf{B}}_{E'}$ by adding all the rows $v_i$, for $i = 1,2,\ldots, n-1$, into the $v_0$’th row. We have that $B'+I={\mathsf{A}}_{E'} {\ensuremath{\sim_{M\negthinspace E}}\xspace}D + I$. But it is clear from the construction that $C=D$.
Now we prove that the general case when only $u$ is assumed to be regular can be reduced to the case where $v_1,\ldots,v_n$ are regular. Choose a path $e_0e_1\cdots e_{n-1}$ going through the distinct vertices $v_1,\ldots,v_n$. For each singular vertex $v_{i}$, $i=1,\ldots,n-1$, we outsplit according to the partition $\mathcal{E}_i^1=\{e_i\}$ and $\mathcal{E}_i^2=s_E^{-1}(v_i)$ and call the corresponding vertices $v_i^1$ and $v_i^2$, respectively. Denote the split graph by $E_1$, and denote the vertices $v_i$, $i=1,\ldots,n-1$ that were not split by $v_i^1$. Note that we now have a path from $v$ to $u$ through distinct regular vertices. Note also that since all vertices along the path are distinct, what happens to the $v_i$’th entry of row $u$ and $v$ is that it gets doubled for each vertex $u_i$ that gets split and stays unchanged for the vertices $u_i=u_i^1\in E^0$ that are regular. Let $E'$ be the graph ${\mathsf{E}}_{B'+I}$, and let $E_1'$ be the graph constructed using exactly the same outsplittings as in the graph above. Now it is clear that the graph we get from $E_1$ by adding row $u$ into row $v$ is exactly $E_1'$. Thus the general case now follows from the above.
\[rmk:rowAdd\] We can also use Proposition \[prop:rowAdd\] “backwards” to subtract rows in ${\mathsf{B}}_E$ ([*cf.*]{} Remark \[rmk:columnAdd\]).
Cuntz splice implies stable isomorphism {#sec:cuntzsplice}
=======================================
In this section, we show that the Cuntz splice gives stably isomorphic graph $C^*$-algebras. We first set up some notation.
\[notation:OnceAndTwice-new\] Let $\mathbf{E}_*$ and $\mathbf{E}_{**}$ denote the graphs: $$\begin{aligned}
\mathbf{E}_* \ = \ \ \ \ \xymatrix{
\bullet^{v_1} \ar@(ul,ur)[]^{e_{1}} \ar@/^/[r]^{e_{2}} & \bullet^{v_2} \ar@(ul,ur)[]^{e_{4}} \ar@/^/[l]^{e_{3}}
}\end{aligned}$$ $$\begin{aligned}
\mathbf{E}_{**} \ = \ \ \ \ \xymatrix{
\bullet^{ w_{4} } \ar@(ul,ur)[]^{f_{10}} \ar@/^/[r]^{ f_{9} } & \bullet^{ w_{3} } \ar@(ul,ur)[]^{f_{7}} \ar@/^/[r]^{ f_{6} } \ar@/^/[l]^{f_{8}} & \bullet^{w_1} \ar@(ul,ur)[]^{f_{1}} \ar@/^/[r]^{f_{2}} \ar@/^/[l]^{f_{5}}
& \bullet^{w_2} \ar@(ul,ur)[]^{f_{4}} \ar@/^/[l]^{f_{3}}
}\end{aligned}$$ The graph $\mathbf{E}_*$ is what we attach when we Cuntz splice. If we instead attach the graph $\mathbf{E}_{**}$, we have Cuntz spliced twice.
Let $E = ( E^{0}, E^{1} , r_{E}, s_{E} )$ be a graph and let $u$ be a vertex of $E$. Then $E_{u, -}$ can be described as follows (up to canonical isomorphism): $$\begin{aligned}
E_{u,-}^{0} &= E^{0} \sqcup \mathbf{E}_{*}^{0} \\
E_{u,-}^{1} &= E^{1} \sqcup \mathbf{E}_{*}^{1} \sqcup \{ d_1, d_2 \}\end{aligned}$$ with $r_{E_{u,-}} \vert_{E^{1}} = r_{E}$, $s_{E_{u,-}} \vert_{ E^{1} } = s_{E}$, $r_{E_{u,-}} \vert_{\mathbf{E}_{*}^{1}} = r_{\mathbf{E}_{*}}$, $s_{E_{u,-}} \vert_{\mathbf{E}_{*}^{1}} = s_{\mathbf{E}_{*}}$, and $$\begin{aligned}
s_{E_{u,-}}(d_1) &= u & r_{E_{u,-}}(d_1) &= v_{1} \\
s_{E_{u,-}}(d_2) &= v_1 & r_{E_{u,-}}(d_2) &= u.\end{aligned}$$ Moreover, $E_{u,--}$ can be described as follows (up to canonical isomorphism): $$\begin{aligned}
E_{u,--}^{0} &= E^{0} \sqcup \mathbf{E}_{**}^{0} \\
E_{u,--}^{1} &= E^{1} \sqcup \mathbf{E}_{**}^{1} \sqcup \{ d_1, d_2 \}\end{aligned}$$ with $r_{E_{u,--}} \vert_{E^{1}} = r_{E}$, $s_{E_{u,--}} \vert_{ E^{1} } = s_{E}$, $r_{E_{u,--}} \vert_{\mathbf{E}_{**}^{1}} = r_{\mathbf{E}_{**}}$, $s_{E_{u,--}} \vert_{\mathbf{E}_{**}^{1}} = s_{\mathbf{E}_{**}}$, and $$\begin{aligned}
s_{E_{u,--}}(d_1) &= u & r_{E_{u,--}}(d_1) &= w_{1} \\
s_{E_{u,--}}(d_2) &= w_1 & r_{E_{u,--}}(d_2) &= u.\end{aligned}$$
\[example:cuntz-splice-new\] Consider the graph $$\begin{aligned}
E \ = \ \ \ \ \xymatrix{
\bullet_{u} \ar@(dl,ul) \ar@(ur,dr)
}\end{aligned}$$ Then $$\begin{aligned}
E_{u,-} \ = \ \ \ \ \xymatrix{
\bullet^{v_1} \ar@/^/[d]^{d_2} \ar@(ul,ur)[]^{e_{1}} \ar@/^/[r]^{e_{2}} & \bullet^{v_2} \ar@(ul,ur)[]^{e_{4}} \ar@/^/[l]_{e_{3}} \\
\bullet_{u} \ar@/^/[u]^{d_1} \ar@(dl,ul) \ar@(ur,dr) &
}\end{aligned}$$ and $$\begin{aligned}
E_{u,--} \ = \ \ \ \ \xymatrix{
\bullet^{ w_{4} } \ar@(ul,ur)[]^{f_{10}} \ar@/^/[r]^{ f_{9} } &
\bullet^{ w_{3} } \ar@(ul,ur)[]^{f_{7}} \ar@/^/[r]^{ f_{6} } \ar@/^/[l]_{f_{8}} &
\bullet^{w_1} \ar@/^/[d]^{d_2} \ar@(ul,ur)[]^{f_{1}} \ar@/^/[r]^{f_{2}} \ar@/^/[l]_{f_{5}} &
\bullet^{w_2} \ar@(ul,ur)[]^{f_{4}} \ar@/^/[l]_{f_{3}} \\
& & \bullet_{u} \ar@/^/[u]^{d_1} \ar@(dl,ul) \ar@(ur,dr) &
}\end{aligned}$$
The strategy for obtaining the result is as follows. By [@MR1340839], the graph $C^*$-algebras $C^*(\mathbf{E}_*)$ and $C^*(\mathbf{E}_{**})$ are isomorphic. We first show in Proposition \[prop:csdouble-new\] that $C^*(\mathbf{E}_*)$ and $C^*(\mathbf{E}_{**})$ are still isomorphic if we do not enforce the summation relation at $v_1$ and $w_1$ respectively, by appealing to general classification results. In fact, we need to establish (Lemma \[lem:csmurray-new\]) that they are isomorphic in a way sending prescribed elements of the nonstable $K$-theory to other prescribed elements. Using this, we prove in Theorem \[t:cuntz-splice-1\] by use of Kirchberg’s Embedding Theorem that Cuntz splicing once and twice yields isomorphic graph $C^*$-algebras. Finally, we establish in Proposition \[prop:cuntzsplicetwice\] that the graph obtained by Cuntz splicing twice is move equivalent to the original, and the desired conclusion follows.
\[prop:csdouble-new\] The relative graph $C^*$-algebras (in the sense of Muhly-Tomforde [@MR2054981]) $C^*(\mathbf{E}_{*}, \{v_2\})$ and $C^*(\mathbf{E}_{**}, \{ w_2,w_3,w_4 \})$ are isomorphic.
Following [@MR2054981 Definition 3.6] we define a graph $$\begin{aligned}
(\mathbf{E}_*)_{\{v_2\}} \ = \ \ \ \ \xymatrix{
\bullet^{v_1} \ar[d]_{e_{1}'} \ar@(ul,ur)[]^{e_{1}} \ar@/^/[r]^{e_{2}} & \bullet^{v_2} \ar@(ul,ur)[]^{e_{4}} \ar@/^/[l]_{e_{3}} \ar@/^/[dl]^{e_{4}'} \\
\bullet_{v_1'} &
}\end{aligned}$$ Then by [@MR2054981 Theorem 3.7] we have that $C^*(\mathbf{E}_{*}, \{v_2\}) \cong C^*((\mathbf{E}_*)_{\{v_2\}})$. Similarly we define a graph $$\begin{aligned}
(\mathbf{E}_{**})_{\{w_2,w_3,w_4\}} \ = \ \ \ \ \xymatrix{
\bullet^{w_4} \ar@(ul,ur)[]^{f_{10}} \ar@/^/[r]^{ f_{9} } &
\bullet^{w_3} \ar@/_/[dr]_{f_6'} \ar@(ul,ur)[]^{f_{7}} \ar@/^/[r]^{ f_{6} } \ar@/^/[l]_{f_{8}} &
\bullet^{w_1} \ar[d]_{f_1'} \ar@(ul,ur)[]^{f_{1}} \ar@/^/[r]^{f_{2}} \ar@/^/[l]_{f_{5}} &
\bullet^{w_2} \ar@/^/[dl]^{f_3'} \ar@(ul,ur)[]^{f_{4}} \ar@/^/[l]_{f_{3}} \\
& & \bullet_{w_1'} &
}\end{aligned}$$ Using [@MR2054981 Theorem 3.7] again, we have that $C^*(\mathbf{E}_{**}, \{ w_2,w_3,w_4 \})$ is isomorphic to $C^*((\mathbf{E}_{**})_{\{w_2,w_3,w_4\}})$.
Both the graphs $(\mathbf{E}_*)_{\{v_2\}}$ and $(\mathbf{E}_{**})_{\{w_2,w_3,w_4\}}$ satisfy Condition (K). Using the well-developed theory of ideal structure and $K$-theory for graph $C^*$-algebras, we see that both have exactly one nontrivial ideal, that this ideal is the compact operators, and that their six-term exact sequences are $$\begin{aligned}
\xymatrix{
{\ensuremath{\mathbb{Z}}\xspace}\langle v_1' \rangle \ar[r] & {\ensuremath{\mathbb{Z}}\xspace}\ar[r] & 0 \ar[d] \\
0 \ar[u] & \ar[l] 0 & \ar[l] 0
}
\ \ \ \ & \ \ \ \
\xymatrix{
{\ensuremath{\mathbb{Z}}\xspace}\langle w_1' \rangle \ar[r] & {\ensuremath{\mathbb{Z}}\xspace}\ar[r] & 0 \ar[d] \\
0 \ar[u] & \ar[l] 0 & \ar[l] 0
}\end{aligned}$$
Furthermore, in $K_0(C^*((\mathbf{E}_*)_{\{v_2\}}))$ we have $$\begin{aligned}
[p_{v_1}] &= -[p_{v_1'}] = [p_{v_2}],\end{aligned}$$ and in $K_0(C^*((\mathbf{E}_{**})_{\{w_2,w_3,w_4\}}))$ we have $$\begin{aligned}
[p_{w_1}] &= -[p_{w_1'}] = [p_{w_2}], \\
[p_{w_3}] &= 0 = [p_{w_4}].\end{aligned}$$ Therefore the class of the unit is $-[p_{v_1'}]$ and $-[p_{w_1'}]$, respectively. It now follows from [@MR1396721 Theorem 2] (see also [@arXiv:1301.7695v1 Corollary 4.20]) that $C^*((\mathbf{E}_*)_{\{v_2\}}) \cong C^*((\mathbf{E}_{**})_{\{w_2,w_3,w_4\}})$ and hence that $C^*(\mathbf{E}_{*}, \{v_2\}) \cong C^*(\mathbf{E}_{**}, \{ w_2,w_3,w_4 \})$.
We also need a technical result about the projections in $\mathcal{E} = C^*(\mathbf{E}_{*}, \{v_2\})$.
\[lem:csmurray-new\] Let $\mathcal{E} = C^*(\mathbf{E}_{*}, \{v_2\})$ and choose an isomorphism between $\mathcal{E}$ and $C^*(\mathbf{E}_{**}, \{ w_2,w_3,w_4 \})$ according to the previous proposition. Let $p_{v_1}$, $p_{v_2}$, $s_{e_1}$, $s_{e_2}$, $s_{e_3}$, $s_{e_4}$ be the canonical generators of $C^*(\mathbf{E}_{*}, \{v_2\})=\mathcal{E}$ and let $p_{w_1}$, $p_{w_2}$, $p_{w_3}$, $p_{w_4}$, $s_{f_1}$, $s_{f_2}, \ldots, s_{f_{10}}$ denote the image of the canonical generators of $C^*(\mathbf{E}_{**}, \{ w_2,w_3,w_4 \})$ in $\mathcal{E}$ under the chosen isomorphism. Then $$\begin{aligned}
s_{e_1} s_{e_1}^* + s_{e_2} s_{e_2}^* &\sim s_{f_1} s_{f_1}^* + s_{f_2} s_{f_2}^* +s_{f_5} s_{f_5}^*, \\
p_{v_1} - \left( s_{e_1} s_{e_1}^* + s_{e_2} s_{e_2}^* \right) &\sim p_{w_1} - \left( s_{f_1} s_{f_1}^* + s_{f_2} s_{f_2}^* +s_{f_5} s_{f_5}^* \right), \\
1_\mathcal{E}-p_{v_1} = p_{v_2} &\sim p_{w_2}+p_{w_3}+p_{w_4}=1_\mathcal{E}-p_{w_1}\end{aligned}$$ in $\mathcal{E}$, where $\sim$ denotes Murray-von Neumann equivalence. Thus there exists a unitary $z_0$ in $\mathcal{E}$ such that $$\begin{aligned}
z_0\left(s_{e_1} s_{e_1}^* + s_{e_2} s_{e_2}^*\right)z_0^* &= s_{f_1} s_{f_1}^* + s_{f_2} s_{f_2}^* +s_{f_5} s_{f_5}^*, \\
z_0\left(p_{v_1} - \left( s_{e_1} s_{e_1}^* + s_{e_2} s_{e_2}^* \right)\right)z_0^* &= p_{w_1} - \left( s_{f_1} s_{f_1}^* + s_{f_2} s_{f_2}^* +s_{f_5} s_{f_5}^* \right), \\
z_0p_{v_1}z_0^* &= p_{w_1} \\
z_0p_{v_2}z_0^* &= p_{w_2}+p_{w_3}+p_{w_4}.\end{aligned}$$
By [@MR2310414 Corollary 7.2], row-finite graph $C^*$-algebras have stable weak cancellation, so by [@MR2054981 Theorem 3.7], $\mathcal{E}$ has stable weak cancellation. Hence any two projections in $\mathcal{E}$ are Murray-von Neumann equivalent if they generate the same ideal and have the same $K$-theory class.
As in the proof of Proposition \[prop:csdouble-new\], we will use [@MR2054981 Theorem 3.7] to realize our relative graph $C^*$-algebras as graph $C^*$-algebras of the graphs $(\mathbf{E}_*)_{\{v_2\}}$ and $(\mathbf{E}_{**})_{\{w_2,w_3,w_4\}}$. Denote the image of the vertex projections of $C^*((\mathbf{E}_*)_{\{v_2\}})$ inside $\mathcal{E}$ under this isomorphism by $q_{v_1}, q_{v_2}, q_{v_1'}$ and denote the image of the vertex projections of $C^*((\mathbf{E}_{**})_{\{w_2,w_3,w_4\}})$ inside $\mathcal{E}$ under the isomorphisms $C^*((\mathbf{E}_{**})_{\{w_2,w_3,w_4\}})\cong C^*(\mathbf{E}_{**}, \{ w_2,w_3,w_4 \}) \cong\mathcal{E}$ by $q_{w_1}, q_{w_2}, q_{w_3}, q_{w_4}, q_{w_1'}$. Using the description of the isomorphism in [@MR2054981 Theorem 3.7], we see that we need to show that $q_{v_1} \sim q_{w_1}$, $q_{v_1'} \sim q_{w_1'}$ and $q_{v_2}\sim q_{w_2}+q_{w_3}+q_{w_4}$.
Since $(\mathbf{E}_*)_{\{v_2\}}^0$ satisfies Condition (K) and the smallest hereditary and saturated subset containing $v_1$ is all of $(\mathbf{E}_*)_{\{v_2\}}^0$ we have that $q_{v_1}$ is a full projection ([@MR1988256 Theorem 4.4]). Similarly $q_{w_1}$, $q_{v_2}$ and $q_{w_2}+q_{w_3}+q_{w_4}$ are full. In $K_0(\mathcal{E})$ we have, using our calculations from the proof of Proposition \[prop:csdouble-new\], that $$\begin{aligned}
[q_{v_1}] &= [1] = [q_{w_1}], \\
[q_{v_2}] &= [1] = [q_{w_2}]=[q_{w_2}]+[q_{w_3}]+[q_{w_4}].\end{aligned}$$ So by stable weak cancellation $q_{v_1} \sim q_{w_1}$ and $q_{v_2}\sim q_{w_2}+q_{w_3}+q_{w_4}$.
Both $q_{v_1'}$ and $q_{w_1'}$ generate the only nontrivial ideal $\mathfrak{I}$ of $\mathcal{E}$ ([@MR1988256 Theorem 4.4]). Since that ideal is isomorphic to the compact operators and both $[q_{v_1'}]$ and $[q_{w_1'}]$ are positive generators of $K_0(\mathfrak{I})\cong K_0(\mathbb{K})\cong {\ensuremath{\mathbb{Z}}\xspace}$, they must both represent the same class in $K_0(\mathfrak{I})$, and thus also in $K_0(\mathcal{E})$. Therefore $q_{v_1'} \sim q_{w_1'}$.
Let $u$, $v$ and $w$ be partial isometries realizing the Murray-von Neumann equivalences. Then $z_0=u+v+w$ is a unitary that satisfies the required conditions.
\[t:cuntz-splice-1\] Let $E$ be a graph and let $u$ be a vertex of $E$. Then $C^{*}(E_{u,-}) \cong C^{*}(E_{u,--})$.
As above, we let $\mathcal{E}$ denote the $C^*$-algebra $C^*(\mathbf{E}_{*}, \{v_2\})$, and we choose an isomorphism between $\mathcal{E}$ and $C^*(\mathbf{E}_{**}, \{ w_2,w_3,w_4 \})$, which exists according to Proposition \[prop:csdouble-new\].
Since $C^*(E_{u,-})$ and $\mathcal{E}$ are separable, nuclear $C^*$-algebras, by the Kirchberg Embedding Theorem [@MR1780426], there exists an injective $*$-homomorphism $$C^*(E_{u,-}) \oplus \mathcal{E} \hookrightarrow \mathcal{O}_2.$$ We will suppress this embedding in our notation.
In $\mathcal{O}_2$, we denote the vertex projections and the partial isometries coming from $C^*(E_{u,-})$ by $p_v, v\in E_{u,-}^0$ and $s_e,e\in E_{u,-}^1$, respectively, and we denote the vertex projections and the partial isometries coming from $\mathcal{E}=C^*(\mathbf{E}_*, \{v_2\})$ by $p_1,p_2$ and $s_1, s_2, s_3, s_4$, respectively. Since we are dealing with an embedding, it follows from Szyma[ń]{}ski’s Generalized Cuntz-Krieger Uniqueness Theorem ([@MR1914564 Theorem 1.2]) that for any vertex-simple cycle $\alpha_1 \alpha_2 \cdots \alpha_n$ in $E_{u,-}$ without any exit, we have that the spectrum of $s_{\alpha_1} s_{\alpha_2} \cdots s_{\alpha_n}$ contains the entire unit circle.
We will define a new Cuntz-Krieger $E_{u,-}$-family. We let $$\begin{aligned}
q_v&=p_v&&\text{for each }v\in E^0, \\
q_{v_1}&=p_1, \\
q_{v_2}&=p_2.\end{aligned}$$ Since any two nonzero projections in $\mathcal{O}_2$ are Murray-von Neumann equivalent, we can choose partial isometries $x_1, x_2 \in \mathcal{O}_2$ such that $$\begin{aligned}
x_1 x_1^* &= s_{d_1} s_{d_1}^* & x_1^* x_1 &= p_1 \\
x_2 x_2^* &= p_1 - (s_1 s_1^* + s_2 s_2^*) & x_2^* x_2 &= p_u.\end{aligned}$$ We let $$\begin{aligned}
t_e&=s_e&&\text{for each }e\in E^1, \\
t_{e_i}&=s_i&&\text{for each }i=1,2,3,4, \\
t_{d_1}&=x_1, \\
t_{d_2}&=x_2. \end{aligned}$$
By construction ${\left\{ q_v \;\middle|\; v \in E_{u,-}^0 \right\}}$ is a set of orthogonal projections, and ${\left\{ t_e \;\middle|\; e \in E_{u,-}^1 \right\}}$ a set of partial isometries. Furthermore, by choice of ${\left\{ t_e \;\middle|\; e \neq d_1,d_2 \right\}}$ the relations are clearly satisfied at all vertices other than $v_1$ and $u$. The choice of $x_1, x_2$ ensures that the relations hold at $u$ and $v_1$ as well. Hence $\{q_v, t_e\}$ does indeed form a Cuntz-Krieger $E_{u,-}$-family. Denote this family by $\mathcal{S}$.
Using the universal property of graph $C^*$-algebras, we get a $*$-homomorphism from $C^*(E_{u,-})$ onto $C^*(\mathcal{S}) \subseteq \mathcal{O}_2$. Let $\alpha_1 \alpha_2 \cdots \alpha_n$ be a vertex-simple cycle in $E_{u,-}$ without any exit. Since $u$ is where the Cuntz splice is glued on, no vertex-simple cycle without any exit uses edges connected to $u, v_1$ or $v_2$. Hence $t_{\alpha_1} t_{\alpha_2} \cdots t_{\alpha_n} = s_{\alpha_1} s_{\alpha_2} \cdots s_{\alpha_n}$ and so its spectrum contains the entire unit circle. It now follows from [@MR1914564 Theorem 1.2] that the $*$-homomorphism from $C^*(E_{u,-})$ to $C^*(\mathcal{S})$ is in fact a $*$-isomorphism.
Let ${\ensuremath{\mathfrak{A}}\xspace}_0$ be the $C^*$-algebra generated by ${\left\{ p_v \;\middle|\; v \in E^0 \right\}}$, and let ${\ensuremath{\mathfrak{A}}\xspace}$ be the subalgebra of $\mathcal{O}_2$ generated by ${\left\{ p_v \;\middle|\; v \in E^0 \right\}}$ and $\mathcal{E}$. Note that ${\ensuremath{\mathfrak{A}}\xspace}={\ensuremath{\mathfrak{A}}\xspace}_0\oplus\mathcal{E}$.
Let us denote by ${\left\{ r_{w_i}, y_{f_j} \;\middle|\; i=1,2,3,4, j = 1,2,\ldots, 10 \right\}}$ the image of the canonical generators of $C^*(\mathbf{E}_{**}, \{ w_2,w_3,w_4 \})$ in $\mathcal{O}_2$ under the chosen isomorphism between $C^*(\mathbf{E}_{**}, \{ w_2,w_3,w_4 \})$ and $\mathcal{E}$ composed with the embedding into $\mathcal{O}_2$.
By Lemma \[lem:csmurray-new\], certain projections in $\mathcal{E}$ are Murray-von Neumann equivalent, so choose a unitary $z_0 \in \mathcal{E}$ according to this lemma, and set $z=z_0+\sum_{v\in E^0}p_v \in \mathcal{M}({\ensuremath{\mathfrak{A}}\xspace})$. Clearly $z$ is a unitary in $\mathcal{M}( {\ensuremath{\mathfrak{A}}\xspace})$. Since the approximate identity of ${\ensuremath{\mathfrak{A}}\xspace}$ given by $$\left\{ \sum_{ k = 1 }^n p_{v_k} + 1_\mathcal{E} \right\}_{n \in {\ensuremath{\mathbb{N}}\xspace}},$$ where ${\left\{ p_v \;\middle|\; v \in E^0 \right\}} = \{ p_{v_1} , p_{v_2} , \dots \}$, is an approximate identity of $C^*( \mathcal{S} )$, we have a canonical unital $^*$-homomorphism from $\mathcal{M} ( {\ensuremath{\mathfrak{A}}\xspace})$ to $\mathcal{M} ( C^* (\mathcal{S} ) )$ which, when restricted to ${\ensuremath{\mathfrak{A}}\xspace}$, gives the embedding of ${\ensuremath{\mathfrak{A}}\xspace}$ into $C^*( \mathcal{S})$. So we can consider $z$ as a unitary in $\mathcal{M} ( C^* ( \mathcal{S} ) )$. Hence, for all $x \in C^*(\mathcal{S})$, we have that $z x$ and $xz$ are elements of $C^* ( \mathcal{S} )$. By construction of $z$, we have that $$\begin{aligned}
z q_v &= q_vz=q_v, \text{ for all } v \in E^0, \\
z t_e &= t_ez=t_e, \text{ for all } e \in E^1, \\
z \left( t_{e_1} t_{e_1}^* + t_{e_2} t_{e_2}^* \right) z^* &= y_{f_1} y_{f_1}^* + y_{f_2} y_{f_2}^* +y_{f_5} y_{f_5}^*, \\
z \left( q_{v_1} - \left( t_{e_1} t_{e_1}^* + t_{e_2} t_{e_2}^* \right) \right) z^* &= r_{w_1} - \left( y_{f_1} y_{f_1}^* + y_{f_2} y_{f_2}^* +y_{f_5} y_{f_5}^* \right), \\
z q_{v_1} z^* &= r_{w_1}, \\
z q_{v_2} z^* &= r_{w_2}+r_{w_3}+r_{w_4}.\end{aligned}$$
We will now define a Cuntz-Krieger $E_{u,--}$-family in $\mathcal{O}_2$. We let $$\begin{aligned}
P_v &= q_v = p_v &&\text{for each }v\in E^0, \\
P_{w_i}&=r_{w_i} &&\text{for each }i=1,2,3,4.\end{aligned}$$ Moreover, we let $$\begin{aligned}
S_e&=t_e=s_e &&\text{for each }e\in E^1, \\
S_{f_i}&=y_{f_i} &&\text{for each }i=1,2,\ldots,10, \\
S_{d_1}&=zt_{d_1}z^*=zx_1z^*,\\
S_{d_2}&=zt_{d_2}z^*=zx_2z^*.\end{aligned}$$ Denote this family by $\mathcal{T}$.
By construction ${\left\{ P_v \;\middle|\; v \in E_{u,--}^0 \right\}}$ is a set of orthogonal projections, and ${\left\{ S_e \;\middle|\; e \in E_{u,--}^1 \right\}}$ a set of partial isometries satisfying $$\begin{aligned}
S_e^*S_e&=s_e^*s_e=p_{r(e)},
& S_eS_e^*&=s_es_e^*, \\
S_{f_i}^*S_{f_i}&=y_{f_i}^*y_{f_i}=r_{r(f_i)}, &
S_{f_i}S_{f_i}^*&=y_{f_i}y_{f_i}^*, \\
S_{d_1}^*S_{d_1}&=r_{w_1}, & S_{d_1}S_{d_1}^*&=s_{d_1}s_{d_1}^*, \\
S_{d_2}^*S_{d_2}&=p_u, & S_{d_2}S_{d_2}^*&=r_{w_1}-\left(y_{f_1}y_{f_1}^*+y_{f_2}y_{f_2}^*+y_{f_5}y_{f_5}^*\right), \end{aligned}$$ for all $e\in E^1$ and $i=1,2,\ldots,10$. From this, it is clear that $\mathcal{T}$ will satisfy the Cuntz-Krieger relations at all vertices in $E^0$. Similarly, we see that since ${\left\{ r_{w_i}, y_{f_j} \;\middle|\; i=1,2,3,4, j = 1,2,\ldots, 10 \right\}}$ is a Cuntz-Krieger $(\mathbf{E}_{**}, \{ w_2,w_3,w_4 \})$-family, $\mathcal{T}$ will satisfy the relations at the vertices $w_2, w_3, w_4$. It only remains to check the summation relation at $w_1$, for that we compute $$\begin{aligned}
\smash{\sum_{s_{E_{u,--}}(e) = w_1} S_e S_e^*} &= S_{f_1} S_{f_1}^* + S_{f_2} S_{f_2}^* + S_{f_5} S_{f_5}^* + S_{d_2} S_{d_2}^* \\
&= y_{f_1} y_{f_1}^* + y_{f_2} y_{f_2}^* +y_{f_5} y_{f_5}^* + r_{w_1}-\left(y_{f_1}y_{f_1}^*+y_{f_2}y_{f_2}^*+y_{f_5}y_{f_5}^*\right) \\
&= r_{w_1} = P_{w_1}.\end{aligned}$$ Hence $\mathcal{T}$ is a Cuntz-Krieger $E_{u,--}$-family.
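For completeness, note that the relations for $S_{d_1}$ and $S_{d_2}$ listed above follow directly from the properties of $z$: since $s_{d_1}s_{d_1}^*\leq p_u=q_u$, $zq_u=q_uz=q_u$, and $z_0$ is orthogonal to $p_u$, we have $$\begin{aligned}
S_{d_1}^*S_{d_1}&=zx_1^*x_1z^*=zq_{v_1}z^*=r_{w_1}, &
S_{d_1}S_{d_1}^*&=zs_{d_1}s_{d_1}^*z^*=s_{d_1}s_{d_1}^*, \\
S_{d_2}^*S_{d_2}&=zx_2^*x_2z^*=zq_uz^*=p_u, &
S_{d_2}S_{d_2}^*&=z\left(q_{v_1}-\left(t_{e_1}t_{e_1}^*+t_{e_2}t_{e_2}^*\right)\right)z^*=r_{w_1}-\left(y_{f_1}y_{f_1}^*+y_{f_2}y_{f_2}^*+y_{f_5}y_{f_5}^*\right).\end{aligned}$$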
The universal property of $C^*(E_{u,--})$ provides a surjective $*$-homomorphism from $C^*(E_{u,--})$ to $C^*(\mathcal{T}) \subseteq \mathcal{O}_2$. Let $\alpha_1 \alpha_2 \cdots \alpha_n$ be a vertex-simple cycle in $E_{u,--}$ without any exit. We see that all the edges $\alpha_i$ must be in $E^1$, and hence we have $$\begin{aligned}
S_{\alpha_1} S_{\alpha_2} \cdots S_{\alpha_n} = t_{\alpha_1}t_{\alpha_2} \cdots t_{\alpha_n} = s_{\alpha_1} s_{\alpha_2} \cdots s_{\alpha_n} \end{aligned}$$ and so its spectrum contains the entire unit circle. It now follows from [@MR1914564 Theorem 1.2] that $C^*(E_{u,--})$ is isomorphic to $C^*(\mathcal{T})$.
Recall that $z \in \mathcal{M}( C^* ( \mathcal{S} ) )$. Therefore, $\mathcal{T} \subseteq C^*(\mathcal{S})$ since ${\ensuremath{\mathfrak{A}}\xspace}\subseteq C^*(\mathcal{S})$ and since $r_{w_i}, y_{f_j}\in \mathcal{E} \subseteq C^*(\mathcal{S})$, for $i=1,2,3,4$, $j = 1,2,\ldots, 10$. So $C^*(\mathcal{T}) \subseteq C^*(\mathcal{S})$.
Since the approximate identity of ${\ensuremath{\mathfrak{A}}\xspace}$ given by $$\left\{ \sum_{ k = 1 }^n p_{v_k} + 1_\mathcal{E} \right\}_{n \in {\ensuremath{\mathbb{N}}\xspace}},$$ where ${\left\{ p_v \;\middle|\; v \in E^0 \right\}} = \{ p_{v_1} , p_{v_2} , \dots \}$, is an approximate identity of $C^*( \mathcal{T} )$, we get that for all $x \in C^*(\mathcal{T})$, $z x z^*$ and $z^* x z$ are elements of $C^* ( \mathcal{T} )$. But since [$\mathfrak{A}$]{}is also contained in $C^*(\mathcal{T})$ and $\mathcal{E} \subseteq C^*(\mathcal{T})$, we have that $\mathcal{S} \subseteq C^*(\mathcal{T})$, and hence $C^*(\mathcal{S}) \subseteq C^*(\mathcal{T})$. Therefore $$C^*(E_{u,-}) \cong C^*(\mathcal{S}) = C^*(\mathcal{T}) \cong C^*(E_{u,--}).\qedhere$$
The next two results will show that $E {\ensuremath{\sim_{M\negthinspace E}}\xspace}E_{u, -- }$ for a row-finite graph $E$ and a vertex $u \in E^0$ which supports two distinct return paths. This will be enough to prove our main result since by [@arXiv:1510.06757v2 Lemma 5.1], there exists a row-finite graph $F$ and a vertex $v$ supporting two distinct return paths such that $C^* ( E_{u, - } ) \otimes {\ensuremath{\mathbb{K}}\xspace}\cong C^* ( F_{v, - } ) \otimes {\ensuremath{\mathbb{K}}\xspace}$ and $C^* ( E ) \otimes {\ensuremath{\mathbb{K}}\xspace}\cong C^* ( F ) \otimes {\ensuremath{\mathbb{K}}\xspace}$.
\[prop:cuntzSpliceSetup\]Let $E$ be a row-finite graph and let $u$ be a vertex which supports two distinct return paths. Then there exists a row-finite graph $F$ and a vertex $v \in F^0$ which supports two distinct loops such that $E {\ensuremath{\sim_{M\negthinspace E}}\xspace}F$ and $E_{u, -- } {\ensuremath{\sim_{M\negthinspace E}}\xspace}F_{v, -- }$.
Throughout the proof, we will freely use the following fact: Let $G$ be a graph and let $w$ be a vertex and let $w' \neq w$ be a regular vertex that does not support a loop. Let $G'$ be the resulting graph after collapsing $w'$. Then $G {\ensuremath{\sim_{M\negthinspace E}}\xspace}G'$ and $G_{w, -- } {\ensuremath{\sim_{M\negthinspace E}}\xspace}G'_{w, -- }$.
Suppose $u \in E^0$ supports two loops. Then set $F = E$ and $v = u$. Suppose $u$ does not support two loops. Then there exists a return path $\mu = e_1 e_2 \cdots e_n$ with $n \geq 2$. Starting at $r(e_1)$, if $r( e_1 )$ does not support a loop, we collapse the vertex $r( e_1)$. Doing this reduces the length of $\mu$. Note that we may also have added a loop at $u$. Continuing this procedure, we obtain a graph $E'$ with $u$ in $(E')^0$ such that $E {\ensuremath{\sim_{M\negthinspace E}}\xspace}E'$, $E_{u, -- } {\ensuremath{\sim_{M\negthinspace E}}\xspace}E'_{u, -- }$, and either $u$ supports two loops or $u$ supports a return path $\nu = f_1 f_2 \dots f_m$ with $m \geq 2$, with $r( f_1 )$ supporting a loop.
If $u$ supports two loops, set $F = E'$ and $v = u$. Suppose $u$ supports a return path $\nu = f_1 f_2 \dots f_m$ with $m \geq 2$, with $r( f_1 )$ supporting a loop. Then by Proposition \[prop:columnAdd\], we add the $r( f_1 )$’th column to the $u$’th column twice, to get a graph $F$ with $u \in F^0$ supporting two loops such that $F {\ensuremath{\sim_{M\negthinspace E}}\xspace}E'$. Note that we may perform the same matrix operations to ${\mathsf{B}}_{E'_{u, -- } }$ and get that $E'_{u, -- } {\ensuremath{\sim_{M\negthinspace E}}\xspace}F_{u, -- }$. Set $v = u$.
We have just obtained the desired graph $F$ and the desired vertex $v \in F^0$ since $E {\ensuremath{\sim_{M\negthinspace E}}\xspace}E' {\ensuremath{\sim_{M\negthinspace E}}\xspace}F$ and $E_{u, -- } {\ensuremath{\sim_{M\negthinspace E}}\xspace}E'_{u , -- } {\ensuremath{\sim_{M\negthinspace E}}\xspace}F_{v, -- }$.
We now show that performing the Cuntz splice twice is a legal move for a row-finite graph.
\[prop:cuntzsplicetwice\] Let $E$ be a row-finite graph and let $v$ be a vertex that supports at least two distinct return paths. Then $E {\ensuremath{\sim_{M\negthinspace E}}\xspace}E_{v,--}$.
According to Proposition \[prop:cuntzSpliceSetup\], we can assume that $E$ satisfies the conditions of that proposition — so we assume that $v$ is a regular vertex that supports at least two loops.
For a given matrix size $N \in {\ensuremath{\mathbb{N}}\xspace}\cup \{ \infty \}$ and $i,j\in\{1,2,\ldots,N\}$, we let $E_{(i,j)}$ denote the $N\times N$ matrix that is equal to the identity matrix everywhere except for the $(i,j)$’th entry, which is $1$. If $B$ is an $N\times N$ matrix, then $E_{(i,j)}B$ is the matrix obtained from $B$ by adding the $j$’th row to the $i$’th row, and $BE_{(i,j)}$ is the matrix obtained from $B$ by adding the $i$’th column to the $j$’th column. Using $E_{(i,j)}^{-1}$ instead yields subtraction. In what follows we will make extensive use of the results from Section \[addmoves\], but we do so implicitly since we feel it will only muddle the exposition if we add all the references in.
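As a small aid for the reader (our addition, not part of the original argument), the following Python/NumPy lines illustrate this convention, which is easy to get backwards; a more complete numerical check of the manipulations in this proof is sketched after the proof.

```python
import numpy as np

def elem(n, i, j):
    """The matrix E_{(i,j)}: the n x n identity with an extra 1 in entry (i, j) (1-based indices)."""
    M = np.eye(n, dtype=int)
    M[i - 1, j - 1] += 1
    return M

B = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# E_{(3,2)} B adds the 2nd row to the 3rd row ...
assert np.array_equal(elem(3, 3, 2) @ B, [[1, 2, 3], [4, 5, 6], [11, 13, 15]])
# ... while B E_{(3,2)} adds the 3rd column to the 2nd column.
assert np.array_equal(B @ elem(3, 3, 2), [[1, 5, 3], [4, 11, 6], [7, 17, 9]])
```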
Note that ${\mathsf{B}}_{E_{v,--}}$ can be written as $$B_1=
\begin{pmatrix}
\begin{pmatrix}
0 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix} &
\begin{pmatrix}
0 & 0 & \cdots \\
0 & 0 & \cdots \\
1 & 0 & \cdots \\
0 & 0 & \cdots
\end{pmatrix} \\
\begin{pmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0\\
\vdots &\vdots &\vdots &\vdots \\
\end{pmatrix} &
{\mathsf{B}}_E
\end{pmatrix}$$
Now let $B_2=E_{(3,2)}B_1$ and $B_3=B_2E_{(2,1)}^{-1}$. Then $B_1 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}B_2 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}B_3 + I$. We have that $$B_3 =
\begin{pmatrix}
\begin{pmatrix}
-1& 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix} & \begin{pmatrix}
0 & 0 & \cdots \\
0 & 0 &\cdots \\
1 & 0 & \cdots \\
0 & 0 & \cdots
\end{pmatrix} \\
\begin{pmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 \\
\vdots &\vdots &\vdots &\vdots \\
\end{pmatrix} &
{\mathsf{B}}_E
\end{pmatrix}$$ The $1$st vertex in ${\mathsf{E}}_{B_3 + I}$ does not support a loop, so it can be collapsed yielding $$B_4 =
\begin{pmatrix}
\begin{pmatrix}
1 & 1 & 0 \\
1 & 1 & 1 \\
0 & 1 & 0
\end{pmatrix} & \begin{pmatrix}
0 & 0 & \cdots \\
1 & 0 & \cdots \\
0 & 0 & \cdots
\end{pmatrix} \\
\begin{pmatrix}
0 & 1 & 0 \\
0 & 0 & 0 \\
\vdots &\vdots &\vdots \\
\end{pmatrix} &
{\mathsf{B}}_E
\end{pmatrix}$$
We have $B_4 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}B_3 + I$. Now we let $B_5=E_{(2,3)}^{-1}B_4$, $B_6=E_{(4,1)}B_5$, $B_7=E_{(4,3)}^{-1}E_{(4,3)}^{-1}B_6$, $B_8=E_{(1,2)}B_7$ and $B_9=B_8E_{(2,3)}^{-1}$. We then have $B_4 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}B_5 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}B_6 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}B_7 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}B_8 + I{\ensuremath{\sim_{M\negthinspace E}}\xspace}B_9 + I$. We have that $$B_9 =
\begin{pmatrix}
\begin{pmatrix}
2 & 1 & 0 \\
1 & 0 & 1 \\
0 & 1 & -1
\end{pmatrix} & \begin{pmatrix}
1 & 0 & \cdots \\
1 & 0 & \cdots \\
0 & 0 & \cdots
\end{pmatrix} \\
\begin{pmatrix}
1 & 0 & 0 \\
0 & 0 & 0 \\
\vdots &\vdots &\vdots \\
\end{pmatrix} &
{\mathsf{B}}_E
\end{pmatrix}$$ In ${\mathsf{E}}_{B_9 + I}$ the $3$rd vertex does not support a loop, so it can be collapsed to yield $$B_{10} =
\begin{pmatrix}
\begin{pmatrix}
2 & 1 \\
1 & 1
\end{pmatrix} & \begin{pmatrix}
1 & 0 & \cdots \\
1 & 0 & \cdots \\
\end{pmatrix} \\
\begin{pmatrix}
1 & 0 \\
0 & 0 \\
\vdots &\vdots \\
\end{pmatrix} &
{\mathsf{B}}_E
\end{pmatrix}$$ with $B_9 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}B_{10} + I$.
Now we look at the graph $E$ again, and let ${\mathsf{B}}_E = (b_{ij})$. Since the vertex $v$ (number $1$) has at least two loops, we have $b_{11}\geq 1$. Now we can insplit by partitioning $r^{-1}(v)$ into two sets: one consisting of a single loop based at $v$, and the other containing the remaining edges. In the resulting graph, $v$ is split into two vertices $v^1$ and $v^2$; let $E'$ denote the rest of the graph. The vertex $v^1$ has the same edges in and out of $E'$ as $v$ had, but it has only $b_{11}$ loops. There is one edge from $v^1$ to $v^2$, the vertex $v^2$ has one loop, and there are $b_{11}$ edges from $v^2$ to $v^1$, as well as all the same edges going from $v^2$ into $E'$ as originally from $v$. Use the inverse collapse move to add a new vertex $u$ to the middle of the edge from $v^1$ to $v^2$ and call the resulting graph $F$. Label the vertices such that $v^2$, $u$ and $v^1$ are the $1$st, $2$nd and $3$rd vertices; then ${\mathsf{B}}_F$ is: $${\mathsf{B}}_F =
\begin{pmatrix}
\begin{pmatrix}
0 & 0 \\
1 & -1
\end{pmatrix} & \begin{pmatrix}
b_{11} & b_{12} & \cdots \\
0 & 0 & \cdots \\
\end{pmatrix} \\
\begin{pmatrix}
0 & 1 \\
0 & 0 \\
\vdots &\vdots \\
\end{pmatrix} &
\widetilde{B}
\end{pmatrix}$$ where $\widetilde{B}$ is ${\mathsf{B}}_E$ except for on the $(1,1)$’th entry, which is $b_{11}-1$. Note that $b_{11}-1\geq 0$, so that there is still a loop based at the $3$rd vertex. Also, note that since $E$ is a row-finite graph, the $b_{1k}$’s are eventually zero. This is important since it allows us to do the following matrix manipulations. Let $C_2={\mathsf{B}}_F E_{(1,2)}E_{(1,2)}$, $C_3=E_{(1,2)}C_2$, $C_4=E_{(1,3)}^{-1}C_3$, $C_5=C_4E_{(2,3)}$ and $C_6=C_5E_{(1,2)}$. We have that $C_1 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}C_2 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}C_3 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}C_4 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}C_5 + I {\ensuremath{\sim_{M\negthinspace E}}\xspace}C_6 + I$. The matrix $C_6$ $$C_6 =
\begin{pmatrix}
\begin{pmatrix}
1 & 1 \\
1 & 2
\end{pmatrix} & \begin{pmatrix}
1 & 0 & \cdots \\
1 & 0 & \cdots \\
\end{pmatrix} \\
\begin{pmatrix}
0 & 1 \\
0 & 0 \\
\vdots &\vdots \\
\end{pmatrix} &
{\mathsf{B}}_E
\end{pmatrix}.$$ Therefore, $C_6$ is in fact equal to $B_{10}$ upon relabeling of the first two vertices, and thus it follows that $E{\ensuremath{\sim_{M\negthinspace E}}\xspace}E_{v,--}$.
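Although this is not needed for the proof, the whole chain of operations above is easy to verify numerically. The following Python/NumPy sketch (our illustration) runs through $B_1,\dots,B_{10}$ and $C_1,\dots,C_6$ for the toy case in which $E$ consists of a single vertex with three loops, so that ${\mathsf{B}}_E=(2)$ and $b_{11}=2$, and confirms that $C_6$ coincides with $B_{10}$ after exchanging the first two vertices.

```python
import numpy as np

def elem(n, i, j, sign=1):
    """E_{(i,j)} (sign=+1) or its inverse E_{(i,j)}^{-1} (sign=-1); 1-based indices, i != j."""
    M = np.eye(n, dtype=int)
    M[i - 1, j - 1] += sign
    return M

def collapse(B, k):
    """Collapse the k'th vertex (1-based) of E_{B+I}; the vertex must not support a loop.
    On adjacency matrices: A'(u, w) = A(u, w) + A(u, k) A(k, w) for u, w != k."""
    A = B + np.eye(B.shape[0], dtype=int)
    assert A[k - 1, k - 1] == 0, "the vertex supports a loop"
    keep = [m for m in range(B.shape[0]) if m != k - 1]
    A2 = A[np.ix_(keep, keep)] + np.outer(A[keep, k - 1], A[k - 1, keep])
    return A2 - np.eye(len(keep), dtype=int)

# Toy case: E has one vertex with three loops, so B_E = (2) and B_{E_{v,--}} is 5 x 5.
B1 = np.array([[0, 1, 0, 0, 0],
               [1, 0, 1, 0, 0],
               [0, 1, 0, 1, 1],
               [0, 0, 1, 0, 0],
               [0, 0, 1, 0, 2]])

B3 = elem(5, 3, 2) @ B1 @ elem(5, 2, 1, -1)            # B_2 = E_{(3,2)} B_1,  B_3 = B_2 E_{(2,1)}^{-1}
B4 = collapse(B3, 1)                                    # collapse the 1st vertex
B9 = (elem(4, 1, 2) @ elem(4, 4, 3, -1) @ elem(4, 4, 3, -1) @ elem(4, 4, 1)
      @ elem(4, 2, 3, -1) @ B4 @ elem(4, 2, 3, -1))     # B_5, ..., B_9
B10 = collapse(B9, 3)                                   # collapse the 3rd vertex

# F is obtained from E by insplitting at v and inserting u; with b_11 = 2 its matrix is 3 x 3.
C1 = np.array([[0,  0, 2],
               [1, -1, 0],
               [0,  1, 1]])
C6 = (elem(3, 1, 3, -1) @ elem(3, 1, 2) @ C1
      @ elem(3, 1, 2) @ elem(3, 1, 2) @ elem(3, 2, 3) @ elem(3, 1, 2))

# Exchange the first two vertices of C_6 and compare with B_10.
P = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
assert np.array_equal(P @ C6 @ P.T, B10)
print("C_6 agrees with B_10 after relabeling:")
print(B10)
```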
Thus we have the following fundamental result.
\[theorem:cuntzspliceinvariant\] Let $E$ be a graph and let $v$ be a vertex that supports at least two distinct return paths. Then $C^*(E)\otimes{\ensuremath{\mathbb{K}}\xspace}\cong C^*(E_{v,-})\otimes{\ensuremath{\mathbb{K}}\xspace}$.
By [@arXiv:1510.06757v2 Lemma 5.1], we may assume that $E$ is a graph with no singular vertices, in particular, $E$ is a row-finite graph. By Theorem \[t:cuntz-splice-1\], $C^{*} ( E_{v, -} ) \cong C^{*} ( E_{v,- -} )$ and hence, $C^{*} ( E_{v, -} ) \otimes {\ensuremath{\mathbb{K}}\xspace}\cong C^{*} ( E_{v,- -} ) \otimes {\ensuremath{\mathbb{K}}\xspace}$. By Proposition \[prop:cuntzsplicetwice\], $C^{*} (E) \otimes {\ensuremath{\mathbb{K}}\xspace}\cong C^{*} ( E_{v, - - } ) \otimes {\ensuremath{\mathbb{K}}\xspace}$. Thus, $C^*(E)\otimes{\ensuremath{\mathbb{K}}\xspace}\cong C^*(E_{v,-})\otimes{\ensuremath{\mathbb{K}}\xspace}$.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was partially supported by the Danish National Research Foundation through the Centre for Symmetry and Deformation (DNRF92), by VILLUM FONDEN through the network for Experimental Mathematics in Number Theory, Operator Algebras, and Topology, by a grant from the Simons Foundation (\# 279369 to Efren Ruiz), and by the Danish Council for Independent Research | Natural Sciences.
Part of this work was initiated while all four authors were attending the research program *Classification of operator algebras: complexity, rigidity, and dynamics* at the Mittag-Leffler Institute, January–April 2016. The authors would also like to thank Aidan Sims for many fruitful discussions.
[ERRS16b]{}
Pere Ara, M. Angeles Moreno, and Enrique Pardo, *Nonstable [$K$]{}-theory for graph algebras*, Algebr. Represent. Theory **10** (2007), no. 2, 157–178, URL: <http://dx.doi.org/10.1007/s10468-006-9044-z>, [](http://dx.doi.org/10.1007/s10468-006-9044-z). [MR ]{}[2310414 (2008b:46094)]{}
Lawrence G. Brown and Marius Dadarlat, *Extensions of [$C^\ast$]{}-algebras and quasidiagonality*, J. London Math. Soc. (2) **53** (1996), no. 3, 582–600, URL: <http://dx.doi.org/10.1112/jlms/53.3.582>, [](http://dx.doi.org/10.1112/jlms/53.3.582). [MR ]{}[1396721 (97d:46086)]{}
Rasmus Bentmann, *[C]{}untz splice invariance for purely infinite graph algebras*, ArXiv e-prints (2015), to appear in Math. Scand., [](http://arxiv.org/abs/1510.06757v2).
Teresa Bates, Jeong Hee Hong, Iain Raeburn, and Wojciech Szyma[ń]{}ski, *The ideal structure of the [$C^*$]{}-algebras of infinite graphs*, Illinois J. Math. **46** (2002), no. 4, 1159–1176, URL: <http://projecteuclid.org/euclid.ijm/1258138472>. [MR ]{}[1988256 (2004i:46105)]{}
Teresa Bates and David Pask, *Flow equivalence of graph algebras*, Ergodic Theory Dynam. Systems **24** (2004), no. 2, 367–382, URL: <http://dx.doi.org/10.1017/S0143385703000348>, [](http://dx.doi.org/10.1017/S0143385703000348). [MR ]{}[2054048 (2004m:37019)]{}
Tyrone Crisp and Daniel Gow, *Contractible subgraphs and [M]{}orita equivalence of graph [$C^*$]{}-algebras*, Proc. Amer. Math. Soc. **134** (2006), no. 7, 2003–2013, URL: <http://dx.doi.org/10.1090/S0002-9939-06-08216-5>, [](http://dx.doi.org/10.1090/S0002-9939-06-08216-5). [MR ]{}[2215769 (2006k:46083)]{}
Joachim Cuntz and Wolfgang Krieger, *A class of [$C^{\ast} $]{}-algebras and topological [M]{}arkov chains*, Invent. Math. **56** (1980), no. 3, 251–268, URL: <http://dx.doi.org/10.1007/BF01390048>, [](http://dx.doi.org/10.1007/BF01390048). [MR ]{}[561974 (82f:46073a)]{}
Toke Meier Carlsen, Gunnar Restorff, and Efren Ruiz, *Strong classification of purely infinite [C]{}untz-[K]{}rieger algebras*, In preparation, 2016.
J. Cuntz, *A class of [$C^{\ast} $]{}-algebras and topological [M]{}arkov chains. [II]{}. [R]{}educible chains and the [E]{}xt-functor for [$C^{\ast}
$]{}-algebras*, Invent. Math. **63** (1981), no. 1, 25–40, URL: <http://dx.doi.org/10.1007/BF01389192>, [](http://dx.doi.org/10.1007/BF01389192). [MR ]{}[608527 (82f:46073b)]{}
[to3em]{}, *The classification problem for the [$C^\ast$]{}-algebras [${\mathcal O}_A$]{}*, Geometric methods in operator algebras ([K]{}yoto, 1983), Pitman Res. Notes Math. Ser., vol. 123, Longman Sci. Tech., Harlow, 1986, pp. 145–151. [MR ]{}[866492 (88a:46081)]{}
S[ø]{}ren Eilers, Gunnar Restorff, and Efren Ruiz, *Strong classification of extensions of classifiable [$C\sp*$]{}-algebras*, ArXiv e-prints (2013), [](http://arxiv.org/abs/1301.7695v1).
[to3em]{}, *Geometric classification of [$C^*$]{}-algebras over finite graphs*, in preparation, 2016.
S[ø]{}ren Eilers, Gunnar Restorff, Efren Ruiz, and Adam P. W. S[ø]{}rensen, *The complete classification of unital graph [$C^*$]{}-algebras: Geometric and strong*, in preparation, 2016.
Neal J. Fowler, Marcelo Laca, and Iain Raeburn, *The [$C^*$]{}-algebras of infinite graphs*, Proc. Amer. Math. Soc. **128** (2000), no. 8, 2319–2327, URL: <http://dx.doi.org/10.1090/S0002-9939-99-05378-2>, [](http://dx.doi.org/10.1090/S0002-9939-99-05378-2). [MR ]{}[1670363 (2000k:46079)]{}
John Franks, *Flow equivalence of subshifts of finite type*, Ergodic Theory Dynam. Systems **4** (1984), no. 1, 53–66, URL: <http://dx.doi.org/10.1017/S0143385700002261>, [](http://dx.doi.org/10.1017/S0143385700002261). [MR ]{}[758893 (86j:58078)]{}
James Gabe, *Classifying purely infinite [$C^*$]{}-algebras: [T]{}he obstruction class*, In preparation, 2016.
Jeong Hee Hong and Wojciech Szyma[ń]{}ski, *Purely infinite [C]{}untz-[K]{}rieger algebras of directed graphs*, Bull. London Math. Soc. **35** (2003), no. 5, 689–696, URL: <http://dx.doi.org/10.1112/S0024609303002364>, [](http://dx.doi.org/10.1112/S0024609303002364). [MR ]{}[1989499 (2005c:46097)]{}
Danrun Huang, *Flow equivalence of reducible shifts of finite type and [C]{}untz-[K]{}rieger algebras*, J. Reine Angew. Math. **462** (1995), 185–217, URL: <http://dx.doi.org/10.1515/crll.1995.462.185>, [](http://dx.doi.org/10.1515/crll.1995.462.185). [MR ]{}[1329907 (96m:46123)]{}
Eberhard Kirchberg, *Das nicht-kommutative [M]{}ichael-[A]{}uswahlprinzip und die [K]{}lassifikation nicht-einfacher [A]{}lgebren*, [$C^*$]{}-algebras ([M]{}ünster, 1999), Springer, Berlin, 2000, pp. 92–141. [MR ]{}[1796912 (2001m:46161)]{}
Eberhard Kirchberg and N. Christopher Phillips, *Embedding of exact [$C^*$]{}-algebras in the [C]{}untz algebra [$\mathcal O_2$]{}*, J. Reine Angew. Math. **525** (2000), 17–53, URL: <http://dx.doi.org/10.1515/crll.2000.065>, [](http://dx.doi.org/10.1515/crll.2000.065). [MR ]{}[1780426 (2001d:46086a)]{}
Paul S. Muhly and Mark Tomforde, *Adding tails to [$C^*$]{}-correspondences*, Doc. Math. **9** (2004), 79–106. [MR ]{}[2054981 (2005a:46117)]{}
Iain Raeburn, *Graph algebras*, CBMS Regional Conference Series in Mathematics, vol. 103, Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 2005. [MR ]{}[2135030 (2005k:46141)]{}
Gunnar Restorff, *Classification of [C]{}untz-[K]{}rieger algebras up to stable isomorphism*, J. Reine Angew. Math. **598** (2006), 185–210, URL: <http://dx.doi.org/10.1515/CRELLE.2006.074>, [](http://dx.doi.org/10.1515/CRELLE.2006.074). [MR ]{}[2270572 (2007m:46090)]{}
Mikael R[ø]{}rdam, *Classification of [C]{}untz-[K]{}rieger algebras*, $K$-Theory **9** (1995), no. 1, 31–58, URL: <http://dx.doi.org/10.1007/BF00965458>, [](http://dx.doi.org/10.1007/BF00965458). [MR ]{}[1340839 (96k:46103)]{}
Adam P. W. S[ø]{}rensen, *Geometric classification of simple graph algebras*, Ergodic Theory Dynam. Systems **33** (2013), no. 4, 1199–1220, URL: <http://dx.doi.org/10.1017/S0143385712000260>, [](http://dx.doi.org/10.1017/S0143385712000260). [MR ]{}[3082546]{}
Wojciech Szyma[ń]{}ski, *General [C]{}untz-[K]{}rieger uniqueness theorem*, Internat. J. Math. **13** (2002), no. 5, 549–555, URL: <http://dx.doi.org/10.1142/S0129167X0200137X>, [](http://dx.doi.org/10.1142/S0129167X0200137X). [MR ]{}[1914564 (2003h:46083)]{}
---
bibliography:
- 'references.bib'
---
IFUM-954-FT\
[**Superspace methods for the computation of wrapping effects in the standard and $\beta$-deformed ${\cal{N}}=4$ SYM**]{}
F. Fiamberti ${}^{a,b}$, A. Santambrogio ${}^b$, C. Sieg ${}^c$
*$^a$ Dipartimento di Fisica, Università degli Studi di Milano,\
Via Celoria 16, 20133 Milano, Italy*\
*$^b$ INFN–Sezione di Milano,\
Via Celoria 16, 20133 Milano, Italy*\
*$^c$ The Niels Bohr International Academy,\
The Niels Bohr Institute,\
Blegdamsvej 17, DK-2100 Copenhagen, Denmark\
*
We review the general procedure for the field-theoretical computation of wrapping effects in standard and $\beta$-deformed ${\mathcal{N}}=4$ super Yang-Mills by means of ${\mathcal{N}}=1$ superspace techniques. In the undeformed theory, these methods made it possible to find explicit results at four and five loops for two-impurity operators. In the deformed case, a general expression for the finite-size correction to the anomalous dimension of single-impurity operators at the critical order was obtained.
Introduction
============
Great progress has been made in recent years towards a better understanding of the original formulation of the AdS/CFT correspondence [@Maldacena:1998re], which conjectures the equivalence of the ${\mathcal{N}}=4$ super Yang-Mills (SYM) theory in four dimensions and type IIB superstrings on ${\text{AdS}}_5\times S^5$, with the discovery of several strong hints of integrability on both sides of the duality. The powerful techniques available for integrable systems now make it possible to compute the planar spectra of the two theories in a particular limit, and several tests of the validity of the conjecture have become possible.
The first discovery of integrability in ${\mathcal{N}}=4$ SYM was the demonstration, based on the analogy with spin chains, that the restriction of the theory to the $SU(2)$ sector, containing operators built using only two out of the three available complex scalar fields, is integrable at one loop [@Minahan:2002ve; @Minahan:2006sk]. In this spin-chain picture, it is natural to see the operators as excited states obtained by adding fields of one type (*impurities*) to a ground state built using only fields of the other kind. Afterwards, it was first shown that integrability is valid for the whole theory at one loop [@Beisert:2003jj; @Beisert:2003yb; @Beisert:2004ag; @SchaferNameki:2004ik; @Beisert:2005di], and then strong evidence for its extension to two and three loops was found [@Beisert:2003tq; @Serban:2004jf; @Kotikov:2004er; @Beisert:2004hm; @Eden:2004ua]. These results led to the belief that integrability may be an exact property to all orders, and a great amount of work was dedicated to the subject, culminating in the formulation of a proposal for an all-order Bethe ansatz [@Beisert:2003xu; @Beisert:2003jb; @Beisert:2003ys; @Staudacher:2004tk; @Beisert:2005fw; @Beisert:2005tm], from which the dilatation operator can be computed. This extension to arbitrary order was made possible by the discovery that the S-matrix of ${\mathcal{N}}=4$ SYM is fixed (up to a phase factor) by the symmetries of the theory [@Beisert:2005tm].
During the same years, a great effort was dedicated also to the study of integrability on the string dual of ${\mathcal{N}}=4$ SYM. The starting point was the realization that classical strings on the ${\text{AdS}}_5\times S^5$ background are integrable [@Mandal:2002fs; @Bena:2003wd; @Kazakov:2004qf; @Arutyunov:2004yx; @Alday:2005gi], followed by the proposal of a Bethe ansatz to describe strings on the $\mathds{R} \times S^3$ subspace of ${\text{AdS}}_5\times S^5$ [@Arutyunov:2003za; @Arutyunov:2004vx; @Gromov:2006cq], with the discovery that the S-matrices of the gauge and string theory must be related by a global dressing phase [@Callan:2003xr; @Serban:2004jf; @Callan:2004uv; @Hernandez:2006tk; @Beisert:2006ib; @Beisert:2006ez; @Arutyunov:2004vx; @Eden:2006rx; @Beisert:2007hz]. As on the field theory side, the result for the Bethe ansatz and the S-matrix have been later extended to the full theory [@Beisert:2003ea; @Arutyunov:2003rg; @Arutyunov:2004xy; @Kazakov:2004nh; @Swanson:2004qa; @Beisert:2005bm; @Beisert:2005cw; @SchaferNameki:2005is; @Staudacher:2004tk; @Freyhult:2006vr].
The powerful integrability techniques make it easy to compute the spectrum of ${\mathcal{N}}=4$ SYM in the limit of long operators. For any given operator, however, they are forced to fail when the loop order becomes high enough [@Beisert:2004hm]. The definition of the S-matrix and of the Bethe ansatz, in fact, requires the existence of an asymptotic regime, i.e. the interaction must not involve all fields at the same time. But since the range of the interaction grows with the loop order $\ell$ as $(\ell+1)$, for an operator of length (number of elementary fields) $L$, the interaction range exceeds the length $L$ at loop orders $\ell\ge L$. Thus, the Bethe ansatz breaks down, and it no longer produces the correct components of the anomalous dimension. In terms of Feynman diagrams, such failure of the asymptotic tools is caused by the appearance of wrapping diagrams, in which the interactions wrap around the composite operator.
The finite-size effects generated by wrapping interactions must be studied in order to compute the exact spectrum of ${\mathcal{N}}=4$ SYM. The first analysis of these corrections in terms of Feynman diagrams was performed in [@Sieg:2005kd], and in [@Ambjorn:2005wa; @Janik:2007wt] some proposals were considered for their general description, among which the use of the thermodynamic Bethe ansatz. The first exact computation of a field-theory quantity affected by wrapping corrections appeared in [@us; @uslong], where the four-loop anomalous dimension of the Konishi operator was calculated by means of superspace techniques. The result was then confirmed first by an independent analysis, based on the Lüscher approach [@Luscher:1985dn], performed on the string side [@Bajnok:2008bm], which thus constitutes a highly non-trivial check of the AdS/CFT correspondence, and later by a computer-made direct computation based on the component-field approach [@Velizhanin:2009zz]. The superspace computation has been recently extended to consider length-five operators at five loops [@usfive].
At the same time, finite-size effects were studied also on the string side [@SchaferNameki:2006gk; @Arutyunov:2006gs; @SchaferNameki:2006ey; @Astolfi:2007uz; @Ambjorn:2005wa; @Minahan:2008re; @Gromov:2008ie; @Heller:2008at; @Hatsuda:2008gd; @Ramadanovic:2008qd; @Hatsuda:2008na; @Janik:2007wt; @Sax:2008in]. In most cases, however, the calculations were carried out in the limit of large ’t Hooft coupling $\lambda$ from the very beginning, so that the final results cannot be compared directly with the ones coming from the gauge side, where $\lambda\ll1$ must be considered in a perturbative approach.
In parallel with the explicit computations based on diagrammatic techniques, also the method based on the Lüscher approach has been generalized to deal with different classes of operators [@Penedones:2008rv; @Bajnok:2008qj; @Beccaria:2009eq; @Lukowski:2009ce; @Velizhanin:2010cm], and a proposal for the five-loop component of the Konishi anomalous dimension has been obtained [@Bajnok:2009vm]. Despite these successful applications, however, it is very difficult to extend the Lüscher technique to the most general cases, and hence also different approaches have been taken into consideration. Currently, the most promising one is based on the thermodynamic Bethe ansatz [@Zamolodchikov:1989cf; @Arutyunov:2007tc; @deLeeuw:2008ye; @Arutyunov:2009zu; @deLeeuw:2009hn; @Arutyunov:2009ga; @Arutyunov:2009mi; @Arutyunov:2009ce; @Arutyunov:2009ux; @Arutyunov:2009ax; @Arutyunov:2010gb; @Balog:2010xa; @Balog:2010vf], which led to the proposal that the full, exact spectrum of ${\mathcal{N}}=4$ SYM is captured in terms of a so-called Y-system [@Gromov:2009tv; @Gromov:2009bc; @Bombardelli:2009ns; @Arutyunov:2009ur; @Hegedus:2009ky; @Cavaglia:2010nm]. This method has already been shown to reproduce the known field theory results at four and five loops with wrapping effects included [@us; @uslong; @Gromov:2009tv; @usfive; @Beccaria:2009eq], the five-loop result for the Konishi operator from the Lüscher approach [@Bajnok:2009vm; @Balog:2010xa; @Arutyunov:2010gb], and it has also been used to compute finite-size corrections at strong coupling [@Gromov:2009zb; @Roiban:2009aa; @Gromov:2009tq; @Gromov:2010vb].
A similar agreement was found at four loops also in the context of the ${\text{AdS}}_4/{\text{CFT}}_3$ correspondence [@Gromov:2009tv; @Minahan:2009aq; @Minahan:2009wg]. In the future it will be very important to further check the validity of the Y-system against additional direct perturbative results.
Besides ${\mathcal{N}}=4$ SYM, also its $\beta$-deformed version, preserving only ${\mathcal{N}}=1$ supersymmetry, offers an interesting environment for the analysis of wrapping effects. The study of finite-size corrections in this case started in [@betadef] and was later extended in [@Fiamberti:2008sn], whereas in [@Bykov:2008bj] wrapping corrections were discussed on the deformed string background. The most interesting feature of the deformed theory is the possibility to study a simpler class of operators, so that perturbative computations at higher orders become feasible.
In this paper, we review all the perturbative computations of wrapping effects, based on ${\mathcal{N}}=1$ superspace techniques, that have been performed both in the standard and in the $\beta$-deformed ${\mathcal{N}}=4$ SYM. First of all, in Section \[superspace\], we present the main technical results that make superspace techniques so useful for this kind of calculations. Then, in Section \[undeformed\], we describe the general procedure for the analysis of finite-size corrections in the standard ${\mathcal{N}}=4$ SYM and its applications to four and five loops. Section \[betadef\] is dedicated to the topic of wrapping effects in the $\beta$-deformed ${\mathcal{N}}=4$ SYM. We conclude with some comments in Section \[comments\].
Superspace techniques {#superspace}
=====================
Perturbative computations in gauge field theories at high loop orders, based on the standard component-field approach, are usually very complicated. Hence, when supersymmetry is present, it is very useful to exploit it by making use of superspace techniques. For both standard and $\beta$-deformed ${\mathcal{N}}=4$ SYM, it is convenient to use the ${\mathcal{N}}=1$ superspace description [@Gates:1983nr], where all the field content is encoded into one vector $V$ and three scalar superfields $\Phi^i$, which we denote as $\phi$, $Z$ and $\psi$. Therefore, fermionic matter fields never appear explicitly, which considerably simplifies all the computations. Moreover, every supergraph combines the information on a large number of standard component-field diagrams, so that the total number of relevant contributions is reduced, too.
The action of ${\mathcal{N}}=4$ SYM
-----------------------------------
The action for undeformed ${\mathcal{N}}=4$ SYM reads, in the notation of [@Gates:1983nr], $$\label{action}
\begin{aligned}
S &= \int{\operatorname{d}\!}^4 x{\operatorname{d}\!}^4 \theta \, \operatorname{tr}\left(e^{-gV} \bar \Phi_i e^{gV}
\Phi^i\right) + \frac 1{2g^2} \int {\operatorname{d}\!}^4 x {\operatorname{d}\!}^2 \theta
\,\operatorname{tr}\left(W^\alpha W_\alpha\right)\\
&\phantom{{}={}}
+i \frac{g}{3!} \int{\operatorname{d}\!}^4 x{\operatorname{d}\!}^2 \theta \,\epsilon^{ijk}\,\operatorname{tr}\left(\Phi_i
\left[\Phi_j , \Phi_k\right]\right) + \text{h.c.}{~.}\end{aligned}$$ Here, $W_\alpha = i\bar {D}^2 \left(e^{-gV} {D}_\alpha\,e^{gV}\right)$, $V=V^aT_a$, $\Phi^i=\Phi_i^aT_a$, [$i=1,2,3$]{}, and the $T_a$ are matrices that are normalized as $$\operatorname{tr}(T_a T_b) =\delta_{ab}
{~,}$$ and that obey the $SU(N)$ algebra $$\label{Tcomm}
[T_a, T_b] = i f_{abc} T_c {~,}$$ where the $f_{abc}$ are the $SU(N)$ structure constants. The latter can be written in terms of the $T_a$ by means of $$f_{abc}=-i\,\mathrm{tr}([T_a,T_b]T_c) {~.}$$ This relation and the identity $$T_a^{ij}T_a^{kl}=\left(\delta_{il}\delta_{jk}-\frac{1}{N}\delta_{ij}\delta_{kl}\right) {~,}$$ allow one to determine the colour structures of Feynman diagrams. Since we will make all our computations in the planar limit, in addition to the gauge coupling $g$, it will be useful to define also the rescaled ’t Hooft coupling $$\lambda=\frac{g^2N}{(4\pi)^2} {~.}$$ The Feynman rules for propagators and vertices in supergraphs can be derived from the action (\[action\]). In momentum space, the propagators are $$\label{propagators}
\langle V^a V^b\rangle=-\frac{\delta^{ab}}{p^2} {~,}\qquad\langle\Phi^a_i\bar{\Phi}^b_j\rangle=\delta_{ij}\frac{\delta^{ab}}{p^2} {~.}$$ As for the vertex factors, the last term in (\[action\]) describes the interactions among three scalar chiral or anti-chiral superfields, whose contributions are $$\label{vertices-scalar}
V_C=-\frac{g}{3!}\epsilon^{ijk}f_{abc}\Phi^a_i\Phi^b_j\Phi^c_k\ ,\quad V_A=-\frac{g}{3!}\epsilon^{ijk}f_{abc}\bar{\Phi}^a_i\bar{\Phi}^b_j\bar{\Phi}^c_k{~,}$$ whereas the first term in the action generates vertices with one chiral and one anti-chiral scalar, plus a maximum number of vectors growing with the perturbative order under consideration. In this paper, we will encounter only the cases with one or two vectors, whose factors are respectively $$\label{vertices-vector}
V_V^{(1)}=i g f_{abc}\delta^{ij}\bar{\Phi}^a_i V^b \Phi^c_j\ ,\quad V_V^{(2)}=\frac{g^2}{2}\delta^{ij}f_{adm}f_{bcm}V^a V^b \bar{\Phi}^c_i\Phi^d_j {~.}$$ Finally, the second term in the action produces vertices where only vector superfields interact, and which will not be needed for our computations. In the same way, ghost fields will never be relevant in this work.
In addition to such vertex factors, one must remember to add a $\bar{D}^2$ or a $D^2$ covariant derivative to each chiral or antichiral line, respectively, to restore the full ${\operatorname{d}\!}^4\theta$ integration measure on the Grassmann variables. In the case of three-scalar vertices, only two out of the three lines carry derivatives. Once a supergraph has been built, we apply the D-algebra procedure [@Gates:1983nr], which consists of a sequence of integrations by parts of the covariant derivatives, resulting in the reduction of the diagram to a standard momentum integral.
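As a quick numerical cross-check of the colour conventions fixed above, namely the normalization $\operatorname{tr}(T_a T_b)=\delta_{ab}$ and the completeness identity for the generators, one can test them for small $N$ with a few lines of code. The following Python/NumPy sketch is our illustration (it plays no role in the actual supergraph computations): it builds a standard basis of generalized Gell-Mann matrices and verifies both relations for $N=3$.

```python
import numpy as np

def su_n_generators(N):
    """Generalized Gell-Mann matrices T_a, normalized so that tr(T_a T_b) = delta_ab."""
    gens = []
    for j in range(N):                     # off-diagonal symmetric and antisymmetric generators
        for k in range(j + 1, N):
            S = np.zeros((N, N), dtype=complex); S[j, k] = S[k, j] = 1
            A = np.zeros((N, N), dtype=complex); A[j, k] = -1j; A[k, j] = 1j
            gens += [S / np.sqrt(2), A / np.sqrt(2)]
    for l in range(1, N):                  # diagonal (Cartan) generators
        D = np.zeros((N, N), dtype=complex)
        D[:l, :l] = np.eye(l)
        D[l, l] = -l
        gens.append(D / np.sqrt(l * (l + 1)))
    return gens

N = 3
T = su_n_generators(N)                      # N^2 - 1 = 8 generators for SU(3)

# Normalization tr(T_a T_b) = delta_ab.
G = np.array([[np.trace(Ta @ Tb) for Tb in T] for Ta in T])
assert np.allclose(G, np.eye(len(T)))

# Completeness: sum_a (T_a)_{ij} (T_a)_{kl} = delta_il delta_jk - delta_ij delta_kl / N.
lhs = sum(np.einsum('ij,kl->ijkl', Ta, Ta) for Ta in T)
Id = np.eye(N)
rhs = np.einsum('il,jk->ijkl', Id, Id) - np.einsum('ij,kl->ijkl', Id, Id) / N
assert np.allclose(lhs, rhs)
print("colour identities verified for N =", N)
```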
The $\beta$-deformed theory
---------------------------
The $\beta$-deformed ${\mathcal{N}}=4$ SYM theory is obtained from the standard one through the following modification of the superpotential for chiral and anti-chiral superfields $$ig\,\operatorname{tr}\left(\phi\,\psi\,Z - \phi\,Z\,\psi\right)~\longrightarrow ~ih\,\operatorname{tr}\left(e^{i\pi\beta} \phi\,\psi\,Z - e^{-i\pi\beta} \phi\,Z\,\psi\right){~,}$$ where $h$ and $\beta$ are complex constants, and we recall that $\phi$, $Z$ and $\psi$ are the three scalar chiral superfields. Such a deformation is marginal, and thus the theory remains conformally invariant, at all orders [@Leigh:1995ep; @Mauri:2005pa] if $$h\bar{h}=g^2 {~,}$$ where $g$ is the Yang-Mills coupling constant. In this paper we consider only the case of real $\beta$. In fact in this case the deformed theory is also believed to be integrable [@Roiban:2003dw; @Berenstein:2004ys; @Beisert:2005if], together with its string dual, namely superstring theory on the Lunin-Maldacena background [@Lunin:2005jy; @Frolov:2005dj; @Frolov:2005ty], and an all-loop Bethe ansatz similar to the standard one has been formulated [@Beisert:2005if].
As far as the Feynman rules are concerned, only the vertex coefficients for the three-scalar interactions will be modified by the deformation, with the appearance of a factor of $q\equiv e^{i\pi\beta}$ or $\bar{q}=e^{-i\pi\beta}$, depending on the order of the fields $$\begin{aligned}
V_C&= - h f_{abc} (e^{i\pi\beta} \Phi_1^a\,\Phi_2^b\,\Phi_3^c - e^{-i\pi\beta} \Phi_1^a\,\Phi_3^b\,\Phi_2^c) {~,}\\
V_A&= - \bar{h} f_{abc} (e^{-i\pi\beta}\bar{\Phi}_1^a\, \bar{\Phi}_2^b\, \bar{\Phi}_3^c-e^{i\pi\beta} \bar{\Phi}_1^a\, \bar{\Phi}_3^b\, \bar{\Phi}_2^c) {~.}\end{aligned}$$ All the other vertex factors and the propagators remain the same as in the undeformed case.
The $\beta$-deformed theory is particularly interesting because some classes of simple operators that were protected by supersymmetry in the undeformed ${\mathcal{N}}=4$ SYM now acquire a non-trivial anomalous dimension. As we will explain later, this fact will allow us to perform perturbative computations to orders beyond four or five loops.
Anomalous dimensions
--------------------
The analysis of wrapping effects that we are going to present is based on the computation of anomalous dimensions of composite operators. Given a set of bare operators $\{\mathcal{O}_1,\ldots,\mathcal{O}_n\}$ with the same classical dimension, in general they will be mixed by quantum corrections, and their renormalized versions will be given by $$\mathcal{O}_i^{\mathrm{ren}}=\mathcal{Z}_i^j\mathcal{O}_j^{\mathrm{bare}} {~,}$$ where $\mathcal{Z}_i^j$ is the one-point function matrix, which gets contributions from all the Feynman diagrams with one insertion of one of the composite operators $\mathcal{O}_i$. In order to find the linear combinations of the $\mathcal{O}_i$ with well-defined anomalous dimensions, we must diagonalize this matrix. Working in dimensional regularization, with $d=4-2\varepsilon$ spacetime dimensions, the anomalous dimensions will be the eigenvalues of the mixing matrix $\mathcal{M}$ defined as $$\label{anomalous}
\gamma_k=\mathrm{eig}(\mathcal{M})_k\ ,\qquad\mathcal{M}_i^j=\lim_{\varepsilon\rightarrow0}\left[\varepsilon g\frac{{\operatorname{d}\!}}{{\operatorname{d}\!}g}\mathrm{log}\mathcal{Z}_i^j(g,\varepsilon)\right] {~.}$$ It is worth emphasizing here that the computation of anomalous dimensions requires only the knowledge of the *divergent* part of the expansion of every graph in powers of $1/\varepsilon$. As we will explain later, this fact leads to a great simplification in the calculation of the integrals.
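For instance, if a single operator renormalizes multiplicatively with $\mathcal{Z}=1+z_1\frac{\lambda}{\varepsilon}+O(\lambda^2)$, then $\mathrm{log}\,\mathcal{Z}=z_1\frac{\lambda}{\varepsilon}+O(\lambda^2)$ and, since $\lambda\propto g^2$ implies $g\frac{{\operatorname{d}\!}}{{\operatorname{d}\!}g}\lambda^k=2k\lambda^k$, the definition (\[anomalous\]) gives, at order $\lambda$, $$\gamma=\lim_{\varepsilon\rightarrow0}\left[\varepsilon g\frac{{\operatorname{d}\!}}{{\operatorname{d}\!}g}\left(z_1\frac{\lambda}{\varepsilon}\right)\right]+O(\lambda^2)=2z_1\lambda+O(\lambda^2) {~,}$$ so at the lowest order the anomalous dimension is simply read off from the coefficient of the simple $1/\varepsilon$ pole.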
It is useful to present now a different approach [@Beisert:2003tq] to the computation of anomalous dimensions. In a conformal theory, the possible quantum dimensions of operators are the eigenvalues of the dilatation operator $\mathcal{D}$, which represents the generator of dilatation transformations on the operator algebra, whereas the corresponding eigenvectors are the composite operators that renormalize multiplicatively. Thus, $$\mathcal{D}\,\mathcal{O}=\Delta(\lambda)\mathcal{O}\ ,\qquad\Delta(\lambda)=\Delta_0+\gamma(\lambda) {~,}$$ where $\Delta_0$ is the classical dimension and $\gamma(\lambda)$ is the anomalous one. We will focus on the perturbative expansion of $\mathcal{D}$ in powers of the ’t Hooft coupling $$\mathcal{D}(\lambda)=\sum_{k=0}^\infty\lambda^k \mathcal{D}_k {~.}$$ The importance of the dilatation operator in ${\mathcal{N}}=4$ SYM (and in its deformed version) comes from the fact that the hypothesis of integrability allows one to compute its components in the planar limit without the need for explicit diagrammatic computations. This simplification made it possible to find the explicit form of $\mathcal{D}$ in a particular sector up to five loops.
From a simple analysis of the general properties of planar Feynman diagrams, it follows that the range of interaction, that is, the maximum number of fields of the composite operator that interact with each other, grows with the perturbative order. In fact, at $\ell$ loops the dilatation operator will get contributions from graphs with range up to $\ell+1$. Note that the range of a Feynman diagram is a meaningful quantity only in the planar limit.
A theorem about supergraphs
---------------------------
We now demonstrate a very useful result [@uslong], which allows us to simplify the D-algebra procedure for a Feynman supergraph considerably, by identifying operations that lead to finite contributions and hence are irrelevant for our analysis of anomalous dimensions. Let us consider a planar graph with a single insertion of a composite operator made of chiral superfields, and where the outgoing fields are the same as those appearing in the composite operator. In Figure \[example-01\] an example is shown, where the thick line represents the composite operator, straight thin lines are scalar propagators and wiggly lines are vector fields. Moreover, chiral scalar vertices are marked with a circle, and for the sake of simplicity we did not show explicitly the covariant derivatives.
We want to show that divergent contributions can be obtained after D-algebra only if all the covariant derivatives of chiral type $D$ are kept inside the diagram during the sequence of integrations by parts (with the exception, as will be clear in the following, of derivatives on lines not belonging to any loop).
-------------------------- ----------------------------------------------------------------------
$V_C$ number of chiral vertices
$V_A$ number of antichiral vertices
$V_V^{(n)}$ number of vertices with a chiral, an antichiral and $n$ vector lines
$\tilde{V}_V^{(n)}$ number of vertices with $n$ vector lines
$p_S$ number of scalar propagators belonging to at least one loop
$p_V$ number of vector propagators
$p$ total number of propagators belonging to at least one loop
$p_E$ number of scalar propagators not belonging to any loop
$V_{EC}$ number of chiral vertices not belonging to any loop
$N_\ell$ number of loops
$N_{D}$, $N_{{\bar{D}}}$ numbers of ${D}$ and ${\bar{D}}$ derivatives
-------------------------- ----------------------------------------------------------------------
: Useful definitions[]{data-label="tab:proof"}
-----------------------------------------------------------
$N_\ell=7$
$V_C=V_A=4$
$V_V^{(1)}=3{~,}\quad V_V^{(2)}=1{~,}\quad V_V^{(n>2)}=0$
$\tilde{V}_V^{(3)}=1{~,}\quad \tilde{V}_V^{(m>3)}=0$
$p_S=14{~,}\quad p_V=4{~,}\quad p=p_S+p_V=18$
$p_E=2{~,}\quad V_{EC}=2$
-----------------------------------------------------------
To demonstrate our assertion, it is useful to define the quantities of Table \[tab:proof\]. We can find several equations involving them, based on the possible types of vertices in the graph. In particular,
- from each chiral (anti-chiral) vertex, three scalar propagators start. On two of them, a $\bar{D}^2$ ($D^2$) double covariant derivative acts.
- From each vertex of type $V_V^{(n)}$, two scalar (one chiral and one anti-chiral) and $n$ vector lines start. On the chiral and anti-chiral lines, a $\bar{D}^2$ and a $D^2$ act respectively.
- From each vertex of type $\tilde{V}_V^{(n)}$, $n$ vector propagators start. A complicated derivative structure will act on the lines, but in all the cases two $\bar{D}$ and two $D$ derivatives will be involved.
In this way, every propagator is counted twice, since by hypothesis the outgoing fields are the same as the ones in the composite operator. So for the propagator numbers we find $$\label{proof:propagators}
\begin{aligned}
p_S&=\frac{1}{2}\left[3(V_C+V_A)+2\sum_{n\geq1} V_V^{(n)}\right]-p_E {~,}\\
p_V&=\frac{1}{2}\left[\sum_{n\geq1} nV_V^{(n)}+\sum_{m\geq3}m\tilde{V}_V^{(m)}\right] {~,}\end{aligned}$$ whereas the numbers of covariant derivatives fulfill $$\label{proof:ND}
\begin{aligned}
N_{D}&=4V_C+2\sum_{n\geq1}V_V^{(n)}+2\sum_{m\geq3}\tilde{V}_V^{(m)} {~,}\\
N_{{\bar{D}}}&=4V_A+2\sum_{n\geq1}V_V^{(n)}+2\sum_{m\geq3}\tilde{V}_V^{(m)} {~.}\end{aligned}$$ By the hypothesis on the outgoing fields, the numbers of chiral and anti-chiral vertices must be equal, $$V_C=V_A {~,}$$ and hence we have $N_{D}=N_{{\bar{D}}}$. Moreover, the number of scalar propagators that do not belong to any loop and the number of chiral vertices with the same feature must be equal: $p_E=V_{EC}$. We can now combine equations (\[proof:propagators\]) and (\[proof:ND\]) into $$\label{proof:p}
p=\frac{1}{2}\left[N_{D}+V_C+V_A+\sum_{n\geq1}n V_V^{(n)}+\sum_{m\geq3}(m-2)\tilde{V}_V^{(m)}\right]-p_E {~.}$$ From a simple power counting, it follows that the final integral will be at least logarithmically divergent only if the D-algebra generates at least $(2p-4N_\ell)$ momenta in the numerator. The construction of each momentum requires a $D$ derivative (and a $\bar{D}$). Moreover, for every loop a $D^2$ and a $\bar{D}^2$ are required to complete the superspace integration. Thus, performing the D-algebra we need to keep inside the diagram at least $(2p-2N_\ell)$ derivatives of type $D$.
Now we must get rid of propagators not belonging to any loop. It is easy to see that on any one of them, either a $D^2$ is already there from the beginning, as it appears when the propagator is connected to a vertex of kind $V_V^{(n)}$, or it can be moved there by an integration by parts at the vertex at which the propagator is attached to the rest of the diagram. So we can always assume that every such propagator has a $D^2$, which is hence effectively *outside* the graph. Thus, the actual number of $D$ derivatives that can be effectively used is just $(N_D-2p_E)$. We will be allowed to move one more $D$ out of the diagram only if this number exceeds the required minimum, i.e. if $$\label{proof:ineq}
N_{D}-2p_E>2p-2N_\ell {~,}$$ which we can rewrite using (\[proof:p\]) as $$\label{proof:ineq2}
N_\ell>\frac{1}{2}\left[V_C+V_A+\sum_{n\geq1}n V_V^{(n)}+\sum_{m\geq3}(m-2)\tilde{V}_V^{(m)}\right] {~.}$$ As the last step of the proof, it is enough to show that this inequality can never be fulfilled. In fact, let us make use of Euler’s formula for planar connected graphs $$\label{proof:Euler}
\mathcal{V}-\mathcal{E}+\mathcal{F}=2 {~,}$$ where $\mathcal{V}$ is the total number of vertices, $\mathcal{E}$ is the number of edges and $\mathcal{F}$ is the number of faces, including the external unbounded region. For a Feynman supergraph, as the operator insertion behaves as an additional vertex, we have $$\mathcal{V}=V_C+V_A+\sum_{n\geq1}V_V^{(n)}+\sum_{m\geq3}\tilde{V}_V^{(m)}+1\ ,\qquad \mathcal{E}=p+p_E\ ,\qquad \mathcal{F}=N_\ell+1{~.}$$ So we find $$N_\ell=\frac{1}{2}\left[V_C+V_A+\sum_{n\geq1}n V_V^{(n)}+\sum_{m\geq3}(m-2)\tilde{V}_V^{(m)}\right] {~,}$$ which is not compatible with (\[proof:ineq2\]). We have therefore shown that none of the derivatives of type $D$ can be moved out of the diagram during the D-algebra.
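As a consistency check, these counting relations can be tested on the example of Figure \[example-01\], whose characteristic numbers were listed above: there $N_{D}=4\cdot 4+2\cdot 4+2\cdot 1=26$, so that (\[proof:p\]) gives $p=\frac{1}{2}\left(26+8+5+1\right)-2=18$, while the last formula yields $N_\ell=\frac{1}{2}\left(8+5+1\right)=7$, both in agreement with the values read off directly from the graph.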
Cancellation identities for supergraphs
---------------------------------------
The property that we have demonstrated above can be exploited to show that large classes of supergraphs entering the computation of anomalous dimensions sum up to finite expressions. In particular, we are interested in diagrams with the maximum interaction range allowed by the corresponding number of loops.
Consider a maximum-range diagram that contains a single-vector vertex breaking an outgoing scalar propagator, as in the example of Figure \[example-02\].
Because of the assumption on the interaction range, the only possible configurations are shown in Figure \[startstruct\]. The three possibilities are exactly equivalent for our present purpose, so we will focus on the second one in the following.
In a maximum-range graph, the propagator leaving from the single-vector vertex can be attached only to one out of three structures, shown in Figures \[block-A\], \[block-B\] and \[block-C\]. All the possible respective combinations are listed in Figures \[diagrams-A\], \[diagrams-B\] and \[diagrams-C\]. We can now demonstrate that the divergent parts of the diagrams in each class sum up to zero. The key point of the proof is the fact that when we integrate by parts the double covariant derivative ${D}^2$ at the single-vector vertex, we are forced, by the property demonstrated previously, to keep it inside the graph, thus moving it on the vector line starting from the vertex. We can then shift the ${D}^2$ to the opposite end of the vector propagator, where a second integration by parts is possible. Since once again we can neglect the terms where the derivatives would act on the external fields, the integration produces a single contribution, which can be simplified thanks to the standard D-algebra identities, and in the end we find a modification in the original superspace integral. Now, it turns out that in all the cases of interest the divergent part of the found integral is cancelled by the one obtained from a different original supergraph. We now summarize the details of such cancellations for the three classes:
- Class A (Fig. \[diagrams-A\]):\
The diagram $A_1$ is finite, since we are forced to move the ${D}^2$ outside the diagram. The first steps of D-algebra for $A_3$ transform the graph into the same structure as $A_2$, with an additional minus sign coming from the $\Box=-p^2$ that cancels a propagator. Since $A_2$ and $A_3$ have the same colour factor, their divergent parts cancel each other.
- Class B (Fig. \[diagrams-B\]):\
In this case, we begin the D-algebra for both $B_1$ and $B_2$, finding the same superspace integral. However, their colour factors are opposite, and hence also the divergent parts of $B_1$ and $B_2$ sum up to zero. The diagram $B_3$ is finite, for the same reason as $A_1$.
- Class C (Fig. \[diagrams-C\]):\
For $C_1$ and $C_2$, we proceed as in the case of $B_1$ and $B_2$, and we conclude again that the divergent parts of the two diagrams cancel out. For $C_3$ and $C_4$, we have the same situation as for $A_2$ and $A_3$. Finally, $C_5$ is finite, similarly to $A_1$ and $B_3$.
We have thus demonstrated that none of the maximum-range diagrams that contain one of the structures of Figure \[startstruct\] is relevant for the calculation of anomalous dimensions. As we will see, this represents a very important simplification.
Wrapping interactions in ${\mathcal{N}}=4$ SYM {#undeformed}
==============================================
We are now ready to begin the analysis of wrapping effects in ${\mathcal{N}}=4$ SYM by exploiting the results of the previous section. Since single-impurity operators of the $SU(2)$ sector are protected, we are forced to study two-impurity ones. The shortest non-protected operators of this class have length $L=4$, and a possible choice for two independent states in the length-four subsector is [@us; @uslong] $$\label{Opbasis}
{\mathcal{O}_{4,1}}=\operatorname{tr}(\phi Z\phi Z){~,}\qquad{\mathcal{O}_{4,2}}=\operatorname{tr}(\phi\phi ZZ){~,}$$ which will in general mix under renormalization. Note that the anti-symmetric combination of the basis is a descendant of the Konishi operator. Because of the relationship between the perturbative order and the interaction range, wrapping effects can appear in this subsector only at four loops and beyond.
The spin-chain analogy
----------------------
Since we work in the planar limit, we can restrict our analysis to single-trace operators. We will focus on the $SU(2)$ sector, containing operators built using only two out of the three available superfields. A single-trace operator in this sector can be easily associated to a state of an $SU(2)$ spin chain, once a simple correspondence between the field flavour and the spin projection on a chosen axis is fixed [@Minahan:2002ve]. Thanks to this correspondence, we will refer to composite operators also as states of the corresponding chain. Because of the relationship between the loop order and the interaction range, we will be forced to work with long-range chains. An operator given by the product of $L$ fields of the same flavour $Z$ will be related to the ground state of a ferromagnetic chain, from which we can build excited states by replacing some of the $Z$ fields with impurities, i.e. fields of type $\phi$. An operator with $n$ $\phi$ fields will thus be equivalent to an $n$-magnon state of the chain.
The interactions among the spins of the chain are conveniently described in terms of permutations of neighbouring sites, from which the following operators can be constructed [@Beisert:2003tq] $$\label{permstrucdef}
{{}\{a_1,\dots,a_n\}{}}=\sum_{r=0}^{L-1}\operatorname{P}_{a_1+r,a_1+r+1}\cdots
\operatorname{P}_{a_n+r,a_n+r+1}
{~,}$$ where $\operatorname{P}_{a,a+1}$ swaps the spins at sites $a$ and $a+1$. For a chain of length $L$, we must impose the cyclic identification $\operatorname{P}_{a,a+1}\simeq\operatorname{P}_{a+L,a+L+1}$. The range of an operator of the form (\[permstrucdef\]) can be computed from the list of arguments $a_1,\ldots, a_n$ as $$\label{nneighbourint}
\kappa=2+\max\{a_1,\dots, a_n\}-\min\{a_1,\dots, a_n\}{~.}$$
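For instance, the structure $\{1\}$ only involves nearest neighbours and has range $2$, while $\{1,3\}$, which appears in the three-loop dilatation operator below, has range $2+3-1=4$, i.e. precisely the maximal range $\ell+1$ allowed at $\ell=3$ loops.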
This analogy is very convenient because it allows one to restate all the field-theory problems in terms of spin chains, which are the traditional and best-known environment for the study of integrability properties. In fact, the one-loop integrability of ${\mathcal{N}}=4$ SYM in the $SU(2)$ sector was recognized through the discovery that the one-loop dilatation operator, once it is translated into the spin chain language, coincides with the Hamiltonian of the Heisenberg $XXX_{1/2}$ chain, which was known to be integrable [@Minahan:2002ve; @Minahan:2006sk].
Using the basis of permutation operators (\[permstrucdef\]), we can write the components of the asymptotic dilatation operator, as obtained from the asymptotic Bethe ansatz [@Beisert:2004ry; @Beisert:2007hz; @Beisert:2003tq]. The expressions up to three loops are very simple and they are given by $$\label{Duptothree}
\begin{aligned}
\mathcal{D}_0&={}+{}\{\} {~,}\\
\mathcal{D}_1&=2\left(\{\}-\{1\}\right) {~,}\\
\mathcal{D}_2&=2\left(-4\{\}+6\{1\}-\left(\{1,2\}+\{2,1\}\right)\right) {~,}\\
\mathcal{D}_3&=60\{\}-104\{1\}+4\{1,3\}+24\left(\{1,2\}+\{2,1\}\right) \\
&\phantom{{}={}}-4i\epsilon_{2a}\{1,3,2\}+4i\epsilon_{2a}\{2,1,3\}-4\left(\{1,2,3\}+\{3,2,1\}\right) {~.}\end{aligned}$$ The meaning of the undetermined coefficient $\epsilon_{2a}$ will be discussed later.
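As a quick consistency check of ours (not part of the original computation), the one-loop entry of this list can be diagonalized on the length-four states (\[Opbasis\]) with the helpers sketched above. Assuming the normalization in which the eigenvalues of $\mathcal{D}_\ell$ multiply $\lambda^\ell$ in the anomalous dimension (an assumption consistent with the results quoted later in the text), one recovers a vanishing eigenvalue for the protected combination and the familiar $12$ of the Konishi descendant:

```python
import numpy as np

O41 = cyclic_class(('phi', 'Z', 'phi', 'Z'))     # tr(phi Z phi Z)
O42 = cyclic_class(('phi', 'phi', 'Z', 'Z'))     # tr(phi phi Z Z)
basis = [O41, O42]
index = {s: i for i, s in enumerate(basis)}

def matrix_of(terms, basis):
    """Matrix of sum_k c_k {args_k} on the span of the given cyclic classes."""
    M = np.zeros((len(basis), len(basis)))
    for c, args in terms:
        for col, state in enumerate(basis):
            for image, mult in apply_structure(args, state).items():
                M[index[image], col] += c * mult
    return M

D1 = matrix_of([(2, ()), (-2, (1,))], basis)     # D_1 = 2({} - {1})
print(D1)                                        # [[ 8. -4.] [-8.  4.]]
print(sorted(np.linalg.eigvals(D1).real))        # [0.0, 12.0]
```

The zero mode corresponds to the protected combination and the eigenvalue $12$, with eigenvector ${\mathcal{O}_{4,1}}-{\mathcal{O}_{4,2}}$, to the leading $12\lambda$ of the Konishi anomalous dimension quoted below.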
The general procedure
---------------------
The asymptotic Bethe ansatz will give the correct value of the Konishi anomalous dimension only up to three loops. However, we can still take advantage of the knowledge of the asymptotic result to extract information on the most complicated diagrams, which we would otherwise have to compute explicitly. We summarize here the main steps of our procedure for the calculation of the exact $\ell$-loop anomalous dimensions of length-$L$ operators; the procedure is completely general and will be applied also to the five-loop case in ${\mathcal{N}}=4$ SYM and to even higher orders in the $\beta$-deformed theory:
1. Compute the $\ell$-loop component $\mathcal{D}_\ell$ of the asymptotic dilatation operator from the hypothesis of all-loop integrability and the asymptotic Bethe equations [@Beisert:2004ry] (see [@usfive] for an explicit example of this step, realized in the five-loop case). We can divide the diagrams contributing to the dilatation operator into two classes: the first one contains graphs with range less than or equal to $L$, which are allowed both in the asymptotic and in the length-$L$ case, possibly with different combinatorial factors. The second class is made of higher-range graphs, which appear in the asymptotic computation but not in the finite-length case.
2. Subtract the contribution of the second class, to obtain all the information on the first one. This trick works because, when the dilatation operator is written in the basis of permutation operators (\[permstrucdef\]), its functional form as an operator on the states of the $SU(2)$ sector automatically accounts for the change in the combinatorial factors of a given Feynman diagram applied to operators of different lengths. Moreover, this approach proves very useful because the number and complexity of diagrams typically grow when the range is reduced, so that the lower-range graphs, whose direct computation we thus avoid, are in fact the most difficult classes to calculate.
3. Add the contribution of wrapping diagrams, which must be computed explicitly.
This general procedure becomes particularly simple when we consider finite-size effects at the critical order, that is $L$ loops for length-$L$ operators. Moreover, both the subtraction step and the computation of wrapping graphs are greatly simplified by the application of ${\mathcal{N}}=1$ superspace techniques.
Before focusing on the actual four-loop case, we present here the general discussion for the subtraction step in the special case where the order is critical. We must subtract all the $L$-loop diagrams of range $(L+1)$, which is the maximum allowed at this order. Now, let us divide all such graphs into two groups, with the first one made of diagrams (which we call maximal) whose chiral structure alone (i.e. the structure of scalar propagators and vertices) already has range $(L+1)$, whereas the second group contains graphs (referred to as non-maximal) with a lower-range chiral structure, whose range is increased by vector interactions. An example from each class is shown in Figure \[example-03\]. The second class fulfills all the assumptions we made to demonstrate the cancellation identities for supergraphs in Section \[superspace\], and we conclude that the corresponding diagrams are not relevant for the computation of anomalous dimensions.
We are therefore left with the subtraction of the maximal diagrams. To deal with them, it is better to consider a new basis, directly related to Feynman supergraphs, instead of the standard permutation one (\[permstrucdef\]). In order to find it, we must analyze the general properties of the possible chiral structures. First of all, we can have diagrams with only vector interactions. From the point of view of flavour permutations, their chiral structure is just the identity. Let us consider now the basic structure of Figure \[buildingblock\]. Since scalar propagators always connect a chiral and an anti-chiral vertex, any non-trivial chiral structure for an $\ell$-loop diagram with one insertion of a composite operator made of chiral superfields must be constructible by assembling up to $\ell$ copies of this building block. We can now use the standard permutation basis to write down the explicit action of $\chi(1)$ on the flavours of the superfields. Starting from the structure of $\chi(1)$ and iterating it, the more complicated structures can be found. The results up to four loops are
*\[Diagrammatic definition of $\chi(1)$ in terms of the building block of Figure \[buildingblock\]: the configuration with the same flavour order $\phi Z$ on both sides of the block enters with a minus sign, while the configuration with the two flavours exchanged enters with a plus sign, in agreement with $\chi(1)=-{{}\{ \}{}}+{{}\{1\}{}}$ in (\[chistruc\]) below.\]*
$$\label{chistruc}
\begin{aligned}
\chi(a,b,c,d)&={{}\{ \}{}}-4{{}\{1\}{}}
+{{}\{a,b\}{}}+{{}\{a,c\}{}}+{{}\{a,d\}{}}+{{}\{b,c\}{}}+{{}\{b,d\}{}}+{{}\{c,d\}{}}\\
&\phantom{{}={}}
-{{}\{a,b,c\}{}}-{{}\{a,b,d\}{}}-{{}\{a,c,d\}{}}-{{}\{b,c,d\}{}}
+{{}\{a,b,c,d\}{}}{~,}\\
\chi(a,b,c)&=-{{}\{ \}{}}+3{{}\{1\}{}}
-{{}\{a,b\}{}}-{{}\{a,c\}{}}-{{}\{b,c\}{}}+{{}\{a,b,c\}{}}{~,}\\
\chi(a,b)&={{}\{ \}{}}-2{{}\{1\}{}}+{{}\{a,b\}{}}{~,}\\
\chi(1)&=-{{}\{ \}{}}+{{}\{1\}{}}{~,}\\
\chi() &={{}\{ \}{}}{~.}\end{aligned}
\normalsize$$
We will refer to the $\chi(\ldots)$ functions as chiral functions. Since they can be written as linear combinations of the old permutation basis, we can take them as new basis elements. The number $n$ of arguments in each chiral function is equal to the number of copies of $\chi(1)$ that are needed to assemble it. To obtain an $\ell$-loop graph we will need to add $(\ell-n)$ vector propagators. As anticipated, $\chi()$ is just the identity, corresponding to diagrams with only vector interactions. When the dilatation operator is written in the new basis, the coefficient of each chiral function will be equal to the coefficient of the $1/\varepsilon$ pole of the sum of all the relevant diagrams with the chosen chiral structure, multiplied by $(-2\ell)$ according to the definition of the anomalous dimension.
From the previous considerations, it follows that we can subtract the contribution of maximal range-$(L+1)$ diagrams by simply deleting the terms with the corresponding chiral functions from the expression of the asymptotic dilatation operator in the chiral basis (\[chistruc\]). Note that this approach would not have been applicable to non-maximal graphs, whose contributions would mix with those of lower-range diagrams with the same chiral structure.
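To make the subtraction rule concrete, here is a minimal sketch of ours (not code taken from [@us; @uslong]) of this filtering step, with the asymptotic operator represented as a map from chiral-function argument tuples to coefficients; the coefficients below are placeholders, since the explicit expansion is not reproduced here:

```python
def chi_range(args):
    """Range of a chiral function chi(a_1,...,a_n), by the same rule as (nneighbourint).
    The identity chi() is assigned range 1 here; it is never maximal anyway."""
    return 2 + max(args) - min(args) if args else 1

def drop_maximal(asymptotic_D, L):
    """Delete the chiral functions of range L+1 from the asymptotic L-loop dilatation operator.
    asymptotic_D maps argument tuples of chi(...) to coefficients (placeholder values below)."""
    return {args: c for args, c in asymptotic_D.items() if chi_range(args) != L + 1}

# four-loop toy example: the range-five structures are exactly those containing both 1 and 4
D4_toy = {(2, 4, 1, 3): 'c1', (4, 1, 2, 3): 'c2', (1, 3, 2): 'c3', (2, 1): 'c4'}
print(drop_maximal(D4_toy, 4))   # {(1, 3, 2): 'c3', (2, 1): 'c4'}
```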
In summary, when the order is critical we can subtract all the contributions of range-$(L+1)$ supergraphs without the need for explicit diagrammatic computations. This result makes the subtraction step nearly trivial, reducing the original computation to the analysis of wrapping diagrams, and is a consequence of the use of ${\mathcal{N}}=1$ superspace techniques. To get an idea of the resulting simplification, note that without the results of Section \[superspace\] more than one hundred range-five diagrams would have to be calculated explicitly in the four-loop case.
Now we can apply all this machinery to the four-loop case to find the exact four-loop anomalous dimension of the Konishi operator.
The four-loop case
------------------
We applied the general procedure that we have presented in the previous section to the four-loop case in [@us; @uslong]. To do this, we started from the expression of the four-loop component of the asymptotic dilatation operator, which was given in the permutation basis in [@Beisert:2007hz], and we wrote it in the chiral basis (\[chistruc\]). Then, we subtracted the range-five contributions by deleting all the terms with a range-five chiral function, i.e. all those with both 1 and 4 among the arguments. Afterwards, we added the contribution from wrapping supergraphs. We will not present here the details of the computation, which can be found in [@us; @uslong]. Instead, we prefer to stress the most interesting features of the procedure.
- First of all, the asymptotic dilatation operator depends on a set of coefficients that parameterize the behaviour under similarity transformations. An example is given by the $\epsilon_{2a}$ coefficient at three loops in (\[Duptothree\]). Such coefficients do not affect the spectrum, and depend on the renormalization scheme. Since they are non-physical, they cannot alter the spectrum even in the finite-length case, and hence the contributions proportional to them from the subtraction and the wrapping terms must combine in such a way that the final eigenvalues of the exact dilatation operator do not depend on them. However, the wrapping supergraphs must be calculated in a particular renormalization scheme, and thus their dependence on the similarity coefficients is hidden. This is why the values of the relevant coefficients must be determined explicitly in the chosen scheme, through the computation of a subset of the range-five diagrams.
- Special care must be dedicated to the listing of all the possible wrapping supergraphs, following [@Sieg:2005kd]. It is natural to organize them starting from the completely chiral ones. There are three of them, shown in Figure \[diagrams-chi\]. Their action on the length-four subsector can still be described in terms of the standard chiral functions, after the identification of the first and the fifth lines in the operator, as $$\label{chiMr4}
\begin{aligned}
W_{\mathrm{chiral}}^{(1)}\quad&\sim\quad\chi(2,4,1,3){~,}\\
W_{\mathrm{chiral}}^{(2)}\quad&\sim\quad\chi(4,1,2,3){~,}\\
W_{\mathrm{chiral}}^{(3)}\quad&\sim\quad\chi(4,3,1,2){~.}\end{aligned}$$
We then consider wrapping diagrams with vector interactions. Since we work at the critical order they can all be drawn so that the wrapping line is a vector one. Thus, any supergraph of this kind can be obtained from one of the chiral structures with range up to three, by adding the right number of vectors in all the possible ways. The cancellation result of Section \[superspace\] reduces the number of relevant contributions. As an example, we list the relevant supergraphs with chiral structure $\chi(2,1)$ in Figure \[diagrams-21\].
- Another non-trivial aspect of the procedure concerns the symmetry factors for wrapping diagrams. The best way to draw a wrapping graph is on the surface of a cylinder, with one of the bases representing the composite operator. Such a representation is suggested by the cyclicity of the trace, and is useful to analyze the behaviour under a parity transformation, which reverses the order of the fields in the chain, thus leaving two-impurity states of the $SU(2)$ sector unchanged. When a diagram is not symmetric, we can account for its reflection by simply doubling its contribution.
Once we have listed all the possible supergraphs, we perform the D-algebra [@Gates:1983nr] and obtain a set of momentum integrals that can be computed, for example, by means of the Gegenbauer Polynomial $x$-space technique (GPXT) [@Chetyrkin:1980pr]. This method is particularly effective in our case because the insertion of the composite operator, being a vertex from which a large number of lines start, is typically a very good candidate for the root vertex [@Chetyrkin:1980pr; @usfive], whose wise choice can simplify the calculation considerably. As a consequence, trying to extract anomalous dimensions from the two-point function would be much more difficult using the GPXT. Moreover, in the calculation of anomalous dimensions we only need the divergent parts of diagrams. This fact results in a further simplification, since it allows us to neglect the exponential factor depending on the external momentum, which does not affect the ultraviolet behaviour of the integral. This is equivalent to setting the external momentum to zero, so a cutoff $R$ on the radial integrations is required to avoid the appearance of infrared divergences [@Chetyrkin:1980pr]. In this way, we reduce by one the number of infinite summations, and we obtain integrands that are simple monomials in the radial variables, free of Bessel functions. In more complicated situations, where more than one power of the same momentum appears in the numerator, an additional regularization must be introduced to treat correctly the dimensional continuation of the traceless products that appear, as discussed in detail in [@uslong].
Proceeding in this way, we were able to compute the exact correction to the four-loop dilatation operator on the length-four subsector. The linear combinations of the basis that renormalize multiplicatively are found to be $$\begin{aligned}
&\mathcal{O}_{\mathrm{protected}}=\mathcal{O}_{4,1}+2\mathcal{O}_{4,2}=\frac{1}{2}[3\,\mathrm{tr}(\phi\{Z,\phi\}Z)-\mathrm{tr}(\phi[Z,\phi]Z)] {~,}\\
&\mathcal{O}_K={\mathcal{O}_{4,1}}-{\mathcal{O}_{4,2}}=\mathrm{tr}(\phi[Z,\phi]Z) {~.}\end{aligned}$$ The first one is protected, whereas the second one, which is a Konishi descendant, corresponds to the four-loop eigenvalue $$\gamma_4=-2496+576\zeta(3)-1440\zeta(5) {~.}$$ Restoring the lower-order components, we can write the exact expression for the Konishi anomalous dimension up to four loops $$\label{finalgamma}
\gamma=12\lambda-48\lambda^2+336\lambda^3+\lambda^4(-2496+576\zeta(3)-1440\zeta(5))
{~.}$$ The most evident feature of this result is the presence of the $\zeta(5)$ term, which comes entirely from wrapping interactions and which cannot arise in the asymptotic regime, where the only allowed transcendental term is $\zeta(3)$.[^1] The appearance of these transcendental functions is expected from the structure of wrapping integrals and the Gegenbauer Polynomial $x$-space technique. Moreover, our exact result ruled out previous conjectures based on analogies with the Hubbard model [@Rej:2005qt] or the BFKL equation [@Lipatov:1976zz; @Kuraev:1977fs; @Balitsky:1978ic; @Kotikov:2007cy].
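For orientation only (a numerical evaluation of ours, nothing more), the size of the four-loop coefficient in (\[finalgamma\]), and in particular of its $\zeta(5)$ part, which the text attributes entirely to wrapping, is easily quantified:

```python
from mpmath import mp, zeta

mp.dps = 15
gamma4 = -2496 + 576 * zeta(3) - 1440 * zeta(5)   # four-loop coefficient in (finalgamma)
print(gamma4)                                     # approximately -3296.8
print(-1440 * zeta(5))                            # the zeta(5) piece, about -1493.2
```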
The exact value of the anomalous dimension was later confirmed by an independent computation performed on the string side and based on the Lüscher approach [@Bajnok:2008bm]. This is a highly non-trivial check for both of the procedures and for the AdS/CFT correspondence itself. Afterwards, a further check was obtained from a computer-based perturbative computation in the component-field formalism [@Velizhanin:2009zz], which has also been extended to take non-planar contributions into account [@Velizhanin:2009gv]. The comparison between our approach and the one used in [@Velizhanin:2009zz] shows clearly the power of integrability and superspace techniques: we only had to compute explicitly fewer than fifty supergraphs, whereas the component-field approach involves more than 130 000 diagrams.
Finally, the wrapping correction to the Konishi anomalous dimension was obtained also from the Y-system in [@Gromov:2009tv], thus representing the first explicit check on the validity of the new proposal.
The five-loop case
------------------
The four-loop analysis can be extended to five loops in a straightforward way to compute wrapping effects on the length-five operators $$\label{Opbasis5}
{\mathcal{O}_{5,1}}=\mathrm{tr}(\phi Z\phi ZZ){~,}\qquad{\mathcal{O}_{5,2}}=\mathrm{tr}(\phi\phi ZZZ) {~.}$$ With this choice for the length, order five is again the critical perturbative order for finite-size contributions. All the details of the computation have been presented in [@usfive].
Some care must be dedicated to the determination of the most general form of the asymptotic operator $\mathcal{D}_5$, in order to fully parameterize the behaviour under similarity transformations, so that explicit computations in any renormalization scheme are possible. For a detailed description of this procedure and of the derivation of $\mathcal{D}_5$ from the integrability hypothesis, see [@Beisert:2004ry; @usfive]. In the five-loop case, we must subtract the contribution of range-six interactions. This task can be accomplished again through the simple cancellation of the range-six chiral functions in the expression of the asymptotic five-loop dilatation operator.
A difference with respect to the four-loop case is that now not all of the lower-range structures appear in independent wrapping diagrams. In particular, $\chi(1,2,4)$ and $\chi(2,1,4)$ lead to the same wrapping diagrams. Similarly, $\chi(1,3)$ is equivalent to $\chi(1,4)$. Hence, an independent set of structures is made of $$\begin{gathered}
\chi(2,4,1,3){~,}\ \chi(3,2,1,4){~,}\ \chi(1,2,3,4){~,}\ \chi(1,4,3,2){~,}\ \chi(1,3,2){~,}\\
\chi(2,1,3){~,}\ \chi(1,2,3){~,}\ \chi(2,1,4){~,}\ \chi(2,1){~,}\ \chi(1,4){~,}\ \chi(1) {~.}\end{gathered}$$ Putting all the contributions together, the total wrapping correction to the five-loop dilatation operator can be assembled as a $2\times2$ matrix on the length-five subsector. The linear combination of the basis operators $${\mathcal{O}_{5,1}}'=\mathrm{tr}(\phi Z\phi ZZ)+\mathrm{tr}(\phi\phi ZZZ){~,}$$ is protected. On the contrary, the combination $$\label{op5mult}
{\mathcal{O}_{5,2}}'=\mathrm{tr}(\phi Z\phi ZZ)-\mathrm{tr}(\phi\phi ZZZ){~,}$$ is not protected, and its exact anomalous dimension up to five loops is $$\begin{gathered}
\label{fullgamma5}
\gamma=8\lambda-24\lambda^2+136\lambda^3-8[115+16\zeta(3)]\lambda^4 \\
+[6664+1152\zeta(3)+3840\zeta(5)-2240\zeta(7)]\lambda^5 {~.}\end{gathered}$$ As in the four-loop case, the maximum-transcendentality term in the final result, which in this case is $\zeta(7)$, is generated entirely by wrapping effects. The $\zeta(5)$ and $\zeta(3)$ terms get contributions also from the dressing phase of the asymptotic regime.
The result confirms the computation of [@Beccaria:2009eq], based on a conjecture. Moreover, it agrees [@usfive] with the prediction of the Y-system, thus representing a new independent check for it.
It would be very interesting to be able to compute also the five-loop exact anomalous dimension of the Konishi descendant, which we studied at four loops in the previous section and which has length four. In fact, its value has been obtained in [@Bajnok:2009vm] by means of the Lüscher approach and in [@Arutyunov:2010gb; @Balog:2010xa] using the Thermodynamic Bethe Ansatz, so it would be important to check the result against a direct perturbative computation based on pure field-theoretical techniques. At the moment, however, such a computation seems to be out of reach, even with the help of superspace methods, since, when the condition of criticality of the perturbative order no longer holds, the number of possibly relevant diagrams greatly increases, and the D-algebra becomes much more complicated. Nevertheless, we think that such difficulties may be overcome if new and more powerful cancellation identities, similar to those described in Section \[superspace\], are found.
Wrapping in the $\beta$-deformed ${\mathcal{N}}=4$ SYM {#betadef}
======================================================
We now move to the analysis of wrapping effects in the $\beta$-deformed ${\mathcal{N}}=4$ SYM [@betadef; @Fiamberti:2008sn]. We will see that the features of this theory allow us to perform computations at much higher orders.
General considerations
----------------------
The general procedure described in Section \[undeformed\] to deal with finite-size effects can be applied in the deformed case, too. In order to do this, we need the expressions for the components of the asymptotic dilatation operator. It is easy to see that this operator can be found directly from its undeformed counterpart, without the need for full computations from scratch. Let us consider in fact the deformed permutations [@Berenstein:2004ys] $$\operatorname{\mathbf{P}}_{i,j}=\frac{1}{2}\left[{\mathds{1}}_{i,j}+\sigma_i^3\sigma_j^3+q^2\,\sigma_i^+\sigma_j^-+\bar{q}^2\,\sigma_i^-\sigma_j^+\right]
{~,}\qquad\qquad q\equiv e^{i\pi\beta} {~.}\label{defperm}$$ where the $\sigma_k^j$ are the Pauli matrices at chain site $k$ and $\sigma_k^\pm=\sigma_k^1\pm i\sigma_k^2$, which reduce to the standard ones in the undeformed limit $\beta\to0$. We can use them to build a set of basis operators similar to (\[permstrucdef\]) $$\label{defbasisops}
{{}\boldsymbol{\{a_1,\dots,a_n\}}{}}=\sum_{r=0}^{L-1}\operatorname{\mathbf{P}}_{a_1+r,\;a_1+r+1}\cdots\operatorname{\mathbf{P}}_{a_n+r,\;a_n+r+1}
This definition is suggested by the fact that the deformation affects only the three-scalar interactions, and is useful because it allows us to write the deformed chiral functions ${\boldsymbol{\chi}(\ldots)}$, describing the chiral structure of Feynman supergraphs in the deformed theory, as linear combinations of the operators (\[defbasisops\]) with the same coefficients that appear in (\[chistruc\]). The function ${\boldsymbol{\chi}(1)}$ is still the building block for all the possible non-trivial chiral structures, and all the dependence of a diagram on $\beta$ is encoded in the corresponding chiral function. Hence, the deformed dilatation operator must be writable in the basis of the ${\boldsymbol{\chi}(\ldots)}$ functions with coefficients that do not depend on $\beta$. Since the chiral functions reduce to their undeformed counterparts when $\beta\to0$, we can conclude that such coefficients must be exactly the same as in the undeformed case.
So we can obtain the deformed dilatation operator simply by replacing every chiral function in the expansion of the standard dilatation operator with its deformed version. In particular, we can deform the result we found in the previous section to compute wrapping effects at four and five loops on two-impurity states of length four and five respectively. The four-loop result has been confirmed recently using the Lüscher technique in [@Ahn:2010yv]. The resulting expressions are not very enlightening, being complicated functions of $\beta$, and so we will not present them here. Instead, it is more interesting to focus on the class of single-impurity operators, which were protected by supersymmetry in standard ${\mathcal{N}}=4$ SYM, but are no longer protected in the $\beta$-deformed case. These states are in fact easier to study than two-impurity ones, and thus we will be able to push perturbative computations up to higher orders. It is useful to summarize the main features of single-impurity operators that give rise to such simplifications:
- For every value of $L$, only one single-impurity operator $\mathcal{O}_L=\operatorname{tr}(\phi Z^{L-1})$ of length $L$ exists, so that there is no mixing under renormalization and Feynman supergraphs reduce to numbers instead of matrices.
- We do not need the explicit expression for the asymptotic dilatation operator: the asymptotic contribution to the anomalous dimension can be extracted from the all-loop formula [@Mauri:2005pa] $$\label{single-all-orders}
\gamma(\mathcal{O}_\text{as})=-1+\sqrt{1+4\lambda\Big\vert q-\frac{1}{q}\Big\vert^2}=-1+\sqrt{1+16\lambda\sin^2(\pi\beta)} {~,}$$ as $$\gamma^{as}_L= \alpha_L\,\lambda^L \sin^{2L}(\pi\beta){~,}\qquad\alpha_L=-(-8)^L \frac{(2L-3)!!}{L!} {~.}$$ A quick symbolic check of this expansion is sketched right after this list.
- The form of the possible chiral structures of Feynman graphs is restricted: apart from completely chiral wrapping diagrams, the only allowed functions are those of the form ${\boldsymbol{\chi}(1,2,\ldots,k)}$ and their parity reflections. Note that, besides reducing the number of possible contributions, this means that, in contrast with the two-impurity case, the most complicated structures (especially those with two $\chi(1)$ blocks acting directly on the composite operator) do not contribute here.
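As promised above, here is a quick symbolic check of ours (not taken from the references) that the expansion of (\[single-all-orders\]) in $\lambda$ reproduces the quoted coefficient $\alpha_L$:

```python
import sympy as sp

lam, beta = sp.symbols('lambda beta', positive=True)
s2 = sp.sin(sp.pi * beta) ** 2
gamma_as = sp.sqrt(1 + 16 * lam * s2) - 1          # eq. (single-all-orders)

expansion = sp.series(gamma_as, lam, 0, 6).removeO()
for L in range(1, 6):
    alpha_L = -(-8) ** L * sp.factorial2(2 * L - 3) / sp.factorial(L)
    assert sp.simplify(expansion.coeff(lam, L) - alpha_L * s2 ** L) == 0
    print(L, alpha_L)          # alpha_L = 8, -32, 256, -2560, 28672 for L = 1..5
```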
Single-impurity operators at higher loops
-----------------------------------------
We are now ready to undertake the general discussion on single-impurity states, following [@betadef; @Fiamberti:2008sn]. More precisely, we will give a general formula for the wrapping correction to the $L$-loop anomalous dimension of the length-$L$ single-impurity operator $\mathcal{O}_L$.
Thanks to the validity of the general supergraph cancellation identities of Section \[superspace\], and given the restrictions on the possible chiral structures for single-impurity states, we conclude that only two diagrams need to be considered for the subtraction of range-$(L+1)$ interactions, the $S_L$ of Figure \[SL\] and its reflection. In the deformed theory, the result of a supergraph will in general be complex, since every vertex gets a factor of $q$ or $\bar{q}$ depending on the order of the $\phi$, $Z$ and $\psi$ superfields. A parity transformation reverses this order, thus turning each $q$ into a $\bar{q}$ and vice-versa. Hence, the parity reflection of a supergraph is actually the complex conjugate of the original diagram, and their total contribution is real as expected. The structure of $S_L$ is simple enough to allow the D-algebra to be completed for any $L$, and we find $$\begin{aligned}
{S^{(L)}}+{\bar{S}^{(L)}}&\rightarrow
(g^2 N)^L {K_{1}^{(L)}}\,[{\boldsymbol{\chi}(1,2,\dots,L)}+
{\boldsymbol{\chi}(L,\dots,2,1)}]\\
&\phantom{{}\rightarrow{}}
=(g^2 N)^L {K_{1}^{(L)}}\,
(q-\bar{q})^2\left[q^{2(L-1)}+\bar{q}^{2(L-1)}
\right]{~,}\end{aligned}$$ where ${K_{1}^{(L)}}$ is the $L$-loop integral of Figure \[JL\].
Similarly, we have to consider a single completely chiral wrapping supergraph, the $W_0^{(L)}$ of Figure \[WL\]. In this case, the D-algebra gives $$\begin{aligned}
{W_{0}^{(L)}}+{\bar{W}_{0}^{(L)}}&\rightarrow(g^2 N)^L {K_{2}^{(L)}}\,[{\boldsymbol{\chi}(L,1,2,\dots,L-1)}+{\boldsymbol{\chi}(1,L,L-1,\dots,2)}]\\
&\phantom{{}\rightarrow{}}=(g^2 N)^L {K_{2}^{(L)}}\,
(q-\bar{q})^2\left[q^{2(L-1)}+\bar{q}^{2(L-1)}
\right]{~,}\end{aligned}$$ where the $L$-loop integral ${K_{2}^{(L)}}$ is shown in Figure \[KL\].
Coming to wrapping supergraphs with vector interactions: because of the restrictions on the chiral functions, the number of vector lines now uniquely identifies the structure. So we can denote by ${W_{k}^{(L)}}$ the sum of all the diagrams with $k$ vectors, corresponding to the structure ${\boldsymbol{\chi}(1,2,\ldots,L-k)}$. From the cancellation identities of Section \[superspace\] we know that vector propagators can be attached to outgoing scalar lines only at double-vector vertices. This observation greatly reduces the number of possibilities, which would otherwise grow with the number of vectors, and in the end we find that only four diagrams can be relevant for each chiral structure, with the exception of ${\boldsymbol{\chi}(1)}$, which is associated with two diagrams only. These considerations are summarized in Figure \[defwrapgraphs\].
Once again it is possible to complete the D-algebra for generic $L$. Introducing the deformation factors $$\label{colfact}
{C_{j}^{(L)}}=(q-\bar{q})^2\left[q^{2(L-j-1)}+\bar{q}^{2(L-j-1)}\right]=-8\sin^2(\pi\beta)\cos[2\pi\beta(L-j-1)] {~,}$$ for $j\in\{0,\ldots,L-1\}$, we can write the contribution from each class as $$\label{defres}
\begin{aligned}
({W_{0}^{(L)}}+{\bar{W}_{0}^{(L)}})-({S^{(L)}}+{\bar{S}^{(L)}})&\to(g^2 N)^L\ {C_{0}^{(L)}}({K_{2}^{(L)}}-{K_{1}^{(L)}}){~,}\\
\vdots\\
{W_{j}^{(L)}}+{\bar{W}_{j}^{(L)}}&\to2(g^2 N)^L\ {C_{j}^{(L)}}{I_{j+1}^{(L)}}{~,}\\
\vdots\\
{W_{L-1}^{(L)}}+{\bar{W}_{L-1}^{(L)}}&\to-(g^2 N)^L\ {C_{L-1}^{(L)}}({K_{2}^{(L)}}-{K_{1}^{(L)}}){~,}\end{aligned}$$ where the subtraction of ${S^{(L)}}$ has been combined with the diagram ${W_{0}^{(L)}}$. The integrals ${I_{j}^{(L)}}$ are shown in Figure \[ILk\], where the pair of arrows indicates that the scalar product of the corresponding momenta appears in the numerator. The integrals $I_j^{(L)}$ satisfy the relation $$\label{ILrel}
{I_{j}^{(L)}}=-{I_{L-j+1}^{(L)}}
{~,}$$ and thus the total number of integrals that we must compute explicitly is halved. Moreover, it is useful to rewrite $({K_{2}^{(L)}}-{K_{1}^{(L)}})$ in terms of ${I_{1}^{(L)}}$ and of the integral ${P^{(L)}}$ of Figure \[PL\] as $${K_{2}^{(L)}}-{K_{1}^{(L)}}={P^{(L)}}-2{I_{1}^{(L)}}
{~.}$$
We can at last collect all the terms to obtain the correction to the asymptotic anomalous dimension $$\gamma_L(\mathcal{O}_{L})=\gamma_L^{as}+\delta\gamma_L(\mathcal{O}_{L})
{~.}$$ Since ${P^{(L)}}$ and all the ${I_{j}^{(L)}}$ are free of subdivergences, we can write our final result as $$\label{wrapcorr}
\delta\gamma_L(\mathcal{O}_{L})=-2L(g^2 N)^L\lim_{\varepsilon\rightarrow0}\varepsilon\Bigg[({C_{0}^{(L)}}-{C_{L-1}^{(L)}}){P^{(L)}}(\varepsilon)-2\!\sum_{j=0}^{[\frac{L}{2}]-1}\!({C_{j}^{(L)}}-{C_{L-j-1}^{(L)}}){I_{j+1}^{(L)}}(\varepsilon)\Bigg]
{~.}$$
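Purely as an illustration of how the pieces fit together (our sketch, not code used in [@Fiamberti:2008sn]), the assembly of (\[wrapcorr\]) from the deformation factors (\[colfact\]) and the simple poles of the integrals can be written as a short routine; the pole coefficients themselves are external inputs that would have to come from an explicit evaluation (e.g. by GPXT), so they appear here as hypothetical arguments:

```python
import math

def C(L, j, beta):
    """Deformation factor C_j^(L) of (colfact)."""
    return -8 * math.sin(math.pi * beta) ** 2 * math.cos(2 * math.pi * beta * (L - j - 1))

def delta_gamma(L, beta, g2N, P_pole, I_pole):
    """Wrapping correction (wrapcorr).  P_pole and I_pole[j], j = 1..L//2, are the coefficients
    of the simple 1/eps poles of P^(L) and I_j^(L) (hypothetical inputs, to be supplied)."""
    bracket = (C(L, 0, beta) - C(L, L - 1, beta)) * P_pole
    bracket -= 2 * sum((C(L, j, beta) - C(L, L - j - 1, beta)) * I_pole[j + 1]
                       for j in range(L // 2))
    return -2 * L * g2N ** L * bracket

# usage sketch: delta_gamma(4, 0.3, g2N, P_pole, {1: ..., 2: ...}) with poles from an explicit evaluation
```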
The computation of the wrapping correction has thus been reduced to the calculation of the divergent parts of the $[L/2]$ integrals ${I_{j}^{(L)}}$ and of ${P^{(L)}}$. For the latter, the result is known as a function of $L$, but we do not have a similar solution for the ${I_{j}^{(L)}}$. However, the value of ${I_{j}^{(L)}}$ can be computed exactly for any fixed values of $L$ and $j$, by means of a set of recurrence relations obtained from the triangle rule for integration by parts [@Chetyrkin:1981qh; @Broadhurst:1985vq; @Fiamberti:2008sn]. A different set of relations can be found by applying the GPXT directly in momentum space, as explained in [@Fiamberti:2008sn]. As a particular case of the general result (\[wrapcorr\]), we can see that in the three-loop case the correction to the asymptotic result vanishes, in agreement with the explicit computation of [@betadef]. This is likely to be a consequence of the particularly simple structure of the three-loop integrals, since there is no apparent reason why $\mathcal{O}_3$ should be protected against finite-size corrections. In fact, we expect that a non-trivial correction should arise at four loops. The corresponding computation, however, involves a non-critical perturbative order, and would thus be very difficult, for the same reasons explained in Section \[undeformed\] for the case of the Konishi operator at five loops.
We performed the explicit calculations of the integrals ${I_{j}^{(L)}}$ up to $L=11$ [@Fiamberti:2008sn]. In all cases, the finite-size correction to the asymptotic anomalous dimension contains only transcendental terms, all of the form $\zeta(2L-2k-1)$ with $k\in\{1,2,\ldots,[L/2]\}$. The rational part of the answer is hence protected against wrapping corrections at the critical order! In particular, the cancellation of the lower-transcendentality contributions is a non-trivial feature which was absent in the case of two-impurity states. As an example, the five-loop anomalous dimension of $\mathcal{O}_5$ depends only on $\zeta(7)$ and $\zeta(5)$, whereas the five-loop wrapping correction for two-impurity operators in the undeformed theory contained an additional $\zeta(3)$ term. As for the term with maximum transcendentality, our results suggest that it comes entirely from the integral $P^{(L)}$, which in turn is produced only[^2] by the diagrams in the classes ${W_{1}^{(L)}}$ and ${W_{L-2}^{(L)}}$. We believe that this definite transcendentality pattern will be preserved in general, as long as we restrict ourselves to the critical order. A hint about the transcendentality properties of quantities that involve computations beyond the critical order is offered by the recent calculation, based on the Lüscher technique, of the five-loop anomalous dimension of the single-impurity operator $\mathcal{O}_4$ [@Bajnok:2009vm]. In the final result, the wrapping correction is still made only of transcendental terms, namely $\zeta(3)$, $\zeta(5)$, $\zeta(7)$ and $\zeta(3)^2$. So one could guess that the wrapping corrections on single-impurity states always involve only transcendental terms. Unfortunately, as already explained, the computation of the results of [@Bajnok:2009vm; @Arutyunov:2010gb], and of other non-critical quantities, by means of direct field-theoretical techniques seems out of reach at the moment.
The anomalous dimensions of the $\mathcal{O}_L$ operators in the case of even $L$ and $\beta=1/2$ have been found also by means of the Lüscher technique applied to the undeformed theory [@Gunnesson:2009nn; @Beccaria:2009hg], exploiting the correspondence between the actual deformed case and the unphysical single-impurity states with momentum $p=\pi$. The results agree with our calculations.
A proposal for the general description of wrapping effects in the deformed theory, possibly as an adaptation of the Y-system, has not been formulated yet. If such a solution were available, it might be checked against the whole series of single-impurity anomalous dimensions. We think that, thanks to the simplifications deriving from the possibility of working with single-impurity operators, the deformed theory may be a preferable environment for deep investigations of the validity of the recent proposals for the description of the full spectrum.[^3]
Comments
========
In all the cases that we analyzed, ${\mathcal{N}}=1$ superspace techniques proved to be a very powerful tool for the computation of wrapping corrections to the anomalous dimensions of composite operators. First of all, the use of Feynman supergraphs allowed us to find useful cancellation identities coming from supersymmetry. This, together with the fact that every supergraph encodes the information on a large number of component-field diagrams, greatly reduced the number of possible terms. In particular, using supergraphs, one never explicitly encounters fermionic matter interactions and their associated $\gamma$-matrix manipulations. The standard multi-loop integrals produced after D-algebra can be computed by means of the Gegenbauer Polynomial $x$-space technique, whose application is greatly simplified by the fact that we are only interested in divergent contributions. This makes it possible to find analytic results at very high loop orders.
Direct perturbative computations based on field-theoretical techniques are important because the results can serve as tests for the proposals for the general description of finite-size effects by means of integrable systems. In fact, the recently found Y-system exactly reproduces our results at four and five loops. For the same reason, it would be interesting to extend such perturbative analyses to deal with wrapping effects beyond the critical order. The only known prediction for a non-critical quantity is the five-loop component of the Konishi anomalous dimension, found in computations based on the Lüscher approach and on the Thermodynamic Bethe Ansatz. The corresponding field-theoretical calculation appears to be very difficult at the moment, but the discovery of new and more powerful identities for supergraph cancellations may make it feasible. Thus, we expect that superspace techniques may still prove useful in the future for the analysis of wrapping effects.
In the case of the $\beta$-deformed theory, thanks to the existence of non-protected single-impurity operators, many more perturbative results are known, so that tests of new attempts at a general solution of the wrapping problem could be much deeper. This would require an extension to the $\beta$-deformed case of the recent proposals based on the thermodynamic Bethe ansatz approach and on the Y-system.
Acknowledgements {#acknowledgements .unnumbered}
================
This work has been supported in part by INFN and by the Italian MIUR-PRIN contract 20075ATT78.
[^1]: Based on this, in the recent paper [@Kataev:2010tm] the absence of a $\zeta(5)$ term in the results of perturbative quenched QED was interpreted as a sign of the absence of wrapping interactions.
[^2]: The classes ${W_{0}^{(L)}}$ and ${W_{L-1}^{(L)}}$ do not contain terms proportional to $P^{(L)}$. In the expression they are cancelled in combination with the $j=0$ term of the sum.
[^3]: After the appearance of the first preprint version of this review, the paper [@Gromov:2010dy] has been presented, in which a Y-system for the $\beta$-deformed theory was proposed. Starting from it, the authors were able to reproduce all of our perturbative results for single-impurity states and to confirm our conjecture about the transcendentality pattern of the anomalous dimensions. Moreover, they present a generating function from which one can extract the general result for any values of $\beta$ and $L$.
---
abstract: |
$f:\cup{{\mathcal A}}\to {\rho}$ is called a [*conflict free coloring of the set-system ${{\mathcal A}}$ (with ${\rho}$ colors)*]{} if $$\forall A\in {{\mathcal A}}\,\, \exists\, {\zeta}<{\rho}\, (\,|A\cap
f^{-1}\{{\zeta}\}|=1\,).$$ The [*conflict free chromatic number*]{} ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}})$ of ${{\mathcal A}}$ is the smallest $\rho$ for which ${{\mathcal A}}$ admits a conflict free coloring with ${\rho}$ colors.
${{\mathcal A}}$ is a $(\lambda,\kappa,\mu)$-system if $|{{\mathcal A}}| = \lambda$, $|A| = \kappa$ for all $A \in {{\mathcal A}}$, and ${{\mathcal A}}$ is ${\mu}$-almost disjoint, i.e. $|A\cap A'|<{\mu}$ for distinct $A,
A'\in {{\mathcal A}}$. Our aim here is to study $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\mu)
= \sup \{{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}): {{\mathcal A}}\mbox{ is a }
(\lambda,\kappa,\mu)\mbox{-system} \}$$ for $\lambda \ge \kappa
\ge \mu$, actually restricting ourselves to $\lambda \ge \omega$ and $\mu \le \omega$.
For instance, we prove that
- for any limit cardinal $ \kappa$ (or $\kappa = \omega$) and integers\
$n \ge 0,\,k >
0$, GCH implies $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{+n},t,k+1) =\left\{
\begin{array}{lll}
\kappa^{+(n+1-i)}&\mbox{ if $\,\,i\cdot k < t \le (i+1)\cdot k\,,$}
\\&\makebox[60pt]{}\mbox{$i = 1,...,n$;}\\
{}\\
\kappa& \mbox{ if $\,\,(n+1)\cdot k < t\,$}\,;
\end{array}
\right.$$
- if $\lambda \ge \kappa \ge \omega > d > 1\,,$ then $\,\lambda < \kappa^{+\omega}$ implies ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,d) <
\omega$\
and $\lambda \ge \beth_\omega(\kappa)\,$ implies ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,d) = \omega\,$;
- GCH implies $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega) \le \omega_2$ for $\lambda \ge \kappa \ge
\omega_2$ and\
V=L implies $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega) \le
\omega_1$ for $\lambda \ge \kappa \ge \omega_1\,$;
- the existence of a supercompact cardinal implies\
the consistency of GCH plus\
${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\aleph_{\omega+1},\omega_1,\omega)= \aleph_{\omega+1}$ and\
${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\aleph_{\omega+1},\omega_n,\omega) = \omega_2$ for $2 \le n
\le \omega\,$ ;
- CH implies $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_1,\omega,\omega) = {\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_1,\omega_1,\omega) =
\omega_1$, while\
$MA_{\omega_1}$ implies $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_1,\omega,\omega) = {\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_1,\omega_1,\omega) =
\omega\,$.
address:
- ' Alfr[é]{}d R[é]{}nyi Institute of Mathematics, Budapest, Hungary '
- ' Alfr[é]{}d R[é]{}nyi Institute of Mathematics, Budapest, Hungary '
- ' Alfr[é]{}d R[é]{}nyi Institute of Mathematics, Budapest, Hungary '
- Eötvös University of Budapest
author:
- András Hajnal
- István Juhász
- Lajos Soukup
- Zoltán Szentmiklóssy
date: 'March, 2010.'
---
Introduction
============
If ${{\mathcal A}}$ is a set-system and $\rho$ is a cardinal then a function $f : \cup {{\mathcal A}}\to \rho$ is called a [*proper coloring*]{} of ${{\mathcal A}}$ with ${\rho}$ colors if $f$ takes at least $2$ values on each $A \in {{\mathcal A}}$. The smallest $\rho$ for which ${{\mathcal A}}$ admits a proper coloring with $\rho$ colors is the [*chromatic number of*]{} ${{\mathcal A}}$ and is denoted by $\chi({{\mathcal A}})$. The chromatic numbers of various set-systems, in particular almost disjoint ones, had been systematically studied by Erd[ő]{}s and Hajnal and others in [@EH3], [@EH1], and [@EH2].
A function $f:\cup{{\mathcal A}}\to {\rho}$ is called a [*conflict free coloring*]{} of ${{\mathcal A}}$ with ${\rho}$ colors if $$\forall A\in {{\mathcal A}}\ \exists {\zeta}<{\rho}\ (|A\cap
f^{-1}\{{\zeta}\}|=1).$$ We say that $f$ is a [*weak conflict free coloring* ]{} of ${{\mathcal A}}$ if in the above definition the assumption ${\operatorname{dom}}(f)=\cup{{\mathcal A}}$ is weakened to ${\operatorname{dom}}(f){\subset}\cup{{\mathcal A}}$.
The [*conflict-free chromatic number*]{} and the [*weak conflict-free chromatic number*]{} of a set-system ${{\mathcal A}}$, denoted by $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}) \mbox{ and }\;w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}})$$ respectively, are defined as the minimum number of colors needed for a conflict free or a weak conflict-free coloring of ${{\mathcal A}}$, respectively.
Conflict-free colorings of hypergraphs, that is of systems of finite sets, were first studied in Cheilaris [@CHE] and Pach-Tardos [@pach]. Earlier, conflict-free colorings were mainly considered for some concrete hypergraphs, usually defined by geometric means [@E]. János Pach suggested to us that it would be worthwhile to study the conflict free colorings of almost disjoint transfinite set systems. It took little time to convince us.
Before going on with the story we state a few very elementary facts. Note first that $\chi({{\mathcal A}})$ is only defined if every member of ${{\mathcal A}}$ has at least two elements, so from here on this is assumed for every set-system ${{\mathcal A}}$.
1. $\chi({{\mathcal A}}) \leq {\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}})\leq {w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}})+1}$.
2. $\chi({{\mathcal A}})={\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}) $ provided $|A|\leq 3$ for all $A\in{{\mathcal A}}$.
3. For each $\kappa \geq \omega$ there exists a quadruple system ${{\mathcal A}}$ with $\chi({{\mathcal A}})=2$ and ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}})= \kappa$.
The first statement is trivial, the second follows from $2+2>3$. To see the third, let $${{\mathcal A}}= \{H\in \br \kappa;4;:\text{ $H$ contains
two even and two odd ordinals}\}.$$
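Although the results of this paper concern infinite systems, the definitions above are easy to test mechanically on finite set-systems; the following brute-force sketch (ours, purely illustrative) checks conflict-freeness and computes ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}$ for the edge set of $K_4$, in line with statement (2) above, since for $2$-element sets a conflict free coloring is the same as a proper one:

```python
from itertools import product

def is_conflict_free(colouring, sets):
    """colouring: dict vertex -> colour.  Conflict free: every set has a colour hit exactly once."""
    return all(any(sum(1 for v in A if colouring[v] == c) == 1
                   for c in set(colouring[v] for v in A))
               for A in sets)

def chi_cf(vertices, sets):
    """Brute-force conflict-free chromatic number of a finite set-system (exponential!)."""
    vertices = list(vertices)
    for rho in range(1, len(vertices) + 1):
        for colours in product(range(rho), repeat=len(vertices)):
            if is_conflict_free(dict(zip(vertices, colours)), sets):
                return rho

edges_K4 = [{a, b} for a in range(4) for b in range(a + 1, 4)]
print(chi_cf(range(4), edges_K4))      # 4
```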
For any cardinals $ {\mu}$ and ${\nu}$, the set system ${{\mathcal A}}$ is called [*$({\mu},{\nu})$-almost disjoint*]{} if $$|\cap{{\mathcal B}}|<\mu$$ whenever ${{\mathcal B}}\in \br {{\mathcal A}};{\nu};$. We simply write [*$\mu$-almost disjoint*]{} instead of $({\mu}, 2)$-almost disjoint.
A graph $G=\<V,E\>$ is called [*$({\mu},{\nu})$-almost disjoint*]{} iff the family $\{E(v):v\in V\}$ is $({\mu},{\nu})$-almost disjoint, where $E(v)=\{w\in V: \{v,w\} \in
E\}$. In [@EH1], Erdős and Hajnal proved, in 1966, that if $n <\omega$ and $G$ is an $(n,\omega_1)$-almost disjoint graph, then $\chi(G)\leq\omega$, which of course means $\chi(E)
\le \omega$. They tried to state a generalization of this result for set-systems consisting of finite sets, but failed. Such a generalization was found in the triple paper [@EH2] with B.Rothchild, where some results were proved for finitary $({\mu},{\nu})$-almost disjoint set-systems. In Part I we prove results for such set-systems that are improvements of the results of [@EH2]. The work started in [@EH2] was continued in the almost ninety page long triple paper [@EGH] of Erdős, Galvin and Hajnal. Although we could find some improvements of the results of this paper as well, we did not dare to start to investigate this methodically.
Our main objects of study will be the (weak) conflict free chromatic numbers of $(\lambda,\kappa,\mu)$-systems: ${{\mathcal A}}$ is a $(\lambda,\kappa,\mu)$-system if $|{{\mathcal A}}| = \lambda$, $|A| =
\kappa$ for all $A \in {{\mathcal A}}$, and ${{\mathcal A}}$ is ${\mu}$-almost disjoint. We shall always assume that $\lambda \ge \kappa \ge \mu$ and that $\lambda$ is infinite. These assumptions imply that if ${{\mathcal A}}$ is a $(\lambda,\kappa,\mu)$-system then $|\cup {{\mathcal A}}| \le
\lambda$, hence ${{\mathcal A}}$ has an isomorphic copy ${{\mathcal B}}{\subset}[\lambda]^\kappa$. Conversely, if $\mu < \omega$ then for every $\mu$-almost disjoint ${{\mathcal A}}{\subset}[\lambda]^\kappa$ we have $|{{\mathcal A}}| \le \lambda$.
Now, our basic definition is the following. Let $\psi$ be any one of the functions $\chi,\, {\operatorname{\mbox{${\chi}$}_{\rm CF}}}\,$, or $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}$.
\[def:psi\] For $\lambda \ge \kappa \ge \mu$ we set $$\psi(\lambda,\kappa,\mu)
= \sup \{\psi({{\mathcal A}}): {{\mathcal A}}\mbox{ is a }
(\lambda,\kappa,\mu)\mbox{-system} \}.$$
Let us point out certain basic properties of these. First, it is obvious that $\,\chi(\lambda,\kappa,\mu) \le
{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\mu)\,$ and $$w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\mu) \le {\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\mu) \le
w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\mu)+1\,.$$ Thus, although in some cases $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\mu)$ is much easier to handle than ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\mu)$, the results on the former reveal a lot of information about the latter. Second, it is immediate from their definitions that they are monotone increasing in their first and third variables.
Intuitively, it also seems plausible that they are monotone decreasing in their second variable: the larger the sets, the more room we have to color them appropriately. For $\chi(\lambda,\kappa,\mu)$ this is obvious and all our results confirm this for the other two as well. Alas, we do not have a formal proof of this, so we propose it as a conjecture.
\[co:conj\] If $\lambda \ge \kappa > \kappa' \ge \mu$ with $\lambda$ infinite, then $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\mu) \le {\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa',\mu)\,.$$
Third, we note that if $\mu = 1$, i.e. we deal with [*disjoint*]{} systems, then trivially $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,1) = 1$ and $\chi(\lambda,\kappa,1) = {\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,1) = 2$. Consequently, in what follows we always assume $\mu \ge 2$.
While working on this paper we found it useful to write ${[\lambda,\kappa,\mu]\to\rho }$ for the relation ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\mu) \le \rho$ and, analogously, ${[\lambda,\kappa,\mu]\to_w\rho }$ for the relation $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\mu) \le \rho$.
On one hand, the behavior of these symbols shows much similarity to the symbol $M(\lambda,\kappa,\mu)\rightarrow B(\rho)$, investigated in [@EH3], [@HJS1], and [@HJS2], meaning that every $(\lambda,\kappa,\mu)$-system has a $\rho$-transversal, i.e. a set $B$ that meets every element of ${{\mathcal A}}$ in a non-empty set of size $<{\rho}$. But the main reason for this apparent duplication of our notation is that certain variations of these arrow relations will turn out to be quite useful later.
The paper is naturally divided into three parts as follows:
Part I. $\lambda \ge \omega > \kappa \ge \mu$,
Part II. $\lambda \ge \kappa \ge \omega > \mu$,
Part III. $\lambda \ge \kappa \ge \omega = \mu$,
and the three parts are largely independent of each other. However closure arguments, in the “modern" disguise of elementary chains, have been extensively used in all three parts. This method was developed in the papers [@MIL; @EH3; @HJS1; @HJS2], the earlier ones naturally using different terminology.
The main result of Part I is theorem \[tm:gch\] that gives a full description of ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\mu)$ for this case in which $\kappa$ (and hence $\mu$) is finite. We also have ZFC results, for instance corollary \[cor:w1\] that states ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,2k,k+1) = \lambda$ for any $\lambda \ge \omega$ and $0 < k < \omega$. Of course, then conjecture \[co:conj\] would imply ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,k+1) = \lambda$ for $k < t < 2k$ as well. In corollary \[cor:odd\] we could prove this, with some effort, for “almost all" $\lambda$, namely those that are not successors of singular cardinals.
In Part II we first show that ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,d)$ is always countable, i.e. $[\lambda,\kappa,d] \to \omega$ holds, if $\kappa
\ge \omega
> d$. In fact we show something stronger that involves a modified arrow relation. To get this we first need the following notation.
If $f$ is a function and $A$ is any set, we let $$f[A]=\{f({\alpha}):{\alpha}\in A\cap {\operatorname{dom}}(f)\}$$ and
$$I_f(A)=\{{\xi} \in {\operatorname{ran}}(f): |A \cap f^{-1}\{\xi\}|=1\}.$$
Thus, $f$ is a weak conflict free coloring of a set system ${{\mathcal A}}$ exactly if $I_f(A) \ne \emptyset$ for all $A \in {{\mathcal A}}$. Keeping this in mind, we indeed define a strengthening of the relation $[\lambda,\kappa,\mu] \to \rho$ below.
Assume that $\lambda \ge \kappa \ge \rho \ge \omega$ and $\mu \le
\kappa$. Then $[\lambda,\kappa,\mu] \Rightarrow \rho$ denotes that there is a function $f : \cup {{\mathcal A}}\to \rho$ such that $|\rho
{\setminus}I_f(A)| < \rho$ holds for all $A \in {{\mathcal A}}$.
What we actually prove in theorem \[tm:korlatos\] is $[\lambda,\kappa,d] \Rightarrow \omega$ whenever $\kappa \ge
\omega > d$.
In [@EH3] it was proved that $M(\kappa,\kappa^{+n},d)
\rightarrow B((n+1)(d-1)+2)$ and that this is best possible assuming GCH. In Sections 5, 6, and 7 of Part II we prove analogous results for our symbols. In some sense, these chapters are the heart of our present paper. The results and their proofs seem more complicated than those from Part I, and there are a number of unsolved problems left.
By theorem \[tm:egeszresz\], if $m$ and $d$ are natural numbers and $\kappa$ is infinite, then $$\notag
w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{+m},\kappa,d) \le {\left\lfloor
{\frac{(m+1)(d-1)+1}2} \right\rfloor+1}.$$ From the other side, theorems \[tm:step3’\] and \[tm:step3\] yield $$w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_m(\kappa),\kappa,2) \ge \left\lfloor \frac
{m}{2}\right\rfloor+2\,$$ and $$w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_m(\kappa),\kappa,2\ell+1) \ge (m+1)\cdot \ell +
1\,,$$ respectively. Consequently, under GCH we get the exact values $${w{\chi}_{\rm CF}({\kappa}^{+m},\kappa,2)}={\lfloor m/2\rfloor +2}$$ and $${w{\chi}_{\rm CF}({\kappa}^{+m},\kappa,2\ell+1)}=(m+1)\cdot\ell +1.$$
It seems to be much more challenging to find the exact values of, say, $\,{{\chi}_{\rm CF}(\omega_m,\omega,d)}\,$, even under GCH and for $d =
2$. We conjecture that GCH implies ${{\chi}_{\rm CF}(\lambda,\kappa,d)}= {w{\chi}_{\rm CF}(\lambda,\kappa,d)}+1$, but we could not even prove that $${{\chi}_{\rm CF}({\omega_m},\omega,2)}= \lfloor m/2\rfloor +3\,\,$$ holds for each $m\in {\omega}$. This equality holds for $m=0,1$ in ZFC, by proposition \[f:omega\], and for $m=3$ under GCH, by theorem \[tm:ooh\]. However, for $m=2$, we cannot prove even the consistency of ${{\chi}_{\rm CF}({{\omega}_2},\omega,2)}=4$.
In Part III we only investigate conflict free colorings of $(\lambda,\kappa,\omega)$-systems, but it is fairly clear that most of the results would generalize for arbitrary infinite cardinals $\mu$ instead of $\omega$. This practically means that we only follow in the footsteps of the triple paper [@HJS1], leaving the cases covered only in [@HJS2] alone. Results for these cases are reserved for later publications or left for future generations.
By a result of Komjáth [@KO], we have $\chi(2^\omega,\omega,\omega) = {{\chi}_{\rm CF}(2^{\omega},\omega,\omega)}={2^{\omega}}$, and if $\clubsuit(\lambda)$ holds for a regular $\lambda$ then ${{\chi}_{\rm CF}(\lambda,\omega,\omega)}=\lambda$. So, in ZFC, we cannot have any non-trivial upper bound for ${{\chi}_{\rm CF}(\lambda,\omega,\omega)}$. By theorem \[tm:ch\_w1w1\], CH implies ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_1,\omega_1,\omega) = \omega_1$, so even for uncountable $\kappa$ we expect to have only uncountable upper bounds for ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega)$.
Such bounds can indeed be found, at least consistently. For instance, theorem \[tm:above\_oot\] says that if $\mu^\omega =
\mu$ holds for each $\mu < \lambda$ with $\cf(\mu) = \omega$, then we have $[\lambda,\kappa,\omega] \Rightarrow \omega_2$, hence ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega) \le \omega_2$, whenever ${{\omega}_2}\le
{\kappa}\le {\lambda}$. Moreover, if in addition we also assume $\Box_\mu$ for all $\mu$ with $\omega = \cf(\mu) < \mu < \lambda$, then ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega) \le \omega_1$ whenever $\omega_1
\le {\kappa}\le {\lambda}$, by theorem \[tm:above\_oo\].
These results are very sharp, at least modulo large cardinals. Indeed, we show in section 9 that the existence of a supercompact cardinal implies the consistency of GCH plus the following two equalities:
- ${{\chi}_{\rm CF}({{\omega}_{{\omega}+1}},{{{\omega}_1}},\omega)}={{\omega}_{{\omega}+1}}$,
- ${{\chi}_{\rm CF}({{\omega}_{{\omega}+1}},{{\omega}_n},\omega)}={{\omega}_2}\,$ for $\,2 \le n\le{\omega}$.
We close each Part by stating the problems that are nagging us most.
Our notation is standard, as e.g. in [@KU]. If ${\lambda}$ is an infinite cardinal then we call [*a ${\lambda}$-chain of elementary submodels*]{} a continuous sequence $\<N_{\alpha}:{\alpha}<{\lambda}\>$ such that $N_0={\emptyset}$, $\{N_{\alpha}:1\le {\alpha}<{\lambda}\}$ are elementary submodels of $\<H_{\theta},\in \>$ for some fixed, appropriately chosen regular cardinal $\theta$, moreover $|N_{\alpha}|<{\lambda}$, $N_{\alpha}\in N_{{\alpha}+1}$ and ${\alpha}{\subset}N_{\alpha}\cap
{\lambda}$ for ${\alpha}<{\lambda}$. If ${\lambda}={\kappa}^+$ then we also assume ${\kappa}{\subset}N_1$. We put $N_0 = {\emptyset}$ to ensure that $\{N_{{\alpha}+1}{\setminus}N_{\alpha}:{\alpha}<{\lambda}\}$ be a partition of $\cup\{N_{\alpha}:{\alpha}<{\lambda}\}$.
[**[Part I. The case $\lambda \ge \omega > \kappa \ge \mu$]{}**]{}
Upper bounds {#sc:fin}
============
It is obvious that for every ${{\mathcal A}}{\subset}\mathcal{P}(\kappa)$ we have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{{\mathcal A}}})\le{\kappa}$. Our next result shows that this inequality remains true for suitably almost disjoint families ${{\mathcal A}}$ of finite subsets of $\kappa^{+n}$ with $\omega > n > 0$, provided that the members of ${{\mathcal A}}$ are large enough.
\[tm:ub\] Let $\kappa \ge \nu \ge \omega$ where $\nu$ is assumed to be regular, moreover let $n \ge 1$ and $k\ge 1$ be natural numbers. If ${{\mathcal A}}$ is a $(k+1,{\nu})$-almost disjoint subfamily of $[\kappa^{+n-1}]^{<\omega}\, $ such that $|A| > n\cdot k$ for every $ A \in {{\mathcal A}}$, then ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{{\mathcal A}}})\le{\kappa}$.
We actually prove the following stronger statement $(*)_n$ by induction on $n \ge 1$, keeping all the other parameters fixed.
- If $\,{{\mathcal A}}{\subset}\br \kappa^{+n-1};< \omega;\,\setminus\,[\kappa^{+n-1}]^{\le n\cdot k}\,$ is $(k+1,{\nu})$-almost disjoint and $g:{{\mathcal A}}\to \br
{\kappa};<{\nu};$ then there is a function $f:\kappa^{+n-1} \to
\kappa\,$ such that $$I_f(A){\setminus}g(A)\ne {\emptyset}$$ for each $A\in {{\mathcal A}}$.
[**First step: $n=1$.**]{}
We define an injective function $f:\kappa \to \kappa$ inductively on $\xi < \kappa$. Assume that we have defined $f\restriction \xi$ and let $${{\mathcal A}}_\xi =\{A\in {{\mathcal A}}: \xi =\max A\}\,.$$ Clearly, $|{{\mathcal A}}_\xi| < \kappa$, hence $|\bigcup\{g(A):A\in {{\mathcal A}}_\xi \}|
< \kappa$ as well. The second inequality uses that $\kappa$ is regular in case $\nu = \kappa$ and is trivial otherwise. Thus we may pick $$f(\xi)\in {\kappa}{\setminus}(f[\,\xi]\cup\bigcup\{g(A):A\in {{\mathcal A}}_\xi
\})\,.$$ By the construction, we have $f(\max A)\in I_f(A){\setminus}g(A)$ for all $A\in {{\mathcal A}}$. (Of course, this construction does not make use of the almost disjointness or the largeness assumptions made on ${{\mathcal A}}$.)
[**Inductive step: $(*)_n\to (*)_{n+1}$.**]{}
Now we start with a $(k+1,{\nu})$-almost disjoint system $$\,{{\mathcal A}}{\subset}\br \kappa^{+n};<
\omega;\,\setminus\,[\kappa^{+n}]^{\le (n+1)\cdot k}\,$$ and a function $g:{{\mathcal A}}\to \br {\kappa};<{\nu};$. Let us then fix a ${\kappa}^{+n}$-chain of elementary submodels $\<N_{\alpha}:{\alpha}<{\kappa}^{+n}\>\,$ with ${{\mathcal A}},g \in N_1$. For every ${\alpha}<{\kappa}^{+n}$ let $Y_{\alpha}= {\kappa}^{+n}
\cap ( N_{{\alpha}+1}{\setminus}N_{\alpha})\,$, ${{\mathcal A}}_{\alpha}={{\mathcal A}}\cap (N_{{\alpha}+1}{\setminus}N_{\alpha})$ and, finally, ${{\mathcal A}}'_{\alpha}=\{A\cap Y_{\alpha}:A\in
{{\mathcal A}}_{\alpha}\}.$ We may clearly assume that $|N_{{\alpha}+1}| =
|Y_\alpha| = {\kappa}^{+n-1}$ for all $\alpha < {\kappa}^{+n}$.
For every $A\in {{\mathcal A}}{\setminus}N_{\alpha}$ we have $|A\cap
N_{\alpha}| \le k$ because ${{\mathcal A}}$ is $(k+1,{\nu})$-almost disjoint. So if $A\in {{\mathcal A}}_{\alpha}$ then $|A\cap Y_{\alpha}| >
(n+1)\cdot k - k = n\cdot k$, consequently ${{\mathcal A}}'_{\alpha}{\subset}[Y_\alpha]^{<\omega} \setminus \br Y_{\alpha};\le n\cdot k ;$ and, clearly, ${{\mathcal A}}'_{\alpha}$ is $(k+1,{\nu})$-almost disjoint.
We next define, for each ${\alpha}<{\kappa}^{+n}$, a function $f_{\alpha}:Y_{\alpha}\to {\kappa}$, using transfinite induction as follows. Assume that $f_{\xi}$ has been defined for each ${\xi}<{\alpha}<{\kappa}^{+n}$ and set $f_{<{\alpha}}=\cup\{f_{\xi}:{\xi}<{\alpha}\}$. For any $A'\in
{{\mathcal A}}'_{\alpha}$ let $$g_{\alpha}(A')= \bigcup\{f_{<{\alpha}}[A]\cup
g(A):A\in{{\mathcal A}}_{\alpha}\land A\cap Y_{\alpha}=A'\}.$$ Since $|A'| > n\cdot k \ge k$ (recall that $n \ge 1\,$!) and ${{\mathcal A}}$ is $(k+1,{\nu})$-almost disjoint, $|\{A\in{{\mathcal A}}_{\alpha}:A\cap Y_{\alpha}=A'\}| < \nu$ and hence $g_{\alpha}(A') \in [\kappa]^{<\nu} $, using that $\nu$ is regular.
Thus, the inductive assumption $(*)_n$ can be applied to ${{\mathcal A}}_{\alpha}'$ and $g_{\alpha}$ and yields us a function $f_{\alpha}:Y_{\alpha} \to {\kappa}$ such that $$I_{f_{\alpha}}(A'){\setminus}g_{\alpha}(A')
\ne {\emptyset}$$ for each $A'\in {{\mathcal A}}'_{\alpha}$.
Finally, let $f=\cup\{f_{\alpha}:{\alpha}<{\kappa}^{+n}\}$. Then for every $A\in {{\mathcal A}}_{\alpha}$ we have $$I_f(A){\setminus}g(A)\supset
I_{f_{\alpha}}(A\cap Y_{\alpha}){\setminus}g_{\alpha}(A\cap
Y_{\alpha})\ne {\emptyset},$$ hence we are done because ${{\mathcal A}}= \bigcup
\{{{\mathcal A}}_\alpha : {\alpha < {\kappa}^{+n}}\}$.
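To illustrate the statement: taking $\kappa=\nu=\omega$, $n=2$ and $k=1$, theorem \[tm:ub\] says that every $(2,\omega)$-almost disjoint family ${{\mathcal A}}{\subset}[\omega_1]^{<\omega}$ all of whose members have more than two elements satisfies ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{{\mathcal A}}})\le{\omega}$ (this special case is used again after corollary \[cor:wc\]).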
We now give a consistency result in the spirit of theorem \[tm:ub\] that uses Martin’s axiom.
\[tm:ubma\] Assume $MA_\lambda(K)$, i.e. $MA_\lambda$ for partial orders satisfying property $K$. Then for every natural number $k$ and for every $(k+1,\omega)$-almost disjoint system ${{\mathcal A}}{\subset}[\lambda]^{<\omega}$ such that $|A| > 2k$ for all $A \in {{\mathcal A}}$ we have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{{\mathcal A}}})\le{\omega}$.
We first define the poset ${{\mathcal P}}_{{\mathcal A}}=\<P_{{\mathcal A}},\le\>$ as follows: A function $f\in Fn({\lambda},{\omega})$ (that is a finite partial function from $\lambda$ to $\omega$) is in $P_{{\mathcal A}}$ iff $I_f(A)\ne {\emptyset}$ whenever $A\in {{\mathcal A}}$ and $A {\subset}{\operatorname{dom}}(f)$. We then let $f\le g$ iff $f\supset g$.
We claim that the poset ${{\mathcal P}}_{{\mathcal A}}$ satisfies property $K$. Indeed, assume that $\{f_{\alpha}:{\alpha}<{{{\omega}_1}}\} {\subset}P_{{\mathcal A}}$. Without loss of generality we can assume that $$\label{eq:delta1}
\text{$f_{\alpha}=f\cup^* f_{\alpha}'$, and ${\operatorname{dom}}(f'_{\alpha})\cap {\operatorname{dom}}(f'_{\beta})={\emptyset}$ for $\{{\alpha},
{\beta}\} \in [{{{\omega}_1}}]^2$.}$$
Then, for each ${\alpha}<{{{\omega}_1}}$, the family ${{\mathcal A}}_{\alpha}=\{A\in {{\mathcal A}}:|A\cap
{\operatorname{dom}}(f_{\alpha})| > k\}\,$ is finite because ${{\mathcal A}}$ is $(k+1,\omega)$-almost disjoint. Let $$F({\alpha})=\{{\beta}<\omega_1:{\operatorname{dom}}(f_{\beta}')\cap \cup {{\mathcal A}}_{\alpha}\ne
{\emptyset}\}\,,$$ then $F({\alpha})$ is also finite. So by the (simplest case of the) free set theorem for set mappings we can find a set $S \in [\omega_1]^{\omega_1}$ such that ${\alpha}\notin
F({\beta})$ and ${\beta}\notin F({\alpha})$ whenever $\{\alpha,\beta \} \in [S]^2$.
We claim that $f=f_{\alpha}\cup f_{\beta}\in P_{{\mathcal A}}$, hence $f_\alpha$ and $f_\beta$ are compatible, for any such pair $\{\alpha,\beta \}$. By (\[eq:delta1\]), $f$ is a function. So assume now that $A\in {{\mathcal A}}$ with $A{\subset}{\operatorname{dom}}(f)$. Since $|A|>2k$ we can assume that e.g. $|A\cap {\operatorname{dom}}(f_{\alpha})|
> k$, that is $A\in {{\mathcal A}}_{\alpha}$, hence $A\cap {\operatorname{dom}}(f'_{\beta})={\emptyset}$. But then $A{\subset}{\operatorname{dom}}(f_{\alpha})$ and so $I_f(A)=I_{f_{\alpha}}(A)\ne {\emptyset}$. Thus $f\in P_{{\mathcal A}}$, completing the proof that $P_{{\mathcal A}}$ has property $K$.
The rest of the proof is a standard density argument that we leave to the reader.
[**Remark:**]{} A slightly weaker statement than theorem \[tm:ubma\], for the chromatic number $\chi$ instead of the conflict free chromatic number ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}$, was proved in [@EGH Theorem 5.6]. It was asked there, in Problem 2, if the statement remains true for $(k,\omega_1)$-almost disjoint families. We still do not know the answer to this.
Lower bounds
============
We start this section by presenting a result which implies that the assumptions on the set systems formulated in theorems \[tm:ub\] and \[tm:ubma\], namely that their members should be “suitably large”, are really necessary.
\[tm:lb\] Assume that $\lambda \ge \omega$ and $\mu$ are cardinals, $n \ge
2$, $k \ge 1$ are natural numbers such that the partition relation $$\lambda \to (n)_{\mu^k}^{n-1}$$ holds true. (Of course, if $\mu$ is infinite then $\mu^k = \mu$.) Then we have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,k+1)
> \mu$ for every number $\,t$ satisfying $k < t \le n\cdot k$ if $n >
2$ and for every [*even*]{} number $t$ satisfying $k < t \le
2\cdot k$ if $\,n = 2$.
Let us put $H = [\lambda]^{n-1} \times k$, then $|H| = \lambda$. We shall construct a $(k+1)$-almost disjoint family ${{\mathcal A}}{\subset}[H]^t$ of cardinality $\lambda$ which does not have a conflict free coloring with $\mu$ colors.
For each $Y \in [\lambda]^n$ we may choose a $t$-element set $A_Y
\in \big[[Y]^{n-1}\times k \big]^t$ such that for every $i<k$ we have $$\label{eq:B}
|\{B\in \br Y;n-1;:\<B,i\>\in A_Y\}|\ne 1.$$ This is easy to check and this is the point where $t$ has to be even in case $n = 2$. Let us now set $${{\mathcal A}}=\{A_Y:Y \in [\lambda]^n \} {\subset}[H]^t\,,$$ then clearly $|{{\mathcal A}}| = \lambda$.
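To see that such an $A_Y$ can always be chosen, note that $|[Y]^{n-1}|=n$, so condition (\[eq:B\]) only requires that for each $i<k$ the number of elements of $A_Y$ with second coordinate $i$ belongs to $\{0,2,3,\dots,n\}$; hence an $A_Y$ of size $t$ exists precisely when $t$ can be written as a sum of $k$ numbers taken from this set. For $n>2$ every $t$ with $k<t\le n\cdot k$ is such a sum, while for $n=2$ the only admissible summands are $0$ and $2$, which is why $t$ has to be even in that case.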
Since $|\br Y;n-1;\cap \br Z;n-1;|\le 1$ for distinct $Y,Z \in
[\lambda]^n$, we clearly have $|A_Y\cap A_Z|\le k$, hence ${{\mathcal A}}$ is $(k+1)$-almost disjoint, i.e. ${{\mathcal A}}$ is a $(\lambda,t,k+1)$-system. Now, it remains to show that ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}})>{\mu}$.
Assume that $f:H \to \mu$ is given and define the map $$g:
[\lambda]^{n-1} \to\, {}^k\mu$$ by the stipulation $g(B)(i)=f(\<B,i\>)\,$. By our partition relation hypothesis then there is a $g$-homogeneous set $Y \in [\lambda]^n$. Consider an arbitrary $\<B,i\>\in A_Y$. By (\[eq:B\]) there is a $B' \ne B $ with $\<B',i\>\in A_Y$ as well, hence we have $f(\<B,i\>)=g(B)(i)=g(B')(i)=f(\<B',i\>)$. Since $\<B,i\>$ was arbitrary we obtain that $f$ is [*not*]{} a conflict free coloring of ${{\mathcal A}}$, completing the proof.
We now list a number of easy but quite useful corollaries of theorem \[tm:lb\].
\[cor:wc\] If $\lambda = \omega$ or $\lambda$ is weakly compact then for any $2 \le d \le t < \omega$ we have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,d) = \lambda$.
To see this, let us first choose a natural number $n > 2$ such that $t \le n\cdot (d-1)$. By our choice of $\lambda$, for every $\mu < \lambda$ we have $\lambda \to (n)_{\mu^{d-1}}^{n-1}\,$, in fact even $\lambda \to (\lambda)_{\mu^{d-1}}^{n-1}\,$. But then theorem \[tm:lb\] immediately yields ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,d) > \mu$, hence as $\mu < \lambda$ was arbitrary, $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,d) =
\lambda$.
Since ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_1,t,2) \ge {\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega,t,2)$, it immediately follows from \[cor:wc\] and the case $n = 2\,,\,k = 1$ of theorem \[tm:ub\] that ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_1,t,2) = \omega$ whenever $3
\le t < \omega$. Similarly, comparing theorem \[tm:ubma\] with corollary \[cor:wc\] we may conclude that $MA_\lambda(K)$ implies ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,d) = \omega$ whenever $d \ge 2$ and $t >
2(d-1)$.
An argument analogous to the one in the proof of corollary \[cor:wc\], using the case $n = 2$ of theorem \[tm:lb\] and the trivial partition relation $\lambda \to (2)^1_\kappa\,$ for all $\kappa <
\lambda$, yields the following result.
\[cor:w1\] If $\lambda$ is infinite and $1 \le k < \omega$, then $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda\,,2k\,,k+1) = \lambda.$$
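Spelled out: for every $\mu<\lambda$ we have $\mu^k<\lambda$, so the relation $\lambda \to (2)^1_{\mu^k}$ holds trivially, and theorem \[tm:lb\] applied with $n=2$ and the even number $t=2k$ yields ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda\,,2k\,,k+1) > \mu$.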
On the basis of the conjecture that ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\mu)$ is monotone decreasing in its second argument, it is natural to expect from \[cor:w1\] that we also have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda\,,2k-1\,,k+1) = \lambda.$ We shall show below that this is indeed true for “most” values of $\lambda$; however, the full statement remains open in ZFC. We first give a somewhat technical lemma.
\[lm:3\] Let $\lambda$ be a cardinal that admits a coloring $f :
[\lambda]^2 \to \lambda$ of its pairs such that for any partition ${{\mathcal P}}$ of $\lambda$ with $|{{\mathcal P}}| < \lambda$ there are $P \in
{{\mathcal P}}$ and $\{\alpha,\beta,\gamma\} \in [P]^3$ satisfying $f\{
\alpha,\beta\} = \gamma$. Then, for any $k > 1$, we have $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda\,,2k-1\,,k+1) = \lambda\,.$$
The set $${{\mathcal I}}(f) = \big\{ \{\alpha,\beta\} \in [\lambda]^2 : f\{
\alpha,\beta\} \notin \{\alpha,\beta\} \big\}$$ naturally decomposes into the following three parts: $${{\mathcal I}}_0(f) = \big\{ \{\alpha,\beta\} \in [\lambda]^2 : f\{
\alpha,\beta\} < \alpha < \beta \},$$ $${{\mathcal I}}_1(f) = \big\{ \{\alpha,\beta\} \in [\lambda]^2 : \alpha < f\{
\alpha,\beta\} < \beta \},$$ $${{\mathcal I}}_2(f) = \big\{ \{\alpha,\beta\} \in [\lambda]^2 : \alpha< \beta< f\{
\alpha,\beta\} \}.$$
We claim that our assumption on $f$ may be strengthened as follows: There is a fixed $j < 3$ such that for any partition ${{\mathcal P}}$ of $\lambda$ with $|{{\mathcal P}}| < \lambda$ there are $P \in
{{\mathcal P}}$ and $\{\alpha,\beta\} \in {{\mathcal I}}_j(f) \cap [P]^2$ for which $f\{\alpha,\beta\} \in P$.
Indeed, for every $j < 3$ let $g_j : [\lambda]^2 \to \lambda$ be chosen in such a way that $g_j$ extends $f \upharpoonright
{{\mathcal I}}_j(f)$. Then for one $j < 3\,$ the coloring $g_j$ together with its index $j$ must satisfy the claim. Otherwise for every $j
< 3$ there is a partition ${{\mathcal P}}_j$ of $\lambda$ with $|{{\mathcal P}}_j| <
\lambda$ such that $g_j\{\alpha,\beta\} \notin P$ whenever $\{\alpha,\beta\} \in {{\mathcal I}}_j(g_j) \cap [P]^2$. But then $${{\mathcal P}}=
\{P_0 \cap P_1 \cap P_2 : P_j \in {{\mathcal P}}_j,\,j < 3 \}$$ is a partition of $\lambda$ with $|{{\mathcal P}}| < \lambda$ that cannot satisfy our original assumption on $f$, a contradiction. So from here on we assume that $f$ has the stronger property with $j$ fixed.
Take $\lambda$ many pairwise disjoint sets of size $k - 1\,$, $\{H_\alpha : \alpha < \lambda\}$, and for each $\alpha < \lambda$ fix a member $h_\alpha \in H_\alpha$. For each $\{\alpha,\beta\}
\in {{\mathcal I}}(f)$ let $$A_{\{\alpha,\beta\}} = H_\alpha \cup H_\beta
\cup \{h_{f\{\alpha,\beta\}}\}.$$ It is easy to check that then ${{\mathcal A}}= \{A_{\{\alpha,\beta\}} : \{\alpha,\beta\} \in
{{\mathcal I}}_j(f)\}$ is a $(\lambda\,,2k-1\,,k+1)$-system and we claim that ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}) = \lambda$.
Indeed, consider any map $g : \cup {{\mathcal A}}\to \kappa$ with $\kappa
< \lambda$. Then, by our assumption, there is a pair $\{\alpha,\beta\} \in {{\mathcal I}}_j(f)$ such that $$g[H_\alpha] = g[H_\beta] = g[H_{f\{\alpha,\beta \}}]\,.$$ But clearly, every value taken by $g$ on $A_{\{\alpha,\beta \}}$ is taken at least twice, consequently $g$ is not a conflict free coloring of ${{\mathcal A}}$.
Let us note that if $\lambda$ is regular and $f : [\lambda]^2 \to
\lambda$ establishes the negative partition relation $$\lambda
\nrightarrow [\lambda]^2_\lambda\,,$$ that is, $f\big[[X]^2\big] = \lambda$ for every $X \in [\lambda]^\lambda$, then $f$ trivially satisfies the requirement of lemma \[lm:3\] as well. Moreover, it is known that $\lambda \nrightarrow [\lambda]^2_\lambda\,$ is valid whenever $\lambda = \kappa^+$ for a regular cardinal $\kappa$, see e.g. [@Sh]. Thus, we immediately obtain the following result.
\[cor:odd\] If $\lambda$ is either a limit cardinal or the successor of a regular cardinal and $1 < k < \omega$ then $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda\,,2k-1\,,k+1) = \lambda\,.$$
The following corollary of theorem \[tm:lb\] uses, for $r = n- 2
> 0$, the well-known Erdős-Rado partition theorem $$\beth_{r}(\kappa)^+ \to (\kappa^+)^{r+1}_\kappa\,.$$ Recall that $\beth_r(\kappa)$ is defined by the recursion $\beth_0(\kappa) = \kappa\,,\beth_{r+1}(\kappa)=
2^{\beth_r(\kappa)}$.
\[cor:ER\] If $n \ge 3$ and $k < t \le n \cdot k$ then, for every $\kappa \ge
\omega$, $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_{n-2}(\kappa)^+,t\,,k+1) > \kappa\,.$$ Consequently, if $\lambda$ is strong limit then for any $2 \le d
\le t < \omega$ we have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,d) = \lambda$.
The first part, as mentioned, follows immediately from theorem \[tm:lb\] and the Erdős-Rado partition theorem. To see the second, consider any $\kappa < \lambda$ and choose $n \ge 3$ such that $t \le n\cdot (d-1)$. Then, by the first part applied with $k = d-1$, we have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_{n-2}(\kappa)^+,t\,,d) > \kappa\,$, moreover $\beth_{n-2}(\kappa)^+ < \lambda$ as $\lambda$ is strong limit, hence ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,d) > \kappa$ as well. This completes the proof as $\kappa < \lambda$ was arbitrary.
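To illustrate the first part: for $n=3$ it gives ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}((2^{\kappa})^+,t\,,k+1) > \kappa$ whenever $k < t \le 3k$; in particular, under GCH, ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{++},t\,,k+1) > \kappa$ for these $t$.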
Our next result yields a lower bound for ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,k+1)$ for $t \le 2k$, like corollaries \[cor:w1\] and \[cor:odd\]. Of course, if the statement of corollary \[cor:odd\] turns out to be valid for all $\lambda$, as we expect, then it becomes superfluous.
\[tm:lbch\] Assume that $\lambda$ and $\mu$ are infinite cardinals such that $\lambda^{<\mu} = \lambda$, moreover $0 < k < t \le 2k$ are natural numbers. Then $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,k+1) \ge \mu\,.$$
We are going to construct a $(\lambda,t,k+1)$-system ${{\mathcal A}}{\subset}[\lambda]^t$ that satisfies the following property $\Phi(\lambda,\mu,k,t)\,$:
For every $Y \in [\lambda]^{<\mu}$ and for every [*disjoint*]{} collection $B {\subset}[\lambda]^k$ with $|B| < \mu$ there is a set $x \in [\lambda \setminus Y]^{t-k}$ such that $x \cup b \in {{\mathcal A}}$ for each $b \in B$.
Before doing this, however, let us show that if ${{\mathcal A}}$ satisfies $\Phi(\lambda,\mu,k,t)\,$ then ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}) \ge \mu$. Indeed, let $f : \lambda \to \nu$ be given for some $\nu < \mu$, where $\nu$ is infinite if $\mu > \omega$. Let us put $S = \{\zeta < \nu :
|f^{-1}\{\zeta \}| \ge \omega \}$ if $\mu = \omega$ and $S =
\{\zeta < \nu : |f^{-1}\{\zeta \}| \ge \nu \}$ otherwise. We also set $Y = \bigcup \{ f^{-1}\{\zeta \} : \zeta \in \nu \setminus S
\}$, clearly then $|Y| < \mu$. Next we consider the collection ${{\mathcal S}}= \{z {\subset}S : 0 <|z| \le t-k \}$, again we have $|{{\mathcal S}}|
< \mu$. It is straightforward to check that we may select for each $z \in {{\mathcal S}}$ a set $b_z \in [\lambda \setminus Y]^{k}$ so that $f[b_z] = z$, moreover $B = \{b_z : z \in {{\mathcal S}}\}$ is disjoint.
By $\Phi(\lambda,\mu,k,t)\,$ there is some $x \in [\lambda
\setminus Y]^{t-k}$ such that $x \cup b_z \in {{\mathcal A}}$ for each $z
\in {{\mathcal S}}$. Now, $x \cap Y = \emptyset$ implies that $z = f[x] \in
{{\mathcal S}}$, hence $x \cup b_z \in {{\mathcal A}}$. But, as $x \cap b_z =
\emptyset$, the equality $f[x] = f[b_z]( = z)$ witnesses that $f$ is not a conflict free coloring of ${{\mathcal A}}$, hence ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}) \ge
\mu$.
Now, we show how to construct ${{\mathcal A}}$ satisfying $\Phi(\lambda,\mu,k,t)\,$ by a transfinite recursion of length $\lambda\,$. To start with, we fix a $\lambda$-type enumeration of $[\lambda]^{<\mu} \times {{\mathcal B}}\,$: $$[\lambda]^{<\mu} \times {{\mathcal B}}= \{\langle Y_\alpha,B_\alpha \rangle : \alpha < \lambda\}\,,$$ where ${{\mathcal B}}$ is the family of all disjoint collections $B {\subset}[\lambda]^k$ with $|B| < \mu$. This is possible because $\lambda^{<\mu} = \lambda$.
Next, assume that $\alpha < \lambda$ and for each $\beta < \alpha$ we have already constructed a $(k+1)$-almost disjoint family ${{\mathcal A}}_\beta {\subset}[\lambda]^t$ such that $|{{\mathcal A}}_\beta| \le
\mu\cdot|\beta|$ if $\mu < \lambda$ and $|{{\mathcal A}}_\beta| < \lambda$ if $\mu = \lambda$. We also assume that ${{\mathcal A}}_\beta {\subset}{{\mathcal A}}_\gamma$ whenever $\beta < \gamma < \alpha$.
Now, if $\alpha$ is limit then we simply put ${{\mathcal A}}_\alpha =
\cup_{\beta < \alpha}{{\mathcal A}}_\beta$. It is easy to see that then all our inductive hypotheses remain valid. This is obvious if $\mu <
\lambda$, and if $\mu = \lambda$ then it follows because $\lambda$ is regular by the assumption $\lambda^{<\lambda} = \lambda\,$.
If, on the other hand, $\alpha = \beta + 1\,$ then we consider the pair $\langle Y_\beta\,,B_\beta \rangle\,$ and choose a set $x \in
[\lambda]^{t-k}$ that is disjoint from $\,\,\bigcup {{\mathcal A}}_\beta
\cup \bigcup B_\beta \cup Y_\beta\,$. Then we put $${{\mathcal A}}_\alpha =
{{\mathcal A}}_{\beta+1} = {{\mathcal A}}_\beta \cup \{ b \cup x : b \in B_\beta
\}\,.$$ Again, it is obvious that our inductive hypotheses remain valid.
Finally, if the transfinite recursion is completed, then we set $${{\mathcal A}}= \bigcup \{ {{\mathcal A}}_\alpha : \alpha < \lambda \}\,.$$ It is obvious from our construction that ${{\mathcal A}}{\subset}[\lambda]^t$ is a $(\lambda,t,k+1)$-system that satisfies property $\Phi(\lambda,\mu,k,t)\,$ and hence ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}) \ge \mu\,$.
\[cor:ch\] Let $k$ and $t$ be integers with $1 \le k < t \le 2k$. If $\kappa^+ = 2^\kappa$ then ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^+,t,k+1) = \kappa^+ $.
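Indeed, $2^\kappa=\kappa^+$ implies $(\kappa^+)^{<\kappa^+}=(\kappa^+)^{\kappa}=(2^{\kappa})^{\kappa}=2^{\kappa}=\kappa^+$, so theorem \[tm:lbch\] applies with $\lambda=\mu=\kappa^+$ and yields ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^+,t,k+1) \ge \kappa^+$; the converse inequality is trivial.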
In particular, as we promised, CH implies ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_1,t,k+1) =
\omega_1$ for any such $k$ and $t$. Actually, our previous results enable us to give, under the assumption of GCH, a complete and rather attractive description of the behavior of $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,k+1)\,$ for all $\lambda \ge \omega > t > k \ge
1$.
\[tm:gch\] Assume GCH and let $\kappa$ be any limit cardinal or $\kappa =
\omega$, moreover fix the natural number $k \ge 1$. Then for any $n<\omega$ we have $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{+n},t,k+1) =\left\{
\begin{array}{ll}
\kappa^{+(n+1-i)}&\mbox{ if $\,\,i\cdot k < t \le (i+1)\cdot k\,$ for some $\,i\in\{1,\dots,n\}$;}\\[6pt]
\kappa& \mbox{ if $\,\,(n+1)\cdot k < t\,$.}
\end{array}
\right.$$
Let us note first that by the second part of corollary \[cor:ER\] and by corollary \[cor:wc\] we have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa,t,k+1) = \kappa$ for all $0 < k < t < \omega$ which shows that our claim holds for $n = 0$. So, from here on we fix $n
\ge 1$.
Let us assume now that $k < t \le 2k$. In this case we may apply corollary \[cor:ch\] to $\kappa^{+n} = 2 ^{(\kappa^{+n-1})}$ and conclude that $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{+n},t,k+1) = \kappa^{+n} = \kappa^{+(n+1-1)}.$$
Next, consider the case $i\cdot k < t \le (i+1)\cdot k\,$ with $2
\le i \le n$. Then from $i\cdot k < t$, applying theorem \[tm:ub\] to the cardinal $\kappa^{+(n+1-i)}$ and the number $i$, we obtain ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{+n},t,k+1) \le \kappa^{+(n+1-i)}$. From $\,t \le (i+1)\cdot k\,$, on the other hand, applying corollary \[cor:ER\] to the number $i+1 \ge 3$ and the cardinal $\kappa^{+(n-i)}$ we obtain the converse inequality ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{+n},t,k+1) \ge \kappa^{+(n+1-i)}$.
Finally, assume that $t > (n+1)\cdot k$. Then from theorem \[tm:ub\], applied with the number $n+1$, we conclude ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{+n},t,k+1) \le \kappa$. But then we must have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{+n},t,k+1) = \kappa$ because already ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa,t,k+1) = \kappa$.
This concludes the proof because we have checked all the cases.
It is immediate from theorem \[tm:gch\] that, in accordance with our earlier conjecture, ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,d)$ is a monotone decreasing function of $t < \omega$ for fixed $\lambda$ and $d$, at least if GCH holds.
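To illustrate, let us spell out the case $\kappa=\omega$ and $k=1$: under GCH we have $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_n,t,2)=\omega_{n+2-t}\ \mbox{ for }2\le t\le n+1, \qquad {\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_n,t,2)=\omega\ \mbox{ for }t\ge n+2.$$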
Is $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,2k-1,k+1) = \lambda\,$ provable in ZFC for all $\lambda \ge \omega$ and $1 < k < \omega$?
[**[Part II. The case $\lambda \ge \kappa \ge \omega > \mu$]{}**]{}
$\omega$ colors suffice {#sc:dfirst}
=======================
It follows from theorem \[tm:ub\] that if $\lambda <
\aleph_\omega$ then, for fixed $d < \omega$, we have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,t,d) \le \omega$ provided that $t < \omega$ is large enough. The result we prove in this section shows that if we replace $t$ with any infinite cardinal $\kappa$ then ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,d) \le \omega$ holds for [*all*]{} $\lambda
\ge \kappa$.
\[tm:korlatos\] For any ${\lambda}\ge \kappa \ge \omega$ and $d < \omega$ we have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,d) \le \omega$, in fact even the stronger relation $[\lambda,\kappa,d] \Rightarrow \omega$.
We prove $[\lambda,\kappa,d] \Rightarrow \omega$ by transfinite induction on $\kappa$ and $\lambda$: Assuming $[\kappa',\kappa',d] \Rightarrow \omega$ and $[\lambda',\kappa,d]
\Rightarrow \omega$ for all ${\omega}\le {\kappa}'<{\kappa}$ and ${\kappa}\le {\lambda}'< {\lambda}\,$, we deduce $[\lambda,\kappa,d] \Rightarrow \omega$.
[**Case 1:**]{} ${\lambda}={\kappa}={\omega}$.\
Let ${{\mathcal A}}=\{A_n:n<{\omega}\} {\subset}[\omega]^\omega$ be $d$-almost disjoint (actually, $\omega$-almost disjoint would suffice) and construct $c:{\omega}\to {\omega}$ in such a way that $c\restriction A_n{\setminus}\cup\{A_m:m<n\}$ is a bijection with range ${\omega}$ for each $n<{\omega}$. Thus $\omega \setminus I_c(A_n)
{\subset}c[A_n\cap \cup\{A_m:m<n\}]$ is finite for all $A_n \in
{{\mathcal A}}$, and we are done.
[**Case 2:**]{} ${\lambda}={\kappa}>{\omega}$.\
Let ${{\mathcal A}}{\subset}\br {\kappa};{\kappa};$ be $d$-almost disjoint and $\<N_{\alpha}:{\alpha}<{\kappa}\>$ be a ${\kappa}$-chain of elementary submodels with ${{\mathcal A}}\in N_1$. For ${\alpha}<{\kappa}$ let $\kappa_\alpha = |N_{\alpha+1}|$, $B_{\alpha}=\cup({{\mathcal A}}\cap N_{\alpha})$ and $Y_{\alpha}=N_{{\alpha}+1} \cap ({\kappa}{\setminus}(B_{\alpha}\cup
N_{\alpha}))$.
If $A\in {{\mathcal A}}\cap N_{{\alpha}+1}{\setminus}N_{\alpha}$ then $$|A\cap B_{\alpha}|\le \sum\{|A\cap A'|:A'\in{{\mathcal A}}\cap N_{\alpha}\}\le
|N_{\alpha}|\cdot d<{\kappa},$$ and so $ |{\kappa}{\setminus}(B_{\alpha}\cup N_{\alpha})| = |A{\setminus}(B_{\alpha}\cup N_{\alpha})|={\kappa}$. But $A{\setminus}(B_{\alpha}\cup N_{\alpha})\in N_{{\alpha}+1}$ and $\kappa_\alpha
{\subset}N_{{\alpha}+1}$ imply $$|Y_\alpha| = |A\cap Y_{\alpha}|=|N_{{\alpha}+1} \cap (A{\setminus}(B_{\alpha}\cup N_{\alpha}))|=\kappa_\alpha,$$ consequently $${{\mathcal A}}_{\alpha}=\{A \cap Y_{\alpha}:A\in {{\mathcal A}}\cap
N_{{\alpha}+1}{\setminus}N_{\alpha}\} {\subset}[Y_\alpha]^{\kappa_\alpha},$$ and ${{\mathcal A}}_{\alpha}$ is clearly $d$-almost disjoint. By the inductive assumption $[\kappa_\alpha, \kappa_\alpha,d] \Rightarrow
\omega$, there is a function $c_{\alpha}: Y_{\alpha}\to {\omega}$ such that ${\omega}{\setminus}I_{c_{\alpha}}(A')$ is finite for all $A'\in{{\mathcal A}}_{\alpha}$.
Let $c'=\cup \{c_{\alpha}:{\alpha}<{\kappa}\}$ and consider the function $c\supset c'$ which maps ${\lambda}$ into ${\omega}$ in such a way that $c[{\lambda}{\setminus}{\operatorname{dom}}(c')]{\subset}\{0\}$. Now, let $A\in {{\mathcal A}}$ and ${\alpha}<{\kappa}$ be such that $A\in
N_{{\alpha}+1}{\setminus}N_{\alpha}$. Then $A'=A\cap Y_{\alpha}\in
{{\mathcal A}}_{\alpha}$, so ${\omega}{\setminus}I_{c_{\alpha}}(A')$ is finite. But we also have $$\label{eq:bullet1x}
|A \cap {\operatorname{dom}}(c'{\setminus}c_{\alpha})|<d.$$ Indeed, if ${\alpha}<{\beta} < \kappa\,$ then $\,A \cap Y_{\beta}
={\emptyset}$, while $A\notin N_{\alpha}$ implies $|A\cap
N_{\alpha}|<d$, and hence $|A\cap
\cup\{Y_{\gamma}:{\gamma}<{\alpha}\}|<d$ as well. Since $$I_{c_{\alpha}}(A'){\setminus}I_c(A)
\subset c'[A{\setminus}{\operatorname{dom}}c_{\alpha}]\cup \{0\},$$ it follows that ${\omega}{\setminus}I_c(A)$ is finite, and we are done.
[**Case 3:**]{} ${\lambda}>{\kappa}$.
Let ${{\mathcal A}}{\subset}\br {\lambda};{\kappa};$ be $d$-almost disjoint and $\<N_{\alpha}:{\alpha}<{\lambda}\>$ be a ${\lambda}$-chain of elementary submodels with $({\kappa}+1)\cup\{{{\mathcal A}}\}{\subset}N_1$. For each ${\alpha}<{\lambda}$ let $Y_{\alpha}={\lambda}\cap
N_{{\alpha}+1}{\setminus}N_{\alpha}$, then $\kappa \le |Y_\alpha| =
|N_{\alpha+1}| < \lambda$.
For any $A \in {{\mathcal A}}\cap N_{{\alpha}+1}{\setminus}N_{\alpha}$ we have $|A\cap N_{\alpha}|<d$ and $A{\subset}N_{{\alpha}+1}$, hence $${{\mathcal A}}_{\alpha}=\{A {\setminus}N_{\alpha}:A\in {{\mathcal A}}\cap
N_{{\alpha}+1}{\setminus}N_{\alpha}\} {\subset}\br Y_{\alpha};{\kappa};,$$ and ${{\mathcal A}}_{\alpha}$ is $d$-almost disjoint. Now, we may argue inductively, exactly as in Case 2, to obtain a map $c : \lambda
\to \omega$ such that $\omega {\setminus}I_c(A)$ is finite for each $A
\in {{\mathcal A}}$.
P. Komjáth pointed out to us an easy proof of Theorem \[tm:korlatos\] for the case $\kappa = \omega$. His proof relied on a result of his proved in [@KO2] claiming that every $(\lambda,\omega,d)$-system ${{\mathcal A}}$ is [*essentially disjoint*]{}, i.e. one can omit a finite set $F(A)$ from each element $A$ of ${{\mathcal A}}$ in such a way that the sets $A {\setminus}F(A) $ are pairwise disjoint. By taking a bijection between $A {\setminus}F(A) $ and $\omega$ for each $A \in {{\mathcal A}}$, and then coloring the rest arbitrarily, we get an appropriate ${\omega}$-coloring. Based on this observation, and a result of Erdős and Hajnal, we shall give a short alternative proof of theorem \[tm:korlatos\].
We recall from [@HJS1] and [@HJS2] that a set $X$ is called a $\tau$-transversal of a family $\,{{\mathcal A}}\,$ if $\,0 < |X
\cap A| < \tau\,$ for all $A \in {{\mathcal A}}$. Moreover, the symbol ${{\mathbf M}(\lambda,\kappa,\mu)}\to {{\mathbf B}(\tau)}$ is used there to denote the statement that every $(\lambda,\kappa,\mu)$-system has a $\tau$-transversal.
For us it will be useful to introduce the following variation on this concept: We say that $X$ is a $\tau$-witness for ${{\mathcal A}}$ iff $\,|X \cap A| = \tau\,$ for all $A \in {{\mathcal A}}$. Clearly, any $\tau$-witness is a $\tau^+$-transversal. It is easy to see that if $\kappa \ge \tau \ge \omega$ then ${{\mathbf M}(\lambda,\kappa,\mu)}\to {{\mathbf B}(\tau^+)}$ holds iff every $(\lambda,\kappa,\mu)$-system has a $\tau$-witness.
A $(\lambda,\kappa)$-family ${\mathcal{A}}$ is called [*essentially disjoint*]{} (ED, in short) iff for each $A\in {\mathcal{A}}$ there is a set $F(A)\in \br A;<{\kappa};$ such that the family $\{A{\setminus}F(A):A\in {\mathcal{A}}\}$ is disjoint. ${{\mathbf M}(\lambda,\kappa,\mu)}\to
{\bf ED}$ denotes the statement that every $(\lambda,\kappa,\mu)$-system is ED.
\[lm:reduction\] Assume $ {\mu}\le \tau \le {\kappa}\le {\lambda}$ and $\tau \ge
\omega$. Then $${{\mathbf M}(\lambda,\kappa,\mu)}\to {{\mathbf B}({\tau}^+)}
\mbox{ and }\, {{\mathbf M}(\lambda,\tau,\mu)}\to {\bf ED}$$ together imply $[\lambda,\kappa,\mu] \Rightarrow \tau\,.$
Let ${\mathcal{A}}{\subset}\br {\lambda};{\kappa};$ be a $(\lambda,\kappa,{\mu})$-system. Since ${{\mathbf M}(\lambda,\kappa,\mu)}\to {{\mathbf B}({\tau}^+)}$ there is a $\tau$-witness $X$ for ${{\mathcal A}}$. Then $${{\mathcal A}}'={\mathcal{A}}\restriction X=\{A\cap X:A\in {\mathcal{A}}\}{\subset}\br
{\lambda};{\tau};$$ is a $(\lambda,\tau,\mu)$-system. Applying ${{\mathbf M}(\lambda,\tau,\mu)}\to {\bf ED}$ for ${{\mathcal A}}'$ there is a function $F:{\mathcal{A}}'\to \br {\lambda};<{\tau};$ such that the family $\{A'{\setminus}F(A'):A'\in {\mathcal{A}}'\}$ is disjoint.
Let $c:{\lambda}\to {\tau}$ be a function such that
(i) $c[{\lambda}{\setminus}X]=\{0\}$,
(ii) $c\restriction A'{\setminus}F(A')$ is a bijection between $A'{\setminus}F(A')$ and ${\tau}$ for each $A'\in {\mathcal{A}}'$.
Then for each $A\in {\mathcal{A}}$, $${\tau}{\setminus}I_c(A){\subset}\{0\}\cup c[F(A\cap X)]\in \br
{\tau};<{\tau};,$$ i.e. $c$ witnesses ${[\lambda,\kappa,\mu]\Rightarrow\tau }$.
In [@EH3 Theorem 8(b)] Erdős and Hajnal proved that $$\label{eh1x}
\text{${{\mathbf M}(\lambda,\kappa,d)}\to {{\mathbf B}(\omega)}$ for $d<{\omega}\le
{\kappa}\le {\lambda}$.}$$ Moreover, in [@KO2 Theorem 2], Komjáth proved $$\label{ko1}
\text{${{\mathbf M}(\lambda,\omega,d)}\to {\bf ED}$ for $d<{\omega}\le
{\lambda}$.}$$ By proposition \[lm:reduction\], (\[eh1x\]) and (\[ko1\]) imply ${[\lambda,\kappa,d]\Rightarrow\omega }$. (Actually, instead of (\[eh1x\]), ${{\mathbf M}(\lambda,\kappa,d)}\to {{\mathbf B}({{{\omega}_1}})}$ would be enough.)
As a matter of fact, the theorem of Erdős and Hajnal, [@EH3 Theorem 8(b)] that we stated and used above can be proved with the method of elementary chains as presented in the first proof of theorem \[tm:korlatos\]. Moreover, we should point out that all the results mentioned in this section can also be deduced from the very general, and therefore rather technical, main theorem 1.6 of [@HJS1].
A finite upper bound for $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{+m},\kappa,d)$
===========================================================================================
We have seen in the previous section that ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,d)$ is countable whenever $\lambda\ge \kappa \ge \omega > d$. The aim of this section is to show that if $\lambda$ is “not much bigger than” $\kappa$, namely it is a finite successor of $\kappa$, then ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,d)$ is even finite. This is immediate from the following theorem that is formulated in terms of the weak conflict free chromatic number.
\[tm:egeszresz\] If ${\kappa}$ is infinite, $d > 0$ and $m$ are natural numbers then $$w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{+m},\kappa,d) \le {\left\lfloor
{\frac{(m+1)(d-1)+1}2} \right\rfloor+1}\,,$$ or with our alternative arrow notation: $$\label{dag}
{[{\kappa}^{+m},\kappa,d]\to_w\left\lfloor {\frac{(m+1)(d-1)+1}2} \right\rfloor+1 }.$$
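Let us note the two special cases of (\[dag\]) that will reappear later: for odd $d=2\ell+1$ the bound equals $\left\lfloor\frac{(m+1)2\ell+1}{2}\right\rfloor+1=(m+1)\cdot\ell+1$, while for $d=2$ it equals $\left\lfloor\frac{m+2}{2}\right\rfloor+1=\left\lfloor\frac{m}{2}\right\rfloor+2$; by corollaries \[cor:gchoup3\] and \[cor:gchoup2\] below, both values are actually attained under GCH.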
We shall actually prove a stronger result than theorem \[tm:egeszresz\]. This involves a refined version of our weak arrow relation whose definition is given next. In this we shall use ${{\mathcal F}}(A,B)$ to denote the set of all partial functions from $A$ to $B$.
\[def:gen\_princ\] Let $ {\lambda} \ge \kappa \ge \omega$ and $d,k,x\in {\omega}$. Then $${[\lambda,\kappa,d,k]\to_w x}$$ abbreviates the following statement: If $C {\subset}\lambda$ and ${{\mathcal A}}{\subset}\br
{\lambda};{\kappa};$ is any $d$-almost disjoint system satisfying $|A\cap C|\le k$ for each $A\in {{\mathcal A}}$, then for every partial function $f \in {{\mathcal F}}(C,x)$ there is a weak conflict free coloring $g \in {{\mathcal F}}(\lambda,x)$ of ${{\mathcal A}}$ such that $g\upharpoonright C
= f$. Note that the last equality is equivalent to $g \supset f$ and $C \cap {\operatorname{dom}}(g) = {\operatorname{dom}}(f)$.
For later use we also define the (strict) relation ${[\lambda,\kappa,d,k]\to x}$ as follows: For any $d$-almost disjoint ${{\mathcal A}}{\subset}\br {\lambda};{\kappa};$ and $f \in {{\mathcal F}}(\lambda,x)$ satisfying $|A\cap {\operatorname{dom}}(f)|\le k$ for each $A\in {{\mathcal A}}$, there is a conflict free coloring $g : \lambda \to x$ of ${{\mathcal A}}$ with $g
\supset f$.
The main result of this section may then be formulated as follows. (Note that theorem \[tm:egeszresz\] is an immediate corollary of the particular case $k = 0$ of theorem \[tm:ind\].)
\[tm:ind\] Let ${\kappa}$ be an infinite cardinal and $m,d,k$ be natural numbers with $d>0$. Then $${[{\kappa}^{+m},\kappa,d,k]\to_w \left\lfloor {\frac{(m+1)(d-1)+k+1}{2}} \right\rfloor+1}.$$
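In particular, for $k=0$ the only allowed sets $C$ are those disjoint from $\bigcup{{\mathcal A}}$, so taking $C=f={\emptyset}$ the relation simply provides a weak conflict free coloring of every $d$-almost disjoint ${{\mathcal A}}{\subset}\br {\kappa}^{+m};{\kappa};$ with $\left\lfloor\frac{(m+1)(d-1)+1}{2}\right\rfloor+1$ colors, which is exactly (\[dag\]).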
The proof of theorem \[tm:ind\] will be carried out by induction on $m$, using theorems \[tm:ind0\] and \[tm:ind\_new\] below.
\[tm:ind0\] Let ${\kappa}$ be an infinite cardinal, moreover let $d$ and $x$ be natural numbers with $2x > d$. Then $${[{\kappa},\kappa,d,2x-d-1]\to_w x}\,.$$
Let us write $k=2x-d-1$ and assume that a set $C {\subset}\kappa$, a $d$-almost disjoint system ${{\mathcal A}}{\subset}\br {\kappa};{\kappa};$, and a partial function $f \in {{\mathcal F}}(C,x)$ are given such that $|A\cap C|\le k$ for each $A\in {{\mathcal A}}$. We may clearly assume that $|{{\mathcal A}}| = \kappa$, and hence may fix a one-to-one $\kappa$-type enumeration $\{A_\eta : \eta < \kappa\}$ of ${{\mathcal A}}$.
By transfinite induction we shall define $f_{\eta} \in
{{\mathcal F}}(\kappa,x)$ for ${\eta}\le {\kappa}$ such that the inductive conditions (i) - (iv) below be valid.
(i) $f_{\eta} \supset f_\zeta \supset f$ for $\eta > \zeta\,$,
(ii) $\,C \cap{\operatorname{dom}}(f_{\eta}{\setminus}f) = \emptyset$ and $|f_{\eta}{\setminus}f|\le |{\eta}|\,$,
(iii) $\forall\, {\zeta}<{\eta}$ $\exists\, i<x$ $|A_{\zeta}\cap f_{\eta}^{-1}\{i\}|=1\,$,
(iv) if ${\gamma}\ge {\eta}$ then $|A_{\gamma}\cap {\operatorname{dom}}(f_{\eta}{\setminus}f)|\le d$.
[**Case 1.**]{} [*${\eta}=0$.*]{}\
Put $f_0=f$, then (i) - (iv) hold trivially.
[**Case 2.**]{} ${\eta}$ is a limit ordinal.\
Put $f_{\eta}=\cup\{f_{\zeta}:{\zeta}<{\eta}\}$. It is again easy to check that the validity of conditions (i) - (iv) will be preserved. In particular, (iii) is preserved because, as $x$ is finite, for each $\zeta < \eta$ there are cofinally many $\xi <
\eta$ satisfying $|A_{\zeta}\cap f_{\xi}^{-1}\{i\}|=1\,$ with the same $i < x$.
[**Case 3.**]{} [*${\eta}={\zeta}+1$.*]{}\
Then we have $$\begin{gathered}
\notag
|A_{\zeta}\cap {\operatorname{dom}}(f_{\zeta})| = |A_{\zeta}\cap {\operatorname{dom}}(f)|+
|A_{\zeta}\cap ({\operatorname{dom}}(f_{\zeta}{\setminus}f))|\le k+d\,,\end{gathered}$$ consequently, $2x>2x-1=k+d$ implies that there is $i<x$ such that $|A_{\zeta}\cap f_{\zeta}^{-1}\{i\}|\le 1$. If there is an $i<x$ such that $|A_{\zeta}\cap f_{\zeta}^{-1}\{i\}|= 1$ then the choice $f_{\eta}=f_{\zeta}$ clearly works.
Otherwise we may fix $j<x$ with $A_{\zeta}\cap
f_{\zeta}^{-1}\{j\}={\emptyset}$. Let us then put $${{\mathcal A}}_{\zeta}=\{A_{\xi}:{\xi}<{\zeta}\}\cup\{
A_{\gamma}:{\zeta}<{\gamma}\land |A_\gamma \cap
{\operatorname{dom}}(f_{\zeta}{\setminus}f)| = d\}\,.$$ Using $|{\operatorname{dom}}(f_{\zeta}{\setminus}f)|\le |{\zeta}| < \kappa$ and that ${{\mathcal A}}$ is $d$-almost disjoint we get $|{{\mathcal A}}_{\zeta}| <{\kappa}$, moreover we also have $|C \cap A_\zeta| \le k$. Thus we can pick $$\xi_{\zeta}\in A_{\zeta}{\setminus}\big(\cup{{\mathcal A}}_{\zeta} \cup C \big)$$ and put $$f_{\eta} = f_{\zeta+1} =f_{\zeta} \cup \{\langle
\xi_\zeta,j \rangle\}.$$
Then $f_{\eta}$ clearly satisfies (i) and (ii). If ${\xi}<{\zeta}$ then, by our construction, $f_{\eta}\restriction
A_{\xi}=f_{\zeta}\restriction A_{\xi}$, hence, as (iii) is satisfied by $f_\zeta$, there is $i<x$ such that $|f_{\eta}^{-1}\{i\}\cap A_{\xi}|=1$. Moreover, $f_{\eta}^{-1}\{j\}\cap A_{\zeta}=\{\xi_{\zeta}\}$, so (iii) is satisfied by $f_{\eta}$ as well.
Finally, to show that $f_\eta$ satisfies (iv), consider any $\gamma \ge {\eta}$. If we have $|A_{\gamma}\cap\,
{\operatorname{dom}}(f_{\zeta}{\setminus}f)|< d$ then $|A_{\gamma}\cap\,
{\operatorname{dom}}(f_{\eta}{\setminus}f )|\le d$ holds trivially, because $|{\operatorname{dom}}(f_{\eta} {\setminus}f_{\zeta}|\le 1$. If, on the other hand, $\,|A_{\gamma}\cap {\operatorname{dom}}(f_{\zeta}{\setminus}f)| = d$ then $A_{\gamma}\in {{\mathcal A}}_{\zeta}$ and so $\xi_{\zeta}\notin
A_{\gamma}$. Thus, in this case, $|A_{\gamma}\cap\,
{\operatorname{dom}}(f_{\eta}{\setminus}f)|= |A_{\gamma}\cap\, {\operatorname{dom}}(f_{\zeta}{\setminus}f)|=d$; in any case $f_\eta$ satisfies (iv).
Obviously, then $f_{\kappa} \in {{\mathcal F}}(\kappa,x)$ is a weak conflict free coloring of ${{\mathcal A}}$ that satisfies $f_\kappa
\upharpoonright C = f$, completing the proof.
Next we prove a stepping up result for the first parameter of our new arrow relations. The proof of this will reveal why we chose to introduce this new relation.
\[tm:ind\_new\] Let $\lambda \ge {\kappa} \ge \omega$ and $\,d,k,x\in {\omega}$ with $d > 0$. Then
1. $[\lambda,\kappa,d,k+d-1] \to_w x$ implies $[\lambda^+,\kappa,d,k] \to_w x$,
2. $[\lambda,\kappa,d,k+d-1] \to x$ implies $[\lambda^+,\kappa,d,k] \to x$.
(1). Assume that $C {\subset}\lambda^+$ and the $d$-almost disjoint system ${{\mathcal A}}{\subset}\br {\lambda^+};{\kappa};$ are such that $|A
\cap C| \le k$ for any $A \in {{\mathcal A}}$, moreover $f \in {{\mathcal F}}(C,x)$. Let $\<N_{\nu}:{\nu}<\lambda^+\>$ be a $\lambda^+$-chain of elementary submodels such that $\lambda^+,{{\mathcal A}},C,f \in N_1$ and $\lambda{\subset}N_1$.
By transfinite induction we shall define $g_{\eta}\in
{{\mathcal F}}(\lambda^+,x)$ for all ${\eta}< \lambda^+$ satisfying the following inductive hypotheses.
(i) ${\operatorname{dom}}(g_{\eta}){\subset}N_{{\eta}}$ and $g_\zeta \subset g_\eta$ for $\zeta < \eta$,
(ii) $g_\eta \upharpoonright C \cap N_\eta = f\restriction C \cap N_{\eta}$,
(iii) $g_{\eta}$ is a weak conflict free coloring of ${{\mathcal A}}\cap
N_{\eta}$.
[**Case 1.**]{} [*${\eta}=0$.*]{}\
We have to put $g_0 = \emptyset\,$ because $N_0 = \emptyset$. This works trivially for the same reason.
[**Case 2.**]{} [*${\eta}$ is limit.*]{}\
Then we put $g_{\eta}=\cup\{g_{\zeta}:{\zeta}<{\eta}\}$. Now, (i) and (ii) follow immediately from $N_{\eta}=\cup\{N_{\xi}:{\xi}<{\eta}\}$. To check (iii), pick $A\in {{\mathcal A}}\cap N_{\eta}$. There is a $\zeta < \eta$ with $A \in
N_\zeta$ and so for every $\nu \in \eta {\setminus}\zeta$ there is $i_\nu < x$ with $|A\cap g_{\nu}^{-1}\{i_\nu\}|=1$. As $x$ is finite, we have an $i < x$ such that $i_\nu = i$ for cofinally many $\nu \in \eta$, hence $|A\cap g_{\eta}^{-1}\{i\}|=1$.
[**Case 3.**]{} [*${\eta}={\zeta}+1$.*]{}\
Let us put $C_\eta = (C \cap N_\eta) \cup N_{\zeta}$ and $f_{\eta}=
(f\restriction N_{\eta}) \cup g_{\zeta} \in {{\mathcal F}}(C_\eta,x)$. Then for all $A\in {{\mathcal A}}\cap (N_\eta {\setminus}N_{\zeta}) {\subset}[\lambda^+
\cap N_\eta]^{\kappa}$ we have $$|A\cap C_{\eta}|\le |A\cap C|+|A\cap N_{\zeta}|\le
k+(d-1)\,.$$
But $|\lambda^+ \cap N_\eta| = \lambda$, hence we can apply $[\lambda,\kappa,d,k+d-1] \to_w x$ to $C_\eta$, ${{\mathcal A}}\cap
(N_\eta {\setminus}N_{\zeta})$, and $f_{\eta}$ to find a weak conflict free coloring $g_{\eta}$ of ${{\mathcal A}}\cap (N_{\eta} {\setminus}N_\zeta)$ such that ${\operatorname{dom}}(g_{\eta}){\subset}\lambda^+ \cap N_{\eta}$ and $$g_{\eta}\upharpoonright C_\eta = f_{\eta}\upharpoonright
C_\eta\,.$$ In particular, then $g_\zeta {\subset}g_\eta$ and since for every $A \in {{\mathcal A}}\cap N_\zeta$ we have $A {\subset}N_\zeta
{\subset}C_\eta$ we obtain that $g_\eta$ is a weak conflict free coloring $g_{\eta}$ of ${{\mathcal A}}\cap N_\eta$. Finally, $C \cap
N_\eta {\subset}C_\eta$ implies $g_\eta \upharpoonright C \cap N_\eta
= f\restriction C \cap N_{\eta}$, which shows that $g_\eta$ satisfies all three inductive hypotheses and thus completes the inductive construction.
It is now obvious that the function $g= \bigcup_{\eta <
\lambda^+}g_\eta$ is a weak conflict free coloring of ${{\mathcal A}}$ and satisfies $g \upharpoonright C = f$, which completes the proof of (1).
\(2) can be proved in a completely similar, but even simpler, manner.
To start with, in the case $m = 0$, we have to show $$[\kappa,\kappa,d,k] \to_w \left\lfloor \frac{k+d}2\right\rfloor +1$$ for all natural numbers $k$ and $d$. To see this, put $x =
\left\lfloor \frac{k+d}2\right\rfloor +1$ and note that we have $2x \ge k+d+1$, hence $2x-d-1 \ge k$. But then, applying theorem \[tm:ind0\], we can conclude ${[{\kappa},\kappa,d,2x-d-1]\to_w x}\,$ and hence ${[{\kappa},\kappa,d,k]\to_w x}\,$ as well.
Now, assume that $m > 0$ and theorem \[tm:ind\] has been verified for $m-1$, i.e.
$${[{\kappa}^{+(m-1)},\kappa,d,k]\to_w \left\lfloor {\frac{m(d-1)+k+1}{2}}
\right\rfloor+1}$$
holds for all $d$ and $k$. Applying the stepping up theorem \[tm:ind\_new\] to this formula with $k$ replaced by $k+d-1$ (and $\lambda = \kappa^{+(m-1)}$) we obtain $${[{\kappa}^{+m},\kappa,d,k]\to_w \left\lfloor {\frac{(m+1)(d-1)+k+1}{2}}
\right\rfloor+1},$$ completing the induction step from $m-1$ to $m$.
A lower bound for $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_m(\kappa),\kappa,d)$ {#sc:dlast}
========================================================================================
Now we know that $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{+m},\kappa,d)$ is finite, hence it is natural to attempt to find its exact value. The aim of this section is to carry out this attempt, at least under GCH and for $d
= 2$ or $d$ odd. The case $m = 0$ is relatively easy to deal with, using the following lemma.
\[lm:0\*\] Fix a cardinal $\kappa \ge \omega$ and a natural number $t > 0$. We have a procedure that assigns to any $(\kappa,\kappa,2t)$-system ${{\mathcal F}}$ another $(\kappa,\kappa,2t)$-system ${{\mathcal F}}^*$ in such a way that $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}^*) > t$ holds whenever $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}) \ge t$.
Given any $(\kappa,\kappa,2t)$-system ${{\mathcal F}}$, let us first choose pairwise disjoint sets $\{A_n:n<2t\}\cup\{B_{\nu}:{\nu}<{\kappa}\}{\subset}\br
{\kappa};{\kappa};$. For each $n<2t$ let ${{\mathcal A}}_n{\subset}\br
A_n;{\kappa};$ be an isomorphic copy of ${{\mathcal F}}$. Let $\{X_{\nu}:{\nu}<{\kappa}\}$ be a one-one enumeration of the family $$\{X : |X| = 2t \mbox{ and } |X\cap A_n|= 1 \text{ for each } n<2t\}$$ of all transversals of $\{A_n:n<2t\}$. Write $C_{\nu}=B_{\nu}\cup
X_{\nu}$ and let $${{\mathcal F}}^*=\bigcup\{{{\mathcal A}}_n:n<{2t}\}\cup \{C_{\nu}:{\nu}<{\kappa}\}.$$ Then ${{\mathcal F}}^*$ is $2t$-almost disjoint, because
- $|C_{\nu}\cap C_{\mu}|=|X_{\nu}\cap X_{\mu}| < 2t$ for ${\nu} \ne {\mu}$,
- $|C_{\nu}\cap A|\le 1$ for $A\in \bigcup_{n<{2t}}{{\mathcal A}}_n$.
Now, assume that $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}) \ge t$ and, contrary to our claim, $h$ is a weak conflict free coloring of ${{\mathcal F}}^*$ with color set $t$. Then $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}_n)\ge t$ implies that $h[A_n] = t$ for each $n < 2t$, hence for each $i < t$ there are $x_i\in A_{2i}$ and $y_i\in A_{2i+1}$ such that $h(x_i)=h(y_i)=i$.
There is a ${\nu}<{\kappa}$ with $X_{\nu}=\{x_i, y_i:i<t\}$. But then $|h^{-1}\{i\}\cap C_{\nu}|\ge 2$ for each $i<t$, and $h$ is not a weak conflict free coloring of ${{\mathcal F}}^*$, a contradiction. So, indeed, we have $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}^*)> t$.
\[tm:omegaw\] For any cardinal ${\kappa} \ge \omega$ and integer $d \ge 2$ we have $$\tag{$*_d$}
{w{\chi}_{\rm CF}(\kappa,\kappa,d)}={\left\lfloor {\frac{d}2} \right\rfloor+1}.$$
We shall prove, by induction on $1\le s<{\omega}$, that $$\tag{$\circ_s$}
{w{\chi}_{\rm CF}(\kappa,\kappa,2s)}\ge s+1.$$ Then, also applying theorem \[tm:egeszresz\], we have $$s+1 \le {w{\chi}_{\rm CF}(\kappa,\kappa,2s)}\le{w{\chi}_{\rm CF}(\kappa,\kappa,2s+1)}\le s+1\,,$$ hence $(\circ_s)$ implies both $(*_{2s})$ and $(*_{2s+1})$.
[**First step:**]{} $s=1$.\
Take a $2$-dimensional vector space $V$ with $|V| = \kappa$ over a field of cardinality ${\kappa}$ and let ${{\mathcal L}}$ be the family of all lines (1-dimensional affine subspaces) in $V$. Then ${{\mathcal L}}$ is $2$-almost disjoint, hence a $(\kappa,\kappa,2)$-system, and it trivially does not have a weak conflict free coloring with a single color. So ${w{\chi}_{\rm CF}(\kappa,\kappa,2)}\ge 2=1+1$.
[**Induction step:**]{} $s\to (s+1) $.\
Let ${{\mathcal F}}$ be a $(\kappa,\kappa,2s)$-system with ${\operatorname{w\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}})
\ge s+1$. We may then apply lemma \[lm:0\*\] to ${{\mathcal F}}$ with $t =
s+1$ to conclude that the $(\kappa,\kappa,2(s+1))$-system ${{\mathcal F}}^*$ satisfies $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}^*) \ge t+1 = (s+1)+1$.
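Thus, for instance, ${w{\chi}_{\rm CF}(\kappa,\kappa,2)}={w{\chi}_{\rm CF}(\kappa,\kappa,3)}=2$ and ${w{\chi}_{\rm CF}(\kappa,\kappa,4)}={w{\chi}_{\rm CF}(\kappa,\kappa,5)}=3$ for every infinite cardinal $\kappa$.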
Theorem \[tm:omegaw\] shows that the upper bound established in theorem \[tm:egeszresz\] is sharp for $m = 0$. We shall show next that this is also true for all $m > 0$, provided that GCH holds and $d$ is odd. The following lemma plays the key role in proving this.
\[lm:l\*\] For any cardinals $\lambda \ge \kappa \ge \omega$ and natural number $\ell > 0$ we have a procedure assigning to any $(\lambda,\kappa,2\ell+1)$-system ${{\mathcal F}}$ a $\,(2^\lambda,\kappa,2\ell+1)$-system ${{\mathcal F}}^*$ so that $\,w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}) \ge \ell$ implies $$w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}^*) \ge w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}) + \ell\,.$$
Fix the $(\lambda,\kappa,2\ell+1)$-system ${{\mathcal F}}$ and then choose pairwise disjoint sets $$\{A_{\alpha}:{\alpha}<{\lambda}\}
\cup\{C_{\delta}:{\delta}<{{{2^{\lambda}}}}\}{\subset}\br {{{2^{\lambda}}}};{\lambda};\,.$$ For ${\alpha}<{\lambda}$, resp. ${\delta}<{{{2^{\lambda}}}}$, let ${{\mathcal A}}_{\alpha}{\subset}\br A_{\alpha};{\kappa};$, resp. ${{\mathcal C}}_{\delta}{\subset}\br C_{\delta};{\kappa};$, be isomorphic copies of ${{\mathcal F}}$. For every $\delta < {{2^{\lambda}}}$ we also fix a one-one enumeration ${{\mathcal C}}_{\delta}=\{C_{{\delta},i}:i<{\lambda}\}$. Let us then put $${{\mathcal S}}=\{S\in \big[\bigcup_{\alpha < \lambda}A_\alpha
\big]^{2\ell} :\forall {\alpha}<{\lambda}\ |S\cap A_{\alpha}|\le
1\}$$ and $\{f_{\delta}:{\delta}<{{{2^{\lambda}}}}\}$ be an enumeration of all functions $f : {\lambda} \to {{\mathcal S}}$ that satisfy $$\text{$f(i) \cap f(j) = {\emptyset}\,$ for any $\{i,j\} \in
[{\lambda}]^2$.}$$ Finally, let $C^*_{{\delta},i}=C_{{\delta},i}\cup f_{\delta}(i)$ and put $${{\mathcal F}}^*= \bigcup_{\alpha < \lambda} {{\mathcal A}}_\alpha
\cup\{C^*_{{\delta},i} : {\delta}<{{{2^{\lambda}}}},i<{\lambda}\}.$$
[ **Claim 1.** ]{}[*${{\mathcal F}}^*$ is $(2\ell+1)$-almost disjoint.* ]{}
The only non-trivial case is showing $|C^*_{{\delta},i}\cap
C^*_{\delta',i'}|\le 2\ell$ for $\<{\delta},i\>\ne
\<{\delta}',i'\>$. Clearly, we have $$C^*_{{\delta},i}\cap C^*_{{\delta}',i'}{\subset}(C_{{\delta}}\cap
C_{{\delta}'})\cup \bigl(f_{{\delta}}(i) \cap
f_{{\delta}'}(i')\bigr).$$ Now, if ${\delta}\ne {\delta}'$ then $C_{{\delta}}\cap
C_{{\delta}'}={\emptyset}$ and $|f_{{\delta}}(i) \cap
f_{{\delta}'}(i')|\le |f_{{\delta}}(i)|=2\ell$. If, on the other hand, ${\delta}={\delta}'$ then $f_{{\delta}}(i) \cap
f_{{\delta}}(i')={\emptyset}$ by definition, so $|C^*_{{\delta},i}\cap
C^*_{{\delta},i'}| = |C_{{\delta},i}\cap C_{{\delta},i'}|\le
2\ell\,$ because $\,{{\mathcal C}}_\delta\,$ is $(2\ell+1)$-almost disjoint.
[ **Claim 2.** ]{}If $\,w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}) \ge \ell\,$ then $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}^*) \ge w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}) + \ell\,.$
The claim is obvious if $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}^*) \ge \omega$, so we may assume that $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}^*) < \omega$. Now, let $h$ be any weak conflict-free coloring of ${{\mathcal F}}^*$ with a finite color set $T$. By $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}_{\alpha})\ge \ell$, for each ${\alpha}<{\lambda}$ we have $|h[A_{\alpha}]|\ge \ell$, thus there are $I\in \br
{\lambda};{\lambda};$ and $M =\{m_j:j<\ell\}\in \br T;\ell;$ such that $h[A_{\alpha}]\supset M$ for each ${\alpha}\in I$.
Let $\{{\alpha}_{{\zeta},n}:{\zeta}<{\lambda}\,,\,n<2\ell\}$ be distinct elements of $I$. We may then, for each $j<\ell$, pick $x_{{\zeta},2j}\in A_{{\alpha}_{{\zeta},2j}}$ and $x_{{\zeta},2j+1}\in A_{{\alpha}_{{\zeta},2j+1}}$ satisfying $$h(x_{{\zeta},2j})=h(x_{{\zeta},2j+1})=m_j\,.$$ There is ${\delta}<2^{\lambda}$ such that $f_{\delta}({\zeta})=\{x_{{\zeta},n}:n<2\ell\}$ for all $\zeta <
\lambda$, then for each $m\in M$ and ${i}<{\lambda}$ we have $|h^{-1}\{m\}\cap f_{\delta}({i})|\ge 2$. It follows that $h\restriction (C_{\delta}{\setminus}h^{-1}M)$ must be a weak conflict free coloring of ${{\mathcal C}}_{\delta}$ with color set $T {\setminus}M$, showing that $|T {\setminus}M| \ge w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}})$, hence $|T | \ge
w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}) + \ell$, completing the proof.
\[tm:step3\] For any $\kappa \ge \omega$ and $m,\ell \in \omega$ with $\ell >
0$ we have $$w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_m(\kappa),\kappa,2\ell+1) \ge (m+1)\cdot
\ell + 1\,.$$
By Theorem \[tm:omegaw\], we have ${w{\chi}_{\rm CF}({\kappa},\kappa,2\ell+1)}=\ell +1.$ So we may simply apply lemma \[lm:l\*\] $m$ times to obtain the result.
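Spelled out: starting from $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa,\kappa,2\ell+1)=\ell+1$, each application of lemma \[lm:l\*\] raises the lower bound by $\ell$ while replacing $\lambda$ by $2^\lambda$, so after $j$ applications we obtain $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_j(\kappa),\kappa,2\ell+1)\ge (j+1)\cdot\ell+1$ for every $j\le m$.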
As an immediate consequence of theorems \[tm:korlatos\] and \[tm:step3\] we obtain the following result.
\[tm:step4\] For every infinite cardinal $\kappa$ and natural number $d > 1$ we have $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_\omega(\kappa),\kappa,d) = \omega\,.$$
From theorems \[tm:egeszresz\] and \[tm:step3\] we may immediately deduce the promised exact value of ${w{\chi}_{\rm CF}({\kappa}^{+m},\kappa,2\ell+1)}$ under GCH.
\[cor:gchoup3\] If GCH holds then for any cardinal ${\kappa} \ge \omega$ and integers $m\ge 0$, $\ell>0$ we have $${w{\chi}_{\rm CF}({\kappa}^{+m},\kappa,2\ell+1)}=(m+1)\cdot\ell +1.$$
We do not know, in general, if an exact formula like this can be obtained for ${w{\chi}_{\rm CF}({\kappa}^{+m},\kappa,2\ell)}$, but we do know this in the simplest case $\ell =1\,.$ The key to this is again a “lift up” lemma in the spirit of lemmas \[lm:0\*\] and \[lm:l\*\].
\[lm:2\*\] For any $\lambda \ge \kappa \ge \omega$, we can assign to every $(\lambda,\kappa,2)$-system ${{\mathcal F}}$ a $\,(2^{2^\lambda},\kappa,2)$-system ${{\mathcal F}}^*$ so that if $\,w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}})$ is finite then $$w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}^*) > w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}})\,.$$
Let ${{\mathcal F}}$ be any $(\lambda,\kappa,2)$-system and, to start with, fix pairwise disjoint sets $$\begin{gathered}
\notag
\{A_{\delta}:{\delta}<{{{2^{\lambda}}}}\}\cup
\{B_{{\eta},{\alpha}}:{\eta}<{2^{2^{\lambda}}}, {\alpha}<{\lambda}\}
\cup\\\{C_{{\eta},{\delta}}:{\eta}<{2^{2^{\lambda}}},{\delta}<{{{2^{\lambda}}}}\}{\subset}\br {2^{2^{\lambda}}};{\lambda};.\end{gathered}$$ For every ${\delta}<{{{2^{\lambda}}}}$ let ${{\mathcal A}}_{\delta}{\subset}\br
A_{\delta};{\kappa};$ be an isomorphic copy of ${{\mathcal F}}$ and define similarly ${{\mathcal B}}_{{\eta},{\alpha}}{\subset}\br
B_{{\eta},{\alpha}};{\kappa};$ and ${{\mathcal C}}_{{\eta},{\delta}} {\subset}\br C_{{\eta},{\delta}};{\kappa};$. We also enumerate, without repetitions, each ${{\mathcal C}}_{{\eta},{\delta}}$ as $\{C_{{\eta},{\delta},i}:i<{\lambda}\}$.
Let us put $A=\cup \{A_{\delta}:{\delta}<{{{2^{\lambda}}}}\}$ and $B_{\eta}=\cup \{B_{{\eta},{\alpha}}: {\alpha}<{\lambda}\}$ for each ${\eta}<{2^{2^{\lambda}}}$. Then enumerate the injective functions $f :
{{{2^{\lambda}}}}\times {\lambda} \to A$ as $\{f_{\eta}:{\eta}<{2^{2^{\lambda}}}\}$ and, for any ${\eta}<{2^{2^{\lambda}}}$, enumerate the injective functions $g :
{\lambda} \to B_{\eta}$ as $\{g_{\eta,\delta}:{\delta}<{{{2^{\lambda}}}}\}$. Finally, let $$C^*_{{\eta},{\delta},i}=
C_{{\eta},{\delta},i}\cup\{f_{\eta}({\delta},i),g_{\eta,\delta}(i)\}$$ and put $$\begin{gathered}
\notag
{{\mathcal F}}^*= \bigcup\{{{\mathcal A}}_\delta : \delta < {{2^{\lambda}}}\}\cup \bigcup
\{{{\mathcal B}}_{{\eta},{\alpha}} : \< \eta,\alpha \> \in {2^{2^{\lambda}}}\times \lambda \}\cup \\
\cup\{C^*_{{\eta},{\delta},i} :\<\eta,\delta,i\> \in {2^{2^{\lambda}}}\times
{{2^{\lambda}}}\times {\lambda}\}.\end{gathered}$$
[ **Claim 1.** ]{}[*${{\mathcal F}}^*$ is $2$-almost disjoint.*]{}
The only non-trivial task is to show that $|C^*_{{\eta},{\delta},i}\cap C^*_{{\eta}',{\delta}',i'}|\le 1$ for $\<{\eta},{\delta},i\>\ne \<{\eta}',{\delta}',i'\>$. Clearly, we have $$\begin{gathered}
C^*_{{\eta},{\delta},i}\cap C^*_{{\eta}',{\delta}',i'}{\subset}(C_{{\eta},{\delta},i}\cap C_{{\eta}',{\delta}',i'})\cup\\
\cup\bigl(\{f_{\eta}({\delta},i)\} \cap
\{f_{\eta'}({\delta}',i')\}) \cup \bigl(\{g_{\eta,\delta}(i)\}
\cap \{g_{\eta',\delta'}(i')\}) \bigr).\end{gathered}$$ If ${\eta}\ne {\eta}'$ then $C_{{\eta},{\delta}}\cap
C_{{\eta}',{\delta}'}={\emptyset}$, and $\{g_{\eta,\delta}(i)\} \cap
\{g_{\eta',\delta'}(i')\}{\subset}B_{{\eta}}\cap B_{{\eta}'}={\emptyset}$, hence $$C^*_{{\eta},{\delta},i}\cap
C^*_{{\eta}',{\delta}',i'}{\subset}\{f_{\eta}({\delta},i)\} \cap
\{f_{\eta'}({\delta}',i')\}.$$ If ${\eta}={\eta}'$ and ${\delta}\ne {\delta}'$ then $C_{{\eta},{\delta}}\cap
C_{{\eta},{\delta}'}={\emptyset}$, and $f_{\eta}({\delta},i)\ne
f_{\eta}({\delta}',i')$ because $f_{\eta}$ is injective, hence $$C^*_{{\eta},{\delta},i}\cap C^*_{{\eta}',{\delta}',i'}{\subset}\{g_{\eta,\delta}(i)\} \cap \{g_{\eta',\delta'}(i')\}.$$ Finally, if ${\eta}={\eta}'$, ${\delta}={\delta}'$, and $i\ne i'$ then $f_{\eta}({\delta},i)\ne f_{\eta}({\delta},i')$ and $g_{\eta,\delta}(i)\ne g_{\eta,\delta}(i')$ because $g_{\eta,\delta}$ is also injective, and so $$|C^*_{{\eta},{\delta},i}\cap C^*_{{\eta}',{\delta}',i'}| =
|C_{{\eta},{\delta},i}\cap C_{{\eta}',{\delta}',i'}|\le 1\,.$$
[ **Claim 2.** ]{}$w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}^*) > w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}})\,$ [*if the latter is finite.*]{}
Assume that $w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}) = k < \omega$ and, contrary to our claim, $h$ is a weak conflict-free coloring of ${{\mathcal F}}^*$ with ${\operatorname{ran}}(h)=k$. Then, for each ${\delta}<{{{2^{\lambda}}}}$, the equality $\,w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}_{\delta}) = k$ implies that there is $a_{\delta}\in
A_{\delta}$ with $h(a_{\delta})=0$. Since $|\{a_{\delta}:{\delta}<{{{2^{\lambda}}}}\}| = {{{2^{\lambda}}}}$, there is an ${\eta}<{2^{2^{\lambda}}}$ with ${\operatorname{ran}}(f_{\eta})=
\{a_{\delta}:{\delta}<{{{2^{\lambda}}}}\}{\subset}h^{-1}\{0\}$.
Fix this $\eta$ and then apply $\,w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal B}}_{{\eta},{\alpha}}) =
k$ to find, for each ${\alpha}<{\lambda}$, some $b_{\alpha}\in
B_{{\eta},{\alpha}}$ with $h(b_{\alpha})=0$. Again, we have $|\{b_{\alpha}:{\alpha}<{\lambda}\}| = {\lambda}$, hence there is a $\,{\delta}<{{{2^{\lambda}}}}$ with ${\operatorname{ran}}(g_{\eta,\delta})=
\{b_{\alpha}:{\alpha}<{\lambda}\}{\subset}h^{-1}\{0\}$.
But then for each $i<{\lambda}$ we have $\{f_{\eta}({\delta},i),
g_{\eta,\delta}(i)\} {\subset}h^{-1}\{0\}$, consequently $h\restriction (C_{{\eta},{\delta}}{\setminus}h^{-1}\{0\})$ must be a weak conflict free coloring of ${{\mathcal C}}_{\eta,\delta}$ with $k-1$ colors, a contradiction. This contradiction proves Claim 2 and completes the proof of the lemma.
\[tm:step3’\] For any $\kappa \ge \omega$ and $\,m \in \omega$ we have $$w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_m(\kappa),\kappa,2) \ge \left\lfloor \frac {m}{2}\right\rfloor+2\,.$$
By theorem \[tm:omegaw\] this is true for $m = 0$ and $m = 1$. Moreover, if we assume $\,w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_m(\kappa),\kappa,2) \ge
\left\lfloor \frac {m}{2}\right\rfloor+2\,$ then, applying lemma \[lm:2\*\] with $\lambda = \beth_m(\kappa)$, we obtain $$w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_{m+2}(\kappa),\kappa,2) \ge \left\lfloor \frac
{m}{2}\right\rfloor+2+1 = \left\lfloor \frac
{m+2}{2}\right\rfloor+2\,.$$ Thus the theorem follows by a straightforward induction.
Comparing this with theorem \[tm:egeszresz\] we get the following result.
\[cor:gchoup2\] For ${\kappa} \ge \omega$ and $m \in \omega$, the equality $\beth_m(\kappa) = \kappa^{+m}$ implies $$\label{eq:e1}
{w{\chi}_{\rm CF}({\kappa}^{+m},{\kappa},2)}=\left\lfloor \frac
{m}{2}\right\rfloor+2\,.$$
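For example, under GCH this yields ${w{\chi}_{\rm CF}({\kappa}^{+4},{\kappa},2)}=\left\lfloor \frac{4}{2}\right\rfloor+2=4$, while for $m=0$ and $m=1$ the formula gives the value $2$.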
Attempts to compute ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_k,\omega,2)$
==================================================================================
In the previous section we succeeded in computing the exact value of $\,{w{\chi}_{\rm CF}({\kappa}^{+m},{\kappa},d)}$ in many cases, at least under GCH. As we have $$w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,d) \le
{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,d) \le w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,d)+1\,,$$ this also gives considerable information about ${{\chi}_{\rm CF}({\kappa}^{+m},{\kappa},d)}$. But can we find the exact value of $\,{{\chi}_{\rm CF}({\kappa}^{+m},{\kappa},d)}$, or even just of $\,{{\chi}_{\rm CF}(\omega_m,{\omega},d)}$, say under GCH and for many values of $m$ and $d$? This turned out to be a very hard problem that we address in the present section, admittedly with only rather meager results. There is no problem in the simplest possible case: $m \le
1$ and $d = 2$.
\[f:omega\] ${{\chi}_{\rm CF}(\kappa,\kappa,2)}={{\chi}_{\rm CF}(\kappa^+,\kappa,2)}=3$ for all ${\kappa}\ge {\omega}$.
First, by theorem \[tm:egeszresz\], we have $${{\chi}_{\rm CF}(\kappa,\kappa,2)} \le {{\chi}_{\rm CF}(\kappa^+,\kappa,2)} \le 3\,.$$ We have seen in the proof of theorem \[tm:omegaw\] that if $V$ is any $2$-dimensional vector space with $|V| = \kappa$ over any field of cardinality ${\kappa}$, then the $(\kappa,\kappa,2)$-system ${{\mathcal L}}$ of all lines in $V$ satisfies $$w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal L}}) = w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa,\kappa,2) = 2\,.$$ Consequently, we shall be done if we can show that ${{\mathcal L}}$ does not have a conflict free coloring with 2 colors.
Assume, on the contrary, that $f:V\to 2$ is a CF-coloring of ${{\mathcal L}}$ and write $C_i=f^{-1}\{i\}$ for $i\in 2$. Since $|C_i\cap
L|\ge 1$ for each line $L$ and color $i<2$, neither $C_i$ is collinear, i.e. $C_i\not\subset L$ for any $i<2$ and for any line $L$. Thus there are four lines $\{K^j_i:i,j<2\} {\subset}{{\mathcal L}}$ such that $|C_i\cap K^j_i|\ge 2$ for all $i,j < 2$. Since $f$ is a CF-coloring, for any $i,j<2$ we have a point $P^j_i$ with $K^j_i
\cap C_{1-i} = \{P^j_i\}$.
There is a line $L$ that intersects each $K^j_i$ in distinct points which are all different from the points $P^j_i$: indeed, fix a direction not parallel to any $K^j_i$; then all but finitely many of the ${\kappa}$ pairwise parallel lines in this direction meet the four lines $K^j_i$ in four distinct points, none of which is one of the points $P^j_i$. Then $|L\cap C_i|\ge 2$ for $i < 2$, hence $f$ is not a CF-coloring of ${{\mathcal L}}$, a contradiction.
What can we say about $\,{{\chi}_{\rm CF}(\omega_m,\omega,2)}$ for $m>1$? If $\beth_m = \omega_m$, in particular under GCH, from corollary \[cor:gchoup2\], we have, for any $m < \omega$, $$\left\lfloor \frac {m}{2}\right\rfloor+2 \le {{\chi}_{\rm CF}({\omega}_{m},{\omega},2)}
\le \left\lfloor \frac {m}{2}\right\rfloor+3\,,$$ hence, in particular, $$3\le{{\chi}_{\rm CF}({{\omega}_2},\omega,2)}\le {{\chi}_{\rm CF}({{\omega}_3},\omega,2)} \le 4.$$
We actually do not know the exact value of ${{\chi}_{\rm CF}({{\omega}_2},\omega,2)}$ even under GCH, but we can reformulate the problem in terms of the strict five-parameter arrow relation that was introduced in definition \[def:gen\_princ\]. One direction of this works in ZFC.
\[tm:equi\] If $[\kappa,\kappa,2,2] \to 3$ then ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{++},\kappa,2) =
3$.
Starting with the relation $[\kappa,\kappa,2,2] \to 3$ and applying theorem \[tm:ind\_new\] (2) twice we obtain $[\kappa^{++},\kappa,2,0] \to 3$ which, of course, is just $[\kappa^{++},\kappa,2] \to 3$, and hence, together with ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa,\kappa,2) = 3$, implies ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{++},\kappa,2) =
3$.
To go in the opposite direction, we first need the following result concerning the relation $[\lambda,\kappa,2,k] \to x$.
\[lm:x\] If $\,{[\lambda,\kappa,2,k]\not\to x}$ then this can be witnessed by a $(\lambda,\kappa,2)$-system ${{\mathcal X}}= \{X_i : i < \lambda\}
{\subset}[\lambda]^\kappa$ and a map $c \in {{\mathcal F}}(\lambda,x)$ such that $$Y = {\operatorname{dom}}(c) = \cup\{Y_i : i < \lambda\},$$ where $X_i \cap
Y {\subset}Y_i \in [Y]^k$ for each $i < \lambda$ and the $k$-element sets $Y_i$ are pairwise disjoint.
Fix an arbitrary $(\lambda,\kappa,2)$-system ${{\mathcal X}}= \{X_i : i <
\lambda\} {\subset}[\lambda]^\kappa$ and a map $c \in
{{\mathcal F}}(\lambda,x)$ that witnesses $\,{[\lambda,\kappa,2,k]\not\to x}$. For each $y \in Y$ consider the set $I_y
= \{ i \in \lambda : y \in X_i\}$ and if $|I_y| >
1$ then, for each $i \in I_y$, replace $y$ in $X_i$ by the pair $\<y,i\>$ and “blow up" $y$ in $Y$ to $\{y\} \times I_y$. Having done this for all $y \in Y$ let us denote the “new" $X_i$ by $X'_i$ and the “new" $Y$ by $Y'$. Also define the “new" function $c'$ on $Y'$ by the rule $c'(\<y,i\>) = c(y)$. We may then add, if necessary, completely new elements to $Y'$ (and extend $c'$ to them arbitrarily) to obtain the pairwise disjoint $k$-element sets $Y_i \supset X'_i \cap Y'$ forming a partition of $Y'$.
It is easy to check that the $(\lambda,\kappa,2)$-system ${{\mathcal X}}' =
\{X'_i : i < \lambda \}$ and the map $c'$, that now are of the desired form, also witness ${[\lambda,\kappa,2,k]\not\to x}$.
\[tm:x\] For any $\lambda \ge \kappa \ge \omega > k\,$, $$\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,2) = {\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_k(\lambda),\kappa,2) = x <
\omega$$ implies $\,[\lambda,\kappa,2,k] \to x$.
By the previous result, to conclude $[\lambda,\kappa,2,k] \to x$, it suffices to show the existence of a conflict free coloring of ${{\mathcal X}}$ with $x$ colors that extends $c$ for any $(\lambda,\kappa,2)$-system ${{\mathcal X}}=\{X_i:i<{\lambda}\}{\subset}\br \lambda;{\kappa};$ and partial map $c \in {{\mathcal F}}(\lambda,x)$ satisfying the conditions of lemma \[lm:x\]. That is, we may assume that we have a partition $\{Y_i : i <
\lambda\}$ of ${\operatorname{dom}}(c) = Y$ into disjoint $k$-element sets such that $X_i \cap Y {\subset}Y_i$ for all $i < \lambda$. For each $i <
\lambda$ we write $Y_i = \{y_i^j : 1 \le j \le k\}$. By $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,2) = x$, we can fix a $(\lambda,\kappa,2)$-system ${{\mathcal F}}$ with ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal F}}) = x$.
We now introduce some notation. For any $j$ we write $\beth_j(\lambda) = \lambda_j$ (so, in particular, $\lambda_0 =
\lambda$) and put $\Pi = \lambda_k \times
\lambda_{k-1}\times...\times \lambda_0$. For each $j \le k$ we shall also write $\Pi^j = \lambda_k \times...\times
\lambda_{j+1}\times \lambda_{j-1} \times...\times \lambda_0$, that is, the members of $\Pi^j$ are obtained from the members of $\Pi$ by deleting their $j$-th coordinate.
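To illustrate this notation in the case $k=2$, which is the case relevant for corollary \[cor:equi2\] below: $\Pi = \lambda_2\times\lambda_1\times\lambda_0$, $\Pi^2 = \lambda_1\times\lambda_0$, $\Pi^1 = \lambda_2\times\lambda_0$, and $\Pi^0 = \lambda_2\times\lambda_1$, where $\lambda_0 = \lambda$, $\lambda_1 = 2^{\lambda}$, and $\lambda_2 = 2^{2^{\lambda}}$.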
Next we choose pairwise disjoint sets $\{A_\sigma^j : j \le
k,\,\sigma \in \Pi^j\}$ of size $\lambda$, and for every $j$ with $1 \le j \le k$ and $\sigma \in \Pi^j$ we let ${{\mathcal A}}_\sigma^j$ be a copy of ${{\mathcal F}}$ on $A_\sigma^j$.
For fixed $j$ with $1 \le j \le k$ and $\varrho \in \lambda_k
\times... \times\lambda_{j+1}$, consider the family $\mathbb{F}_\varrho^j$ of all functions $f$ such that ${\operatorname{dom}}(f) =
\lambda_{j-1} \times...\times \lambda_0\,$ and $f(\eta) \in
A_{\varrho\smallfrown\eta}^j \,$ for all $\,\eta \in \lambda_{j-1}
\times...\times \lambda_0\,$. Then $|\mathbb{F}_\varrho^j| =
\lambda_j$, hence for every $j$ with $1 \le j \le k$ there is a function $f^j$ with ${\operatorname{dom}}(f^j)= \Pi$ and having the property that, if we fix $\varrho \in \lambda_k \times... \times\lambda_{j+1}$, then the functions $\eta \mapsto f^j(\varrho\smallfrown
\<\xi\>\smallfrown \eta)$ enumerate $\mathbb{F}_\varrho^j$ in a one-one manner, as $\xi$ ranges over $\lambda_j$.
For any $\sigma \in \Pi^0$ we put $$B_\sigma^0 = A_\sigma^0 \cup
\{f^j(\sigma\smallfrown \<i\>) : 1 \le j \le k \mbox{ and } i <
\lambda
\}\,.$$ Then, as $|\lambda {\setminus}Y| = \lambda$, we may fix a bijection $h_\sigma : \lambda \to B_\sigma^0$ such that $$h_\sigma[\lambda {\setminus}Y] = A_\sigma^0 \mbox{ and }
h_\sigma(y_i^j) = f^j(\sigma\smallfrown \<i\>)$$ for any $1 \le j
\le k$ and $i < \lambda$. Now, if $\tau \in \Pi$ with $\tau =
\sigma\smallfrown \<i\>$ then we set $B_\tau = h_\sigma[X_i]$.
We claim that the family $${{\mathcal A}}= \bigcup \{ {{\mathcal A}}_\sigma^j : 1
\le j \le k \mbox{ and } \sigma \in \Pi^j \} \cup \{B_\tau : \tau
\in \Pi\}$$ is 2-almost disjoint. Here the only problematic task is to show that $|B_\tau \cap B_{\tau'}| \le 1$ for two distinct members, $\tau = \<\xi_k, ...,\xi_1,i \>$ and $\tau'= \<\xi'_k,
...,\xi'_1,i' \>$, of $\Pi$. Let $\sigma = \<\xi_k, ...,\xi_1 \>$ and $\sigma' =\<\xi'_k, ...,\xi'_1 \>\,$. If $\sigma \ne \sigma'$ and $j\ge 1$ is maximal such that $\xi_j \ne \xi'_j$, then we have $B_\tau \cap B_{\tau'} {\subset}\{f^j(\tau)\} \cap \{f^j(\tau')\}$. If, however, $\sigma = \sigma'$ then $i \ne i'$ and $$B_\tau \cap
B_{\tau'} = h_\sigma[X_i] \cap h_\sigma[X_{i'}] = h_\sigma[X_i
\cap X_{i'}]\,,$$ hence we are done because ${{\mathcal X}}$ is 2-almost disjoint.
Thus ${{\mathcal A}}$ is a $(\lambda_k\,,\kappa,2)$-system and so, by our assumption, it has a conflict free coloring $d : \cup {{\mathcal A}}\to
x$. Our choice of ${{\mathcal A}}_\varrho^j$ implies that, for every $j$ with $1 \le j \le k$ and $\varrho \in \Pi^j$, we have $d[A_\varrho^j] = x$. It follows that there is a function $f \in
\mathbb{F}^k_\emptyset$ which satisfies $d(f(\varrho)) = c(y_i^k)$ for all $\varrho \in \Pi^k$, where $i$ is the last ($0$) coordinate of $\varrho$, and there is an ordinal $\xi_k <
\lambda_k$ for which we have $f(\varrho) =
f^k(\<\xi_k\>\smallfrown \varrho)$ for all $\varrho \in \Pi^k$.
Repeating this procedure “downward", step by step, we arrive at a sequence $\sigma = \<\xi_k, ... ,\xi_1 \> \in \Pi^0 $ which, for any $j$ with $1 \le j \le k$ and $i < \lambda$, satisfies the equality $$d(f^j(\sigma\smallfrown \<i\>)) = c(y_i^j)\,.$$ But recall that we have $h_\sigma(y_i^j) = f^j(\sigma\smallfrown
\<i\>)$ by definition, hence the composition $d\circ h_\sigma$ is a conflict free coloring of ${{\mathcal X}}$ which extends $c$, completing our proof of $\,[\lambda,\kappa,2,k] \to x$.
\[cor:equi2\] For every infinite cardinal $\kappa\,$, ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_2(\kappa),\kappa,2) = 3$ implies $[\kappa,\kappa,2,2]
\to 3$. Consequently, if $\,\beth_2(\kappa) = \kappa^{++}$, in particular under GCH, $[\kappa,\kappa,2,2] \to 3$ is equivalent to ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa^{++},\kappa,2) = 3$.
Our next aim is to show that ${{\chi}_{\rm CF}({{\omega}_3},\omega,2)} = 4$ under GCH. This will follow from the ZFC result ${{\chi}_{\rm CF}({\beth_3},\omega,2)} \ge 4$ that, in turn, follows from the negative relation ${[\omega,\omega,2,3]\not\to 3}$. To prove the latter, we need the following technical lemma.
\[lm:fini\] There are a finite $2$-almost disjoint family ${{\mathcal A}}$ of countably infinite sets, a finite set $C$, and a function $c:C\to 3$ such that
(1) $|A \cap C|= 4$ for each $A\in{{\mathcal A}}$,
(2) the sets $\{A\cap C:A\in{{\mathcal A}}\}$ are pairwise disjoint,
(3) $c$ can not be extended to a conflict free coloring of ${{\mathcal A}}$ with $3$ colors.
For $\{a, b\} \in [\mathbb R]^2$ let $L_{a,b}$ be the line in $\mathbb R^2$ which contains $a$ and $b$ and put $E_{a,b}=L_{a,b}\cap \mathbb Z^2$. We then put $${{\mathcal A}}=\{E_{a,b}: \{a,b\}\in \br 4\times 6;2;\}.$$ Let $C{\subset}\cup {{\mathcal A}}{\setminus}(4\times 6)$ be any finite set that satisfies (1) and (2).
Write $V_i=E_{\<i,0\>,\<i,1\>}$ for $i<4$ and $H_j=E_{\<0,j\>,\<1,j\>}$ for $j<6$. Define $c:C\to 3$ in such a way that if $C_i=c^{-1}\{i\}$ for $i<3$, then we have
(a) for each $i<4$ $$|C_0\cap V_i|= |C_1\cap V_i|=2$$
(b) for each $j<6$ $$|C_1\cap H_j|= |C_2\cap H_j|=2$$
(c) for each $i\ne i'<4$ and $j\ne j' <6$ $$|C_0\cap E_{\<i,j\>, \<i',j'\>}|= |C_2\cap E_{\<i,j\>,
\<i',j'\>}|=2$$
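Such a $c$ exists: by (1) and (2), the sets $A\cap C$ for $A\in{{\mathcal A}}$ are pairwise disjoint $4$-element sets, and each of the patterns (a), (b), (c) prescribes two colors exactly twice, thus using up precisely $2+2=4$ points of the corresponding set $A\cap C$, so the requirements can be met independently of each other.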
Assume that $f : \cup {{\mathcal A}}\to 3$ is a conflict free coloring of ${{\mathcal A}}$ with $c\subset f$. Then, by (a), for each $i<4$ there is exactly one $x_i\in V_i$ such that $f(x_i)=2$. Since $6-4=2$ there are $j\ne j'<6$ such that $$\{x_i:i<4\}\cap (H_j\cup H_{j'})={\emptyset}\,.$$ By (b), there are unique $y_j\in H_j$ and $y_{j'}\in H_{j'}$, respectively, such that $f(y_j)=f(y_{j'})=0$. Since $4-2=2$ there are $i\ne i'<4$ such that $$\{y_j, y_{j'}\}\cap (V_i\cup V_{i'})={\emptyset}\,.$$
Let $a=\<i,j\>$ and $b=\<i',j'\>$. Then $a\ne x_i$ implies $f(a)\ne 2$ and, similarly, $a\ne y_j$ implies $f(a)\ne 0$, hence $f(a)=1$. Similarly, we have $f(b)=1$. But, as $a,b\in E_{a,b}$ and (c) holds, we have $|E_{a,b}\cap f^{-1}\{\nu\}| > 1$ for each $\nu < 3$, which is a contradiction.
\[tm:ooh\] ${[\omega,\omega,2,3]\not\to 3}$.
We shall construct a $2$-almost disjoint family ${{\mathcal H}}{\subset}[H]^\omega$ for a countable set $H$, a subset $K{\subset}H$, and a function $d:K\to 3$ such that
(1) $|E\cap K|\le 3$ for each $E\in{{\mathcal H}}$,
(2) $d$ can not be extended to a conflict free coloring of ${{\mathcal H}}$ with $3$ colors.
We first choose, using ${{\chi}_{\rm CF}(\omega,\omega,2)}=3$, a $2$-almost disjoint family ${{\mathcal B}}{\subset}\br {\omega};{\omega};$ such that $$\begin{gathered}
\label{eq:sok_szin}
\text{if $f:{\omega}\to 3$ is any conflict-free coloring of
${{\mathcal B}}$}\\\text{then $f^{-1}\{i\}$ is infinite for each $i<3$.}\end{gathered}$$ (Let $\{A_n : n < \omega\}$ be a partition of $\omega$ into infinite sets and ${{\mathcal B}}_n {\subset}[A_n]^\omega$ be a copy of a family witnessing ${{\chi}_{\rm CF}(\omega,\omega,2)}=3$. Then ${{\mathcal B}}=
\cup_{n<\omega} {{\mathcal B}}_n$ clearly satisfies (\[eq:sok\_szin\]).)
Fix a countable set $X$, a finite family ${{\mathcal A}}{\subset}\br
X;{\omega};$, a finite set $C{\subset}X$, and a function $c:C\to 3$ as in lemma \[lm:fini\]. Let $D{\subset}C$ be such that $|A\cap D|=1$ for each $A\in {{\mathcal A}}$.
Let ${{\mathcal G}}$ denote the collection of all injective functions $g:D\stackrel{1-1}{\longrightarrow}{\omega}$ and $\{H_g:g\in
{{\mathcal G}}\}$ be pairwise disjoint countably infinite sets with $H_g\cap
{\omega}={\emptyset}$. For each $g\in {{\mathcal G}}\,$ fix a bijection $h'_g:(X{\setminus}D)\to H_g$ and put $h_g=g\cup h'_g$.
Let us then define $$\begin{gathered}
H=\omega\cup \bigcup \{H_g:g\in {{\mathcal G}}\},\\
{{\mathcal H}}={{\mathcal B}}\cup\{h_g[A]:A\in {{\mathcal A}},g\in {{\mathcal G}}\},\\
K=\cup\{h_g[C{\setminus}D]:g\in {{\mathcal G}}\},\end{gathered}$$ and, finally, define $d:K\to 3$ as follows: $$\label{eq:K}
\text{if $k=h_g(x)$ for some $x\in C{\setminus}D$ and $g \in {{\mathcal G}}$,
then $d(k)=c(x)$.}$$ We claim that $H$, ${{\mathcal H}}$, $K$, and $d$ are as required, that is, they satisfy (1) and (2). Of course, only (2) needs to be checked.
Assume, on the contrary, that $f:H\to 3$ is a conflict-free coloring for ${{\mathcal H}}$ with $d{\subset}f$. Using (\[eq:sok\_szin\]) we may find an injective function $g:D\to {\omega}$ such that for each $x \in D$ we have $$\label{eq:fgc}
f(g(x))=c(x).$$ Let us now define $F:X\to 3$ by $F(x)=f(h_g(x))$. Since $f$ is a conflict free coloring of $\{h_g[A]:A\in {{\mathcal A}}\}{\subset}{{\mathcal H}}$ and $h_g$ is injective, $F$ is a conflict free coloring of ${{\mathcal A}}$.
If $x\in D$ then $F(x)=f(h_g(x))=f(g(x))=c(x)$ by (\[eq:fgc\]) and if $x\in C{\setminus}D$ then $F(x)=f(h_g(x))=d(h_g(x))=c(x)$ by (\[eq:K\]), hence $c{\subset}F$. But this contradicts the choice of ${{\mathcal A}}$, which proves that $H$, $K$, ${{\mathcal H}}$, and $d$ really satisfy conditions (1) and (2).
\[cor:beth3\] ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\beth_3,\omega,2) \ge 4$. Consequently, if $\beth_3 =
\omega_3$ then ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_3,\omega,2) = 4$.
Is $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_2,\omega,2) = 4\,$ provable under GCH?
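Note that, by corollary \[cor:equi2\] and the bounds displayed above, under GCH we have ${{\chi}_{\rm CF}({{\omega}_2},\omega,2)}=4$ if and only if ${[\omega,\omega,2,2]\not\to 3}$ holds, so the above question asks whether GCH implies this negative relation.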
Part III. The case $\lambda \ge \kappa \ge \omega = \mu$
Consistent upper bounds for ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega)$
==============================================================================================
We start by pointing out that ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega)$ is always infinite. This follows immediately from the next proposition because ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega)$ is increasing in its first parameter.
\[pr:kko\] For every infinite cardinal $\kappa$ we have $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\kappa,\kappa,\omega) \ge \omega.$$
By theorem \[tm:omegaw\], for every $d \in \omega {\setminus}2$ there is a $(\kappa,\kappa,d)$-system ${{\mathcal A}}_d$ such that $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}_d) \ge w{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}_d) = {\left\lfloor {\frac{d}2} \right\rfloor+1}.$$ But clearly if ${{\mathcal A}}$ is the union of $\{{{\mathcal A}}_d : d \in \omega
{\setminus}2 \}$ (taken on disjoint underlying sets) then ${{\mathcal A}}$ is a $(\kappa,\kappa,\omega)$-system with ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal A}}) \ge \omega$.
The main aim of this section is to show that we have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega) \le \omega_2$ for $\lambda \ge \kappa
\ge \omega_2$, provided that $\mu^\omega = \mu^+$ holds for every $\mu < \lambda$ with $\cf(\mu) = \omega$. Moreover, if in addition $\square_\mu$ also holds for any $\mu$ with $\cf(\mu) = \omega <
\mu < \lambda$, then we even have ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega) \le
\omega_1$ whenever $\lambda \ge \kappa \ge \omega_1$. The first part will follow from a general stepping up result, whose formulation needs the following definition.
Assume that $\omega \le {\rho} \le {\lambda}$ are cardinals, ${{\mathcal A}}$ is any set-system, and $\vec N =
\<N_{\alpha}:{\alpha}<{\lambda}\>$ is a ${\lambda}$-chain of elementary submodels. We say that $\vec N$ [*${\rho}$-cuts*]{} ${{\mathcal A}}$ iff $$\label{eq:dec}
\text{${{\mathcal A}}\in N_1$, moreover ${\alpha}<{\lambda}$ and $A\in
{{\mathcal A}}{\setminus}N_{\alpha}$ imply $|A\cap N_{\alpha}|<{\rho}$.}$$
\[tm:gen\_step\_up\] Let ${\omega}\le {\mu}\le {\rho}\le {\kappa}\le {\lambda}$ be cardinals and assume that every $(\lambda,\kappa,\mu)$-system is $\rho$-cut by a ${\lambda}$-chain of elementary submodels. Assume also that
(i) if ${\kappa}={\lambda}$ then there is ${\kappa}^*<{\kappa}$ such that $[\kappa',\kappa',\mu] \Rightarrow \rho$ whenever ${\kappa}^*\le{\kappa'}<{\kappa}$ (note that in this case $\rho
\le \kappa^* < \kappa = \lambda$),
(ii) if ${\kappa}<{\lambda}$ then $[\lambda',\kappa',\mu] \Rightarrow \rho$ whenever ${\kappa}\le
\kappa' \le {\lambda}'<{\lambda}$.
Then $[\lambda,\kappa,\mu] \Rightarrow \rho$.
Let ${{\mathcal A}}{\subset}\br {\lambda};{\kappa};$ be a $(\lambda,\kappa,\mu)$-system and let $\vec
N=\<N_{\alpha}:{\alpha}<{\lambda}\>$ be a ${\lambda}$-chain of elementary submodels which ${\rho}$-cuts ${{\mathcal A}}$. We can assume that $\max({\kappa}^*+1,{\rho}+1){\subset}N_1$ in case ${\kappa}={\lambda}$ and ${\kappa}+1{\subset}N_1$ in case ${\kappa}<{\lambda}$. For each ${\alpha}<{\lambda}$ let $${{\mathcal A}}_{\alpha}={{\mathcal A}}\cap (N_{{\alpha}+1}{\setminus}N_{\alpha})\,,$$ then $\<{{\mathcal A}}_{\alpha}:{\alpha}<{\lambda}\>$ is a partition of ${{\mathcal A}}$ and $|{{\mathcal A}}_{\alpha}|\le |N_{{\alpha}+1}|<{\lambda}$. We let $$\label{eq:ya}
Y_{\alpha}={\lambda}\cap N_{{\alpha}+1}{\setminus}\bigl(N_{\alpha} \cup
\bigcup({{\mathcal A}}\cap N_{\alpha})\bigr)$$ and $${{\mathcal A}}'_{\alpha}=\{A\cap Y_{\alpha}:A\in {{\mathcal A}}_{\alpha}\}.$$ If $A\in {{\mathcal A}}_{\alpha}$ then $|A\cap N_{\alpha}|<{\rho} \le
\kappa$, hence $$\label{eq:ayr}
| A\cap \cup\{Y_{\beta}:{\beta}<{\alpha}\}|<{\rho},$$ and, by definition, $$\label{eq:aydis}
A \cap\cup\{Y_{\beta}:{\beta}>{\alpha}\}={\emptyset}\,.$$
Assume first that ${\kappa}={\lambda}$. Then $A \in {{\mathcal A}}_\alpha$ implies $$|A\cap \bigcup ({{\mathcal A}}\cap N_{\alpha})|\le
{\mu}\cdot|N_{\alpha}|<{\kappa}\,,$$ hence, by elementarity, $|A
\cap Y_\alpha|=|Y_{\alpha}|=|N_{{\alpha}+1}| \ge \kappa^*$. Consequently, ${{\mathcal A}}'_{\alpha}{\subset}\br Y_{\alpha};|Y_{\alpha}|;$ is a $(|Y_{\alpha}|,|Y_{\alpha}|,\mu)$-system and thus, by (i), there is a function $c_{\alpha}:Y_{\alpha}\to {\rho}$ such that for each $A\in {{\mathcal A}}_{\alpha}$ we have $$\label{eq:ca1}
|{\rho}{\setminus}I_{c_\alpha}(A \cap Y_\alpha)|<{\rho}.$$
Assume now that ${\kappa}<{\lambda}$. Then $\cup({{\mathcal A}}\cap
N_{\alpha}){\subset}N_{\alpha}$, and so $$A\cap Y_{\alpha} = A {\setminus}A \cap N_\alpha \in [Y_\alpha]^\kappa\,.$$ But ${\kappa}\le
|Y_{\alpha}|=|N_{{\alpha}+1}|<{\lambda}$ and ${{\mathcal A}}'_{\alpha}{\subset}\br Y_{\alpha};{\kappa};$ is ${\mu}$-almost disjoint, so by (ii) there is $c_{\alpha}:Y_{\alpha}\to {\rho}$ such that for each $A\in {{\mathcal A}}_{\alpha}$ we have $$\label{eq:ca2}
|{\rho}{\setminus}I_{c_\alpha}(A \cap Y_\alpha)|<{\rho}.$$
Let us put (in both cases) $c=\cup\{c_{\alpha}:{\alpha}<{\lambda}\}$, then $c \in
{{\mathcal F}}({\lambda},{\rho})$. For $A\in {{\mathcal A}}$ pick ${\alpha}<{\lambda}$ with $A\in {{\mathcal A}}_{\alpha}$, then (\[eq:aydis\]) implies $$I_c(A)\supset I_{c_{\alpha}}(A\cap Y_{\alpha}){\setminus}c[A\cap
\cup\{Y_{\beta}: {\beta}<{\alpha}\}]\,.$$ But $|A\cap
\cup\{Y_{\beta}: {\beta}<{\alpha}\} |<{\rho}$ by (\[eq:ayr\]), hence either (\[eq:ca1\]) or (\[eq:ca2\]) implies $|\rho {\setminus}I_c(A)| < {\rho}$. Finally, if ${\operatorname{dom}}(c) \ne \lambda$ then we may extend $c$ to a full function $d : \lambda \to \rho+1$ by mapping every member of $\lambda {\setminus}{\operatorname{dom}}(c)$ to $\rho$, and then we have $|\rho {\setminus}I_{d}(A)| < \rho$, which completes the proof of $[\lambda,\kappa,\mu] \Rightarrow \rho$.
Now, using the trivial relation $[\rho,\rho,\mu] \Rightarrow \rho$ and theorem \[tm:gen\_step\_up\], the following result may be established by a straightforward transfinite induction. The details are left to the reader.
\[cor:gen\_step\_up\] Let ${\omega}\le {\mu}\le {\rho} < {\lambda}$ be cardinals. If every $(\lambda',\kappa,\mu)$-system is $\rho$-cut by a ${\lambda}'$-chain of elementary submodels whenever $\rho <
\lambda' \le \lambda$ and $\rho \le \kappa \le \lambda'$ then $\,{[\lambda,\kappa,\mu]\Rightarrow\rho }$.
The following easy lemma will be used in the proof of the first result that was promised in the introductory paragraph of this section.
\[lm:gen\_small\_close\] Assume that $\lambda \ge \omega_2$ and ${\mu}^{\omega}={\mu}^+$ holds for each ${\mu}<{\lambda}$ with $\cf({\mu})={\omega}$. If ${{\mathcal A}}$ is an $\omega$-almost disjoint set system and $\,X$ is any set with $\, |X| < \lambda$, then $$\big|\{A\in {{\mathcal A}}:|X\cap A| > \omega \}\big| \le |X|.$$
It obviously follows from our assumption that if $\mu < \lambda$ and $\cf(\mu) > \omega$ then $\mu^\omega = \mu$. Thus, if $\cf(|X|) > \omega$ then, as ${{\mathcal A}}$ is $\omega$-almost disjoint, we even have $$|\{A\in {{\mathcal A}}:|X\cap A|\ge \omega\}|\le
|X|^{\omega}= |X|\,.$$ If, however, $\cf(|X|)={\omega} < |X|$ then we may write $X=\cup\{X_n:n<{\omega}\}$ with $|X_n|<|X|$ for each $n < \omega$. But then we have $$|\{A\in {{\mathcal A}}:|X\cap A|\ge {{{\omega}_1}}\}|=
|\{A\in {{\mathcal A}}:\exists n\, |X_n\cap A|\ge {{{\omega}_1}}\}|,$$ and so $$\begin{gathered}
\notag
|\{A\in {{\mathcal A}}: |X\cap A|\ge {{{\omega}_1}}\}|\le \sum_{n<{\omega}}|\{A\in
{{\mathcal A}}:
|X_n\cap A|\ge {\omega}\}|\le\\
\le \sum_{n<{\omega}}|X_n|^{\omega} = |X|.\end{gathered}$$
\[tm:above\_oot\] Assume that $\lambda \ge \omega_2$ and ${\mu}^{\omega}={\mu}^+$ holds for each ${\mu}<{\lambda}$ with $\cf({\mu})={\omega}$. Then ${[\lambda,\kappa,\omega]\Rightarrow{{\omega}_2}}$ whenever ${{\omega}_2}\le
{\kappa}\le {\lambda}$.
By corollary \[cor:gen\_step\_up\], it clearly suffices to show that if $\omega_2 < \lambda' \le \lambda$ and ${{\mathcal A}}$ is any ${\omega}$-almost disjoint set-system of cardinality ${\lambda}'$, then ${{\mathcal A}}$ is $\omega_2$-cut by a ${\lambda}'$-chain of elementary submodels.
To see this, let $\<M_{\alpha}:{\alpha}<{\lambda}'\>$ be any ${\lambda}'$-chain of elementary submodels satisfying ${{\omega}_2}\cup
\{{{\mathcal A}}\}{\subset}M_1$ and for every ${\alpha}<{\lambda}'$ write $N_{\alpha}=M_{{\omega}{\alpha}}$. We claim that $\<N_{\alpha}:{\alpha}<{\lambda}'\>$, also a ${\lambda}'$-chain of elementary submodels, ${{\omega}_2}$-cuts ${{\mathcal A}}$.
Indeed, assume that $\alpha < \lambda'$ and $A\in {{\mathcal A}}$ with $|A\cap N_{\alpha}| \ge {{\omega}_2}$. Since ${\omega}{\alpha}$ is a limit ordinal, there is ${\beta}<{\omega}{\alpha}$ such that $|A\cap M_{\beta}|\ge{{{\omega}_1}}$. But then ${{\mathcal A}}' = \{A'\in
{{\mathcal A}}:|A'\cap M_{\beta}|\ge {{{\omega}_1}}\}\in M_{{\beta}+1}$ and $|{{\mathcal A}}'|\le |M_{\beta}|$ by lemma \[lm:gen\_small\_close\], hence we have $A \in {{\mathcal A}}' {\subset}M_{{\beta}+1}{\subset}M_{{\omega}{\alpha}}=N_{\alpha}$.
A very short alternative proof of theorem \[tm:above\_oot\] may be obtained as follows. In [@EH3 Theorem 6] Erdős and Hajnal proved that if ${\mu}^{\omega}={\mu}^+$ holds for each ${\mu}<{\lambda}$ with $\cf({\mu})={\omega}$ then $$\label{eh2x}
\text{${{\mathbf M}(\lambda,\kappa,\omega)}\to {{\mathbf B}({\omega}_2)}$ whenever
${\omega}_1\le {\kappa}\le {\lambda}$}.$$ Moreover, under the same assumption, Komjáth proved in [@KO2 Theorem 5] that $$\label{ko2}
\text{${{\mathbf M}(\lambda,{\omega}_2,\omega)}\to {\bf ED}\,$ for all
$\lambda \ge \omega_2 $.}$$ Applying proposition \[lm:reduction\] with $\mu = \omega$ and $\tau = \omega_2$, we may conclude that (\[eh2x\]) and (\[ko2\]) together imply $[\lambda,\kappa,\omega] \Rightarrow
\omega_2$ whenever $\lambda \ge \kappa \ge \omega_2$.
Actually, the above proof yields the stronger conclusion $$[\lambda,\kappa,\omega] \Rightarrow \omega_1 \mbox{ whenever }\,\lambda
\ge \kappa \ge \omega_1\,,$$ provided that in (\[ko2\]) we may replace $\omega_2$ by $\omega_1$. But by [@KO2 Theorem 5(c)], this can be done if, in addition to ${\mu}^{\omega}={\mu}^+$ for all ${\mu}<{\lambda}$ with $\cf({\mu})={\omega}$, we also assume $\Box_\mu$ for each ${\mu}<{\lambda}$ with $\cf({\mu})={\omega} <
\mu$. (In fact, as is shown in [@HJS1], the assumption of a very weak version of $\Box_\mu$ suffices for this.) Thus we get the following result.
\[tm:above\_oo\] Let ${\lambda}$ be an uncountable cardinal and assume that
(i) ${\mu}^{\omega}={\mu}^+$ for each cardinal ${\mu}<{\lambda}$ with $\cf({\mu})={\omega}$,
(ii) $\Box_\mu$ holds for each singular cardinal ${\mu}<{\lambda}$ with $\cf({\mu})={\omega}$.
Then ${[\lambda,\kappa,\omega]\Rightarrow{{{\omega}_1}}}$ holds whenever ${{{\omega}_1}}\le
{\kappa}\le {\lambda}$.
As condition (ii) of theorem \[tm:above\_oo\] is only relevant for $\lambda > \aleph_\omega\,$, we immediately obtain the following result.
\[cor:box\] CH and $\,\omega_1 \le \kappa \le \lambda \le \aleph_\omega\,$ imply $\,{[\lambda,\kappa,\omega]\Rightarrow{{{\omega}_1}}}$.
Consistent lower bounds for ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega)$
================================================================================================
In the previous section we gave (consistent) universal upper bounds for ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega)$ when $\kappa \ge
\omega_2$ and $\kappa \ge \omega_1$, respectively. That no such universal upper bound can be given for ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\omega,\omega)$ follows from the fact that if $\,\clubsuit(\lambda)$ holds, that is, for each $\alpha \in
E_\omega^\lambda$ there is an $\omega$-type subset $A_\alpha$ cofinal in $\alpha$ such that for every $X \in [\lambda]^\lambda$ we have $A_\alpha {\subset}X$ for some $\alpha \in E_\omega^\lambda$, then clearly $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\omega,\omega) \ge
\chi(\lambda,\omega,\omega) \ge \cf(\lambda)\,.$$ In particular, if $\lambda$ is also regular then we have $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\omega,\omega) = \chi(\lambda,\omega,\omega) =
\lambda\,.$$
In order to get some lower bounds for ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega)$ with $\kappa > \omega$, and thus to show that the results of the previous section are sharp, we shall make use of a result in [@HJS1]. First we give some notation.
If $\lambda > \omega_1$ is a regular cardinal and $S {\subset}E_{\omega_1}^{\lambda}$ is stationary then we denote by ${\bigstar(S)}$ the following statement:
- [*there is an ${\omega}$-almost disjoint family $\{A_{\alpha}:{\alpha}\in S\}$ such that $\,A_{\alpha}\,$ is a cofinal subset of $\,\alpha$ of order type $\omega_1$ for each ${\alpha}\in S$.*]{}
It is an immediate consequence of Fodor’s pressing down theorem that such an $\{A_{\alpha}:{\alpha}\in S\}$ is not essentially disjoint, hence if we assume condition (i) of theorem \[tm:above\_oo\] then (very weak) $\Box_\mu$ must fail at some singular $\mu < \lambda$ with $\cf(\mu) = \omega$, in particular $\lambda > \aleph_\omega$. This implies that if ${\bigstar(S)}$ holds then we must have some large cardinals, and in fact it was shown in [@HJS1] that the existence of a supercompact cardinal implies the consistency of GCH with $\bigstar(S)$ for some $S
{\subset}E^{\aleph_{\omega+1}}_{\omega_1}$.
For any set $S {\subset}\lambda$ we shall denote by $\clubsuit(S)$ the statement that there is a sequence $\{B_{\alpha}:{\alpha}\in
S\}$ with $\cup B_{\alpha}={\alpha}$ for each ${\alpha}\in S$ such that for every $X \in [\lambda]^\lambda$ we have $B_\alpha {\subset}X$ for some $\alpha \in S$. Then $\{B_{\alpha}:{\alpha}\in S\}$ is called a $\clubsuit(S)$-sequence. Clearly, every $\diamondsuit(S)$-sequence is a $\clubsuit(S)$-sequence.
\[tm:club\] Assume that ${\lambda} > 2^\omega$ is a regular cardinal and we have both ${\bigstar(S)}$ and $\diamondsuit(S)$ for a stationary set $S {\subset}E_{\omega_1}^{\lambda}$. Then
\(1) there is an $\omega$-[*almost disjoint*]{} $\clubsuit(S^*)$-sequence for some $S^*{\subset}S$, hence $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\omega_1,\omega) = \chi(\lambda,\omega_1,\omega) =
\lambda\,;$$
\(2) for every cardinal $\kappa$ with $\omega_2 \le
\kappa < \lambda$ we have $\omega_2 \le
{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\kappa,\omega)$.
\(1) Let us fix an ${\omega}$-almost disjoint family $\{A_{\alpha}:{\alpha}\in S\}$ witnessing ${\bigstar(S)}$ and a $\diamondsuit(S)$-sequence $\{B_{\alpha}:{\alpha}\in S\}$. Let $$B_\alpha = \{b(\alpha,\gamma) : \gamma < {\operatorname{tp}}(B_\alpha)\}$$ be the increasing enumeration of $B_\alpha$.
Next, by transfinite recursion we define sets $\{E_{\alpha}:{\alpha}\in S\}$ as follows. Assume that $\{E_{\beta}:{\beta}\in {\alpha}\cap S\}$ has been constructed. If $\,{\operatorname{tp}}(B_{\alpha}) < {\alpha}$ then let $E_\alpha = \emptyset$. Otherwise, if ${\operatorname{tp}}(B_{\alpha})={\alpha}$, set $$E_{\alpha}=\{b({\alpha},\gamma): \gamma \in A_\alpha \}\,,$$ clearly then $E_\alpha \in [B_\alpha]^{\omega_1}$ is cofinal in $\alpha$.
Let us next define $$S^*=\{{\alpha}\in S : |E_{\alpha}|={{{\omega}_1}}\land \,\forall {\beta}\in
S\cap {\alpha}\,\,(\,|E_{\beta}\cap E_{\alpha}|<{\omega})\,\},$$ and $${{\mathcal E}}=\{E_{\alpha}:{\alpha}\in S^*\}.$$ Then ${{\mathcal E}}{\subset}\br {\lambda};{{{\omega}_1}};$ is ${\omega}$-almost disjoint by definition and we claim that ${{\mathcal E}}$ is a $\clubsuit(S^*)$-sequence.
Indeed, let $B\in \br \lambda;\lambda;$ and consider the club set $$C=\{{\xi}<{\lambda}:{\operatorname{tp}}(B\cap {\xi})={\xi}\}$$ and the stationary set $$\hat S=\{{\alpha}\in S\cap C: B\cap {\alpha} =B_{\alpha}\}.$$ Now, if $\alpha \in \hat S \cap S^*$ then $E_\alpha \subset
B_\alpha = B \cap \alpha {\subset}B$, hence it suffices to show that $\hat S \cap S^* \ne \emptyset$.
Assume, on the contrary, that $\hat S \cap S^* = \emptyset$. Then for each ${\alpha}\in \hat S$, as ${\operatorname{tp}}(B_{\alpha})={\alpha}$, there is a $ {\beta}<{\alpha}$ such that $E_{\alpha}\cap
E_{\beta}$ is infinite. By Fodor’s theorem and $2^\omega <
\lambda\,$, there are ${\beta}<{\alpha}<{\alpha'}$ and $X\in \br
E_{\beta};{\omega};$ such that $\alpha,\alpha' \in \hat S$ and $X{\subset}E_{\alpha}\cap E_{\alpha'}$. But $B_{\alpha}= \alpha \cap
B_{\alpha'}\,$, hence $b\big({\alpha},\gamma\big) =
b\big({\alpha'},\gamma\big)$ for all $\gamma < \alpha$ and $b\big({\alpha'},\gamma\big) \notin B_\alpha$ for $\gamma \ge
\alpha$, consequently $x \in E_\alpha \cap E_{\alpha'}$ implies that $x = b(\alpha,\,\gamma)$ for some $\gamma \in A_{\alpha}\cap
A_{\alpha'}$. This, however, contradicts $|A_{\alpha}\cap
A_{\alpha'}| < \omega$, proving that $\hat S \cap S^* \ne
\emptyset$ and so ${{\mathcal E}}$ is a $\clubsuit(S^*)$-sequence.
But then ${{\mathcal E}}$ is a $(\lambda,\omega_1,\omega)$-system for which $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({{\mathcal E}}) = \chi({{\mathcal E}}) = \lambda\,$ holds trivially, completing the proof of part (1).
\(2) Having fixed $\kappa$ with $\omega_1 < \kappa < \lambda$, we shall construct a $(\lambda,\kappa,{\omega})$-system ${{\mathcal F}}{\subset}\br {\lambda};\kappa;$ such that for every function $h:{\lambda}\to {{{\omega}_1}}$ there is $F\in {{\mathcal F}}$ for which $$\label{eq:h}
\text{$\nu \in h[F]\,$ implies $\,|F \cap h^{-1}\{{\nu}\}| \ge
{{{\omega}_1}}$}.$$
Consider the club set $K = \{\kappa \cdot \xi : \xi < \lambda \}$ and, for every $\xi < \lambda$, let $K_\xi$ denote the (half-closed) interval $\big[\kappa \cdot \xi\,, \kappa \cdot
(\xi+1) \big)$. We can assume, without any loss of generality, that $S {\subset}K$.
For every $\alpha \in S$ we also fix a partition of $A_\alpha$ into ${{{\omega}_1}}$-many disjoint uncountable pieces: $A_\alpha =
\cup\{A_{\alpha,\nu}:{\nu}<{{{\omega}_1}}\}$. Finally, this time, we use $\diamondsuit(S)$ by choosing a $\diamondsuit(S)$-sequence $\{h_{\alpha}:{\alpha}\in S\}$ for the functions $h : {\lambda}
\to {{{\omega}_1}}$.
Next, by transfinite recursion define the sets $\{E_{\alpha}:{\alpha}\in S\}$ as follows. Assume that $\alpha \in
S$, moreover $\{E_{\beta}:{\beta}\in {\alpha}\cap S\}$ has been constructed. Let $$D_{\alpha}=\{{\nu}<{{{\omega}_1}}: {\operatorname{tp}}(h_{\alpha}^{-1}\{{\nu}\})={\alpha}\},$$ for every $\nu \in D_\alpha$ let $\,\{b({\alpha},{\nu},{\eta}):
{\eta}<{\alpha}\}$ be the increasing enumeration of $h_{\alpha}^{-1}\{{\nu}\}\,$, and put $$E'_{\alpha}=\{b({\alpha},{\nu},\gamma):
{\nu}\in D_{\alpha}, \gamma \in A_{\alpha,\nu}\}.$$ Of course, if $D_\alpha = \emptyset$ then we have $E'_\alpha =
\emptyset$ as well, and in this case we put $E_\alpha =
\emptyset$. If, however, $D_\alpha \ne \emptyset$ then for every $\nu \in D_\alpha$ the set $B_{\alpha,\nu} =
\{b({\alpha},{\nu},\gamma): \gamma \in A_{\alpha,\nu}\}$ is cofinal in $\alpha$. Thus, using that $\alpha = \kappa \cdot \xi$ for some $\xi$ with $\cf(\xi) = \omega_1 < \kappa$, we can find $E_\alpha {\subset}E'_\alpha$ such that (i) $|E_\alpha \cap
B_{\alpha,\nu}| = \omega_1$ for each $\nu \in D_\alpha$, and (ii) $|E_\alpha \cap K_\zeta| \le 1$ for every $\zeta < \lambda$.
Next, similarly as in the proof of (1), we let $$S^*=\{{\alpha}\in S : |E_{\alpha}|={{{\omega}_1}}\land \,\forall {\beta}\in
S\cap {\alpha}\,\,(\,|E_{\beta}\cap E_{\alpha}|<{\omega})\,\},$$ and then for any $\alpha = \kappa \cdot \xi \in S^*$ we define $$F_{\alpha}=E_{\alpha}\cup K_\xi = E_{\alpha}\cup
[\alpha,\,\alpha+\kappa)\,, \mbox{ and
}{{\mathcal F}}=\{F_{\alpha}:{\alpha}\in S^*\}.$$ Clearly, ${{\mathcal F}}{\subset}\br {\lambda};\kappa;$ and ${{\mathcal F}}$ is ${\omega}$-almost disjoint because, by (ii), we have $|F_{\alpha}\cap F_{\beta}|\le |E_{\alpha}\cap E_{\beta}|+1$ for any $\{\alpha,\beta \} \in [S^*]^2$.
Now, consider any map $h:{\lambda}\to {{{\omega}_1}}$ and let $$D=\{{\nu}<{{{\omega}_1}}:|h^{-1}\{{\nu}\}|={\lambda}\}\,;$$ then $D\ne {\emptyset}$. For every ${\nu}\in D$ put $$C_{\nu}=\{{\xi}<{\lambda}:{\operatorname{tp}}({\xi}\cap(h^{-1}\{{\nu}\}))={\xi}\}$$ and $$C=\bigcap\{C_{\nu}:{\nu}\in D\},$$ then $C$ is a club set.
We have ${\eta}=\sup (h^{-1}[{{{\omega}_1}}{\setminus}D]) < \lambda\,$ because $\lambda > \omega_1$ is regular. Let $T=S \cap C {\setminus}({\eta}+1)$, then $h[T]{\subset}D$, $$\hat S=\{{\alpha}\in T: h\restriction {\alpha} =h_{\alpha}\}$$ is stationary, and if ${\alpha}\in \hat S$ then $ D_\alpha = D$.
Note that if $\alpha \in \hat S \cap S^*$ then $h[F_{\alpha}] =
h[E_{\alpha}]=D_{\alpha}=D$ and, by our construction, $$|h^{-1}\{{\nu}\}\cap E_{\alpha}| = \omega_1$$ for each ${\nu}\in D$, hence $F_{\alpha} \in {{\mathcal F}}$ witnesses (\[eq:h\]). Thus, to prove part (2), it again suffices to show that $\hat S
\cap S^* \ne \emptyset$.
Assume, on the contrary, that $\hat S \cap S^* = \emptyset$. Since $D_{\alpha}=D\ne {\emptyset}$ for every $\alpha \in \hat S {\subset}C$ this would imply that for every ${\alpha}\in \hat S$ there exists ${\beta}<{\alpha}$ for which $E_{\alpha}\cap E_{\beta}$ is infinite. But then, in the same way as in the proof of (1), we could conclude that there is a pair $\{{\alpha},{\alpha'}\} \in
[\hat S]^2$ with $\alpha < \alpha'$ such that $E_{\alpha}\cap
E_{\alpha'}$ is infinite. Using that $h_\alpha = h_{\alpha'}
\upharpoonright \alpha$ and hence $h_{\alpha}^{-1}\{{\nu}\}$ is an initial segment of $h_{\alpha'}^{-1}\{{\nu}\}$, this would imply that $A_{\alpha}\cap A_{\alpha'}$ is also infinite, a contradiction.
As we noted above, it was shown in [@HJS1] that the existence of a supercompact cardinal implies the consistency of GCH with $\bigstar(S)$ for some $S {\subset}E^{\aleph_{\omega+1}}_{\omega_1}$. This, together with theorem \[tm:club\], immediately yields the following result which shows that the results of the previous section are sharp, even under GCH.
\[cor:hsj\] If it is consistent that there is a supercompact cardinal then it is also consistent that GCH holds and
(1) $\chi(\aleph_{\omega+1},\omega_1,\omega) = {\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\aleph_{\omega+1},\omega_1,\omega ) = \aleph_{\omega+1}$,
(2) ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\aleph_{\omega+1},\omega_n,\omega) ={{\omega}_2}$ for $2\le n\le{\omega}$.
We conclude this section with a (somewhat surprising) result showing that consistently, e.g. under GCH, the relation $\chi(\lambda,\omega_1,\omega) \le \omega_1$, hence ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\omega_1,\omega) \le \omega_1$ as well, implies ${{\mathbf M}(\lambda,{\omega}_1,\omega)}\to {\bf ED}\,$.
\[tm:Rr\] Let ${\lambda}$ be an uncountable cardinal and assume that
(i) ${\mu}^{\omega}={\mu}^+$ for any ${\mu}<{\lambda}$ with $\cf({\mu})={\omega}$,
(ii) if $\omega < {\mu}<{\lambda}$ with $\cf({\mu})={\omega}$ then $\diamondsuit(S)$ holds for every stationary set $\,S {\subset}E_{\omega_1}^{\,\mu^+}\,$.
Then $\,\chi(\lambda,\omega_1,\omega) \le \omega_1$ implies ${{\mathbf M}(\lambda,{\omega}_1,\omega)}\to {\bf ED}\,$.
We shall prove this by induction on ${\lambda}$. It is trivially true for $\lambda = \omega_1$, hence we can assume ${\lambda}>{{{\omega}_1}}$ and that it holds for all ${\lambda}'<{\lambda}$.
We shall make use of the following obvious corollary of our assumption (i): If $X$ is any set with $|X| \le \lambda$ and ${{\mathcal F}}{\subset}[X]^{\omega_1}$ is $\omega$-almost disjoint then $|{{\mathcal F}}| \le |X|$. In fact, this follows from the following consequence of (i): $\,\mu^\omega = \mu$ if $\mu \le \lambda\,$ with $\,\cf(\mu)
> \omega$; when $\cf(|X|) = \omega$, one argues as in the proof of lemma \[lm:gen\_small\_close\].
Now, let ${{\mathcal A}}{\subset}\br {\lambda};{{{\omega}_1}};$ be an ${\omega}$-almost disjoint set-system; we have to show that ${{\mathcal A}}$ is essentially disjoint.
[**Case 1:**]{} [*$\lambda$ is a limit cardinal or ${\lambda} = {\mu}^+$ for some ${\mu}$ with $\cf({\mu})
> {\omega}$.*]{}
Condition (i) implies $\nu^\omega < \lambda$ for any $\nu <
\lambda$, hence we can find a ${\lambda}$-chain $\<M_{\alpha}:{\alpha}<{\lambda}\>$ of elementary submodels with ${{\mathcal A}}\in M_1$ and $\omega_1 {\subset}M_1$ such that $\br
M_{{\alpha}};{\omega};{\subset}M_{{\alpha}+1}$ for each ${\alpha}<{\lambda}$. Let us put $N_{\alpha}=M_{{\omega}\cdot{\alpha}}$ for ${\alpha}<{\lambda}$; then ${{\mathcal A}}$ is $\omega_1$-cut by the $\lambda$-chain $\<N_{\alpha}:{\alpha}<{\lambda}\>$.
Indeed, if $|A\cap N_{\alpha}|=|A\cap M_{{\omega}\cdot{\alpha}}| =
{{{\omega}_1}}\,$ then there is a ${\beta}<{\omega}\cdot{\alpha}$ such that $|A\cap M_{\beta}|\ge{\omega}$. Since ${{\mathcal A}}$ is ${\omega}$-almost disjoint and $\br M_{{\beta}};{\omega};{\subset}M_{{\beta}+1}$, we have $A\in M_{{\beta}+1} {\subset}M_{{\omega}{\alpha}}=N_{\alpha}\,$.
For ${\alpha}<{\lambda}$ let $${{\mathcal A}}_{\alpha}={{\mathcal A}}\cap (N_{{\alpha}+1}{\setminus}N_{\alpha})\,,$$ then $|{{\mathcal A}}_{\alpha}|\le |N_{{\alpha}+1}|<{\lambda}$. By this and the inductive hypothesis there is a function $F_{\alpha}:{{\mathcal A}}_{\alpha}\to \br {\lambda};{\omega};$ such that $A \cap N_\alpha {\subset}F_{\alpha}(A)$ for all $A \in {{\mathcal A}}_\alpha$ and the family $$\{A{\setminus}F_{\alpha}(A):A\in {\mathcal{A}}_{\alpha}\}$$ is disjoint. Now, it is easy to check that the function $$F =
\cup_{\alpha < \lambda}F_\alpha :{\mathcal{A}}\to \br
{\lambda};{\omega};$$ witnesses the essential disjointness of ${{\mathcal A}}$.
[**Case 2:**]{} [*${\lambda}= {\tau}^+$ for some singular cardinal ${\tau}$ with $\cf({\tau})={\omega}$.*]{}
For any $A\in {{\mathcal A}}$ let $$L(A)=\{{\alpha}<{\lambda}:\cf({\alpha})={{{\omega}_1}}\, \mbox{ and
}\,{\alpha}= \sup A\cap {\alpha}\}.$$ Clearly, then $1\le |L(A)|\le {{{\omega}_1}}$. We claim that the set $$S=\cup\{L(A):A\in {{\mathcal A}}\}$$ is non-stationary in ${\lambda}$.
Indeed, by definition, for each $A\in{{\mathcal A}}$ we may find a family of pairwise disjoint sets $$\{B(A,{\alpha}):{\alpha}\in L(A)\}{\subset}\br A;{{{\omega}_1}};$$ such that $\sup (B(A,{\alpha}))={\alpha}$ and ${\operatorname{tp}}(B(A,{\alpha}))={{{\omega}_1}}$. So, if $S$ were stationary then the $\omega$-almost disjoint family $${{\mathcal B}}=\{B(A,{\alpha}):A\in {{\mathcal A}}, {\alpha}\in L(A)\}$$ would witness ${\bigstar(S)}\,$. But then, by condition (ii) and part (1) of theorem \[tm:club\], we would have $\,\chi(\lambda,\omega_1,\omega) = \lambda > \omega_1$, a contradiction. So there is a club $E{\subset}{\lambda}$ such that $$E\cap \cup\{L(A):A\in {{\mathcal A}}\}={\emptyset}.$$
It follows from our introductory remark that if ${\delta}<{\lambda}$ then $$\label{eq:claim}
\big|\{A\in {{\mathcal A}}:|A\cap {\delta}| = {{{\omega}_1}}\}\big|\le
{\delta}<{\lambda}\,,$$ hence the following set $D$ is club in ${\lambda}$: $$D=\{{\zeta}<{\lambda}: \forall {\delta}<{\zeta}\ \forall A\in
{{\mathcal A}}\ (\text{ $|A\cap {\delta}| = {{{\omega}_1}}$ implies $A{\subset}{\zeta}$})\}.$$ Let $C=E\cap D$ and let $\{{\gamma}_{\nu}:{\nu}<{\lambda}\}$ be the increasing enumeration of $C\cup\{0\}$.
For any $A\in {{\mathcal A}}$ let $${\nu}_A=\min \{{\nu}<{\lambda}: |A\cap
{\gamma}_{\nu}| = {{{\omega}_1}}\}\,.$$ Then $C {\subset}E$ implies that ${\nu}_A$ can not be a limit ordinal, hence ${\nu}_A={\eta}_A+1$. This and the definition of $D$ imply $$|A\cap {\gamma}_{{\eta}_A}|\le {\omega}\, \text{ and } \,A{\subset}{\gamma}_{{\eta}_A+1}.$$
Let us put ${{\mathcal A}}_{\eta}=\{A\in {{\mathcal A}}:{\eta}_A={\eta}\}\,$ for any $\eta < \lambda\,$, then $|{{\mathcal A}}_\eta| \le \gamma_{\eta+1} <
\lambda$. By the inductive hypothesis, for each ${\eta}<{\lambda}$ there is a function $F_{\eta}:{{\mathcal A}}_{\eta}\to \br
{\lambda};{\omega};$ such that $A \cap \gamma_\eta {\subset}F_\eta(A)$ for any $A \in {{\mathcal A}}_\eta$ and the family $$\{A{\setminus}F_{\eta}(A):A\in {\mathcal{A}}_{\eta}\}$$ is disjoint. Now, it is again easy to check that the function $$F
= \cup_{\eta < \lambda}F_\eta :{\mathcal{A}}\to \br {\lambda};{\omega};$$ witnesses the essential disjointness of ${{\mathcal A}}$.
Let us remark that, by a recent result of Shelah from [@Sh], if $\omega = \cf(\mu) < {\mu}$ and $2^\mu = \mu^+$ then $\diamondsuit(S)$ holds for every stationary set $\,S {\subset}E_{\omega_1}^{\,\mu^+}\,$. Consequently, conditions (i) and (ii) of theorem \[tm:Rr\] together are equivalent with the following single statement: For all $\mu < \lambda$ with $\cf(\mu) = \omega$ we have $2^\mu = \mu^+$.
If ${{\mathcal A}}$ is an essentially disjoint $(\lambda,\omega_1,\omega)$-system then, trivially, we have $\chi({{\mathcal A}}) = 2$, moreover there is a coloring $f : \cup {{\mathcal A}}\to \omega_1$ that satisfies $|\omega_1 {\setminus}I_f(A)| < \omega_1$ for all $A \in {{\mathcal A}}$. Consequently, from theorem \[tm:Rr\] we immediately obtain the following result.
\[cor:el\] Under the assumptions of theorem \[tm:Rr\], in particular under GCH, the following five statements are equivalent for an uncountable cardinal $\lambda$:
1) $\,[\lambda,\omega_1,\omega] \Rightarrow \omega_1\,,$
2) $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\lambda,\omega_1,\omega) \le \omega_1\,,$
3) $\,\chi(\lambda,\omega_1,\omega) \le \omega_1\,,$
4) $\,\chi(\lambda,\omega_1,\omega) = 2\,,$
5) $\,{{\mathbf M}(\lambda,{\omega}_1,\omega)}\to {\bf ED}\,$.
On ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega_1,\omega)$ and ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega,\omega)$
====================================================================================================================================================
Our previous results give no help in deciding the exact values of ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega_1,\omega)$ and ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega,\omega)$, except proposition \[pr:kko\] which implies that both are equal to either $\omega$ or $\omega_1$. We shall show below that actually both equal $\omega_1$ under CH and both equal $\omega$ under $MA_{\aleph_1}$. We also remark that, as any $(\omega_1,\omega_1,\omega)$-system clearly has an $\omega$-witness, we have $$\label{eq:le}
\omega \le {\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega_1,\omega) \le
{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega,\omega) \le \omega_1$$ in ZFC. However, we do not know if their equality is provable in ZFC.
That CH implies ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega,\omega) = \omega_1$ is an immediate consequence of the following ZFC result of Komjáth [@KO].
$$\chi({{{2^{\omega}}}},\omega,\omega) =
{{2^{\omega}}}\,.$$
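To spell out the easy deduction: under CH we have $2^{\omega}={{{\omega}_1}}$ and, using the obvious inequality ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}\ge \chi$, $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega,\omega) \ge \chi({\omega_1},\omega,\omega)=\chi(2^{\omega},\omega,\omega)= 2^{\omega}={{{\omega}_1}}\,,$$ while the converse inequality is contained in (\[eq:le\]).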
Before giving our proof that CH also implies ${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega_1,\omega) = \omega_1$, we need a preparatory lemma.
\[lm:eqx\] Let ${{\mathcal A}}{\subset}\br {{{\omega}_1}};{{{\omega}_1}};$ be $\omega$-almost disjoint and ${{\mathcal I}}({{\mathcal A}})$ be the ideal generated by ${{\mathcal A}}$, that is, $X \in
{{\mathcal I}}({{\mathcal A}})$ iff there is ${{\mathcal B}}\in [{{\mathcal A}}]^{<\omega}$ with $X
{\subset}\cup {{\mathcal B}}$. Then, for any $X {\subset}\omega_1\,$, $\,X \cap
\alpha \in {{\mathcal I}}({{\mathcal A}})$ for all $\alpha < \omega_1$ implies $X
\in {{\mathcal I}}({{\mathcal A}})$.
For each ${\alpha}<{{{\omega}_1}}$ we may pick a ${\subset}$-minimal ${{\mathcal B}}_\alpha \in [{{\mathcal A}}]^{<\omega}$ such that $X\cap {\alpha}
{\subset}^* \cup {{\mathcal B}}_\alpha$, i.e. $|X \cap {\alpha} {\setminus}\cup
{{\mathcal B}}_\alpha| < \omega$. There is $I\in \br {{{\omega}_1}};{{{\omega}_1}};$ for which $\{{{\mathcal B}}_{\alpha}:{\alpha}\in I\}$ forms a $\Delta$-system with kernel ${{\mathcal B}}$. We claim that ${{\mathcal B}}_{\alpha}={{\mathcal B}}$ for all ${\alpha}\in I$. Then we are done because this implies $X {\subset}^*
\cup {{\mathcal B}}$ and hence $X \in {{\mathcal I}}({{\mathcal A}})$ by $X {\subset}\cup
{{\mathcal A}}$.
So assume, on the contrary, that $\alpha \in I$ and $A \in
{{\mathcal B}}_\alpha {\setminus}{{\mathcal B}}$. By the ${\subset}$-minimality of ${{\mathcal B}}_\alpha$, the set $$Y = A \cap (X \cap \alpha {\setminus}\cup
{{\mathcal B}})$$ must be infinite. But, for any $\beta \in I$ with $\beta
> \alpha$, if $B \in {{\mathcal B}}_\beta {\setminus}{{\mathcal B}}$ then $|B \cap Y| \le |B \cap A| <
\omega$, contradicting $$Y {\subset}X \cap \beta {\setminus}\cup {{\mathcal B}}{\subset}^* \cup ({{\mathcal B}}_\beta {\setminus}{{\mathcal B}}).$$
\[tm:ch\_w1w1\] CH implies $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega_1,\omega) =
{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega,\omega) = \omega_1\,.$$
By induction on ${\alpha}$, we shall construct an ${\omega}$–almost disjoint family ${{\mathcal A}}=\{A_{\alpha}:{\alpha}<{{{\omega}_1}}\}{\subset}\br {{{\omega}_1}};{{{\omega}_1}};$ such that for any coloring $h:{{{\omega}_1}}\to {\omega}$ there is $A_\alpha \in {{\mathcal A}}$ satisfying $$\label{eq:oooo}
\forall \, n \in h[A_\alpha]\,\,\big(\,|h^{-1}\{n\}\cap A_\alpha|\ge{\omega}
\big).$$ To start with, using CH, let
- $\{T_{\alpha}:{\alpha}<{{{\omega}_1}}\}$ be a partition of ${{{\omega}_1}}$ into uncountable sets such that $T_{\alpha}{\subset}{{{\omega}_1}}{\setminus}{\alpha}\,$ for every $\alpha < \omega_1$;
- $\{S_{\alpha}:{\alpha}<{{{\omega}_1}}\}$ be an enumeration of $[\omega_1]^\omega$.
Assume that $\{A_{\beta}:{\beta}<{\alpha}\}$ has been constructed and we have $\alpha \in T_\gamma$. For any subset $a{\subset}{\alpha}$ we write ${A{[a]}} =\cup\{A_{\beta}:{\beta}\in a\}$, in particular, ${A{[\xi]}} = \cup_{\eta <\xi}A_\eta$. Consider the set $$H_{\alpha}=\{{\beta}<{\alpha}: S_\beta {\subset}\alpha {\setminus}{A{[\gamma]}}\, \mbox{ and } \,\forall a\in \br {\alpha};<{\omega};\
\big|S_{\beta}{\setminus}{A{[a]}} \big|={\omega}\}.$$
We can choose $B_{\alpha} {\subset}{\alpha} {\setminus}{A{[\gamma]}}$ such that
1. $|B_{\alpha}\cap A_\beta|<{\omega}$ for each $\beta < \alpha$,
2. $|B_{\alpha} \cap S_\beta|={\omega}\,$ whenever $\,{\beta}\in
H_{\alpha}\,$.
Indeed, if $H_\alpha = {\emptyset}$ then $B_\alpha = {\emptyset}$ works, and otherwise $B_\alpha$ can be obtained by a simple recursive construction. Finally, let us put $A_{\alpha}=B_{\alpha}\cup
T_{\alpha}$. Note that, by definition, $A_\beta \cap A_\alpha =
A_\beta \cap B_\alpha$ is finite for every $\beta < \alpha$.
Let $A = {A{[\,\omega_1]}} = \cup {{\mathcal A}}$ and consider any coloring $h:A \to {\omega}$. We set $$I =\{n \in {\omega}:\,\exists\,\delta < \omega_1\, (\,h^{-1}\{n\}
{\subset}{A{[\delta]}}\,) \}$$ and $K = \omega {\setminus}I$. We may then find $\gamma < \omega_1$ such that $h^{-1}(I) {\subset}{A{[\gamma]}}$.
For any $n\in K$ consider the set $$X_n=h^{-1}\{n\}{\setminus}{A{[\gamma]}}\,,$$ then obviously $X_n \notin {{\mathcal I}}({{\mathcal A}})\,$. Thus, by lemma \[lm:eqx\], there is ${\alpha}_n<{{{\omega}_1}}$ such that $X_n \cap
\alpha_n \notin {{\mathcal I}}({{\mathcal A}})\,$ as well. For each $n \in K$ pick $\beta_n<{{{\omega}_1}}$ with $S_{\beta_n}=X_n\cap {\alpha}_n$ and choose ${\alpha}\in T_{\gamma}$ such that ${\alpha}>\sup(\{{\alpha}_n:n\in K\}\cup\{\beta_n:n\in K\})$.
Clearly, then $\{\beta_n:n\in K\}{\subset}H_{\alpha}$, hence $$B_{\alpha}\cap h^{-1}\{n\}\supset B_{\alpha}\cap (X_n\cap
{\alpha}_{n}) = B_{\alpha}\cap S_{\beta_n}$$ is infinite for every $n \in K$. If, however, $n\in I$ then $h^{-1}\{n\}{\subset}{A{[\gamma]}}$, and so $B_\alpha \cap {A{[\gamma]}} = A_\alpha \cap {A{[\gamma]}} = {\emptyset}$ implies $h^{-1}\{n\}\cap A_{\alpha}={\emptyset}$. Thus $A_{\alpha}$ witnesses (\[eq:oooo\]).
Now we turn to our other promised result, namely that $MA_{\omega_1}$ implies $${\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega_1,\omega) =
{\operatorname{\mbox{${\chi}$}_{\rm CF}}}({\omega_1},\omega,\omega) = \omega\,.$$ In fact, we prove the following stronger theorem.
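(Indeed, by (\[eq:le\]) both of these cardinals are at least ${\omega}$, while a coloring $F$ with ${\omega}$ colors satisfying $I_F(A)=^*{\omega}$ for every member $A$, as produced at the end of the proof below, is in particular conflict-free; hence both cardinals are at most ${\omega}$.)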
\[tm:ma1\] If $MA_{\omega_1}$ holds then
\(1) $\,[\,{\omega_1},\omega,\omega] \Rightarrow \omega$,
\(2) $\,[\,{\omega_1},\omega_1,\omega] \Rightarrow
\omega$.
Let us start by noting that (2) follows from (1) because every $({\omega_1},\omega_1,\omega)$-system admits an $\omega$-witness.
Now, to prove (1), let us consider any $({\omega_1},\omega,\omega)$-system ${{\mathcal A}}= \{A_\alpha : \alpha <
\omega_1\} {\subset}[\omega_1]^\omega$. We then define a poset ${{\mathcal P}}= \<P,\preceq\>$ as follows. Let $P = Fn(\omega_1,\omega)
\times [\omega_1]^{<\omega}$ and for $\<f,I\>, \<g,J\>\in P$ put $\<g,J\>\preceq \<f,I\>$ iff $g \supset f\,,J \supset I\,$, and for all ${\alpha}\in I$ we have
(i) $(g{\setminus}f)\restriction A_{\alpha}$ is 1–1, and
(ii) $(g{\setminus}f)[A_{\alpha}] \cap f[A_\alpha] = {\emptyset}$.
It is easy to check that $\preceq$ is indeed a partial order on $P$.
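For the record, here is the transitivity check. Assume that $\<h,L\>\preceq \<g,J\>\preceq \<f,I\>$ and ${\alpha}\in I{\subset}J$. Then $h{\setminus}f=(h{\setminus}g)\cup (g{\setminus}f)$, both pieces are 1–1 on $A_{\alpha}$, and $(h{\setminus}g)[A_{\alpha}]$ is disjoint from $g[A_{\alpha}]\supset f[A_{\alpha}]\cup (g{\setminus}f)[A_{\alpha}]$, while $(g{\setminus}f)[A_{\alpha}]$ is disjoint from $f[A_{\alpha}]$; consequently $(h{\setminus}f)\restriction A_{\alpha}$ is 1–1 and $(h{\setminus}f)[A_{\alpha}] \cap f[A_{\alpha}] = {\emptyset}$.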
We next show that ${{\mathcal P}}$ is CCC. To see this, consider first two members of $P$, say $p = \<f,I\>$ and $q = \<g,J\>$, such that the following conditions hold with $D = {\operatorname{dom}}f$ and $E = {\operatorname{dom}}g\,$:
(a) $f \upharpoonright D \cap E = g \upharpoonright D \cap E\,$, i.e. $f$ and $g$ are compatible functions;
(b) $A[I] \cap (E {\setminus}D) = {\emptyset}= A[J] \cap (D {\setminus}E)\,$.
(Here, as in the proof of theorem \[tm:ch\_w1w1\], ${A{[x]}} = \cup
\{A_\alpha : \alpha \in x \}$.) Then, trivially, $r = \< f \cup g,
I \cup J \> \in P$ and $r \preceq p,q$. Indeed, for instance, $r
\preceq p$ because $(g {\setminus}f) \upharpoonright A_\alpha = {\emptyset}$ for each $\alpha \in I$. Thus, to show that ${{\mathcal P}}$ is CCC, it will suffice to prove that among any $\omega_1$ members of $P$ there are two that satisfy (a) and (b).
So let $\{p_{\nu}:{\nu}<{{{\omega}_1}}\}{\subset}P$ with $p_{\nu}={\<f_{\nu}, I_{\nu}\>}$. Using standard $\Delta$-system and counting arguments we can assume the following:
1) $\{{\operatorname{dom}}(f_{\nu}):{\nu}<{{{\omega}_1}}\}$ forms a $\Delta$-system with kernel $D$ and we have $D<D_{\nu}<D_{\mu}$ for ${\nu}<{\mu}<{{{\omega}_1}}$, where $D_{\nu}={\operatorname{dom}}(f_{\nu}){\setminus}D$.
2) \[iii\] $f_{\nu}\restriction D=f$ and $|D_{\nu}|=n$ for all $\nu <
\omega_1$.
3) $\{I_{\nu}:{\nu}<{{{\omega}_1}}\}$ forms a $\Delta$-system with kernel $I$ and $I<J_{\nu}<J_{\mu}$ for ${\nu}<{\mu}<{{{\omega}_1}}$, where $J_{\nu}=I_{\nu}{\setminus}I$. Moreover, $|J_{\nu}|=m$ for all $\nu < \omega_1$.
4) ${A{[I]}} < D_0$ and ${A{[I_\nu]}} < D_\mu$ whenever $\nu < \mu <
\omega_1$.
\[cl:NM\] If $N\in \br {{{\omega}_1}};{\omega};$ and $M\in \br {{{\omega}_1}};n \cdot m+1;$ satisfy $\, N < M\,$ then there are ${\nu}\in N$ and ${\mu}\in M$ such that $D_{\nu}\cap {A{[J_{\mu}]}} ={\emptyset}$.
Let ${{\mathcal U}}$ be a non-principal ultrafilter on $N$. Write $D_{\nu}=\{{\delta}_{{\nu},i}:i<n\}$ and $J_{\mu}=\{{\alpha}_{{\mu},j}:j<m\}$.
Assume, on the contrary, that for any ${\nu}\in N$ and ${\mu}\in
M$ there are $i<n$ and $j<m$ such that ${\delta}_{{\nu}, i}\in
A_{{\alpha}_{{\mu},j}}$. This implies that, for any fixed $\mu \in
M$, there is a pair $\<i,j\> \in n \times m$ for which $$V^{i,j}_\mu = \{\nu \in N : {\delta}_{{\nu}, i}\in
A_{{\alpha}_{{\mu},j}}\} \in {{\mathcal U}}\,.$$ Then, as $|M| > n \cdot
m$, there are two distinct $\mu,\,\mu' \in M$ and a pair $\<i,j\>
\in n \times m$ such that both $V^{i,j}_\mu \in {{\mathcal U}}$ and $V^{i,j}_{\mu'} \in {{\mathcal U}}$ and hence $V^{i,j}_{\mu} \cap
V^{i,j}_{\mu'} \in {{\mathcal U}}$ is infinite. This, however, would imply that $$A_{{\alpha}_{{\mu},j}} \cap A_{{\alpha}_{{\mu}',j}} \supset\, \{\delta_{\nu,i} : \nu \in V^{i,j}_{\mu} \cap V^{i,j}_{\mu'}\}$$ is also infinite, a contradiction.
But if $\nu,\,\mu$ are as in claim \[cl:NM\], then clearly (a) and (b) are satisfied for $p_\nu$ and $p_\mu$, and hence they are compatible. This completes the proof that ${{\mathcal P}}$ is CCC.
Let us now consider, for every $\alpha < \omega_1$ and $n <
\omega$, the sets $$D_{\alpha}=\{\<f,I\>\in P:{\alpha}\in {\operatorname{dom}}(f)\,\},$$ and $$E^n_{\alpha}=\{\<f,I\>\in P:{\alpha}\in I \mbox{ and } n \in
f[A_\alpha]\}.$$ It is easy to check that all these sets are dense in ${{\mathcal P}}$; let us only do it for the $E^n_{\alpha}$. Indeed, any $\<f,I\>\in P$ is extended by $\<f,I \cup \{\alpha\}\>\,$, so we may assume that $\alpha \in I$ to begin with. Now, if $n \notin f[A_\alpha]$ then first pick $\gamma \in A_\alpha {\setminus}\big({A{[I {\setminus}\{\alpha\}]}}\cup {\operatorname{dom}}(f)\big)$; this is possible since $A_\alpha$ is infinite while both $A_\alpha \cap {A{[I {\setminus}\{\alpha\}]}}$ and ${\operatorname{dom}}(f)$ are finite. Obviously, we have then $\<f \cup \{\<\gamma,n \>\},I\> \preceq
\<f,I\>$ and $\<f \cup \{\<\gamma,n \>\},I\> \in E^n_{\alpha}$.
By $MA_{\omega_1}$ there is a filter ${{\mathcal G}}$ in ${{\mathcal P}}$ that meets all the dense sets $D_\alpha$ and $E^n_{\alpha}$. Let us put $$F=\cup\{f:\<f,I\>\in {{\mathcal G}}\}.$$ Then $F : \omega_1 \to \omega$ because ${{\mathcal G}}$ meets every $D_\alpha$ and we claim that $I_F(A_\alpha) =^* \omega$ for each $\alpha < \omega_1$. Indeed, ${{\mathcal G}}\cap E^n_{\alpha} \ne {\emptyset}$ for all $n < \omega$ implies $F[A_\alpha] = \omega$. Moreover, there is some $\<f,I\>\in {{\mathcal G}}$ with $\alpha \in I$, consequently, by the definition of $\preceq$, we clearly have $I_F(A_\alpha) \supset \omega {\setminus}f[A_\alpha]$.
Is $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_1,\omega_1,\omega) =
{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_1,\omega,\omega)\,$ provable in ZFC?
Recall that “stick" is the following combinatorial statement, a common weakening of CH and $\clubsuit = \clubsuit(\omega_1)$: There is a family ${{\mathcal A}}{\subset}[\omega_1]^\omega$ such that $|{{\mathcal A}}| = \omega_1$ and for every uncountable set $S {\subset}\omega_1$ we have an $A \in {{\mathcal A}}$ with $A {\subset}S$. We know that stick implies $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_1,\omega,\omega) = \omega_1$.
Does stick imply $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(\omega_1,\omega_1,\omega) = \omega_1$?
Is $\,{\operatorname{\mbox{${\chi}$}_{\rm CF}}}(2^\omega,2^\omega,\omega) = 2^\omega\,$ provable in ZFC?
---
abstract: 'We prove there are exactly $16$ arithmetic lattices of hyperbolic $3$-space which are generated by two elements of finite orders $p$ and $q$ with $p,q\geq6$. We also verify a conjecture of H.M. Hilden, M.T. Lozano, and J.M. Montesinos concerning the orders of the singular sets of arithmetic orbifold Dehn surgeries on two bridge knot and link complements.'
author:
- 'C. Maclachlan and G.J. Martin [^1]'
title: |
The $(p,q)$-arithmetic hyperbolic lattices;\
$p,q \geq 6$
---
Introduction
============
There are infinitely many lattices in the group $\Isom^+{\mathbb H}^3$ $\cong \PSL(2,
{\mathbb C})$, of orientation-preserving isometries of hyperbolic 3-space (equivalently Kleinian groups of finite co-volume) which can be generated by two elements of finite orders $p$ and $q$. For instance, all but finitely many $(p,0)-(q,0)$ Dehn surgeries on any of the infinitely many hyperbolic two-bridge links will have fundamental groups which are such uniform (co-compact) lattices [@Thurston]. Two infinite families of such groups are shown below in Figure 1.
In [@MM], we showed that, up to conjugacy, only finitely many of these lattices can be arithmetic. In [@MM2], we identified the 20 such non-uniform lattices of which 15 were [*generalised triangle groups*]{}; that is, groups with a presentation of the form $\langle x,y : x^p = y^q =
w(x,y)^r = 1 \rangle$ where $w(x,y)$ is a word involving both $x$ and $y$ (see [@FR1; @BMS]) and $p,q,r\geq 2$.
In this paper we prove that, up to conjugacy, there are exactly $16$ arithmetic lattices in $\Isom^+{\mathbb H}^3$ which can be generated by two elements of finite orders $p$ and $q$ with $6 \leq p,q $. Among these groups there are, curiously, no generalised triangle groups. Two of the groups are non-uniform (and are discussed in [@MM2]) and the others are identified as fundamental groups of orbifolds obtained by Dehn surgeries on 2-bridge knots and links. As such they appear in [@HLM1] and our results establish a conjecture in that paper on the degree of the singular set of such orbifolds.
As a basic reference to the deep relationships between arithmetic and hyperbolic geometry we refer to [@MR]. We note here a few connections. In $\Isom^+{\mathbb H}^2$, Takeuchi [@Tak] identified all 82 arithmetic lattices generated by two elements of finite order (equivalently, arithmetic Fuchsian triangle groups), and all arithmetic Fuchsian groups with two generators have been identified (see [@Tak2; @MRos2]). The connections between arithmetic surfaces, number theory and theoretical physics can be found in work of Sarnak and co-authors, e.g. [@Sar2; @RS], see also [@katok]. In [@Vin1] Vinberg gave criteria for Coxeter groups in $\Isom\, {\mathbb H}^n$ to be arithmetic. Such groups do not exist in the co-compact case for dimension $n \geq 30$, [@Vin2]. It has now been established that there are finitely many conjugacy classes of maximal arithmetic Coxeter groups in all dimensions. There are two proofs; see [@Agol] and [@Nik1] (previously established in two dimensions in [@LMR]).
Returning to dimension 3, the orientation-preserving subgroups of Coxeter groups for tetrahedra, some of which are generated by two elements of finite order, which are arithmetic, are identified in [@Vin1] (see [@MR2; @CM] for related results). Reid [@R3] identified the figure eight knot complement as the only arithmetic knot complement. The four orientable hyperbolic 3-manifolds with fundamental group generated by a pair of parabolic elements which are arithmetic are two bridge knot and link complements [@GMM]. The 14 finite co-volume Kleinian groups with two generators, one of finite order, one parabolic, which are arithmetic are described in [@CMMO]. An algorithmic approach to deciding if an orbifold obtained by $(p,0)-(q,0)$ surgery on a two bridge link (or knot) has arithmetic fundamental group was given in [@HLM1]. Arithmetic hyperbolic torus bundles are discussed in [@HLM; @Bow] and generalised triangle groups which are Kleinian in [@HMR; @HLM2; @Vin3]. Various extremal groups have been identified as two-generator arithmetic; for instance the minimal volume non-compact hyperbolic 3-manifold and orbifold [@Meyer; @CaoM]. The Weeks manifold is arithmetic, two-generator and conjecturally the minimal volume orientable hyperbolic 3-manifold [@CFJR]. The prime candidate for the minimal volume orientable hyperbolic 3-orbifold is also arithmetic and generated by two elements of finite order [@Chin2; @GM3; @GM4; @MMarshall].
Before precisely stating our main result let us say a few words about its proof and why we have the restriction $p,q\geq 6$. In our work identifying the two-generator non-uniform lattices in [@MM2], a key observation was that the non-compactness hypothesis provided [*a priori*]{} knowledge that the underlying fields were quadratic imaginary and the groups we were looking for were commensurable with Bianchi groups. In this paper a major part of the work is to identify the underlying fields. For this we make use of some important results of Stark [@Stark] and Odlyzko [@Od], as well as results concerning the discriminants of number fields of small degree such as those of Diaz Y Diaz and Olivier [@Di; @CDO; @DO]. Using these bounds and some results from the geometry of numbers and various discreteness criteria, we bound the degree of the fields in question and then, in turn, bound the possible parameters for an arithmetic Kleinian group - once we have fixed the orders of the generators, the space of all discrete groups up to conjugacy is parameterised by a one complex-dimensional space.
Finally to identify all the groups we use a computer search to examine all algebraic integers in the field satisfying the given bounds and additional arithmetic restrictions on the real embeddings. This procedure gives us a relatively short list of candidate discrete groups which are now known to be subgroups of arithmetic Kleinian groups [@GMMR]. We then use various ideas, discussed in the body of the text, to decide if these groups are in fact arithmetic - at issue here is the finiteness of the co-volume.
Thus we are able to identify (up to conjugacy) all the arithmetic Kleinian groups $\langle f,g
\rangle$ generated by an element $f$ of order $p$ and $g$ of order $q$ with $p$ and $q$ at least $6$. Our results here also give the cases $p=2$, $q\geq 6$ by the known result that a $(2,p)$-arithmetic hyperbolic lattice contains a $(p,p)$-arithmetic hyperbolic lattice with index at most two.
At present the remaining cases $p=2,3,4,5$ and $q\geq p$ seem computationally infeasible, unless $q$ is large enough - although we have made some recent progress using the work of [@FR] on the most difficult case $p=2$ and $q=3$. As the reader will come to realise, the main problem here is finding effective bounds on the degree of the associated number fields.
Here is our main result:
Let $\Gamma=\langle f,g \rangle$ be an arithmetic Kleinian group generated by elements of order $p$ and $q$ with $p,q \geq 6$. Then $p,q$ fall into one of the following 4 cases:
1. $p=q=6$,\
there are precisely $12$ groups enumerated below in Table 1. (See comments following the table).
2. $p=q=8$,\
there is precisely one group obtained by (8,0) surgery on the knot 5/3.
3. $p=q=10$,\
there is precisely one group obtained by (10,0) surgery on each component of the link 13/5.
4. $p=q=12$,\
there are precisely two groups obtained by (12,0) surgery on the knot 5/3 and on each component of 8/3.
Here $r/s$ denotes the slope (or Schubert normal form) of a two bridge knot or link, [@BZ] §12. Thus 5/3 denotes the well known figure eight knot.
As a corollary we are able to verify a condition noticed by Hilden, Lozano and Montesinos [@HLM1] concerning the $(n,0)$ surgeries on two bridge link complements.
Let $(r/s,n)$ denote the arithmetic hyperbolic orbifold whose underlying space is the 3-sphere and whose singular set is the 2 bridge knot or link with slope $r/s$ and has degree $n$. Then $$n \in \{2,3,4,5,6,8,10,12,\infty \}$$
We also have the following, slightly surprising, corollary.
\[notrigrp\] There are no co-compact arithmetic generalised triangle groups with generators of orders at least $6$.
To prove the corollary we shall show later (see \[proofngtg\]) that there cannot be another presentation of the same group on two generators of orders at least 6 as a generalised triangle group.
In two dimensions, there are in fact 32 arithmetic triangle groups with two generators of orders at least $6$ on Takeuchi’s list [@Tak]. In three dimensions for each $p$ and $q$ ($\min\{p,q\}\geq 3 $) there are infinitely many co-compact generalised triangle groups with a presentation of the form $\langle f,g: f^p=g^q=w(f,g)^2=1\rangle$ for certain words $w$ in $f$ and $g$. Some of these are discussed in [@JR]. Apparently as soon as $\min\{p,q\}\geq 6 $ none of these groups can be arithmetic.
$\gamma$ value $k\Gamma$ description of orbifold
----------------------- ----------------- -----------------------------------
$ i\sqrt{3} $ $z^2-z+1$ $\Gamma_{21}$
$-1+i$ $z^2+1$ (6,0) surgery on 5/3
$-1$ $z^2+z+1$ $\Gamma_{20}$
$1+3i$ $z^2-2z+2$ (6,0)-(6,0) surgery on link 24/7
$-1+i\sqrt{7}$ $z^2-z+2$ (6,0)-(6,0) surgery on link 30/11
$-2+i\sqrt{2}$ $z^2+2$ (6,0)-(6,0) surgery on link 12/5
$4.1096 - i\ 2.4317$ $1+2z-3z^2+z^3$ (6,0) surgery on knot 65/51
$3.0674 -i\ 2.3277$ $2-2z^2+z^3$ (6,0) surgery on knot 13/3$^*$
$2.1244 -i\ 2.7466$ $1+z-2z^2+z^3$ (6,0) surgery on knot 15/11
$1.0925 - i\ 2.052 $ $1-z^2+z^3$ (6,0) surgery on knot 7/3$^*$
$0.1240 -i\ 2.8365$ $1+z-z^2+z^3$ (6,0) surgery on knot 13/3$^*$
$-0.8916 -i\ 1.9540 $ $1+z+z^3$ (6,0)-(6,0) surgery on link 8/5
$-1.8774 - i\ 0.7448$ $1+2z+z^2+z^3$ (6,0) surgery on knot 7/3$^*$
$-2.8846 -i\ 0.5897$ $1+3z+z^2+z^3$ (6,0)-(6,0) surgery on link 20/9
: Arithmetic groups with $p,q=6$
[**Notes:**]{} Table 1, and the results above, were produced as follows. The methods we outlined above and discuss in detail in the body of the paper produce for us all possible values of the trace of the commutator of a pair of primitive elliptic generators of an arithmetic Kleinian group (the parameters) as well as an approximate volume for the orbit space. We then use Jeff Weeks’ hyperbolic geometry package “Snappea” [@snap] to try and identify the orbifold in question by surgering various two bridge knots and links and comparing volumes. Once we have a likely candidate, we use the matrix presentation given by Snappea and verify that the commutator traces are the same. As these traces come as the roots of a monic polynomial with integer coefficients of modest degree, this comparison is exact. Since this trace determines the group up to conjugacy, we thereby identify the orbit space. Conversely, once the two bridge knot or link and the relevant surgery is determined, a value of $\gamma$ can be recovered from the algorithm in [@HLM1].
Next, if $\alpha$ is a complex root of the given polynomial in Table 1, then $k\Gamma={\mathbb Q}(\alpha)$ and $\alpha(\alpha+1)=\gamma$. Recall that the parameter $\gamma$ is determined by the Nielsen equivalence class of a pair of generators of the group. In these tables an $*$ denotes that a Nielsen inequivalent pair of generators of order 6 (also listed in the table) gives rise to the same group. Curiously these examples were identified as follows. Each is an index two subgroup of a group obtained by $(2,0)$-surgery on one component $C_1$ and $(6,0)$-surgery on the other component $C_2$ of a two bridge link, in particular the links $7^{2}_{1}$ and $9^{2}_{1}$ in Rolfsen’s tables [@Rolfsen]. Suppose images of the meridians are $f$ of order six and $g$ of order two. Then $f$ and $gfg^{-1}$ are generators, both of order six, for the group in question. If however we do $(6,0)$-surgery on $C_1$ and $(2,0)$-surgery on $C_2$, with images of meridians being $f'$ and $g'$, then $f'$ and $g'f'g'^{-1}$ give the same group, but are not Nielsen equivalent (as the $\gamma$ parameters are different). Of course once identified, one can use the retriangulation procedure on Snappea to try to generate these different Nielsen classes of generators (knowing they exist is a big incentive to retriangulate a few times).
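As an illustration of this relation (a numerical check of ours, not taken from the paper), for the row of Table 1 with field polynomial $1-z^2+z^3$ the tabulated $\gamma$ is recovered as follows.

```python
import numpy as np

# roots of 1 - z^2 + z^3; take the complex root alpha with negative imaginary part
alpha = [z for z in np.roots([1, -1, 0, 1]) if z.imag < 0][0]
print(alpha*(alpha + 1))    # approximately 1.0925 - 2.0520i, the tabulated gamma value
```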
The groups listed as $\Gamma_{20}$ and $\Gamma_{21}$ are the only non-compact examples and were found in [@MM2]. They have the following presentations $$\Gamma_{20} = \langle
x,y:x^6=y^6=[x,y]^3=([x,y]x)^2=(y^{-1}[x,y])^2=(y^{-1}[x,y]x)^2=1\rangle$$ $$\Gamma_{21} = \langle x,y:x^6=y^6=
(y^{-1}x)^2y[x^{-1},y][x,y][x,y^{-1}]x^{-1} = ([y^{-1},x]yx^2)^2 =1\rangle$$
Two-generator Arithmetic Lattices
=================================
The group of orientation preserving isometries of the upper half-space model of ${\mathbb H}^3$, 3-dimensional hyperbolic space, is given by the group $\PSL(2, {\mathbb C})$, the natural action of its elements by linear fractional transformations on $\hat{{\mathbb C}}$ extending to ${\mathbb H}^3 = {\mathbb C}\times {\mathbb R}^+$ and preserving the metric of constant negative curvature via the Poincaré extension.
A subgroup ${\Gamma}$ of $\PSL(2,{\mathbb C})$ is said to be [*reducible*]{} if all elements have a common fixed point in their action on $\hat{{\mathbb C}}$ and ${\Gamma}$ is otherwise [*irreducible*]{}. Also ${\Gamma}$ is said to be [*elementary*]{} if it has a finite orbit in its action on ${\mathbb H}^3 \cup \hat{{\mathbb C}}$ and ${\Gamma}$ is otherwise [*non-elementary*]{}.
Parameters
----------
If $f \in \PSL(2,{\mathbb C})$ is represented by a matrix $A \in \SL(2,{\mathbb C})$ then the trace of $f$, $\tr(f)$, is only defined up to a sign. However, if $[f,g]=fgf^{-1}g^{-1}$ denotes the commutator of $f$ and $g$, then $\tr[f,g]$ is well-defined and, furthermore, the two generator group $\langle f,g\rangle $ is reducible if and only if $\tr[f,g]=2$. For a two-generator group $\langle f,g\rangle $ the three complex numbers $(\gamma(f,g),\beta(f), \beta(g))$ $$\label{betadef} \beta(f) = \tr^2(f) - 4, \hskip10pt \beta(g) = \tr^2(g) - 4, \hskip10pt \gamma(f,g) = \tr[f,g]-2$$ are well-defined by $f,g$ and form the [*parameters*]{} of the group $\langle f,g\rangle $. They define $\langle f,g\rangle $ uniquely up to conjugacy provided $\langle f,g\rangle $ is irreducible, that is $\gamma(f,g) \neq 0$, see [@GM0].
Now suppose that $f$ and $g$ have finite orders $p$ and $q$ respectively where we can assume that $p \geq q$. In considering the group ${\Gamma}= \langle f , g \rangle $ we can assume that $f$ and $g$ are primitive elements and so ${\Gamma}$ has parameters $$(\gamma, -4 \sin^2 \pi/p, -4 \sin^2 \pi/q).$$ (Where there is no danger of confusion, we will abbreviate $\gamma(f,g)$ simply to $\gamma$.) For fixed $p,q$, any $\gamma \in {\mathbb C}\setminus \{ 0\}$ uniquely determines the conjugacy class of such a group ${\Gamma}= \langle f , g \rangle $. We say ${\Gamma}$ is [*Kleinian*]{} if it is a discrete non-elementary subgroup of $\PSL(2,{\mathbb C})$. For fixed $p$ and $q$ it is an elementary consequence of a theorem of Jørgensen [@Jorg] that the set of all such $\gamma$ is closed and computer generated pictures suggest that it is highly fractal in nature - for instance the Riley slice, corresponding to two parabolic generators, would correspond to $p=q=\infty$.
The cases where $\gamma$ is real have been investigated in [@Klim; @KlimKop; @MM3]. We have shown in [@MM] that for each pair $(p,q)$ there are only finitely many $\gamma$ in ${\mathbb C}$ which yield arithmetic Kleinian groups and for all but a finite number of pairs $(p,q)$, that finite number is zero. It is our aim here to determine all $\gamma$ such that ${\Gamma}$ is an arithmetic Kleinian group (i.e. a 3-dimensional arithmetic hyperbolic lattice) with $p,q \geq 6$ and to obtain a geometric description of these groups.
Arithmetic Kleinian Groups
--------------------------
For detailed information on arithmetic Kleinian groups see [@Bo; @Vig; @MR]. For completeness, and since we will rely heavily on these results, we recall here some basic facts.
Let $k$ be a number field and for each place $\nu$ of $k$, let $k_{\nu}$ denote the completion of $k$ with respect to the metric on $k$ induced by the valuation $\nu$. For each Galois monomorphism $\sigma : k \rightarrow {\mathbb C}$, there is an Archimedean valuation given by $| \sigma(x) |$ and if $\sigma(k) \subset {\mathbb R}$ then $k_{\nu} \cong {\mathbb R}$ and, if not, each complex conjugate pair forms a place and $k_{\nu} \cong {\mathbb C}$. The other valuations are ${\cal P}-$adic and correspond to prime ideals ${\cal P}$ of $R_k$. The fields $k_{\nu} =
k_{\cal P}$ are finite extensions of the $p$-adic numbers ${\mathbb Q}_p$. Let $A$ be a quaternion algebra over $k$ and let $A_{\nu} = A \otimes_k
k_{\nu}$ so that $A_{\nu}$ is a quaternion algebra over the local field $k_{\nu}$. For $k_{\nu} \cong {\mathbb C}$, then $A_{\nu} \cong M_2({\mathbb C})$ but for all other places there are just two quaternion algebras over each local field one of which is $M_2(k_{\nu})$ and the other is a unique quaternion division algebra over $k_{\nu}$.
We say that $A$ is [*ramified*]{} at $\nu$ if $A_{\nu}$ is a division algebra. The set of places at which $A$ is ramified is finite of even cardinality and is called the [*ramification set*]{} of $A$, denoted by $\Ram(A)$. The ramification set determines the isomorphism class of $A$ over $k$. We also denote the set of Archimedean ramified places by $\Ram_{\infty}(A)$ and the non-Archimedean or finite places at which $A$ is ramified by $\Ram_f(A)$. Now as a quaternion algebra $A$ has a basis of the form $1,i,j,ij$ where $i^2 = a, j^2 = b$ and $ij = -ji$, with $a,b \in k^*$. It can thus be represented by a [*Hilbert symbol*]{} ${\displaystyle}{{\left(\frac{a,b}{k}\right)}}$. If the real place $\nu$ corresponds to the embedding $\sigma : k \rightarrow {\mathbb R}$, then $$A_{\nu} \cong
{\displaystyle}{{\left(\frac{a,b}{k}\right)} \otimes_k k_{\nu} \cong {\left(\frac{\sigma(a),\sigma(b)}{{\mathbb R}}\right)}}$$ and $A$ will be ramified at $\nu$ if and only if $A_{\nu}$ is isomorphic to Hamilton’s quaternions. This occurs precisely when both $\sigma(a)$ and $\sigma(b)$ are negative.
Now assume that $k$ has exactly one complex place and that the quaternion algebra $A$ is ramified at least at all the real places of $k$. Let ${\cal O}$ be an order in $A$ and let ${\cal O}^1$ denote the elements of norm 1. In these circumstances there is a $k$-embedding $\rho : A \rightarrow M_2({\mathbb C})$ and the group $P\rho({\cal O}^1)$ is a Kleinian group of finite co-volume. The set of [*arithmetic Kleinian groups*]{} is the set of Kleinian groups which are commensurable with some such $P\rho({\cal O}^1)$.
It is our aim to identify all (conjugacy classes of) arithmetic Kleinian groups generated by two elements of finite order. To do this we use the identification theorem below which gives a method of identifying arithmetic Kleinian groups from the elements of the given group.
We require the following preliminaries. Let ${\Gamma}$ be any non-elementary finitely-generated subgroup of $\PSL(2,{\mathbb C})$. Let ${\Gamma}^{(2)} = \langle g^2 \mid g \in {\Gamma}\rangle $ so that ${\Gamma}^{(2)}$ is a subgroup of finite index in ${\Gamma}$. Define $$\left.
\begin{array}{lll}
k{\Gamma}& = & {\mathbb Q}(\{ \tr(h) \mid h \in {\Gamma}^{(2)} \}) \\
A{\Gamma}& = & \{ \sum a_i h_i \mid a_i \in k{\Gamma}, h_i \in {\Gamma}^{(2)} \}
\end{array}\;\;\;\; \right\}$$ where, with the usual abuse of notation, we regard elements of ${\Gamma}$ as matrices, so that $A\Gamma \subset M_2({\mathbb C})$.
Then $A{\Gamma}$ is a quaternion algebra over $k{\Gamma}$ and the pair $(k{\Gamma}, A{\Gamma})$ is an invariant of the commensurability class of ${\Gamma}$. If, in addition, ${\Gamma}$ is a Kleinian group of finite co-volume then $k{\Gamma}$ is a number field.
We state the identification theorem as follows:
\[idthm\] Let ${\Gamma}$ be a subgroup of $\PSL(2,{\mathbb C})$ which is finitely-generated and non-elementary. Then ${\Gamma}$ is an arithmetic Kleinian group if and only if the following conditions all hold:
1. $k{\Gamma}$ is a number field with exactly one complex place,
2. for every $g \in {\Gamma}$, $\tr(g)$ is an algebraic integer,
3. $A{\Gamma}$ is ramified at all real places of $k{\Gamma}$.
4. ${\Gamma}$ has finite co-volume.
It should be noted that the first three conditions together imply that ${\Gamma}$ is Kleinian, and without the fourth condition, are sufficient to imply that ${\Gamma}$ is a subgroup of an arithmetic Kleinian group.
The first two conditions clearly depend on the traces of the elements of ${\Gamma}$. In addition, we may also find a Hilbert symbol for $A{\Gamma}$ in terms of the traces of elements of ${\Gamma}$ so that the third condition also depends on the traces (for all this, see [@MR1],[@MR Chap. 8]).
Two-generator arithmetic groups
-------------------------------
We now suppose that ${\Gamma}$ is generated by two elements $f,g$ of orders $p$ and $q$ respectively where $p \geq q$. We have noted that the conjugacy class of ${\Gamma}$ is uniquely determined by the single complex parameter $\gamma$. We now show how the first three conditions of Theorem \[idthm\] can be equivalently expressed in terms of $\gamma$. This is not true of the fourth condition, but for ${\Gamma}$ to have finite co-volume places some necessary conditions on $\gamma$ (see §3 below).
Note that $\tr f = \pm 2\cos(\pi/p)$ and $\tr g= \pm 2\cos(\pi/q)$ are algebraic integers and recall that the traces of all elements in $\langle f , g \rangle $ are integer polynomials in $\tr f, \tr g$ and $\tr fg$. Now the Fricke identity states $$\label{Fricke}
\gamma=\gamma(f,g) = \tr^2 f + \tr^2 g + \tr^2 fg - \tr f \tr g \tr fg -4.$$ Thus $\tr fg$ is an algebraic integer if and only if $\gamma$ is an algebraic integer so that the second condition of Theorem \[idthm\] is equivalent, in these two-generator cases, to requiring that $\gamma$ be an algebraic integer.
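The Fricke identity can also be confirmed numerically; the following short script (an illustrative sketch of ours, not part of the computations in this paper) checks it for random matrices in $\SL(2,{\mathbb C})$.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sl2c():
    m = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))
    return m / np.sqrt(np.linalg.det(m))     # rescale so that det = 1

for _ in range(5):
    F, G = random_sl2c(), random_sl2c()
    lhs = np.trace(F @ G @ np.linalg.inv(F) @ np.linalg.inv(G)) - 2   # gamma(f,g)
    tf, tg, tfg = np.trace(F), np.trace(G), np.trace(F @ G)
    assert abs(lhs - (tf**2 + tg**2 + tfg**2 - tf*tg*tfg - 4)) < 1e-9
```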
Now suppose that $p,q \geq 3$. Throughout, we denote $\beta(f), \beta(g)$ (see (\[betadef\])) by $\beta_1, \beta_2$ respectively so that $$\beta_1 = -4 \sin^2 \frac{\pi}{p},\hskip10pt \beta_2
= -4 \sin^2 \frac{\pi}{q}, \hskip10pt \beta_1+4 = 4 \cos^2 \frac{\pi}{p}, \hskip10pt \beta_2 + 4 = 4 \cos^2 \frac{\pi}{q}$$ Now $k{\Gamma}= {\mathbb Q}(\tr^2 f, \tr^2 g, \tr f \tr g \tr fg)$ (see for instance [@MR Chap.3]). We consistently use $L$ to denote the totally real subfield $$L = {\mathbb Q}( \tr^2 f, \tr^2 g) = {\mathbb Q}(\beta_1, \beta_2) =
{\mathbb Q}(\cos \frac{2\pi}{p}, \cos \frac{2\pi}{q})$$ Thus $k{\Gamma}= L(\lambda)$ where $\lambda = \tr f \tr g \tr fg$. From the Fricke identity (\[Fricke\]) and $\tr^2(fg)=\lambda^2/\big((\beta_1+4)(\beta_2+4)\big)$ we deduce that $\lambda$ satisfies the quadratic equation $$\label{eqn5}
x^2 - (4+\beta_1)(4+\beta_2) \, x
+ (4+\beta_1)(4+\beta_2)(\beta_1 + \beta_2 + 4 -\gamma) = 0,$$ and that $[k{\Gamma}: L(\gamma)]\leq 2$.
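Explicitly (a routine verification): substituting $\tr^2 fg = \lambda^2/\big((4+\beta_1)(4+\beta_2)\big)$ and $\tr^2 f + \tr^2 g - 4 = \beta_1+\beta_2+4$ into (\[Fricke\]) and multiplying through by $(4+\beta_1)(4+\beta_2)$ gives $$\lambda^2 - (4+\beta_1)(4+\beta_2)\, \lambda + (4+\beta_1)(4+\beta_2)(\beta_1+\beta_2+4) = (4+\beta_1)(4+\beta_2)\, \gamma,$$ which is (\[eqn5\]) rearranged; and since the coefficients of (\[eqn5\]) lie in $L(\gamma)$ while $k{\Gamma}=L(\lambda)$, the bound $[k{\Gamma}: L(\gamma)]\leq 2$ follows.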
Let us at this point remove the inconvenient case that $\gamma(f,g)$ is real as this case complicates our discussion. Suppose then that $\gamma\in {\mathbb R}$. In the next section (see (\[eqn21\])), it will be shown that, for ${\Gamma}$ to have finite co–volume we must have $$-4 < \gamma < 4(\cos \pi/p + \cos \pi/q)^2$$ Now if $\gamma \geq 0 $, then for any Kleinian group ${\Gamma}= \langle f, g\rangle $ with $o(f) = p, o(g)=q$ and $\gamma(f,g) = \gamma$, ${\Gamma}$ has an invariant plane [@Klim; @MM3] and so, as the reader can easily verify, cannot have finite co-volume and hence cannot be an arithmetic Kleinian group. Thus $-2 < \tr[f,g] < 2$ and so, whenever ${\Gamma}$ is discrete and finite co-volume the commutator $[f,g]$ must be elliptic. All such groups, arithmetic or otherwise, have been determined in [@MM3]. There are precisely nine such groups which are arithmetic, all have $p,q \leq 6$ and there is only one with $p=q=6$.
[*Thus we assume henceforth that $\gamma$ is not real.* ]{}
Then $k{\Gamma}$ will be a number field with one complex place if and only if $L(\gamma)$ has one complex place and the quadratic at (\[eqn5\]) splits into linear factors over $L(\gamma)$. This implies that, if $\tau$ is any real embedding of $L(\gamma)$, then the image of the discriminant of (\[eqn5\]), which is $(4+\beta_1)(4+\beta_2)(\beta_1 \beta_2 + 4 \gamma)$, under $\tau$ must be positive. Clearly this is equivalent to requiring that $$\label{eqn6}
\tau( \beta_1 \beta_2 + 4 \gamma) > 0.$$ Thus $k{\Gamma}$ has one complex place if and only if (i) ${\mathbb Q}(\gamma)$ has one complex place, (ii) $L \subset {\mathbb Q}(\gamma)$, (iii) for all real embeddings $\tau$ of ${\mathbb Q}(\gamma)$, (\[eqn6\]) holds and (iv) the quadratic at (\[eqn5\]) factorises over ${\mathbb Q}(\gamma)$.
Now, still in the cases where $p,q > 2$,([@MR §3.6]) $$\label{eqn7}
A{\Gamma}= {\left(\frac{\beta_1(\beta_1+4), (\beta_1+4)(\beta_2+4)\,\gamma}{k{\Gamma}}\right)}.$$ Under all the real embeddings of $k{\Gamma}$, the term $\beta_1(\beta_1+4)$ is negative and $(\beta_1+4)(\beta_2+4)$ is positive. Thus $A{\Gamma}$ is ramified at all real places of $k{\Gamma}$ if and only if, under any real embedding $\tau$ of $k{\Gamma}$, $$\label{eqn8}
\tau(\gamma) < 0.$$ Thus, summarising, we have the following theorem which we will use to determine the possible $\gamma$ values for the groups we seek.
\[2genthm\] Let ${\Gamma}= \langle f , g \rangle $ be a non-elementary subgroup of $\PSL(2,{\mathbb C})$ with $f$ of order $p$ and $g$ of order $q$, $p \geq q \geq 3$. Let $\gamma(f,g) = \gamma \in {\mathbb C}\setminus {\mathbb R}$. Then ${\Gamma}$ is an arithmetic Kleinian group if and only if
1. $\gamma$ is an algebraic integer,
2. ${\mathbb Q}(\gamma) \supset L = {\mathbb Q}(\cos 2 \pi/p, \cos 2 \pi/q)$ and ${\mathbb Q}(\gamma)$ is a number field with exactly one complex place,
3. if $\tau : {\mathbb Q}(\gamma) \rightarrow {\mathbb R}$ such that $\tau |_L = \sigma$, then $$\label{eqn10}
- \sigma( \frac{\beta_1 \beta_2}{4}) < \tau(\gamma) < 0,$$
4. the quadratic polynomial at (\[eqn5\]) factorises over ${\mathbb Q}(\gamma)$,
5. ${\Gamma}$ has finite co-volume.
Any non-elementary subgroup ${\Gamma}= \langle f, g \rangle $ of $\PSL(2,{\mathbb C})$ where $o(f) =
o(g) = p > 2$ is contained as a subgroup of index at most 2 in a group ${\Gamma}^* = \langle h, f \rangle $ where $o(h) = 2$ with $$\label{eqn11}
\gamma(f,g) = \gamma(h,f) ( \gamma(h,f) - \beta_1)$$ and conversely (see [@GM1]). Thus, $(k{\Gamma}, A{\Gamma}) = (k{\Gamma}^*, A{\Gamma}^*)$, since these are commensurability invariants, and so we can obtain necessary and sufficient conditions for arithmeticity of ${\Gamma}$ in terms of $\gamma = \gamma(h,f)$ where $o(h)=2, o(f)=p > 2$. In this case, $k{\Gamma}^* = {\mathbb Q}( \tr^2 f, \gamma)
= L(\gamma)$ (see [@MR]) and $$A{\Gamma}^* = {\left(\frac{\beta_1(\beta_1+4),\gamma\,(\gamma - \beta_1)}{k{\Gamma}^*}\right)}.$$ Arguing as above, we have
\[2gen\*\] Let ${\Gamma}^* = \langle h,f \rangle $ be a non-elementary subgroup of $\PSL(2,{\mathbb C})$ with $h$ of order $2$ and $f$ of order $p >2$. Let $\gamma(h,f) = \gamma \in {\mathbb C}\setminus {\mathbb R}$. Then ${\Gamma}^*$ is an arithmetic Kleinian group if and only if
1. $\gamma$ is an algebraic integer,
2. ${\mathbb Q}(\gamma) \supset L = {\mathbb Q}(\cos 2 \pi/p)$ and ${\mathbb Q}(\gamma)$ is a number field with exactly one complex place,
3. if $\tau : {\mathbb Q}(\gamma) \rightarrow {\mathbb R}$ such that $\tau|_L = \sigma$ then $$\label{eqn13}
\sigma(\beta_1) < \tau(\gamma) < 0,$$
4. ${\Gamma}^*$ has finite co-volume.
Implementation of the fourth condition of Theorem \[2genthm\] can be simplified as follows: Suppose that $m(x)$, the minimum polynomial of $\gamma$ over $L$, has the form $x^r + a_{r-1} x^{r-1} + ... + a_0.$ From our usual expression for $\gamma$ at (5), we have: $$\label{eqn14}
\tr^2 f \tr^2 g \; \gamma = (\tr f \tr g \tr fg)^2 - \tr^2 f \tr^2 g (\tr f \tr g \tr fg) + \tr^2 f \tr^2 g ( \tr^2 f + \tr^2 g - 4).$$ That is $ b \gamma = \lambda^2 - b \lambda + c $ where $b$ and $c$ are integers in $L$. Next, substituting in $m(x)$ and clearing denominators gives $(\lambda^2 - b \lambda + c)^r + a_{r-1} b (\lambda^2 - b \lambda +c)^{r-1} + ..... + a_0 b^r = 0 $ which is a monic polynomial in $\lambda$ of degree $2r$ with coefficients integers in $L$. We define the polynomial $$M(y) =(y^2 - by + c)^r + a_{r-1} b (y^2 - b y +c)^{r-1} + ..... + a_0 b^r$$ simply replacing $\lambda$ by $y$.
Since ${\mathbb Q}(\lambda) = {\mathbb Q}(\gamma)$, $\lambda$ is an algebraic integer in $k{\Gamma}$ and so its minimum polynomial over $L$ is monic with integer coefficients. The same is true of the “other” root $\lambda' = b - \lambda$. So the two factors of the polynomial $M(y)$ have coefficients which are integers in $L$. Hence the fourth condition of Theorem \[2genthm\] is equivalent to
\[condition4’\] The polynomial $M(y)$ factors over $L$ into two monic factors both of degree $r$ and having integral coefficients (in $L$)
A slight simplification of this occurs in the cases where $(p,q)>2$. In these cases, $ {\displaystyle}{a = 8 \cos \frac{\pi}{p} \, \cos \frac{\pi}{q} \, \cos ( \frac{\pi}{p} + \frac{\pi}{q})}$ is an algebraic integer in $L$. If we set $\epsilon =
\lambda - a$, then ${\mathbb Q}(\lambda) = {\mathbb Q}(\epsilon)$ and equation (14) takes the form $$\label{eqn15}
b \gamma = \epsilon ( \epsilon - c)$$ where $b= 16 \cos^2 \pi/p \cos^2 \pi/q$, $c= 4 \sin 2 \pi/p \sin 2 \pi/q$ are integers in $L$. We can use this factorisation in $m(x)$ to obtain the corresponding result to Lemma 2.4.
See (21) later for an example of this condition applied - using an integral basis for $L$ this condition can be rewritten to assert the existence of a solution in rational integers of a nonlinear system of equations. Since our methods are to deduce the possible minimum polynomials of $\gamma$ over $L$, this alternative formulation can be readily computationally implemented. Note that when $p=q$, $$\epsilon = - 4 \cos^2\frac{\pi}{p} \,\, \gamma(h,f)$$ and (\[eqn11\]) is a special case of (\[eqn15\]) and hence of (\[eqn14\]).
Free Products
=============
As we have noted, the first four conditions of Theorem \[2genthm\] on $\gamma$ are sufficient to imply that ${\Gamma}$ is a subgroup of an arithmetic Kleinian group. However many of the groups satisfying these four conditions will be isomorphic to the free product $\langle f \rangle * \langle g \rangle $ and so cannot be arithmetic Kleinian groups as they must fail to have finite co-volume. To eliminate these groups we now seek conditions on $\gamma$ which force a discrete group ${\Gamma}= \langle f , g \rangle $ to be a free product. Moreover, we will extend the methods of [@MM] to enumerate the parameters $\gamma$ which give rise to arithmetic Kleinian groups by obtaining bounds which involve the discriminant of the power basis of ${\mathbb Q}(\gamma)$ over $L$ determined by $\gamma$. For this purpose, and also for other methods to be used in the enumeration, we want to obtain as stringent bounds as possible on $| \gamma |, \Im(\gamma), \Re(\gamma)$. The extreme values of these are attained within a contour $\Omega_{p,q}$ in the $\gamma$-plane. We thus obtain bounds which are simple functions of one variable which, for each pair $(p,q)$ can be (computationally) maximised.
Define $$A = \begin{pmatrix} \cos \pi/p & i \sin \pi/p \\ i \sin \pi/p &
\cos \pi/p \end{pmatrix}, \quad B = \begin{pmatrix}
\cos \pi/q & iw \sin\ \pi/q \\ i w^{-1} \sin \pi/q & \cos \pi/q \end{pmatrix}.$$ Then if ${\Gamma}= \langle f,g\rangle $ is a non-elementary Kleinian group with $o(f)=p,
o(g) = q$, where $p \geq q \geq 3$ then ${\Gamma}$ can be normalised so that $f,g$ are represented by the matrices $A,B$ respectively. The parameter $\gamma$ is related to $w$ by $$\label{eqn16}
\gamma = \sin^2 \frac{\pi}{p} \, \sin^2 \frac{\pi}{q} \; (w - \frac{1}{w})^2.$$ Given $\gamma$, we can further normalise and choose $w$ such that $| w |
\leq 1$ and ${\rm Re}(w) \geq 0$.
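The relation (\[eqn16\]) is easily confirmed numerically; the following sketch (ours, purely illustrative) does so for one sample choice of $p$, $q$ and $w$.

```python
import numpy as np

p, q, w = 8, 6, 0.4 - 0.3j                 # arbitrary sample values with |w| <= 1, Re(w) >= 0
cp, sp = np.cos(np.pi/p), np.sin(np.pi/p)
cq, sq = np.cos(np.pi/q), np.sin(np.pi/q)
A = np.array([[cp, 1j*sp], [1j*sp, cp]])
B = np.array([[cq, 1j*w*sq], [1j*sq/w, cq]])
gamma = np.trace(A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)) - 2
assert abs(gamma - sp**2 * sq**2 * (w - 1/w)**2) < 1e-12
```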
It is convenient here to also consider the cases where ${\Gamma}^* = \langle h,f \rangle$ with $o(h)=2, o(f)=p$ as discussed in Theorem \[2gen\*\] so that in this section we will allow $q$ to be equal to 2.
We recall the [*isometric circles*]{} of a linear fractional transformation $$g(z)=\frac{az+b}{cz+d} \approx \begin{pmatrix} a &b \\ c &
d \end{pmatrix} \in PSL(2,{\mathbb C}), \hskip10pt c\neq 0$$ are the pair of circles $$I(g) = \{z : |cz+d|=1\},\hskip20pt I(g^{-1}) = \{z : |cz-a|=1\}$$ Notice that $I(g)=\{|g'(z)|=1\}$ and $I(g^{-1})=\{|(g^{-1})'(z)|=1\}$ and that $g$ maps the exterior of $I(g)$ to the interior of $I(g^{-1})$.
The Klein combination theorem, (see [@Mas] for this and important generalisations) can be used to establish the following well known fact: If the isometric circles of $g$ lie inside the intersection of the disks bounded by the isometric circles of $f$, then $\langle f,g \rangle \cong \langle f\rangle * \langle g \rangle $. (See the illustrative examples in Diagram 1, where this situation holds in case 1 but not in case 2.)
[**Diagram 1. $p=q=3$ isometric circles**]{};\
1. non-intersecting ($\gamma=-4 + 4 i$) 2. intersecting ($\gamma=-1.5 + 1.75 i$)
This geometric configuration occurs precisely when $$\label{eqn17}
| i w \cot \pi/q + i \cot \pi/p | + \frac{| w |}{\sin \pi/q} \leq \frac{1}{\sin \pi/p}.$$ As $w$ traverses the boundary of the region described by (\[eqn17\]), then $\gamma$ traverses a contour $\Omega_{p,q}$, so that, when $\gamma$ lies outside this, the corresponding group will be a free product. The general shape of such a contour is illustrated by the case exhibited in Diagram 2.
[**Diagram 2.** $\Omega_{8,6}$]{}
More specifically, let $w = r {\rm e}^{i \theta}$, and define $c(p,q) = \cos \pi/p \, \cos \pi/q$, $s(p,q) = \sin \pi/p \, \sin \pi/q$. Then on the boundary of the region defined by (\[eqn17\]), $$\label{rexprn} r^2 + \frac{1}{r^2} = \frac{4(1+c(p,q)\cos \theta)^2}{s(p,q)^2} - 2$$ Since $$\gamma =
s(p,q)^2\,[(r^2 + r^{-2})\cos 2 \theta - 2 +
(r^2 - r^{-2}) i \sin 2 \theta],$$ and we can assume that $|w|<1$ and $\Re(w) \geq0$, we set $\cos \theta = t
\in [0,1]$ and obtain $$\begin{aligned}
\label{Omega}
\Omega_{p,q}(t)= & 4(2t^2-1)(1+t c(p,q))^2-4t^2 s(p,q)^2 \\
& -8t \sqrt{1-t^2}\, (1+t c(p,q)) \sqrt{(1+t c(p,q))^2-s(p,q)^2}\; i \nonumber
\end{aligned}$$ It is clear that the real part of $\gamma$ takes its maximum value for $t=1$ and so $$\label{eqn20}
\Re(\gamma) \leq 4( \cos \pi/p + \cos \pi/q)^2.$$ and for each $(p,q)$ its minimum value can be computed from this formula. Note that, if $\gamma$ is real, then $$\label{eqn21}
-4 \leq \gamma \leq 4(\cos \pi/p + \cos \pi/q)^2.$$ which gives us the estimate we used earlier to handle the case $\gamma\in{\mathbb R}$.
More generally, for $\gamma \in \Omega_{p,q}(t)$, we have $$| \gamma | = 4[(1 + t c(p,q))^2 - t^2 s(p,q)^2].$$ When $c(p,q) \geq s(p,q)$, which occurs in particular when $p,q \geq 6$, $$\label{eqn22}
| \gamma | \leq 4( \cos \pi/p + \cos \pi/q)^2.$$ Also, in the case $(p,2)$, we have $| \gamma | \leq 4$. Finally, note that $$| \gamma + 4 s(p,q)^2 | + | \gamma| = 2 s(p,q)^2 ( r^2 + r^{-2}).$$ From the expression for $r^2+r^{-2}$ at (\[rexprn\]) above, this clearly takes its maximal value when $\cos \theta = 1$. Thus if $x$ is a real number in the interval $[-\beta_1 \beta_2 / 4,0] = [-4 s(p,q)^2, 0]$, then $$\label{eqn23}
| \gamma - x | \leq 4(1 + \cos \pi/p \cos \pi/q)^2.$$
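In practice the maxima over the contour that enter the estimates of the next section (for instance in the constants $K_1$ and $K_2$) are evaluated numerically for each pair $(p,q)$; the following sketch (ours, not the authors' Maple code) indicates one way to do this, here for $(p,q)=(8,6)$.

```python
import numpy as np

def omega(p, q, t):
    # the contour Omega_{p,q}(t) of Section 3, for 0 <= t <= 1
    c = np.cos(np.pi/p) * np.cos(np.pi/q)
    s = np.sin(np.pi/p) * np.sin(np.pi/q)
    re = 4*(2*t**2 - 1)*(1 + t*c)**2 - 4*t**2*s**2
    im = -8*t*np.sqrt(1 - t**2)*(1 + t*c)*np.sqrt((1 + t*c)**2 - s**2)
    return re + 1j*im

p, q = 8, 6
t = np.linspace(0.0, 1.0, 100001)
g = omega(p, q, t)
print("max Re(gamma)  :", g.real.max())      # attained at t = 1, cf. (20)
print("max |gamma|    :", np.abs(g).max())   # also 4(cos(pi/p) + cos(pi/q))^2 here, cf. (22)
print("max |Im(gamma)|:", np.abs(g.imag).max())
```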
The possible values of $(p,q)$
==============================
From Theorem \[2genthm\], we note, first, that $\gamma$ is an algebraic integer, secondly, that ${\mathbb Q}(\gamma)$ has exactly one complex place and thirdly, that ${\mathbb Q}(\gamma)$ must contain $L = {\mathbb Q}(\cos 2 \pi/p,
\cos 2 \pi/q)$. Let $[{\mathbb Q}(\gamma) : L ] = r$. We now make use of these facts, together with the inequalities that $\gamma$ and its conjugates must satisfy given in §2 and §3 to produce a list of possible values for the triple $(p,q,r)$ for which there may exist a $\gamma$-parameter corresponding to an arithmetic Kleinian group which is not obviously free using the criteria from §3. Further, if $(p,q,r)$ does not appear on this list there cannot be any corresponding arithmetic Kleinian groups (see Table 5). The list obtained in Table 5 is produced by refining a basic list in §4.1 using arguments on the norm and discriminant, each stage being implemented by an elementary program in Maple. The finiteness of such a list was established in [@MM] and the starting point here uses the crude estimate obtained in [@MM] that $p,q \leq 120$. In producing our lists, we assume that $p,q \geq 6$ although the methods apply for $p,q \geq 3$.
Norm method
-----------
Let $N$ denote the absolute norm $N : {\mathbb Q}(\gamma) \rightarrow {\mathbb Q}$ and, as before, $L = {\mathbb Q}(\cos 2 \pi/p, \cos 2 \pi/q)$. If $(p,q)>2$, then $L = {\mathbb Q}(\cos 2 \pi/M)$ where $M$ is the least common multiple of $p$ and $q$ and otherwise $L$ is of index 2 in that field. Thus if $\mu = [L : {\mathbb Q}]$, then $$\mu = \left\{ \begin{array}{ll}
\phi(M)/2 & {\rm if~}(p,q)>2 \\
\phi(M)/4 & {\rm if~}(p,q)\, |\, 2.
\end{array}
\right.$$ The field $L$ is totally real and the embeddings $\sigma : L \rightarrow {\mathbb R}$ are defined by $$\sigma( \cos \frac{2 \pi}{p}) = \cos \frac{2 \pi j}{p},
\sigma(\cos \frac{2 \pi}{q}) = \cos \frac{2 \pi j}{q}~{\rm where~}(j,pq)=1.$$ Let us denote these embeddings by $\sigma_1, \sigma_2, \ldots , \sigma_{\mu}$, with $\sigma_1 = {\rm Id}$. Since $\gamma$ is an algebraic integer $| N(\gamma)| \geq 1$ and $N(\gamma) = \gamma \bar{\gamma} \prod_{\tau} \tau(\gamma)$ where $\tau$ runs over the $r \mu - 2 $ real embeddings of ${\mathbb Q}(\gamma)$. If $\tau |_L = \sigma_i$, then by (\[eqn10\]) $$- \frac{\sigma_i(\beta_1 \beta_2)}{4} < \tau(\gamma) < 0$$ and from (\[eqn22\]), $| \gamma| < 4(\cos \pi/p + \cos \pi/q)^2$. Thus we obtain $${\label{Norm1}}
1\leq | N(\gamma)| \leq
16(\cos \pi/p + \cos \pi/q)^4 \, (4 s(p,q)^2)^{-2} \prod_{j=1}^{\mu} \left( \frac{\sigma_j(\beta_1 \beta_2)}{4} \right)^r.$$ Now letting $$\delta_n = \left\{ \begin{array}{ll}
1 & {\rm if~}n \neq p^{\alpha}, \;\; p~{\rm a~prime} \\
p & {\rm if~}n = p^{\alpha}, \;\; p ~{\rm a~prime},
\end{array}
\right.$$ then, (see [@MM]), if $\delta_{n,m} = \delta_n^{2/\phi(n)}\, \delta_m^{2/\phi(m)}$, $$\prod_{j=1}^{\mu} \sigma_j(\beta_1 \beta_2) =
\delta_{p,q}^{\mu}.$$ Thus, taking logs, for a triple $(p,q,r)$ to give rise to a $\gamma$ which represents an arithmetic Kleinian group it must satisfy the inequality $$\label{eqn30}
r \mu \leq 4 \log \left[
\frac{\cos \pi/p + \cos \pi/q}{\sin \pi/p \sin \pi/q} \right] /\log (4/\delta_{p,q})$$ Note that $r \geq 2$ and $6 \leq p,q \leq 120$ so that we can determine the triples for which (\[eqn30\]) holds by obtaining the values of $p$ and $q$ and an upper bound for the related value of $r$. This produces a list of 86 entries shown in Table 2, which, for future reference, we call the [*Norm List*]{}.
p q r p q r p q r p q r
---- ---- --- ---- ---- --- ---- ---- ---- ---- ---- ---
6 6 5 7 6 3 7 7 33 8 6 4
8 7 4 8 8 7 9 6 3 9 7 3
9 8 2 9 9 5 10 6 3 10 7 2
10 8 2 10 10 4 11 6 2 11 7 2
11 8 2 11 11 5 12 6 3 12 7 2
12 8 2 12 9 2 12 10 2 12 12 4
13 7 2 13 13 4 14 6 2 14 7 5
14 14 3 15 6 2 15 10 2 15 15 2
16 6 2 16 8 3 16 16 3 17 17 2
18 6 2 18 8 2 18 9 4 18 18 4
19 19 2 20 6 2 20 10 2 20 20 3
21 7 3 21 21 2 22 11 3 22 22 2
23 23 2 24 6 2 24 8 3 24 12 2
24 24 3 26 13 2 26 26 2 28 7 3
28 14 2 28 28 2 30 6 2 30 10 2
30 15 3 30 30 3 32 8 2 32 16 2
32 32 2 34 17 2 36 9 2 36 12 2
36 18 2 36 36 2 38 19 2 40 40 2
42 7 3 42 14 2 42 21 2 42 42 2
44 11 2 48 8 2 48 16 2 48 48 2
54 54 2 60 30 2 60 60 2 66 11 2
70 7 2 84 7 2
: Norm List
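For illustration, the following sketch (ours; the authors' computations used Maple) carries out this enumeration: for $6 \le q \le p \le 120$ it records the largest $r$ allowed by (\[eqn30\]) whenever that value is at least $2$. Its output should agree with Table 2.

```python
from math import cos, sin, log, pi, gcd, lcm

def prime_factors(n):
    fs, m, d = set(), n, 2
    while d * d <= m:
        while m % d == 0:
            fs.add(d)
            m //= d
        d += 1
    if m > 1:
        fs.add(m)
    return fs

def phi(n):                                   # Euler's totient function
    r = n
    for p_ in prime_factors(n):
        r -= r // p_
    return r

def delta(n):                                 # delta_n: the prime ell if n = ell^a, and 1 otherwise
    fs = prime_factors(n)
    return min(fs) if len(fs) == 1 else 1

def mu(p, q):                                 # mu = [L:Q] for L = Q(cos 2pi/p, cos 2pi/q)
    return phi(lcm(p, q)) // (2 if gcd(p, q) > 2 else 4)

norm_list = []
for p in range(6, 121):
    for q in range(6, p + 1):
        d = delta(p)**(2.0/phi(p)) * delta(q)**(2.0/phi(q))          # delta_{p,q}
        bound = 4*log((cos(pi/p) + cos(pi/q))/(sin(pi/p)*sin(pi/q)))/log(4/d)
        r_max = int(bound / mu(p, q))
        if r_max >= 2:
            norm_list.append((p, q, r_max))

print(len(norm_list), norm_list[:4])   # the Norm List has 86 entries, starting (6,6,5), (7,6,3), ...
```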
Discriminant Method $(r \geq 3)$
--------------------------------
This is a refinement of the method used in [@MM], and we apply it when $r \geq 3$.
If $\Delta$ is the discriminant of the power basis $1, \gamma, \gamma^2, \ldots , \gamma^{r-1}$ over $L$ and $\delta_{{\mathbb Q}(\gamma) \mid L}$, the relative discriminant, then $$| N_{L \mid {\mathbb Q}}(\Delta) | \geq | N_{L \mid {\mathbb Q}}(\delta_{{\mathbb Q}(\gamma) \mid L}) |.$$ Choose embeddings $\tau_1, \tau_2, \ldots , \tau_{\mu}$ of ${\mathbb Q}(\gamma)$ into ${\mathbb C}$ such that $\tau_i|_L = \sigma_i$. Then $N_{L \mid {\mathbb Q}}(\Delta)
= \prod_{i=1}^{\mu} \sigma_i(\Delta)$ and $\sigma_i(\Delta)$ is the discriminant of the power basis $1, \tau_i(\gamma), \tau_i(\gamma^2), \ldots
, \tau_i(\gamma^{r-1})$ of $\tau_i({\mathbb Q}(\gamma))$ over $L$. As in [@MM], we use Schur’s bound [@S] which gives that, if $-1 \leq x_1 < x_2 < \cdots < x_r \leq 1$ with $r \geq 3$ then $$\label{eqn31}
\prod_{1 \leq i < j \leq r} (x_i - x_j)^2 \leq M_r = \frac{2^2\,3^3\, \ldots r^r\, 2^2 \, 3^3 \, \ldots (r-2)^{r-2}}{3^3\, 5^5 \, \dots (2r-3)^{2r-3}}.$$ Thus, for $i \geq 2$ we have $$\label{discr1}
| \sigma_i(\Delta) | \leq \left( \frac{\sigma_i(\beta_1 \beta_2)}{8} \right)^{r(r-1)} \, M_r.$$ In the case where $i=1$, $\gamma$ has $r-2$ real conjugates over $L$ denoted by $x_3, x_4, \ldots , x_r$ which, by (\[eqn10\]), all lie in the interval $(-\beta_1 \beta_2
/4, 0)$. Thus $$\label{discr2}
| \Delta | \leq | \gamma - \bar{\gamma} |^2 \left(\prod_{i=3}^r (\gamma - x_i)^2(\bar{\gamma}-x_i)^2\right) \left( \frac{\beta_1 \beta_2}{8}\right)^{(r-2)(r-3)} M_{r-2}.$$
For $\gamma$ on the contour $\Omega_{p,q}$ we have (see (23)) $$| \gamma - x_i | = | \bar{\gamma} - x_i | < 4(1+ \cos \pi/p \cos \pi/q)^2.$$
We thus define $$K_1(p,q,r) = 4 M_{r-2} [4(1+c(p,q))^2]^{4(r-2)} (2 s(p,q)^2)^{(r-2)(r-3)}
{\rm Max}_{0 \leq t \leq 1}| \Im(\Omega_{p,q}(t)) |^2$$ which can be determined using (\[Omega\]).
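A sketch (ours) of how the Schur bound $M_r$ of (\[eqn31\]) and the constant $K_1(p,q,r)$ may be evaluated: the maximum of $|\Im(\Omega_{p,q}(t))|$ is approximated on a grid, and the displayed formula for $M_r$ is used, with the degenerate values taken to be $M_1=1$ and $M_2=4$.

```python
from math import cos, sin, sqrt, pi, prod

def schur_M(r):
    # M_r as in (31); the empty products give M_1 = 1 and M_2 = 4
    num = prod(k**k for k in range(2, r + 1)) * prod(k**k for k in range(2, r - 1))
    den = prod(k**k for k in range(3, 2*r - 2, 2))
    return num / den

def K1(p, q, r, n=100001):
    c, s = cos(pi/p)*cos(pi/q), sin(pi/p)*sin(pi/q)

    def abs_im(t):
        # |Im Omega_{p,q}(t)| = 8t sqrt(1-t^2) (1+tc) sqrt((1+tc)^2 - s^2)
        return 8*t*sqrt(1 - t*t)*(1 + t*c)*sqrt((1 + t*c)**2 - s*s)

    im_max = max(abs_im(k/(n - 1)) for k in range(n))
    return 4*schur_M(r - 2) * (4*(1 + c)**2)**(4*(r - 2)) \
             * (2*s*s)**((r - 2)*(r - 3)) * im_max**2
```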
From (\[discr1\]) and (\[discr2\]) we obtain an upper bound for $| N_{L \mid {\mathbb Q}}(\delta_{{\mathbb Q}(\gamma) \mid L})|$. This is bounded below by 1 but since $|N_{L \mid {\mathbb Q}}(\delta_{{\mathbb Q}(\gamma) \mid L})| = |\Delta_{{\mathbb Q}(\gamma)}|/\Delta_L^r,$ this lower bound may be improved. Since ${\mathbb Q}(\gamma)$ is a field of degree $r \mu$ with exactly one complex place, for $n \geq 2$, let $D_n$ denote the minimum absolute value of the discriminant of any field of degree $n$ over ${\mathbb Q}$ with exactly one complex place. For small values of $n$ the number $D_n$ has been widely investigated ([@CDO; @Di; @DO]) and lower bounds for $D_n$ for all $n$ can be computed ([@Mull; @Od; @Rodgers; @Stark]). In [@OdlyzkoU], the bound is given in the form $D_n > A^{n-2} B^2 \exp(-E)$ for varying values of $A,B$ and $E$. Choosing, by experimentation, suitable values from this table we obtain the bounds shown in Table 3.
------------ ------------------------------ ------------ ------------------------------
Degree $n$ Bound Degree $n$ Bound
2 3 3 27
4 275 5 4511
6 92779 7 2306599
8 68856875\* 9 $0.11063894 \times 10^{10} $
10 $0.31503776 \times 10^{11}$ 11 $0.90315026 \times 10^{12}$
12 $0.25891511 \times 10^{14}$ 13 $0.74225785 \times 10^{15}$
14 $0.21279048 \times 10^{17}$ 15 $0.61002775 \times 10^{18}$
16 $0.17488275 \times 10^{20}$ 17 $0.50135388 \times 10^{21}$
18 $0.14372813 \times 10^{23} $ 19 $0.41203981 \times 10^{24}$
20 $0.11812357 \times 10^{26}$
------------ ------------------------------ ------------ ------------------------------
: Discriminant Bounds
\* [*The exact bound in degree 8 is only known for imprimitive fields [@CDO]. This suffices here as the only case not covered here is $p=q=6$ where, by the Norm List, the degree does not exceed 5.*]{}
For any integer $M \geq 2$, let $D(M) = M^{\phi(M)/2}/(\prod_{\pi} \pi^{\phi(M)/(2 \pi - 2)})$ where the product is over all primes which divide $M$. Then $$\label{eqn34}
\Delta_{{\mathbb Q}(\cos 2 \pi/M)} = \left\{ \begin{array}{ll}
D(M) & {\rm if~}M \neq m^{\alpha}, 2 m^{\alpha}, m~{\rm a~prime} \\
D(M)/\sqrt{m} & {\rm if~}M=m^{\alpha}, 2 m^{\alpha},m~{\rm an~odd~prime} \\
D(M)/2 & {\rm if~}M=2^{\alpha}, \alpha \geq 2.
\end{array}
\right.$$ If $(p,q) > 2$, $L = {\mathbb Q}(\cos 2 \pi/M)$ where $M$ is the least common multiple of $p$ and $q$. If $(p,q) \mid 2$, then $\Delta_L = \Delta_{{\mathbb Q}(\cos 2 \pi/p)}^{\phi(q)/2} \, \Delta_{{\mathbb Q}(\cos 2 \pi/q)}^{\phi(p)/2}$.
Thus, from (\[discr1\]) and (\[discr2\]), for all $(p,q,r)$ with $r \geq 3$, the following inequality must hold $${\label{discr}}
K_1(p,q,r) (2 s(p,q)^2)^{-r(r-1)} \left( \delta_{p,q}/8 \right)^{\mu r(r-1)}
M_r^{\mu - 1} \geq {\rm Max} \{ 1, D_{r \mu}/\Delta_L^r \}.$$ Extracting the cases with $r \geq 3$ from the Norm List, and applying this inequality first with a lower bound of 1, results in triples $(p,q,r)$ where the total degree $r \mu$ is no greater than 20. On these we can apply (\[discr\]) with values of $D_n$ in Table 3. The result is the so-called [ *Discriminant List*]{} given in Table 4.
p q r p q r p q r
---- ---- ------- ---- ---- ------- ---- ---- -----
6 6 3,4,5 7 7 3,4,5 8 6 3,4
8 8 3,4,5 9 9 3,4 10 6 3
10 10 3,4 11 11 3 12 6 3
12 12 3,4 14 7 3,4 14 14 3
16 8 3 16 16 3 18 9 3,4
18 18 3,4 20 20 3 24 8 3
24 24 3 30 15 3 30 30 3
: Discriminant List
Balancing Method
-----------------
Once again this is a refinement of an argument used in [@MM] and here we extend the argument from the case $r=2$ to all $r$. Note that the upper bound for $| N(\gamma)|$ used at (\[Norm1\]) is attained when the real conjugates of $\gamma$ cluster at one end of the relevant interval, in which case the discriminant of the basis using $\gamma$ will be small. This argument aims to balance these two effects by incorporating both the norm and the discriminant.
Let the minimum polynomial of $\gamma$ over $L$ have roots $x_1 (=\gamma)$, $x_2 (=\bar{\gamma})$, $x_3, \ldots ,x_r$. Recall that, for each $\tau_i : {\mathbb Q}(\gamma) \rightarrow {\mathbb R}$ such that $\tau_i|_L = \sigma_i$, we have $\tau_i(\gamma) \in (-\sigma_i(\beta_1 \beta_2/4), 0)$. For $i = 2, \ldots,
\mu$, let $$\tau_i(x_j) = t_i^{(j)}(-\sigma_i(\beta_1 \beta_2 / 4)), \, j= 1,2 \ldots , r$$ so that $0 < t_i^{(j)} < 1$. $$N_{L \mid {\mathbb Q}}({\rm discr}\{1,\gamma, \gamma^2, \ldots, \gamma^{r-1}\}) =
(\gamma - \bar{\gamma})^2 \prod_{i=3}^r | \gamma - x_i |^4 \prod_{3 \leq j<k\leq r}(x_j-x_k)^2$$ $$~~~~~~~~~~~~~~~~~~ \prod_{i=2}^{\mu}\left( \sigma_i(\frac{\beta_1 \beta_2}{4})^{r(r-1)} \prod_{1 \leq j < k \leq r}(t_i^{(j)} - t_i^{(k)})^2 \right)$$ where here, and later, all empty products have the value 1. Define $$R_{p,q} = \prod_{i=2}^{\mu} \left(\sigma_i(\frac{\beta_1 \beta_2}{4})^2 \right) = \left(\frac{\delta_{p,q}}{4}\right)^{2 \mu} /(4 s(p,q)^2)^2.$$ Thus $${\label{Bal1}}
\prod_{i=2}^{\mu} \prod_{1 \leq j < k \leq r}| t_i^{(j)} - t_i^{(k)} | \geq
\frac{{\rm Max} \{1, (D_{r \mu}/\Delta_L^r)^{1/2}\}}{|\gamma - \bar{\gamma}| \prod_{i=3}^r |\gamma - x_i|^2 \prod_{3\leq j<k\leq r}|x_j-x_k| R_{p,q}^{r(r-1)/4}}.$$ On the other hand $$N_{L \mid {\mathbb Q}}(N_{{\mathbb Q}(\gamma) \mid L}(\gamma))= |\gamma|^2 \prod_{i=3}^r x_i
\prod_{i=2}^{\mu}\left(-\sigma_i(\beta_1 \beta_2/4)^r \prod_{j=1}^r t_i^{(j)} \right)$$ so that $${\label{Bal2}}
\prod_{i=2}^{\mu} \prod_{j=1}^r |t_i^{(j)}| \geq \frac{1}{|\gamma|^2 \prod_{j=3}^r|x_j| R_{p,q}^{r/2}}.$$ Let us define $t_i^{(0)} = 0$ for $i = 2, \ldots , \mu$ so that the product of (\[Bal1\]) and (\[Bal2\]) yields $$\prod_{i=2}^{\mu} \prod_{0 \leq j<k \leq r}|t_i^{(j)} - t_i^{(k)}| \geq
\frac{{\rm Max} \{1, (D_{r \mu}/\Delta_L^r)^{1/2}\}}{|\gamma - \bar{\gamma}| |\gamma|^2 \prod_{j=3}^r|\gamma- x_j|^2 \prod_{j=3}^r|x_j| \prod_{3 \leq j < k \leq r}|x_j-x_k| R_{p,q}^{r(r+1)/4}}.$$ Note that ${\displaystyle}{\prod_{0 \leq j < k \leq r}|t_i^{(j)}-t_i^{(k)}| \leq \left( M_{r+1}/2^{r(r+1)} \right)^{1/2}}$. In the same way, for $r>2$, $$\prod_{j=3}^r|x_j| \prod_{3 \leq j < k \leq r}|x_j - x_k| \leq \left(M_{r-1} (\frac{\beta_1 \beta_2}{8})^{(r-1)(r-2)}\right)^{1/2}.$$ Also with $\gamma \in \Omega_{p,q}$, $|\gamma-\bar{\gamma}| |\gamma|^2 \prod_{j=3}^r|\gamma - x_j|^2$ will be a maximum when all $x_j$ lie at the left hand extremity of the interval $(-\beta_1 \beta_2/4, 0)$. So define $$K_2(p,q,r) = M_{r-1}^{1/2} (2 s(p,q)^2)^{(r-1)(r-2)/2} \times$$ $$~~~~~~~~~~~~~~~~~~~~~~{\rm Max}_{0 \leq t \leq 1} |2 \Im(\Omega_{p,q}(t)) \Omega_{p,q}(t)^2 (\Omega_{p,q}(t) + 4 s(p,q)^2)^{2(r-2)}|$$ when $r>2$ and $K_2(p,q,2) = {\rm Max}_{0 \leq t \leq 1}| 2 \Im(\Omega_{p,q}(t)) \Omega_{p,q}(t)^2 |$. Thus all our triples $(p,q,r)$ must satisfy $$K_2(p,q,r) R_{p,q}^{r(r+1)/4} \left(\frac{M_{r+1}}{2^{r(r+1)}}\right)^{(\mu-1)/2} \geq {\rm Max} \{ 1,\left( \frac{D_{r \mu}}{\Delta_L^r} \right)^{1/2}\}.$$ Apply this to the Discriminant List for $r \geq 3$ and to the pairs $(p,q)$ appearing in the Norm List for $r=2$. In the latter case, if we apply the lower bound of 1 initially, the remaining fields all have total degree not exceeding 20 and we can then utilise Table 3. The end result is shown in Table 5, and termed the [*Aspiring List*]{}.
p q r p q r p q r
---- ---- --------- ---- ---- --------- ---- ---- -------
6 6 2,3,4,5 7 6 2 7 7 2,3,4
8 6 2,3 8 8 2,3,4,5 9 6 2
9 9 2,3 10 6 2,3 10 10 2,3
11 11 2 12 6 2,3 12 12 2,3,4
13 13 2 14 7 2,3 14 14 2
15 15 2 16 8 2 16 16 2
18 6 2 18 9 2,3 18 18 2,3
20 10 2 20 20 2 22 11 2
24 8 2 24 12 2 24 24 2
28 7 2 30 10 2 30 15 2
30 30 2,3 36 36 2 42 7 2
42 42 2
: Aspiring List
Using the field $L = {\mathbb Q}(\cos 2 \pi/p, \cos 2 \pi/q)$
=============================================================
From what we have found so far, the Aspiring List, Table 5, has the following property:
[ *If $\gamma \in {\mathbb C}\setminus {\mathbb R}$ is a parameter corresponding to an arithmetic Kleinian group\
${\Gamma}= \langle f,g \rangle $ with $f$ of order $p$ and $g$ of order $q$ and $[{\mathbb Q}(\gamma) : L] = r$, then $(p,q,r)$ must appear on the Aspiring List.*]{}
Furthermore, $\gamma$ will be an algebraic integer which satisfies an irreducible polynomial $$\label{eqn35}
x^r + c_{r-1}x^{r-1} + \cdots + c_0 = 0 \quad c_j \in R_L.$$ The coefficients $c_j$ are symmetric polynomials in $\gamma, \bar{\gamma}$ and their real conjugates over $L$. Also the images $\sigma_i(c_j)$ for the real embeddings $\sigma_i : L \rightarrow {\mathbb R}$ are symmetric polynomials in the real conjugates $\tau(\gamma)$ where $\tau : {\mathbb Q}(\gamma) \rightarrow {\mathbb R}$ with $\tau |_L = \sigma_i$, $i \geq 2$.
Thus the bounds on $| \gamma |^2$ and $\Re(\gamma)$ obtained from (\[Omega\]) §3 using the freeness criteria and the bounds on the real conjugates $\tau(\gamma)$ in §2 using the ramification criteria will place bounds on the algebraic integers $c_j$ and $\sigma_i(c_j)$. For each $(p,q)$ we can readily obtain an integral basis for $L$ over ${\mathbb Q}$. The bounds on $\gamma$ and its conjugates then translate into bounds on the rational integer coefficients when each $c_j$ is expressed in terms of this integral basis. Once a finite number of possibilities for each coefficient $c_j$ individually is obtained, the roots of each of the resulting finite number of polynomials at (\[eqn35\]) so obtained, and their conjugates, can be further examined to see if their roots satisfy the required bounds. We explain the basic methods used to carry out this computational process in this section. This basic method is carried out as a first step by a simple Maple program on the triples in the Aspiring List.
These remarks above actually apply to any algebraic integer $\delta$ in ${\mathbb Q}(\gamma)$ such that ${\mathbb Q}(\delta) = {\mathbb Q}(\gamma)$ and for which one can obtain bounds on $\delta$ and its conjugates. In particular, if $v$ is a unit in $L$, we can take $\delta = \gamma/v$ and suitable choices of $v$ lead to improved bounds on $\delta$.
For the basic method which we now describe, we assume first that $\mu \geq 3$, the cases where $\mu \leq 2$ being considerably easier to handle. For all $(p,q,r)$ on the Aspiring List, $L$ has an integral basis of the form $\{ 1, u, u^2, \ldots , u^{\mu - 1} \}$ where $u = 2 \cos 2 \pi/M$ for some integer $M$. Let $\sigma_1 = {\rm Id}, \sigma_2, \ldots , \sigma_{\mu}$ denote the Galois automorphisms of $L$ over ${\mathbb Q}$ with $\sigma_i (2 \cos 2 \pi/M)
= 2 \cos 2 \pi y_i/M$ where $1 \leq y_i < M/2$ and $(y_i,M)=1$.
Let $\delta$ be an algebraic integer as described above which satisfies (\[eqn35\]). Let $$c_j = m_0 + m_1 u + m_2 u^2 + \cdots + m_{\mu-1} u^{\mu-1}$$ where $m_k \in {\mathbb Z}$. Let $A$ be the $\mu \times \mu$ matrix $[\sigma_i(u^{j-1})]$, $1 \leq i,j \leq \mu$. Then $$\label{eqn36} A (\tilde{m}) = \tilde{c_j}$$ where $\tilde{m} = (m_0, m_1, \ldots , m_{\mu-1})^t$ and $\tilde{c_j} =
(c_j, \sigma_2(c_j), \ldots , \sigma_{\mu}(c_j))^t$. Thus $$\label{eqn37}
\tilde{m} = A^{-1} \tilde{c_j}$$ where we can numerically determine the entries of $A$ and $A^{-1}$. The bounds on $| \gamma |^2, \Re(\gamma)$ obtained by maximising them on $\Omega_{p,q}$ using (\[Omega\]) and the bounds on the real conjugates at (\[eqn10\]) give bounds on $\delta = \gamma/v, v \in R_L^*$ and its conjugates and hence on each entry of the matrix $\tilde{c_j}$. Thus there exist $\mu \times 1$ matrices $I_j$ and $S_j$ such that $I_j \leq \tilde{c_j} \leq S_j$ with the obvious notation. In the cases where $p$ is not a prime power, $4 \sin^2 \pi/p = -\beta_1$ is a unit and in these cases it is expedient to take $\delta = \gamma/(-\beta_1)$ or $= \gamma/(-\beta_2)$ if $q$ is also not a prime power.
[**Example 5.1.**]{} $(p,q,r) = (42,42,2)$. In this case with $\delta = \gamma/(-\beta_1)$, $c_0 = | \gamma |^2/(16 \sin^4 \pi/42)$ and $0 < \sigma_i(c_0) <
\sigma_i(\beta_1 \beta_2 / 4 \beta_1)^2 = \sigma_i(\sin^2 \pi/42)^2$. Thus $I_0 < \tilde{c_0} < S_0$ with $I_0 = 0$, and the $i$-th entry $s_i$ of $S_0$ is $\sigma_i(\sin^2 \pi/42)^2$ for $i=2,3, \ldots ,6$ and $s_1 = 16(\cos^2 \pi/42/
\sin^2 \pi/42)^2$.
[**Remark.**]{} From this example, a common feature of many examples will be noted - that all entries of $S_0$ except the first are small. This is a consequence of our choice of $v$ and we will explain below how to exploit this.
Let us return to the general case as at (\[eqn37\]). We can obtain upper and lower estimates on $\tilde{m}$ as follows: Write $A^{-1} = A_+^{-1} + A_-^{-1}$ where $A_+^{-1}, A_-^{-1}$ are $\mu \times \mu$ matrices with all entries in $A_+^{-1}$ being $\geq 0$ and those in $A_-^{-1}$ being $\leq 0$. We thus obtain $$\label{eqn38}
A_+^{-1} I_j + A_-^{-1} S_j \leq \tilde{m} \leq A_+^{-1} S_j + A_-^{-1} I_j.$$ This then gives a finite number of possibilities for $\tilde{m}$. We refer to this as a search space and from these inequalities, its size can be readily measured. In general, the search space described by (\[eqn38\]) can be extremely large. In Example 5.1 above, for example, it is of the order of $1.5 \times 10^{25}$. In such cases, we extend this technique to exploit the fact that, in many cases, all the entries of $I_j$ and $S_j$ except the first are small.
From (\[eqn38\]) determine the possible values of $m_0$, the first entry of $\tilde{m}$ and the constant term in the expression of $c_j$ in terms of the integral basis $1, u, u^2, \ldots, u^{\mu -1}$. For each $m_0$ we have $$\label{eqn39}
m_1 u + m_2 u^2 + \cdots + m_{\mu - 1} u^{\mu - 1} = c_j - m_0$$ and the corresponding $\mu - 1$ equations under the embeddings $\sigma_i,
i = 2, \ldots, \mu$. Now if $B$ denotes the $(\mu - 1) \times (\mu - 1)$ matrix obtained from $A$ by deleting the first row and first column and if $\tilde{m}', \tilde{c_j}'$ denote the $(\mu - 1) \times 1$ matrices obtained by removing the first entries of $\tilde{m}, \tilde{c_j}$, we can write the $\mu-1$ equations obtained from (\[eqn39\]) for the embeddings $\sigma_2, \ldots, \sigma_{\mu}$, in the form $$B \tilde{m}' = \tilde{c_j}' - m_0 \tilde{1}$$ where $\tilde{1}$ is the $(\mu - 1) \times 1$ matrix all of whose entries are 1. This then yields $\tilde{m}' = B^{-1} \tilde{c_j}' - m_0 B^{-1} \tilde{1}$. For each $m_0$ the term $m_0 B^{-1} \tilde{1}$ is fixed. By splitting $B^{-1}$ into its positive and negative parts as we did for $A^{-1}$ and using the truncated limits ${I_j}',{S_j}'$ for $\tilde{c_j}'$ we obtain bounds on $\tilde{m}'$ given by $$\label{eqn40}
B_+^{-1} {I_j}' + B_-^{-1} {S_j}' - m_0 B^{-1} \tilde{1} \leq \tilde{m}' \leq B_+^{-1} {S_j}' + B_-^{-1} {I_j}' - m_0 B^{-1} \tilde{1}.$$ If the entries of ${I_j}', {S_j}'$ are small, this yields a small search space for $\tilde{m'}$ whose size is essentially independent of $m_0$. In Example 5.1, for example, there are 6166 possibilities for $m_0$ and 576 for $\tilde{m}'$ so that the search space is now of the order of $3.5 \times 10^6$, a significant reduction.
For each resulting $\tilde{m}$ we check the validity of $I_j \leq A \tilde{m} \leq S_j$ and list the resulting $\tilde{m}$ and hence candidate $c_j$. Again in Example 5.1, there are three such integer vectors $\tilde{m}$ and hence only three candidates for $c_0$.
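Schematically, this enumeration can be organised as in the following Python sketch (an illustration only, not the Maple program actually used; the matrix $A$ and the bound vectors are assumed given numerically, and the function name and data layout are ours).

```python
import itertools
import math
import numpy as np

def basic_method(A, I, S):
    """Enumerate integer vectors m with I <= A m <= S componentwise, using
    the two-stage reduction described above.  A is the mu x mu matrix
    [sigma_i(u^{j-1})] evaluated numerically; I, S are the bound vectors
    (numpy arrays).  Schematic only: in practice high (or exact) precision
    is needed near the boundaries."""
    Ainv = np.linalg.inv(A)
    Ap, Am = np.maximum(Ainv, 0), np.minimum(Ainv, 0)   # positive/negative parts
    lo, hi = Ap @ I + Am @ S, Ap @ S + Am @ I           # first-stage bounds on m
    m0_vals = range(math.ceil(lo[0]), math.floor(hi[0]) + 1)

    B = A[1:, 1:]                                       # delete first row and column
    Binv = np.linalg.inv(B)
    Bp, Bm = np.maximum(Binv, 0), np.minimum(Binv, 0)
    ones = np.ones(A.shape[0] - 1)
    base_lo = Bp @ I[1:] + Bm @ S[1:]
    base_hi = Bp @ S[1:] + Bm @ I[1:]

    found = []
    for m0 in m0_vals:
        shift = m0 * (Binv @ ones)
        lo1, hi1 = base_lo - shift, base_hi - shift     # second-stage bounds on m'
        ranges = [range(math.ceil(a), math.floor(b) + 1) for a, b in zip(lo1, hi1)]
        for mp in itertools.product(*ranges):
            m = np.array([m0, *mp], dtype=float)
            c = A @ m
            if np.all(I <= c) and np.all(c <= S):       # final check of the candidate
                found.append(m.astype(int))
    return found
```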
In the cases where $\mu=2$, we dispense with the use of the matrix $A$ (and hence $B$). For in that case, all integers in $R_L$ have the form $(a+b \sqrt{d})/2$ where $a,b \in {\mathbb Z}$ with $a \equiv b ({\rm mod}~2)$ and $a \equiv b \equiv 0 ({\rm mod}~2)$ if $d \not\equiv 1 ({\rm mod}~4)$. Thus if $c_j = (a_j + b_j \sqrt{d})/2$, the upper and lower bounds on $c_j$ and $\sigma(c_j)$ respectively, for $\sigma$ the non-identity embedding, can be expressed as $$\ell_1 < \frac{a_j+b_j \sqrt{d}}{2} < u_1, \quad \ell_2 < \frac{a_j - b_j \sqrt{d}}{2} < u_2.$$ Thus $a_j$ must be an integer between $\ell_1 + \ell_2$ and $u_1 + u_2$ and, for each such $a_j$, $b_j$ lies between $(2 \ell_1 - a_j)/\sqrt{d}$ and $(2 u_1 - a_j)/\sqrt{d}$. Provided the bounds are reasonable, it is a simple matter to find all the integers satisfying these inequalities.
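For $\mu=2$ the corresponding enumeration is elementary; a minimal sketch, assuming the bounds $\ell_1,u_1,\ell_2,u_2$ and the squarefree integer $d$ are supplied:

```python
import math

def quadratic_integers(l1, u1, l2, u2, d):
    """All (a, b) with (a + b sqrt(d))/2 in R_L, l1 < (a + b sqrt(d))/2 < u1
    and l2 < (a - b sqrt(d))/2 < u2, for L = Q(sqrt(d)), d squarefree."""
    sols = []
    for a in range(math.floor(l1 + l2) + 1, math.ceil(u1 + u2)):
        b_lo = (2 * l1 - a) / math.sqrt(d)
        b_hi = (2 * u1 - a) / math.sqrt(d)
        for b in range(math.floor(b_lo) + 1, math.ceil(b_hi)):
            if (a - b) % 2:
                continue                       # need a = b (mod 2)
            if d % 4 != 1 and (a % 2 or b % 2):
                continue                       # both even when d is not 1 (mod 4)
            if l2 < (a - b * math.sqrt(d)) / 2 < u2:
                sols.append((a, b))
    return sols
```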
These methods described above for enumerating and listing candidate values of the coefficients $c_j$ in either the cases where $\mu \geq 3$ or $\mu = 2$ will be referred to as the [*Basic Method*]{}.
In the (many) cases where $p=q$, we noted in §2 that any non-elementary Kleinian group generated by $f,g$ where $o(f)=o(g)=p$ is a subgroup of index at most 2 in a non-elementary Kleinian group generated by $f$ and an element $h$ of order 2. Thus, in these cases, by Theorem 2.3, instead of trying to determine $\gamma = \gamma(f,g)$, we can search for possible values of $\gamma_1 = \gamma(h,f)$. For a real embedding $\tau : {\mathbb Q}(\gamma)
\rightarrow {\mathbb R}$ with $\tau_L = \sigma$ we have, by (\[eqn13\]), $-\sigma(4
\sin^2 \pi/p) < \tau(\gamma_1) < 0$. Also from §3, $| \gamma_1 | \leq 4$. Furthermore, by (\[eqn11\]), $\gamma_2 = \beta_1 - \gamma_1$ also corresponds to a group generated by an element of order 2 and an element of order $p$. Thus we can assume that the $\gamma_1$-space is symmetric about $\Re(\gamma_1)
= \beta_1/2$ and so $$- 4 < \Re(\gamma_1) < -2 \sin^2 \pi/p.$$ We can thus apply the same strategy as in the [*Basic Method*]{} to determine the coefficients $c_j$ of the polynomial satisfied by $\gamma_1$ or $\delta_1
= \gamma_1/v$ for a suitable unit $v \in R_L$. We refer to this also as a [*Basic Method*]{}.
Applying the [*Basic Methods*]{} to triples on the Aspiring List yields candidate values for the coefficients $c_j$ of the polynomials $p(x)$ satisfied by some $\delta$ where ${\mathbb Q}(\delta) = {\mathbb Q}(\gamma)$. In some cases the bounds are tight enough that there are no candidate values for one of the coefficients. We list these below in Table 6. In this Table and subsequently, we will use the notation $\gamma(p,q)$ for $\gamma(f,g)$ where $o(f)=p, o(g)=q$ and also $\gamma(2,p)$ for $\gamma(h,f)$ where $o(h)=2$. Generally, the search spaces are small in the cases of coefficients $c_0$ and $c_{r-1}$ as they are, up to sign, the product and sum of the roots. Thus degree 2, considered in §6 below, is reasonably straightforward. For the other coefficients, additional methods may be required to reduce the size of the search space to manageable proportions. These will be discussed in §8 to 9 below.
Triple $\delta$ Outcome
----------- ------------------------------------ ---------------------
  (28,7,2)    $\gamma(28,7)/4 \sin^2 \pi/28$       No values of $c_0$
(22,11,2) $\gamma(22,11)$ No values of $c_0$
(16,8,2) $\gamma(16,8)/(1+2 \cos 6 \pi/16)$ No values of $c_1$.
Degree 2
========
Here we consider the cases where $r = 2$ so that $\delta$ satisfies $p(x) = x^2 + c_1 x + c_0$. From the Basic Methods we have obtained candidate values for $c_1$ and $c_0$. The polynomial $p(x)$ will define a field with one complex place if and only if $c_1^2 - 4 c_0 <0$ and $\sigma_i(c_1^2-4c_0)>0$ for $i = 2,3, \ldots, \mu$. Furthermore, for $i \geq 2$ both roots of $p^{\sigma_i}(x)=x^2 + \sigma_i(c_1)x + \sigma_i(c_0)
= 0$ must lie in an interval $(-\ell_i, 0)$ where $\ell_i>0$ is the bound obtained using (\[eqn10\]) or (\[eqn13\]) for the particular choice of $\delta$. By the Basic Method, $0 \leq \sigma_i(c_1) < 2 \ell_i$ and $0 < \sigma_i(c_0) < \ell_i^2$ for $i \geq 2$. Thus the condition on the location of these real roots is equivalent to requiring that $\ell_i^2 - \sigma_i(c_1) \ell_i + \sigma_i(c_0) > 0$. Thus all these conditions can be checked directly on the candidate coefficients $c_1, c_0$. This will be referred to as [*polynomial reduction*]{}.
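These checks are straightforward to automate; the following sketch (illustrative only, with our own naming) takes the images of a candidate pair $(c_1,c_0)$ at the real places of $L$, identity place first, together with the root bounds $\ell_i$.

```python
def degree2_reduction(c1_im, c0_im, ell):
    """c1_im[i], c0_im[i]: the images sigma_i(c1), sigma_i(c0) at the real
    places of L, with i = 0 the identity place; ell[i]: the root bound at
    place i (ell[0] unused).  Returns True if the candidate pair survives
    polynomial reduction."""
    if c1_im[0] ** 2 - 4 * c0_im[0] >= 0:       # non-real roots at the identity place
        return False
    for i in range(1, len(c1_im)):
        s1, s0 = c1_im[i], c0_im[i]
        if s1 * s1 - 4 * s0 <= 0:               # real roots at every other place
            return False
        if ell[i] ** 2 - s1 * ell[i] + s0 <= 0: # both real roots inside (-ell_i, 0)
            return False
    return True
```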
(1.) $(p,q,r) = (42,42,2)$. As in Example 5.1, take $\delta =
\gamma(42,42)/4 \sin^2 \pi/42$. The Basic Method throws up two candidates for $c_1$ and three for $c_0$. None of the 6 resulting polynomials satisfy all the inequalities above and so there are no arithmetic Kleinian groups corresponding to the triple $(42,42,2)$.
(2.) $(p,q,r) = (24,24,2)$. Taking $\delta = \gamma(2,24)$ the Basic Method yields 74 candidates for $c_0$ and 20 for $c_1$. Then polynomial reduction reduces this to two polynomials.
(3.) $(p,q,r)= (12,6,2)$. With $\delta = \gamma(12,6)$ we obtain 45 candidates for $c_0$ and 19 for $c_1$ and polynomial reduction reduces this to a total of 45 polynomials.
The remaining polynomials can then be computationally solved and the complex roots checked to see if they give rise to values of $\gamma(p,q)$ which lie inside the contour $\Omega_{p,q}$. All the polynomials which are left at this stage correspond to a $\gamma$ which satisfies conditions 1, 2 and 3 of Theorems 2.2 or 2.3. If $p=q$ and the deduction is carried out using $\gamma(2,p)$, then the resulting $\gamma(p,p)= \gamma(2,p)(\gamma(2,p) + 4 \sin^2 \pi/p)$ corresponds to a subgroup of an arithmetic Kleinian group. It can turn out that the resulting $\gamma(p,p)$ is real, and such cases, as noted in §2, are completely understood.
$(p,q,r)=(24,24,2)$. The two polynomials (see above) both yield that $\gamma(24,24)$ is real and there are no such arithmetic Kleinian groups (see §2.3).
More generally, we still need to check condition 4 of Theorem 2.2 for $\gamma(p,q)$ using Lemma 2.4. If $p=q$ and $\gamma(p,p)$ is obtained by first determining $\gamma(2,p)$, then this condition is automatically satisfied as noted at the end of §2.3. Thus this is most frequently applied in the cases where $ p \neq q$.
$(p,q,r)=(12,6,2)$. Here the field $L= {\mathbb Q}(\sqrt{3})$ and we have 45 candidate polynomials from above. Using (\[eqn15\]) we replace the variable $x$ by $y(y-\sqrt{3})/(3(2+\sqrt{3}))$ and find that just one of the resulting quartic polynomials in $y$ factorises in ${\mathbb Q}(\sqrt{3})$. Thus there is one value of $\gamma(12,6)$ which gives rise to a subgroup of an arithmetic Kleinian group in this case.
Using this [*factorisation method*]{} any remaining polynomials will give values of $\gamma$ which correspond to subgroups of arithmetic groups. The results are shown in Table 7. These parameters must then be subjected to geometric methods to ascertain if they have finite covolume and so satisfy the final conditions of Theorems 2.2 or 2.3. These geometric methods will be described in §10.
The notation used in Table 7 is as follows: the second column gives the generating element $\delta$ to which we apply the Basic Method. The next two columns give the number of resulting possible values of the coefficients $c_0$ and $c_1$. The column headed “PR” refers to the number of polynomials remaining after polynomial reduction, that headed “B” gives the number that are non-real and lie inside the contour $\Omega_{p,q}$, and the “F” column those left after the factorisation criterion has been applied. Thus the non-zero entries in the final column are those which need to be further considered by geometric methods. (The \* in the $(42,7,2)$ row indicates that the values of $c_1$ were calculated and from the small number of resulting values we obtained improved bounds on $c_0$ by using the inequalities implied by the method of polynomial reduction. The - in the row of $(14,7,2)$ indicates that we omitted this step.)
Triple $\delta$ $c_0$ $c_1$ PR B F
----------- ------------------------------------------------------- ------- ------- ----- ---- ----
(42,42,2) $\gamma(42,42)/4 \sin^2 \pi/42$ 3 2 0 0 0
(42,7,2) $\gamma(42,7)/4 \sin^2 \pi/42 \times 4 \sin^2 \pi/21$ \* 5 0 0 0
(36,36,2) $\gamma(2,36)$ 16 10 0 0 0
(30,30,2) $\gamma(2,30)$ 249 44 10 1 1
(30,15,2) $\gamma(30,15)/4 \sin^2 \pi/15$ 36 20 0 0 0
  (30,10,2)   $\gamma(30,10)/4 \sin^2 \pi/10$                         9      8     0    0    0
(24,24,2) $\gamma(2,24)$ 72 20 2 0 0
(24,12,2) $\gamma(24,12)/4 \sin^2 \pi/12$ 6 5 0 0 0
(24,8,2) $\gamma(24,8)$ 12 12 1 0 0
(20,20,2) $\gamma(20,20)/4 \sin^2 \pi/20$ 16 13 0 0 0
(20,10,2) $\gamma(20,10)/4 \sin^2 \pi/20$ 1 4 0 0 0
(18,18,2) $\gamma(2,18)$ 122 30 16 3 3
(18,9,2) $\gamma(18,9)/4 \sin^2 \pi/18$ 268 62 73 47 2
(18,6,2) $\gamma(18,6)/4 \sin^2 \pi/18$ 6 9 0 0 0
(16,16,2) $\gamma(2,16)$ 61 19 0 0 0
(15,15,2) $\gamma(15,15)/4 \sin^2 \pi/15$ 4 5 0 0 0
(14,14,2) $\gamma(14,14)/4 \sin^2 \pi/14$ 85 38 10 3 0
(14,7,2) $\gamma(14,7)/4 \sin^2 \pi/14$ 244 65 161 - 1
(13,13,2) $ \gamma(2,13)$ 11 13 0 0 0
(12,12,2) $ \gamma(2,12)$ 64 17 67 18 18
(12,6,2) $\gamma(12,6)$ 45 19 45 30 1
(11,11,2) $\gamma(2,11)$ 35 17 0 0 0
(10,10,2) $\gamma(2,10)$ 48 8 44 15 15
(10,6,2) $ \gamma(10,6)$ 34 20 40 24 0
(9,9,2) $ \gamma(2,9)$ 72 22 7 2 2
(9,6,2) $\gamma(9,6)$ 4 7 1 0 0
(8,8,2) $ \gamma(2,8)$ 65 17 48 20 20
(8,6,2) $\gamma(8,6)$ 42 21 50 33 0
(7,7,2) $\gamma(2,7)$ 199 43 32 8 8
(7,6,2) $\gamma(7,6)$ 8 12 0 0 0
(6,6,2) $\gamma(2,6)$ 16 8 78 24 24
: Degree 2 candidates
Degree 3
========
Apart from the [*polynomial reduction*]{} process, this is very similar to the degree 2 cases as carried out in the preceding section. Let $\delta$ be such that ${\mathbb Q}(\delta) = {\mathbb Q}(\gamma)$ where $[{\mathbb Q}(\gamma) : L ] = 3$ so that $\delta$ satisfies $p(x) = x^3 + c_2 x^2 + c_1 x + c_0=0$ with $c_i \in R_L$. Using the Basic Methods we obtain candidate values for $c_0, c_1$ and $c_2$. In general, there are many more candidates for $c_1$ than for $c_0$ or $c_2$. We then ascertain that at the non-identity real places $\sigma_i$ of $L$, the conjugate polynomials $p^{\sigma_i}(x)$ have three real roots in the interval $(-\ell_i,0)$ where $\ell_i$ is obtained from (10) and (13). This can be checked without numerically solving the polynomial (which is a time-consuming process) by the following sequence of requirements on combinations of the coefficients (an illustrative sketch of these checks follows the list):
- $\sigma_i(c_2)^2 > 3 \sigma_i(c_1)$, which forces the derivative $Dp^{\sigma_i}(x)$ to have two real roots;
- $Dp^{\sigma_i}(-\ell_i) > 0$, which forces these roots, $r_1,r_2$ to lie in the interval $(-\ell_i, 0)$;
- $p^{\sigma_i}(-\ell_i) < 0$, which forces $p^{\sigma_i}(x)$ to have at least one root in the interval $(-\ell_i,0)$;
-   $\sigma_i(-2 c_2 c_1 c_0/3 + 4 c_1^3/27+4 c_2^3 c_0/27-c_2^2 c_1^2/27 + c_0^2)<0$, which forces $p^{\sigma_i}(r_1)p^{\sigma_i}(r_2)<0$ and so $p^{\sigma_i}(x)$ to have three real roots in the interval $(-\ell_i,0)$.
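A sketch of these checks (illustrative only; the bounds $\ell_i$ and the images $\sigma_i(c_j)$ are assumed supplied as floating point numbers, with the usual caveats about precision near the boundaries, and the identity-place test anticipates the next paragraph):

```python
import numpy as np

def conjugate_place_ok(c2, c1, c0, ell):
    """The four conditions above: the conjugate cubic x^3 + c2 x^2 + c1 x + c0
    has three real roots in (-ell, 0), given the Basic Method bounds already
    imposed on c2, c1, c0."""
    if c2 * c2 <= 3 * c1:
        return False                                  # Dp has two real roots
    if 3 * ell ** 2 - 2 * c2 * ell + c1 <= 0:
        return False                                  # Dp(-ell) > 0
    if -ell ** 3 + c2 * ell ** 2 - c1 * ell + c0 >= 0:
        return False                                  # p(-ell) < 0
    expr = (-2 * c2 * c1 * c0 / 3 + 4 * c1 ** 3 / 27 + 4 * c2 ** 3 * c0 / 27
            - c2 ** 2 * c1 ** 2 / 27 + c0 ** 2)
    return expr < 0                                   # p(r1) p(r2) < 0

def identity_place_ok(c2, c1, c0, ell1, tol=1e-9):
    """Identity-place check: one real root in (-ell1, 0) and a pair of
    non-real roots."""
    roots = np.roots([1.0, c2, c1, c0])
    real = [r.real for r in roots if abs(r.imag) < tol]
    return len(real) == 1 and -ell1 < real[0] < 0
```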
Any cubic remaining after this, can then be solved at the identity real place of $L$ to ensure that it has a pair of non-real roots and that the real root lies in the interval $(-\ell_1,0)$. Following this [*polynomial reduction*]{} procedure, we check to determine if the values of $\gamma(p,q)$ lie inside the contour $\Omega_{p,q}$. Finally, if appropriate, we apply the factorisation condition of Theorem 2.2. The results are tabulated in Table 8, as in Table 7 so that any non-zero numbers in the right hand column correspond to groups which must be checked by geometric methods to see if they have finite covolume.
Triple $\delta$ $c_0$ $c_1$ $c_2$ PR B F
----------- -------------------------------- ------- ------- ------- ------ ----- -----
  (30,30,3)   $\gamma(2,30)/4 \sin^2 \pi/30$   250     \*      296     -      -     0
  (18,18,3)   $\gamma(2,18)/4 \sin^2 \pi/18$   8       4442    180     1      0     0
(18,9,3) $\gamma(18,9)/4 \sin^2 \pi/18$ 11 2429 137 0 0 0
(14,7,3) $\gamma(14,7)/4 \sin^2 \pi/14$ 25 2207 148 1 0 0
(12,12,3) $ \gamma(2,12)$ 65 218 26 85 19 19
(12,6,3) $\gamma(12,6)$ 3 138 30 1 1 1
(10,10,3) $\gamma(2,10)$ 48 175 29 33 5 5
(10,6,3) $ \gamma(10,6)$ 1 103 32 0 0 0
(9,9,3) $ \gamma(2,9)$ 219 812 56 0 0 0
(8,8,3) $ \gamma(2,8)$ 133 256 30 268 29 29
(8,6,3) $\gamma(8,6)$ 5 129 32 2 0 0
(7,7,3) $\gamma(2,7)$ 1381 2449 105 26 1 1
(6,6,3) $\gamma(2,6)$ 16 24 9 1496 124 124
: Degree 3 candidates
Note: The \* in case $(30,30,3)$ indicates that we actually used the linked triples method which is explained in the next sections. The outcome was that there were no linked triples and so no groups can arise.
Degree $ \geq 4$
================
From the [*Aspiring List*]{}, we see that there are six cases with $ r \geq 4$ all with $p=q$. In general, the Basic Methods enable one to determine the candidates for the coefficients $c_0$ and $c_{r-1}$, but give rise to unfeasible search spaces in attempting to determine the other coefficients. So we develop some new methods of obtaining bounds on the coefficients by exploiting the relationship at (11) between $\gamma = \gamma(p,p)$ and $\gamma_1 = \gamma(2,p)$. Since $\gamma_2 = \beta_1 - \gamma_1$ is also a candidate $\gamma(2,p)$ value, equation (11) can be stated as $$\gamma = - \gamma_1\, \gamma_2 .$$ We use the most “awkward” case $(7,7,4)$, which is the case of highest total degree over ${\mathbb Q}$ amongst these six, as a template to describe our methods.
Let $\beta_1 = -(2 - 2 \cos 2 \pi/7)$ and $\beta_i = \sigma_i(\beta_1)$ where $\sigma_i$, $i= 2,3$ are the non-trivial automorphisms of $L =
{\mathbb Q}(\cos 2 \pi/7)$. Let $B_i = - \beta_i^2/4$ so that, for $\tau : k \Gamma
\rightarrow {\mathbb R}$, we have $$\beta_i < \tau(\gamma_1), \tau(\gamma_2) < 0 ~{\rm and~} B_i < \tau(\gamma)
< 0$$ where $\tau |_L = \sigma_i$. From §3, we also have bounds on the complex number $\gamma$ i.e. $$| \gamma | < 4(2 \cos \pi/7)^2 = G_u ~{\rm and~} R_{\ell} < \Re(\gamma) < G_u$$ where $R_{\ell} \approx -5.0914 $ computed using (19). Since $\gamma_1, \gamma_2$ are symmetric about $\Re(\gamma_i) = \beta_1/2$, we can assume that $| \gamma_1 | <4$ and $-4 < \Re(\gamma_1) \leq \beta_1/2 $ and $|\gamma_2 | < 4$ and $\beta_1/2 \leq \Re(\gamma_2) < \beta_1 + 4$ ( see §3).
Let $\gamma, \gamma_1, \gamma_2$ satisfy the polynomials $$\left.
\begin{array}{ccl}
p(x) &= & x^4 + c_3 x^3 + c_2 x^2 + c_1 x + c_0 \\
p_1(x)& = & x^4 + c_3^{(1)} x^3 + c_2^{(1)} x^2 + c_1^{(1)} x + c_0^{(1)} \\
p_2(x)& = & x^4 + c_3^{(2)} x^3 + c_2^{(2)} x^2 + c_1^{(2)} x + c_0^{(2)}
\end{array} \right\}$$ respectively. As noted above, we can determine $c_0, c_0^{(1)}, c_0^{(2)}, c_3, c_3^{(1)},
c_3^{(2)}$ by our Basic Methods. The basic ideas here are then to use these determined values to place bounds and restrictions on the remaining coefficients. Furthermore, since $\gamma_2 = \beta_1 - \gamma_1$, the coefficients of $p_2(x)$ are combinations of the coefficients of $p_1(x)$. All this enables us to determine $c_2^{(1)}$ and $c_1^{(1)}$ from the other coefficients.
Using the Basic Methods we determine candidates for $c_0$ and $c_0^{(1)}$ (and hence $c_0^{(2)}$). There are 412 and 9769 respectively. From (42) it follows that $c_0 = c_0^{(1)} c_0^{(2)}$ and we determine all such linked triples $(c_0, c_0^{(1)}, c_0^{(2)} )$. (In this $(7,7,4)$ case it is expedient to first narrow down the search by using the fact that the rational integral equation $N_{L \mid {\mathbb Q}}(c_0) =
N_{L \mid {\mathbb Q}}(c_0^{(1)}) N_{L \mid {\mathbb Q}}( c_0^{(2)})$ must hold.) There are 8979 linked triples. Since $\gamma_2 = \beta_1 - \gamma_1$, then $$c_0^{(2)} = \beta_1^4 + c_3^{(1)} \beta_1^3 + c_2^{(1)} \beta_1^2 +
c_1^{(1)} \beta_1 + c_0^{(1)} .$$ This implies that $\beta_1 \mid c_0^{(1)}- c_0^{(2)} $, which, if $c_0^{(1)} - c_0^{(2)} = a + bu + c u^2$ where $u = 2 \cos 2 \pi/7$, is equivalent to $a + 2b + 4c \equiv 0({\rm mod}~7)$. We reduce our set of linked triples to satisfy this divisibility condition, obtaining 1303 such triples.
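Schematically, the reduction to linked triples is the following filter (a sketch with our own conventions: elements of $R_L$ are stored as integer coordinates with respect to the basis $1,u,u^2$; in the actual computation the rational norm test is applied first to narrow the search):

```python
# Elements of R_L, L = Q(u) with u = 2cos(2 pi/7) and u^3 + u^2 - 2u - 1 = 0,
# are stored as integer triples (a, b, c) representing a + b*u + c*u^2.

def mul(x, y):
    a1, b1, c1 = x
    a2, b2, c2 = y
    d0, d1 = a1 * a2, a1 * b2 + b1 * a2
    d2 = a1 * c2 + b1 * b2 + c1 * a2
    d3, d4 = b1 * c2 + c1 * b2, c1 * c2
    # reduce with u^3 = -u^2 + 2u + 1 and u^4 = 3u^2 - u - 1
    return (d0 + d3 - d4, d1 + 2 * d3 - d4, d2 - d3 + 3 * d4)

def linked_triples(c0_set, c01_list):
    """Pairs (c0^(1), c0^(2)) from the candidate list whose product lies in
    the candidate set for c0 and which satisfy beta_1 | c0^(1) - c0^(2),
    i.e. a + 2b + 4c = 0 (mod 7) for the difference (a, b, c)."""
    out = []
    for x in c01_list:
        for y in c01_list:
            prod = mul(x, y)
            if prod not in c0_set:
                continue
            a, b, c = (xi - yi for xi, yi in zip(x, y))
            if (a + 2 * b + 4 * c) % 7:
                continue
            out.append((prod, x, y))
    return out
```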
For each candidate linked triple, we now obtain new bounds on the coefficients $c_1, c_1^{(1)}, c_1^{(2)}$ and their conjugates which depend on the values of a linked triple as follows: Let the roots of $p^{\sigma_i}(x), i = 2,3$ be $y_1, y_2, y_3, y_4$ so that $\sigma_i(c_0) = y_1 y_2 y_3 y_4$ and $\sigma_i(c_1) = - (y_1 y_2 y_3 y_4) \sum_{j=1}^4(1/y_j)$. Let the $y_j$ be ordered so that $B_i < y_4 < y_3 < y_2 < y_1 < 0$. So $\sigma_i(c_0) < (-y_1)(- B_i)^3$ and thus $(-1/y_1) <(- B_i)^3/\sigma_i(c_0)$. Also $\sigma_i(c_0) < (-y_2)^2(-B_i)^2$ so that $(-1/y_2) < ((-B_i)^2/\sigma_i(c_0))^{1/2}$. Continuing in this vein, we obtain $$\label{eqn57}
\sigma_i(c_1) < \sigma_i(c_0)\left[ \frac{(-B_i)^3}{\sigma_i(c_0)} + \left(\frac{(-B_i)^2}{\sigma_i(c_0)}\right)^{1/2} + \left(\frac{-B_i}{\sigma_i(c_0)}\right)^{1/3} + \left( \frac{1}{\sigma_i(c_0)}\right)^{1/4}\right].$$ In the other direction, from the arithmetic/geometric mean inequality, we deduce that $$\label{eqn58}
\sigma_i(c_1) \geq 4 \sigma_i(c_0)^{3/4}.$$ In a similar way at the identity embedding, we obtain an upper bound on $c_1$ as $$\label{eqn59}
c_1 < c_0 \left[ \left(\frac{G_u^2 (-B_1)}{c_0} \right) + \left( \frac{G_u^2}{c_0}\right)^{1/2} \right] - 2 R_{\ell} B_1^2.$$ On the other hand, $$\frac{c_1}{c_0} = - \left( \frac{1}{x_3} + \frac{1}{x_4} \right) - \left( \frac{1}{\gamma} + \frac{1}{\bar{\gamma}}\right),$$ where the roots of $p(x)$ are $\gamma, \bar{\gamma}, x_3, x_4$. The first term here is greater than $-2/B_1$ and the second is greater than $-2/|\gamma|$. J[ø]{}rgensen’s Lemma states $|\gamma |+|\beta_1|\geq1$ in a discrete non-elementary group, thus $|\gamma | > 2 \cos 2\pi/7 - 1$ so that $$\label{eqn60}
c_1 > c_0 \left( \frac{-2}{B_1} - \frac{2}{2 \cos 2 \pi/7-1} \right).$$
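The lower bound (\[eqn58\]) is just the AM/GM inequality applied to the positive quantities $-1/y_j$: since $\sigma_i(c_1) = \sigma_i(c_0)\sum_{j=1}^{4}(-1/y_j)$, $$\sigma_i(c_1) \geq 4\,\sigma_i(c_0)\left( \prod_{j=1}^{4}\frac{1}{-y_j}\right)^{1/4} = 4\,\sigma_i(c_0)^{3/4}.$$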
In an entirely analogous manner, we can obtain similar bounds for $c_1^{(1)}$ and $c_1^{(2)}$ depending on each $c_0^{(1)}$ and $c_0^{(2)}$ in a linked triple.
Thus for $i=2,3$ and $j = 1,2$ we have $$4\sigma_i(c_0^{(j)})^{3/4} < \sigma_i(c_1^{(j)})<\sigma_i(c_0^{(j)})\left[\frac{(-\beta_i)^3}{\sigma_i(c_0^{(j)})}+ \cdots + \left(\frac{1}{\sigma_i(c_0^{(j)})}\right)^{1/4}\right].$$ Using the symmetry of $\gamma_1, \gamma_2$, we obtain $$\label{eqn61}
\left(\frac{-2}{\beta_1}-\frac{\beta_1}{16}\right)c_0^{(1)} < c_1^{(1)} <
c_0^{(1)} \left[ \left(\frac{16 (-\beta_1)}{c_0^{(1)}}\right) + \left(
\frac{16}{c_0^{(1)}}\right)^{1/2}\right] + 8 \beta_1^2.$$ If $p_2(x)$ has roots $\gamma_2, \bar{\gamma_2}, z_3, z_4$, then $$c_1^{(2)} = -| \gamma_2 |^2 (z_3 + z_4) - (\gamma_2 + \bar{\gamma_2}) z_3 z_4.$$ Using the AM/GM inequality and the fact that $-1 < \beta_1 < z_3, z_4 <0$ we have $$c_1^{(2)} \geq | \gamma_2|^2 2(z_3 z_4)^{1/2}-2 \Re(\gamma_2)z_3 z_4 > 2(z_3z_4)(|\gamma_2|^2 - 2 \Re(\gamma_2)) > - \beta_1^2/2.$$ Also $$c_1^{(2)} < c_0^{(2)} \left[\left( \frac{16(-\beta_1)}{c_0^{(1)}}\right) + \left(\frac{16}{c_0^{(1)}}\right)^{1/2}
\right]-\beta_1^3.$$
We now further exploit relation (11) to deduce that $$\label{eqn62}
- \beta_1 c_1 = c_0^{(2)} c_1^{(1)} + c_0^{(1)} c_1^{(2)}.$$ This gives upper bounds for $c_1^{(1)}$ and its conjugates which, in many cases, are an improvement on those obtained at (51) and (52) since $$\label{eqn63}
c_1^{(1)} \leq \frac{-\beta_1}{c_0^{(2)}} (\mbox{maximum value of $c_1$}) - \frac{c_0^{(1)}}{c_0^{(2)}}(\mbox{minimum value of $c_1^{(2)}$}).$$
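The identity (\[eqn62\]) can be checked directly from (11): since each root of $p(x)$ has the form $-\gamma_1(\beta_1 - \gamma_1)$ and $1/(\gamma_1(\beta_1-\gamma_1)) = (1/\beta_1)(1/\gamma_1 + 1/\gamma_2)$, summing over the conjugate roots gives $$\frac{c_1}{c_0} = -\sum \frac{1}{\gamma} = \frac{1}{\beta_1}\sum\left( \frac{1}{\gamma_1} + \frac{1}{\gamma_2}\right) = -\frac{1}{\beta_1}\left( \frac{c_1^{(1)}}{c_0^{(1)}} + \frac{c_1^{(2)}}{c_0^{(2)}}\right),$$ and multiplying through by $c_0 = c_0^{(1)}c_0^{(2)}$ gives (\[eqn62\]).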
In fact, in this $(7,7,4)$ case, we do not enumerate the candidates for $c_1$ and $c_1^{(2)}$, but use the upper bounds for $c_1$ from (47) and (49) and the lower bounds for $c_1^{(2)}$ from (51) and (53) in (56). Thus using (51), (52) and (56), the Basic Method yields candidates for $c_1^{(1)}$ which depend on each linked pair $(c_0^{(1)}, c_0^{(2)})$ (We drop $c_0$). Note that, from (46) $$\beta_1^2 \mid \beta_1 c_1^{(1)} + c_0^{(1)} - c_0^{(2)} ,$$ and we further reduce our list of candidates to satisfy this condition. The total number of triples $(c_1^{(1)}, c_0^{(1)}, c_0^{(2)})$ at this stage is 2071.
Again using the Basic Methods, we determine, independently of the foregoing calculations, the candidates for $c_3$ and $c_3^{(1)}$ (there are 452 and 187 respectively). Now $$c_2^{(1)} = \frac{1}{2}(c_3 + \beta_1 c_3^{(1)} + {c_3^{(1)}}^2).$$ The basic inequalities that $c_2^{(1)}$ and its conjugates must satisfy together with the fact that the second derivative of $p_1^{\sigma_i}(x), i = 2,3$ must have two real roots in the interval $(\beta_i, 0)$ gives inequalities relating $c_3^{(1)} $ and $c_2^{(1)}$ and hence involving $c_3$ and $c_3^{(1)}$. We thus determine all pairs $(c_3, c_3^{(1)})$ which are linked by these inequalities. Furthermore, since $2 \mid c_3 + \beta_1 c_3^{(1)} + {c_3^{(1)}}^2$ we reduce the pairs to satisfy this divisibility condition. We then solve for $c_2^{(1)}$ using (57) and drop $c_3$. There are 2218 resulting pairs $(c_3^{(1)}, c_2^{(1)})$.
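For reference, the expression for $c_2^{(1)}$ follows by comparing traces: since each root of $p(x)$ has the form $-\gamma_1(\beta_1 - \gamma_1)$, $$-c_3 = \sum \gamma = -\beta_1 \sum \gamma_1 + \sum \gamma_1^2 = \beta_1 c_3^{(1)} + ({c_3^{(1)}})^{2} - 2c_2^{(1)},$$ using $\sum \gamma_1 = -c_3^{(1)}$ and $\sum \gamma_1^{2} = ({c_3^{(1)}})^{2} - 2c_2^{(1)}$.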
We now relate these linked pairs $(c_3^{(1)}, c_2^{(1)})$ to the linked pairs $(c_0^{(1)}, c_0^{(2)})$ by inequalities. Once again using the AM/GM inequality yields $\sigma_i(c_3^{(1)}) \geq 4 \sigma_i(c_0^{(1)})^{1/4}$ and $\sigma_i(c_2^{(1)}) \geq 6 \sigma_i(c_0^{(1)})^{1/2}$ for $i=2,3$. Also $\sigma_i(c_2^{(1)}) < 3 \sigma_i(c_0^{(1)})^{1/2} + 3\beta_i^2$ for $i = 2,3$. A bit of manipulation using the AM/GM inequality also yields $c_2^{(1)} > 2 \sqrt{3} {c_0^{(1)}}^{1/2}$. We thus determine all 4-tuples which are linked by these inequalities. There are a total of 74570.
We now have a collection of 1051111 5-tuples $(c_3^{(1)}, c_2^{(1)}, c_1^{(1)}, c_0^{(1)}, c_0^{(2)})$ indexed by the linked pairs $(c_0^{(1)}, c_0^{(2)})$. They must satisfy equation (46). Implementing this gives 1934 4-tuples (we drop $c_0^{(2)})$. Then requiring that the first derivative of $p_1^{\sigma_i}(x)$, $i = 2,3$ has three roots in the interval $(\beta_i,0)$ (see §7) yields a list of 746 polynomials. These and their conjugates can then be numerically solved and only 8 polynomials have the correct distribution of real roots. All these 8 turn out to be reducible and so we do not obtain any groups in this $(7,7,4)$ case.
[**Comments on the other five cases**]{}
Case $(8,8,5)$. This is tackled in a very similar manner to the preceding $(7,7,4)$ case. The main difference is that in this case, we use the inequalities (47) to (56) to enumerate the candidates for $(c_1, c_1^{(1)}, c_1^{(2)})$ depending on the linked triple $(c_0, c_0^{(1)}, c_0^{(2)})$ which satisfy equation (55). As in the preceding case, we then determine the pairs $(c_4^{(1)}, c_3^{(1)})$ and relate them by inequalities to the linked pair $(c_0^{(1)}, c_0^{(2)})$. In this case $$\beta_1^5 + c_4^{(1)} \beta_1^4 + c_3^{(1)} \beta_1^3 + c_2^{(1)} \beta_1^2 + c_1^{(1)} \beta_1 + c_0^{(1)} = - c_0^{(2)}$$ $$5 \beta_1^4 + 4 c_4^{(1)} \beta_1^3 + 3 c_3^{(1)} \beta_1^2 + 2 c_2^{(1)} \beta_1 + c_1^{(1)} = c_1^{(2)}$$ from which we obtain $$3 \beta_1^5 + 2 c_4^{(1)} \beta_1^4 + c_3^{(1)} \beta_1^3 - c_1^{(1)} \beta_1 - c_1^{(2)} \beta_1 - 2 c_0^{(1)} - 2 c_0^{(2)} = 0.$$ We now determine all 6-tuples $(c_4^{(1)}, c_3^{(1)}, c_1^{(1)}, c_1^{(2)}, c_0^{(1)}, c_0^{(2)})$ which satisfy (60) and use (58) to determine $c_2^{(1)}$ from the remaining coefficients. Now as for degree 3, we reduce our collection by the condition that, at the non-identity place, the degree 3 polynomial which is the second derivative of $p_1(x)$ has three real roots in the interval $(\beta_2, 0)$ where $\beta_2 = -(2 + \sqrt{2})$. This gives us 95 polynomials which can then be solved numerically and none have five real roots in the interval $(\beta_2, 0)$. So there are no groups in this case.
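For clarity, the third displayed relation above is obtained by forming $$2\times(\text{first relation}) \;-\; \beta_1\times(\text{second relation}),$$ which eliminates the $c_2^{(1)}$ terms, and then changing sign.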
Case $(8,8,4)$. A simplified version of the above yields three polynomials with the correct numbers of real roots, but at the identity embedding, the real roots do not lie in the interval $(\beta_1, 0)$ where $\beta_1 = -(2-\sqrt{2})$.
Case $(12,12,4)$. In this case, five polynomials have the correct distribution of roots, but for three of them, the real roots do not lie in the interval $(\beta_1, 0)$ at the identity embedding and for the other two, the resulting $\gamma(12,12)$ value lies outside the contour $\Omega_{12,12}$.
Case $(6,6,5)$. An even more simplified version of the above method yields 31 polynomials with 3 real roots in the interval $(-1,0)$. For all but one of them, the associated $\gamma(6,6)$ lies outside the contour $\Omega_{6,6}$. Thus there is one candidate to be considered by geometric methods.
Case $(6,6,4)$. Using the same techniques as above, there are 70 polynomials with the correct distribution of roots and such that $\gamma(6,6)$ lies inside the contour $\Omega_{6,6}$, all of these needing further examination by geometric methods.
Finite Co-volume
================
In §§5, 6, 7 and 8, we have outlined the methods we applied to all the triples $(p,q,r)$ which appear on the [*Aspiring List*]{}. The result is a set of irreducible polynomials of degree $r$ over $L$ whose complex roots $\gamma$ satisfy all four conditions (alternatively Lemma \[condition4’\]) of Theorem \[2genthm\] and also the inequalities of §3, meaning they are not obviously of infinite volume.
This means that $\gamma$ determines a group ${\Gamma}$ generated by elements of orders $p,q$ which is a subgroup of an arithmetic Kleinian group and hence discrete. From a specific value of $\gamma$ we can compute the (normalized) matrices $A$ and $B$ which represent the generators $f,g$ of ${\Gamma}$ (see §3). The inequalities of §3 are derived from the geometric result that, if ${\Gamma}$ is to be of finite co-volume, then it cannot be a free product, so that the isometric circles of $g$ and $g^{-1}$ cannot lie within the intersection of the isometric circles of $f$ and $f^{-1}$.
Two further, but more complicated, geometric conditions (temporarily labelled [*Free2*]{} and [*Free3*]{} for use in the Examples below), necessary for ${\Gamma}$ not to be a free product, have been given in [@MM2] in terms of the locations of the isometric circles of combinations of $f,g$. These simply consist of looking at the images of the isometric circles of one generator, say $g$, under the transformation $f$ and trying to piece together a fundamental domain from the intersection pattern. For instance, illustrated below, although the isometric circles of $g$ do not lie in the region bounded between the isometric circles of $f$, we have $f(I(g)\cup I(g^{-1}))\cap ( I(g)\cup I(g^{-1})) =\emptyset$ and so if we look at the region (where we write $I(g)$ to mean the disk bounded by $I(g)$ etc.) $$I(f)\cap I(f^{-1}) \setminus (I(g)\cup I(g^{-1}) \cup f(I(g)\cup I(g^{-1})))$$ shaded below, one can show without too much effort that this region lies within a fundamental domain for $\langle f,g \rangle$ on $\hat{{\mathbb C}}$ and in fact $\langle f,g \rangle$ is free on these generators.
Diagram 3. Second level isometric circles
Of course one can go on looking at more and more isometric circles and their patterns and using this to formulate algebraic inequalities on $\gamma$. However after three levels this becomes quite impractical and we shall discuss below the computer program used to deal with these cases.
We apply these two elementary tests on $\gamma$ to reduce the list of polynomials arising from §6,7 and 8.
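These circle tests are easy to automate. The sketch below (illustrative only, and not the implementation used here, whose precise conditions are those of [@MM2]) checks the disjointness condition $f(I(g)\cup I(g^{-1}))\cap(I(g)\cup I(g^{-1}))=\emptyset$ discussed above for the normalized matrices $f,g$ of §3, assuming the poles of the maps lie outside the relevant disks so that disks map to disks.

```python
import numpy as np

def mobius(M, z):
    a, b, c, d = M.ravel()
    return (a * z + b) / (c * z + d)

def isometric_disk(M):
    """Disk bounded by the isometric circle I(M) = {z : |cz + d| = 1}:
    centre -d/c, radius 1/|c| (requires c != 0)."""
    c, d = M[1, 0], M[1, 1]
    return -d / c, 1.0 / abs(c)

def circle_through(z1, z2, z3):
    """Circle through three non-collinear points of C."""
    w = (z3 - z1) / (z2 - z1)
    centre = z1 + (z2 - z1) * (w - abs(w) ** 2) / (2j * w.imag)
    return centre, abs(centre - z1)

def image_disk(M, centre, r):
    """Image of a disk under the Moebius map M, computed from three boundary
    points; valid when the pole of M lies outside the disk."""
    pts = (centre + r, centre - r, centre + 1j * r)
    return circle_through(*(mobius(M, p) for p in pts))

def disjoint(D1, D2):
    (c1, r1), (c2, r2) = D1, D2
    return abs(c1 - c2) > r1 + r2

def second_level_disjoint(f, g):
    """True when f(I(g) u I(g^{-1})) misses I(g) u I(g^{-1}), the situation
    of Diagram 3; in the configuration discussed above this exhibits <f, g>
    as a free product, so such a candidate gamma is discarded."""
    disks = [isometric_disk(g), isometric_disk(np.linalg.inv(g))]
    images = [image_disk(f, c, r) for c, r in disks]
    return all(disjoint(D, E) for D in disks for E in images)
```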
$(p,q,r) = (10,10,3)$. From Table 8, there are 5 candidates for $\gamma$ which satisfy the four conditions of Theorem \[2genthm\] and the inequalities of §3. Their minimum polynomials over $L = {\mathbb Q}(\sqrt{5})$ are given below.
No. $\gamma$ polynomial
----- ---------------------------- --------------------------------------------------------------------------
1 $-4.918226 + 5.698268 i$ $x^3 +\frac{13+3\sqrt{5}}{2}x^2+(30+12\sqrt{5})x+1 $
2 $0.635991 + 5.238279 i$ $x^3 + (1-\sqrt{5})x^2+\frac{31+11\sqrt{5}}{2}x+1$
3 $3.251943 + 8.478242 i $ $x^3 + (-2-2\sqrt{5})x^2+(42+18\sqrt{5})x+\frac{3+\sqrt{5}}{2} $
4 $ 8.794158 + 4.828433 i $ $x^3+\frac{-15 -9\sqrt{5}}{2}x^2+(51+22\sqrt{5})x+\frac{3+\sqrt{5}}{2} $
5 $ 6.180432 + 10.631111 i $ $x^3+\frac{-9-7\sqrt{5}}{2}x^2+(77+33\sqrt{5})x+(3+\sqrt{5}) $
Test [*Free2*]{} removes cases 1 and 5, while test [*Free3*]{} removes cases 2 and 3, leaving just one possibility to consider further.
Diagram 4. $(10,10,3)$-cases.
We list the numbers of candidates brought forward from §6,7 and 8 and show in Table 9 that many are eliminated using these tests. Thus the columns headed “FT2” and “FT3” give the numbers eliminated using tests [*Free2*]{} and [*Free3*]{} respectively. We also remove any reducible polynomials or duplicates which have survived to this stage. The final column gives the numbers of polynomials which are passed to the next step in the procedure given below.
Triple No. FT2 FT3 Rem.
------------- ----- ----- ----- ------
$(30,30,2)$ 1 0 0 1
$(18,18,2)$ 2 1 1 1
$(18,9,2)$ 2 2 0 0
$(14,7,2)$ 1 0 1 0
$(12,12,2)$ 18 7 4 7
$(12,6,2)$ 1 0 1 0
$(10,10,2)$ 15 8 2 5
$(9,9,2)$ 2 1 1 0
$(8,8,2)$ 20 8 4 8
$(7,7,2)$ 8 4 2 2
$(6,6,2)$ 24 9 5 10
$(12,12,3)$ 19 10 5 4
$(12,6,3)$ 1 1 0 0
$(10,10,3)$ 5 2 2 1
$(8,8,3)$ 29 13 10 5
$(7,7,3)$ 1 0 1 0
$(6,6,3)$ 124 52 30 38
$(6,6,5)$ 1 0 0 1
$(6,6,4)$ 70 36 19 15
: Geometric Test Results
Finally, a computer program has been developed, initially by J. McKenzie, and named JSnap to study subgroups ${\Gamma}$ of $\PSL(2,{\mathbb C})$ which have two generators of finite order. This is effectively an implementation of the Dirichlet routine in J. Weeks’ program Snappea. This program aims to find a Dirichlet region for the group ${\Gamma}=\langle f, g\rangle$. A very important point to note here is that we know [*a priori*]{} that the group in question is discrete. However it is theoretically possible that the group ${\Gamma}$ is geometrically infinite and so computationally impossible to identify a fundamental domain. JSnap runs and either produces a fundamental domain - either of finite or infinite volume - or produces an error message if it can’t put together a fundamental domain after looking at words of a given bounded length. In our situation JSnap always produces a fundamental domain which is either compact or meets the sphere at infinity in an open set (which itself will be a fundamental domain for the action of ${\Gamma}$ on $\hat{{\mathbb C}}\setminus \Lambda({\Gamma})$). In this latter case the group cannot be of finite co-volume (and it might also not be free on generators - for instance certain Web-groups may arise) and so we can eliminate these cases.
If the fundamental domain found by JSnap is compact, then JSnap also returns an approximate co-volume.
In this way the remaining possibility for $(p,q,r)=(10,10,3)$ $\gamma = 8.794158 + 4.828433 i$ is shown to have a fundamental domain which meets the sphere at infinity in an open set, and thus cannot be arithmetic.
Applying JSnap to the 15 cases in $(6,6,4)$ shows that they all have infinite volume and so there are no corresponding arithmetic Kleinian groups. It is a similar story for $r=3$ except that here we meet our first example whose isometric circle configuration and $\gamma$ value are illustrated below.
Diagram 5. A finite co-volume $(6,6,3)$ example.
The end product
===============
After dealing with all the cases on the Aspiring List in the manner outlined in the preceding sections we are left with just 17 complex values of $\gamma$ whose corresponding groups JSnap identifies as having finite co-volume, giving us an approximation to this co-volume. The groups corresponding to all but one of these values have already been identified as arithmetic in the literature in [@HLM1] as being obtained from surgery on 2-bridge knots and links. This can also be ascertained using Snappea as discussed in the introduction. The arithmetic data required to define these groups can be recovered from the polynomials satisfied by $\gamma$. The value $\sqrt{-3}$ gives a group which is not co-compact and has been discussed in [@MM2]. In addition there is one further group, also not co-compact, which corresponds to the only real value of $\gamma$ which arises for groups with generators of orders $\geq 6$ (see [@MM3; @MM4]).
In the tables below we list all the data on the groups we have found. The polynomial is that satisfied by $\gamma=\gamma(f,g)$ where $o(f)=p,
o(g)=q$ over the field ${\mathbb Q}(\cos 2 \pi/p, \cos 2 \pi/q)$. In most cases the description is given as an orbifold obtained by surgery on the one boundary component of a 2-bridge knot or on both boundary components of a 2-bridge link.
The appearance of the same description occurring twice in this table is discussed in the introduction as identification of different Nielsen classes of generators. In the other two cases, the description refers to [@MM2].
The commensurability class of the arithmetic group is, as we have discussed, determined by the field of definition and the defining quaternion algebra. In all the cases here, the discriminant $\Delta$ in the table uniquely describes the field given its degree and that it has one complex place. The quaternion algebra is determined by its ramification set which must include all real places so that the finite ramification suffices to identify the quaternion algebra. (The convention used here is that if a rational prime $p$ splits as ${\cal P}_p {\cal P}_p'$ then these are ordered so that $N({\cal P}_p) \leq N({\cal P}_p')$.) Note that the only commensurable pairs are those co-compact groups which are actually equal and given by different Nielsen classes of generators and the non-co-compact pair.
The orbifold volume is determined by Snappea and JSnap and the minimum volume is the smallest volume of any orbifold in the commensurability class which can be determined from the arithmetic data defining the commensurability class [@Bo; @MR].
No. $(p,q)$ $\gamma$ poly
----- ----------- ------------------------ ------------------------------------------------------
1 $(12,12)$ $-0.259113+1.998874 i$ $x^3+(4-2\sqrt{3})x^2+(11-4\sqrt{3})x+(7-4\sqrt{3})$
2 $(12,12)$ $-0.633975+0.930605i$ $x^2+(3-\sqrt{3})x+(3-\sqrt{3})$
3 $(10,10)$ $-1 +2.058171i$ $x^2+2x + (3+\sqrt{5})$
4 $(8,8)$ $-0.792893+0.978318i$ $x^2+(3-\sqrt{2})x + (3-\sqrt{2}) $
5 $(6,6)$ $-1.877438+0.744861i$ $x^3+4x^2+5x+1$
6 $(6,6)$ $-2.884646+0.589742i$ $x^3+6x^2+10x+2$
7 $(6,6)$ $-0.891622+1.954093i$ $x^3+2x^2+5x+1 $
8 $(6,6)$ $1.092519+2.052003i$ $x^3-2x^2+5x+1 $
9 $(6,6)$ $3.067442+2.327724i$ $x^3-6x^2+14x+2 $
10 $(6,6)$ $0.124046+2.836576i$ $x^3+8x+2$
11 $(6,6)$ $2.124407+2.746645i$ $x^3-4x^2+11x+3$
12 $(6,6)$ $4.109638+2.431700i$ $x^3-8x^2+21x+5$
13 $(6,6)$ $-1+i$ $x^2+2x+2$
14 $(6,6)$ $-2+1.414214i$ $x^2+4x+6$
15 $(6,6)$ $1.732051i$ $ x^2+3 $
16 $(6,6)$ $-1+2.645751i$ $x^2+2x+8$
17 $(6,6)$ $1+3i$ $x^2-2x+10$
18 $(6,6)$ $-1$ $x+1$
\
No. Description $\Delta$ ${\rm Ram}_f(A)$ Orb. Vol. Min. Vol.
----- --------------------------- ----------- ------------------------------------- ----------- ------------
1 $(12,0),(12,0)$ on $8/3$ $-288576$ $\emptyset$ $3.3933$ $0.424167$
2 $(12,0)$ on $5/3$ $-1728$ ${\cal P}_2, {\cal P}_3$ $ 1.8026$ $0.450658$
3 $(10,0)$ on $13/5$ $-400$ ${\cal P}_2, {\cal P}_5$ $5.1674$ $1.291862$
4 $(8,0)$ on $5/3$ $-448$ ${\cal P}_2, {\cal P}_7$ $1.5438$ $0.385966$
5 $(6,0)$ on $7/3$ $-23$ ${\cal P}_3$ $2.0425$ $0.510633$
6 $(6,0),(6,0)$ on $20/9$ $-76$ ${\cal P}_2,{\cal P}_3,{\cal P}_3'$ $5.2937$ $0.661715$
7 $(6,0),(6,0)$ on $8/3$ $-31$ ${\cal P}_3$ $2.6386$ $0.065965$
8 $(6,0)$ on $7/3$ $-23$ ${\cal P}_3$ $2.0425$ $0.510633$
9 $(6,0)$ on $13/3$ $-44$ ${\cal P}_2$ $3.7068$ $0.066194$
10 $(6,0)$ on $13/3$ $-44$ ${\cal P}_2$ $3.7068$ $0.066194$
11 $(6,0)$ on $15/11$ $-31$ ${\cal P}_3'$ $4.2217$ $0.263861$
12 $(6,0)$ on $65/51$ $-23$ ${\cal P}_5$ $8.7986$ $0.078559$
13 $(6,0)$ on $5/3$ $-4$ ${\cal P}_2, {\cal P}_3$ $1.2212$ $0.305322$
14 $(6,0),(6,0)$ on $12/5$ $-8$ ${\cal P}_2, {\cal P}_3$ $4.0153$ $0.250960$
15 Non-compact $\Gamma_{21}$ $-3$ $\emptyset$ $1.0149$ $0.253735$
16 $(6,0),(6,0)$ on $30/11$ $-7$ ${\cal P}_2, {\cal P}_3$ $7.1113$ $0.888915$
17 $(6,0),(6,0)$ on $24/7$ $-4$ ${\cal P}_2, {\cal P}_5$ $6.1064$ $0.152661$
18 Non-compact $\Gamma_{20}$ $-3$ $\emptyset$ $0.5074$ $0.084578$
[**Remark.**]{} In eliminating certain candidates because they are not of co-finite volume - and which are guaranteed discrete by our arithmetic criteria - we ran into a number of interesting examples where our computational package JSnap had difficulty. This was largely to do with accumulation of roundoff error. In exploring these groups (looking for regions of discontinuity) we made use of the package “lim” developed by C. McMullen to draw the limit sets of Kleinian groups. The limit set of one such group is illustrated below. After seeing these pictures we were encouraged to modify our code to run on a different platform with higher precision to get an infinite volume fundamental region. However it is clear that in these sorts of cases (with parameters algebraic integers of low degree) working with a version of Snap (the precise arithmetic version of Snappea developed by O. Goodman et al, [@asnap]) would be the correct way forward. We are currently developing this program which will surely be necessary in extending our results beyond the cases $p,q\geq 6$.
Limit set of Kleinian group with two generators of order 12 and $\gamma= 2.73205+3.193141i$, a root of $x^2-2(1+\sqrt{3})x+(9+5\sqrt{3})=0$
Generalised Triangle Groups {#proofngtg}
---------------------------
Here we prove Corollary \[notrigrp\]. First we note that the only groups we need to consider here are the surgeries on two-bridge knot and link groups and thus the following lemma will suffice.
Let ${\Gamma}$ be the orbifold fundamental group of $(p,0)$-$(q,0)$ ($p,q\geq 2$) Dehn surgery on a two-bridge knot or link. Then ${\Gamma}$ does not have a presentation as a generalised triangle group.
[**Proof.**]{} Every element $g$ of finite order in ${\Gamma}$ has a nontrivial fixed point set in ${\mathbb H}^3$ which projects to an edge in the singular set of ${\mathbb H}^3/{\Gamma}$. Elements in the same conjugacy class project to the same edge. Next $(p,0)$-$(q,0)$ Dehn surgery on a two-bridge knot or link has at most two components in its singular set (one if it is a knot). Let us denote the two primitive generators of order $p$ and $q$ arising from this surgery as $f$ and $g$, so ${\Gamma}=\langle f,g\rangle$. Suppose ${\Gamma}$ has a presentation of the form $$\label{gtp}
\langle a, b : a^r=b^s=w(a,b)^t=1\rangle, \hskip20pt r,s,t \geq 2$$ There are at most two conjugacy classes of torsion in ${\Gamma}$. Thus $a, b$ and any other element of finite order are conjugates of elements of $\langle f \rangle $ or $\langle g \rangle$. Thus, possibly increasing $r$ or $s$ and the complexity of $w$, we see that we can find a presentation of the form (\[gtp\]) with $a$ and $b$ conjugates of $f$ and $g$ and so $\{r,s\}\subset \{p,q\}$.
Suppose that $b$ is not a conjugate of $a$. Then as $w$ must also be conjugate into $\langle f \rangle $ or $\langle g \rangle$, the relation $w^t=1$ is a direct consequence of the relators $a^p=b^q=1$ and so ${\Gamma}= \langle a \rangle \ast \langle b \rangle$, which is not possible for a co-finite volume lattice. Thus $b$ is a conjugate of $a$ and $r=s$. This quickly implies that the abelianisation of ${\Gamma}$ is a subgroup of $\langle a \rangle$ as $w$ reduces to a power of $a$. Thus, if ${\Gamma}$ has a presentation as at (\[gtp\]) we have deduced that ${\Gamma}$ abelianises to a cyclic group ${\mathbb Z}_k$ with $k|p$ or $k|q$. Further, we cannot be dealing with a knot surgery as $a$ not conjugate to $w$ implies two components to the singular set.
Next, every two-bridge link has a presentation on a pair of meridians of the form $\langle u,v: u w= w u\rangle$ for $w$ a word determined by the Schubert normal form [@BZ]. Dehn surgery is equivalent to adding the relators $u^p=v^q=1$ which then gives ${\mathbb Z}_p+{\mathbb Z}_q$ as the abelianisation.
Thus there can be no presentation as at (\[gtp\]) and the proof of the lemma is complete.$\Box$
[9999]{}
I. Agol. M. Belolipetsky, P. Storm and K. Whyte, [*Finiteness of arithmetic hyperbolic reflection groups*]{} arXiv:math.GT/0612132, v1, 5 Dec, 2006.
G. Baumslag, J. Morgan and P. Shalen, [*Generalised triangle groups*]{} Math. Proc. Camb. Phil. Soc., [**102**]{}, (1987), 25–31.
A. Borel [*Commensurability classes and volumes of hyperbolic three-manifolds*]{}, Ann. Sc. Norm. Pisa, [**8**]{}, (1981), 1 - 33.
B.H. Bowditch, C. Maclachlan and A.W. Reid, [ *Arithmetic hyperbolic surface bundles.*]{} Math. Ann., [**302**]{}, (1995), 31–60
G. Burde and H. Zieschang, [*Knots*]{}, Studies in Maths. Vol 5. de Gruyter, Berlin (1985).
C. Cao and R. Meyerhoff, [*The cusped hyperbolic $3$-manifold of minimum volume*]{}, Invent. Math., [**146**]{}, (2001), 451–478.
T. Chinburg and E. Friedman,[*The smallest arithmetic hyperbolic three-orbifold.*]{} Invent. Math., [**86**]{}, (1986), 507–527.
T. Chinburg, E. Friedman, K. Jones and A.W. Reid, [*The arithmetic hyperbolic 3-manifold of smallest volume*]{}, Ann. Scuola Norm. Sup. Pisa, [**30**]{}, (2001), 1–40.
H. Cohen, F. Diaz Y Diaz and M. Olivier, [*Tables of octic fields with a quartic subfield*]{}, Math. Comp., [**68**]{}, (1999), 1701–1716.
MDE Conder, GJ Martin, [*Cusps, triangle groups and hyperbolic $3$-folds*]{} J. Austral. Math. Soc. Ser. A, [**55**]{}, (1993), 149–182.
M.D.E. Conder, C. Maclachlan, G.J. Martin and E.A. O’Brien, [*$2$–generator arithmetic Kleinian groups [**III**]{}*]{} Math. Scand., [**90**]{}, (2002), 161–179.
D. Coulson, O.A. Goodman, C.D. Hodgson and W.D. Neumann, [*Computing arithmetic invariants of 3-manifolds*]{}, Experiment. Math., [**9**]{}, (2000), 127–152.
F. Diaz Y Diaz, [*Discriminant minimal et petits discriminants des corps de nombres de degré 7 avec cinq places réelles*]{}, J. London Math. Soc., [**38**]{}, (1988), 33–46.
F. Diaz Y Diaz and M. Olivier, [*Corps imprimitifs de degré 9 de petit discriminant*]{}, preprint.
B. Fine and G. Rosenberger, [*A note on generalized triangle groups*]{}, Abh. Math. Sem. Univ. Hamburg, [**56**]{}, (1986), 233–244.
V. Flammang and G. Rhin, [*Algebraic integers whose conjugates all lie in an ellipse*]{}, Math. Comp. [**74**]{}, (2005), 2007–2014.
F. W. Gehring and G. J. Martin, [*Stability and extremality in Jorgensen’s inequality*]{}, Complex Variables, [**12**]{}, (1989), 277–282.
F. W. Gehring and G. J. Martin, [*Commutators, collars and the geometry of Möbius groups*]{}, J. d’Analyse Math., [**63**]{}, (1994), 175-219.
F. W. Gehring and G. J. Martin, [*On the minimal volume hyperbolic 3-orbifold*]{}, Math. Res. Letters, [**1**]{}, (1994), 107-114.
F. W. Gehring and G. J. Martin, [*Minimal co-volume hyperbolic lattices. I. The spherical points of a Kleinian group*]{}, Ann. of Math., [**170**]{}, (2009), 123–161.
F. W. Gehring, C. Maclachlan and G.J. Martin,[ *$2$–generator arithmetic Kleinian groups [**II**]{}*]{}, Bull. Lond. Math. Soc., [**30**]{}, (1998), 258–266
F.W. Gehring, C. Maclachlan, G. Martin and A.W. Reid [*Arithmeticity, Discreteness and Volume*]{}, Trans. Amer. Math. Soc., [**349**]{}, (1997), 3611–3643.
M. Hagelberg, C. Maclachlan and G. Rosenberger [*On discrete generalised triangle groups*]{}, Proc. Edinburgh Math. Soc., [**38**]{}, (1995), 397 - 412.
H.M. Hilden, M.T. Lozano, and J.M. Montesinos-Amilibia, [*The arithmeticity of certain torus bundle cone $3$-manifolds and hyperbolic surface bundle $3$-manifolds; and an enhanced arithmeticity test*]{}, KNOTS ’96 (Tokyo), 73–80, World Sci. Publishing, River Edge, NJ, 1997.
H.M. Hilden, M.T. Lozano, and J.M. Montesinos-Amilibia, [*On the arithmetic $2$-bridge knots and link orbifolds and a new knot invariant*]{}, J. Knot Theory, Ramifications, [**4**]{}, (1995), 81–114.
H.M. Hilden, M.T. Lozano, and J.M. Montesinos-Amilibia, [*A characterization of arithmetic subgroups of ${\rm SL}(2,{R})$ and ${\rm SL}(2,{C})$*]{}, Math. Nachr., [**159**]{}, (1992), 245–270.
K.N. Jones, Kerry and A.W. Reid, [*Minimal index torsion-free subgroups of Kleinian groups.*]{} Math. Ann., [**310**]{}, (1998), 235–250.
T. J[ø]{}rgensen, [*On discrete groups of Möbius transformations*]{}, Amer. J. Math., [**98**]{}, (1976), 739–749.
S. Katok, [*Modular forms associated to closed geodesics and arithmetic applications*]{}, Bull. Amer. Math. Soc., [**11**]{}, (1984), 177–179.
E. Klimenko, [*Discrete groups in the 3-dimensional Lobachevskiĭ space that are generated by two rotations*]{}, Siberian Math. J., [**30**]{} (1989), 95–100.
E. Klimenko and E. Kopteva, [*All discrete ${\cal RP}$ groups whose generators have real traces*]{}, Internat. J. Algebra Comput., [**15**]{}, (2005), 577–618.
D. Long, C. Maclachlan and A.W. Reid, [*Arithmetic Fuchsian groups of genus zero*]{}, Pure and Applied Math. Quarterly, to appear.
C. Maclachlan and G.J. Martin, [*On 2–generator Arithmetic Kleinian groups.*]{} J. Reine Angew. Math., [**511**]{}, (1999), 95–117
C. Maclachlan and G.J. Martin, [*The non-compact generalised arithmetic triangle groups*]{}, Topology, [**40**]{}, (2001), 927–944.
C. Maclachlan and G. J. Martin, [*All Kleinian groups with two elliptic generators whose commutator is elliptic*]{}, Math. Proc. Cambridge Philos. Soc., [**135**]{}, (2003), 413–420.
C. Maclachlan and G. J. Martin, [*The (6,p)-arithmetic hyperbolic lattices in dimension 3*]{}, Pure Appl. Math. Q., [**7**]{}, (2011), 365–382.
C. Maclachlan and A. W. Reid, [*The arithmetic of hyperbolic 3-manifolds*]{}, Graduate Texts in Math., Springer–Verlag, 2003.
C. Maclachlan and A.W. Reid [*Commensurability classes of arithmetic Kleinian groups and their Fuchsian subgroups*]{}, Math. Proc. Camb. Phil. Soc., [**102**]{}, (1987), 251 - 258.
C. Maclachlan and A.W. Reid, [*The arithmetic structure of tetrahedral groups of hyperbolic isometries,*]{} Mathematika, [**36**]{}, (1989), 221–240.
C. Maclachlan and G. Rosenberger, [*Two-generator arithmetic Fuchsian groups, II,*]{} Math. Proc. Cambridge Philos. Soc., [**111**]{}, (1992), 7–24.
T.H. Marshall and G.J. Martin, [*Minimal co-volume hyperbolic lattices, II: Simple torsion in a Kleinian group*]{}, Ann. of Math., [**176**]{}, (2012), 261–301.

B. Maskit, [*Kleinian Groups*]{}, Springer–Verlag, 1988.
R. Meyerhoff, [*The cusped hyperbolic $3$-orbifold of minimum volume*]{}, Bull. Amer. Math. Soc., [**13**]{}, (1985), 154-156.
H.P. Mullholland, [*The product of $n$ complex homogeneous linear forms*]{}, J. London Math. Soc., [**35**]{}, (1960), 241–250.
W.D. Neumann and A.W. Reid, [*Arithmetic of hyperbolic manifolds,*]{} Topology ’90 (Columbus, OH, 1990), 273–310, Ohio State Univ. Math. Res. Inst. Publ., 1, de Gruyter, Berlin, 1992.
V.V. Nikulin, [*Finiteness of the number of arithmetic groups generated by reflections in Lobachevsky spaces*]{}, arXiv:math.AG/0609256, v1, 9 Sep, 2006.
A.M. Odlyzko, [*Some analytic estimates of class numbers and discriminants*]{}, Invent. Math., [**29**]{}, (1975), 275–286.
A.M. Odlyzko, [*Discriminant Bounds*]{} available from www.dtc.umn.edu/odlyzko/unpublished/discr.bound
A. W. Reid, [*Arithmeticity of Knot Complements*]{}, J. London Math. Soc., [**43**]{}, (1991), 171-184.
C.A. Rodgers, [*The product of $n$ real homogeneous linear forms*]{}, Acta Math., [**82**]{}, (1950), 185–208.
D. Rolfsen, [*Knots and links*]{}, Mathematics Lecture Series, No. 7. Publish or Perish, Inc., Berkeley, Calif., 1976.
Z. Rudnick and P. Sarnak, [*The behaviour of eigenstates of arithmetic hyperbolic manifolds*]{}, Comm. Math. Phys., [**161**]{}, (1994), 195–213.
P. Sarnak, [*The arithmetic and geometry of some hyperbolic three-manifolds,*]{} Acta Math, [**151**]{}, (1983), 253–295.
I. Schur, [*Über die Verteilung der Wurzeln bei gewissen algebraischen Gleichungen mit ganzzahligen Koeffizienten.*]{} Math. Zeit., [**1**]{}, (1918), 377 - 402.
H.M. Stark, [*Some effective cases of the Brauer-Siegel Theorem*]{}, Invent. Math., [**23**]{}, (1974), 135–152.
K. Takeuchi, [*Arithmetic triangle groups*]{}, J. Math. Soc. Japan, [**29**]{}, (1977), 91–106.
K. Takeuchi, [*Arithmetic Fuchsian groups of signature $(1;e)$*]{}, J. Math. Soc. Japan, [**35**]{}, (1983), 381–407.
W.P. Thurston, [*Three-dimensional geometry and topology*]{}, Vol. 1., Edited by Silvio Levy. Princeton Mathematical Series, [**35**]{}, Princeton University Press, Princeton, NJ, 1997.
W.P. Thurston, [*Three-dimensional manifolds, Kleinian groups and hyperbolic geometry*]{}, Bull. Amer. Math. Soc., [**6**]{}, (1982), 357–381.
M-F. Vigneras, [*Arithm[é]{}tique des Alg[è]{}bres de Quaternions,*]{} Lecture Notes in Mathematics, No. 800, Springer-Verlag, 1980.
E.B. Vinberg, [*Discrete linear groups that are generated by reflections,*]{} Math. USSR Sbornik, [**114**]{}, (1967), 429–444.
E.B. Vinberg, [*The absence of crystallographic groups of reflections in Lobachevski spaces of large dimension*]{}, Trans. Moscow Math. Soc. [**47**]{} (1985), 75–112.
E.B. Vinberg, Ĭ. Mennike and Kh. Khelling, [ *On some generalized triangular groups and three-dimensional orbifolds*]{}, Trudy Moskov. Mat. Obshch., [**56**]{}, (1995), 5–32, (Trans. Moscow Math. Soc., 1995, 1–21)
J. R. Weeks, [*SnapPea: A computer program for creating and studying hyperbolic 3-manifolds,*]{} available from http://www.northnet.org/weeks/, 2001.
[^1]: Research supported in part by grants from the N.Z. Marsden Fund and the New Zealand Royal Society (James Cook Fellowship). AMS (1991) Classification. Primary 30F40, 30D50, 20H10, 22E40, 53A35, 57M60
|
{
"pile_set_name": "ArXiv"
}
|
---
author:
- 'Rafael de Freitas Leão[^1] and Samuel Augusto Wainer[^2]'
title: 'Immersion in $\mathbb{R}^n$ by complex spinors'
---
**Keywords:** Immersion, Spinors, Killing Equation
**Abstract**\
A beautiful solution to the problem of isometric immersions in $\mathbb{R}^n$ using spinors was found by Bayard, Lawn and Roth [@bayard16]. However, to use spinors one must assume that the manifold carries a ${\mbox{Spin}}$-structure and, especially for complex manifolds, where it is more natural to consider ${\mbox{Spin}^{{\mathbb}{C}}}$-structures, this hypothesis is somewhat restrictive. In the present work we show how the above solution can be adapted to ${\mbox{Spin}^{{\mathbb}{C}}}$-structures.
Introduction
============
The problem of isometric immersions of Riemannian manifolds is a classical and widely studied problem in differential geometry. Since 1998, mainly because of the work of Thomas Friedrich [@friedrich98], this problem has gained a new perspective. In [@friedrich98], using the fact that Riemannian 2-manifolds are naturally ${\mbox{Spin}}$-manifolds, Friedrich showed that isometric immersions of these 2-manifolds are related to spinors satisfying a Dirac-type equation. This relation can be understood as a spinorial approach to the standard Weierstrass representation.
Since then, a lot of work has been done to further understand this relation and to extend it to more general spaces than Riemannian 2-manifolds. Some remarkable examples of these contributions are: in $2004$ Bertrand Morel [@morel04] extended Friedrich’s spinorial representation of isometric immersions in $\mathbb{R}^{3}$ to $\mathbb{S}^{3}$ and $\mathbb{H}^{3}$; in $2008$ Marie-Amelie Lawn [@lawn08] showed how a given Lorentzian surface $(M^{2},g)$ can be isometrically immersed in the pseudo-Riemannian space $\mathbb{R}^{2,1}$ using spinorial techniques; in $2010$ Lawn and Julien Roth [@lawnroth10] exhibited a spinorial characterization of Riemannian surfaces isometrically immersed in the 4-dimensional spaces $\mathbb{M}^4$ and $\mathbb{M}^{3}\times \mathbb{R}$ ($\mathbb{M}\simeq \mathbb{R},\mathbb{S},\mathbb{H}$); in $2011$, using the same spinorial techniques, Lawn and Roth [@lawnroth11] presented the spinorial characterization of isometric immersions of arbitrary dimension surfaces in $3$-dimensional space forms, thus generalizing Lawn’s work in $\mathbb{R}^{2,1}$; in $2013$ Pierre Bayard [@bayard13] proved that an isometric immersion of a Riemannian surface $M^{2}$ in $4$-dimensional Minkowski space $\mathbb{R}^{1,3}$, with a given normal bundle $E$ and a given mean curvature vector $\vec{H}\in \Gamma (E)$, is equivalent to the existence of a normalized spinor field $\varphi \in \Gamma (\Sigma M \otimes \Sigma E)$ which is a solution of the Dirac equation $D\varphi =\vec{H}\cdot \varphi $ on the surface.
More recently, Bayard, Lawn and Roth [@bayard16] studied spinorial immersions of simply connected ${\mbox{Spin}}$-manifolds of arbitrary dimension. The main idea is to use the regular left representation of the Clifford algebra on itself, given by left multiplication, to construct a ${\mbox{Spin}}$-Clifford bundle of spinors. In this bundle, using the Clifford algebra structure, it is possible to define a vector-valued scalar product and, combining this product with a spinor field that satisfies a proper equation, to define a vector-valued closed 1-form whose integral gives an isometric immersion analogous to the Weierstrass representation of surfaces.
This work, [@bayard16], provides a beautiful generalization of the previous work relating the Weierstrass representation to spinors. However, mainly when we are considering complex manifolds, the hypothesis of existence of a ${\mbox{Spin}}$-structure is somewhat restrictive. Complex manifolds always have a canonical ${\mbox{Spin}^{{\mathbb}{C}}}$-structure that can be used to construct spinor bundles, but the existence of a ${\mbox{Spin}}$-structure is related to square roots of the canonical bundle and they do not always exist.
The aim of the present work is to show how the ideas of [@bayard16] can be generalized to spinor bundles associated to ${\mbox{Spin}^{{\mathbb}{C}}}$-structure, providing a more natural setting to complex manifolds. Precisely we prove:
Let $M$ be a simply connected $n$-dimensional manifold and $E\rightarrow M$ a vector bundle of rank $m$; assume that $TM$ and $E$ are oriented and $Spin^{\mathbb{C}}.$ Suppose that $B:TM\times TM\rightarrow E$ is symmetric and bilinear. The following are equivalent:
1. There exists a section $\varphi \in \Gamma (N\sum\nolimits^{ad\mathbb{C}})$ such that $$\nabla _{X}^{\Sigma ^{ad\mathbb{C}}}\varphi =-\frac{1}{2}\sum_{i=1}^{n}e_{i}\cdot B(X,e_{i})\cdot \varphi +\frac{1}{2} i~A^{l}(X)\cdot \varphi ,~~\forall
X\in TM.$$
2. There exists an isometric immersion $F:M\rightarrow \mathbb{R}^{\left(n+m\right) }$ with normal bundle $E$ and second fundamental form $B$.
Furthermore, $F=\int \xi $ where $\xi $ is the $\mathbb{R}^{(n+m)}$-valued $ 1 $-form defined by $$\xi (X):=\left\langle \left\langle X\cdot \varphi ,\varphi \right\rangle
\right\rangle ,~~\forall X\in TM.$$
Adapted Structures
==================
Let $E \rightarrow M$ be a hermitian vector bundle over $M$. A ${\mbox{Spin}^{{\mathbb}{C}}}$-structure on $E$ is defined by the following double covering
$$\xymatrixcolsep{1pc}\xymatrixrowsep{1pc}\xymatrix{ & Spin_{n}^{\mathbb{C}}
\ar[rr]^{p^{\mathbb{C}}=\lambda^{\mathbb{C}} \times l^{\mathbb{C}}} \ar@{^{(}->}[dd] & & SO_n \times S^1 \ar@{^{(}->}[dd] \\ \mathbb{Z}_{2}
\ar[ru] \ar[rd]& & &\\ & P_{Spin_{n}^{\mathbb{C}}}(E) \ar[rr]^{\Lambda^{\mathbb{C}}}
\ar[rd]_{\pi^{\prime}} & & P_{SO_{n}}(E) \times_M P_{S^1}(E) \ar[ld]^{\pi} \\ & & M & }$$
where ${\mbox{Spin}^{{\mathbb}{C}}}$ is the group defined by
$$Spin_n^{\mathbb{C}} = \frac{Spin_n \times S^1}{ \{ (-1,-1) \} },$$
and $S^1=U(1) \subset \mathbb{C}$ is understood as the group of unit complex numbers. As usual, a ${\mbox{Spin}^{{\mathbb}{C}}}$-structure can be viewed as a lift of the transition functions of $E$, $g_{ij}$, to the group ${\mbox{Spin}^{{\mathbb}{C}}}$, $\tilde{g}_{ij}$, but now the transition functions are classes of pairs $\tilde{g}_{ij} = \left[ (h_{ij},z_{ij}) \right]$, where $h_{ij}: U_i \cap U_j \rightarrow Spin_n$ and $z_{ij}: U_i \cap U_j \rightarrow S^1=U(1)$.
The identity on ${\mbox{Spin}^{{\mathbb}{C}}}$ is the class $\left\{ (1,1), (-1,-1) \right\}$. Because of this, neither $h_{ij}$ nor $z_{ij}$ needs to satisfy the cocycle condition, only the class of the pair does. However, $z_{ij}^2$ does satisfy the cocycle condition and defines a complex line bundle $L$, associated with the $P_{S^1}$ principal bundle in the above diagram, called the determinant bundle of the ${\mbox{Spin}^{{\mathbb}{C}}}$-structure.
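Explicitly, on a triple overlap $U_i\cap U_j\cap U_k$ the cocycle condition for the classes gives $(h_{ij}h_{jk},\,z_{ij}z_{jk})=\pm (h_{ik},\,z_{ik})$, so that squaring the $S^1$-component yields $$z_{ij}^2\, z_{jk}^2 = z_{ik}^2 ,$$ independently of the sign ambiguity.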
The description using transition functions is useful to make clear that ${\mbox{Spin}^{{\mathbb}{C}}}$-structures are more general than ${\mbox{Spin}}$-structures. In fact, given a ${\mbox{Spin}}$-structure $P_{{\mbox{Spin}}}(E) \rightarrow P_{SO}(E)$ we immediately get a ${\mbox{Spin}^{{\mathbb}{C}}}$-structure by considering $z_{ij}=1$, in other words, by considering the trivial bundle as the determinant bundle of the structure. On the other hand [@hitchin], a ${\mbox{Spin}^{{\mathbb}{C}}}$-structure produces a ${\mbox{Spin}}$-structure iff the determinant bundle has a square root, that is, iff the functions $z_{ij}$ can be chosen to satisfy the cocycle condition.
Another situation where ${\mbox{Spin}^{{\mathbb}{C}}}$-structures are natural is when we consider an almost complex manifold $(M,g,J)$. In this case the tangent bundle can be viewed as a $U(n)$ bundle, and the natural inclusion $U(n) \xhookrightarrow{} SO(2n)$ produces a canonical ${\mbox{Spin}^{{\mathbb}{C}}}$-structure on the tangent bundle [@friedrich00; @nicolaescu]. For this canonical structure the determinant bundle is identified with $\wedge^{0,n} M$ and the spinor bundle constructed using an irreducible complex representation of ${\mathcal{C}\ell}(2n)$ is isomorphic to $\wedge^{0,*}M = \oplus_{k=0}^n \wedge^{0,k}M$. So, various structures on spinors can be described using known structures of $M$.
Unlike the usual case of ${\mbox{Spin}}$-structures, a metric connection on $E$ is not enough to produce a connection on $P_{{\mbox{Spin}^{{\mathbb}{C}}}}(E)$; for this, we also need a connection on the determinant bundle of the structure, so as to get a connection on $P_{SO}(E) \times P_{S^1}(E)$ and be able to lift this connection to $P_{{\mbox{Spin}^{{\mathbb}{C}}}}(E)$.
To understand the problem of immersions using the Dirac equation in the case of ${\mbox{Spin}^{{\mathbb}{C}}}$-structures, and spinors associated to this structure, we need to understand adapted ${\mbox{Spin}^{{\mathbb}{C}}}$-structures on submanifolds. The difference to the standard ${\mbox{Spin}}$ case is that we need to keep track of the determinant bundle. Using the ideas of [@bar98], we can describe the adapted structure.
Consider a ${\mbox{Spin}^{{\mathbb}{C}}}$ $(n+m)$-dimensional manifold $Q$ and an isometrically immersed $n$-dimensional ${\mbox{Spin}^{{\mathbb}{C}}}$ submanifold $M \xhookrightarrow{} Q$. Let $$\begin{split}
&P_{{\mbox{Spin}^{{\mathbb}{C}}}_{(n+m)}}(Q) \xrightarrow{\Lambda^Q} P_{SO_{(n+m)}}(Q) \times P_{S^1}(Q) \\
& \left. P_{{\mbox{Spin}^{{\mathbb}{C}}}_{(n+m)}}(Q) \right|_M \xrightarrow{\Lambda^Q} \left. P_{SO_{(n+m)}}(Q) \bigg|_M \times P_{S^1}(Q) \right. \\
&P_{{\mbox{Spin}^{{\mathbb}{C}}}_n}(M) \xrightarrow{\Lambda^M} P_{SO_n}(M) \times P_{S^1}(M)
\end{split}$$ be the corresponding ${\mbox{Spin}^{{\mathbb}{C}}}$-structures, and let the cocycles associated to these structures be, respectively, $\tilde{g}_{\alpha \beta}$, $\left. \tilde{g}_{\alpha \beta}\right|_M $ and $\tilde{g}_{\alpha \beta}^1$. If we define the functions $\tilde{g}_{\alpha \beta}^2$ by $$\tilde{g}_{\alpha \beta}^1 \tilde{g}_{\alpha \beta}^2 = \left. \tilde{g}_{\alpha \beta} \right|_M$$ it is easy to see, using an adapted frame, that the two sets of functions $\tilde{g}_{\alpha \beta}^1$ and $\tilde{g}_{\alpha \beta}^2$ commute. This implies that $\tilde{g}_{\alpha \beta}^2$ satisfies the cocycle condition, because both $\tilde{g}_{\alpha \beta}$ and $\tilde{g}_{\alpha \beta}^1$ do. The cocycles $\tilde{g}_{\alpha \beta}^2$ define exactly a ${\mbox{Spin}^{{\mathbb}{C}}}$-structure for the normal bundle $\nu(M)$. With this construction, if $L$, $L_1$ and $L_2$ denote, respectively, the determinant bundles of the ${\mbox{Spin}^{{\mathbb}{C}}}$-structures of $Q$, $M$ and $\nu(M)$, we have the relation $$L = L_1 \otimes L_2 .$$
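To spell out the cocycle property, write $\tilde{g}_{\alpha\beta}^2=(\tilde{g}_{\alpha\beta}^1)^{-1}\left.\tilde{g}_{\alpha\beta}\right|_M$; then, using the commutativity of the two sets of functions on triple overlaps, $$\tilde{g}_{\alpha\beta}^2\,\tilde{g}_{\beta\gamma}^2=(\tilde{g}_{\alpha\beta}^1)^{-1}(\tilde{g}_{\beta\gamma}^1)^{-1}\left.\tilde{g}_{\alpha\beta}\right|_M\left.\tilde{g}_{\beta\gamma}\right|_M=(\tilde{g}_{\alpha\gamma}^1)^{-1}\left.\tilde{g}_{\alpha\gamma}\right|_M=\tilde{g}_{\alpha\gamma}^2 .$$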
Knowing that $\nu(M)$ has a natural ${\mbox{Spin}^{{\mathbb}{C}}}$-structure, we can use the left regular representation of the Clifford algebras on themselves to construct the following ${\mbox{Spin}^{{\mathbb}{C}}}$-Clifford bundles (these bundles will act as spinor bundles) $$\begin{split}
\Sigma^{\mathbb{C}}Q &:= P_{Spin^{\mathbb{C}}_{(n+m)}(Q)}
\times_{\rho_{(n+m)}} \mathbb{C}l_{(n+m)}, \\
\left. \Sigma^{\mathbb{C}}Q \right|_M &:= \left. P_{Spin^{\mathbb{C}}_{(n+m)}(Q)} \right|_M
\times_{\rho_{(n+m)}} \mathbb{C}l_{(n+m)}, \\
\Sigma^{\mathbb{C}} M &:= P_{Spin^{\mathbb{C}}_n(M)}
\times _{\rho_{n}}\mathbb{C}l_{(n)}, \\
\Sigma^{\mathbb{C}} \nu(M) &:=P_{Spin^{\mathbb{C}}_m \nu(M)}
\times _{\rho _{m}} \mathbb{C}l_{(m)}.
\end{split}$$
Using the isomorphism $\mathbb{C}l_n \hat{\otimes} \mathbb{C}l_m \simeq \mathbb{C}l_{(n+m)}$ and standard arguments [@bar98], we get the relation
$$\Sigma^{\mathbb{C}} Q \mid_M \simeq \Sigma^{\mathbb{C}} M \hat{\otimes} \Sigma^{\mathbb{C}} \nu(M) =: \Sigma^{ad \mathbb{C}}.$$
Let $\nabla ^{\Sigma ^{\mathbb{C}}Q},\nabla ^{\Sigma ^{\mathbb{C}}M}$ and $\nabla ^{\Sigma ^{\mathbb{C}}\nu}$ be the connections on $\sum^{\mathbb{C}}Q,\sum^{\mathbb{C}}M$ and $\sum^{\mathbb{C}}\nu(M)$ respectively, induced by the Levi-Civita connections of $P_{SO_{(n+m)}}(Q)$, $P_{SO_{(n)}}(M)$, and $P_{SO_{(m)}}(\nu)$. We denote the connection on $\sum\nolimits^{ad\mathbb{C}}$ by
$$\nabla ^{\Sigma ^{ad\mathbb{C}}} = \nabla ^{\Sigma ^{\mathbb{C}}M\otimes \Sigma^{\mathbb{C}} \nu}:=\nabla
^{\Sigma ^{\mathbb{C}}M}\otimes Id+Id\otimes \nabla ^{\Sigma ^{\mathbb{C}}\nu}.$$
The connections on these bundles are linked by the following Gauss formula:
$$\label{gaussformula}
\nabla_{X}^{\Sigma^{\mathbb{C}} Q} \varphi = \nabla_{X}^{\Sigma^{ad \mathbb{C}} } \varphi + \frac{1}{2} \sum _{i=1}^{n} e_i \cdot B(e_i,X) \cdot \varphi ,$$
where $B:TM \times TM \rightarrow \nu(M)$ is the second fundamental form and $\left\{ e_1 \cdots e_n \right\}$ is a local orthonormal frame of $TM$. Here “$\cdot $” is the Clifford multiplication on $\Sigma^{\mathbb{C}}Q$.
Note that if we have a parallel spinor $\varphi$ in $\Sigma ^{\mathbb{C}}Q$, for example if $Q=\mathbb{R}^{n+m}$, then Eq.(\[gaussformula\]) implies the following generalized Killing equation $$\nabla_{X}^{\Sigma^{ad \mathbb{C}} } \varphi =- \frac{1}{2} \sum _{i=1}^{n} e_i \cdot B(e_i,X) \cdot \varphi .$$
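Taking the Clifford trace of this generalized Killing equation (setting $X=e_j$, multiplying by $e_j$ on the left and summing over $j$) and using the symmetry of $B$ together with $e_j\cdot e_j=-1$, the off-diagonal terms cancel pairwise and one recovers, at least formally, a Dirac-type equation, $$\sum_{j=1}^{n} e_j\cdot \nabla_{e_j}^{\Sigma^{ad \mathbb{C}}} \varphi = \frac{n}{2}\,\vec{H}\cdot \varphi , \qquad \vec{H}:=\frac{1}{n}\sum_{j=1}^{n}B(e_j,e_j),$$ of the kind recalled in the Introduction.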
Constructing the Immersion
==========================
To construct the immersion we need two steps. First we need to construct a vector-valued inner product using the Clifford algebra structure of the ${\mbox{Spin}^{{\mathbb}{C}}}$-Clifford bundle. This first step does not change when we consider ${\mbox{Spin}^{{\mathbb}{C}}}$-structures instead of ${\mbox{Spin}}$-structures. Therefore we just recall the construction of Bayard, Lawn and Roth [@bayard16] in the first subsection.
Second, we need to understand a Gauss-type equation on the manifold. For this step the connection on the determinant bundle of the ${\mbox{Spin}^{{\mathbb}{C}}}$-structures is used, and we show how the equations can be reformulated in this case. This is the principal part of the proof and is done in subsection \[immersion\].
A $\mathbb{C}l_{(n+m)}$-valued inner product
--------------------------------------------
To prove the converse, i.e. to obtain an immersion from spinors satisfying certain equations, we need a ${\mathbb{C}\ell}(n+m)$-valued inner product. It is built from the conjugation
$$\begin{aligned}
\tau :{\mathbb{C}\ell}_{(n+m)}& \rightarrow &{\mathbb{C}\ell}_{(n+m)} \\
\tau (a~e_{i_{1}}e_{i_{2}}\cdots e_{i_{k}}) &:=&(-1)^{k}\bar{a}
~e_{i_{k}}\cdots e_{i_{2}}e_{i_{1}},
\end{aligned}$$ where we also write $\overline{\xi }:=\tau (\xi )$ for $\xi \in \mathbb{C}l_{(n+m)}$. The pairing is then defined by
$$\begin{split}
\left\langle \left\langle \cdot ,\cdot \right\rangle \right\rangle:
\mathbb{C}l_{(n+m)}\times \mathbb{C}l_{(n+m)} &\rightarrow \mathbb{C}l_{(n+m)} \\
(\xi _{1},\xi _{2}) &\mapsto \left\langle \left\langle \xi _{1},
\xi_{2}\right\rangle \right\rangle =\tau (\xi _{2})\xi _{1}.
\end{split} \label{product}$$
This pairing is invariant under the action of $Spin_{(n+m)}^{\mathbb{C}}$: $$\begin{split}
\left\langle \left\langle (g\otimes s)\xi _{1},(g\otimes s)
\xi_{2}\right\rangle \right\rangle &=s\overline{s}\tau
(\xi _{2})\tau (g)g \xi_{1}=\tau (\xi _{2})\xi _{1} =
\left\langle \left\langle \xi _{1}, \xi_{2}\right\rangle
\right\rangle , \\
g\otimes s &\in Spin_{(n+m)}^{\mathbb{C}}\subset \mathbb{C}l_{(n+m)},
\end{split}$$
so the product is well defined on the ${\mbox{Spin}^{{\mathbb}{C}}}$-Clifford bundles, i.e., Eq.(\[product\]) induces a $\mathbb{C}l_{(n+m)}$-valued map: $$\begin{gathered}
\sum\nolimits^{\mathbb{C}}Q\times \sum\nolimits^{\mathbb{C}}Q\rightarrow
\mathbb{C}l_{(n+m)} \\
(\varphi _{1},\varphi _{2})=([p,[\varphi _{1}]],[p,[\varphi _{2}]])\mapsto
\left\langle \left\langle \lbrack \varphi _{1}],[\varphi _{2}]\right\rangle
\right\rangle =\tau ([\varphi _{2}])[\varphi _{1}],\end{gathered}$$ where $[\varphi _{1}]$, $[\varphi _{2}]$ are the representatives of $\varphi_{1},\varphi _{2}$ in the $Spin^{\mathbb{C}}(n+m)$ frame $p\in P_{Spin^{\mathbb{C}}(n+m)}.$
The connection $\nabla ^{\Sigma ^{\mathbb{C}}Q}$ is compatible with the product $\left\langle \left\langle \cdot ,\cdot \right\rangle \right\rangle .$
Fix a local section $s=(e_{1},...,e_{(n+m)}):U\subset M\subset Q\rightarrow P_{SO(n+m)}$ of the frame bundle and a local section $l:U\subset M\subset Q\rightarrow P_{S^{1}}$ of the associated $S^1$-principal bundle; let $w^{Q}:T(P_{SO(n+m)})\rightarrow so(n+m)$ be the Levi-Civita connection of $P_{SO(n+m)}$ and $iA:TP_{S^{1}}\rightarrow i\mathbb{R}$ an arbitrary connection on $P_{S^{1}}$, and denote $w^{Q}(ds(X))=(w_{ij}(X))\in so(n+m)$ and $iA(dl(X))=iA^{l}(X).$
If $\psi =[p,[\psi ]]\ $and $\psi ^{\prime }=[p,[\psi ^{\prime }]]$ are sections of $\sum\nolimits^{\mathbb{C}}Q$ we have: $$\begin{aligned}
\nabla _{X}^{\Sigma ^{\mathbb{C}}Q}\psi &=&\left[ p,X([\psi ])+\frac{1}{2} \sum\nolimits_{i<j}w_{ij}(X)e_{i}e_{j}\cdot \lbrack \psi ]+\frac{1}{2} iA^{l}(X)[\psi ]\right] , \\
\left\langle \left\langle \nabla _{X}^{\Sigma ^{\mathbb{C}}Q}\psi ,\psi^{\prime }\right\rangle \right\rangle &=&\overline{[\psi ^{\prime }]}\left( X([\psi ])+\frac{1}{2}\sum\nolimits_{i<j}w_{ij}e_{i}e_{j}\cdot \lbrack \psi]+\frac{1}{2}iA^{l}(X)[\psi ]\right) , \\
\left\langle \left\langle \psi ,\nabla _{X}^{\Sigma ^{\mathbb{C}}Q}\psi' \right\rangle \right\rangle &=&\overline{\left( X([\psi'])+\frac{1}{2}\sum\nolimits_{i<j}w_{ij}e_{i}e_{j}[\psi ']+\frac{1}{2}iA^{l}(X)[\psi '] \right) }[\psi ] \\
&=&\left( X(\overline{[\psi ']})+\frac{1}{2}\sum \nolimits_{i<j}w_{ij}\overline{e_{i}e_{j}[\psi ']}+\frac{1}{2} \overline{iA^{l}(X)}\,\overline{[\psi ']}\right) [\psi ] \\
&=&\left( X(\overline{[\psi ']})-\frac{1}{2}\sum \nolimits_{i<j}w_{ij}\overline{[\psi ']}e_{i}e_{j}-\frac{1}{2}iA^{l}(X) \overline{[\psi ']}\right) [\psi ],
\end{aligned}$$ then $$\begin{aligned}
\left\langle \left\langle \nabla _{X}^{\Sigma ^{\mathbb{C}}Q}\psi ,\psi^{\prime }\right\rangle \right\rangle +\left\langle \left\langle \psi,\nabla _{X}^{\Sigma ^{\mathbb{C}}Q}\psi ^{\prime }\right\rangle\right\rangle &=&\overline{[\psi ^{\prime }]}X([\psi ])+X(\overline{[\psi^{\prime }]})[\psi ], \\
X\left\langle \left\langle \psi ,\psi ^{\prime }\right\rangle \right\rangle&=&X\left( \overline{[\psi ^{\prime }]}[\psi ]\right) =X(\overline{[\psi ^{\prime }]})[\psi ]+\overline{[\psi ^{\prime }]}X([\psi ]).
\end{aligned}$$
The map $\left\langle \left\langle \cdot ,\cdot \right\rangle \right\rangle:\sum\nolimits^{\mathbb{C}}Q\times \sum\nolimits^{\mathbb{C}}Q\rightarrow \mathbb{C}l_{(n+m)}$ satisfies:
1. $\left\langle \left\langle X\cdot \psi ,\varphi \right\rangle
\right\rangle =-\left\langle \left\langle \psi ,X\cdot \varphi \right\rangle
\right\rangle ,~\psi ,\varphi \in \sum\nolimits^{\mathbb{C}}Q,~X\in TQ.$
2. $\tau \left\langle \left\langle \psi ,\varphi \right\rangle
\right\rangle =\left\langle \left\langle \varphi ,\psi \right\rangle
\right\rangle ,~\psi ,\varphi \in \sum\nolimits^{\mathbb{C}}Q$
This is an easy calculation:
1. $\left\langle \left\langle X\cdot \psi ,\varphi \right\rangle
\right\rangle =\tau \lbrack \varphi ][X\cdot \psi ]=\tau \lbrack \varphi
][X][\psi ]=-\tau \lbrack \varphi ]\tau \lbrack X][\psi ]=-\left\langle
\left\langle \psi ,X\cdot \varphi \right\rangle \right\rangle $
2. $\tau \left\langle \left\langle \psi ,\varphi \right\rangle
\right\rangle =\tau (\tau \lbrack \varphi ][\psi ])=\tau \lbrack \psi
][\varphi ]=\left\langle \left\langle \varphi ,\psi \right\rangle
\right\rangle .$
Note that the same idea, product and properties are valid for the bundles $\sum^{ \mathbb{C}}Q,$ $\sum^{\mathbb{C}}M$, $\sum^{\mathbb{C}} \nu(M)$ and $\sum\nolimits^{ \mathbb{C}}M\hat{\otimes}\sum\nolimits^{\mathbb{C}} \nu(M).$
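Both properties can also be checked numerically. The following is a minimal pure-Python sketch (our own illustration, with ad hoc names, not part of the original argument) that implements the Clifford product with the convention $e_i\cdot e_i=-1$ used in the proofs below, the conjugation $\tau$ and the pairing $\left\langle \left\langle \cdot ,\cdot \right\rangle \right\rangle$, and then tests the two identities on random complex multivectors:

```python
import random
from collections import defaultdict

# A multivector (with e_i * e_i = -1) is a dict mapping a sorted tuple of
# generator indices (a basis blade) to a complex coefficient; () is the scalar.

def blade_mul(a, b):
    """Product of two basis blades; returns (sign, resulting blade)."""
    idx = list(a) + list(b)
    sign = 1
    # sign of the stable sort of the generators: each transposition of two
    # distinct anticommuting generators contributes a factor -1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                sign = -sign
    idx.sort()
    # cancel repeated generators using e_i * e_i = -1
    out, k = [], 0
    while k < len(idx):
        if k + 1 < len(idx) and idx[k] == idx[k + 1]:
            sign, k = -sign, k + 2
        else:
            out.append(idx[k])
            k += 1
    return sign, tuple(out)

def mul(x, y):
    """Clifford product of two multivectors."""
    out = defaultdict(complex)
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, blade = blade_mul(ba, bb)
            out[blade] += s * ca * cb
    return dict(out)

def tau(x):
    """tau(a e_{i1}...e_{ik}) = (-1)^k conj(a) e_{ik}...e_{i1}."""
    # reversal of a k-blade gives (-1)^(k(k-1)/2); with the extra (-1)^k the
    # total sign on the sorted blade is (-1)^(k(k+1)/2)
    return {b: (-1) ** (len(b) * (len(b) + 1) // 2) * c.conjugate()
            for b, c in x.items()}

def pairing(x1, x2):
    """<<x1, x2>> = tau(x2) x1."""
    return mul(tau(x2), x1)

def random_mv(n):
    blades = [tuple(sorted(random.sample(range(n), random.randint(0, n))))
              for _ in range(4)]
    return {b: complex(random.uniform(-1, 1), random.uniform(-1, 1))
            for b in blades}

def close(x, y, tol=1e-12):
    return all(abs(x.get(k, 0) - y.get(k, 0)) < tol for k in set(x) | set(y))

n = 5
psi, phi = random_mv(n), random_mv(n)
X = {(i,): complex(random.uniform(-1, 1)) for i in range(n)}  # real vector

# property 1: <<X.psi, phi>> = -<<psi, X.phi>>
assert close(pairing(mul(X, psi), phi),
             {b: -c for b, c in pairing(psi, mul(X, phi)).items()})
# property 2: tau(<<psi, phi>>) = <<phi, psi>>
assert close(tau(pairing(psi, phi)), pairing(phi, psi))
print("both identities hold numerically")
```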
Spinorial Representation of Submanifolds in $\mathbb{R}^{n+m}$ {#immersion}
--------------------------------------------------------------
Let $M$ be an $n$-dimensional manifold and $E\rightarrow M$ a real vector bundle of rank $m$; assume that $TM$ and $E$ are oriented and $Spin^{\mathbb{C}}.$ Denote by $P_{SO_{n}}(M)$ the frame bundle of $TM$ and by $P_{SO_{m}}(E)$ the frame bundle of $E.$ The respective $Spin^{\mathbb{C}}$ structures are defined as$$\begin{aligned}
\Lambda ^{1\mathbb{C}} &:&P_{Spin_{n}^{\mathbb{C}}}(M)\rightarrow
P_{SO_n}(M)\times P_{S^{1}}(M), \\
\Lambda ^{2\mathbb{C}} &:&P_{Spin_{m}^{\mathbb{C}}}(E)\rightarrow
P_{SO_m}(E)\times P_{S^{1}}(E).\end{aligned}$$ We can define the bundle $P_{S^{1}}$ as the one whose transition functions are the products of the transition functions of $P_{S^{1}}(M)$ and $P_{S^{1}}(E)$. It is not difficult to see that there is a canonical bundle morphism $\Phi :P_{S^{1}}(M)\times _{M}P_{S^{1}}(E)\rightarrow P_{S^{1}}$ such that, in any local trivialization, the following diagram commutes:
$$\xymatrixcolsep{1pc}\xymatrixrowsep{1pc}\xymatrix{
P_{S^{1}}(M)\times _{M}P_{S^{1}}(E) \ar[rr]^{~~~~~~~~~~ \Phi} \ar[dd] & & P_{S^{1}} \ar[dd] \\ & \\
U_{\alpha} \times S^1 \times S^1 \ar[rr]^{~~~~~\phi_\alpha} & & U_{\alpha} \times S^1 }$$
where $\phi_\alpha(x,r,s)=(x,rs), x\in U_{\alpha}, r,s \in S^1.$
Fix the following notation $$\begin{aligned}
\sum\nolimits^{ad\mathbb{C}} &:&=\sum\nolimits^{\mathbb{C}}M\otimes
\sum\nolimits^{\mathbb{C}}E\simeq \left( P_{Spin^{\mathbb{C}}(n)}\times
_{M}P_{Spin^{\mathbb{C}}(m)}\right) \times \mathbb{C}l_{(n+m)}, \\
N\sum\nolimits^{ad\mathbb{C}} &:&=\left( P_{Spin^{\mathbb{C}}(n)}\times
_{M}P_{Spin^{\mathbb{C}}(m)}\right) \times Spin_{(n+m)}^{\mathbb{C}}.\end{aligned}$$
Here $iA^{1}:TP_{S^{1}}(M)\rightarrow i\mathbb{R}$ and $iA^{2}:TP_{S^{1}}(E)\rightarrow i\mathbb{R}$ are arbitrary connections on $P_{S^{1}}(M)$ and $P_{S^{1}}(E)$. Denote local sections by $s=(e_{1},\cdots,e_{n}):U\rightarrow P_{SO_{n}}(M)$, $l_{1}:U\rightarrow P_{S^{1}}(M)$, $l_{2}:U\rightarrow P_{S^{1}}(E),$ and $l=\Phi (l_{1},l_{2}):U\rightarrow P_{S^{1}}$. Now $iA:TP_{S^{1}}\rightarrow i\mathbb{R}$ is the connection defined by $iA(d\Phi (l_{1},l_{2}))=iA_{1}(dl_{1})+iA_{2}(dl_{2}).$ Having established this notation, we have the following:
Let $M$ be a simply connected $n$-dimensional manifold and $E\rightarrow M$ a vector bundle of rank $m$; assume that $TM$ and $E$ are oriented and $Spin^{ \mathbb{C}}.$ Suppose that $B:TM\times TM\rightarrow E$ is symmetric and bilinear. The following are equivalent:
1. There exists a section $\varphi \in \Gamma (N\sum\nolimits^{ad\mathbb{C }})$ such that $$\nabla _{X}^{\Sigma ^{ad\mathbb{C}}}\varphi =-\frac{1}{2}\sum_{i=1}^{n}e_{i} \cdot B(X,e_{i})\cdot \varphi +\frac{1}{2}i~A^{l}(X)\cdot \varphi ,~~\forall
X\in TM. \label{killing1}$$
2. There exists an isometric immersion $F:M\rightarrow \mathbb{R}^{\left(
n+m\right) }$ with normal bundle $E$ and second fundamental form $B$.
Furthermore, $F=\int \xi $ where $\xi $ is the $\mathbb{R}^{(n+m)}$-valued $ 1 $-form defined by $$\xi (X):=\left\langle \left\langle X\cdot \varphi ,\varphi \right\rangle
\right\rangle ,~~\forall X\in TM. \label{form}$$
$2)\Rightarrow 1)$ Since $\mathbb{R}^{n+m}$ is contractible there exists a global section $s:\mathbb{R}^{n+m}\rightarrow P_{Spin^{\mathbb{C}}(n+m)}$, with a corresponding parallel orthonormal basis $h=(E_{1},\cdots ,E_{n+m}): \mathbb{R}^{n+m}\rightarrow P_{SO(n+m)},$ and $l:\mathbb{R}^{n+m}\rightarrow P_{S^{1}},$ $\Lambda^{\mathbb{R}^{n+m}}(s)=(h,l).$ Fix a constant $[\varphi ]\in Spin^{\mathbb{C}}(n+m)\subset \mathbb{C}l_{(n+m)}$ and define the spinor field $\varphi =[s,[\varphi ]]\in \sum\nolimits^{\mathbb{C}}
\mathbb{R}^{n+m}:=P_{Spin^{\mathbb{C}}(n+m)}\times \mathbb{C}l_{(n+m)},$ again denote $w^{Q}(dh(X))=(w_{ij}^{h}(X))\in so(n+m),$ $iA(dl(X))=iA^{l}(X)\in i\mathbb{R},$ $$\begin{aligned}
\nabla _{X}^{\Sigma ^{\mathbb{C}}Q}\varphi &=&\left[ s,X([\varphi ])+\left\{
\frac{1}{2}\sum\nolimits_{i<j}w_{ij}^{h}(X)E_{i}E_{j}+\frac{1}{2} i~A^{l}(X)\right\} \cdot \lbrack \varphi ]\right] \notag
\\
&=&\left[ s,\frac{1}{2}i~A^{l}(X)\cdot \lbrack \varphi ]\right] \notag \\
&=&\frac{1}{2}i~A^{l}(X)\cdot \varphi \label{parallel}
\end{aligned}$$
Finally, restricting $\varphi$ to $\Sigma^{ad\mathbb{C}}$ and applying the Gauss formula Eq.(\[gaussformula\]) we get $$\begin{aligned}
\nabla _{X}^{\Sigma ^{\mathbb{C}}Q}\varphi -\nabla _{X}^{\Sigma ^{ad\mathbb{C }}}\varphi &=&\frac{1}{2}\sum_{i=1}e_{i}\cdot B(X,e_{i})\cdot \varphi \notag
\\
\frac{1}{2}i~A^{l}(X)\cdot \varphi -\nabla _{X}^{\Sigma ^{ad\mathbb{C} }}\varphi &=&\frac{1}{2}\sum_{i=1}e_{i}\cdot B(X,e_{i})\cdot \varphi \notag
\\
\nabla _{X}^{\Sigma ^{ad\mathbb{C}}}\varphi &=&-\frac{1}{2} \sum_{i=1}e_{i}\cdot B(X,e_{i})\cdot \varphi +\frac{1}{2}i~A^{l}(X)\cdot
\varphi . \label{killing}
\end{aligned}$$
$1)\Rightarrow 2)$ The idea here is to prove that the $1$-form $\xi $ of Eq.(\[form\]) gives us an immersion preserving the metric, the second fundamental form and the normal connection. For this purpose, we will present the following lemmas:
Suppose that $\varphi \in \Gamma (N\sum\nolimits^{ad\mathbb{C}})$ satisfies Eq.(\[killing1\]) and define $\xi $ by Eq.(\[form\]), then
1. $\xi $ is $\mathbb{R}^{(n+m)}$-valued $1$-form.
2. $\xi $ is a closed $1$-form, $d\xi =0$
<!-- -->
1. If $\varphi =[p,[\varphi ]]$ and $X=[p,[X]],$ where $[\varphi ]$ and $[X]$ represent $\varphi $ and $X$ in a given frame $p\in P_{Spin^{\mathbb{ C}}(n)}\times P_{Spin^{\mathbb{C}}(m)},$ then $$\xi (X):=\tau \lbrack \varphi ][X][\varphi ]\in \mathbb{R}^{n+m}\subset
Cl_{n+m}\subset \mathbb{C}l_{n+m},$$ because, writing $[\varphi ]=[(g,s)]\in Spin^{\mathbb{C}}$ with $g\in Spin_{(n+m)}$ and $s\in S^{1}$, we have $\tau \lbrack \varphi ][X][\varphi ]=\bar{s}s\,\tau (g)[X]g=g^{-1}[X]g\in \mathbb{R}^{n+m}$.
2. Suppose that at the point $x_{0}\in M$ we have $\nabla ^{M}X=\nabla ^{M}Y=0;$ to simplify, write $\nabla _{X}^{\Sigma ^{ad\mathbb{C}}}\varphi =\nabla
_{X}\varphi $ and $\nabla ^{M}X=\nabla X$, $$\begin{aligned}
X(\xi (Y)) &=&\left\langle \left\langle Y\cdot \nabla _{X}\varphi ,\varphi
\right\rangle \right\rangle +\left\langle \left\langle Y\cdot \varphi
,\nabla _{X}\varphi \right\rangle \right\rangle =(id-\tau )\left\langle
\left\langle Y\cdot \varphi ,\nabla _{X}\varphi \right\rangle \right\rangle
\\
&=&(id-\tau )\left\langle \left\langle \varphi ,\frac{1}{2} \sum\nolimits_{j=1}^{m}Y\cdot e_{j}\cdot B(X,e_{j})\cdot \varphi -\frac{1}{2 }A^{l}(X)iY\cdot \varphi \right\rangle \right\rangle , \\
Y(\xi (X)) &=&(id-\tau )\left\langle \left\langle \varphi ,\frac{1}{2} \sum\nolimits_{j=1}^{m}X\cdot e_{j}\cdot B(Y,e_{j})\cdot \varphi -\frac{1}{2 }A^{l}(Y)iX\cdot \varphi \right\rangle \right\rangle ,
\end{aligned}$$ from now on $$\begin{aligned}
d\xi (X,Y) &=&X(\xi (Y))-Y(\xi (X)) \notag \\
&=&(id-\tau )\left\langle \left\langle \varphi ,\frac{1}{2} \sum\nolimits_{j=1}^{m}\left[ Y\cdot e_{j}\cdot B(X,e_{j})-X\cdot
e_{j}\cdot B(Y,e_{j})\right] \cdot \varphi \right. \right. \notag \\
&&\left. \left. +\frac{1}{2}i\left(
A^{l}(Y)X-A^{l}(X)Y\right) \cdot \varphi \right\rangle \right\rangle \notag \\
&=&(id-\tau )\left\langle \left\langle \varphi ,C\cdot \varphi \right\rangle
\right\rangle,
\end{aligned}$$ with $C=\frac{1}{2}\sum_{j=1}^{m}\left[ Y\cdot e_{j}\cdot B(X,e_{j})-X\cdot
e_{j}\cdot B(Y,e_{j})\right] +\frac{1}{2}A^{l}(Y)iX-\frac{1}{2}A^{l}(X)iY.$ Write $X=\sum_{k=1}^{m}x^{k}e_{k};~Y=\sum_{k=1}^{m}y^{k}e_{k}$ then $$\begin{aligned}
\sum\nolimits_{k=1}^{m}X\cdot e_{k}\cdot B(Y,e_{k})
&=&\sum\nolimits_{j=1}^{m}\sum\nolimits_{k=1}^{m}x^{k}e_{k}\cdot
e_{j}\cdot B(Y,e_{j}) \notag \\ &=&-B(Y,X)+\sum\nolimits_{j=1}^{m}\sum\nolimits
_{\substack{ k=1 \\ k\neq j}}^{m}x^{k}e_{k}\cdot e_{j}\cdot B(Y,e_{j}) \\
\sum\nolimits_{l=1}^{m}Y\cdot e_{k}\cdot B(X,e_{k})
&=&\sum\nolimits_{j=1}^{m}\sum\nolimits_{k=1}^{m}y^{k}e_{k}\cdot
e_{j}\cdot B(X,e_{j}) \notag \\
&=&-B(X,Y)+\sum\nolimits_{j=1}^{m}\sum\nolimits
_{\substack{ k=1 \\ k\neq j}}^{m}y^{k}e_{k}\cdot e_{j}\cdot B(X,e_{j})
\end{aligned}$$ from what $$\begin{aligned}
C &=&\frac{1}{2}\left[ \sum\nolimits_{j=1}^{m}\sum\nolimits_{\substack{
k=1 \\ k\neq j}}^{m}e_{k}\cdot e_{j}\cdot \left[
y^{k}B(X,e_{j})-x^{k}B(Y,e_{j})\right] \right. \notag \\ && \left. +i(A^{l}(Y)X-A^{l}(X)Y)\right] \notag \\
\tau ([C]) &=&-\frac{1}{2}\left[ \sum\nolimits_{j=1}^{m}\sum\nolimits
_{\substack{ k=1 \\ k\neq j}}^{m}\left[ y^{k}B(X,e_{j})-x^{k}B(Y,e_{j}) \right] \right] \cdot e_{j}\cdot e_{k} \notag \\ && + \frac{i}{2}(A^{l}(Y)\left[ X\right]
-A^{l}(X)\left[ Y\right] ) \notag \\
&=&\frac{1}{2}\left[ \sum\nolimits_{j=1}^{m}\sum\nolimits_{\substack{ k=1
\\ k\neq j}}^{m}e_{k}\cdot e_{j}\cdot \left[ y^{k}B(X,e_{j})-x^{k}B(Y,e_{j}) \right] \right] \notag \\
&& + \frac{i}{2}(A^{l}(Y)\left[ X\right] -A^{l}(X)\left[ Y\right]
)=[C].
\end{aligned}$$ Which implies that $$d\xi (X,Y)=(id-\tau )\left\langle \left\langle \varphi ,C\cdot \varphi
\right\rangle \right\rangle =(id-\tau )(\tau \lbrack \varphi ]\tau \lbrack
C][\varphi ])=0.$$
Since $M$ is simply connected and $\xi $ is closed, Poincaré’s Lemma guarantees that there exists $$F:M\rightarrow \mathbb{R}^{\left( n+m\right) }$$ such that $dF=\xi .$ The next lemma allows us to conclude the proof of the theorem.
1. The map $F:M\rightarrow \mathbb{R}^{n+m}$ is an isometric immersion.
2. The map $$\begin{aligned}
\Phi _{E} &:&E\rightarrow M\times \mathbb{R}^{n+m} \\
X &\in &E_{m}\mapsto (F(m),\xi (X))
\end{aligned}$$ is an isometry between $E$ and the normal bundle of $F(M)$ into $\mathbb{R} ^{\left( n+m\right) },$ preserving connections and second fundamental forms.
<!-- -->
1. Let $X,Y\in \Gamma (TM\oplus E),$ consequently $$\begin{aligned}
\left\langle \xi (X),\xi (Y)\right\rangle &=&-\frac{1}{2}\left( \xi (X)\xi
(Y)+\xi (Y)\xi (X)\right) \notag \\ &=&-\frac{1}{2}\left( \tau \lbrack \varphi
][X][\varphi ]\tau \lbrack \varphi ][Y][\varphi ]+\tau \lbrack \varphi
][Y][\varphi ]\tau \lbrack \varphi ][X][\varphi ]\right) \notag \\
&=&-\frac{1}{2}\tau \lbrack \varphi ]\left( [X][Y]+[Y][X]\right) [\varphi
]=\tau \lbrack \varphi ]\left( \left\langle X,Y\right\rangle \right)
[\varphi ] \notag \\
&=&\left\langle X,Y\right\rangle \tau \lbrack \varphi ][\varphi
]=\left\langle X,Y\right\rangle .
\end{aligned}$$ This implies that $F$ is an isometric immersion, and that $\Phi _{E}$ is a bundle map between $E$ and the normal bundle of $F(M)$ into $\mathbb{R}^{n+m}$ which preserves the metrics of the fibers.
2. Denote by $B_{F}$ and $\nabla ^{\prime F}$ the second fundamental form and the normal connection of the immersion $F$. We want to show that: $$\begin{aligned}
i)\xi (B(X,Y)) &=&B_{F}(\xi (X),\xi (Y)), \\
ii)\xi (\nabla _{X}^{\prime }\eta ) &=&(\nabla _{\xi (X)}^{\prime F}\xi
(\eta )),
\end{aligned}$$ for all $X,Y\in \Gamma (TM)$ and $\eta \in \Gamma (E)$.
$i)$ First note that: $$B^{F}(\xi (X),\xi (Y)):=\{\nabla _{\xi (X)}^{F}\xi (Y)\}^{\bot }=\{X(\xi
(Y))\}^{\bot },$$ where the superscript $\bot $ means that we consider the component of the vector which is normal to the immersion. We know that $$\begin{aligned}
X(\xi (Y))&=&(id-\tau )\left\langle \left\langle \varphi ,\frac{1}{2} \sum\nolimits_{j=1}^{m}Y\cdot e_{j}\cdot B(X,e_{j})\cdot \varphi -\frac{1}{2 }A^{l}(X)iY\cdot \varphi \right\rangle \right\rangle \notag \\
&=&(id-\tau )\left\langle \left\langle \varphi ,\frac{1}{2}\left(
\sum\nolimits_{j=1}^{m}\sum\nolimits_{k=1}^{m}y^{k}e_{k}\cdot e_{j}\cdot
B(X,e_{j}) \right. \right. \right. \notag \\
&& -A^{l}(X)iY \Big) \cdot \varphi \bigg\rangle \bigg\rangle \notag \\
&=&(id-\tau )\left\langle \left\langle \varphi ,\frac{1}{2}\left(
\sum\nolimits_{j=1}^{m}y^{j}e_{j}\cdot e_{j}\cdot
B(X,e_{j}) \right.\right. \right. \notag \\
& &\left.\left.\left. +\sum\nolimits_{j=1}^{m}\sum\nolimits_{k=1,k\neq j}^{m}y^{k}e_{k}\cdot e_{j}\cdot B(X,e_{j})-A^{l}(X)iY\right) \cdot \varphi
\right\rangle \right\rangle \notag \\
&=&(id-\tau )\left\langle \left\langle \varphi ,\frac{1}{2}\left(
-B(X,Y)+D\right) \cdot \varphi \right\rangle \right\rangle ,
\end{aligned}$$ where $$\begin{aligned}
D &=& \sum\nolimits_{j=1}^{m}\sum\nolimits_{k=1,k\neq j}^{m}y^{k}e_{k}\cdot
e_{j}\cdot B(X,e_{j})-A^l(X)iY \\
\tau \lbrack D] &=&[D].
\end{aligned}$$ Consequently $$\begin{aligned}
X(\xi (Y)) &=&\frac{1}{2}(id-\tau )\left\langle \left\langle \varphi ,\left(
-B(X,Y)+D\right) \cdot \varphi \right\rangle \right\rangle \\
&=&-\tau \lbrack \varphi ]\tau \lbrack B(X,Y)][\varphi ]=\left\langle
\left\langle B(X,Y)\cdot \varphi ,\varphi \right\rangle \right\rangle \\
&=&\xi (B(X,Y)).
\end{aligned}$$ Therefore we conclude $$\begin{aligned}
B_{F}(\xi (X),\xi (Y)) &:&=B^{F}(\xi (X),\xi (Y)):=\{\nabla _{\xi
(X)}^{F}\xi (Y)\}^{\bot }=\{X(\xi (Y))\}^{\bot } \\
&=&\{\xi (B(X,Y))\}^{\bot }=\xi (B(X,Y)),
\end{aligned}$$ here we used the fact that $F=\int \xi $ is an isometry: $B(X,Y)\in
E\Rightarrow \xi (B(X,Y))\in TF(M)^{\bot }.$ Then $i)$ follows.
$ii)$ First note that $$\begin{aligned}
\nabla _{\xi (X)}^{F}\xi (\eta )&=&\left\{ X(\xi (\eta ))\right\} ^{\bot
}=\left\{ X\left\langle \left\langle \eta \cdot \varphi ,\varphi
\right\rangle \right\rangle \right\} ^{\bot } \notag \\ &=& \left\langle \left\langle
\nabla _{X}\eta \cdot \varphi ,\varphi \right\rangle \right\rangle ^{\bot
}+\left\langle \left\langle \eta \cdot \nabla _{X}\varphi ,\varphi
\right\rangle \right\rangle ^{\bot }+\left\langle \left\langle \eta \cdot
\varphi ,\nabla _{X}\varphi \right\rangle \right\rangle ^{\bot }.
\end{aligned}$$ We will show that: $$\left\langle \left\langle \eta \cdot \nabla _{X}\varphi ,\varphi
\right\rangle \right\rangle ^{\bot }+\left\langle \left\langle \eta \cdot
\varphi ,\nabla _{X}\varphi \right\rangle \right\rangle ^{\bot }=0.$$ In fact $$\begin{aligned}
&&\left\langle \left\langle \eta \cdot \nabla _{X}\varphi ,\varphi
\right\rangle \right\rangle +\left\langle \left\langle \eta \cdot \varphi
,\nabla _{X}\varphi \right\rangle \right\rangle \notag \\
&=&(id-\tau )\left\langle \left\langle \eta \cdot \nabla _{X}\varphi
,\varphi \right\rangle \right\rangle \notag \\
&=&(-id+\tau )\left\langle \left\langle \left[ \frac{1}{2} \sum\nolimits_{j=1}^{m}\eta \cdot e_{j}\cdot B(X,e_{j})\cdot \varphi -\frac{ 1}{2}A^{l}(X)i\eta \cdot \varphi \right] ,\varphi \right\rangle \right\rangle
\notag \\
&=&(-id+\tau )\left\langle \left\langle \left[ -\frac{1}{2} \sum\nolimits_{j=1}^{m}\sum\nolimits_{p=1}^{n}\sum \nolimits_{k=1}^{n}n^{p}b_{j}^{k}e_{j}\cdot f_{p}\cdot f_{k}-\frac{1}{2} A^{l}(X)i\eta \right] \cdot \varphi ,\varphi \right\rangle \right\rangle \notag \\
&=&(-id+\tau )\left\langle \left\langle \left[ \frac{1}{2} \sum\nolimits_{j=1}^{m}\sum\nolimits_{p=1}^{n}n^{p}b_{j}^{p}e_{j} \right. \right. \right. \notag \\
&& \left. \left. \left. -\frac{1}{ 2}\sum\nolimits_{j=1}^{m}\sum\nolimits_{p=1}^{n}\sum\nolimits_{k=1,k\neq
p}^{n}n^{p}b_{j}^{k}e_{j}\cdot f_{p}\cdot f_{k}-\frac{1}{2}A^{l}(X)i\eta \right]
\cdot \varphi ,\varphi \right\rangle \right\rangle ,
\end{aligned}$$ from what $$\begin{aligned}
&&\left\langle \left\langle \eta \cdot \nabla _{X}\varphi ,\varphi \right\rangle
\right\rangle +\left\langle \left\langle \eta \cdot \varphi ,\nabla _{X}\varphi
\right\rangle \right\rangle \\
&=&\tau \lbrack \varphi ][\frac{1}{2}\sum_{j=1}^{n} \sum_{l=1}^{m}n^{l}b_{j}^{l}e_{j}][\varphi ]+\tau \lbrack \varphi ][\frac{1}{ 2}\sum_{j=1}^{n}\sum_{l=1}^{m}n^{l}b_{j}^{l}e_{j}][\varphi ] \\
&=&\tau \lbrack \varphi
][\sum_{j=1}^{n}\sum_{l=1}^{m}n^{l}b_{j}^{l}e_{j}][\varphi ]=\tau \lbrack
\varphi ][V][\varphi ]=:\xi (V)\in TF(M) \\
&\Rightarrow &\left\langle \left\langle \eta \cdot \nabla _{X}\varphi
,\varphi \right\rangle \right\rangle ^{\bot }+\left\langle \left\langle \eta
\cdot \varphi ,\nabla _{X}\varphi \right\rangle \right\rangle ^{\bot }=0.
\end{aligned}$$ In conclusion $$\nabla _{\xi (X)}^{F}\xi (\eta )=\left\langle \left\langle \nabla _{X}\eta
\cdot \varphi ,\varphi \right\rangle \right\rangle ^{\bot }=\xi (\nabla _{X}\eta )^{\bot }=\xi (\nabla
_{X}^{\prime }\eta ).$$ Thus $ii)$ follows.
With these Lemmas the theorem is proved.
[1]{}
Bär, C., Extrinsic Bounds for Eigenvalues of the Dirac Operator, *Annals of Global Analysis and Geometry*, **16(6)**, 573-596 (1998).
Bayard, P., Lawn, M. A., Roth, J., Spinorial representation of surfaces into 4-dimensional space forms, *Ann. Glob. Anal. Geom.* **44** 433-453 (2013).
Bayard, P., On the spinorial representation of spacelike surfaces into $4$-dimensional Minkowski space, *Journal of Geometry and Physics*, **74**, 289-313 (2013).
Bayard, P., Lawn, M. A., Roth, J., Spinorial Representation of Submanifolds in Riemannian Space Forms, to appear in Pacific Journal of Mathematics. \[arXiv:1505.02935v4 \[math-ph\]\] 2016.
Friedrich, T., On the Spinor Representation of Surfaces in Euclidean $3$-space, *Jour. of Geom. and Phys.* **28**, 143-157 (1998).
Friedrich, T., *Dirac Operators in Riemannian Geometry*, Graduate Studies in Mathematics **25**, American Mathematical Soc., Providence, 2000.
Hitchin, N., Harmonic Spinors, *Advances in Mathematics*, **14(1)**, 1-55 (1974).
Lawn, M. A., Immersions of Lorentzian surfaces in $\mathbb{R}^{2,1},$ *Jour. of Geom. and Phys* **58** 683–700 (2008).
Lawn, M. A., Roth, J., Isometric immersions of Hypersurfaces into 4-dimensional manifolds via spinors, *Diff. Geom. Appl.* **28** **2** 205-219 (2010).
Lawn, M. A., Roth, J., Spinorial Characterizations of Surfaces into 3-dimensional Pseudo-Riemannian Space Forms, *Math. Phys. Anal. Geom.* **14** 185-195 (2011).
Lawson, H. B. Jr. and Michelsohn, M.-L., *Spin Geometry*, Princeton University Press, Princeton, 1989.
Morel, B., Surfaces in $S^{3}$ and $H^{3}$ via spinors, *Séminaire de Théorie spectrale et géométrie(Grenoble)*, **23** 131-144 (2004-2005).
Nicolaescu, L. I., *Notes on Seiberg-Witten Theory*, American Mathematical Society, Princeton, 2000.
[^1]: [email protected]
[^2]: [email protected]
---
abstract: 'The combination of a thin disk and a narrowly-collimated jet is a typical structure that is observed in the vicinity of a massive object such as AGN, black hole or YSO. Despite a large variety of their scales and possible diversity of involved processes, a simple and universal principle dictates the geometric similarity of the structure; we show that the singularity at the origin ($r=0$) of the Keplerian rotation ($V_\theta \propto r^{-1/2}$) is the determinant of the structure. The collimation of jet is the consequence of the alignment —so-called Beltrami condition— of the flow velocity and the “generalized vorticity” that appears as an axle penetrating the disk (the vorticity is generalized to combine with magnetic field as well as to subtract the friction force causing the accretion). Typical distributions of the density and flow velocity are delineated by a similarity solution of the simplified version of the model.'
address: |
$^{1}$Graduate School of Frontier Sciences, The University of Tokyo, Chiba 277-8561, Japan\
$^{2}$Faculty of Exact and Natural Sciences, Javakhishvili Tbilisi State University, Tbilisi 0128, Georgia\
$^{3}$Andronikashvili Institute of Physics, Javakhishvili Tbilisi State University, Tbilisi 0177, Georgia
author:
- 'Z. Yoshida$^{1}$ and N. L. Shatashvili$^{2,3}$'
title: 'Beltrami structures in disk-jet system: alignment of flow and generalized vorticity'
---
Introduction
============
An accretion disk often combines with a spindle-like jet of ejecting gas, and constitutes a typical structure that accompanies a massive object of various scales, ranging from young stars [@bib:Hartigan] to galactic nuclei [@bib:Jones]. The mechanism that rules each part of different systems might not be universal. Since the early 70s, after the discovery of radio galaxies and quasars (see e.g. [@bib:begelman3] and references therein), the main evidence, coming from the different classes of astrophysical systems that are observed to produce collimated jets near a massive central object, proves the direct association of such jets with an accretion disk, although possibly reflecting different accretion regimes; the converse is not true for some objects, whose accretion disks do not require collimated jets (viscous transport/disk winds may play a similar role in the energy balance) [@bib:bland1; @bib:livio; @bib:ferrari]. The macroscopic disk-jet geometry, however, bears a marked similarity despite the huge variety of the scaling parameters such as Lorentz factor, Reynolds number, Lundquist number, ionization fractions, etc. [@bib:bland1; @bib:bland1-2; @bib:livio; @bib:livio2; @bib:ferrari; @bib:zanni]. The search for a general principle that dictates such similarity provided the stimulus for this paper.
Before formulating a model of the global structure, we start by reviewing the basic properties of disk-jet systems. In the disk region, the transport processes of mass, momentum and energy depend strongly on the scaling parameters; many different mechanisms have been proposed and examined carefully. The classical (collisional) processes are evidently insufficient to account for the accretion rate, thus turbulent transports (involving magnetic perturbations) must be invoked[@bib:BalbusHawley]. Winds may also remove the angular momentum from the disk[@bib:magn; @bib:pellet; @bib:bogov]. The connection of the disk and the jet is more complicated. While it is evident that the mass and energy of the jet are fed by the accreting flow, the mechanism and process of mass/energy transfer are still not clear. It is believed that the major constituent of jets is the material of an accretion disk surrounding the central object, although for the fastest outflows the contributions to the total mass flux may come from outer regions as well [@bib:zanni]. In AGN one may think of taking some energy from the central black hole. Livio[@bib:livio] summarizes the conjectures as follows: (i) powerful jets are produced by systems in which on top of an accretion disk threaded by a vertical field, there exists an additional source of energy/wind, possibly associated with the central object (for example, stellar wind from the protostar may accelerate YSO jets, as estimated by Refs[@bib:ferreira; @bib:ferreira2; @bib:ferreira3]); (ii) launching of an outflow from an accretion disk requires a hot corona or a supply from some additional source of energy (see also [@bib:matsumoto; @bib:matsumoto2; @bib:matsumoto3]); (iii) an extensive hot atmosphere around the compact object can provide additional acceleration. Magnetic fields are considered to play an important role in defining the local accretion[@bib:bland1; @bib:bland1-2; @bib:bland2; @bib:bland3; @bib:bland5; @bib:zanni]. When a magnetic field is advected inwards by accreting material and/or generated locally by some mechanism, i.e. if there is a bundle of magnetic field lines which is narrowest near the origin (central object) and broader afar off, the centrifugal force due to rotation may boost the jet along the magnetic field lines up to a super-Alfvénic speed[@bib:begelman3; @bib:begelman4; @bib:begelman5; @bib:bland4; @bib:shibata; @bib:shibata2; @bib:anderson2]. In the case of AGN, there is an alternative idea suggested by Blandford & Znajek[@bib:bland1] based on electro-dynamical processes extracting energy from a rotating black hole. Extra-galactic radio jets might be accelerated by highly disorganized magnetic fields that are strong enough to dominate the dynamics until the terminal Lorentz factor is reached[@bib:begelman-pressure]. Following the twin-exhaust model by Blandford & Rees[@bib:bland], the collimation under this scenario is provided by the stratified thermal pressure from an external medium. The acceleration efficiency then depends on the pressure gradient of the medium. In addition to the energetics to account for the acceleration of ejecting flow, we have to explain how the streamlines/magnetic-field lines change the topology through the disk-jet connection[@bib:Shiraishi] (see also [@bib:shakura; @bib:bland1-2]).
Despite the diversity and complexity of holistic processes, there must yet be a simple and universal principle that determines the geometric similarity of disk-jet compositions. In the present paper we will show that the collimated structure of jet is a natural consequence of the *alignment* of the velocity and the “generalized vorticity” —here the notion of *vorticity* will be generalized to combine with magnetic field (or, electromagnetic vorticity) as well as to include the effective friction causing the accretion. In a Keplerian thin disk, the vorticity becomes a vertical vector with the norm $\propto r^{-3/2}$ ($r$ is the radius from the center of the disk), which appears as a spindle of a thin disk. Then, *alignment* is the only recourse in avoiding singularity of force near the axis of Keplerian rotation. We are not going to discuss the “process” that produces the global structure; we will, however, elucidate a “necessary condition” imposed on the structure that can live long.
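Explicitly, for the Keplerian rotation $\bm{V}=V_\theta(r)\,\bm{e}_\theta$ with $V_\theta=\sqrt{MG/r}$ ($M$ being the central mass), the vorticity is purely vertical, $$\nabla\times\bm{V}=\frac{1}{r}\,\partial_r\!\left(rV_\theta\right)\bm{e}_z=\frac{\sqrt{MG}}{2}\,r^{-3/2}\,\bm{e}_z ,$$ so it diverges toward the axis while the momentum itself stays concentrated in the thin disk; it is this singular vertical “axle” that forces the alignment condition derived below.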
To demonstrate how the alignment condition arises and how it determines the *singular* structure of a *thin* disk and *narrowly-collimated* jet, we invoke the simplest (minimum) model of magnetohydrodynamics (Sec.\[sec:formulation\]). The alignment condition (so-called *Beltrami* relation) will be derived in Sec.\[Sec:Disk-Jet\]. In Sec.\[sec:Beltrami\], we will formulate a system of equations which gives general *Beltrami structures* satisfying the alignment condition. In Sec.\[sec:similarity-solution\], we will give an analytic solution to a simplified version of the equations by a similarity-solution method.
Momentum Equation {#sec:formulation}
=================
We start by formulating a magnetohydrodynamic (MHD) model. Let $ \bm{P} = \rho \bm{V}$ denote the momentum density, where $\rho$ is the mass density and $\bm{V}$ is the (ion) flow velocity. The momentum balance equations for the ion and electron fluids are $$\begin{aligned}
& &
\partial_t \bm{P} + \nabla\cdot(\rho\bm{V}\otimes\bm{V})
\nonumber \\
& &~~~~= \epsilon^{-1}\rho\left(\bm{V}\times\bm{B}-\partial_t\bm{A}\right)
\nonumber \\
& &~~~~~~~~~-\rho\nabla(\phi+\varphi) - \nabla p_i - \nu \bm{P},~~~~
\label{momentum-i-1}
\\
& &~~~0= \epsilon^{-1}\rho\left(\bm{V}_e \times\bm{B} -\partial_t\bm{A}\right)
% \nonumber \\
% & &~~~~~~~~~
-\rho\nabla\varphi +\nabla p_e,~~~~
%+\nu_e \rho \bm{V}_e,
\label{momentum-e-1}\end{aligned}$$ where $\bm{B}$ is the magnetic field, $\varphi$ and $\bm{A}$ are the the electromagnetic potentials, $\phi$ is the gravity potential, $p_i$ and $p_e$ are the ion and electron pressures, and $\nu$ is the (effective) friction coefficient. We are assuming singly charged ions, for simplicity. The variables are normalized as follows: We choose a representative flow velocity $V_0$ and a mass density $\rho_0$ in the disk, and normalize $\bm{V}$ and $\rho$ by these units. The energy densities $|\bm{B}|^2/8\pi$ , $e\varphi$, $\rho\phi$, $p_i$ and $p_e$ are normalized by the unit kinetic energy density $${\cal{E}}_0=\frac{\rho_0 V_0^2}{2}. \label{Unit_energy}$$ The independent variables (coordinate $\bm{x}$ and time $t$) are normalized by the system size $L_0$ and the corresponding transit time $T_0=L_0/V_0$. The scale parameter $\epsilon$ is defined by $$\epsilon = \frac{\delta_i}{L_0},$$ where $\delta_i = mc/\sqrt{4\pi e^2\rho_0}$ is the ion inertia length ($m$: ion mass, $e$: elementary charge).
In equation(\[momentum-e-1\]), we are neglecting the inertia and the gravitation of electrons, as well as the electric resistivity. The electron velocity $\bm{V}_e$ is given by $$\bm{V}_e = \bm{V}- \epsilon \rho^{-1}\nabla\times\bm{B} .$$ The system (\[momentum-i-1\])-(\[momentum-e-1\]) includes a finite dissipation by the effective friction force on the (ion) momentum, not by conventional viscosities or resistivity. This is for the simplicity of the analysis. We do need a finite dissipation to allow accretion, but the detailed mechanism of dissipation is not essential in the scope of our arguments.
In this paper, we consider stationary solutions, so we set $\partial_t=0$. To study the macroscopic structure of an astronomical system, we may assume $\epsilon \ll 1$. Let us expand $\bm{B}$ as $$\bm{B} = \bm{B}^{(0)} + \epsilon \bm{B}^{(1)} + \cdots,$$ where the magnitude of each $\bm{B}^{(n)}$ is of order unity. All other independent and dependent variables are assumed to be at most of order unity. Then, equation(\[momentum-e-1\]) reads as $$\begin{aligned}
0&=&\epsilon^{-1}\rho\bm{V} \times\bm{B}^{(0)}
\\ & &
+\rho\left[
\bm{V}\times\bm{B}^{(1)}-\rho^{-1}(\nabla\times\bm{B}^{(0)})\times\bm{B}^{(0)}\right]
\\ & &
-\rho\nabla\varphi +\nabla p_e %+\nu_e \rho \bm{V}
+ O(\epsilon).\end{aligned}$$ The term of order $\epsilon^{-1}$ demands $$\bm{B}^{(0)} = \mu \bm{P}, \label{Beltrami-0}$$ where $\mu$ is a certain scaler function, which is the reciprocal Alfvén Much number. Operating divergence on both sides of equation (\[Beltrami-0\]), we find $\nabla\cdot(\mu \bm{P}) =
\bm{P}\cdot\nabla\mu=0$. From order $\epsilon^{0}$ terms, we obtain $$\bm{V}\times\bm{B}^{(1)} = \rho^{-1}(\nabla\times\bm{B}^{(0)})\times\bm{B}^{(0)}
+\nabla\varphi - \rho^{-1}\nabla p_e. % -\nu_e \bm{V} .$$ Using these relations in equation(\[momentum-i-1\]), we obtain $$\nabla\cdot(\rho\bm{V}\otimes\bm{V}) = [\nabla\times(\mu\bm{P})]\times(\mu\bm{P})
-\rho\nabla\phi - \nabla p - \nu \bm{P}, \label{momentum-i-1-scaled}$$ where $p=p_i+p_e$.
In order to derive a term that balances with the friction term, we decompose the “inertia term” \[the left-hand side of equation(\[momentum-i-1-scaled\])\] as follows: we first write $$\rho = \rho_1 \rho_2 \label{decomposition1}$$ ($\rho_2$ will be determined as a function of $\nu$ in equation(\[friction-balance\])), and denote $$\bm{{P}_1} = \rho_1\bm{V}, \quad \bm{P}_2 = \rho_2 \bm{V}. \label{decomposition2}$$ Using these variables, we may write $$\begin{aligned}
\nabla\cdot(\rho\bm{V}\otimes\bm{V})&=&\nabla\cdot(\bm{P}_1\otimes\bm{P}_2)
\nonumber \\
&=&(\nabla\cdot\bm{P}_1)\bm{P}_2 + (\bm{P}_1\cdot\nabla)\bm{P}_2 .~~
\label{decomposition3}\end{aligned}$$ In the conventional formulation of fluid mechanics, we choose $\rho_2=1$ and $\rho_1=\rho$. Then, $\nabla\cdot(\rho\bm{V}\otimes\bm{V}) = (\nabla\cdot\bm{P})\bm{V}
+ \rho(\bm{V}\cdot\nabla)\bm{V}$. Combining with $\partial_t
\bm{P}=\rho\,\partial_t\bm{V} + \bm{V}\,\partial_t \rho$, and using the mass conservation law $\partial_t \rho + \nabla\cdot\bm{P}=0$, we observe that the left-hand side of equation(\[momentum-i-1-scaled\]) reduces to the standard inertia term $\rho(\bm{V}\cdot\nabla)\bm{V}$ (here we are considering the steady state with $\partial_t\bm{V} =0$). In the present analysis, however, we choose a different separation of the inertia term to match the term $(\nabla\cdot\bm{P}_1)\bm{P}_2$ with the friction term $-\nu\bm{P}$; this strategy will be explained in the next section.
Multiplying $(\rho_2/\rho_1)$ on both sides of equation(\[momentum-i-1-scaled\]), we obtain $$\begin{aligned}
\bm{P}_2\times\bm{\Omega} &=& \frac{1}{2}\nabla P_2^2 + \rho_2^2 \nabla\,(\phi + h)
\nonumber \\
& & +\rho_2\nu\,\bm{P}_2 + \frac{\rho_2}{\rho_1}\,(\nabla\cdot\bm{P}_1)\bm{P}_2 ,
% (\bm{P}_2\cdot\nabla)\bm{P}_2
% = \bm{P}_2 \times[\epsilon^{-1}\rho_2\bm{B}]
% - (\rho_2/\rho_1) [ \rho\nabla\Phi
% + \nabla p + \nu \bm{P} + (\nabla\cdot\bm{P}_1)\bm{P}_2] .
\label{momentum-2}\end{aligned}$$ where $$\begin{aligned}
\bm{\Omega} &=& \nabla\times\bm{P}_2 -
\mu\rho_2\nabla\times(\mu\,\bm{P})
\label{generalized-vorticity}
\\
&=& \nabla\times\bm{P}_2
- \mu\rho_2\nabla\times\bm{B}^{(0)}
\nonumber
% \tilde{\bm{\Omega}_2} = \bm{\Omega}_2 + \epsilon^{-1}\rho_2\bm{B}
% = \nabla\times\bm{P}_2 + \epsilon^{-1}\rho_2\bm{B},\end{aligned}$$ is a *generalized vorticity*, and $h$ is the enthalpy density ($\nabla h = \rho^{-1}\nabla p$).
Disk-Jet Structure {#Sec:Disk-Jet}
==================
Here we assume toroidal symmetry ($\partial_\theta =0$ in the $r$-$\theta$-$z$ coordinates). We consider a massive central object (a singularity at the origin), which yields $\phi = -MG/r$ (we may neglect the mass in the disk in evaluating $\phi$). Then, the flow velocity is $\bm{V} \approx V_\theta \bm{e}_\theta$ with the Keplerian velocity $V_\theta \propto r^{-1/2}$ in the disk region. The vorticity is $\nabla\times\bm{V} = \Omega_z \bm{e}_z$ with $\Omega_z \propto r^{-3/2}$. The momentum is strongly localized in the thin disk, and the vorticity diverges near the axis (in this macroscopic view). This singular configuration allows only a special geometric structure to emerge. By equation(\[momentum-2\]), following conclusions are readily deducible:
\(i) In the disk, a radial flow (which is much smaller than $V_\theta$) is caused by the friction. The friction force $\rho_2\nu
\,\bm{P}_2$ is primarily in the azimuthal (toroidal) direction, which may be balanced with the term $(\rho_2/\rho_1)\,(\nabla\cdot\bm{P}_1)\bm{P}_2$ that has been extracted from the inertia term $\nabla\cdot(\rho\bm{V}\otimes\bm{V})$; see equation(\[decomposition3\]). Using the steady-state mass conservation law $\nabla\cdot\bm{P}=0$, we observe $$\begin{aligned}
\rho_1^{-1} \nabla\cdot\bm{P}_1 &=& \rho_1^{-1} \nabla\cdot(\bm{P}/\rho_2)
\\
&=& \rho_2 \bm{V}\cdot\nabla\rho_2^{-1} = - \bm{V}\cdot\nabla\log\rho_2 .\end{aligned}$$ Hence, the balance of the friction and the partial inertia term demands $$\bm{V}\cdot\nabla\log\rho_2 = \nu, \label{friction-balance}$$ which determines the parameter $\rho_2$. The remaining part $\rho_1$ of the density is, then, determined by the mass conservation law: From $$\begin{aligned}
\nabla\cdot\bm{P}&=&\nabla\cdot(\rho\bm{V})
\\
&=& \rho_2\bm{V}\cdot\nabla\rho_1
+ \rho_1\bm{V}\cdot\nabla\rho_2 +
\rho_1\rho_2\nabla\cdot\bm{V}
\\
&=& 0\end{aligned}$$ and equation(\[friction-balance\]), we obtain a relation $$\bm{V}\cdot\nabla\log\rho_1 = -\nabla\cdot\bm{V}-\nu.
\label{friction-balance'}$$
\(ii) After balancing the third and fourth terms in equation(\[momentum-2\]), the remaining terms must not have an azimuthal (toroidal) component. In fact, the right-hand side (gradients) have only poloidal ($r$-$z$ plane) components, and hence, the left-hand side cannot have a toroidal component.
\(iii) In the vicinity of the axis threading the central object, the flow $\bm{V}(=\bm{P}_2/\rho_2$) must *align* to the *generalized vorticity* $\bm{\Omega}$ (that is dominated by $\nabla\times\bm{V} \propto
r^{-3/2}\bm{e}_z$), producing the collimated jet structure, to minimize the Coriolis force $\bm{V}\times\bm{\Omega}$: Otherwise, the remaining potential forces (gradients of potentials which have only poloidal components) cannot balance with the rotational (toroidal) Coriolis force. The alignment condition, which we call *Beltrami condition*, reads as $$\bm{\Omega} = \lambda \bm{P}, \label{Beltrami1}$$ where $\lambda$ is a certain scalar function. The pure electromagnetic Beltrami condition is the well-known *force-free* condition $\nabla\times\bm{B} =
\lambda\bm{B}$[@bib:chandrasekhar; @bib:low; @bib:yoshida-giga]. Equation (\[Beltrami1\]) demands the alignment of the generalized vorticity and momentum[@bib:beltrami].
\(iv) When the Beltrami condition eliminates the left-hand side of equation(\[momentum-2\]), the remaining potential forces must balance and achieve the *Bernoulli condition*[@bib:beltrami] which reads as $$\begin{aligned}
\frac{1}{2\rho_2^2}\nabla P_2^2 + \nabla( \phi + h)
% \nonumber \\
&=& \nabla \left( \frac{1}{2}V^2 + \phi + h \right) + V^2 \nabla \log\rho_2
\nonumber \\
&=& 0.
\label{Bernoulli-1}\end{aligned}$$
The system of determining equations is summarized as follows: By equation(\[friction-balance\]), we determine the “artificial ingredient” $\rho_2$ for a given $\nu$. This equation involves $\bm{V}=\bm{P}/\rho$ which is governed by the Beltrami equation (\[Beltrami1\]). After determining $\bm{V}$ and $\rho_2$, we can solve the Bernoulli equation (\[Bernoulli-1\]) to determine the enthalpy $h$ (the gravitational potential is approximated by $\phi=-MG/r$).
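As a simple illustration of the first step, equation (\[friction-balance\]) integrates along each streamline: denoting by $s$ the arc length along a streamline of $\bm{V}$, so that $\bm{V}\cdot\nabla = |\bm{V}|\,d/ds$, we obtain $$\rho_2(s) = \rho_2(s_0)\,\exp\left( \int_{s_0}^{s} \frac{\nu}{|\bm{V}|}\, ds' \right),$$ i.e., the factor $\rho_2$ grows monotonically along the flow wherever the friction coefficient $\nu$ is positive.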
Beltrami Vortex Structure {#sec:Beltrami}
=========================
General two-dimensional structure {#subsec:general2D}
---------------------------------
In this section, we rewrite the Beltrami-Bernoulli equations (\[friction-balance\])-(\[Bernoulli-1\]) in a more manageable form by invoking the Clebsch parameterization[@bib:yoshida2009]. In an axisymmetric geometry, the divergence-free vector $\bm{P}$ may be parameterized as $$\bm{P} = \nabla\psi\times\nabla\theta + I \nabla \theta,
\label{Grad-form-1}$$ where $I = \rho r V_\theta$. Both $\psi$ and $I$ do not depend on $\theta$. Since $\bm{P}\cdot\nabla\psi=0$, the level sets (contours) of $\psi$ are the streamlines of $\bm{P}$ (or those of $\bm{V}=\bm{P}/\rho$). Using the expression (\[Grad-form-1\]) in equation(\[friction-balance\]) yields $$\begin{aligned}
\nu = \frac{1}{\rho} \bm{P}\cdot\nabla \log \rho_2 &=&
\frac{1}{\rho}\nabla\log\rho_2\times\nabla\psi\cdot\nabla\theta
\nonumber
\\
&\equiv& \frac{1}{r\rho}\{\log\rho_2,\psi\},
\label{friction-balance-2}\end{aligned}$$ where $\{a,b\} \equiv (\partial_r b) (\partial_z a) - (\partial_r
a)(\partial_z b)$. For a given set of $\bm{P}$, $\rho$ and $\nu$, we can solve equation(\[friction-balance-2\]) to determine $\rho_2$, as well as $\rho_1=\rho/\rho_2$, which is consistent with equation(\[friction-balance'\]).
Let us rewrite the momentum equation (\[momentum-2\]) using the Clebsch parameterization (\[Grad-form-1\]). We observe $$\begin{aligned}
\bm{\Omega} &=& \nabla\times\left(\rho_1^{-1}\bm{P}\right) -
\mu\rho_2\nabla\times\left(\mu\bm{P}\right)
% =\nabla\times [\rho_1^{-1} \nabla\psi\times\nabla\theta + I_2 \nabla \theta]
\nonumber
\\
&=& -\left[ \left(\rho_1^{-1}-\mu^2\rho_2 \right){\cal L}\psi
% \right.
% \nonumber \\
% & & ~~~\left.
+ \nabla\psi\cdot\left(\nabla\rho_1^{-1} -\mu\rho_2\nabla\mu \right) \right] \nabla\theta
\nonumber \\
& &
+ \left[ \nabla \left(\rho_1^{-1}I \right) - \mu\rho_2
\nabla \left( \mu I \right) \right] \times \nabla \theta ,~
\label{Grad-form-2}\end{aligned}$$ where $${\cal L}\psi \equiv r \partial_r(r^{-1}\partial_r \psi) + \partial_z^2 \psi.$$ As mentioned above, $\bm{P}_2\times\bm{\Omega}$ may not have a toroidal component, i.e., $$\left[ \nabla \left(\rho_1^{-1}I \right) - \mu\rho_2
\nabla \left( \mu I \right) \right] \times \nabla\psi =0,$$ which is equivalent to the existence of a scalar function $\lambda$ such that $$\begin{aligned}
\lambda \nabla\psi&=&
\nabla \left(\rho_1^{-1}I \right) - \mu\rho_2
\nabla \left( \mu I \right)
\nonumber \\
&=&
(\rho_1^{-1} - \mu^2\rho_2)\nabla I
+ I \nabla \rho_1^{-1}
% \nonumber \\
% & &
- \mu\rho_2 I \nabla\mu.
\label{Beltrami-I}\end{aligned}$$ The poloidal component of the momentum equation (\[momentum-2\]) reads $$\begin{aligned}
& & \left[ \left(\rho_1^{-1}-\mu^2\rho_2 \right)({\cal L}\psi)
\right.
\nonumber \\
& & ~~\left. + \nabla\psi\cdot\left(\nabla\rho_1^{-1} -\mu\rho_2\nabla\mu \right) \right]\nabla\psi
+ I \nabla I
\nonumber
\\
& & ~~~~
= - \frac{r^2}{2}\nabla \left[r^{-2}\rho_1^{-2}(|\nabla\psi|^2+ I^2) \right]
\nonumber
\\
& & ~~~~~~~~-r^2 \rho_2^2 \nabla(\phi + h) .
\label{Grad-form-3}\end{aligned}$$
We have to determine a self-consistent set of functions $\psi$, $I$, $h$, $\rho_1$, $\rho_2$, $\lambda$ and $\mu$ (the friction coefficient $\nu$ and the gravitational potential $\phi=-MG/r$ are given functions). Our system of equations consists of (\[friction-balance-2\]), (\[Beltrami-I\]) and (\[Grad-form-3\]), which are simultaneous nonlinear partial differential equations including hyperbolic parts. Since $\bm{P}\cdot\nabla\mu = 0$ \[see equation(\[Beltrami-0\])\], we may assume that $\mu = \mu(\psi)$, and give this function as “Cauchy data” (an arbitrary function of $\psi$ that is constant along the characteristics). Another function (for example $\rho_1$) may be arbitrarily prescribed to leave five unknown functions to be determined by five equations of the system. However, the hyperbolic parts must be integrated from Cauchy data, not from boundary values. We note that the factor $(\rho_1^{-1}-\mu^2\rho_2)$ multiplying the elliptic operator ${\cal L}$ in equation(\[Grad-form-3\]) may cause the *Alfvén singularity*.
To proceed with analytic calculations, we will consider simplified systems in which we can integrate the hyperbolic part of the equations easily.
Unmagnetized Beltrami flow
--------------------------
Here, we consider unmagnetized (or super-Alfvénic; $\mu\approx0$) Beltrami-Bernoulli flows. With $\mu=0$, equation(\[Beltrami-I\]) reduces to $$\lambda \nabla\psi =
\nabla \left(\rho_1^{-1}I \right) ,
\label{Beltrami-I-unmag}$$ which implies $\rho_1^{-1}I = I_2(\psi)$ (Cauchy data) and $\lambda = \lambda(\psi) = I_2'(\psi) $ (we denote $f'(\psi)=df(\psi)/d\psi$). We notice that equation(\[Beltrami-I-unmag\]) is just the toroidal (azimuthal) component of the Beltrami condition (\[Beltrami1\]). Applying the Beltrami condition also to the poloidal component of the momentum equation, we can simplify equation(\[Grad-form-3\]); with $\mu=0$ and $I = \rho_1 I_2(\psi)$, we obtain $${\cal L} \psi
- \nabla\psi\cdot\nabla\log\rho_1
= - \rho_1^2 I_2'(\psi) I_2(\psi) .
\label{Beltrami3'}$$ The Beltrami condition has decoupled the gradient forces \[the right-hand side of equation(\[Grad-form-3\])\] from the momentum equation, which must balance separately: This is the Bernoulli condition (\[Bernoulli-1\]) which now reads as $$\begin{aligned}
\nabla h &=&
-\nabla \left[ \frac{1}{2r^2\rho^2}\left(|\nabla\psi|^2+ I^2 \right) + \phi \right]
\nonumber \\
& & - \frac{1}{r^2\rho^2}\left(|\nabla\psi|^2+ I^2 \right) \nabla \log\rho_2.
\label{Bernoulli-2}\end{aligned}$$
Now the determining equations are much simplified — for a given distribution $\rho_1$ and Cauchy data $I_2(\psi)$, we may solve equation(\[Beltrami3’\]) to determine $\psi$. Then, for a given $\nu$, equation(\[friction-balance-2\]) yields $\rho_2$. Finally, the enthalpy $h$ is determined by equation(\[Bernoulli-2\]).
A simple magnetized Beltrami flow
---------------------------------
If we assume $$\rho_1 = \rho_1(\psi),
\label{Beltrami-rho1}$$ (i.e., $\bm{V}\cdot\nabla\rho_1 = 0$ and thus $\nabla\cdot
\bm{P}_2=0$), we may rather easily include a magnetic field into the solution ($\mu=\mu(\psi)\neq 0$). By equation(\[friction-balance’\]), this assumption implies that the compressibility is due only to the frictional deceleration (then, $0=\nabla\cdot\bm{P} = \nabla\cdot(\rho_1 \bm{P}_2) =
\nabla\cdot\bm{P}_2$).
Under the assumption (\[Beltrami-rho1\]), the second and third terms of equation(\[Beltrami-I\]) are parallel to $\nabla\psi$. Therefore, $\nabla I$ must be aligned with $\nabla\psi$, implying $I = I(\psi)$. In this case, $\lambda$ is given by $$\lambda = (\rho_1^{-1} - \mu^2\rho_2)I' - (\rho_1^{-2}\rho_1' + \rho_2 \mu\mu')I .
\label{Beltrami-I-2}$$ On the right-hand side of equation(\[Beltrami-I-2\]), only $\rho_2$ is not a function of $\psi$. The Beltrami condition demands vanishing of the left-hand side of equation(\[Grad-form-3\]), which, with a Cauchy data $I(\psi)$ and the $\lambda$ of equation(\[Beltrami-I-2\]), reads as $$(\rho_1^{-1}-\mu^2\rho_2) {\cal L} \psi
- (\rho_1^{-2}\rho_1' + \rho_2 \mu\mu') |\nabla\psi|^2
% \nonumber \\
% & & ~~~~
= - \lambda I(\psi) .
\label{Beltrami3}$$ This equation (governing $\psi$) is coupled with equation(\[friction-balance-2\]) through $\rho_2$. The Bernoulli condition (\[Bernoulli-2\]) can be separately solved to determine $h$.
Analytic Similarity Solution {#sec:similarity-solution}
============================
A similarity solution modeling disk-jet structure
-------------------------------------------------
In this section, we construct a *similarity solution* of the un-magnetized ($\mu=0$) model (\[Beltrami3’\]), which describes a fundamental disk-jet structure. We define $$\tau \equiv \frac{z}{r}
\quad (r>0),
\label{similarity-0}$$ and an orthogonal variable ($\nabla\tau\cdot\nabla\sigma=0$) $$\sigma \equiv \sqrt{r^2 + z^2} \ . \label{similarity-0'}$$ We consider $\psi$ such that $$\psi = \psi(\tau) = -J \tau^p -D \tau^{-q},
\label{similarity-1}$$ where $J$ and $p$ ($D$ and $q$) are positive constants, which control the strength of the jet (disk) flow. As shown in Fig.\[fig:flow\], this $\psi$ has a disk-jet-like geometry.
![ The momentum field (contours of $\psi$ that describe the streamlines of the poloidal component of $\bm{P}$) of the similarity solution (with $D=1$, $p=1$, $J=0.1$ and $q=1$). []{data-label="fig:flow"}](fig1.eps)
The level sets of $\psi$ (hence, those of $\tau$) are the streamlines of $\bm{P}$. On the other hand, $\sigma$ serves as the coordinate directed parallel to the streamlines. We assume that $\rho_1$ is written as $$\rho_1(\tau,\sigma) = \rho_\perp(\tau) \rho_\parallel(\sigma),
\label{similarity-1'}$$ and, then, $\log\rho_1= \log\rho_\perp(\tau)+\log\rho_\parallel(\sigma)$.
Let us see how the stream function $\psi$ of equation(\[similarity-1\]) satisfies equations (\[Beltrami3’\]), (\[Bernoulli-2\]) and (\[friction-balance-2\]), i.e., we determine all other fields $I_2(\psi)$, $\rho_1$, $\rho_2$, $\nu$ and $h$ that allow this $\psi$ to be the solution. For arbitrary $f(\tau)$ and $g(\tau)$, we observe $${\cal L} f =
\frac{1}{r^2} \left[ (\tau^2+1) f'' + 3\tau f' \right] ,$$ $$\nabla f \cdot \nabla g
= \frac{1}{r^2} (\tau^2+1)f' g' .$$ Hence, the left-hand side of equation(\[Beltrami3’\]) is (denoting $g(\tau)\equiv\log\rho_\perp(\tau)$) $$\begin{aligned}
& & {\cal L} \psi - \nabla \psi \cdot \nabla \log \rho_1
\nonumber \\
& & ~~= \frac{1}{r^2} \left[ (\tau^2+1) \psi'' + 3\tau \psi' - (\tau^2+1)g'\psi' \right].
% \nonumber \\ ~
\label{similarity-2}\end{aligned}$$ For this quantity to balance with the right-hand side of equation(\[Beltrami3’\]), which has no explicit dependence on $r^{-2}$, both sides must be zero, i.e., we have to set $I_2'(\psi) =0$ (the implication of this simple condition will be discussed later). Then, equation(\[Beltrami3’\]) reduces to $$(\tau^2+1) \psi'' + 3\tau \psi' - (\tau^2+1)g'\psi' = 0.
% \\
% & & \quad = Jp(p-1)\tau^{p-2} + Dq(q+1)\tau^{-q-2}
% \\
% & & \quad \quad ~~~
% + \left[ 3\tau -(\tau^2+1)g' \right](Jp\tau^{p-1}-Dq\tau^{-q-1}) = 0
\label{Beltrami3''}$$ We note that the Beltrami condition (\[Beltrami3”\]) is freed from $\rho_\parallel(\sigma)$. This fact is advantageous when solving equation(\[friction-balance-2\]); see equation(\[friction-balance-3\]).
![ The distribution of $\rho_{\perp}$ of the similarity solution (with $D=1$, $p=1$, $J=0.1$ and $q=1$). Contour curves are given in the log scale of $\rho_{\perp}$. []{data-label="fig:density_1"}](fig2.eps)
For the specific form (\[similarity-1\]) of $\psi$, we have to determine an appropriate $g=\log \rho_\perp$ to satisfy equation(\[Beltrami3”\]), i.e., $$\begin{aligned}
g' &=&
\frac{\psi''}{\psi'} + \frac{3\tau}{\tau^2 + 1}
\nonumber
\\
&=& \frac{J\,p\,(p-1)\tau^{p+q} +
D\,q\,(q+1)}{J\,p\,\tau^{p+q+1}-D\,q\,\tau}
% \nonumber
% \\
% & &
+\frac{3\tau}{\tau^2+1} . \label{similarity-3}\end{aligned}$$ Solving equation(\[similarity-3\]), we obtain $$g \equiv \log\rho_\perp =
\log\frac{|Jp\tau^{p+q}-Dq|}{\tau^{q+1}}
+\frac{3}{2}\log(\tau^2+1) ,$$ and, thus, $$\rho_\perp = \frac{(\tau^2+1)^{3/2}|Jp\tau^{p+q}-Dq|}{\tau^{q+1}} .
\label{similarity-4}$$ In Fig.\[fig:density\_1\], we show the profile of $\rho_\perp(\tau)$.
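For concreteness, the profiles shown in Figs.\[fig:flow\] and \[fig:density\_1\] can be reproduced numerically. The short Python sketch below (an illustrative addition, not part of the original derivation) evaluates $\psi(\tau)$ and $\rho_\perp(\tau)$ for the parameters $D=1$, $p=1$, $J=0.1$, $q=1$ used in the figures, and checks by finite differences that $\rho_\perp$ indeed solves equation(\[similarity-3\]).

```python
import numpy as np

# Parameters used in the figures of this section.
J, p, D, q = 0.1, 1.0, 1.0, 1.0

def psi(tau):
    # Stream function of the similarity ansatz (plotted in Fig. 1).
    return -J*tau**p - D*tau**(-q)

def rho_perp(tau):
    # Closed-form density factor obtained above by integrating g'(tau).
    return (tau**2 + 1.0)**1.5 * np.abs(J*p*tau**(p+q) - D*q) / tau**(q+1)

def g_prime(tau):
    # Right-hand side of the ODE for g = log(rho_perp).
    num = J*p*(p - 1.0)*tau**(p+q) + D*q*(q + 1.0)
    den = J*p*tau**(p+q+1.0) - D*q*tau
    return num/den + 3.0*tau/(tau**2 + 1.0)

# Finite-difference check that log(rho_perp) solves the ODE (staying away from the
# zero of J p tau^(p+q) - D q, which lies at tau = sqrt(10) for these parameters).
tau = np.linspace(0.2, 3.0, 2000)
residual = np.gradient(np.log(rho_perp(tau)), tau) - g_prime(tau)
print("psi(1) =", psi(1.0), " max ODE residual =", np.max(np.abs(residual[5:-5])))
```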
Bernoulli relation in the disk region
-------------------------------------
As mentioned above, this solution assumes $I_2'(\psi)
~(=\lambda)=0$, and hence, $I_2 = \rho_2 r V_\theta$ must be uniformly distributed. In the disk region (the vicinity of $z=0$), we may approximate $V_\theta \approx \sqrt{MG/r}$ (Keplerian velocity). Hence, $\rho_2 \propto r^{-1/2}$. In Fig.\[fig:density\], we show the profile of $\rho=\rho_1\rho_2$ for the case of $\rho_1\propto\rho_\perp$ (i.e., $\rho_\parallel=$constant).
![ The density $\rho$ in the similarity solution (with $D=1$, $p=1$, $J=0.1$ and $q=1$). We assume $\rho_{\parallel}(\sigma)\approx z^{-2/3}$ and $\rho_2\approx
r^{-1/2}$. A level-set surface of $\rho$ is shown in the domain $r<5$ and $|z|<5$. []{data-label="fig:density"}](fig3.eps)
For $\rho=\rho_1\rho_2=\rho_\parallel \rho_\perp r^{-1/2}$, equation(\[friction-balance-2\]) reads as $$\nu = \frac{-d\psi/dz}{2 \rho_\parallel \rho_\perp} r^{-3/2}
\propto \frac{r^{-5/2}}{\rho_\parallel(r)}
\label{friction-balance-3}$$ along each streamline in the disk region. For a given $\nu$, we can solve equation(\[friction-balance-3\]) for $\rho_\parallel$ to determine the density profile. In the disk region, the Bernoulli relation (\[Bernoulli-1\]) is evaluated as follows: by $\nabla\cdot\bm{P}=0$, we have $P_r =
\rho V_r \propto r^{-1}$. If $\rho_\parallel(\sigma) =$ constant, for example, $\rho\propto r^{-1/2}$ (evaluated along a streamline in the disk region). Then, we have $V_r = V_{r0} r^{-1/2}$ with a (negative) constant $V_{r0}$. Combining the azimuthal velocity $V_\theta = V_{\theta 0} r^{-1/2}$ (which must be slightly smaller than the Keplerian velocity $\sqrt{MG/r}$), we obtain $$V^2 = (V_{r0}^2 + V_{\theta0}^2) r^{-1} = V_0^2 r^{-1}.$$ By $\rho_2 \propto r^{-1/2}$, we obtain $ \partial_r (\log \rho_2)
= -(1/2) r^{-1}$. Hence, the Bernoulli relation (\[Bernoulli-1\]) demands $$\partial_r h = (V_{0}^2-MG) r^{-2},$$ which yields $h = (MG -V_{0}^2) r^{-1}$. In this estimate, all components of the energy density (gravitational potential $\phi$, kinetic energy $V^2/2$ and enthalpy $h$) have a similar profile ($\propto r^{-1}$).
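The final integration in this estimate is easily verified symbolically; a minimal sketch (purely a check of the algebra above, not part of the original text):

```python
import sympy as sp

r, MG, V0 = sp.symbols('r MG V_0', positive=True)

dh_dr = (V0**2 - MG)/r**2                 # the Bernoulli balance stated above
h = sp.integrate(dh_dr, r)                # integration constant dropped
print(sp.simplify(h - (MG - V0**2)/r))    # prints 0, i.e. h = (MG - V_0^2)/r
```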
Bernoulli relation in the jet region
------------------------------------
In the jet region (vicinity of $r=0$), the streamlines (contours of $\tau=z/r$) are almost vertical, and we may approximate $\sigma \approx z$.
Let us first estimate $\rho_2$ using equation(\[friction-balance-2\]), which is approximated, in the jet region, by $$\begin{aligned}
r \rho \nu &=& \{\log\rho_2,\psi\}
\nonumber \\
&\approx&
(\partial_r\psi)\,(\partial_z\log\rho_2)
\nonumber \\
&=& J\,p\,\tau^{p+1}\frac{1}{z}\,
\partial_z(\log\rho_2), \label{jet-Bernoulli-1}\end{aligned}$$ which shows that $\rho_2$ is an increasing function of $|z|$. Using $\rho = \rho_\parallel(\sigma)\rho_\perp(\tau)\rho_2$, we integrate equation(\[jet-Bernoulli-1\]) along the streamline ($\tau=$constant, $\sigma \approx z$): $$\frac{d\rho_2}{\rho_2^2} = - d\left(\frac{1}{\rho_2}\right)
=\frac{\nu}{Jp}\ \tau^{p+2}\,\rho_\perp(\tau) \rho_\parallel(z) z^2
dz . \label{jet-Bernoulli-2}$$ With this $\rho_2(z)$, we may estimate the toroidal (azimuthal) component of the velocity: $V_\theta = I_2/(r\rho_2) =
(I_2\tau)/(z\rho_2)$, where $I_2$ and $\tau$ are constant (the latter is constant along each streamline). We find that the kinetic energy $V_\theta^2/2$ of the azimuthal velocity decreases as a function of $|z|$ (both by the geometric expansion factor $z^{-2}$ and the friction damping effect $\rho_2^{-2}$). The steep gradient of the corresponding hydrodynamic pressure yields a strong boost near the foot point ($z\approx 0$).
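Equation(\[jet-Bernoulli-2\]) can be integrated numerically along a streamline once $\nu$ and $\rho_\parallel$ are prescribed. The sketch below is illustrative only: the constant $\nu$ and the power-law $\rho_\parallel$ are assumptions, chosen so that $1/\rho_2$ stays positive over the integration range.

```python
import numpy as np

J, p, D, q = 0.1, 1.0, 1.0, 1.0
nu = 1.0e-7                          # assumed constant friction coefficient (illustrative)
rho_par = lambda z: z**(-1.5)        # assumed rho_parallel ~ |z|^(-3/2), as discussed below

def rho_perp(tau):
    return (tau**2 + 1.0)**1.5 * np.abs(J*p*tau**(p+q) - D*q) / tau**(q+1)

def rho2_along_streamline(tau, z, rho2_start=1.0):
    """Integrate d(1/rho2) = -(nu/(J p)) tau^(p+2) rho_perp rho_par(z) z^2 dz
    along the streamline tau = const, starting from z[0]."""
    f = rho_par(z) * z**2
    integral = np.concatenate(([0.0], np.cumsum(0.5*(f[1:] + f[:-1])*np.diff(z))))
    inv_rho2 = 1.0/rho2_start - (nu/(J*p)) * tau**(p+2) * rho_perp(tau) * integral
    return 1.0/inv_rho2              # nu is small enough that this stays positive here

z = np.linspace(1.0, 50.0, 500)
print(rho2_along_streamline(tau=5.0, z=z)[[0, -1]])   # rho2 increases with |z|
```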
The poloidal component of the kinetic energy is estimated as follows: We may approximate $$\begin{aligned}
\frac{1}{2}(V_r^2 + V_z^2) &=& \frac{1}{2\rho^2 r^2}\,|\nabla\psi|^2
\nonumber \\
&\approx& \frac{1}{2\rho^2 r^2} \left(
Jp\frac{z^p}{r^{p+1}}\right)^2
\nonumber \\
&=&
\frac{(Jp)^2\tau^{2p+4}}{2\rho^2z^4} . \label{jet-Bernoulli-2'}\end{aligned}$$ Here, the vertical distribution of the density $\rho =
\rho_\perp(\tau) \rho_\parallel(\sigma) \rho_2$ is primarily dominated by $\rho_\parallel(\sigma)\approx\rho_\parallel(z)$.
At long distance from the origin, the jet has a natural similarity property. For simplicity, let us ignore the effect of the friction ($\nu=0$), and assume $\rho_2=1$. Then, $\rho_\parallel \propto
|z|^{-3/2}$ yields $(V_r^2 + V_z^2)/2 \propto z^{-1}$, which may balance with the gravitational potential energy $\phi = -MG
|z|^{-1}$. Note that the azimuthal component of the kinetic energy disappears at large scale ($V_\theta^2 \propto z^{-2}$). The Bernoulli condition (\[Bernoulli-1\]) gives an $h$ that also has a similar distribution, $\propto |z|^{-1}$.
In Fig.\[fig:density\], we show the profile of $\rho =
\rho_\perp\rho_\parallel\rho_2$ with $\rho_\parallel \propto
|z|^{-3/2}$ (jet region) and $\rho_2 \propto r^{-1/2}$ (disk region).
Summary and Concluding Remarks
==============================
We have shown that the combination of a *thin* disk and *narrowly-collimated* jet is the unique structure that is amenable to the singularity of the Keplerian vorticity; the *Beltrami* condition, imposing the *alignment* of flow and *generalized vorticity*, characterizes such a geometry. Here the conventional vorticity is generalized to combine with magnetic field (or, electromagnetic vorticity) as well as to subtract the friction force causing the accretion.
In Section\[sec:similarity-solution\], we have given an analytic solution in which the *generalized vorticity* is purely kinematic ($\bm{\Omega}=\nabla\times\bm{P}_2$ with the momentum $\bm{P}_2$ that is modified by the friction effect). The principal force that ejects the jet is, then, the hydrodynamic pressure dominated by $V_\theta^2/2$. Additional magnetic force may contribute to jet acceleration if the self-consistently generated large-scale magnetic field is sufficiently large [@bib:Bgen; @bib:Bgen2; @bib:Bgen3; @bib:Bgen4; @bib:Bgen5]. Such structures can be described by the generalized model of Subsection\[subsec:general2D\].
The similarity solution has a singularity at the origin (where the model gravity $\phi=-MGr^{-1}$ is singular), which disconnects the disk part (parameterized by $D$ and $q$) and the jet part (parameterized by $J$ and $p$). To “connect” both subsystems (or, to relate $D$, $q$, $J$, and $p$), we need a *singular perturbation* that dominates the small-scale hierarchy[@bib:yoshida2004]. The connection point must switch the topology of the flow: In the poloidal cross section, the streamlines of the mass flow, which connect the accreting inflow and jet’s outflow, describe hyperbolic curves converging to the radial and the vertical axes. On the other hand, the vorticity lines (or generalized magnetic field lines) thread the disk. This topological difference of the two vector fields demands that they be “decoupled”. Instead of disconnecting them by a singularity, we will have to consider a small-scale structure in which the topological switch can occur[@bib:Shiraishi]. The Hall effect (scaled by $\epsilon$) or the viscosity/resistivity (scaled by the reciprocal Reynolds number) yields a singular perturbation (then, the generalized *magneto-Bernoulli* mechanism [@bib:mnsy; @bib:MSaccel] may effectively accelerate the jet-flow). In a weakly ionized plasma, the electron equation (\[momentum-e-1\]) is modified so that the Hall effect is magnified by the ratio of the neutral and electron densities; the ambipolar diffusion effect also yields a higher-order perturbation[@bib:krishan2006]. The mechanism of singular perturbation and the local structure of the disk-jet connection point may differ depending on the plasma condition near the central object: in AGN, the plasma is fully ionized but rather collisional (we also need a relativistic equation of state with possible presence of pairs), and, in YSO, the plasma is partially ionized; the relevant dissipation mechanisms determine the scale hierarchy in the vicinity of the singularity. The study of the above concrete cases is beyond the scope of the present paper and will be considered elsewhere. Our goal was to show how the alignment of flow and *generalized vorticity* condition arises and how it determines the *singular* structure of a *thin* disk and *narrowly-collimated* jet invoking the simplest (minimum) model of magnetohydrodynamics.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors are thankful to Professor R. Matsumoto and Professor G. Bodo for their discussions and valuable comments. The authors also appreciate discussions with Professor S. M. Mahajan and Professor V. I. Berezhiani. This work was initiated at Abdus Salam International Center for Theoretical Physics, Trieste, Italy. NLS is grateful for the hospitality of Plasma Physics Laboratory of Graduate School of Frontier Sciences at the University of Tokyo during her short term visit in 2007. Work of NLS was partially supported by the Georgian National Foundation Grant projects 69/07 (GNSF/ST06/4-057) and 1-4/16 (GNSF/ST09-305-4-140).
References {#references .unnumbered}
==========
Anderson J M, Li Z Y, Krasnopolsky R and Blandford R D 2005 The Structure of Magnetocentrifugal Winds. I. Steady Mass Loading [*ApJ*]{} [630]{} 945
Balbus S A and Hawley J F 1998 Instability, turbulence, and enhanced transport in accretion disks [*Rev. Mod. Phys.*]{} 70 1
Begelman M C, Blandford R D and Rees M J 1984 Theory of extragalactic radio sources [*Rev. Mod. Phys.*]{} 56 255
Begelman M C 1993 Conference summary [*Astrophysical Jets*]{} ed D Burgarella et al (Cambridge: Cambridge Univ. Press) pp. 305-315
Begelman M C 1998 Instability of Toroidal Magnetic Field in Jets and Plerions [*ApJ*]{} 493 291
Blandford R D and Rees M J 1974 A ’twin-exhaust’ model for double radio sources [*MNRAS*]{} 169 395
Blandford R D and Znajek R L 1977 Electromagnetic extraction of energy from Kerr black holes [*MNRAS*]{} 179 433
Blandford R D and Payne D G 1982 Hydromagnetic flows from accretion discs and the production of radio jets [*MNRAS*]{} 199 883
Blandford R D 1994 Particle acceleration mechanisms [*ApJS*]{} 90 515
Bogovalov S V and Kelner S R 2010 Accretion and Plasma Outflow from Dissipationless Discs [*IJMP D*]{} 19 339
Chandrasekhar S 1956 On Force-Free Magnetic Fields [*Proc. Natl. Acad. Sci. USA*]{} 42 1
Celotti A and Blandford R D 2001 Black Holes in Binaries and Galactic Nuclei: Diagnostics, Demography and Formation [*ESO Astrophysics Symposia*]{} ed L Kaper et al (Berlin, Heidelberg: Springer-Verlag), 206
Ferrari A 1998 Modeling Extragalactic Jets [*Annu. Rev. Astron. Astrophys.*]{} 36 539
Ferreira J 1997 Magnetically-driven jets from Keplerian accretion discs [*A&A*]{} 319 340
Ferreira J, Dougados C and Cabrit S 2006 Which jet launching mechanism(s) in T Tauri stars? [*A&A*]{} 453 785
Ferreira J, Dougados C and Whelan E 2007 Jets from Young Stars I: Models and Constraints [*Lecture Notes in Physics*]{} ed J Ferreira et al (Berlin, Heidelberg: Springer-Verlag) 723 181
Hartigan P, Edwards S and Ghandour L 1995 Disk Accretion and Mass Loss from Young Stars [*ApJ*]{} 452 736
Hawley J F, Gammie C F and Balbus S A 1995 Local Three-dimensional Magnetohydrodynamic Simulations of Accretion Disks [*ApJ*]{} 440 742
Heinz S and Begelman M C 2000 Jet Acceleration by Tangled Magnetic Fields [*ApJ*]{} 535 104
Jones D L, Werhle A E, Meier D L and Piner B G 2000 The Radio Jets and Accretion Disk in NGC 4261 [*ApJ*]{} 534 165
Königl A and Pudritz R E 2000 Disk Winds and the Accretion-Outflow Connection [*Protostars and Planets IV*]{} ed V Mannings et al (Tuscon: Univ. Arizona Press) 759
Krasnopolsky R, Li Z Y and Blandford R D 1999 Magnetocentrifugal Launching of Jets from Accretion Disks. I. Cold Axisymmetric Flows [*ApJ*]{} 526 631
Krasnopolsky R, Li Z Y and Blandford R D 2003 Magnetocentrifugal Launching of Jets from Accretion Disks II. Inner Disk-driven Winds [*ApJ*]{} 595 631
Krishan V and Yoshida Z 2009 Kolmogorov dissipation scales in weakly ionized plasmas [*MNRAS*]{} 395 2039
Kudoh T and Shibata K 1997 Magnetically Driven Jets from Accretion Disks. I. Steady Solutions and Application to Jets/Winds in Young Stellar Objects [*ApJ*]{} 474 362
Kudoh T, Matsumoto R and Shibata K 2003 MHD simulations of jets from accretion disks [*Astrophys. Sp. Sci.*]{} 287 99
Kuwabara T, Shibata K, Kudoh T and Matsumoto R 2005 The Acceleration Mechanism of Resistive Magnetohydrodynamic Jets Launched from Accretion Disks [*ApJ*]{} 621 921
Livio M 1997 The Formation Of Astrophysical Jets [*Accretion Phenomena and Related Outflows; IAU Colloquium 163*]{} ed. D T Wickramasinghe et al (San Francisco: ASP) [*ASP Conference Series*]{} 121 845
Livio M 1999 [*Phys. Rep.*]{} 311 225
Low B C 1982 Nonlinear force-free magnetic fields [*Rev. Geophys. Space Phys.*]{} 20 145
Machida M and Matsumoto R 2003 Global Three-dimensional Magnetohydrodynamic Simulations of Black Hole Accretion Disks: X-Ray Flares in the Plunging Region [*ApJ*]{} 585 429
Mahajan S M and Yoshida Z 1998 Double Curl Beltrami Flow: Diamagnetic Structures [*Phys. Rev. Lett.*]{} 81 4863
Mahajan S M, Nikol’skaya K I, Shatashvili N L and Yoshida Z 2002 Generation of Flows in the Solar Atmosphere Due to Magnetofluid Coupling [*ApJ*]{} 576 L161
Mahajan S M, Shatashvili N L, Mikeladze S V and Sigua K I 2006 Acceleration of plasma flows in the closed magnetic fields: Simulation and analysis [*Phys. Plasmas*]{} 13 062902
Matsumoto R and Tajima T 1995 Magnetic viscosity by localized shear flow instability in magnetized accretion disks [*ApJ*]{} 445 767
Matsumoto R, Machida M and Nakamura K 2004 Global 3D MHD Simulations of Optically Thin Black Hole Accretion Disks [*Prog. Theor. Phys. Suppl.*]{} 155 124
Ogilvie G I and Livio M 1998 On the Difficulty of Launching an Outflow from an Accretion Disk [*ApJ*]{} 499 329
Pelletier G and Pudritz R E 1992 Hydromagnetic disk winds in young stellar objects and active galactic nuclei [*ApJ*]{} 394 117
Shakura N I and Sunyaev R 1973 Black holes in binary systems. Observational appearance [*A&A*]{} 24 337
Shiraishi J, Yoshida Z and Furukawa M 2009 Topological Transition from Accretion to Ejection in a Disk-Jet System -— Singular Perturbation of the Hall Effect in a Weakly Ionized Plasma [*ApJ*]{} 697 100
Tout C A and Pringle J E 1996 Can a disc dynamo generate large-scale magnetic fields? [*MNRAS*]{} 281 219
Vlemmings W H T, Bignall H E and Diamond P J 2007 Green Bank Telescope Observations of the Water Masers of NGC 3079: Accretion Disk Magnetic Field and Maser Scintillation [*ApJ*]{} 656 198
Yoshida Z and Giga Y 1990 Remarks on spectra of operator ROT [*Math. Z.*]{} 204 235
Yoshida Z, Mahajan S M and Ohsaki S 2004 Scale hierarchy created in plasma flow [*Phys. Plasmas*]{} 11 3660
Yoshida Z 2009 Clebsch parameterization: Basic properties and remarks on its applications [*J. Math. Phys.*]{} 50 113101
Zanni C, Ferrari A, Rosner R, Bodo G and Massaglia S 2007 MHD simulations of jet acceleration from Keplerian accretion disks. The effects of disk resistivity [*A&A*]{} 469 811
---
author:
- |
B.C. Allanach, S. Lola and K. Sridhar[^1]\
CERN, Geneva 23, CH-1211, Switzerland
title: 'Investigating the Supersymmetric Explanation of Anomalous CDF lepton(s) photon(s) Missing-$E_T$ Events'
---
Introduction
============
In spite of the remarkable agreement of the Standard Model (SM) with available data from high-energy experiments, it is expected to be only a low-energy manifestation of a more complete theory at energy scales beyond a TeV. This new TeV-scale physics is expected to ameliorate the problems that beset the SM because of the huge discrepancy between the electroweak scale and the Planck (or GUT) scale. The most popular candidate for such an extension of the SM has been its supersymmetric generalisation which, in its simplest form, is the Minimal Supersymmetric Standard Model (MSSM). The gauge structure of the MSSM essentially replicates that of the SM but, in the Yukawa sector, in addition to the usual Yukawa couplings of the fermions to the Higgs (responsible for the fermion masses), other interactions involving squarks or sleptons are possible.
The relevant part of the superpotential containing the Yukawa interactions involving squarks or sleptons is given in terms of the chiral superfields by $$W_{RPV}=\lambda_{ijk} L_iL_j\bar{E}_k+\lambda'_{ijk}L_iQ_j\bar{D}_k+ \lambda''_{ijk}\bar{U}_i\bar{D}_j\bar{D}_k+\mu_i L_i H_2 \label{eq:superpot}$$ where $L$ $(Q)$ are the left-handed lepton (quark) superfields while ${\bar E},{\bar D},$ and ${\bar U}$ contain the corresponding right-handed fields, and $i,j,k$ are generation indices. The $\lambda$ and $\lambda'$ couplings are lepton-number ($L$-) violating, the $\lambda''$ couplings are baryon-number ($B$-) violating and the last term is an $L$-violating bilinear coupling. The simultaneous existence of the $L$- and $B$-violating couplings can induce a catastrophically high rate for proton decay, and they are usually forbidden in the MSSM by invoking a discrete symmetry called $R$-parity, $R=(-1)^{(3B+L+2S)}$, where $S$ is the spin of the particle, so that the SM particles have $R=1$, while their superpartners have $R=-1$. However, $R$-conservation is a stronger requirement than is needed to avoid the unwanted proton decay, for the latter can be effectively forbidden by assuming that either the $L$-violating or the $B$-violating couplings in Eq. \[eq:superpot\] are present, but not both. Limits on the $R$-violating couplings derived from existing experimental information have been summarised in Ref. [@rparrev].
In the presence of $R$-violating couplings, the lightest supersymmetric particle (LSP), which is usually the neutralino, is $not$ stable and can decay through $R$-violating modes [@pheno]. This is in contrast to the $R$-conserving MSSM where the LSP is stable and this stability is a very desirable feature if the LSP were to be a viable dark matter candidate. In the $R$-violating case, the neutralino cannot be a dark matter candidate unless the $R$-violating couplings are very small so as to ensure that the lifetime of the neutralino is much more than the age of the universe. The situation can be saved, however, in theories where the gravitino (the spin-3/2 superpartner of the graviton) is the lightest supersymmetric particle: a circumstance that can be realised very naturally in theories with gauge-mediated supersymmetry breaking [@gmsb]. The light gravitino is long-lived enough to account for dark matter (or, at least, the hot component of dark matter) even in the presence of $R$-violating couplings [@bmy].
Physics of Light gravitinos
===========================
Even though gravity is naturally incorporated if supersymmetry is realised as a gauge symmetry, the gravitational sector is usually irrelevant for collider phenomenology because of the feebleness of gravitational interactions. But if supersymmetry is broken spontaneously, the gravitino acquires a mass by absorbing the would-be goldstino and in the high-energy limit the gravitino has the same interactions as the goldstino [@light-gra]. These interactions are proportional to $1/m_{\tilde G}$ and consequently the interactions of the gravitino can become important for processes at collider energies in the $m_{\tilde G}
\rightarrow 0$ limit. The mass of the gravitino is related to $F_0$, the fundamental scale-squared of supersymmetry breaking, by the following relation: $$m_{\tilde G} = {F_0 \over \sqrt{3}\, M_P} \ .$$ $M_P= 2.4 \times 10^{18}$ GeV is the reduced Planck mass, and using this value one obtains $$m_{\tilde G} = 5.9 \times 10^{-5}\, {F_0 \over (500~{\rm GeV})^2}~{\rm eV} \ .$$ Given a lower bound on the value of $F_0$ one can then deduce a lower bound on the mass of the gravitino which, in turn, yields a bound on the interactions of the gravitino with the SM particles.
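The quoted prefactor follows directly from these two relations; a two-line check (illustrative only):

```python
import math

M_P = 2.4e18          # reduced Planck mass in GeV
F0  = 500.0**2        # supersymmetry-breaking scale squared, in GeV^2

m_gravitino = F0 / (math.sqrt(3.0) * M_P)      # in GeV
print(m_gravitino * 1.0e9, "eV")               # ~5.9e-5 eV, as quoted above
```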
To make these considerations more concrete, we write down the relevant part of the supersymmetric Lagrangian containing the gravitino interactions: $${\cal L} = {1 \over 8M_P}\, \bar{\lambda}^A\gamma^\mu\sigma^{\nu\rho}\tilde{G}_\mu F_{\nu\rho}^A+{1 \over M_P}\, \bar{\psi}_L\gamma^\mu\gamma^\nu\tilde{G}_\mu D_\nu\phi+ {\rm h.c.} , \label{e1}$$ where $\tilde G$ is the gravitino field, $\lambda^A$ the gaugino field, $F_{\mu\nu}^A$ the corresponding field strength and $(\phi, \psi)$ the scalar and the fermionic components of the chiral supermultiplets. At the level of an effective interaction, the spin-3/2 gravitino field can be well described by its spin-1/2 goldstino component when it appears as an external state, i.e. $$\tilde{G}_\mu = \sqrt{2 \over 3}\, {\partial_\mu \tilde{G} \over m_{\tilde G}} \ .$$ Using this limit in Eq. \[e1\] allows one to compute the decay widths of the process $\chi_i \rightarrow \gamma/Z \tilde G$, for example. These are: $$\begin{aligned}
\Gamma (\chi^0_i \rightarrow \gamma \tilde G) &=& {\kappa_{i\gamma} \over 48
\pi} {m_{\chi^0_i}^5 \over M_P^2 m_{\tilde G}^2} \nonumber \\
\Gamma (\chi^0_i \rightarrow Z \tilde G) &=& {2 \kappa_{iZ_T} + \kappa_{iZ_L}
\over 96 \pi} {m_{\chi^0_i}^5 \over M_P^2 m_{\tilde G}^2} \biggl \lbrack 1 -
{m_Z^2 \over m_{\chi^0_i}^2} \biggr \rbrack ^4 ,\end{aligned}$$ where $$\begin{aligned}
\kappa_{i \gamma} &=& \vert N_{i1} {\rm cos} \theta_W + N_{i2} {\rm sin}
\theta_W \vert^2 \nonumber \\
\kappa_{i Z_T} &=& \vert N_{i1} {\rm sin} \theta_W - N_{i2} {\rm cos}
\theta_W \vert^2 \nonumber \\
\kappa_{i Z_L} &=& \vert N_{i3} {\rm cos} \beta - N_{i4} {\rm sin}
\beta \vert^2 .\end{aligned}$$ The $N_{ij}$ are the $\chi^0_i$ components in standard notation. The neutralino can also decay into a gravitino and a neutral Higgs particle $\varphi = h^0, H^0, A^0$, and the corresponding expressions for these are $$\Gamma (\chi^0_i \rightarrow \varphi \tilde G) = {\kappa_{i\varphi} \over 96 \pi} {m_{\chi^0_i}^5 \over M_P^2 m_{\tilde G}^2} \biggl \lbrack 1 - {m_\varphi^2 \over m_{\chi^0_i}^2} \biggr \rbrack ^4 ,$$ where the Higgsino components are given by $$\begin{aligned}
\kappa_{i h^0} &=& \vert N_{i3} {\rm sin} \alpha - N_{i4} {\rm cos}
\alpha \vert^2 \nonumber \\
\kappa_{i H^0} &=& \vert N_{i3} {\rm cos} \alpha + N_{i4} {\rm sin}
\alpha \vert^2 \nonumber \\
\kappa_{i A^0} &=& \vert N_{i3} {\rm sin} \beta + N_{i4} {\rm cos}
\beta \vert^2 .\end{aligned}$$ From the above equations, it is easy to convince oneself that the decay mode into the photon dominates over the decays into the $Z$ or the neutral Higgs boson, because the decays into the latter states are phase-space suppressed. If the neutralino is bino-dominated, then the branching ratio into a photon and gravitino is nearly 100%.
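To illustrate the last statement, the ratio of the photon and $Z$ partial widths can be evaluated from the formulae above; the mixing vector used below is an assumed, bino-dominated example rather than a fitted value (Higgs channels are omitted, being closed or further phase-space suppressed at this illustrative point).

```python
import numpy as np

# Assumed, illustrative neutralino composition (mostly bino) and masses -- not fitted values.
N1 = np.array([0.95, 0.20, 0.17, 0.16])          # (N_{i1}, N_{i2}, N_{i3}, N_{i4})
N1 = N1 / np.linalg.norm(N1)
m_chi, m_Z = 100.0, 91.19                        # GeV
beta = np.arctan(10.0)                           # tan(beta) = 10, assumed
sw, cw = np.sqrt(0.23), np.sqrt(1.0 - 0.23)

kappa_gamma = (N1[0]*cw + N1[1]*sw)**2
kappa_ZT    = (N1[0]*sw - N1[1]*cw)**2
kappa_ZL    = (N1[2]*np.cos(beta) - N1[3]*np.sin(beta))**2

# The overall factor m_chi^5/(48 pi M_P^2 m_G^2) is divided out; only the relative
# normalisation (a factor 1/2 for the Z mode and its phase-space suppression) remains.
w_gamma = kappa_gamma
w_Z     = 0.5*(2.0*kappa_ZT + kappa_ZL)*(1.0 - m_Z**2/m_chi**2)**4
print("BR(chi -> gamma gravitino) ~", w_gamma/(w_gamma + w_Z))
```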
The CDF anomaly
===============
CDF has recently presented results on the production of combinations involving at least one photon and one lepton ($e$ or $\mu$) in $p{\bar p}$ collisions at $\sqrt{s}= 1.8 ~{\rm TeV}$, using 86.34 pb$^{-1}$ of Tevatron 1994-95 data [@CDF]. In general the results were consistent with the Standard Model; however, 16 photon-lepton events with large ${{\not\!\!E}_{T}}$ were observed, while $7.6\pm0.7$ were expected. Moreover, 11 of these events involved muons (with 4.2 $\pm$ 0.5 expected) and only 5 involved electrons (with 3.4 $\pm$ 0.3 expected); there is therefore a clear asymmetry, which indicates the existence of a lepton flavour-violating process involving muons.
As we proposed in an earlier paper [@us], the excess can be simply understood in terms of smuon resonance production via an $L$-violating $\lambda'$ coupling[^2], where the smuon decays predominantly into a bino-dominated neutralino and a muon, with the neutralino further decaying into a photon and a gravitino. The production and decay are shown in the Feynman diagram in Figure \[feyn\]. The merit of the model we proposed is that it naturally explains the characteristics of the CDF anomaly: 1) the flavour-dependence is a direct consequence of the $R$-violating coupling and 2) the fact that the excess is seen in final states involving photons emerges very neatly in the model because the decay $\chi_1^0 \rightarrow \gamma \tilde G$ dominates overwhelmingly over other decay modes. We note that both $R$-parity violation and the existence of a very light gravitino are needed to explain the anomaly in our model. Nonetheless, we emphasise that if one has $R$-parity violating supersymmetry, a light gravitino is preferable from dark matter considerations, as explained in the previous section.
A light gravitino has also been previously invoked [@kane] to explain the $ee \gamma \gamma {{\not\!\!E}_{T}}$ CDF event [@abe], detected in searches for anomalous production of missing transverse energy ($ {{\not\!\!E}_{T}}> 12$ GeV), in events containing two isolated, central photons. The event was explained in terms of the $R$-conserving production of a pair of selectrons and the subsequent decay of each of these selectrons into a $\gamma \tilde G$ final state. It has been shown [@baer] that this explanation is excluded in the framework of the minimal uni-messenger gauge mediated supersymmetry breaking (GMSB) model, because of the anomalously large rates for jets $+ \gamma + {{\not\!\!E}_{T}}$ events predicted by this model. The problem can be traced back to the mass spectrum of the uni-messenger models: in this version of GMSB models the charginos and second-lightest neutralinos are light, which leads to large jets $+ \gamma + {{\not\!\!E}_{T}}$ rates not seen in experiment. However, in multi-messenger models of GMSB the charginos and the second-lightest neutralinos are heavier and one can have a viable explanation of the CDF event in these models which is not in conflict with other existing experimental information. For our purposes, again a light neutralino of 100 GeV mass and a reasonably light smuon in the mass range of about 150 GeV are needed, but we require all the other supersymmetric particles to be very massive. In the present paper, we perform detailed fits to the experimentally measured distributions of the anomalous events in order to determine the masses of the lightest neutralino and the smuon, with the assumption that all the other masses are heavy enough not to be produced at the Tevatron. We do not attempt to place our scenario in the context of some specific model of GMSB, but point out that this is indeed possible in the case of multi-messenger models of GMSB. A detailed model-dependent study is relegated to a later publication.
Constraints
===========
If the anomalous events seen by the CDF experiment are to be attributed to the production of a smuon resonance involving an $R$-violating operator, we can ask what the precise form of this operator is. To get a substantial cross-section for the production of the smuon resonance one needs to couple it to valence quarks in the initial state. This observation is then sufficient to specify the $R$-violating operator to be $L_2Q_1\bar{D}_1$ corresponding to the coupling $\lambda'_{211}$. This operator generates the interactions ${\tilde \mu}u\bar{d}$ and ${\tilde \nu}_\mu d \bar{d}$ (and charge conjugates), along with other supersymmetrised copies involving squarks. Therefore, if we invoke this operator to explain the CDF anomaly we will simultaneously predict effects in other channels which will manifest itself through the production of either sneutrinos or squarks. In our model, since we take the squarks to be heavy, their effects on experimental observables will be negligible. On the other hand, the sneutrinos are necessarily relatively light and can be produced resonantly and should lead to observable effects in experimental situations. In the present paper, we not only analyse the smuon resonance production in the context of the CDF anomaly but also provide predictions for both the smuon and the sneutrino channels at Run I and Run II of the Tevatron. The smuons, sneutrinos and the lightest neutralino are all light enough to be pair-produced through $R$-conserving channels. We also provide predictions for these cross-sections at Run I and Run II.
For our analysis, we have essentially four parameters at our disposal: the gravitino mass, $m_{\tilde G}$, the bino mass parameter $M_1$, the smuon mass parameter, $m_{\tilde \mu}$, and the $R$-violating coupling $\lambda'_{211}$. The coupling $\lambda'_{211}$ is constrained from $R_\pi = \Gamma (\pi \rightarrow e \nu) / \Gamma(\pi \rightarrow \mu \nu)$ [@bgh] to be $< 0.059 \times \frac{m_{\tilde{d_R}}}{100 ~{\rm GeV}}$ [@rparrev]. We note that the constraint involves a squark mass, which is large in our model, so the constraint from $R_\pi$ is not very relevant for our purposes. However, instead of simultaneously fitting the four parameters using the experimental data, we choose to work with fixed values of $\lambda'_{211}$ and $m_{\tilde G}$ and perform fits in $M_1$ and $m_{\tilde \mu}$. While the production of the smuon resonance is through the $R$-violating mode, its decay needs to go through the $R$-conserving channel to a neutralino and muon final state. The $R$-violating decay of the slepton is possible but constrained, in principle, by the Tevatron di-jet data [@cdfjets] which exclude $\sigma \cdot B> 1.3 \times
10^4$ pb at 95% C.L. for a resonance mass of 200 GeV. However, in practice this does not provide a restrictive bound upon our scenario as long as the $R$-violating coupling is sufficiently small. We also add that the di-jet bound is not very restrictive because it suffers from a huge QCD background. By restricting $\lambda'_{211}$ to be small, we also avoid the possible $R$-violating decays of the $\chi_1^0 \rightarrow \mu jj$ or $\chi_1^0 \rightarrow \nu jj$ final states. With these considerations in mind we choose $\lambda'_{211}=0.01$. The gravitino mass is also fixed at $10^{-3}$ eV in our fits. We will discuss the effects of varying the gravitino mass and the $R$-violating coupling later in this paper.
Defining the model
==================
The supersymmetric model parameters that are relevant for our discussion are: $M_1$, $M_2$, $\mu$, $\tan\beta$ and $m_0$, which determine the chargino, neutralino and sfermion masses at low energies. Since no other exotic cascade decays are observed at CDF, we assume that:
$\bullet$ Charginos and other superparticles (except the slepton and the lightest neutralino) are too heavy to be produced at the current energies.
$\bullet$ The decays of the lightest neutralino to gauge bosons other than the photon are coupling and/or phase-space suppressed.
These considerations constrain the allowed supersymmetric parameter space, which, as we are going to show, still retains some generality. To see this, let us look at the formulae that give the chargino and neutralino masses and mixings in terms of the fundamental supersymmetric parameters.
The tree-level neutralinos, written in the $\psi^0_j = (-i \tilde{B},-i \tilde{W_{3}},\tilde{H}^{0}_{1},\tilde{H}^{0}_{2})$ basis, are the mass eigenstates of the matrix $$Y=\left(
\begin{array}{cccc}
M_{1} & 0 & -m_{Z}\sin\theta_{W}\cos\beta
& m_{Z}\sin\theta_{W}\sin\beta \\
0 & M_{2} & m_{Z}\cos\theta_{W}\cos\beta
& -m_{Z}\cos\theta_{W}\sin\beta \\
-m_{Z}\sin\theta_{W}\cos\beta
& m_{Z}\cos\theta_{W}\cos\beta
& 0 & -\mu \\
m_{Z}\sin\theta_{W}\sin\beta
& -m_{Z}\cos\theta_{W}\sin\beta
& -\mu & 0 \\
\end{array}
\right)$$ and are defined by $ \chi^0_i = N^{ij} \psi^0_j$, with $N_{ij}$ being the unitary matrix which diagonalises $Y$. The respective mixings in the basis $(\tilde{\gamma},\tilde{Z})$ instead of $(\tilde{B}, \tilde{W}_3)$, are given by $N'_{j1} = N_{j1} \cos \theta_W + N_{j2} \sin \theta_W$, $N'_{j2} = -N_{j1} \sin \theta_W + N_{j2} \cos \theta_W$, $N'_{j3} = N_{j3}$ and $N'_{j4} = N_{j4}$. Finally, the respective chargino mass matrix in the $(\tilde{W}^{\pm}, \tilde{H}^{\pm})$ basis is $$X = \left( \begin{array}{cc}
M_{2} & m_{W}\sqrt{2}\sin\beta \\
m_{W}\sqrt{2}\cos\beta & \mu
\end{array} \right)$$
In our work we will not make use of the GUT inspired relation $ M_1 = \frac{5}{3} \tan^2 \theta_W M_2 $, but will instead keep $M_1$ and $M_2$ generic. As we see from the above formulas, a light neutralino can arise either via a light $M_1$, a light $M_2$, or a light $\mu$. In the second and third cases however, the chargino is also going to be light enough to be seen in cascade decays, which is not the case in CDF data. Moreover, constraining the relative masses of $M_1$ and $M_2$ roughly determines the photino component of the lightest neutralino. Under these conditions $\tan\beta$ is expected to play a relatively moderate role. Table \[tttt\] contains the lightest chargino and neutralino masses, and the magnitude of the photino-component of the lightest neutralino, for different model parameters. In this section only, we constrain ourselves to small and intermediate values of $M_2$ and $\mu$, since for larger values our requirements are more easily fulfilled.
We see that demanding a lightest neutralino in the range $90-120$ GeV, with the chargino remaining relatively heavy, leads to a photino component of the lightest neutralino that is significantly constrained, lying in the range (0.81-0.88). This also holds for larger values of $M_2,\mu$, which are not included in the table.
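A numerical sketch of this diagonalisation is given below (the parameter point is illustrative and is not one of the rows of Table \[tttt\]); it returns the lightest neutralino and chargino masses and the photino fraction $|N'_{11}|$ of the lightest neutralino.

```python
import numpy as np

def spectrum(M1, M2, mu, tan_beta, mZ=91.19, mW=80.4, sw2=0.23):
    """Diagonalise the tree-level neutralino (Y) and chargino (X) mass matrices."""
    b = np.arctan(tan_beta)
    sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
    sb, cb = np.sin(b), np.cos(b)
    Y = np.array([[M1,        0.0,        -mZ*sw*cb,  mZ*sw*sb],
                  [0.0,       M2,          mZ*cw*cb, -mZ*cw*sb],
                  [-mZ*sw*cb, mZ*cw*cb,    0.0,      -mu      ],
                  [ mZ*sw*sb, -mZ*cw*sb,  -mu,        0.0     ]])
    X = np.array([[M2,                 np.sqrt(2.0)*mW*sb],
                  [np.sqrt(2.0)*mW*cb, mu                ]])
    evals, evecs = np.linalg.eigh(Y)               # Y is real and symmetric
    i0 = np.argmin(np.abs(evals))
    m_chi1, N1 = np.abs(evals[i0]), evecs[:, i0]   # lightest state and its (B, W3, H1, H2) content
    photino = abs(N1[0]*cw + N1[1]*sw)             # |N'_{11}|, the photino fraction
    m_char1 = np.min(np.linalg.svd(X, compute_uv=False))
    return m_chi1, m_char1, photino

# One illustrative point with heavy M2 and mu and a light bino, as assumed in the text.
print(spectrum(M1=100.0, M2=1000.0, mu=2000.0, tan_beta=10.0))
```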
We use a single slepton mass parameter $m_{\tilde l}\equiv{m_{{\tilde
\mu},{\tilde e}}}_R={m_{\tilde
e,{\tilde \mu}}}_L$. Neglecting small fermion mass terms, the tree-level slepton masses are then (for the first two generations) $$\begin{aligned}
m^2_{{\tilde e,{\tilde \mu}}_L} &=& m_{\tilde l}^2 + (\frac{1}{2} - \sin^2 \theta_W) M_Z^2 \cos(2 \beta) \nonumber \\
m^2_{{\tilde e,{\tilde \mu}}_R} &=& m_{\tilde l}^2 - \sin^2 \theta_W\, M_Z^2 \cos(2 \beta), \nonumber \\
m^2_{{\tilde \nu}_e,{\tilde \nu}_\mu} &=& m_{\tilde l}^2 + \frac{1}{2}\sin^2 \theta_W\, M_Z^2 \cos(2 \beta)
\label{slepmasses}\end{aligned}$$ with negligible mixing proportional to the electron and muon masses respectively.
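For reference, the following few lines evaluate eq. \[slepmasses\] exactly as written above (the input value of $m_{\tilde l}$ is an arbitrary illustration):

```python
import numpy as np

def slepton_masses(m_l, tan_beta, mZ=91.19, sw2=0.23):
    """Tree-level (m_L, m_R, m_sneutrino) from eq. (slepmasses), taken as written."""
    c2b = np.cos(2.0*np.arctan(tan_beta))
    m2_L  = m_l**2 + (0.5 - sw2)*mZ**2*c2b
    m2_R  = m_l**2 - sw2*mZ**2*c2b
    m2_nu = m_l**2 + 0.5*sw2*mZ**2*c2b
    return np.sqrt([m2_L, m2_R, m2_nu])

print(slepton_masses(m_l=130.0, tan_beta=10.0))   # m_l chosen purely for illustration
```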
We use the [ISASUSY]{} part of the [ISAJET7.58]{} package [@isajet] to generate the spectrum, branching ratios and decays of the sparticles. For an example of parameters, we choose (in the notation used by ref. [@isajet]) $\lambda'_{211}=0.01$, $m_{3/2}=10^{-3}$ eV, $\tan
\beta=10$, $A_{t,\tau,b}=0$, $\mu$ together with other flavour diagonal soft supersymmetry breaking parameters are set to 2000 GeV. We emphasise that this is a representative point in the supersymmetric parameter space and not a special choice. Any superparticles except the first two generation sleptons, the lightest neutralino and the gravitino do not appear in this analysis because they are too heavy to be produced or to contribute to cascade decays in CDF data.
Fitting kinematical distributions
=================================
We now simulate the signal events for the process in Figure \[feyn\]. The Standard Model background is taken from ref. [@CDF]. We use [HERWIG6.3]{} [@herwig] including parton showering (but not including isolation cuts) to calculate cross-sections for single slepton production. The slepton mass parameter $m_{\tilde l}$ and the bino mass parameter[^3] $M_1$ are allowed to vary in order to see what range of neutralino and slepton masses are preferred by the experiment.
We simulate the detector by the following:
- Photons can be detected if they do [*not*]{} have rapidity $1.0<|\eta|<1.1$, $|\eta|<0.05$. The region $0.77<\eta<1.0, 75^\circ<\phi<90^\circ$ is also excluded because it is not instrumented. If these constraints are satisfied, we assume 81$\%$ detection efficiency for the photons.
- Muons have a 60$\%$ detection efficiency if $|\eta_\mu|<0.6$ or 45$\%$ if $0.6\leq \eta_\mu\leq 1.1$
The rapidity of a particle is defined as $\eta= - \ln \tan (\theta/2)$, where $\theta$ is the longitudinal angle between the particle’s momentum and the beam. $\phi$ is the transverse angle between the particle’s momentum and the $x$-axis. We also implement the cuts used in the experimental analysis to suppress the background: $E_T(\mu)>25$ GeV, $E_T(\gamma)>25$ GeV and ${{\not\!\!E}_{T}}>25$ GeV. Because we do not perform jet reconstruction, we do not perform isolation cuts.
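A schematic implementation of this acceptance and efficiency model (an illustrative sketch, not the analysis code actually used) could look as follows.

```python
import numpy as np

def photon_accepted(eta, phi, rng=np.random):
    """Photon geometric acceptance and flat 81% efficiency, as listed above."""
    if 1.0 < abs(eta) < 1.1 or abs(eta) < 0.05:
        return False
    if 0.77 < eta < 1.0 and np.deg2rad(75.0) < phi < np.deg2rad(90.0):
        return False                       # uninstrumented region
    return rng.random() < 0.81

def muon_accepted(eta, rng=np.random):
    """Muon efficiency: 60% for |eta|<0.6, 45% for 0.6<=|eta|<=1.1, zero otherwise."""
    if abs(eta) < 0.6:
        return rng.random() < 0.60
    if abs(eta) <= 1.1:
        return rng.random() < 0.45
    return False

def passes_kinematic_cuts(ET_mu, ET_gamma, ET_miss):
    """The E_T > 25 GeV selection applied to the muon, photon and missing E_T."""
    return ET_mu > 25.0 and ET_gamma > 25.0 and ET_miss > 25.0
```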
CDF gave one-dimensional projections in the $\mu \gamma {{\not\!\!E}_{T}}$ events for the following kinematic variables $$\begin{aligned}
E_T = \sqrt{p_x^2 + p_y^2}, \nonumber \\
m_{12} = \sqrt{p_1.p_2} \nonumber\\
M_T^2 = E_T^2 - p_x^2 - p_y^2\nonumber\\
\Delta \phi_{12} = \phi_1 - \phi_2 \nonumber \\
H_T = {{\not\!\!E}_{T}}+ E_T(\gamma) + E_T(\mu) \nonumber\\
\Delta R_{\mu \gamma} = \sqrt{\Delta \phi_{\mu \gamma}^2 + (\eta_\mu-\eta_\gamma)^2}\end{aligned}$$ where $p_{x,y}$ are the $x$ and $y$ (i.e. transverse) components of the momentum respectively and $p_{1,2}$ refer to 4-momenta of particles labeled by 1 and 2.
Once we have a statistically large sample of signal events simulated, we have a prediction for the number of expected signal plus background events in bin $i$ of a distribution: $N_{S+B}^i$. We use the average background presented in ref. [@CDF]. The Poisson distributed probability density function (PDF) of observing $N_{obs}^i$ events compared to $N_{S+B}^i$ is $$p^i_{S+B}(N^i_{obs},N^i_{S+B}) = {(N^i_{S+B})^{N^i_{obs}}\, e^{-N^i_{S+B}} \over N^i_{obs}!} \ .$$ In any one distribution, $p_{S+B} \equiv \Pi_i p^i_{S+B}$ gives the total PDF for that distribution, assuming each bin is uncorrelated to the others. Unfortunately, we are not able to make this assumption between different distributions of variables, because they contain data on the same events and should contain some level of correlation. Finally, the log likelihood of signal plus background $\ln p_{S+B}$ is calculated and normalised by subtracting the analogous log likelihood for the Standard Model background prediction: $$-2 \Delta\ln L \equiv -2(\ln p_{S+B} - \ln p_{B}). \label{logL}$$ A negative value then indicates that the data favour the signal plus background hypothesis over just background. When performing parameter ($x_i$) estimation, one determines the equivalent number of $\sigma$ away from the best-fit point (which has parameters $x_b$) by $$(\sigma)^2 = -2 \left( \ln L(x_i) - \ln L(x_b) \right). \label{numsig}$$ The value of $\ln L(x_i)$ calculated is then equivalent to a probability which matches the number of $\Delta\sigma$ that our model fits the data better than the Standard Model in conventional Gaussian statistics. Thus, the measure of the number of $\Delta\sigma$ here is purely a measure of probability in tails of PDFs, not a statement about the Gaussian nature of that PDF.
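The likelihood construction just described is straightforward to code; a minimal sketch with made-up bin contents, purely for illustration, is:

```python
import numpy as np
from scipy.special import gammaln

def ln_poisson(n_obs, n_exp):
    """Sum over bins of the log of the Poisson probability of n_obs given n_exp."""
    n_obs, n_exp = np.asarray(n_obs, float), np.asarray(n_exp, float)
    return np.sum(n_obs*np.log(n_exp) - n_exp - gammaln(n_obs + 1.0))

def minus_two_delta_lnL(n_obs, n_sig, n_bkg):
    """-2(ln p_{S+B} - ln p_B); a negative value favours signal plus background."""
    return -2.0*(ln_poisson(n_obs, n_sig + n_bkg) - ln_poisson(n_obs, n_bkg))

# Made-up bin contents for one kinematic distribution, purely for illustration.
n_obs = np.array([4, 6, 3, 2, 1])
n_bkg = np.array([3.0, 2.5, 1.5, 0.8, 0.3])
n_sig = np.array([0.5, 2.0, 1.5, 1.0, 0.6])
print(minus_two_delta_lnL(n_obs, n_sig, n_bkg))
```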
We now perform a fit to $M_1$ and $m_{\tilde l}$ keeping all other fundamental parameters (but not branching ratios) constant. We must perform the fit once for each different distribution; because the distributions come from the same events, they are all correlated to some extent. We cannot therefore assume that the distributions are uncorrelated in order to fit more than one distribution at a time, i.e. the correlations must be taken into account. While we can generate the correlations in the signal events by Monte-Carlo, we do not have access to the multi-dimensional data on the kinematic variables in the data. We therefore cannot perform a fit to more than one distribution at any time. However, it is possible to see to what extent the individual fits to each variable are compatible with each other. The fit corresponds to maximising the log likelihood obtained from Eq. \[logL\] for one of the distributions. In Table \[bestfit\] we present the best-fit points, which indicate that light neutralino masses, in the range of 67-111 GeV, are to be expected. For small $M_1$ the lightest neutralino mass is determined by $M_1$, while $M_2$ controls the chargino mass. If $M_2$ is heavy (as assumed here), no cascade decays involving charginos are kinematically favoured.
For the rest of this section, we concentrate on the best-fit point for $E_T(\mu)$, because this gives the best likelihood out of all the fits. In Table \[tab:res1\], we show the percentage of events making it through each of the cuts for this best-fit point. The table shows that 11.4$\%$ of sleptons produced end up as detected $\mu \gamma {{\not\!\!E}_{T}}$ events in CDF. The cross-section of 0.091 pb predicts a total of 7.86 events in the $\mu \gamma {{\not\!\!E}_{T}}$ channel. This is higher than the observed excess because the $E_T(\mu)$ distribution itself prefers it.
The relevant branching ratios of the smuon for this point are $$\begin{aligned}
BR({\tilde \mu}_L \rightarrow \chi_1^0 \mu) = 0.984, \nonumber \\
BR({\tilde \mu}_L \rightarrow \bar{u} d) = 0.015, \nonumber \\
BR({\tilde \mu}_L \rightarrow {\tilde \mu} {\tilde G}) = 0.001,\end{aligned}$$ with a lifetime of $1\times 10^{-23}$ sec, whereas for the lightest neutralino we have $$\begin{aligned}
BR(\chi^0_1 \rightarrow {\tilde G} \gamma) = 0.975, \nonumber \\
BR(\chi_1^0 \rightarrow {\tilde G} e^- e^+) = 0.019,\end{aligned}$$ with a lifetime of $1 \times 10^{-19}$ sec. At such small values of $\lambda'_{211}$ and $m_{\tilde G}$, R-parity violating decays of the lightest neutralino are negligible. The light sparticle masses are $$m_{\chi_1^0} = 86.8, ~ m_{{\tilde e}_L, {\tilde \mu}_L} = 130.8, ~ m_{{\tilde \nu}_L}=104.2 , ~ m_{{\tilde e}_R, {\tilde \mu}_R} = 129.7,$$ whereas we have set all of the other sparticles except for the gravitino to be heavy (around 2000 GeV), so that they play no role in our analysis.
For this best-fit parameter point, we show the predicted distributions of lepton $E_T$, photon $E_T$ and ${{\not\!\!E}_{T}}$ in the histograms of Fig. \[dists\] and compare them with the excess of the data over the Standard Model background. $\Delta \sigma$ is labeled on each plot and is the equivalent number of sigma that this best-fit point fits a particular distribution better than the Standard Model. Obviously the largest $\Delta \sigma=3.31$ is for the lepton $E_T$, since the fit is performed to this variable. But we also see that at this point, the other distributions also fit the data better than the Standard Model: all fit the data better than the Standard model to, at least, 2$\sigma$ except for the fits to $E_T(\gamma)$ for which $\Delta \sigma=1.94$. The photon $E_T$ seems to be steeper in the data than in either the Standard Model or in our model and this is a feature at other values of $M_1, m_{\tilde l}$. In Fig. \[dists2\] we show the mass distributions. The data seems to indicate a bump extra to the Standard Model at lower values of $m_{\mu
\gamma}$, as shown in Fig. \[dists2\]a. The angular distributions in Fig. \[dists3\] show that our best-fit point fits the observed excess well in events where the $\gamma$ and ${{\not\!\!E}_{T}}$ are roughly back-to-back in Fig. \[dists3\]a. While Fig. \[dists3\]b does not seem a particularly better fit than the Standard Model by eye, nearly all of the difference in $\ln L$ comes from the last bin, where the Standard Model predicted hardly any events.
To calculate 95$\%$ $C.L.$ regions, we scan over the parameters $\Delta m\equiv
m_{\tilde l}-M_1$ and $M_1$, calculating $(\sigma)^2$ from eq. \[numsig\] at each point and for each kinematical distribution. The 95$\% C.L.$ is then given by $(\sigma)^2=-2 \Delta \ln L_{BF}+5.99$, where $\ln L_{BF}$ is the log likelihood at the best-fit point of the kinematic variable being examined. The 95$\%$ $C.L.$ regions of $\Delta m$ and $M_1$ for each separate fitted kinematical distribution are displayed in Fig. \[scans\]. The horizontal region at the bottom of the plot displays the LEP bound from neutralino pair production where the neutralinos decay to photons and ${{\not\!\!E}_{T}}$ [@PDG]. We note [@hutchmonkey] that analysis of LEP2 data at the highest energies should be able to cover the region up to $M_1=100$ GeV or so. The “overlap” region $\Delta m\approx 30-40$ GeV and $M_1\approx 90-120$ is encouragingly within the 95$\%$ confidence-level regions for all distributions except for $E_T(\gamma)$ (shown as the area inside the white line in Fig. \[scans\]a), which prefers $M_1<90$ GeV, below the LEP bound. Ideally, a correlated fit would be performed to all distributions simultaneously. Then, the significance of not having such a good fit for $E_T(\gamma)$ in the overlap region could be calculated. The most constraining variables are $E_T(\mu)$ and $m_{\mu \gamma}$, which require $(\Delta m,M_1)<(50,200)$ GeV and $(40,150)$ GeV respectively. The $M_T({{\not\!\!E}_{T}},\gamma)$ region constrains $M_{\chi_1^0}$ to be less than 120 GeV. The $H_T$ variable is not plotted because it does not constrain any of the parameter space at the 95$\%$ C.L.
We now display predictions for various quantities overlaid upon the 95$\%$ C.L. region from the $E_T(\mu)$ distribution for different values of $M_1$ and $\Delta m$. For example, in Fig. \[runIexpected\]a, it is shown that the expected number of detected signal $\mu \gamma {{\not\!\!E}_{T}}$ events (including the cuts described above) is 3-7 in the 95$\%$ C.L. region. Each parameter point specifies a particular sneutrino mass by eq. \[slepmasses\], and the $\lambda'_{211}$ coupling will also lead to resonant production of sneutrinos. The sneutrinos decay dominantly into neutrino and neutralino, leading to a $\gamma {{\not\!\!E}_{T}}$ signal. We use identical cuts to that used for the $\mu \gamma {{\not\!\!E}_{T}}$ channel, except for the cuts involving muons. Fig. \[runIexpected\]b shows that between 5 and 35 events of this nature are expected within the $95\%$ C.L. region. Standard Model backgrounds should be small, with dominant physics background coming from $\gamma Z$ production, where $Z \rightarrow \nu
\bar\nu$. We emphasise that measuring this interesting channel would provide an independent check on our model. In an R-parity conserving channel, a bound on the gravitino mass of $m_{\tilde
G}> 2.7 \times 10^{-5}$ eV [@mangano] comes from the non-observation of signal $\gamma {{\not\!\!E}_{T}}$ in D0 data [@dzero]. They place a bound of roughly 10 signal events for minimum $E_T$’s of 25 GeV at the 95$\%$ C.L. for a luminosity of 13 pb$^{-1}$. This would correspond to an upper bound of around 66 if we scale up to 86 pb$^{-1}$, as used here. The D0 bound is therefore not restrictive[^4]. The predicted rate anyway depends heavily upon the values of the unfitted parameters (see conclusions).
In Figure \[massBRs\] we show the range of relevant masses and branching ratios over the best-fit region. As Figure \[massBRs\]a indicates, $0.95\leq BR({\tilde \mu} \rightarrow \mu \chi_1^0)<0.99$ in the best-fit region, thus other decay modes of resonant smuon production ought to be suppressed. Similarly, $0.91\leq BR({\chi_1^0} \rightarrow {\tilde G} \gamma)<0.98$ thus the competing ${{\not\!\!R}_{P}}$ (lepton and 2 jets) and $e^+ e^- {\tilde G}$ decay modes of the resulting neutralino are also suppressed to unobservable levels at Run I. These branching ratios are dependent upon $\lambda'_{211}$ and $m_{\tilde G}$. The range of viable right-handed smuon mass is $130 < m_{{{\tilde \mu}}_R} <
210$ GeV, as shown in Figure \[massBRs\]c. The lightest neutralino mass is approximately equal to $M_1$ and varies up to 170 GeV in the best-fit region, as shown in Figure \[massBRs\]d.
The approximate level of other processes can be roughly estimated by calculating the expected numbers of pairs of sparticles at Run I. Neutralino production is predicted to be at an unobservable level, but the light sleptons have a non-negligible number of expected pairs produced at Run I. Using 86.34 pb$^{-1}$ of luminosity, we display the expected number of selectron, selectron-sneutrino and sneutrino pairs produced at Run I in fig. \[eeggmet1\]. No experimental cuts at all have been applied to these events, so detected numbers of these events might be expected to be a factor of 5-10 less than the numbers that are displayed in the figure. We can see from Figure \[eeggmet1\]a that in the 95$\%$ C.L. region that fits the $E_T(\mu)$ distribution, there are between about 0.1 and 1 selectron pairs expected, depending upon the actual value of the parameters $\Delta m$ and $M_1$. The dominant decay of the selectrons is ${\tilde e} \rightarrow e
\chi_1^0$, again followed by ${\tilde \chi_1^0} \rightarrow \gamma {\tilde
G}$. Thus the selectron pairs provide the correct signature to describe the $ee \gamma \gamma {{\not\!\!E}_{T}}$ event recorded by CDF at Run I. When detector efficiencies are taken into account, the expected number of $ee \gamma \gamma {{\not\!\!E}_{T}}$ events, while less than one, is nevertheless still much higher than the expected Standard Model background. Selectron pair production is predicted to be at the same level as smuon pair production, since they have approximately equal masses and they are produced via gauge interactions. However, the muon detection efficiency is somewhat lower than for electrons, so the expected number of [*detected*]{} $\mu \mu \gamma \gamma
{{\not\!\!E}_{T}}$ events from smuon pair production is smaller. It is also possible to produce sneutrinos, and selectron-sneutrino pairs (with final state $e \gamma \gamma {{\not\!\!E}_{T}}$) are predicted to be at the 0.1-2 event level before cuts. In Figure \[eeggmet1\]c, we see that sneutrino pair production is predicted to be at the level of 0.1 to 2 events. This final state will manifest itself as ${{\not\!\!E}_{T}}\gamma \gamma$.
Predictions for Run II
======================
At Run II of the Tevatron, assuming 2 fb$^{-1}$ of luminosity, our model can be ruled out or verified by again looking for an excess in the $\mu \gamma {{\not\!\!E}_{T}}$ channel, with much higher statistics. For example, assuming the same cuts and detector efficiencies as in our Run I analysis, we expect 193 signal events at Run II for our best-fit point because the cross-section increases to 0.096 pb for detected events. This estimate will be subject to change once the relevant cuts and detector efficiencies for Run II are known. We also expect an excess of 740 events in the ${{\not\!\!E}_{T}}\gamma$ channel from resonant sneutrino production at the best-fit point. We calculate the number of events in the ${{\not\!\!E}_{T}}\mu \gamma$ and ${{\not\!\!E}_{T}}\gamma$ channel and display them in Figure \[blahblah\], assuming 2 fb$^{-1}$ of collected luminosity (assuming the same efficiencies and cuts as used in Run I). We see that at least 70 events in the ${{\not\!\!E}_{T}}\mu \gamma$ channel are expected and at least 150 in the ${{\not\!\!E}_{T}}\gamma$ channel. This will be sufficient to measure parameters much more accurately, or rule the model out completely.
If a $\mu \gamma {{\not\!\!E}_{T}}$ signal is seen at Run II, the kinematic distributions will determine the viable parameter space more accurately than Run I data. Sparticle pair production may also be viable, and can provide constraints upon the parameter space. For this reason, we show the expected number of slepton pairs produced in Figure \[eeggmet4\]. Figure \[eeggmet4\]a shows that between 2 and 30 selectron (and smuon) pairs are expected, between 3 and 50 sneutrino-selectron and sneutrino-smuon pairs (Figure \[eeggmet4\]b), and between 2 and 40 sneutrino pairs (Figure \[eeggmet4\]c). Once efficiencies and cuts are taken into account, this adds up to at most a handful of events in each channel. Nevertheless, this would provide independent confirmation for our scenario and Standard Model background rates would be still extremely low.
Conclusions
===========
We have demonstrated that R-parity violating supersymmetry with a light gravitino can explain an anomalously high measured cross-section for the $\mu \gamma {{\not\!\!E}_{T}}$ channel. It also explains features observed in the kinematic variables of the signal events. We have used this information to constrain the slepton and neutralino mass parameters in the model. Although we could not perform a combined fit to all the different kinematic distributions, all of the distributions are fit well if $M_1 \approx 90$ GeV and $\Delta m_{\tilde l}\approx 30$ GeV. Ideally a combined fit to all kinematical variables would be performed for our model and a measure of the fit probability calculated (along with that of the Standard Model). Such a fit would require simulation of the backgrounds as well as knowledge of the multi-dimensional distributions of variables rather than the one-dimensional projections available to us. We have seen qualitatively that the $ee \gamma \gamma {{\not\!\!E}_{T}}$ event observed in Run I is fit much better by our model than by the Standard Model.
We chose representative parameters $m_{\tilde G}=10^{-3}$ eV, $\lambda'_{211}=0.01$ and $\tan \beta=10$. Varying the first two parameters does not change the kinematics of the event, merely the branching ratios of the decays. Thus the total number of signal events in the relevant channel changes, but the kinematic shapes in the signal events remain the same. A higher value of $m_{\tilde G}$ decreases the number of neutralinos decaying to a photon and a gravitino, but this can be compensated for by increasing $\lambda'_{211}$ to increase the production cross-section. However, at some value of $\lambda'_{211}$, R-parity violating decays will dominate over the gravitino decays of the neutralino. We also note that changing $\tan \beta$ has the effect of changing the relationship between $m_{\tilde l}$ and the slepton masses, as eq. \[slepmasses\] shows. Thus, different values of $\tan \beta$ could potentially prefer different regions of $\Delta
m$. Higher values of $\lambda'_{211}$ can also be accommodated by increasing both the mass of the sleptons and the mass of the neutralino to decrease the hard production cross-section. These points are illustrated in turn in table \[tab:pars\], where the parameters are varied in such a way as to produce a number of events comparable to the best fit value of 7.8. Point 1 illustrates that a higher value for $m_{\tilde G}$, which gives lower branching ratios of $\chi_1^0 \rightarrow
\gamma {\tilde G}$, can be compensated by an increase in $\lambda'_{211}$, raising the resonant smuon production cross-section. The kinematic fits still look favourable: point 1 fits the data better than the Standard Model to $\Delta \sigma=1.7-3.3$ depending upon the variable. However, it does seem difficult to raise $m_{\tilde G}$ further, because the required increase in $\lambda'_{211}$ then makes the dominant decay modes R-parity violating, decreasing the number of signal events $N_{{{\not\!\!E}_{T}}\mu \gamma}$. Points 2 and 3 illustrate the insensitivity to values of $\tan \beta$. We note in point 4 that a heavy neutralino of 200 GeV can also provide enough events by raising $\lambda'_{211}$, but that the $E_T(\gamma)$ distribution is predicted to be too hard compared with the data. Also, the signal bump in $m_{\mu \gamma}$ goes to higher energies, disagreeing somewhat with the data. The other distributions, however, seem to fit the data quite well. Point 4 also shows that the predicted number of $\gamma {{\not\!\!E}_{T}}$ events does vary with $\lambda'_{211}$ and $m_{\tilde G}$, and so our prediction for the numbers of these events is parameter dependent.
Run II will provide a definitive test of our scenario, by again looking in the $\mu \gamma {{\not\!\!E}_{T}}$ and ${{\not\!\!E}_{T}}\gamma$ channels. Other final states with small SM backgrounds are expected at the few-event level. For example, observation of $ee \gamma \gamma {{\not\!\!E}_{T}}$, $\mu
\mu \gamma \gamma {{\not\!\!E}_{T}}$, $\mu \gamma \gamma {{\not\!\!E}_{T}}$ and $e \gamma \gamma {{\not\!\!E}_{T}}$ would provide independent confirmation of our scenario. We note, however, that towards the higher values of $M_1$ expected, less than one event in each of the pair production channels is likely once detector effects and cuts are taken into account.
We would like to warmly thank H. Frisch and D. Toback for their gracious help and advice regarding the data. We would like to thank D. Hutchcroft for discussions concerning LEP data and A. Barr, G. Blair and J. Holt for discussions on the statistics. We also acknowledge discussions with G. Altarelli, J. Ellis and M. Mangano.
B. C. Allanach, A. Dedes, and H. K. Dreiner, Phys. Rev. [D60]{}, 075014 (1999), hep-ph/9906209; H. Dreiner, ‘Perspectives on Supersymmetry’, Ed. by G.L. Kane, World Scientific; G. Bhattacharyya, hep-ph/9709395, presented at Workshop on Physics Beyond the Standard Model, Tegernsee, Germany, 8-14 Jun 1997.
For some of the earlier references, see: F. Zwirner, Phys. Lett. B132 (1983) 103 ; L. Hall and M. Suzuki, Nucl. Phys. B231 (1984) 419; J. Ellis et al, Phys. Lett. B150 (1985) 142 ; S. Dawson, Nucl. Phys. B261 (1985) 297 ; R. Barbieri and A. Masiero, Nucl. Phys. B267 (1986) 679.
G. Giudice and R. Rattazzi, Phys. Rept. 322 (1999) 419 and references therein.
S. Borgani, A. Masiero and M. Yamaguchi, Phys. Lett. B386 (1996) 189.
P. Fayet, Phys. Lett. B[70]{} (1977) [461]{}; Phys. Lett. B[175]{} (1986) [471]{}.
D. Acosta et. al, hep-ex/0110015.
B.C. Allanach, S. Lola and K. Sridhar, “Explaining Anomalous CDF $\mu \gamma$ Missing-$E_T$ Events With Supersymmetry”, hep-ph/0111014.
S. Dimopoulos, R. Esmailzadeh, L. J. Hall, and G. D. Starkman, Phys. Rev.D41 (1990) 2099; J. Kalinowski, R. Rückl, H. Spiesberger, and P. M. Zerwas Phys. Lett. B414 (1997) 297; J. L. Hewett and T. G. Rizzo, hep-ph/9809525, proceedings of (ICHEP98), Vancouver; B. C. Allanach [*et al.*]{}, hep-ph/9906224, contribution to Physics at Run II Workshop, Batavia, November 98; H. Dreiner, P. Richardson and M. Seymour, Phys. Rev. D63 (2001) 055008; JHEP 0004:008 (2000); hep-ph/0001224; G. Moreau, M. Chemtob, F. Deliot, C. Royon, and E. Perez, Plys. Lett. B475 (2000) 184; G. Moreau, E. Perez, and G. Polesello, Nucl. Phys. B604 (2001) 3.
S. Ambrosanio, G. L. Kane, G. D. Kribs, S. P. Martin, and S. Mrenna, Phys. Rev. Lett. 76 (1996) 3498; G. L. Kane and S. Mrenna, Phys. Rev. Lett. 77 (1996) 3502; S. Ambrosanio, G. L. Kane, G. D. Kribs, S. P. Martin, and S. Mrenna, Phys. Rev. D55 (1997) 1372.
H. Baer, M. Brhlik, C. Chen and X. Tata, Phys. Rev. D55, 4463 (1997).
F.Abe et al., Phys. Rev. D59, 092002 (1999).
V. Barger, G. F. Giudice and T. Han, Phys. Rev. D 40 (1989) 2987.
F. Abe [*et al.*]{}, Phys. Rev. D55, 5263 (1997)
F. E. Paige, S. D. Protopescu, H. Baer and X. Tata, “ISAJET 7.40: A Monte Carlo event generator for p p, anti-p p, and $e^+
e^-$ reactions”, hep-ph/9810440.
G. Corcella [*et al.*]{}, “HERWIG 6.3”, hep-ph/0107071; [*ibid.*]{}, JHEP 01 (2001) 010, hep-ph/0011363; G. Marchesini, B. R. Webber, G. Abbiendi, I. G. Knowles, M. H. Seymour and L. Stanco, “HERWIG: A Monte Carlo event generator for simulating hadron emission reactions with interfering gluons. Version 5.1 - April 1991”, Comput. Phys. Commun. [67]{} (1992) 465.
Particle Data Book, D.E. Groom [*et al.*]{}, Eur. Phys. J. C15 (2000) 1.
D. Hutchcroft, talk presented at [*COSMO-01*]{}, hep-ex/0111085.
A. Brignole, F. Feruglio, M.L. Mangano and F. Zwirner, Nucl. Phys. B526 (1998) 136.
D0 collaboration, Phys. Rev. Lett. 78 (1997) 3640 and Phys. Rev. D57 (1998) 3817, hep-ex/9710031.
[^1]: On leave of absence from the Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
[^2]: Smuon resonances at hadron colliders have been previously studied in a different context [@previous].
[^3]: All parameters take their quoted values at the electroweak scale.
[^4]: Also note that our cuts are completely different to those in the D0 analysis.
---
abstract: 'The NaNO$_{2}$ nanocomposite ferroelectric material in porous glass was studied by neutron diffraction. For the first time the details of the crystal structure, including positions and anisotropic thermal parameters, were determined for a solid material embedded in a porous matrix, in the ferro- and paraelectric phases. It is demonstrated that in the ferroelectric phase the structure is consistent with the bulk data, but above the transition temperature a giant growth of the amplitudes of thermal vibrations is observed, resulting in the formation of a “premelted state”. Such a conclusion is in good agreement with the results of dielectric measurements published earlier.'
author:
- 'A.V. Fokin, Yu.A. Kumzerov, N.M. Okuneva'
- 'A.A. Naberezhnov'
- 'S.B. Vakhrushev'
- 'I.V. Golosovsky, A.I. Kurbakov'
title: Temperature Evolution of Sodium Nitrite Structure in a Restricted Geometry
---
INTRODUCTION {#intro}
============
The problem of the physical properties of materials in a restricted geometry is one of the “hot” topics of modern solid state physics and is not only of fundamental interest but also of practical importance. Along with films, filaments etc., there is a large and important group of restricted-geometry objects, namely materials confined within porous media (hereinafter we will call them confined materials - CM). In recent years the properties of CM, and in particular various types of phase transitions (PT) in different CM (superconducting [@p1; @p2], superfluid [@p3; @p4], melting-freezing [@p5; @p6; @p7; @p8; @p9; @p10; @p11] and other PTs [@p12; @p13; @p14; @p15; @p16; @p17; @p18; @p19; @p20; @p21; @p22]), have been extensively studied by different experimental methods including calorimetry [@p5; @p7; @p20], NMR [@p9; @p21], ultrasonic [@p8; @p9] and dielectric [@p12; @p14; @p15] measurements, Raman [@p10; @p13], X-ray [@p12; @p16; @p17; @p18] and neutron scattering [@p10; @p14; @p22; @p23; @p24; @p25; @p26], differential thermal analysis [@p19] etc. It has been shown that CM can form either a system of isolated particles [@p10] or a net of interconnected dendrite clusters [@p14], and that their physical properties differ drastically from those of the corresponding bulk samples and strongly depend on various characteristics of the porous matrices and embedded substances, such as pore size and geometry, wetting ability, surface tension, the interaction between the CM and the surface of the host matrix, and so on.
Finite-size effects in ferroelectrics were observed for the first time at the beginning of the 1950s [@p16]. It was shown that the physical properties of dispersed ferroelectrics are significantly different from those of the bulk materials, in particular when the characteristic size becomes comparable with the correlation length of the order-parameter critical fluctuations. The modern theoretical and experimental situation is described in detail in the review [@p27] and references therein. In recent years the development of new nanotechnologies gave a strong impetus to the study of ferroelectric microcomposites as a new basis for ferroelectric memories or as an active component in fine-composite materials; however, the principal attention was devoted to thin films or granular ferroelectrics. On the other hand, very interesting and surprising results were recently obtained for ferroelectric CM. In particular, dielectric measurements of NaNO$_{2}$, KH$_{2}$PO$_{4}$ (KDP) and Rochelle salt confined in different porous matrices have shown [@p14; @p15; @p28] a growth of the dielectric permittivity $\varepsilon $ above the temperature of the ferroelectric phase transition T$_{c}$ for all materials and all matrices, and an unexpected increase of T$_{c}$ with decreasing characteristic size D for KDP. The most remarkable result was the giant growth of $\varepsilon $ (up to 10$^{8}$ at 100 Hz - a record value!) on approaching the bulk melting temperature (T$_{melt} $= 557 K) that was observed for NaNO$_{2}$ embedded into an artificial opal matrix [@p15]. The temperature dependence of $\varepsilon $ in the CM case essentially differs from the analogous dependence for the bulk NaNO$_{2}$ [@p29], which is typical for classical ferroelectrics, and this giant growth of the dielectric permittivity was attributed to an extremely broadened melting process [@p15], but no experimental evidence was presented.
We have attempted to study the temperature evolution of the structure of confined NaNO$_{2}$ in porous glass with a pore size of 7 nm, at temperatures below and above T$_{c}$, by neutron diffraction, in order to clarify the microscopic origin of the observed anomalies of the dielectric permittivity. This method was successfully used to study the structure evolution of water, cyclohexane [@p23; @p24; @p25; @p26] and liquid mercury [@p10] confined within porous media at the melting-freezing PT, but no diffraction study of microcomposite ferroelectric materials (except our preliminary results [@p14]) was performed earlier. Moreover, we do not know of any paper dealing with the detailed structure refinement (including the determination of thermal parameters) of any confined solid material.
RESULTS {#sec:1}
=======
The samples were prepared by immersing preliminarily warmed-up platelets of the porous glass in melted NaNO$_{2}$ in a sealed quartz container. The glass samples were tested by mercury intrusion porosimetry and the pore sizes were found to be 7$\pm $1 nm. The volume amount of the salt was about 25%. Measurements were performed on the powder diffractometer G4-2 (LLB, Saclay, France) at a wavelength of 2.3434 [Å]{} at room temperature (RT), 400 K, 420 K, 440 K, 450 K and 460 K, i.e. below and above the ferroelectric PT temperature T$_{c}$ $\approx$ 438 K. The diffraction patterns for 420 K (ferroelectric phase) and 460 K (paraelectric phase) are presented in Fig. \[1\]. The diffuse background observed in addition to the normal diffraction peaks due to the porous silica glass was used to determine the nearest Si-O and O-O distances. These parameters were found to be almost equal to those for the glass silicate tetrahedron SiO$_{4}$ [@p30] and were practically temperature independent; therefore the cavity sizes in the host glass matrix do not depend on temperature.
The observed diffraction peaks corresponding to the orthorhombic structure were slightly asymmetric, with a width larger than the instrumental resolution but much smaller than the value expected for scattering on isolated 7 nm particles. This leads to the conclusion that, due to its high wetting ability, the sodium nitrite forms a kind of interconnected clusters, probably of the dendrite type. Their average size was found from the structure refinement to be about 45 nm and was practically temperature independent. One should mention that the situation is quite different from the case of non-wetting compounds like liquid mercury, which on cooling forms a system of independent particles with a characteristic size equal to the average pore diameter [@p10].
The diffraction patterns in the ferroelectric phase were fitted within the **Im2m** space group and, following the paper [@p31], a pseudo-mirror plane perpendicular to the **b** axis was included at **y**=0 to take into account the incomplete ordering of the NO$_{2}$ groups. In this case the long-range order parameter can be determined as $\eta=f_{1}-f_{2}$ [@p31], where the fraction $f_{1}$ of the NO$_{2}$ groups was placed on one side of the plane and $f_{2}=1-f_{1}$ on the other side.
Below T$_{c}$ our results are in good agreement with the published data for bulk NaNO$_{2}$; however, the anisotropic thermal parameters $\beta_{ij}$ are slightly higher than for the bulk [@p31]. The diffraction patterns above T$_{c}$ correspond to the paraelectric phase with space group **Immm.** Heating through T$_{c}$ results in a decrease of the intensity of most of the peaks. The observed effect is much stronger than in the case of bulk NaNO$_{2}$ [@p31] and is in agreement with our earlier data [@p14].
DISCUSSION {#sec:2}
==========
The results of the refinement procedure have revealed two main distinguishing features of the temperature evolution of the structure of the embedded sodium nitrite.
The first one is the essential increase of the unit cell volume upon passing through T$_{c}$ (inset of Fig. \[2\]). Here one can note that a similar phenomenon was observed at melting for the overwhelming majority of materials [@p32]. A detailed analysis of the temperature dependences of the lattice parameters **a**, **b** and **c** shows that in the ferroelectric phase the confined NaNO$_{2}$ expands in the **a** and **b** directions and contracts in the **c** direction, similar to the bulk material, but above T$_{c}$ the lattice parameters **a** and **b** increase more rapidly and **c** decreases more slowly than in the bulk. As far back as 1961, S. Nomura [@p29] showed that the misalignment of the NO$_{2}$ anion is responsible for the anomalous thermal expansion in the **b** direction (along the macroscopic polarization). He also supposed that the expansion in the **a** direction and the contraction in the **c** direction could be explained by rotation or rotational vibration about an arbitrary axis of the non-spherical NO$_{2}$ group, which holds its long axis parallel to the **c** direction in equilibrium. Later K. Takahashi and W. Kinase [@p34] proposed a microscopic mechanism of the ferroelectric PT and showed that a mixed type of NO$_{2}$ group rotation around the **a** and **c** axes is realized. They also showed that in the ferroelectric phase there are potential barriers, which have strong angular dependences and limit the rotation of the NO$_{2}$ around the **a** and **c** axes.
In terms of such a model, the observed behavior of the unit cell parameters could be explained by an increase of these rotations above T$_{c}$, which manifests itself experimentally as a growth of the thermal vibrations of the ions. Indeed, the second distinguishing feature is the steep growth of the thermal parameters $\beta_{ij}$ (Fig. \[2\]) above T$_{c}$, pointing to a “looseness” (or softening) of the structure.
Using the obtained $\beta_{ij}$, the thermal vibration ellipsoids for the constituent ions were constructed at all measured temperatures and compared with those for the bulk material [@p31]. The results of the refinement at 420 K (below T$_{c}$) and 460 K (above T$_{c}$) are presented as ellipsoids of 50% probability in Fig. \[3\] and as ellipsoids of 5% probability in Fig. \[4\] (inasmuch as the oxygen thermal displacements are very large for the porous sample). For the bulk material these ellipsoids are close to a sphere at all temperatures and their characteristic sizes increase insignificantly on heating. For sodium nitrite in porous glass below T$_{c}$ these ellipsoids are clearly anisotropic and slightly larger than for the bulk, but on heating through T$_{c}$ the shape and size of the thermal vibration ellipsoids change drastically. In the paraelectric phase (above T$_{c}$) the vibrations of Na and N form practically flat disks perpendicular to the **b** direction for Na and to the **a** direction for N, as a result of the mixed rotation around the **a** and **c** axes, while the oxygen ions form very stretched ellipsoids predominantly along the **a** and **c** directions, as should be expected for an increasing rotation around the **b** axis. The obtained values of the oxygen thermal displacements along the **c** and **a** directions at 460 K (above T$_{c}$) are equal to 1.21 [Å]{} and 0.93 [Å]{} respectively (i.e. more than 25% of the O-O distance of 3.34 [Å]{} between neighboring NO$_{2}$ groups) and essentially exceed the Lindemann criterion, which states that a bulk material will melt when the average thermal displacement of the nuclei exceeds 10% - 15% of the internuclear distances [@p32; @p35; @p36].
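As a simple numeric illustration of this last point, the short Python check below compares the quoted oxygen displacements with the O-O distance and with the 10% - 15% Lindemann range; only the numbers quoted above are used:

```python
# Relative oxygen thermal displacements at 460 K, from the values quoted in the text.
d_OO = 3.34                              # Angstrom, O-O distance between neighboring NO2 groups
displacements = {"c": 1.21, "a": 0.93}   # Angstrom, oxygen displacements along the c and a directions

for axis, u in displacements.items():
    print(f"u_{axis}/d(O-O) = {u / d_OO:.0%}")   # ~36% and ~28%, well above the 10-15% Lindemann range
```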
Having this picture of the ionic thermal vibrations, and keeping in mind the results of the structure refinement, we can suppose that the “looseness” (or softening) of the structure is a distinctive (and truly intrinsic) feature of this CM, corresponding to the formation above T$_{c}$ (more than 100 degrees below the bulk T$_{melt}$) of a “premelted state”, initially manifesting itself in the oxygen sublattice. In this case the above-mentioned growth of the dielectric permittivity of NaNO$_{2}$ above T$_{c}$ [@p15] could be explained by the appearance of an ionic current due to jump diffusion of the constituent ions, primarily the oxygen ones.
Analogous premelting effects were studied earlier in metasilicates (Na$_{2}$SiO$_{3}$, Li$_{2}$SiO$_{3}$ [@p37; @p38]) and some other minerals [@p39], such as diopside, anorthite etc., where heat capacity anomalies have been observed beginning from 100 to 200 degrees below the relevant melting temperatures; for those it was demonstrated that premelting represents unquenchable configurational changes within phases remaining crystalline up to the congruent melting points [@p39]. This is different from the premelting of ice in porous glass [@p40] or in exfoliated graphite [@p41] matrices, where the premelted state is formed in the surface layer between the ice and the host matrix and strongly depends on the surface curvature. In our case the “premelted state” has a volume character and probably originates from some size effect of yet unclear nature.
In conclusion, for the first time the details of the structure of a solid material embedded in a porous matrix have been determined by neutron diffraction. The temperature evolution of the structure in a restricted geometry was studied for the ferroelectric NaNO$_{2}$ embedded in a porous glass, and it was shown that this CM forms a kind of interconnected clusters, probably of the dendrite type, with a practically temperature-independent average size of about 45 nm. Above T$_{c}$ a volume “premelted” state is formed, manifesting itself in a sharp growth of the thermal motion parameters, a softening of the lattice and an increase of the lattice volume. In such a case the possible appearance of an ionic current due to oxygen jump diffusion is proposed as the reason for the observed giant growth of the dielectric permittivity. On cooling below T$_{c}$ the macroscopic polarization and the potential barriers suppress the lattice softening and the normal ferroelectric phase exists, but even at room temperature the thermal vibrations differ from those of the bulk material.
The work was supported by the RFBR (grants 00-02-16883, 01-02-17739), the Russian Program “Neutron Researches of Solids” (the contract 01.40.01.07.04) and “Solid State Nanostructures” (grant 99-1112).
M.J. Graf, T.E. Huber, and C.A. Huber, Phys. Rev. **B 45**, 3133 (1992).
E.V. Charnaya, C. Tien, K.J. Lin, C.S. Wur, Yu.A. Kumzerov, Phys. Rev. **B 58**, 467 (1998).
M.H.W. Chan, K.I. Blum, S.Q. Murphy, G.K.S. Wong, and J.D. Reppy, Phys. Rev. Lett. **61**, 1950 (1988).
M. Larson, N. Mulders, and G. Ahlers, Phys. Rev. Lett. **68**, 3896 (1992).
R. Mu and V.M. Malhotra , Phys. Rev. **B** **44**, 4296 (1991).
J.A. Duffy, N.J. Wilkinson, H.M. Fretwell, and M.A. Alam, J. Phys.: Condens. Matter **7**, L27 (1995) and references therein.
C.L. Jackson and G.B. McKenna, J. Chem. Phys., **93**, 9002 (1990).
J.R. Beamish, A. Hikata, L. Tell, and C. Elbaum, Phys. Rev. Lett. **50**, 425 (1983)
B.F. Borisov, E.V. Charnaya, P.G. Plotnikov, W.-D. Hoffmann, D. Michel, Yu.A. Kumzerov, C. Tien, C.S. Wur, Phys. Rev. **B 58**, 5329 (1998).
Yu.A. Kumzerov, A.A. Nabereznov, S.B. Vakhrushev, and B.N. Savenko , Phys. Rev. **B 52**, 4772 (1995).
M.-C. Bellissent-Funel, J. Lal, L. Bosio, J.Chem.Phys. **98**, 4246 (1993).
T. Kanata, T. Yoshukawa, K. Kubota, Solid State Comm. **62**, 765 (1987).
K. Ishikawa, K. Yoshikawa, and N. Okada, Phys. Rev. **B 37**, 5852 (1988).
E.V. Colla, A.V. Fokin, E.Yu. Koroleva, Yu.A. Kumzerov, S.B. Vakhrushev, B.N. Savenko, NanoStructured Materials **12**, 963 (1999).
S.V. Pan’kova, V.V. Poborchii and V.G. Solov’ev, J. Phys.: Condens. Matter **8**, L203 (1996).
K. Anliker, H.R. Brugger, W. Kanzig, Helv. Phys. Acta **27**, 99 (1954).
K. Saegusa, W. Rhine, and H.K. Bowen, J. Am. Ceram. Soc. **76**, 1505 (1993).
K. Uchino, E. Sadanaga, and T. Hirose, J. Am. Ceram. Soc. **72**, 1555 (1989).
P. Marquardt and H. Gleiter, Phys. Rev. Lett. **48**, 1423, (1982).
W.L. Zhong, Y.G. Wang, and P.L. Zhang, Phys.Letts. A **189**, 121 (1995).
W. Buchheit, V. Kreibig, D. Müller, and A. Voight, Z. Phys. **B 32**, 83 (1978).
I.V. Golosovsky, I. Mirebeau, G. André, D.A. Kurdyukov, Yu.A. Kumzerov, and S.B. Vakhrushev, Phys. Rev. Lett. **86**, 5783 (2001).
D.C. Steytler and J.C. Dore, J. Phys. Chem. **87**, 2458 (1983).
J.C. Dore, M. Dunn, and P. Chieux, J. Phys.(Paris), **48**, C1 (1987).
M.J. Benham, J.C. Cook, J-C. Li, D.K. Ross, P.L. Hall, B. Sarkissian, Phys. Rev. **B 39**, 633 (1989).
P. Wiltzius, F.S. Bates, S.B. Dierker, and G.D.Wignall Phys. Rev. **A 36**, 2991 (1987)
W.L. Zhong, Y.G. Wang and P.L. Zhang, Ferroelectrics Review **1**, 131 (1998).
E.V. Colla, A.V. Fokin, Yu.A. Kumzerov, Solid State Commun. **103**, 127 (1997).
S. Nomura, J. Phys. Soc. of Japan **16**, 2440 (1961).
I. Naray-Szabo, Krystalykemia, Academiai Kiado, Budapest, 1969.
M.I. Kay, Ferroelectrics **4**, 235 (1972).
A.R. Ubbelohde, Melting and Crystal Structure, Clarendon Press: Oxford, 1965.
http://GPEengineeringSoft.com
K. Takahashi and W. Kinase, J. Phys. Soc. Japan **61**, 329 (1992).
W.A. Curtin and N.W. Ashcroft, Phys. Rev. Lett. **56**, 2775 (1986).
R. Brout, Phase Transitions, University of Brussels, NY-Amsterdam, 1965.
A.M. George, P. Richet, J.F. Stebbins, American Mineralogist **83**, 1277 (1998).
P. Richet, B.O. Mysen, D. Andrault, Physics and Chemistry of Minerals **23**, 157 (1996).
P. Richet, J. Ingrin, B.O. Mysen, P. Courtial, P. Gillet, Earth and Planetary Science Letters **121**, 589 (1994).
T. Ishizaki, M. Maruyama, Y. Furukawa, J.G. Dash, Journal of Crystal Growth **163**, 455 (1996).
J.M. Gay, J. Suzanne, J.G. Dash, H.Y Fu, Journal of Crystal Growth **125**, 33 (1992).
---
abstract: 'A novel concept of quantum turbulence in finite size superfluids, such as trapped bosonic atoms, is discussed. We have used an atomic $^{87}\mathrm{Rb}$ BEC to study the emergence of this phenomenon. In our experiment, the transition to the quantum turbulent regime is characterized by the formation of tangled vortex lines, controlled by the amplitude and time duration of the excitation produced by an external oscillating field. A simple model is suggested to account for the experimental observations. The transition from the non-turbulent to the turbulent regime is a rather gradual crossover, but it takes place in a sharp enough way to allow for the definition of an effective critical line separating the regimes. Quantum turbulence emerging in a finite-size superfluid may be a new idea helpful for revealing important features associated with turbulence, a more general and broad phenomenon.'
author:
- 'R. F. Shiozaki$^1$'
- 'G. D. Telles$^1$'
- 'V. I. Yukalov$^2$'
- 'V. S. Bagnato$^1$'
title: Transition to quantum turbulence in a finite size superfluid
---
Introduction
============
Quantum Turbulence (QT) is a phenomenon related to the vortex dynamics in superfluids and it can be realized in many different ways. In the case of liquid helium, below the $\lambda$-point, moving grids and vibrating objects can generate a fully tangled configuration of quantized vortices which characterizes QT [@feyman; @hall]. Within the context of low temperature physics, QT has been studied since its discovery over fifty years ago [@vinen]. One of the main motivations of studying QT is to establish its relation with turbulence in classical fluids, where there is no requirement for the vortex angular momentum quantization. For many years, QT could only be studied in liquid helium ($^{4}$He and $^{3}$He) [@donelly]. However, recently [@prl] the Bose-Einstein Condensate (BEC) of trapped gases has provided the main ingredients to study QT in a much simpler superfluid. The vortex nucleation in a BEC can be produced by introducing a rotation in the trapped cloud [@rot1; @rot2; @rot3]. When quantized vortices are generated, they arrange themselves to form Abrikosov lattices [@shaeer]. These crystalline structures of quantized vortices result from the mutual interaction of the vortices produced by the rotating field: the repulsion between vortex pairs is balanced by the trapping potential. In such structures, with a collection of vortices having the same direction and circulation sign, QT cannot spontaneously occur. Since the main characteristic of QT is a spatially tangled distribution of quantized vortices, one must have vortex lines distributed in many spatial directions to reach such a configuration. One method to achieve this was proposed by Kobayashi and Tsubota [@michi]. In their proposal, combined rotations around two orthogonal axes induce the nucleation of quantized vortex lines in orthogonal directions with a clear evolution to QT. In a recent work [@pra] a variant of the procedure suggested in Ref. [@michi] was implemented. Henn *et al.* [@pra] demonstrated that a special type of oscillatory excitation imposed on a BEC generates vortices. When vortices and anti-vortices are formed and proliferate through the sample, the emergence of turbulence is observed [@prl] as a configuration of tangled vortices. In fact, such oscillatory excitations generate vortex-antivortex pairs [@threevortex]. The method of creating vortices by means of oscillating external fields is a particular case of the general method of creating coherent topological modes by such oscillating fields [@YYB97; @YYB02; @YB09].
Quantized vortices inside a condensed atomic cloud are the necessary ingredients to produce quantum turbulence. Once the cloud is filled with vortices and anti-vortices distributed in various directions, but not arranged in a lattice, turbulence should naturally be established. The production of vortices in a BEC is therefore the first step. The majority of experimental groups have produced separate vortices and vortex arrays by introducing a single-axis rotation in a BEC. The stirring technique has been used with great success. In these experiments a similarity with experiments performed with $^{4}$He [@turbulence] could be directly observed. For the large size BECs produced by the MIT group [@shaeer], a large number of vortices with the same circulation could be produced and configurations like Abrikosov lattices (originally observed for type-II superconductors) were clearly seen. As many as $130$ vortices could be accommodated in the large MIT BEC. In such experiments, however, no tangled configurations could be observed, but only crystalline structures were produced.
In this communication, we analyze the process of generating QT in a BEC, focusing on the consequences of having a finite-size sample, which is an important feature of superfluids originating from cold trapped atoms. In this case, the formed vortices spread inside the trap until turbulence is developed. We analyze this effect in terms of the amplitude and time of excitation by the oscillatory perturbation. We suggest a simple model demonstrating the existence of an effective transition line between the non-turbulent and turbulent regimes. A comparison with experimental data is presented, showing good qualitative agreement. We start with a brief description of the emergence of QT in a BEC, followed by a general analysis and a comparison with experimental observations. Finally, arguments are presented concerning the importance of the phenomenon of quantum turbulence in finite superfluids.
Experimental observation
========================
The vortex nucleation can be done by imposing an external oscillating field on the condensed trapped atomic cloud, producing an effect equivalent to rotation. Indeed, those oscillations generate vortex-antivortex pairs [@YYB97; @YYB02; @YB09]. Vortex nucleation is expected to take place when the perturbation provides enough energy.
Our system is composed of $^{87}$Rb atoms forming a BEC described earlier [@prl; @pra]. We used a combination of oscillations introduced by the superposition of coils on the conventional trapping field. The anisotropic harmonic potential of a quadrupole-Ioffe-configuration (QUIC) trap is approximately given by $V=\frac{1}{2} m (\omega_{\rho}^2 \rho^2 + \omega_z^2 z^2)$. In our case, $\omega_{\rho} / \omega_{z} = 9$, which corresponds to a cigar shaped trap. Superimposing off-axis coils on this trap modifies the potential so that the long axis is rotated while the minimum is displaced. This type of excitation can produce different numbers of vortices, depending on the oscillation amplitude and excitation time [@pra], and it should create both vortices and anti-vortices. Observations of three vortices inside an atomic cloud provide good evidence for the existence of vortex and anti-vortex pairs, as discussed elsewhere [@threevortex]. Since the vortex and anti-vortex pairs are not exactly parallel to each other (the oscillations are not confined to a plane), the vortex and anti-vortex are expected to live longer than if they were parallel. We have fixed the excitation frequency at $200{\ensuremath{\, \mathrm{Hz}}}$ and considered a range of amplitudes and time intervals of the applied external field. The results are observed after a time-of-flight (TOF) of $20{\ensuremath{\, \mathrm{ms}}}$. The overall cloud characteristics are determined by the trapping harmonic potential, whose frequencies are $f_z=23{\ensuremath{\, \mathrm{Hz}}}$ and $f_{\rho}=207{\ensuremath{\, \mathrm{Hz}}}$. The cigar shaped BEC contains about $2\times10^5$ atoms with a typical condensate fraction ranging from $40$ to $70\%$.
As the oscillation amplitude/time increases, the vortices start to be nucleated in agreement with Ref. [@pra]. The number of vortices in the cloud is counted as the clearly distinguishable dark regions in the absorption image after TOF and it changes with the amplitude and time interval of the excitation [@diagram]. We have varied the amplitude from $0$ to $200{\ensuremath{\, \mathrm{mG/cm}}}$ and the excitation time up to $60{\ensuremath{\, \mathrm{ms}}}$. We are interested in studying the crossover region from the observation of vortices to the turbulent cloud. The region in the diagram, separating both regimes, as well as the typical images characterizing the so-called non-turbulent and turbulent regimes are shown in Fig.\[fig:clouds3d\] and Fig.\[fig:diagram\].
![Atomic optical density images of: (a) non-turbulent cloud with well-defined separate vortices and (b) turbulent cloud, where the partial absorption changes along the image due to the existence of tangled vortices. The images were taken after 15 ms of free expansion.[]{data-label="fig:clouds3d"}](Fig1.eps)
In order to distinguish between the non-turbulent and turbulent regimes, we have adopted the following criterion. In Fig.\[fig:clouds3d\](a), we present a density profile, where one can clearly see the vortices as the dark regions (valleys) spread inside the cloud. In this case, the typical Thomas-Fermi aspect ratio inversion is observed during the TOF (free fall), as expected in a *non-turbulent* gas. On the other hand, the turbulent regime is characterized by tangled vortex lines spread all over the cloud. As a result, the dark region contrast essentially fades away, as is seen in the absorption images of Fig.\[fig:clouds3d\](b). Besides that, the behavior of a QT cloud during the TOF is different from that of a regular BEC. The axes aspect ratio is kept essentially unchanged [@prl]. The tangled vortices make the whole system isotropic [@Y10]. Combining these characteristics, we define a *turbulent* cloud. This is the criterion for distinguishing between the turbulent and non-turbulent regimes.
Theoretical model
=================
According to Fig.\[fig:diagram\], there exists a well-defined parameter region, where the non-turbulent regime evolves to the turbulent regime. To understand the observed behavior, we propose a simple model based on energy-balance arguments, which are quite general and do not depend on the actual mechanism type needed for the vortex creation or its dynamics inside the cloud. The nucleation of vortices is due to the instability of collective excitations arising from the energy pumped by external oscillating perturbations [@Y10; @orsay].
![Diagram, on the excitation amplitude-time plane, demonstrating the crossover between the non-turbulent atomic cloud, with well-defined separate vortices, and the turbulent cloud, with tangled vortices. The plotted line is based on Eq. (\[eq:Acrit\]) considering halfway points with the same amplitude as experimental critical points.[]{data-label="fig:diagram"}](Fig2.eps)
We start by noting that there should exist a certain energy amount that is necessary to pump into the superfluid atomic cloud for the vortex formation. Following [@pethick], we write down the energy needed for the vortex nucleation as $$\label{eq:Evort}
E_{vort}=\frac{\hbar^2}{m l_0^2} \ln{\frac{l_0}{\xi}} \; ,$$ where, $\xi=(\sqrt{8 \pi n a_s})^{-1}$ is the healing length, $n$ is the BEC peak density, $a_s$ is the s-wave scattering length, and $l_0$ is the vortex line length. We assume that the latter is approximately equal to the effective cloud’s harmonic oscillator length, $$\label{eq:lzero}
l_0=a_{ho}= \sqrt{\frac{\hbar}{m (\omega_{\rho}^2 \omega_z)^{1/3}}} \; .$$
Then, let $R_{pump}$ be the rate of the total energy pumped into the cloud by the external oscillating field, and $\eta$ be the energy fraction converted to rotation. Therefore the energy available for vortex formation can be written as $$\label{eq:Ezero}
E_{pump}=\eta R_{pump} (t-t_0) \; ,$$ where $t$ is the elapsed excitation time and $t_0$ is the minimal time period needed for the first vortex creation. The first vortex is created when $E_{pump} \approx E_{vort}$. If after the time $t$ of pumping, the number of vortices $N_{vort}$ is formed, then the energy balance implies that $$\label{eq:best}
\eta R_{pump} (t-t_0) = N_{vort} E_{vort} \; ,$$ and the expected number of vortices is $$\label{eq:number}
N_{vort} = \frac{\eta R_{pump}}{E_{vort}} (t-t_0) \; .$$
This gives us a good estimate for the number of vortices observed in the atomic cloud, as a function of the excitation elapsed time, which can be compared to the time dependence reported in [@diagram]. Here, we have not explicitly taken into account the vortex-antivortex annihilation, but this effect could be incorporated into the value of the coefficient $\eta$. This estimate depends neither on the cloud motion during the excitation, nor on the presence of collective modes that certainly arise [@icap]. Assuming that the turbulence onset takes place when the atomic cloud is heavily populated by vortices, and $N_{vort} \xi$ is of the order of the characteristic trap size, we conclude that turbulence should arise when the number of vortices is $$N_{vort} \approx \frac{l_0}{\xi} \; .$$ Near the frontier between the two regimes, the above equations result in the critical behavior of $R_{pump}$, generating turbulence: $$\label{eq:Rcrit}
R_{pump}= \frac{l_0E_{vort}}{\xi \eta (t-t_0)} \; .$$
The energy pump rate, $R_{pump}$, is proportional to the ratio between the external field amplitude, $A$, and the oscillating (pump) frequency. Therefore, the critical pumping amplitude can be expressed as $$\label{eq:Acrit}
A_c(t) = \frac{C}{t-t_0} \; .$$
The borderline separating the non-turbulent and the turbulent regimes is shown in Fig.\[fig:diagram\]. For our experimental system, we have found $C \approx 1.6 {\ensuremath{\, \mathrm{ms(G/cm)}}}$, $t_0 \approx 17{\ensuremath{\, \mathrm{ms}}}$, $\xi \approx 0.06 {\ensuremath{\, \mathrm{\mu m}}}$, $l_0 \approx 1.08 {\ensuremath{\, \mathrm{\mu m}}}$, and $E_{vort} \approx 20 {\ensuremath{\, \mathrm{nK\times k_B}}}$, as the characteristic values. With these quantities, we expect $N_{vort}\approx 20$ for the QT onset, which is in good agreement with the recent work [@diagram], where the turbulence is observed when the number of vortices is close to the value $20$ found above.
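For concreteness, the following short Python sketch evaluates the quantities entering this estimate from the trap frequencies and the parameters quoted above. It is meant only as an order-of-magnitude cross-check; the $^{87}$Rb mass, scattering length and physical constants are standard values that are not stated explicitly in the text:

```python
import numpy as np

hbar, kB = 1.0546e-34, 1.381e-23   # J*s, J/K
m_Rb = 1.443e-25                   # kg, 87Rb mass (standard value, not quoted in the text)
a_s  = 5.3e-9                      # m, 87Rb s-wave scattering length (standard value)

# Trap frequencies quoted in the text.
w_rho, w_z = 2*np.pi*207.0, 2*np.pi*23.0                    # rad/s

l0 = np.sqrt(hbar / (m_Rb * (w_rho**2 * w_z)**(1.0/3.0)))   # effective oscillator length: ~1.08e-6 m, as quoted
xi = 0.06e-6                                                # m, healing length quoted in the text
n_peak = 1.0 / (8*np.pi*a_s*xi**2)                          # implied peak density, ~2e21 m^-3

E_vort  = hbar**2 / (m_Rb*l0**2) * np.log(l0/xi)  # vortex energy: of order 10 nK*kB, same order as the ~20 nK*kB quoted
N_onset = l0 / xi                                 # ~18, i.e. the ~20 vortices quoted for the QT onset

C, t0 = 1.6, 17.0                 # ms*(G/cm) and ms, fitted constants quoted in the text
t = np.linspace(20.0, 60.0, 5)    # ms
A_c = C / (t - t0)                # G/cm, critical amplitude line A_c(t) = C/(t - t0)

print(f"l0 = {l0*1e6:.2f} um, E_vort ~ {E_vort/kB*1e9:.0f} nK*kB, N_onset ~ {N_onset:.0f}")
print("A_c(t) [G/cm]:", np.round(A_c, 3))
```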
Role of the finite size of atomic systems
=========================================
The existence of the critical borderline, separating the non-turbulent and turbulent regions in the diagram of Fig.\[fig:diagram\], is closely related to the finite size of the trapped superfluid system, whose effective size is given by the characteristic anisotropic harmonic trap length $l_0$. Quantum turbulence develops when the trapped atomic cloud becomes densely filled with vortices. Then the energy pumped into the system transforms not only into the newly formed vortices, but also into their rapid motion, with the formation of their tangled distribution, accompanied by the appearance of reconnections and the formation of Kelvin waves. At this point, the absorption images become hazy, which is a manifestation of the arising turbulence [@feshbach].
The standard experiments with $^{4}$He and $^{3}$He do not fulfill the conditions, where finite-size effects would become crucially important. Hence, the superfluid clouds, formed by trapped atomic BECs, represent a whole new class of systems, whose properties essentially depend on finite-size effects, as well as on interactions.
Effects, related to finite temperature, should also be important, though, at this time, we do not have enough data for performing the corresponding analysis. The pumped energy is, certainly, partially transformed into the thermal cloud, which is necessary to take into account for a more detailed consideration [@Y09]. It can also be that the losses, caused by the thermal cloud, could be responsible for the value of the delay time $t_0$. The existence of the critical line in Fig.\[fig:diagram\] requires that $t$ be larger than $t_0$. In order to generate vortices at $t \approx t_0$, one needs a very large amplitude. But a strong or long pumping should produce a large admixture of the thermal fraction [@Y10].
Conclusions
===========
Summarizing, we have presented experimental data and offered an explanation for the transition between the non-turbulent and turbulent regimes observed in a superfluid formed by a Bose-Einstein condensate of trapped atoms. The transition region occurs in the amplitude-time parameter plane of the related excitation. The character of the observed transition region is closely connected with the finiteness of the superfluid, since quantum turbulence arises when the sample becomes densely saturated with vortex lines. A simple model allows us to qualitatively understand the main observed features of the phenomenon.
Further studies, both experimental and theoretical, are necessary for the better understanding of this phenomenon of quantum turbulence in finite-size superfluids. This concept of quantum turbulence in finite systems can also be applied to small $^{4}$He droplets, though their experimental realization can be much more complicated than the creation of atomic clouds in traps. The possible existence of finite-size effects in turbulent superfluids opens a novel direction in the investigation of finite systems, such as trapped atoms and liquid droplets.
At low temperatures, fermions, depending on the sign of their interactions, can form either superfluid molecular BEC or paired superconductor-type fluid, both of which can exhibit superfluid properties [@Ket08; @Sal09]. It is, therefore, feasible to produce quantum turbulence in such systems and to observe finite-size effects in fermionic turbulent fluids, similar to those observed in trapped bosonic superfluids.
Acknowledgements
================
We appreciate collaboration with E. Henn, J.A. Seman, G. Roati, K. Magalhães, F. Poveda-Cuevas, S. Muniz, M. Kobayashi, K. Kasamatsu, and M. Tsubota. This work was supported by FAPESP and CNPq. One of the authors (V.I.Y.) acknowledges financial support from the Russian Foundation for Basic Research.
R.P. Feynman, Prog. Low Temp. Phys. **1**, 17 (1955).
H.E. Hall and W.F. Vinen, Proc. R. Soc. Lond. A **238**, 204 (1956).
W.F. Vinen, Proc. R. Soc. Lond. A **240**, 114 (1957).
R.J. Donelly, Quantized Vortices in He-II (Cambridge Univ. Press, Cambridge, 1991).
E.A.L. Henn, J.A. Seman, G. Roati, K.M.F. Magalhães, and V.S. Bagnato, Phys. Rev. Lett. **103**, 045301 (2009).
M.R. Matthews, B.P. Anderson, P.C. Haljan, D.S. Hall, C.E. Wieman, and E.A. Cornell, Phys. Rev. Lett. **83**, 2498 (1999).
K.W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, Phys. Rev. Lett. **84**, 806 (2000).
S. Inouye, S. Gupta, T. Rosenband, A.P. Chikkatur, A. Görlitz, T.L. Gustavson, A.E. Leanhardt, D.E. Pritchard, and W. Ketterle, Phys. Rev. Lett. **87**, 080402 (2001).
J.R. Abo-Shaeer, C. Raman, J.M. Vogels, and W. Ketterle, Science **292**, 476 (2001).
M. Kobayashi and M. Tsubota, Phys. Rev. A **76**, 045603 (2007).
E.A.L. Henn, J.A. Seman, E.R.F. Ramos, M. Caracanhas, P. Castilho, E.P. Olímpio, G. Roati, D.V. Magalhães, K.M.F. Magalhães, and V.S. Bagnato, Phys. Rev. A **79**, 043618 (2009).
J.A. Seman, E.A.L. Henn, M. Haque, R.F. Shiozaki, E.R.F. Ramos, M. Caracanhas, P. Castilho, C. Castelo Branco, P.E.S. Tavares, F.J. Poveda-Cuevas, G. Roati, K.M.F. Magalhães, and V.S. Bagnato, Phys. Rev. A **82**, 033616 (2010).
V.I. Yukalov, E.P. Yukalova, and V.S. Bagnato, Phys. Rev. A **56**, 4845 (1997).
V.I. Yukalov, E.P. Yukalova, and V.S. Bagnato, Phys. Rev. A **66**, 043602 (2002).
V.I. Yukalov and V.S. Bagnato, Laser Phys. Lett. **6**, 399 (2009).
E.J. Yarmchuk, M.J.V. Gordon, and R.E. Packard, Phys. Rev. Lett. **43**, 214 (1979).
J.A. Seman, E.A.L. Henn, R.F. Shiozaki, G. Roati, F.J. Poveda-Cuevas, K.M.F. Magalhães, V.I. Yukalov, M. Tsubota, M. Kobayashi, K. Kasamatsu, and V.S. Bagnato, arXiv:1007.4953 (2010).
V.I. Yukalov, Laser Phys. Lett. **7**, 467 (2010).
K. Kasamatsu, M. Kobayashi, M. Tsubota, and V.S. Bagnato, International Symposium on Quantum Fluids and Solids, **PS2-22** (2010).
C.J. Pethick and H. Smith, Bose-Einstein Condensation in Dilute Gases (Cambridge Univ. Press, Cambridge, 2001).
J.A. Seman, R.F. Shiozaki, F.J. Poveda-Cuevas, E.A.L. Henn, K.M.F. Magalhães, G. Roati, G.D. Telles, and V.S. Bagnato, J. Phys.: Conf. Ser. **264**, 012004 (2011).
C. Chin, R. Grimm, P. Julienne, and E. Tiesinga , Rev. Mod. Phys. **82**, 1225 (2010).
V.I. Yukalov, Laser Phys. **19**, 1 (2009).
W. Ketterle and M.W. Zwierlein, Riv. Nuovo Cimento **31**, 247 (2008).
L. Salasnich, Laser Phys. **19**, 642 (2009).
---
abstract: 'We prove the Prym–Green conjecture on minimal free resolutions of paracanonical curves of odd genus. The proof proceeds via curves lying on ruled surfaces over an elliptic curve.'
address:
- 'Humboldt-Universität zu Berlin, Institut für Mathematik, Unter den Linden 6, 10099 Berlin, Germany'
- 'Stanford University, Department of Mathematics, 450 Serra Mall, CA 94305, USA'
author:
- Gavril Farkas
- Michael Kemeny
title: '[The resolution of paracanonical curves of odd genus]{}'
---
Introduction
============
The study of torsion points on Jacobians of algebraic curves has a long history in algebraic geometry and number theory. On the one hand, torsion points of Jacobians have been used to rigidify moduli problems for curves, on the other hand, such a torsion point determines an unramified cyclic cover over the curve in question, which gives rise to a (generalized) Prym variety, see [@BL] Chapter 12 for an introduction to this circle of ideas.
Pairs $[C,\tau]$, where $C$ is a smooth curve of genus $g\geq 2$ and $\tau \in \text{Pic}^0(C)$ is a non-trivial torsion line bundle of order $\ell\geq 2$ form an irreducible moduli space $\cR_{g,\ell}$. One may view this moduli space as a higher genus analogue of the level $\ell$ modular curve $X_1(\ell)$. There is a finite cover $$\cR_{g,\ell}\rightarrow \cM_g$$ given by forgetting the $\ell$-torsion point. Following ideas going back to Mumford, Tyurin and many others, linearizing the Abel–Prym embedding of the curve in its Prym variety leads to the study of the properties of $[C, \tau]$ in terms of the projective geometry of the level $\ell$ *paracanonical* curve $$\varphi_{K_C\otimes \tau}:C\hookrightarrow \PP^{g-2}$$ induced by the line bundle $K_C\otimes \tau$. In practice, this amounts to a qualitative study of the equations and the syzygies of the paracanonical curve in question. For instance, in the case $\ell=2$, there is a close relationship between the study of these syzygies and the Prym map $$\cR_{g,2} \to \mathcal{A}_{g-1}$$ to the moduli space of principally polarized abelian varieties of dimension $g-1$, which has been exploited fruitfully for some time, see for instance [@beauville]. For higher level, the study of these syzygies has significant applications to the study of the birational geometry of $\cR_{g,\ell}$, see [@CEFS].
Denoting by $\Gamma_C(K_C\otimes \tau):=\bigoplus_{q\geq 0} H^0\Bigl(C, (K_C\otimes \tau)^{\otimes q}\Bigr)$ the homogeneous coordinate ring of the paracanonical curve, for integers $p,q\geq 0$, let $$K_{p,q}(C,K_C\otimes \tau):=\mbox{Tor}^p\Bigl(\Gamma_C(K_C\otimes \tau),\mathbb C\Bigr)_{p+q}$$ be the Koszul cohomology group of $p$-th syzygies of weight $q$ of the paracanonical curve and one denotes by $b_{p,q}:=\mbox{dim } K_{p,q}(C,K_C\otimes \tau)$ the corresponding Betti number.
The *Prym-Green Conjecture* formulated in [@CEFS] predicts that the minimal free resolution of the paracanonical curve corresponding to a general level $\ell$ curve $[C,\tau]\in \cR_{g,\ell}$ of genus $g\geq 5$ is *natural*, that is, in each diagonal of its Betti table, at most one entry is non-zero. The naturality of the resolution amounts to the vanishing statements $b_{p,2}\cdot b_{p+1,1}=0$, for all $p$. As explained in [@CEFS], for odd genus $g=2n+1$ this is equivalent to the vanishing statements $$\label{pgeq}
K_{n-1,1}(C, K_C\otimes \tau)=0 \; \; \text{and} \; \ \ K_{n-3,2}(C, K_C\otimes \tau)=0.$$ Since the differences $b_{p,2}-b_{p+1,1}$ are known, naturality entirely determines the resolution of the general level $\ell$ paracanonical curves and shows that its Betti numbers are as small as the geometry (that is, the Hilbert function) allows. We refer to [@FaLu] and [@CEFS] for background on this conjecture and its important implications on the global geometry of $\cR_{g,\ell}$.
In particular, a positive solution to the Prym-Green Conjecture for bounded genus $g<23$ has been shown to be instrumental in determining the Kodaira dimension of $\cR_{g,\ell}$ for small values of $\ell$. The Prym–Green Conjecture is obviously inspired by the classical Green’s Conjecture for syzygies of canonical curves stating that the minimal resolution of a general canonical curve $C\subseteq \PP^{g-1}$ is natural. The main result of this paper is a complete solution to this conjecture in odd genus:
\[pgmain\] The Prym-Green Conjecture holds for any odd genus $g$ and any level $\ell$.
Theorem \[pgmain\] implies that the general level $\ell$ paracanonical curve of genus $g=2n+1\geq 5$ has the following minimal resolution:
$1$ $2$ $\ldots$ $n-3$ $n-2$ $n-1$ $n$ $\ldots$ $2n-2$
----------- ----------- ---------- ------------- ------------- ------------- ----------- ---------- --------------
$b_{1,1}$ $b_{2,1}$ $\ldots$ $b_{n-3,1}$ $b_{n-2,1}$ 0 0 $\ldots$ 0
$0$ $0$ $\ldots$ $0$ $b_{n-2,2}$ $b_{n-1,2}$ $b_{n,2}$ $\ldots$ $b_{2n-2,2}$
where, $$b_{p,1}=\frac{p(2n-2p-3)}{2n-1}{2n\choose p+1} \ \ \mbox{ if } p\leq n-2,\ \mbox{ } \ \ b_{p,2}=\frac{(p+1)(2p-2n+5)}{2n-1}{2n\choose p+2} \ \ \ \mbox{ if } p\geq n-2.$$
In odd genus, the conjecture has been established before for level $2$ in [@generic-secant] (using Nikulin surfaces) and for high level $\ell\geq \sqrt{\frac{g+2}{2}}$ in [@high-level] (using Barth–Verra surfaces). Theorem \[pgmain\] therefore removes any restriction on the level $\ell$. Apart from that, we feel that the rational elliptic surfaces used in this paper are substantially simpler objects than the $K3$ surfaces used in [@generic-secant] and [@high-level] and should have further applications to syzygy problems. The Prym–Green Conjecture in even genus, amounting to the single vanishing statement $$\label{pgeven}
K_{\frac{g}{2}-2,1}(C,K_C\otimes \tau)=0,$$ (or equivalently, $K_{\frac{g}{2}-3,2}(C,K_C\otimes \tau)=0$) is still mysterious. It is expected to hold for any genus and level $\ell>2$. For level $2$, it has been shown to fail in genus $8$ in [@CFVV]; a *Macaulay* calculation carried out in [@CEFS] indicates that the conjecture very likely fails in genus $16$ as well. This strongly suggests that for level $2$ the Prym–Green Conjecture fails for general *Prym canonical* curves of genera having high divisibility properties by $2$ and in these cases there should be genuinely new methods of constructing syzygies. At the moment the vanishing (\[pgeven\]) is not even known to hold for arbitrary even genus $g$ in the case when $\tau$ is a general line bundle in $\mbox{Pic}^0(C)$.
By semicontinuity and the irreducibility of $\cR_{g,\ell}$, it is enough to establish the vanishing (\[pgeq\]) for one particular example of a paracanonical curve of odd genus. In our previous partial results on the Prym–Green Conjecture, we constructed suitable examples $[C,\tau]$ in terms of curves lying on various kinds of lattice polarized $K3$ surfaces, namely the Nikulin and Barth–Verra surfaces. In each case, the challenge lies in realizing the $\ell$-torsion bundle $\tau$ as the restriction of a line bundle on the surface, so that the geometry of the surface can be used to prove the vanishing of the corresponding Koszul cohomology groups, while making sure that the curve $C$ in question remains general, for instance, from the point of view of Brill-Noether theory. In contrast, in this paper we use the elliptic ruled surfaces recently introduced in [@farkas-tarasca] (closely related to the very interesting earlier work of Treibich [@treibich]), in order to provide explicit examples of pointed Brill-Noether general curves defined over $\mathbb Q$. These surfaces also arise when one degenerates a projectively embedded $K3$ surface to a surface with isolated, elliptic singularities. They have been studied in detail by Arbarello, Bruno and Sernesi in their important work [@ABS] on the classification of curves lying on $K3$ surfaces in terms of their Wahl map.
Whereas our previous results required a different $K3$ surface for each torsion order $\ell$ for which the construction worked, in the current paper we deal with *all* orders $\ell$ using a single surface. This is possible because on the elliptic ruled surface in question, a general genus $g$ curve admits a canonical degeneration within its linear system to a singular curve consisting of a curve of genus $g-1$ and an elliptic tail. This leads to an inductive structure involving curves of every genus and makes possible inductive arguments, while working on the same surface all along.
We introduce the elliptic ruled surface central to this paper. For an elliptic curve $E$, we set $$\phi:X:=\PP(\mathcal{O}_E\oplus \eta)\rightarrow E,$$ where $\eta \in \text{Pic}^0(E)$ is neither trivial nor torsion. We fix an origin $a \in E$ and let $b\in E$ be such that $\eta=\OO_E(a-b)$. Furthermore, choose a point $r\in E\setminus \{b\}$ such that $\zeta:=\OO_E(b-r)$ is torsion of order precisely $\ell$. The scroll $X \to E$ has two sections $J_0$ respectively $J_1$, corresponding to the quotients $\mathcal{O}_E\oplus \eta \twoheadrightarrow \eta$ and $\mathcal{O}_E\oplus \eta \twoheadrightarrow \mathcal{O}_E$ respectively. We have $$J_1 \cong J_0-\phi^*\eta, \ \; \; \; N_{J_0/X} \cong \OO_{J_0}(\phi^*\eta), \ \; \; \; N_{J_1/X} \cong \OO_{J_1}(\phi^*\eta^{\vee}),$$ where we freely mix notation for divisors and line bundles. For any point $x\in E$ we denote by $f_x$ the fibre $\phi^{-1}(x)$. We let $$C \in |gJ_0+f_r|$$ be a general element; this is a smooth curve of genus $g$. We further set $$L:=\OO_X\bigl((g-2)J_0+f_a\bigr).$$ Using that $K_X=-J_0-J_1$, the adjunction formula shows that the restriction $L_{C}$ is a level $\ell$ paracanonical bundle on $C$, that is, $[C, \tau]\in \cR_{g,\ell}$, where $\tau:=\phi_C^*(\zeta) \cong L_C\otimes K_C^{\vee}$, with $\phi_C:C\rightarrow E$ being the restriction of $\phi$ to the curve $C$. In this paper we verify the Prym–Green Conjecture for this particular paracanonical curve of genus $g=2n+1$.
Denoting by $\widetilde{X}$ the blow-up of $X$ at the two base points of $|L|$ and by $\widetilde{L}\in \mbox{Pic}(\widetilde{X})$ the proper transform of $L$, one begins by showing that the first vanishing $K_{n-1,1}(C,K_C\otimes \tau)=0$ required in the Prym–Green Conjecture is a consequence of the vanishing of $K_{n-1,1}(\widetilde{X},\widetilde{L})$ and that of the mixed Koszul cohomology group $K_{n-2,2}(\widetilde{X}, -C,\widetilde{L})$ respectively (see Section \[defin\] for details). By the Lefschetz hyperplane principle in Koszul cohomology, the vanishing of $K_{n-1,1}(\widetilde{X}, \widetilde{L})$ is a consequence of Green’s Conjecture for a general curve $D$ in the linear system $|L|$ on $X$. Since $D$ has been proven in [@farkas-tarasca] to be Brill-Noether general, Green’s Conjecture holds for $D$. We then show (see (\[vanishing-kos\])) that a sufficient condition for the second vanishing appearing in (\[pgeq\]) is that $$K_{n-2,2}\bigl(D,\OO_D(-C),K_D\bigr)=0 \ \ \mbox{ and } \ \ K_{n-1,2}\bigl(D, \OO_D(-C),K_D\bigr)=0.$$ Via results from [@farkas-mustata-popa] coupled with the usual description of Koszul cohomology in terms of kernel bundles, we prove that these vanishings are both consequences of the following transversality statement between difference varieties in the Jacobian $\mbox{Pic}^2(D)$ $$\label{condd}
\OO_D(C)-K_D-D_2\nsubseteq D_n-D_{n-2},$$ where, as usual, $D_m$ denotes the $m$-th symmetric product of $D$ (see Lemma \[suffcond1\]). This last statement is proved inductively, using the canonical degeneration of $D$ inside its linear system to a curve of lower genus with elliptic tails. It is precisely this feature of the elliptic surface $X$, of containing Brill-Noether general curves of *every* genus (something which is not shared by a $K3$ surface), which makes the proof possible. To sum up this part of the proof, we point out that by using the geometry of $X$, we reduce the first half of the Prym–Green Conjecture, that is, the statement $K_{n-1,1}(C,K_C\otimes \tau)=0$ on the curve $C$ of genus $g$, to the geometric condition (\[condd\]) on the curve $D$ of genus $g-2$.
The second vanishing required by the Prym–Green Conjecture, that is, $K_{n-3,2}(C,K_C\otimes \tau)=0$ falls in the range covered by the Green-Lazarsfeld *Secant Conjecture* [@GL]. This feature appears only in odd genus, for even genus the Prym–Green Conjecture is *beyond* the range in which the Secant Conjecture applies (see Section \[secant1\] for details). For a curve $C$ of genus $g=2n+1$ and maximal Clifford index $\mbox{Cliff}(C)=n$, the Secant Conjecture predicts that for a non-special line bundle $L\in \mbox{Pic}^{2g-2}(C)$, one has the following equivalence $$K_{n-3,2}(C,L)=0\Longleftrightarrow L-K_C\notin C_{n-1}-C_{n-1}.$$ Despite significant progress, the Secant Conjecture is not known for arbitrary $L$, but in [@generic-secant] Theorem 1.7, we provided a sufficient condition for the vanishing to hold. Precisely, whenever $$\label{difftransl}
\tau+C_2\nsubseteq C_{n+1}-C_{n-1},$$ we have $K_{n-3,2}(C,K_C\otimes \tau)=0$. Thus the second half of the Prym–Green Conjecture has been reduced to a transversality statement of difference varieties very similar to (\[condd\]), but this time on the same curve $C$. Using the already mentioned elliptic tail degeneration inside the linear system $|C|$ on $X$, we establish inductively in Section \[secant1\] that (\[difftransl\]) holds for a general curve $C\subseteq X$ in its linear system. This completes the proof of the Prym–Green Conjecture.
[**Acknowledgments:**]{} The first author is supported by DFG Priority Program 1489 *Algorithmische Methoden in Algebra, Geometrie und Zahlentheorie*. The second author is supported by NSF grant DMS-1701245 *Syzygies, Moduli Spaces, and Brill-Noether Theory*.
Elliptic surfaces and paracanonical curves {#defin}
==========================================
We fix a level $\ell\geq 2$ and recall that pairs $[C,\tau]$, where $C$ is a smooth curve of genus $g$ and $\tau\in \mbox{Pic}^0(C)$ is an $\ell$-torsion point, form an irreducible moduli space $\cR_{g,\ell}$. We refer to [@CEFS] for a detailed description of the Deligne-Mumford compactification $\rr_{g,\ell}$ of $\cR_{g,\ell}$.
Normally we prefer multiplicative notation for line bundles, but occasionally, in order to simplify calculations, we switch to additive notation and identify divisors and line bundles. If $V$ is a vector space and $S:=\mbox{Sym } V$, for a graded $S$-module $M$ of finite type, we denote by $K_{p,q}(M,V)$ the Koszul cohomology group of $p$-th syzygies of weight $q$ of $M$. If $X$ is a projective variety, $L$ is a line bundle and $\mathcal{F}$ is a sheaf on $X$, we set as usual $K_{p,q}(X,\mathcal{F},L):=K_{p,q}\bigl(\Gamma_X(\mathcal{F},L), H^0(X,L)\bigr)$, where $\Gamma_X(\mathcal{F},L):=\bigoplus_{q\in \mathbb Z} H^0\bigl(X,\mathcal{F}\otimes L^{\otimes q}\bigr)$ is viewed as a graded $\mbox{Sym } H^0(X,L)$-module. For background questions on Koszul cohomology, we refer to the book [@aprodu-nagel].
Assume now that $g:=2n+1$ is odd and let us consider the decomposable elliptic ruled surface $\phi:X\rightarrow E$ defined in the Introduction. Retaining all the notation, our first aim is to establish the vanishing of the linear syzygy group $ K_{n-1,1}(C, K_C\otimes \tau)$. Before proceeding, we confirm that $\tau:=\phi_C^*(\zeta)$ is non-trivial of order precisely $\ell$, so that $[C,\tau]$ is indeed a point of $\cR_{g,\ell}$.
For any $1 \leq m \leq \ell-1$, the line bundle $\tau^{\otimes m} \in \mathrm{Pic}^0(C)$ is not effective.
Since the order of $\zeta$ is precisely $\ell$, we have $H^0(X,\phi^*(\zeta^{\otimes m}))\cong H^0(E,\zeta^{\otimes m})=0$ for $1 \leq m \leq \ell-1$. So it suffices to show $H^1\bigl(X,\phi^*(\zeta^{\otimes m})(-C)\bigr)=0$. By Serre duality, this is equivalent to $H^1\bigl(X,\phi^*(r+\eta-m\zeta)((g-2)J_0)\bigr)=0$. Applying the Leray spectral sequence this amounts to $$H^1\Bigl(E,\OO_E\bigl(a+(m+1)r-(m+1)b\bigr)\otimes \text{Sym}^{g-2}(\mathcal{O}_E\oplus\eta)\Bigr)=0,$$ which is clear for degree reasons.
Any linear system which is a sum of a positive multiple of $J_0$ and a fibre of $\phi$ has two base points, see [@farkas-tarasca], Lemma 2. In particular, the linear system $|L|$ on $X$ has two base points $p \in J_1$ and $q^{(g-2)} \in J_0$. Here $$\{p\}:=f_a\cdot J_1 \ \mbox{ and } \ \{q^{(g-2)}\}:=f_{s^{(g-2)}}\cdot J_0,$$ where the point $s^{(g-2)}\in E$ is determined by the condition $\eta^{\otimes (g-2)}\cong \OO_E\bigl(s^{(g-2)}-a\bigr)$.
Let $\pi:\widetilde{X}\to X$ be the blow-up of $X$ at these two base points, with exceptional divisors $E_1$ respectively $E_2$ over $p$ respectively $q^{(g-2)}$. We denote by $\widetilde{L}:=\pi^*L-E_1-E_2$ the proper transform of $L$. Note that $K_{\widetilde{X}}=-\widetilde{J}_0-\widetilde{J}_1$, where $\widetilde{J}_0=J_0-E_2$ and $\widetilde{J}_1=J_1-E_1$ are the proper transforms of $J_0$ and $J_1$. We now observe that the base points of the two linear systems $|L|$ and $|C|$ on $X$ are disjoint.
Let $x_0 \in J_0$ and $x_1 \in J_1$ be the two base points of $|C|$. Then $x_0,x_1 \notin \{ p,q^{(g-2)}\}$.
First, since $r\neq a$, we obtain that $J_1 \cap f_a \neq J_1 \cap f_r$, therefore $p \neq x_1$. Next, recall that $\{q^{(g-2)}\}=J_0 \cap f_{s^{(g-2)}}$, where $\OO_E(s^{(g-2)}-a)=\eta^{\otimes (g-2)}$ and $\{x_0\}= J_0 \cap f_{t^{(g)}}$, where the point $t^{(g)}\in E$ is determined by the equation $\OO_E(t^{(g)}-r)=\eta^{\otimes g}$. We need to show $\eta^{\otimes (g-2)}(a) \neq \eta^{\otimes g}(r)$. Else, since $\OO_E(a-r)=\eta\otimes \zeta$, it would imply $\zeta=\eta$, which is impossible, for $\zeta$ is a torsion class, whereas $\eta$ is not.
Since the curve $C$ does not pass through the points $p$ and $q^{(g-2)}$ which are blown-up, we shall abuse notation by writing $C$ for $\pi^*(C)$. We set $S:=\text{Sym}\ H^0(\widetilde{X},\widetilde{L})$ and consider the short exact sequence of graded $S$-modules $$0 \longrightarrow \bigoplus_{q \in \mathbb{Z}} H^0(\widetilde{X},q\widetilde{L}-C) \longrightarrow \bigoplus_{q \in \mathbb{Z}} H^0(\widetilde{X},q\widetilde{L}) \longrightarrow M \longrightarrow 0,$$ where the first map is defined by multiplication with the section defining $C$ and the module $M$ is defined by this exact sequence. By the corresponding long exact sequence in Koszul cohomology, see [@green-koszul] Corollary 1.d.4, that is, $$\cdots \longrightarrow K_{p,1}(\widetilde{X}, \widetilde{L})\longrightarrow K_{p,1}\bigl(M,H^0(\widetilde{X}, \widetilde{L})\bigr)\longrightarrow K_{p-1,2}(\widetilde{X},-C,\widetilde{L})\longrightarrow \cdots,$$ the vanishing of the Koszul cohomology group $K_{p,1}\bigl(M,H^0(\widetilde{X}, \widetilde{L})\bigr)$ follows from $K_{p,1}(\widetilde{X},\widetilde{L})=0$ and $K_{p-1,2}(\widetilde{X},-C,\widetilde{L})=0$. The reason we are interested in the Koszul cohomology of $M$ becomes apparent in the following lemma:
We have the equality $K_{p,1}\bigl(M,H^0(\widetilde{X}, \widetilde{L})\bigr)\cong K_{p,1}(C,K_C\otimes \tau)$, for every $p \geq 0$.
The restriction map induces an isomorphism $H^0(\widetilde{X},\widetilde{L}) \cong H^0(C,K_C\otimes \tau)$. First of all, note that the restriction map is injective, since $\widetilde{L}-C=\pi^*(-2J_0+f_a-f_r)-E_1-E_2$ is not effective (as it has negative intersection with the nef class $\pi^*(f_r)$). Next, $h^0(\widetilde{X},\widetilde{L})=h^0(X,L)=g-1$ by a direct computation using the projection formula, see also [@farkas-tarasca], Lemma 2. As $h^0(C,K_C\otimes \tau)=g-1$, the restriction to $C\subseteq \widetilde{X}$ induces the claimed isomorphism.
Let $M_q$ denote the $q$-th graded piece of $M$. We have an isomorphism $M_0\cong H^0(\widetilde{X},\mathcal{O}_{\widetilde{X}})$ and we have already seen that $H^0(\widetilde{X},\widetilde{L}-C)=0$, so $M_1\cong H^0(\widetilde{X},\widetilde{L})\cong H^0(C,K_C\otimes \tau)$. So we have the following commutative diagram $$\small{\xymatrix{
\bigwedge^{p+1} H^0(\widetilde{L}) \otimes M_0 \ar[r]^{} \ar[d] &\bigwedge^{p} H^0(\widetilde{L}) \otimes M_1
\ar[r]^{\delta_1 \; \; \;} \ar[d] &\bigwedge^{p-1} H^0(\widetilde{L}) \otimes M_2 \ar[d] \\
\bigwedge^{p+1} H^0(K_C+ \tau) \ar[r]^{} &\bigwedge^{p} H^0(K_C+\tau) \otimes H^0(K_C+ \tau)\ar[r]^{\delta'_1 \; \; \;} & \bigwedge^{p-1} H^0(K_C+ \tau) \otimes H^0(2K_C+ 2\tau)}
}$$ where the two leftmost vertical maps are isomorphisms and the rightmost vertical map is injective. Thus the middle cohomology of each row is isomorphic, so that we have the equality $K_{p,1}(M,H^0(\widetilde{X}, \widetilde{L}))\cong K_{p,1}(C,K_C\otimes \tau)$, for any $p \geq 0$.
The vanishing of the Koszul cohomology group $K_{n-1,1}(C,K_C\otimes \tau)$.
----------------------------------------------------------------------------
We can summarize the discussion so far. In order to establish the first vanishing required by the Prym–Green Conjecture for the pair $[C, \tau]$, that is, $K_{n-1,1}(C, K_C\otimes \tau)=0$, it suffices to prove that $$\begin{aligned}
K_{n-1,1}(\widetilde{X},\widetilde{L})&=0, \: \; \text{and} \\
K_{n-2,2}(\widetilde{X},-C,\widetilde{L})&=0.\end{aligned}$$ The first vanishing is a consequence of Green’s Conjecture on syzygies of canonical curves.
We have $K_{n-1,1}(\widetilde{X},\widetilde{L})=0$.
Let $D \in |\widetilde{L}|$ be a general element, thus $D$ is a smooth curve of genus $2n-1$. We have an isomorphism $K_{n-1,1}(\widetilde{X},\widetilde{L})\cong K_{n-1,1}(D,K_D)$, as $K_{\widetilde{X}|_{D}} \cong \mathcal{O}_D$ and by applying [@aprodu-nagel], Theorem 2.20 (note that one only needs that the restriction $H^0(\widetilde{X},\widetilde{L})\to H^0(D,K_D)$ is surjective, and not $H^1(\widetilde{X}, \mathcal{O}_{\widetilde{X}})=0$, for this result). As $D$ is a smooth curve of genus $2n-1$, the vanishing in question is a consequence of Green’s Conjecture, which is known to hold for curves of maximal gonality, see [@V2], [@hirsch]. Hence it suffices to show that $D$ has maximum gonality $n+1$. But $D$ is the strict transform of a smooth curve in $|L|$ and is a Brill–Noether general curve by [@farkas-tarasca] Remark 2, in particular it has maximal gonality.
We now turn our attention to the vanishing of the second Koszul group $K_{n-2,2}(\widetilde{X},-C, \widetilde{L})$. The following argument is inspired by [@green-koszul], Theorem 3.b.7.
Let $D \in |\widetilde{L}|$ be general and let $p \geq 0$. Assume $K_{m,2}(D,\mathcal{O}_{D}(-C),K_{D})=0$ for $m \in \{p,p+1\}$. Then $$K_{p,2}(\widetilde{X},-C,\widetilde{L}) =0.$$
Set as before $S:=\text{Sym}\ H^0(\widetilde{X},\widetilde{L})$ and consider the exact sequence of graded $S$-modules $$0 \longrightarrow \bigoplus_{q \in \mathbb{Z}} H^0(\widetilde{X}, (q-1)\widetilde{L}-C) \longrightarrow \bigoplus_{q \in \mathbb{Z}} H^0(\widetilde{X}, q\widetilde{L}-C)\longrightarrow B \longrightarrow 0,$$ serving as a definition for $B$, and where the first map is given by multiplication by a general section $s \in H^0(\widetilde{X},\widetilde{L})$. We now argue along the lines of [@generic-secant] Lemma 2.2. Taking the long exact sequence in Koszul cohomology and using that multiplication by a section $s \in H^0(\widetilde{X},\widetilde{L})$ induces the zero map on Koszul cohomology, we get $$K_{p,q}\bigl(B,H^0(\widetilde{X},\widetilde{L})\bigr) \cong K_{p,q}(\widetilde{X},-C,\widetilde{L}) \oplus K_{p-1,q}(\widetilde{X},-C,\widetilde{L}),$$ for all $p,q \in \mathbb{Z}$.
Let $D=Z(s)$ be the divisor defined by $s \in H^0(\widetilde{X},\widetilde{L})$, and consider the graded $S$-module $$N:= \bigoplus_{q \in \mathbb{Z}} H^0(D, qK_D-C_D).$$ We have the inclusion $B \subseteq N$ of graded $S$ modules. We claim $B_1=N_1=0$. By intersecting with the nef class $f_r$, we see $H^0(\widetilde{X}, \widetilde{L}-C)=0$, implying $B_1=0$. As $\deg(K_D-C_D)=-4$, we have $N_1=0$. Upon taking Koszul cohomology, this immediately gives the inclusion
$$K_{p,2}\bigl(B,H^0(\widetilde{X},\widetilde{L})\bigr) \subseteq K_{p,2}\bigl(N,H^0(\widetilde{X},\widetilde{L})\bigr).$$ In particular, $K_{p,2}(\widetilde{X},-C,\widetilde{L}) \subseteq K_{p+1,2}\bigl(B,H^0(\widetilde{X},\widetilde{L})\bigr)\subseteq K_{p+1,2}\bigl(N,H^0(\widetilde{X},\widetilde{L})\bigr)$.
To finish the proof, it will suffice to show $$\label{splitting}
K_{p,2}\bigl(N,H^0(\widetilde{X},\widetilde{L})\bigr) \cong K_{p,2}\bigl(D,\mathcal{O}_{D}(-C),K_{D}\bigr) \oplus K_{p-1,2}\bigl(D,\mathcal{O}_{D}(-C),K_{D}\bigr).$$ Since $\widetilde{L}\cdot \widetilde{J}_0=0$ and $\widetilde{L}\cdot \widetilde{J}_1=0$, it follows that $\OO_D(K_{\widetilde{X}})\cong \OO_D$. We now closely follow the proof of Lemma 2.2 in [@generic-secant]. The section $s$ induces a splitting $H^0(\widetilde{X}, \widetilde{L})\cong \mathbb C\{s\}\oplus H^0(D,K_D)$, giving rise for every $p$ to isomorphisms $$\bigwedge^p H^0(\widetilde{X},\widetilde{L})\cong \bigwedge^{p-1} H^0(D,K_D)\oplus \bigwedge^p H^0(D,K_D).$$ The desired isomorphism (\[splitting\]) follows from a calculation which is identical to the one carried out in the second part of the proof of [@generic-secant] Lemma 2.2. There one works with a $K3$ surface, but the only thing needed for the argument to work is that $\OO_D(K_{\widetilde{X}})\cong \OO_D$.
To establish that $K_{n-1,1}(C,K_C\otimes \tau)=0$, it thus suffices to show $$\begin{aligned}
\label{vanishing-kos}
K_{n-2,2}\bigl(D,\mathcal{O}_{D}(-C), K_{D}\bigr)=0 \; \; \text{and \;} K_{n-1,2}\bigl(D,\mathcal{O}_{D}(-C), K_{D}\bigr)=0.\end{aligned}$$ Via a well-known description of Koszul cohomology using kernel bundles, cf. [@aprodu-nagel] Proposition 2.5, taking into account that $H^0(D,K_D-C_D)=0$, these two statements are equivalent to $$\begin{aligned}
\label{vanishing-kos2}
H^0\Bigl(D,\bigwedge^{n-2}M_{K_D}\otimes(2K_D-C_D)\Bigr)=0 \; \; \text{and \;} H^0\Bigl(D,\bigwedge^{n-1}M_{K_D}\otimes (2K_D-C_D)\Bigr)=0,
\end{aligned}$$ where we recall that $M_{K_D}$ is the *kernel bundle*, defined by the short exact sequence $$0 \longrightarrow M_{K_D} \longrightarrow H^0(D,K_D) \otimes \mathcal{O}_D \longrightarrow K_D \longrightarrow 0.$$ Both statements (\[vanishing-kos2\]) will be reduced to general position statements with respect to divisorial difference varieties of the various curves on $X$.
Containment between difference varieties on curves.
---------------------------------------------------
If $C$ is a smooth curve of genus $g$, we denote by $C_a-C_b\subseteq \mbox{Pic}^{a-b}(C)$ the image of the difference map $v:C_a\times C_b\rightarrow \mbox{Pic}^{a-b}(C)$. We occasionally make use of the realization given in [@farkas-mustata-popa] of the *divisorial* difference varieties as non-abelian theta divisors associated to exterior powers of the kernel bundle of $K_C$. Precisely, for $i=0, \ldots, \lfloor \frac{g-1}{2}\rfloor$, one has the following equality of divisors on $\mbox{Pic}^{g-2i-1}(C)$: $$\label{fmp2}
C_{g-i-1}-C_i=\Bigl\{\xi\in \mbox{Pic}^{g-2i-1}(C): H^0\Bigl(C,\bigwedge^i M_{K_C}\otimes K_C\otimes \xi^{\vee}\Bigr)\neq 0\Bigr\}.$$
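Note that, for every $\xi\in \mbox{Pic}^{g-2i-1}(C)$, the vector bundle appearing on the right-hand side has Euler characteristic zero, as it should be for a non-abelian theta divisor: since $\mu(M_{K_C})=-\frac{2g-2}{g-1}=-2$, one gets $$\mu\Bigl(\bigwedge^i M_{K_C}\otimes K_C\otimes \xi^{\vee}\Bigr)=-2i+(2g-2)-(g-2i-1)=g-1.$$ Analogous slope computations will be used repeatedly below.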
We now make an observation concerning a containment relation between difference varieties.
\[routine-difference\] Let $C$ be a smooth curve, $a\geq 2$, $b \geq 0$, $c >0$ be integers and $A\in \mathrm{Pic}^{a+b-c}(C)$. Assume $A-C_a \subseteq C_b-C_c$. Then $A-C_{a-2} \subseteq C_{b+1}-C_{c-1}$.
Let $B$ be an arbitrary effective divisor of degree $a-2$, and let $y_0 \in C$ be a fixed point. Since $A-C_a \subseteq C_b-C_c$, we have a well-defined morphism $$\begin{aligned}
f: C & \to C_b-C_c \subseteq \text{Pic}^{b-c}(C) \\
x &\mapsto A-(B+x+y_0).\end{aligned}$$ We further have the difference map $v:C_b \times C_c\rightarrow \mbox{Pic}^{b-c}(C)$ given by $v(F_1,F_2):=\OO_C(F_1-F_2)$, where $F_1$ and $F_2$ are effective divisors of degrees $b$ and $c$ respectively, as well as the projection $p_2: C_b \times C_c \to C_c$.
Suppose first that $\dim p_2\bigl(v^{-1}(\mbox{Im}(f))\bigr) \geq 1$. As the divisor $y_0+C_{c-1} \subseteq C_c$ is ample, see [@fulton-laz-connectedness] Lemma 2.7, $p_2\bigl(v^{-1}(\mbox{Im}(f))\bigr)$ must meet $y_0+C_{c-1}$. This means that there exists a point $x \in C$ such that $A-(B+x+y_0)\equiv F_1-F_2$, with $F_1 \in C_b$ and $F_2 \in C_c$ being effective divisors such that $F_2=y_0+F_2'$, where $F_2'\in C_{c-1}$ is effective. But then $$A-B=\OO_C(F_1+x-F_2') \in C_{b+1}-C_{c-1}.$$
Assume now $p_2\bigl(v^{-1}(\mbox{Im}(f))\bigr) \subseteq C_c$ is finite. Then one can find a divisor $F_2 \in C_c$, such that for every $x \in C$, there is a divisor $F_x\in C_b$ with $A-B-x-y_0=F_x-F_2$. Picking $x\in \mbox{supp}(F_2)$, we write $F_2=x+F_2'$, where $F_2'\in C_{c-1}$. Then $A-B=\OO_C(F_x+y_0-F_2') \in C_{b+1}-C_{c-1}.$
We may now restate the vanishing conditions (\[vanishing-kos\]) in terms of difference varieties. From now on we revert to the elliptic surface $\phi:X\rightarrow E$ and recall that $C\in |gJ_0+f_r|$.
\[suffcond1\] Set $g=2n+1$ with $n\geq 2$ and choose a general curve $D \in |(g-2)J_0+f_a|$. Suppose $$C_D-K_D-D_2 \nsubseteq D_n-D_{n-2}.$$ Then $K_{n-1,1}(C,K_C\otimes \tau)=0$, for a general level $\ell$ curve $[C,\tau]\in \cR_{g,\ell}$.
By assumption, there exist points $x,y\in D$ such that $C_D-K_D-x-y \notin D_n-D_{n-2}.$ It follows from (\[fmp2\]) that this is equivalent to $H^0\bigl(D,\bigwedge^{n-2}M_{K_D}\otimes (2K_D-C_D+x+y)\bigr)=0$, implying $H^0\bigl(D,\bigwedge^{n-2}M_{K_D}\otimes (2K_D-C_D)\bigr)=0$. This is equivalent to $K_{n-2,2}(D,\mathcal{O}_{D}(-C),K_{D})=0$.
Next, by Lemma \[routine-difference\], our assumption implies $C_D-K_D-D_4 \nsubseteq D_{n-1}-D_{n-1}$. Thus $H^0\bigl(D,\bigwedge^{n-1}M_{K_D}\otimes (2K_D-C_D+T)\bigr)=0$, for some effective divisor $T\in D_4$, therefore $H^0\bigl(D,\bigwedge^{n-1}M_{K_D}\otimes (2K_D-C_D)\bigr)=0$ as well, amounting to $K_{n-1,2}\bigl(D,\mathcal{O}_{D}(-C), K_{D}\bigr)=0$.
Any smooth divisor $D \in |L|$ carries two distinguished points, namely $p$ and $q^{(g-2)}$. We will prove that, if $D \in |L|$ is general, then $$\label{pgtoprove}
C_D-K_D-p-q^{(g-2)} \notin D_n-D_{n-2}.$$
Let us first introduce some notation. For an integer $m \geq 1$, we define the line bundle $$L_m:=\OO_X(mJ_0+f_a) \in \text{Pic}(X).$$ A general element $D \in |L_m|$ is a smooth curve of genus $m$, having two distinguished points $p \in J_1$ and $q^{(m)} \in J_0$, which as already explained, are the base points of $|L_m|$. Recall that for each $j=0,\ldots, m-1$, we introduced the divisorial difference variety $$D_j-D_{m-1-j} \subseteq \text{Pic}^{2j+1-m}(D).$$ This difference variety is empty for $j<0$ or $j>m-1$. We shall prove (\[pgtoprove\]) inductively by contradiction, using the fact that in any family of curves on the surface $X$, there is a canonical degeneration to a curve with an elliptic tail.
The induction step
------------------
Assume that for a general curve $D\in |L_{g-2-j}|$ one has $$C_D-K_D-p-(2i+1)q^{(g-2-j)} \in D_{n-i}-D_{n-2-j+i}, \text{\; for some $0 \leq i \leq j$}.$$ Then for a general curve $Z\in |L_{g-3-j}|$, one has
$$\label{indtodo}
C_Z-K_Z-p-(2i'+1)q^{(g-3-j)} \in Z_{n-i'}-Z_{n-3-j+i'}, \text{\; for some $0 \leq i' \leq j+1$}.$$
Notice that the assumption $D_{n-i}-D_{n-2-j+i} \neq \emptyset$ for a curve $D\in |L_{g-2-j}|$ implies $$\begin{aligned}
\label{relevant-bds}
0 \leq n-i \leq g-3-j.\end{aligned}$$
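As a quick sanity check on these indices, note that both sides of the assumed membership have the same degree. Using $J_0^2=f^2=0$ and $J_0\cdot f=1$ on $X$, one computes $C\cdot D=g+(g-2-j)=2g-2-j$, and since $D$ has genus $g-2-j$ we get $$\deg\bigl(C_D-K_D-p-(2i+1)q^{(g-2-j)}\bigr)=(2g-2-j)-(2g-2j-6)-(2i+2)=j-2i+2=(n-i)-(n-2-j+i),$$ which is precisely the degree of an element of $D_{n-i}-D_{n-2-j+i}$.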
Let $D \in |L_{g-2-j}|$ be general. In order to prove the induction step, we degenerate $D$ within its linear system to the curve of compact type $$Y:=J_0+Z,$$ for a general $Z \in |L_{g-3-j}|$. Notice that $J_0 \cdot Z=q^{(g-3-j)}=:q$ and the marked point $p$ lies on $Z\setminus \{q\}$. On $Y$, in the spirit of limit linear series, we choose the twist of bidegree $\bigl(0, 2g-2j-6\bigr)$ of its dualizing sheaf, that is, the line bundle $$\widetilde{K} \in \text{Pic}(Y)$$ characterized by $\widetilde{K}\otimes \OO_{J_0} \cong \mathcal{O}_{J_0}$ and $\widetilde{K}\otimes \OO_{Z} \cong K_{Z}(2q)$. We establish a few technical statements to be used later in the proofs.
\[main-coh-lem\] Assume the bounds (\[relevant-bds\]). Then, for any $0 \leq i \leq j \leq g-4$, we have:
1. \[coh1-part1\] $h^0(Y,\widetilde{K})=h^0(D, K_{D})=g-2-j.$
2. $H^0\Bigl(Y,\OO_Y(C-J_1-(2i+1)J_0)\otimes \widetilde{K}^{\vee}\Bigr)=0$
3. $h^0\Bigl(Y,\OO_Y\bigl(C-J_1-(2i+1)J_0\bigr)\Bigr)=h^0\Bigl(D,C_D\bigl(-p-(2i+1)q^{(g-2-j)}\bigr)\Bigr) $
4. $h^0\Bigl(Y,\OO_Y(C-J_1-(2i+1)J_0)\otimes \widetilde{K}\Bigr)=h^0\Bigl(D,C_D\otimes K_D\bigl(-p-(2i+1)q^{(g-2-j)}\bigr)\Bigr)$.
\(i) As $\widetilde{K}$ is a limit of canonical bundles on smooth curves, $h^0(Y,\widetilde{K}) \geq g-2-j=h^0(D, K_{D})$. So it suffices to show $h^0(Y,\widetilde{K})\leq h^0(D, K_D)$. Twisting by $\widetilde{K}$ the short exact sequence $$\begin{aligned}
\label{MV} 0 \longrightarrow \mathcal{O}_{J_0}(-q) \longrightarrow \mathcal{O}_{Y} \longrightarrow \mathcal{O}_{Z} \longrightarrow 0\end{aligned}$$ and taking cohomology, we get $h^0(Y,\widetilde{K}) \leq h^0(Z,K_{Z}(2q))=g-2-j$, as required.
\(ii) Set $A_d:=\OO_Y\bigl(C-J_1-(2i+1)J_0\bigr)\otimes \widetilde{K}^{\otimes d} \in \text{Pic}(Y)$. One needs to show $H^0(Y,A_{-1})=0$. Via the projection $\phi:X\rightarrow E$ we identify the section $J_0$ with the elliptic curve $E$. We have $\OO_{J_0}(A_{-1}) \cong \eta^{\otimes (g-2i-1)}(r)$. Furthermore $\OO_{J_0}(q) \cong \eta^{\otimes (g-3-j)}(a)$, hence $$\OO_{J_0}(A_{-1}(-q)) \cong \eta^{\otimes (j-2i+2)}(r-a).$$ We have $H^0(E, \eta^{\otimes (j-2i+2)}(r-a))=H^0\bigl(E,\zeta^{\vee}\otimes \eta^{\otimes (j-2i+1)}\bigr)=0$, for $\zeta$ is $\ell$-torsion, whereas $\eta$ is not a torsion bundle. From the short exact sequence (\[MV\]) twisted by $A_{-1}$, in order to conclude it suffices to show that the restricted line bundle $$\begin{aligned}
\OO_{Z}({A_{-1}}) &\cong \OO_{Z}\bigl((g-2i-3)J_0-J_1+f_r\bigr)\otimes K_{Z}^{\vee}\\
& \cong \OO_{Z}\bigl((j+1-2i)J_0+f_r-f_a\bigr)\end{aligned}$$ is not effective. We will firstly show $H^0(X,(j+1-2i)J_0+f_r-f_a)=0$. If $j+1-2i <0$, this is immediate since then $\bigl((j+1-2i)J_0+f_r-f_a\bigr) \cdot f_r<0$ and the curve $f_r$ is nef. If $j+1-2i \geq 0$, we use the isomorphism $$H^0(X,(j+1-2i)J_0+f_r-f_a) \cong H^0\bigl(E,\mathcal{O}_E(r-a) \otimes \text{Sym}^{j+1-2i}(\mathcal{O}_E\oplus \eta)\bigr)=0.$$ In order to conclude, it is enough to show $H^1(X,(j+1-2i)J_0+f_r-f_a-Z)=0$. By Serre duality, this is equivalent to $$H^1(X,K_X+Z+f_a-f_r-(j+1-2i)J_0)=0.$$ We compute $$K_X+Z+f_a-f_r-(j+1-2i)J_0=(g-6+2i-2j)J_0+\phi^*\eta+2f_a-f_r,$$ where $g-6+2i-2j \geq -1$ by (\[relevant-bds\]). If $g-6+2i-2j \geq 0$, then $$H^1(X,(g-6+2i-2j)J_0+\phi^*\eta+2f_a-f_r)= H^1\bigl(E,\mathcal{O}_E(2a-r+\eta) \otimes \text{Sym}^{{g-6+2i-2j}}(\mathcal{O}_E \oplus \eta)\bigr),$$ which vanishes for degree reasons. Finally, if $g-6+2i-2j=-1$, an application of the Leray spectral sequence implies $H^1(X, -J_0+\phi^*\eta+2f_a-f_r)=0$, as well. This completes the proof.
\(iii) By Riemann–Roch and semicontinuity, it suffices to show $H^1(Y,A_0)=0$, that is, $$H^1\bigl(Y, \OO_Y((g-2i-1)J_0-J_1+f_r)\bigr)=0.$$
If so, then the bundle $\OO_X(C-J_1-(2i+1)J_0)$ has the same number of sections, when restricted to a general element $D\in |L_{g-2-j}|$ or to its codimension $1$ degeneration $Y$ in its linear system. By $(\ref{relevant-bds})$, we have $g-2i-1\geq 0$. First, starting from $H^1(X,-J_1+f_r)=0$, which is an easy consequence of the Leray spectral sequence, one shows inductively that $H^1(X, mJ_0-J_1+f_r)=0$ for all $m\geq 0$, in particular also $H^1\bigl(X, (g-2i-1)J_0-J_1+f_r\bigr)=0$.
To conclude, it is enough to show $H^2\bigl(X, (g-2i-1)J_0-J_1+f_r-Y\bigr)=0.$ By Serre duality, $$H^2\bigl(X, (g-2i-1)J_0-J_1+f_r-Y\bigr)\cong H^0\bigl(X,(2i-2-j)J_0+f_a-f_r\bigr)^{\vee}.$$
If $2i-2-j<0$, then the class $(2i-2-j)J_0+f_a-f_r$ is not effective on $X$ as it has negative intersection with $f_r$. If $2i-2-j \geq 0$ then this class is not effective by projecting to $E$.
\(iv) It suffices to show $H^1(Y, A_1)=0$. We use the exact sequence on $Y$ $$0 \longrightarrow \mathcal{O}_{Z}(-q) \longrightarrow \mathcal{O}_{Y} \longrightarrow \mathcal{O}_{J_0} \longrightarrow 0.$$ As $\deg \OO_{J_0}({A_1})=1$, it is enough to show $H^1\bigl(Z, \OO_{Z}(A_1)(-q)\bigr)=0$. By direct computation $$\deg \OO_{Z}(A_1(-q))=\deg K_{Z}+2n-2i+g-3-j.$$ From (\[relevant-bds\]), $n-i \geq 0$, whereas $j \leq g-4$ by assumption, so $g-3-j>0$ and the required vanishing follows for degree reasons.
We now have all the pieces needed to prove the induction step. The transversality statement (\[condd\]), to which the first half of the Prym–Green Conjecture has been reduced, is proved inductively, as part of a system of conditions involving difference varieties of curves of every genus on the surface $X$.
\[inductionstep\] Fix $0 \leq j \leq g-3$ and assume that for a general curve $D\in |L_{g-2-j}|$ one has $$C_D-K_D-p-(2i+1)q^{(g-2-j)} \in D_{n-i}-D_{n-2-j+i}, \ \mbox{ for some } 0 \leq i \leq j.$$ Then for a general curve $Z\in |L_{g-3-j}|$, the following holds $$C_Z-K_Z-p-(2i'+1)q^{(g-3-j)} \in Z_{n-i'}-Z_{n-3-j+i'}, \ \mbox{ for some } 0 \leq i' \leq j+1.$$
Using the determinantal realization of divisorial varieties (\[fmp2\]) emerging from [@farkas-mustata-popa], the assumption may be rewritten as $$H^0\Bigl(D, \bigwedge^{n-2-j+i}M^{\vee}_{K_{D}} \otimes K_D^{\vee} \otimes \OO_D\bigl(C-p-(2i+1)q^{(g-2-j)}\bigr)\Bigr) \neq 0,$$ or, equivalently, $$H^0\Bigl(D, \bigwedge^{n-i}M_{K_{D}} \otimes \OO_D(C-J_1-(2i+1)J_0)\Bigr) \neq 0.$$ By Lemma \[main-coh-lem\] (ii), $H^0\bigl(D, \OO_D(C-J_1-(2i+1)J_0)\otimes K_{D}^{\vee}\bigr)=0$, so this amounts to $$K_{n-i,0}\bigl(D,\OO_D(C-J_1-(2i+1)J_0), K_{D}\bigr) \neq 0.$$ We now let $D$ degenerate inside its linear system to the curve $Y=J_0+Z$, where $Z\in |L_{g-3-j}|$ and $J_0\cdot Z=q^{(g-3-j)}=:q$. By semicontinuity for Koszul cohomology [@BG], together with Lemma \[main-coh-lem\], this implies $$K_{n-i,0}\bigl(Y,\OO_Y(C-J_1-(2i+1)J_0), \widetilde{K}\bigr) \neq 0,$$ where $\widetilde{K}$ is the twist of the dualizing sheaf of $Y$ introduced just before Lemma \[main-coh-lem\]. This is the same as saying that the map $$\bigwedge^{n-i}H^0(Y, \widetilde{K}) \otimes H^0(Y,A_0) \to \bigwedge^{n-i-1}H^0(Y, \widetilde{K}) \otimes H^0(Y,A_1)$$ is not injective, where we recall the line bundles $A_d:=\OO_Y(C-J_1-(2i+1)J_0)\otimes \widetilde{K}^{\otimes d}$. As seen in the proof of Lemma \[main-coh-lem\] (i), restriction induces an isomorphism $$H^0(Y, \widetilde{K}) \cong H^0\bigl(Z, K_{Z}(2q)\bigr).$$ Using the identification between $J_0$ and $E$, we have seen in the proof of Lemma \[main-coh-lem\] (ii) that $\OO_{J_0}(A_d)(-q) \cong \eta^{\otimes (j-2i+2)}(r-a)$ is a nontrivial line bundle of degree $0$ on $E$, therefore $H^i\bigl(J_0,\OO_{J_0}(A_d)(-q)\bigr)=0$ for $i=0,1$. Thus, restriction to $Z$ induces an isomorphism $$H^0(Y,A_d) \cong H^0(Z,\OO_Z(A_d)).$$ This also gives that the map $$\bigwedge^{n-i}H^0\bigl(Z,K_{Z}(2q)\bigr) \otimes H^0\bigl(Z,\OO_Z(A_0)\bigr) \rightarrow \bigwedge^{n-i-1} H^0\bigl(Z, K_{Z}(2q)\bigr) \otimes H^0\bigl(Z,\OO_Z(A_1)\bigr)$$ fails to be injective. As one has $$H^0(Z, \OO_Z(A_1)) \subseteq H^0\bigl(Z,\OO_Z(A_1+2q)\bigr) \ \mbox{ and } \ H^0\bigl(Z,\OO_Z(A_{-1}-2q)\bigr)=0,$$ we obtain $K_{n-i,0}\bigl(Z,\OO_Z(A_0), K_{Z}(2q)\bigr) \neq 0$, which can be rewritten as $$\label{beauville}
H^0\Bigl(Z,\bigwedge^{n-i}M_{K_{Z}(2q)} \otimes \OO_Z(A_0)\Bigr) \neq 0.$$
We compute the slope $\mu\Bigl(\bigwedge^{n-i} M_{K_{Z}(2q)}\otimes \OO_Z(A_0)\Bigr)=g(Z)-1,$ where $\mu\bigl(M_{K_Z(2q)}\bigr)=-2$. By Serre-Duality, then condition (\[beauville\]) can be rewritten as $$H^0\Bigl(Z,\bigwedge^{n-i}M^{\vee}_{K_Z(2q)} \otimes K_{Z}\otimes \OO_Z(-A_0)\Bigr) \neq 0.$$ We now use that Beauville in [@beauville-stable] Proposition 2 has described the theta divisors of vector bundles of the form $\bigwedge^{n-i} M_{K_Z(2q)}$ as above. Using [@beauville-stable], from (\[beauville\]) it follows that either $$\OO_Z(A_0)- K_{Z} \in Z_{n-i}-Z_{n-3+i-j},$$ or else $$\OO_Z(A_0)- K_{Z}-2q \in Z_{n-i-1}-Z_{n-2+i-j}.$$ Taking into account that $\OO_Z(A_0)=\OO_Z(C-p-(2i+1)q)$, the desired conclusion now follows. As a final remark, we note that, whilst in [@beauville-stable] it is assumed that $Z$ is non-hyperelliptic (which, using the Brill-Noether genericity of $Z$, happens whenever $g-3-j \geq 3$), the above statement is a triviality for $g-3-j=1$, whereas in the remaining case $g-3-j=2$ it follows directly from the argument in [@beauville-stable] Proposition 2. Indeed, in this case we have a short exact sequence $$0 \longrightarrow \bigwedge^{n-i-1}M^{\vee}_{K_Z}(2q) \longrightarrow \bigwedge^{n-i}M^{\vee}_{K_Z(2q)} \longrightarrow \bigwedge^{n-i}M^{\vee}_{K_Z}\longrightarrow 0.$$ The claim now follows immediately from [@farkas-mustata-popa] §3. This completes the proof.
By the above Proposition and induction, we now reduce the proof of (\[vanishing-kos\]) to a single statement on elliptic curves on the ruled surface $X$:
\[pgpartone\] Set $g=2n+1$ and $\ell\geq 2$. Then for a general element $[C,\tau]\in \cR_{g,\ell}$, one has $K_{n-1,1}(C,K_C\otimes \tau)=0$.
We apply Lemma \[suffcond1\] and the sufficient condition (\[pgtoprove\]). By the inductive step described above, reasoning by contradiction, it suffices to show that if $D\in |L_1|$ is general, then $$\begin{aligned}
C_{D}-K_{D}-p-(2i+1)q^{(1)} \notin D_{n-i}-D_{n-i},\end{aligned}$$ for each $1 \leq i \leq g-3$. Assume this is not the case. The bounds (\[relevant-bds\]) force $n=i$ and the difference variety on the right consists of $\{\OO_{D}\}$. One needs to prove that $\OO_{D}(C-p-(2n+1)q^{(1)}) \cong \OO_{D}({f_r}-J_1)$ is not effective. As $H^0(X,f_r-J_1)=0$ and $D\in |J_0+f_a|$, it suffices to prove $$H^1(X,f_r-J_1-J_0-f_a)=H^1(X,f_r-f_a+K_X)=0,$$ or equivalently by Serre duality, that $H^1(X,f_a-f_r)=0$. This follows immediately from the Leray spectral sequence.
The Green-Lazarsfeld Secant Conjecture for paracanonical curves {#secant1}
===============================================================
We recall the statement of the Green-Lazarsfeld Secant Conjecture [@GL]. Let $p$ be a positive integer, $C$ a smooth curve of genus $g$ and $L$ a non-special line bundle of degree $$\label{bound}
d\geq 2g+p+1-\mbox{Cliff}(C).$$ The Secant Conjecture predicts that if $L$ is $(p+1)$-very ample then $K_{p,2}(C,L)=0$ (the converse implication is easy, so one has an equivalence). The Secant Conjecture has been proved in many cases in [@generic-secant], in particular for a general curve $C$ and a general line bundle $L$. In the extremal case $d=2g+p+1-\mbox{Cliff}(C)$, Theorem 1.7 in [@generic-secant] says that whenever $$L-K_C+C_{d-g-2p-3}\nsubseteq C_{d-g-p-1}-C_{2g-d+p}$$ (the left hand side being a divisorial difference variety), then $K_{p,2}(C,L)=0$. Theorem 1.7 in [@generic-secant] requires $C$ to be Brill-Noether-Petri general, but the proof given in *loc. cit.* shows that for curves of odd genus the only requirement is that $C$ have maximum gonality $\frac{g+3}{2}$.
In the case at hand, we choose a general curve on the decomposable elliptic surface $X$ $$C \in |gJ_0+f_r|$$ of genus $g=2n+1$ and Clifford index $\mbox{Cliff}(C)=n$. We apply the above result to $L_{C}=K_C\otimes \tau$, with $\tau=\phi_C^*(\zeta)$. In order to conclude that $K_{n-3,2}(C,K_C\otimes \tau)=0$, it suffices to show $$\tau+C_2 \nsubseteq C_{n+1}-C_{n-1}.$$ We have two natural points on $C$, namely those cut out by intersection with $J_0$ and $J_1$, and for those points it suffices to show $$\label{pgtoshow2}
\phi^*\zeta\otimes \OO_C(J_0+J_1) \notin C_{n+1}-C_{n-1}.$$
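Let us record the numerology behind this application of [@generic-secant] Theorem 1.7: here $p=n-3$ and $L=K_C\otimes \tau$, so $d=2g-2$ and the bound (\[bound\]) is attained, since $2g+p+1-\mbox{Cliff}(C)=2g+(n-3)+1-n=2g-2$. Furthermore $$d-g-2p-3=2, \qquad d-g-p-1=n+1, \qquad 2g-d+p=n-1,$$ so the sufficient condition $L-K_C+C_{d-g-2p-3}\nsubseteq C_{d-g-p-1}-C_{2g-d+p}$ specializes precisely to (\[difftransl\]).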
We first establish a technical result similar to Lemma \[main-coh-lem\] and because of this analogy we use similar notations. For $0 \leq i \leq g-1$, let $Y \in |(g-i)J_0+f_r|$ be the union of $J_0$ and a general curve $Z \in |(g-i-1)J_0+f_r|$. We set $x_0:=Z\cdot J_0=f_{t^{g-i-1}}\cdot J_0$, where $t^{(g-i-1)}\in E$ satisfies $\OO_E(t^{(g-i-1)}-r)=\eta^{\otimes (g-i-1)}$. We denote $\widetilde{K} \in \text{Pic}(Y)$ the twist at the node of the dualizing sheaf of $Y$ such that $\OO_{J_0}(\widetilde{K})\cong \mathcal{O}_{J_0}$ and $\OO_{Z}(\widetilde{K})\cong K_Z(2x_0)$. We recall that $Z$ has a second distinguished point, namely $x_1=Z\cdot J_1$.
\[coh-lem-quad\] Let $Y \in |(g-i)J_0+f_r|$ for $0 \leq i \leq g-1$ be as above and assume $j$ is an integer satisfying $0 \leq j \leq n-1$ and $0 \leq i-j \leq n+1$. For a general $D\in |(g-i)J_0+f_r|$, we have:
1. $h^0(Y,\widetilde{K})=h^0(D,K_{D})$.
2. $H^0\Bigl(Y,\OO_Y\bigl(\phi^*\zeta^{\vee}-(2j+1-i)J_0-J_1\bigr)\Bigr)=0$
3. $h^0\Bigl(Y,\widetilde{K}^{\otimes m}\otimes \phi^*{\zeta^{\vee}}(-(2j+1-i)J_0-J_1)\Bigr)=h^0\Bigl(D,K_{D}^{\otimes m}\otimes \phi^*{\zeta^{\vee}}(-(2j+1-i)J_0-J_1)\Bigr)$, for $m \in \{1,2\}$.
\(i) This is similar to Lemma \[main-coh-lem\] (i) and we skip the details.
\(ii) Set $k=-(2j+1-i)$. If $k \leq 0$, the statement is clear for degree reasons, so assume $k \geq 1$. Then $H^0\bigl(X,\phi^*\zeta^{\vee}\otimes \OO_X(k J_0-J_1)\bigr)\cong H^0\bigl(E,(\eta\otimes \zeta^{\vee})\otimes\text{Sym}^{k-1}(\mathcal{O}_E \oplus \eta)\bigr)=0,$ so it suffices to show that $H^1\bigl(X,\phi^*\zeta^{\vee}\otimes \OO_X(kJ_0-J_1-Y)\bigr)=0$. By Serre duality, this is equivalent to $$H^1\bigl(X,\phi^*\zeta\otimes \OO_X((g-k-i-1)J_0+f_r)\bigr)=0.$$ Using the given bounds, $g-k-i-1\geq -1$. It suffices to show $H^1\bigl(X,\phi^*\zeta\otimes \OO_X(mJ_0+f_r)\bigr)=0$, for $m \geq -1$. This follows along the lines of the proof of Lemma \[main-coh-lem\].
\(iii) By Riemann–Roch and semicontinuity, it suffices to show that for $m=1,2$, one has $$H^1\Bigr(Y,\widetilde{K}^{\otimes m}\otimes \OO_Y(\phi^*{\zeta^{\vee}}-(2j+1-i)J_0-J_1)\Bigr)=0.$$ Consider the exact sequence
$$\label{exseq}
0 \longrightarrow \mathcal{O}_{Z}(-x_0) \longrightarrow \mathcal{O}_{Y} \longrightarrow \mathcal{O}_{J_0} \longrightarrow 0.$$
Then $\OO_{J_0}\bigl(\widetilde{K}^{\otimes m}\otimes \phi^*{\zeta^{\vee}}(-(2j+1-i)J_0-J_1)\bigr)\cong \zeta^{\vee}\otimes \eta^{\otimes (i-2j-1)} \neq 0 \in \text{Pic}^0(E)$. So it suffices to show $H^1\bigl(Z,K_{Z} \otimes \phi^*\zeta^{\vee}(-(2j-i)x_0-x_1)\bigr)=0$ and $H^1\bigl(Z,K_{Z}^{\otimes 2}\otimes \phi^*\zeta^{\vee}(-(2j-i-2)x_0-x_1)\bigr)=0$. The second vanishing is automatic for degree reasons (using the bounds on $i$ and $j$), so we just need to establish the first one. By Serre duality, this is equivalent to $$H^0\Bigl(Z, \phi^*(\zeta\otimes \eta^{\vee})\otimes \OO_Z((2j-i+1)x_0)\Bigr)=0.$$ This is obvious if $2j-i+1<0$, so assume $2j-i+1 \geq 0$. Using once more the Leray spectral sequence, it follows $H^0\bigl(X,\phi^*(\zeta\otimes \eta^{\vee})\otimes \OO_X((2j-i+1) J_0)\bigr)=0$, so it suffices to prove $$H^1\Bigl(X,\phi^*(\zeta\otimes \eta^{\vee})\otimes \OO_X\bigl((2j-i+1)J_0-Z\bigr)\Bigr)=0.$$ Using Serre duality and the bound $g-2j-4 \geq -1$, this goes through as in the proof of \[coh2-part1\].
The following proposition provides the induction step, to be proved in order to establish the second half of the Prym–Green Conjecture:
Let $0 \leq i \leq g-2$. Suppose that for a general curve $D\in |(g-i)J_0+f_r|$ there exists an integer $0\leq j\leq i$ such that $$\OO_{D}\bigl(\phi^*\zeta+(2j-i+1)J_0+J_1\bigr) \in D_{n+1-i+j}-D_{n-1-j}.$$ Then for a general curve $Z\in |(g-i-1)J_0+f_r|$, there exists $0\leq j'\leq i+1$ such that $$\OO_Z\bigl(\phi^*\zeta+(2j'-i)J_0+J_1\bigr) \in Z_{n-i+j'}-Z_{n-1-j'}.$$
By assumption, $j \leq n-1$ and $i-j \leq n+1$. Applying again the determinantal description of divisorial difference varieties from [@farkas-mustata-popa] §3 and with Serre duality, the hypothesis turns into $$H^0\Bigl(D, \bigwedge^{n-1-j}M_{K_D}\otimes \OO_D\bigl(\phi^*\zeta^{\vee}-(2j-i+1)J_0-J_1\bigr)\Bigr) \neq 0.$$ By Lemma \[coh-lem-quad\] and semicontinuity, $H^0\bigl(D,\OO_D(\phi^*{\zeta^{\vee}}-(2j+1-i)J_0-J_1)\bigr)=0$, so the above is equivalent to $$K_{n-1-j,1}\bigl (D, \OO_D(\phi^*\zeta^{\vee}-(2j-i+1)J_0-J_1), K_{D}\bigr) \neq 0.$$ By Lemma \[coh-lem-quad\] and semicontinuity for Koszul cohomology, we then also have $$K_{n-1-j,1}\bigl(Y,\OO_Y(\phi^*\zeta^{\vee}-(2j-i+1)J_0-J_1), \widetilde{K}\bigr) \neq 0,$$ where, recall that $Y=Z\cup J_0$, with $Z\cdot J_0=x_0$. Consider again the exact sequence (\[exseq\]) and since $H^0(J_0,\OO_{J_0}(mJ_0+\phi^*\zeta^{\vee}))=0$ for any $m$, the inclusion map yields isomorphisms $$H^0\Bigl(Z, K_{Z}^{\otimes m}\otimes \OO_Y\bigl(\phi^*\zeta^{\vee}+(2m-2j+i-2)x_0-x_1\bigr)\Bigr)
\cong H^0\Bigl(Y,\widetilde{K}^{\otimes m}\otimes \OO_Y\bigl(\phi^*\zeta^{\vee}-(2j-i+1)J_0-J_1\bigr)\Bigr).$$ valid for all positive integers $m$. Recall the isomorphism $H^0(Y,\widetilde{K}) \cong H^0\bigl(Z,K_{Z}(2x_0)\bigr)$ given by restriction. We define the graded $\mbox{Sym } H^0(Z,K_Z(2x_0))$-module $$A:=\bigoplus_{q\in \mathbb Z} H^0\Bigl(Z,\OO_Z(\widetilde{K}^{\otimes q}+\phi^*\zeta^{\vee}-(2j-i+1)x_0-x_1)\Bigr),$$ as well as the graded $\mbox{Sym } H^0(Y,\widetilde{K})$-module $$B:=\bigoplus_{q\in \mathbb Z} H^0\Bigl(Y,\widetilde{K}^{\otimes q}\otimes \OO_Y(\phi^*\zeta^{\vee}-(2j-i+1)J_0-J_1)\Bigr).$$ We then have the following commutative diagram, where the vertical arrows are isomorphisms induced by tensoring the exact sequence (\[exseq\]): $$\xymatrix{
\bigwedge^{n-1-j} H^0\bigl(Z, K_Z(2x_0)\bigr) \otimes A_1
\ar[r] \ar[d] &\bigwedge^{n-2-j} H^0\bigl(Z, K_Z(2x_0)\bigr) \otimes A_2 \ar[d] \\
\bigwedge^{n-1-j} H^0(Y, \widetilde{K}) \otimes B_1 \ar[r]& \bigwedge^{n-2-j} H^0(Y, \widetilde{K}) \otimes B_2,}$$ Thus it follows $K_{n-1-j,1}\Bigl(Z,\OO_Z(\phi^*\zeta^{\vee}+(i-2j-2)x_0-x_1), K_Z(2x_0)\Bigr) \neq 0$, or, equivalently, $$H^0\Bigl(Z,\bigwedge^{n-1-j}M_{K_{Z}(2x_0)} \otimes K_Z\otimes \OO_Z(\phi^*\zeta^{\vee}+(i-2j-2)x_0-x_1\Bigr) \neq 0.$$ We now compute the slope $\chi\Bigl(\bigwedge^{n-1-j}M_{K_Z}(2x_0) \otimes K_Z \otimes \OO_Z(\phi^*\zeta^{\vee}+(i-2j-2)x_0-x_1\Bigr))=0$. Applying once more the description given in [@beauville-stable] Proposition 2 for the theta divisors of the exterior powers of the vector bundle $M_{K_Z(2x_0)}$, we obtain that either $$\OO_Z\bigl(\phi^*\zeta+(2j-i)x_0+x_1\bigr) \in Z_{n-i+j}-Z_{n-1-j},$$ or $$\OO_Z\bigl(\phi^*\zeta+(2j+2-i)x_0+x_1\bigr) \in Z_{n+1-i+j}-Z_{n-2-j},$$ which establishes the claim.
We now complete the proof of the Prym–Green Conjecture for odd genus.
\[finalcheck\] Set $g=2n+1$ and $\ell\geq 2$. Then for a general element $[C,\tau]\in \cR_{g,\ell}$ one has $K_{n-3,2}(C,K_C\otimes \tau)=0$.
Using the inductive argument from Proposition \[coh-lem-quad\], it suffices to prove the base case of the induction, that is, show that if $D\in |J_0+f_r|$ is general, then $$\OO_D\bigl(\phi^*\zeta+(2j-g+2)J_0+J_1\bigr) \notin D_{n-1-j}-D_{n-1-j},$$ for any $0 \leq j \leq g-1$. Suppose this is not the case, which forces $j=n-1$ and then $$\OO_D\bigl(\phi^*(\zeta\otimes \eta^{\vee})\bigr)\cong \OO_{D}(\phi^* \zeta-J_0+J_1) \cong \mathcal{O}_{D}.$$ Since $H^0\bigl(X, \phi^*(\zeta\otimes \eta^{\vee})\bigr)=0$, this implies $H^1\bigl(X,\phi^*(\zeta\otimes \eta^{\vee})\otimes \OO_X(-J_0-f_r)\bigr) \neq 0$. Observe that $H^2\bigl(X,\phi^*(\zeta\otimes \eta^{\vee})\otimes \OO_X(-J_0-f_r)\bigr)=0$ by Serre duality. Taking the cohomology exact sequence associated to $$0 \longrightarrow \phi^*(\zeta\otimes \eta^{\vee})\otimes \OO_X(-J_0-f_r) \longrightarrow \phi^*(\zeta\otimes \eta^{\vee})\otimes \OO_X(-f_r) \longrightarrow \OO_{J_0}\bigl(\phi^*(\zeta\otimes \eta^{\vee})-f_r\bigr) \longrightarrow 0$$ and using the Leray spectral sequence, we immediately get $H^1\bigl(X,\phi^*(\zeta\otimes \eta^{\vee})\otimes \OO_X(-J_0-f_r)\bigr)=0$, which is a contradiction.
[aaaaaa]{} E. Arbarello, A. Bruno and E. Sernesi, [*[On hyperplane sections of K3 surfaces]{}*]{}, arXiv:1507.05002, to appear in Algebraic Geometry. M. Aprodu and J. Nagel, [*[Koszul cohomology and algebraic geometry]{}*]{}, University Lecture Series **52**, American Mathematical Society, Providence, RI (2010). A. Beauville, [*[Prym varieties and the Schottky problem]{}*]{}, Inventiones Math. **41** (1977), 149–196. A. Beauville, [*[Some stable vector bundles with reducible theta divisor]{}*]{}, Manuscripta Math. **110** (2003), 343–349. M. Boratyński and S. Greco, [*[Hilbert functions and Betti numbers in a flat family]{}*]{}, Annali di Mat. Pura ed Applicata **42** (1985), 277–292. C. Birkenhake and H. Lange, [*[Complex abelian varieties]{}*]{}, Grundlehren der mathematischen Wissenschaften **302** (2004), Springer Verlag. A. Chiodo, D. Eisenbud, G. Farkas and F.-O. Schreyer, [*[Syzygies of torsion bundles and the geometry of the level $\ell$ modular variety over $\overline{\mathcal{M}}_g$]{}*]{}, Inventiones Math. **194** (2013), 73–118. E. Colombo, G. Farkas, A. Verra and C. Voisin, [*[Syzygies of Prym and paracanonical curves of genus 8]{}*]{}, arXiv:1612.01026, EPIGA **1** (2017), paper 7. G. Farkas and M. Kemeny, [*[The generic Green–Lazarsfeld Secant Conjecture]{}*]{}, Inventiones Math. **203** (2016), 265–301. G. Farkas and M. Kemeny, [*[The Prym-Green Conjecture for torsion line bundles of high order]{}*]{}, Duke Math. Journal **166** (2017), 1103–1124. G. Farkas and M. Kemeny, [*[Linear syzygies for curves of prescribed gonality]{}*]{}, arXiv:1610.04424. G. Farkas and K. Ludwig, [*[The Kodaira dimension of the moduli space of Prym varieties]{}*]{}, Journal of the European Math. Society **12** (2010), 755–795. G. Farkas, M. Mustaţă and M. Popa, [*[Divisors on $\cM_{g, g+1}$ and the Minimal Resolution Conjecture for points on canonical curves]{}*]{}, Annales Sci. de L’École Normale Supérieure **36** (2003), 553–581. G. Farkas and N. Tarasca, [*[Du Val curves and the pointed Brill-Noether Theorem]{}*]{}, Selecta Mathematica **23** (2017), 2243–2259. W. Fulton and R. Lazarsfeld, [*[On the connectedness of degeneracy loci and special divisors]{}*]{}, Acta Mathematica **146** (1981), 271–283. M. Green, [*[Koszul cohomology and the cohomology of projective varieties]{}*]{}, Journal of Differential Geometry **19** (1984), 125–171. M. Green and R. Lazarsfeld, [*[On the projective normality of complete linear series on an algebraic curve]{}*]{}, Inventiones Math. **83** (1986), 73–90. A. Hirschowitz and S. Ramanan, [*[New evidence for Green’s Conjecture on syzygies of canonical curves]{}*]{}, Annales Scientifiques de l’École Normale Supérieure **31** (1998), 145–152. A. Treibich, [*[Revêtements tangentiels et condition de Brill-Noether]{}*]{}, C. R. Acad. des Sci. Paris. Sér. I. Math **316** (1993), 815–817. C. Voisin, [*[Green’s generic syzygy conjecture for curves of even genus lying on a $K3$ surface]{}*]{}, Journal of European Math. Society **4** (2002), 363–404. C. Voisin, [*[Green’s canonical syzygy conjecture for generic curves of odd genus]{}*]{}, Compositio Mathematica **141** (2005), 1163–1190.
---
abstract: 'Developing speech technologies for low-resource languages has become a very active research field over the last decade. Among others, Bayesian models have shown some promising results on artificial examples but still lack *in situ* experiments. Our work applies state-of-the-art Bayesian models to unsupervised Acoustic Unit Discovery (AUD) in a real low-resource language scenario. We also show that Bayesian models can naturally integrate information from other resourceful languages by means of an *informative prior*, leading to more consistent discovered units. Finally, discovered acoustic units are used, either as the 1-best sequence or as a lattice, to perform word segmentation. Word segmentation results show that this Bayesian approach clearly outperforms a Segmental-DTW baseline on the same corpus.'
address: |
1. Brno University of Technology, Brno, Czech Republic, 2. LIMSI, CNRS, Université Paris Saclay\
3. LIG, CNRS, Université Grenoble Alpes, 4. University of Illinois, Urbana, IL, USA\
5. Centre for Language Studies, Radboud University, Nijmegen, Netherlands\
6. CoML, ENS/EHESS/PSL Research University/CNRS/INRIA, Paris, France\
7. Johns Hopkins University, Baltimore, MD USA
bibliography:
- 'refs.bib'
title: Bayesian Models for Unit Discovery on a Very Low Resource Language
---
Acoustic Unit Discovery, Low-Resource ASR, Bayesian Model, Informative Prior.
Introduction {#sec:intro}
============
Out of nearly 7000 languages spoken worldwide, current speech (ASR, TTS, voice search, etc.) technologies barely address 200 of them. Broadening ASR technologies to ideally all possible languages is a challenge with very high stakes in many areas and is at the heart of several fundamental research problems, ranging from psycholinguistics (how humans learn to recognize speech) to pure machine learning (how to extract knowledge from unlabeled data). The present work focuses on the narrow but important problem of unsupervised Acoustic Unit Discovery (AUD). It is the continuation of an ongoing effort to develop a Bayesian model suitable for this task, which stems from the seminal work of [@Lee2012], later refined and made scalable in [@Ondel2016]. This model, while rather crude, has been shown to provide a clustering accurate enough to be used in topic identification of spoken documents in unknown languages [@Kesiraju2017]. It was also shown that this model can be further improved by incorporating a Bayesian “phonotactic” language model learned jointly with the acoustic units [@Ondel2017]. Finally, following the work in [@Johnson2016], it has been combined successfully with variational auto-encoders, leading to a model combining the potential of both deep neural networks and Bayesian models [@Ebbers2017]. The contribution of this work is threefold:
- we compare two Bayesian models ([@Ondel2016] and [@Ebbers2017]) for acoustic unit discovery (AUD) on a very low resource language speech corpus,
- we investigate the use of “informative prior” to improve the performance of Bayesian models by using information from resourceful languages,
- as an extrinsic evaluation of AUD quality, we cascade AUD with sequence/lattice based word discovery [@HeymannW2013].
Models {#sec:models}
======
The AUD model described in [@Lee2012; @Ondel2016] is a non-parametric Bayesian Hidden Markov Model (HMM). This model is topologically equivalent to a phone-loop model with two major differences:
- since it is trained in an unsupervised fashion, the elements of the loop cannot directly be interpreted as the actual phones of the target language but rather as acoustic units (defined as 3-state left-to-right sub-HMMs) whose time scale approximately corresponds to the phonetic time scale.
- to cope with the unknown number of acoustic units needed to properly describe speech, the model assumes a theoretically infinite number of potential acoustic units. However, during inference, the prior over the weights of the acoustic units (a Dirichlet Process [@TehJor2010]) will act as a sparsity regularizer, leading to a model which explains the data with a relatively small number of units.
In this work, we have used two variants of this original model. The first one (called the HMM model in the remainder of this paper), following the analysis carried out in [@Kurihara2007], approximates the Dirichlet Process prior by a mere symmetric Dirichlet prior. This approximation, while retaining the sparsity constraint, avoids the complication of dealing with the variational treatment of the stick-breaking process frequent in Bayesian non-parametric models. The second variant, which we shall denote Structured Variational AutoEncoder (SVAE) AUD, is based upon the work of [@Johnson2016] and embeds the HMM model into the Variational AutoEncoder framework [@KingmaW2013]. A very similar version of the SVAE for AUD was developed independently and presented in [@Ebbers2017]. The main noteworthy difference between [@Ebbers2017] and our model is that we consider a fully Bayesian version of the HMM embedded in the VAE, and the posterior distribution and the VAE parameters are trained jointly using Stochastic Variational Bayes [@Johnson2016; @Hoffman2013]. For both variants, the prior over the HMM parameters was set to the conjugate of the likelihood density: a Normal-Gamma prior for the mean and variance of the Gaussian components, a symmetric Dirichlet prior over the HMM’s state mixture weights and a symmetric Dirichlet prior over the acoustic units’ weights. For the case of the uninformative prior, the prior was set to be a vague prior with one pseudo-observation [@Bishop2006:PRM] [^1].
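To make the phone-loop structure concrete, the following sketch samples from a toy version of the generative model. It is only illustrative: the number of units is truncated, emissions are single Gaussians rather than GMMs, and all hyper-parameter values are arbitrary, so it mirrors the topology of the model rather than the exact densities used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative sketch of the phone-loop AUD model: truncated number of
# units, single-Gaussian (not GMM) emissions, arbitrary hyper-parameters.
U, D, S = 20, 39, 3                                # units, feature dim, states

unit_weights = rng.dirichlet(np.full(U, 1.0))      # symmetric Dirichlet prior
means = rng.normal(0.0, 3.0, size=(U, S, D))       # per-state emission means
precs = rng.gamma(2.0, 1.0, size=(U, S, D))        # per-state diag. precisions
self_loop = 0.7                                    # HMM self-transition prob.

def sample_utterance(n_units):
    """Sample a unit sequence and the corresponding feature frames."""
    frames, labels = [], []
    for _ in range(n_units):
        u = rng.choice(U, p=unit_weights)          # pick a unit in the loop
        labels.append(u)
        for s in range(S):                         # 3-state left-to-right HMM
            dur = rng.geometric(1.0 - self_loop)   # geometric state duration
            x = rng.normal(means[u, s], 1.0 / np.sqrt(precs[u, s]), (dur, D))
            frames.append(x)
    return np.vstack(frames), labels

X, units = sample_utterance(8)
print(X.shape, units)
```

Inference then amounts to inverting this generative process from the observed frames alone, which is what the variational training described below does.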
Informative Prior {#sec:informative_prior}
=================
Bayesian Inference differs from other machine learning techniques by introducing a distribution $p(\GlobalParams)$ over the parameters of the model. A major concern in Bayesian Inference is usually to define a prior that makes as few assumptions as possible. Such a prior is usually known as an uninformative prior. Having a completely uninformative prior has the practical advantage that the prior distribution will have a minimal impact on the outcome of the inference, leading to a model which bases its predictions purely and solely on the data. In the present work, we aim at the opposite behavior: we wish our AUD model to learn phone-like units from the unlabeled speech data of a target language given the knowledge that was previously accumulated from another resourceful language. More formally, the original AUD model training consists in estimating the *a posteriori* distribution of the parameters given the unlabeled speech data of a target language $\Matrix{X}_t$:
$$p(\GlobalParams | \Matrix{X}_t) = \frac{p(\Matrix{X}_t | \GlobalParams) p(\GlobalParams)}{p(\Matrix{X}_t)}$$
The parameters are divided into two subgroups $\GlobalParams = \{\Params, \LatentVar_t\}$, where $\Params$ are the global parameters of the model and $\LatentVar_t$ are the latent variables which, in our case, correspond to the sequences of acoustic units. The global parameters are separated into two independent subsets: $\Params = \{ \Params_A, \Params_L\}$, corresponding to the acoustic parameters ($\Params_A$) and the “phonotactic” language model parameters ($\Params_L$). Substituting $\GlobalParams = \{\Params, \LatentVar_t\}$ and using the conditional independencies among the variables induced by the model (see [@Ondel2016] for details) leads to:
$$\label{eq:bayes_noinf_prior}
p(\LatentVar_t, \Params | \Matrix{X}_t) \propto p(\Matrix{X}_t | \LatentVar_t, \Params_A) p(\LatentVar_t|\Params_L) p(\Params_L)p(\Params_A)$$
If we further assume that we have at our disposal speech data in a different language than the target one, denoted $\Matrix{X_p}$, along with its phonetic transcription $\LatentVar_p$, it is then straightforward to show that: $$\begin{aligned}
\label{eq:bayes_inf_prior}
p(\Params, \LatentVar_t | \Matrix{X}_t, \Matrix{X}_p, \LatentVar_p) &\propto & p(\Matrix{X}_t | \LatentVar_t, \Params_A)p(\LatentVar_t | \Params_L)p(\Params_L)p(\Params_A|\Matrix{X}_p, \LatentVar_p)\end{aligned}$$ which is the same as Eq. \[eq:bayes\_noinf\_prior\], except that the distribution of the acoustic parameters is now based on the data of the resourceful language. In contrast with the term uninformative prior, we refer to $p(\Params_A | \Matrix{X}_p, \LatentVar_p)$ as an informative prior. As illustrated by Eq. \[eq:bayes\_inf\_prior\], a characteristic of Bayesian inference is that it naturally leads to sequential inference. Therefore, model training can be summarized as:
- given some prior data $\Matrix{X}_p$ from a resourceful language, estimate a posterior distribution over the acoustic parameters $p(\Params_A|\Matrix{X}_p)$
- for a new unlabeled speech corpus, estimate the posterior distribution but considering the learned posterior distribution $p(\Params_A|\Matrix{X}_p)$ as a “prior”.
In practice, the computation of the informative prior as well as of the final posterior distribution is intractable, and we seek an approximation by means of the well-known Variational Bayes Inference [@Jordan1999]. The approximate informative prior $q_1(\Params_A)$ is estimated by optimizing the variational lower bound on the evidence of the prior data $\Matrix{X}_p$:
$$\begin{split}
q_1^* = \operatorname*{arg\,max}_{q_1} \; & \Exp_{q_1(\Params_A)} \big[ \ln
p(\Matrix{X}_p, \Params_A | \LatentParams_p) \big] \\
& - \KL (q_1(\Params_A) || p(\Params_A))
\end{split}$$
where $\KL$ is the Kullback-Leibler divergence. Then, the posterior distribution of the parameters given the target data $q_2(\Matrix{\LatentParams}_t, \Params_A, \Params_L)$ can be estimated by optimizing the evidence of the target data $\Matrix{X}_t$:
$$\label{eq:vb_training}
\begin{split}
q_2^* = \operatorname*{arg\,max}_{q_2} \; & \Exp_{q_2(\Matrix{\LatentParams}_t,
\Params_A, \Params_L)} \big[ \ln
p(\Matrix{X}_t, \Matrix{\LatentParams}_t, \Params_A, \Params_L) \big] \\
& - \KL (q_2(\Params_A) || q_1(\Params_A)) \\
& - \KL (q_2(\LatentParams_t, \Params_L) || p(\LatentParams_t, \Params_L))
\end{split}$$
Note that when the model is trained with an uninformative prior, the loss function is the same as in Eq. \[eq:vb\_training\] but with $p(\Params_A)$ instead of $q_1(\Params_A)$. For the case of the uninformative prior, the Variational Bayes Inference was initialized as described in [@Ondel2016]. In the informative prior case, we initialized the algorithm by setting $q_2(\Params_A) =
q_1(\Params_A)$.
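The following toy sketch illustrates this two-step recipe on a single one-dimensional Gaussian with a Normal-Gamma conjugate prior; the actual model applies such updates to every Gaussian component of every HMM state, and the data and hyper-parameter values below are made up.

```python
import numpy as np

def ng_update(prior, x):
    """Normal-Gamma posterior (m, k, a, b) after observing the samples x."""
    m0, k0, a0, b0 = prior
    n, xbar = len(x), float(np.mean(x))
    ss = float(np.sum((x - xbar) ** 2))
    k, a = k0 + n, a0 + n / 2.0
    m = (k0 * m0 + n * xbar) / k
    b = b0 + 0.5 * ss + 0.5 * k0 * n * (xbar - m0) ** 2 / k
    return m, k, a, b

uninformative = (0.0, 1.0, 1.0, 1.0)     # vague prior, one pseudo-observation

# Step 1: labelled data from a resourceful language -> informative prior.
x_prior = np.random.default_rng(1).normal(2.0, 1.0, 500)
informative = ng_update(uninformative, x_prior)

# Step 2: the learned distribution plays the role of the prior when fitting
# the unlabelled target data.
x_target = np.random.default_rng(2).normal(2.5, 1.0, 50)
print("informative prior:  ", ng_update(informative, x_target)[0])
print("uninformative prior:", ng_update(uninformative, x_target)[0])
```

With the informative prior the posterior is pulled towards the statistics learned from the resourceful language, which mirrors the role of $q_1(\Params_A)$ in Eq. \[eq:vb\_training\].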
Experimental Setup {#sec:setup}
==================
Corpora and acoustic features {#subsec:database}
-----------------------------
We used the Mboshi5K corpus [@mboshi-arxiv] as a test set for all the experiments reported here. [[Mboshi]{}]{} (Bantu C25) is a typical Bantu language spoken in Congo-Brazzaville. It is one of the languages documented by the BULB (Breaking the Unwritten Language Barrier) project [@addaBulbSLTU2016]. This speech dataset was collected following a real language documentation scenario, using [Lig\_Aikuma]{}[^2], a mobile app specifically dedicated to fieldwork language documentation, which works both on android powered smartphones and tablets [@blachon2016]. The corpus is multilingual (5130 [[Mboshi]{}]{} speech utterances aligned to French text) and contains linguists’ transcriptions in [[Mboshi]{}]{} (in the form of a non-standard graphemic form close to the language phonology). It is also enriched with automatic forced-alignment between speech and transcriptions. The dataset is made available to the research community[^3]. More details on this corpus can be found in [@mboshi-arxiv].
TIMIT is also used as an extra speech corpus to train the informative prior. We used two different sets of features: the mean-normalized MFCC + $\Delta$ + $\Delta\Delta$ features generated by HTK and the Multilingual BottleNeck (MBN) features [@Grezl2014] trained on the Czech, German, Portuguese, Russian, Spanish, Turkish and Vietnamese data of the Global Phone database.
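For reference, a rough stand-in for this MFCC front-end can be obtained with librosa as follows; the exact HTK configuration (window length, filterbank, number of coefficients) is not reproduced here, and the file name is a placeholder.

```python
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)       # one utterance (placeholder path)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
mfcc -= mfcc.mean(axis=1, keepdims=True)              # per-utterance mean normalization
feats = np.vstack([mfcc,
                   librosa.feature.delta(mfcc),       # first derivatives
                   librosa.feature.delta(mfcc, order=2)])  # second derivatives
# feats has shape (39, n_frames)
```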
Acoustic unit discovery (AUD) evaluation {#subsec:acoustic unit evaluation}
----------------------------------------
To evaluate our work we measured how the discovered units compare to the forced-aligned phones in terms of segmentation and information. The accuracy of the segmentation was measured in terms of Precision, Recall and F-score. If a unit boundary occurs within +/- 10 ms of an actual phone boundary it is considered a true positive, otherwise it is considered a false positive. An actual phone boundary with no matching unit boundary is counted as a false negative. The consistency of the units was evaluated in terms of normalized mutual information (NMI - see [@Ondel2016; @Ondel2017; @Ebbers2017] for details), which measures the statistical dependency between the units and the forced-aligned phones. An NMI of 0 % means that the units are completely independent of the phones, whereas an NMI of 100 % indicates that the actual phones could be retrieved without error given the sequence of discovered units.
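Both metrics can be sketched as follows. The boundary scores use a +/- 10 ms tolerance on boundary times given in seconds; for the NMI, the mutual information is normalized here by the phone entropy, which matches the interpretation given above, but the cited papers should be consulted for the exact normalization used in the tables.

```python
import numpy as np

def boundary_prf(ref, hyp, tol=0.010):
    """Precision/recall/F-score of hypothesised boundaries (+/- 10 ms)."""
    ref, hyp = np.asarray(ref), np.asarray(hyp)
    tp_h = sum(np.any(np.abs(ref - b) <= tol) for b in hyp)   # matched hyp. boundaries
    tp_r = sum(np.any(np.abs(hyp - b) <= tol) for b in ref)   # matched ref. boundaries
    prec, rec = tp_h / len(hyp), tp_r / len(ref)
    return prec, rec, 2 * prec * rec / (prec + rec)

def nmi(units, phones):
    """Mutual information between frame-level labellings, in percent,
    normalized here by the phone entropy (an assumption, not necessarily
    the exact normalization of the cited papers)."""
    u_ids, u = np.unique(units, return_inverse=True)
    p_ids, p = np.unique(phones, return_inverse=True)
    joint = np.zeros((len(u_ids), len(p_ids)))
    np.add.at(joint, (u, p), 1.0)                 # frame-level co-occurrences
    joint /= joint.sum()
    pu, pp = joint.sum(1), joint.sum(0)
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / np.outer(pu, pp)[nz]))
    h_p = -np.sum(pp * np.log(pp))
    return 100.0 * mi / h_p
```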
Extension to word discovery {#subsec:word_discovery}
---------------------------
In order to provide an extrinsic metric to evaluate the quality of the acoustic units discovered by our different methods, we performed an unsupervised word segmentation task on the acoustic units sequences, and evaluated the accuracy of the discovered word boundaries. We also wanted to experiment using lattices as an input for the word segmentation task, instead of using single sequences of units, so as to better mitigate the uncertainty of the AUD task and provide a companion metric that would be more robust to noise. A model capable of performing word segmentation both on lattices and text sequences was introduced by [@HeymannW2013]. Building on the work of [@Mochihashi09bayesian; @Neubig12bayesian] they combine a nested hierarchical Pitman-Yor language model with a Weighted Finite State Transducer approach. Both for lattices and acoustic units sequences, we use the implementation of the authors with a bigram language model and a unigram character model[^4]. Word discovery is evaluated using the *Boundary* metric from the *Zero Resource Challenge 2017* [@ludusan2014] and [@zrc2017]. This metric measures the quality of a word segmentation and the discovered boundaries with respect to a gold corpus (Precision, Recall and F-score are computed).
Results and Discussion {#sec:results}
======================
First, we evaluated the standard HMM model with an uninformative prior (this will be our baseline) for the two different input features: MFCC (and derivatives) and MBN. Results are shown in Table \[tab:vad\_feat\]. Surprisingly, the MBN features perform relatively poorly compared to the standard MFCC, contradicting the results reported in [@Ondel2017]. Two factors may explain this discrepancy. First, since the Mboshi5k data differ from the training data of the MBN neural network, the network may not generalize well. Another possibility is that the initialization scheme of the model is not suitable for this type of features: the Variational Bayesian inference algorithm converges only to a local optimum of the objective function and is therefore dependent on the initialization. We believe the second explanation is the more likely since, as we shall see shortly, the best results in terms of word segmentation and NMI are eventually obtained with the MBN features when the inference is done with the informative prior. Next, we compared the HMM and the SVAE models when trained with an uninformative prior (lines with “Inf. Prior” set to “no” in Table \[tab:inf\_prior\]). The SVAE significantly improves the NMI and the precision, showing that it extracts more consistent units than the HMM model. However, it also degrades the segmentation in terms of recall. We further investigated this behavior by looking at the duration of the units found by both models compared to the true phones (Table \[tab:unit\_duration\]). We observe that the SVAE model favors longer units than the HMM model, hence leading to fewer boundaries and consequently lower recall.
We then evaluated the effect of the informative prior on the acoustic unit discovery (Table \[tab:inf\_prior\]). On all 4 combinations (2 feature sets $\times$ 2 models) we observe an improvement in terms of precision and NMI but a degradation in recall. This result is encouraging since the informative prior was trained on English data (TIMIT), which is very different from Mboshi. Indeed, this suggests that even speech from an unrelated language can be of some help in the design of an ASR system for a very low-resource language. Finally, similarly to the SVAE/HMM case described above, we found that the degradation in recall is due to longer units being discovered by models with an informative prior (numbers omitted due to lack of space).
Word discovery results are given in Table \[tab:word-boundary\] for the *Boundary* metric [@ludusan2014; @zrc2017]. We observe that i) the best word boundary detection (F-score) is obtained with MBN features, an informative prior and the SVAE model; this confirms the results of Table \[tab:inf\_prior\] and shows that better AUD leads to better word segmentation; ii) word segmentation from AUD graph *Lattices* is slightly better than from flat sequences of AUD symbols (*1-best*); iii) our results outperform a purely speech-based baseline built on segmental DTW [@aren] (F-score of 19.3% on the exact same corpus).
Conclusion {#sec:conclusion}
==========
We have conducted an analysis of the state-of-the-art Bayesian approach for acoustic unit discovery on a real case of a low-resource language. This analysis was focused on the quality of the discovered units compared to the gold standard phone alignments. The outcomes of the analysis are: i) the combination of a neural network and a Bayesian model (SVAE) yields a significant improvement in AUD in terms of consistency; ii) Bayesian models can naturally embed information from a resource-rich language and consequently improve the consistency of the discovered units. Finally, we hope this work can serve as a baseline for future research on unsupervised acoustic unit discovery in very low-resource scenarios.
Acknowledgements
================
This work was started at JSALT 2017 in CMU, Pittsburgh, and was supported by JHU and CMU (via grants from Google, Microsoft, Amazon, Facebook, Apple), by the Czech Ministry of Education, Youth and Sports from the National Programme of Sustainability (NPU II) project “IT4Innovations excellence in science - LQ1602” and by the French ANR and the German DFG under grant ANR-14-CE35-0002 (BULB project). This work used the Extreme Science and Engineering Discovery Environment (NSF grant number OCI-1053575 and NSF award number ACI-1445606).
[^1]: Because of lack of space, we have only given a rudimentary description of the models. Note that the HMM model was described at length in [@Ondel2016] whereas the full description of the SVAE model is yet to be published. However, the implementation of both models is available at <https://github.com/amdtkdev/amdtk>
[^2]: <http://lig-aikuma.imag.fr>
[^3]: It will be made available for free from ELRA, but its current version is online on: <https://github.com/besacier/mboshi-french-parallel-corpus>
[^4]: It would be more natural to use a 4-gram or an even higher order spelling model for word discovery, but we wanted to be able to validate our metric by matching it with the model of [@Goldwater09bayesian] ($dpseg$) which implements a bigram language model based on a unigram model of characters (see details in Table \[tab:word-boundary\]).
---
abstract: 'We show that the space of long knots in a Euclidean space of dimension larger than three is a double loop space, proving a conjecture by Sinha. We also construct a double loop space structure on framed long knots, and show that the map forgetting the framing is not a double loop map in odd dimension. However, there is always such a map in the reverse direction, expressing the double loop space of framed long knots as a semidirect product. A similar compatible decomposition holds for the homotopy fiber of the inclusion of long knots into immersions. We also show via string topology that the space of closed knots in a sphere, suitably desuspended, admits an action of the little 2-discs operad in the category of spectra. A fundamental tool is the McClure-Smith cosimplicial machinery, which produces double loop spaces out of topological operads with multiplication.'
address: 'Dipartimento di matematica, Università di Roma “Tor Vergata”, Roma, Italy'
author:
- Paolo Salvatore
title: 'Knots, operads and double loop spaces'
---
Introduction
============
The space $Emb_n$ of long knots in ${\mathbb R}^n$ is the space of embeddings ${\mathbb R}\to {\mathbb R}^n$ that agree with a fixed inclusion of a line near infinity. The space $Emb_n$ is equipped with the Whitney topology, and it can be identified up to homotopy with the subspace of based knots in $S^n$ with fixed derivative at the base point. The proof that $Emb_2$ is contractible goes back to Smale. The components of $Emb_3$ correspond to classical knots. The homotopy type of those components has been completely described by Ryan Budney [@Bu2]. For $n>3$ the space $Emb_n$ is connected by Whitney’s theorem. The rational homology of $Emb_n$ for $n>3$ has been recently computed by Lambrechts, Turchin and Volic [@LTV].
Rescaling and concatenation defines a natural product on the space of long knots that is associative up to higher homotopies. Thus $Emb_n$ is an $A_\infty$-space and in the case $n>3$, being connected, has the homotopy type of a loop space. The product is homotopy commutative, essentially by passing one knot through the other. This suggested that $Emb_n$ could be (up to weak equivalence) a double loop space. Budney and Sinha proved that two spaces closely related to $Emb_n$ are double loop spaces, for $n>3$, by different approaches. A framed long knot in ${\mathbb R}^n$ is a long knot in ${\mathbb R}^n$ together with a choice of framing ${\mathbb R}\to SO(n)$, standard near infinity, such that the first vector of the framing gives the unit tangent vector map ${\mathbb R}\to S^{n-1}$ of the knot. Budney shows in [@Bu] that the space $fEmb_n$ of framed long knots in ${\mathbb R}^n$ is a double loop space for $n>3$. This is achieved by constructing an explicit action of the little 2-cubes operad on a space homotopy equivalent to the group-like space $fEmb_n$. The operad action is also defined for $n=3$, and makes $fEmb_3$ into a free 2-cubes algebra on the non-connected space of prime long knots.
Sinha shows in [@sinha] that the homotopy fiber $Emb'_n$ of the unit tangent vector map $Emb_n \to \Omega S^{n-1}$ is a double loop space, and the map is nullhomotopic. His approach goes via the cosimplicial machinery by McClure and Smith [@MS] that produces double loop spaces out of non-symmetric operads in based spaces. Under this correspondence $Emb'_n$ is produced by an operad equivalent to the little $n$-discs operad, the Kontsevich operad. We show that this machinery, applied to an operad equivalent to the framed little $n$-discs operad, gives a double loop space structure on framed long knots in ${\mathbb R}^n$, that presumably coincides with the one described by Budney. We believe that the fact that the framed little discs is a cyclic operad [@Bu3] together with the McClure-Smith machinery for cyclic objects will lead to a [*framed*]{} little 2-discs action on framed long knots.
Let us consider the principal fibration $$\Omega SO(n-1) \to fEmb_n \to Emb_n$$ forgetting the framing. Such fibration is trivial because its classifying map $Emb_n \to SO(n-1)$ is the composite of the (nullhomotopic) unit tangent vector map and the holonomy $\Omega S^{n-1} \to SO(n-1)$. Given the splittings $$\label{'}
Emb'_n \simeq Emb_n \times \Omega^2 S^{n-1}$$ and $$\label{f}
fEmb_n \simeq Emb_n \times \Omega SO(n-1)$$ Sinha asked in [@sinha] whether one could restrict the double loop structure to the first factor. We answer this affirmatively.
\[main\] The space $Emb_n$ of long knots in ${\mathbb R}^n$ is a double loop space for $n>3$.
The double loop space structure is not produced directly from an operad as hoped in [@sinha], but is deduced by diagram chasing on a diagram of cosimplicial spaces.
The splittings (\[’\]) and (\[f\]) respect the single loop space structures but not the double loop space structures, as the projections on the factor $Emb_n$ are not double loop maps in general.
\[not2\] The map forgetting the framing $fEmb_n \to Emb_n$ and the map from the homotopy fiber $Emb'_n \to Emb_n$ are not double loop maps for $n$ odd.
We prove this by showing that the maps in question do not preserve the Browder operation, a natural bracket on the homology of double loop spaces. This is based on computations by Turchin [@T].
There are instead double loop maps $Emb_n \to Emb'_n$ and $Emb_n \to fEmb_n$ that together with the fiber inclusions $\Omega^2 S^{n-1} \to Emb'_n$ and $\Omega SO(n-1) \to fEmb_n$ produce essentially semidirect product extensions of double loop spaces. We state this precisely in the following theorem.
\[frame\]
There is a commutative diagram of double loop spaces and double loop maps $$\xymatrix{
Emb_n \ar[r] \ar@{=}[d] & Emb'_n \ar@<1ex>[r] \ar[d] & \Omega^2 S^{n-1} \ar[d] \ar@<1ex>[l] \\
Emb_n \ar[r] & fEmb_n \ar@<1ex>[r] & \Omega SO(n-1) \ar@<1ex>[l] }$$
The rows deloop twice to fibrations with sections, and the vertical maps are induced by the holonomy $\Omega S^{n-1} \to SO(n-1)$.
Also this theorem develops the approach by Sinha. The double loop spaces and double loop maps are produced by applying the McClure-Smith machinery to suitable operads and operad maps.
At the end of the paper we apply ideas from string topology to show that the shifted homology of the space $Emb(S^1,S^n)$ of all knots in the $n$-sphere behaves as the homology of a double loop space. More precisely this structure is induced by the action of an operad equivalent to the little 2-cubes at the spectrum level rather than at the space level.
\[sphere\] The spectrum $\Sigma^{1-2n} \Sigma^\infty Emb(S^1,S^n)_+$ is an $E_2$-ring spectrum.
A similar result has been obtained independently by Abbaspour-Chataur-Kallel. The case $n=3$ is joint work with Kate Gruher [@GS].
Here is a plan of the paper: in section \[two\] we recall some background material on operads, cosimplicial spaces and prove theorem \[main\]. In section \[three\] we study the space of framed knots via cosimplicial techniques and prove theorem \[frame\]. In section \[four\] we recollect some material on the Deligne conjecture and we give a proof of theorem \[not2\]. In the last section \[five\] we develop the string topology of knots proving theorem \[sphere\].
I would like to thank Ryan Budney, Pascal Lambrechts, Riccardo Longoni, Dev Sinha and Victor Turchin for helpful conversations regarding this material.
Cosimplicial spaces and knots {#two}
=============================
We recall that a topological operad $O$ is a collection of spaces $O(k), \,k \geq 0$, together with a unit $\iota \in O(1)$ and composition maps $$\circ_t:O(k)\times O(l) \to O(k+l-1)\;$$ for $1\leq t \leq k$ satisfying appropriate axioms [@may2]. The operad is [*symmetric*]{} if the symmetric group $\Sigma_k$ acts on $O(k)$ for each $k$, compatibly with the composition maps. We say that a space $A$ is acted on by an operad $O$, or that it is an $O$-algebra, if we are given maps $O(n) \times A^n \to A$ satisfying appropriate associativity and unit axioms [@may2]. The concepts of (symmetric) operads and their algebras can be defined likewise in any (symmetric) monoidal category.
Let $F({\mathbb R}^n,k)$ be the ordered configuration space of $k$ points in ${\mathbb R}^n$. The direction maps $\theta_{ij}:F({\mathbb R}^n,k) \to S^{n-1}$ are defined for $i \neq j$ by $$\theta_{ij}(x_1,\dots,x_k)=(x_i-x_j)/|x_i-x_j|.$$ Let us write $B_n(k) = (S^{n-1})^{k(k-1)/2}$. We can think of $B_n(k)$ as the space of formal ’directions’ between $k$ distinct points in ${\mathbb R}^n$, where the directions are indexed by distinct pairs of integers between 1 and $k$. By convention we set $B_n(1)$ and $B_n(0)$ equal to a point.
[@sinha] The collection $B_n(k)$ forms a symmetric topological operad.
The action of the symmetric group $\Sigma_k$ on $B_n(k)$ permutes both indices. Intuitively the operad composition replaces a point by an infinitesimal configuration and relabels the points. More precisely we must specify the composition rule $\circ_t:B_n(k) \times B_n(l) \to B_n(k+l-1)$ for $1 \leq t \leq k$. For elements $\alpha=(\alpha_{ij})_{1\leq i<j\leq k} $ and $\beta=( \beta_{ij})_{1\leq i<j\leq l}$ the composition is $$(\alpha \circ_t \beta)_{ij}=
\begin{cases}
\alpha_{ij} \; {\rm for}\; i<j\leq t \\
\beta_{i-t+1,j-t+1} \;{\rm for}\; t\leq i < j \leq t+l-1 \\
\alpha_{i-l+1,j-l+1}\; {\rm for}\; t+l \leq i<j \\
\alpha_{i,t} \; {\rm for}\; i<t \leq j < t+l \\
\alpha_{t,j-l+1} \; {\rm for}\; t \leq i < t+l \leq j
\end{cases}.$$
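For instance (a worked example added here for illustration), take $k=l=2$ and $t=1$, so that $\alpha=(\alpha_{12})$ and $\beta=(\beta_{12})$ are single directions and $\alpha\circ_1\beta\in B_n(3)$. The formula gives $$(\alpha\circ_1\beta)_{12}=\beta_{12}, \qquad (\alpha\circ_1\beta)_{13}=(\alpha\circ_1\beta)_{23}=\alpha_{12},$$ in agreement with the intuition above: the first point of $\alpha$ is replaced by an infinitesimal two-point configuration with internal direction $\beta_{12}$, and both new points see the remaining point in the old direction $\alpha_{12}$.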
Let $\theta^k: F({\mathbb R}^n,k) \to B_n(k)$ be the product of all direction maps $\theta^k(x)=(\theta_{ij}(x))_{1\leq i <j \leq n}.$ For $k\geq 2$ let ${\mathcal K}_n(k) \subset B_n(k)$ be the closure of the image of $\theta^k$. We set also ${\mathcal K}_n(0)=B_n(0)=\{*\}$ and ${\mathcal K}_n(1)=B_n(1)=\{\iota\}$. The restriction $\theta^k:F({\mathbb R}^n,k) \to {\mathcal K}_n(k)$ is a $\Sigma_k$-equivariant homotopy equivalence.
[@sinha] The collection ${\mathcal K}_n(k)$ forms a suboperad of $B_n(k)$ that is weakly equivalent to the little $n$-discs operad.
The operad ${\mathcal K}_n$ is known as the Kontsevich operad.
We say that a non-symmetric topological operad has a multiplication if there is a choice of base points $m_k \in O(k)$ for each $k$ such that the structure maps are based maps. This is the same as a non-symmetric operad in based spaces.
The operads $B_n$ and ${\mathcal K}_n$ have a multiplication, defined by setting all components $\theta_{ij}\, (i<j)$ of the base points $m_k$ equal to a fixed direction. We choose the last vector of the canonical basis of ${\mathbb R}^n$ as fixed direction.
We recall the definition of a cosimplicial space. Let $\Delta$ be the category with standard ordered sets $[k]=\{0<\dots<k\}$ as objects $(k \in {\mathbb N})$ and monotone maps as morphisms. A cosimplicial space is a covariant functor from the category $\Delta$ to the category of topological spaces. For each $k$ the simplicial set $\Delta(\_,[k])$ is also called the simplicial $k$-simplex $\Delta^k_*$ . Its geometric realization is the standard $k$-simplex $\Delta^k$. All simplexes fit together to form a cosimplicial space. In fact if we apply geometric realization to the bisimplicial set (functor from $\Delta$ to simplicial sets) $\Delta(*',*)$ in the variable $*'$ then we obtain a cosimplicial space denoted by $\Delta^*$.
The totalization $Tot(S^*)$ of a cosimplicial space $S^*$ is the space of natural transformations $\Delta^* \to S^*$. There is a standard cosimplicial map $\tilde{\Delta^*} \to \Delta^*$, where $\tilde{\Delta}^*$ is an appropriate cofibrant resolution. The homotopy totalization ${\widetilde{Tot}}(S^*)$ is the space of natural transformations $\tilde{\Delta}^* \to S^*$. This is also the homotopy limit of the functor from $\Delta$ to spaces defining the cosimplicial space. Precomposition induces a canonical map $Tot(S^*) \to {\widetilde{Tot}}(S^*)$ that is a weak equivalence when $S^*$ is fibrant, in the sense that it satisfies the matching condition [@hirschhorn].
An operad $(O,p)$ with multiplication defines a cosimplicial space $O^*$ sending $[k]$ to $O(k)$. The cofaces operator $d^i:O(k) \to O(k+1)$ is defined by $$\begin{cases}
d^i(x)=x \circ_i m_2\;{\rm for}\; 1\leq i \leq k \\
d^0(x)=m_2 \circ_1 x \\
d^{k+1}(x)=m_2 \circ_2 x.
\end{cases}$$ The codegeneracies $s^i:O(k)\to O(k-1)$ are defined by $s^i(x)=x \circ_i m_0$.
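For example (an unpacking of the definition, added for the reader's convenience), for $x\in O(2)$ the cofaces are the four elements of $O(3)$ $$d^0(x)=m_2\circ_1 x,\quad d^1(x)=x\circ_1 m_2,\quad d^2(x)=x\circ_2 m_2,\quad d^3(x)=m_2\circ_2 x,$$ while the codegeneracies $s^1(x)=x\circ_1 m_0$ and $s^2(x)=x\circ_2 m_0$ lie in $O(1)$. This is the cosimplicial structure underlying the Hochschild complex recalled in section \[four\].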
[(McClure-Smith)]{} \[ms\] Let $O$ be an operad with multiplication. Then the totalization $Tot(O^*)$ (respectively the homotopy totalization $\widetilde{Tot}(O^*)\;$) admits an action of an operad ${\mathcal{D}}_2$ (respectively ${\tilde{\mathcal D}}_2$) weakly equivalent to the little 2-cubes operad.
By the recognition principle [@may2] if $Tot(O^*)$ or $\widetilde{Tot}(O^*)$ is connected then it is weakly equivalent to a double loop space.
Given a simplicial set $S_*$, considered as a simplicial space with discrete values, and a space $X$, we obtain a cosimplicial space $map(S_*,X)$, often denoted $X^{S_*}$. If $S_*$ is a simplicial based set and $X$ is a based space then we obtain similarly a cosimplicial space $map_\bullet(S_*,X)$. Let us denote by $|S|$ the geometric realization of $S$. The following is standard.
\[homeo\] The adjoint maps of the evaluation maps $$map(|S|,X) \times \Delta_k \to map(S_k,X)$$ induce a homeomorphism $map(|S|,X) \cong Tot(map(S_*,X))$. In the based version we obtain a homeomorphism from the based mapping space $$map_\bullet(|S|,X) \to Tot(map_\bullet(S_*,X)).$$ The canonical maps from these totalizations to the homotopy totalizations are weak equivalences.
Let $\Delta^k_*$ be the simplicial $k$-simplex, and ${\partial}\Delta^k_*$ its simplicial subset obtained by removing the non-degenerate simplex in dimension $k$ and its degeneracies. The quotient $S^k_*:=\Delta^k_*/{\partial}\Delta^k_*$ is the simplicial $k$-sphere.
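For instance (a small check added for the reader's convenience), the non-basepoint $k$-simplices of $S^1_*$ are the $k$ surjective monotone maps $[k]\to[1]$, so that $map_\bullet(S^1_*,X)$ has $X^k$ in cosimplicial degree $k$ and, by proposition \[homeo\], $Tot(map_\bullet(S^1_*,X))\cong map_\bullet(S^1,X)=\Omega X$. This identification is used repeatedly below.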
[@sinha] \[bn\]
The cosimplicial space $B_n^*$ is isomorphic to ${Map}_\bullet(S^2_*,S^{n-1}).$
Namely $B_n^k$ has a factor $S^{n-1}$ for each pair $1\leq i < j \leq k$ and $map_\bullet(S^2_*,S^{n-1})$ has a sphere factor for each non-basepoint $k$-simplex of $S^2_*$, namely for each non-decreasing sequence of length $k+1$ containing $0$, $1$ and $2$. Then $i$ corresponds to the position of the last $0$ and $j$ to the position of the last $1$. Propositions \[homeo\] and \[bn\] imply the following corollary.
The totalization $Tot(B_n^*)$ is homeomorphic to $\Omega^2(S^{n-1})$.
There is also a cosimplicial space ${\mathcal K}_n^* {\rtimes}S^{n-1}$, not defined by an operad with multiplication. This is constructed so that ${\mathcal K}_n^k {\rtimes}S^{n-1} ={\mathcal K}_n(k) \times (S^{n-1})^k $. Elements of this space can be thought of as configurations of $k$ points in ${\mathbb R}^n$, each labelled by a direction. The composition rule can be defined as follows, via the identification $S^{n-1} = {\mathcal K}_n(2)$. Given $(x;v_1,\dots,v_k) \in {\mathcal K}_n(k) \times (S^{n-1})^k$, we define for $1 \leq i \leq k$ $$d^i(x;v_1,\dots,v_k)=(x\circ_i v_i;v_1,\dots,v_i,v_i,\dots,v_k).$$ Intuitively these cofaces double a point in the associated direction, at infinitesimal distance. The first and last cofaces add a point labelled by the preferred direction ’before’ or ’after’ the configuration and are defined by $$d^0(x;v_1,\dots,v_k)=(m_2 \circ_1 x;v_1,\dots,v_k,m_2)$$ and $$d^{k+1}(x;v_1,\dots,v_k)=(m_2 \circ_2 x;m_2,v_1,\dots,v_k).$$ The codegeneracies forget a point and are defined by $$s^i(x;v_1,\dots,v_k)=(x\circ_i m_0;v_1,\dots,\hat{v_i},\dots,v_k).$$ The very same rules define a cosimplicial space $B_n^* {\rtimes}S^{n-1}$ with $B_n^k {\rtimes}S^{n-1} = (S^{n-1})^{k(k-1)/2} \times (S^{n-1})^k$ so that ${\mathcal K}_n^* {\rtimes}S^{n-1} \subset B_n^* {\rtimes}S^{n-1}$ is a cosimplicial subspace.
[(Sinha)]{} [@sinha] \[embn\] The homotopy totalization of ${\mathcal K}_n^* {\rtimes}S^{n-1}$ is weakly equivalent to $Emb_n$.
The proof of this theorem relies on Goodwillie calculus. From now on we will mean by $Emb_n$ the space of smooth embeddings of the interval $I$ into the cube $I^n$ sending the extreme points of the interval to the centers of opposite faces of the cube, with derivatives orthogonal to the faces.
The weak equivalence $Emb_n \to {\widetilde{Tot}}({\mathcal K}_n^* {\rtimes}S^{n-1})$ is constructed as follows, by evaluating directions between points of the knot and tangents. Regard an element of the $k$-simplex as a sequence of real numbers $0 \leq x_1 \leq \dots \leq x_k \leq 1$. There are maps $\beta_k:Emb_n \to map(\Delta^k,{\mathcal K}_n(k) \times (S^{n-1})^k)$ defined by $$\beta_k(f)(x_1,\dots,x_k)=\{ \theta^k(f(x_1),\dots,f(x_k)),
f'(x_1)/|f'(x_1)|,\dots, \,f'(x_k)/|f'(x_k)| \}$$ when $x_1 <\dots <x_k$. If some $x_i=x_j$ for $i<j$ then we must replace the component $\theta_{ij}=(f(x_j)-f(x_i))/|f(x_j)-f(x_i)|$ in the expression above by $f'(x_i)/|f'(x_i)|$. All maps $\beta_k$ fit together to define a map $\beta:Emb_n \to Tot({\mathcal K}_n^* {\rtimes}S^{n-1})$. The composite with the standard map to the homotopy totalization is the desired weak equivalence.
Let us recall some background on homotopy fibers: the homotopy fiber of a based map $f:X \to Y$ is defined by the pullback square $$\begin{CD} Hofib(f) @>>> X \\
@VVV @VVfV \\
PY @>ev>> Y
\end{CD}$$ with $PY$ the contractible space of paths in $Y$ sending 0 to the base point, and $ev$ the evaluation at the point 1. If $f$ is a fibration with fiber $F$ then there is a canonical homotopy equivalence $F \to Hofib(f)$ sending $x \in F \subset X$ to the pair $(x,c)$ with $c$ the constant loop at the base point of $Y$. The homotopy fiber is homotopy invariant, namely given a commutative diagram $$\begin{CD} X @>f>> Y \\
@VV\simeq V @V\simeq VV \\
X' @>f'>> Y'
\end{CD}$$ with the vertical arrows weak equivalences, then the induced map $Hofib(f) \to Hofib(f')$ is a weak equivalence. This is a special case of the homotopy invariance of homotopy limits ( theorem 18.5.3 (2) in [@hirschhorn]).
[(Sinha)]{} The homotopy fiber $Emb'_n$ of the unit tangent vector map $u: Emb_n \to \Omega S^{n-1}$ is weakly equivalent to the homotopy totalization $\widetilde{Tot}(K_n^*)$ , and thus is a double loop space for $n>3$.
Proof: The projection ${\mathcal K}_n(k) \times (S^{n-1})^k \to (S^{n-1})^k$ defines a map of cosimplicial spaces ${\mathcal K}_n^* {\rtimes}S^{n-1} \to map_\bullet(S^1_*, S^{n-1})$ and there is a commutative square $$\begin{CD} Emb_n @>>\tilde{\beta}> {\widetilde{Tot}}({\mathcal K}_n^* {\rtimes}S^{n-1}) \\
@VV{u}V @VV{\pi}V \\
\Omega S^{n-1} @>>> {\widetilde{Tot}}(map_\bullet(S^1_*, S^{n-1})).
\end{CD}$$ By theorem 18.5 (2) in [@hirschhorn], the homotopy totalization of a sequence of cosimplicial spaces $X^* \to Y^* \to Z^*$ that is levelwise a fibration sequence is a fibration sequence ${\widetilde{Tot}}X^* \to {\widetilde{Tot}}Y^* \to {\widetilde{Tot}}Z^*$.
Then we have a weak equivalence ${\widetilde{Tot}}({\mathcal K}_n^*) \to Hofib(\pi)$ and by homotopy invariance weak equivalences $Emb'_n = Hofib(u) \simeq Hofib(\pi) \simeq {\widetilde{Tot}}({\mathcal K}_n^*)$. We conclude by theorem \[ms\].
Remark: We may substitute $\Omega S^{n-1}$ in the statement above by the space $Imm(I,I^n)$ of immersions $I \to I^n$ with fixed values and tangent vectors at the boundary, and $u$ by the inclusion $Emb(I,I^n) \to Imm(I,I^n)$, because the unit tangent vector map induces the Smale homotopy equivalence $Imm(I,I^n) \simeq \Omega S^{n-1}$.
In the next lemma we identify the totalization of $B_n^* {\rtimes}S^{n-1}$. There are standard simplicial inclusions $d^0_*:\Delta^1_* \to \Delta^2_*$ and $d^2_*:\Delta^1_* \to \Delta^2_*$ induced by the strictly monotone maps $[1]\to[2]$ avoiding respectively 2 and 0.
The totalization of the levelwise fibration of cosimplicial spaces $$B_n^* \to B_n^* {\rtimes}S^{n-1} \to map(\Delta^1_* / \partial \Delta^1_*, S^{n-1})$$ is the fibration $$map_\bullet(\Delta^2/{\partial}\Delta^2, S^{n-1}) \to map_\bullet(\Delta^2/(d^0(\Delta^1) \cup d^2(\Delta^1)),S^{n-1}) \to map_\bullet(\Delta^1/ {\partial}\Delta^1,S^{n-1}).$$
The space $B_n^k {\rtimes}S^{n-1}$ has a factor $S^{n-1}$ for each pair $1\leq i < j \leq k$ and a factor $S^{n-1}$ for each $1 \leq l \leq k$. The space $$map_\bullet(\Delta^2_k/d^0_k(\Delta^1_k) \cup d^2_k(\Delta^1_k),S^{n-1})$$ has a factor $S^{n-1}$ for each non-decreasing sequence of length $k+1$ containing 0,1,2 and a factor $S^{n-1}$ for each non-decreasing sequence of length $k+1$ starting with 0, ending with 2, without 1’s. For these latter sequences $l$ corresponds to the last position containing a 0. For the former sequences we apply the same correspondence as in the proof of proposition \[bn\].
[*Proof of theorem \[main\]*]{}:
If we map the sequence ${\mathcal K}_n^* \to {\mathcal K}_n^* {\rtimes}S^{n-1} \to map_\bullet(S^1_*,S^{n-1})$ to the sequence $B_n^* \to B_n^* {\rtimes}S^{n-1} \to map_\bullet(S^1_*,S^{n-1})$ we obtain a commutative diagram of cosimplicial spaces that at level $k$ is
$$\begin{CD} {\mathcal K}_n(k) @>>> (S^{n-1})^{k(k-1)/2} \\
@VVV @VVV \\
{\mathcal K}_n(k) \times (S^{n-1})^k @>>> (S^{n-1})^{k(k-1)/2} \times (S^{n-1})^k \\
@VVV @VVV \\
(S^{n-1})^k @= (S^{n-1})^k .
\end{CD}$$
The homotopy totalization functor gives a diagram of spaces weakly equivalent to those in the diagram
$$\begin{CD} Emb'_n @>>> \Omega^2 S^{n-1} \\
@VVV @VVV \\
Emb_n @>>> P\Omega S^{n-1} \\
@VVV @VVV \\
\Omega S^{n-1} @= \Omega S^{n-1}.
\end{CD}$$
Let us analyze the diagram of homotopy totalizations. By naturality the upper row is a map of algebras over the McClure-Smith operad ${\tilde{\mathcal D}}_2$, and then its homotopy fiber $F$ is also an algebra over ${\tilde{\mathcal D}}_2$. The homotopy fiber of the second row is weakly equivalent to $Emb_n$ by theorem \[embn\] and because the target is contractible. The homotopy fiber of the third row is contractible. The homotopy fibers of the rows in a diagram whose columns are fibrations form a fibration ( 18.5.1 in [@hirschhorn]), so that $F \simeq Emb_n$. This space is connected by Whitney’s theorem for $n>3$, and then by the recognition principle [@may2] is weakly equivalent to a double loop space. $\Box$
Framed knots and double loop fibrations {#three}
=======================================
We start by some general considerations on framed knots. By definition $fEmb_n$ is the pullback
$$\begin{CD} fEmb_n @>>> Emb_n \\
@VVV @V{u}VV \\
\Omega SO(n) @>>> \Omega S^{n-1}.
\end{CD}$$ Actually $fEmb_n$ is homeomorphic to the homotopy fiber of the composite $$Emb_n \stackrel{u}{\to} \Omega S^{n-1} \stackrel{h}{\to} SO(n-1)$$ of the holonomy $h$ and the unit tangent vector map $u$. The homeomorphism is induced by the projection $fEmb_n \to Emb_n$ and the map $fEmb_n \to PSO(n-1)$ considering the difference between the framing induced by the holonomy along the knot and the assigned framing of the framed knot. By naturality of the homotopy fiber construction the holonomy induces a map $Emb'_n \to fEmb_n$.
We will give next an operadic interpretation of framed knots. We recall [@SW] that a topological group $G$ acts on a topological operad $O$ if each $O(n)$ is a $G$-space and the operadic composition maps are $G$-equivariant. In other words $O$ is an operad in the category of $G$-spaces. In such case one can define the semidirect product [@Markl; @SW] $O {\rtimes}G$ with $n$-ary space $O(n) \times G^n$ and composition $$(p;g_1,\dots,g_n) \circ_i (q;h_1,\dots,h_m) =(p \circ_i g_i(q);g_1,\dots,g_ih_1,\dots,g_ih_m,\dots,g_n).$$
For example, the (trivial) action of a group $G$ on the commutative operad $Com$ defines a semidirect product $\underline{G} := Com {\rtimes}G$ such that $\underline{G}(n)=G^n$. The framed little $n$-discs operad is isomorphic to the semidirect $fD_n=D_n {\rtimes}SO(n)$, where $SO(n)$ rotates the picture of the little discs.
The natural action of $SO(n)$ on $S^{n-1}$ defines an $SO(n)$-action on the operad $B_n$, given that $B_n(k)=(S^{n-1})^{k(k-1)/2}$. This action restricts to an action on the operad ${\mathcal K}_n$. The arguments giving the weak equivalence between ${\mathcal K}_n$ and the little $n$-discs operad $D_n$ extend to show that the semidirect product operad $f{\mathcal K}_n= {\mathcal K}_n {\rtimes}SO(n)$ is weakly equivalent to the framed little $n$-discs operad. Namely, in [@barcelona] we constructed a diagram of weak equivalences of operads $D_n \leftarrow WD_n \to F_n$, where $F_n$ is the Fulton-MacPherson operad. These arrows and the projection $F_n \to K_n$, which is also a weak equivalence [@sinha], are $SO(n)$-equivariant.
The homotopy totalization of the cosimplicial space $f{\mathcal K}_n^*$, for $n>3$, is weakly equivalent to the space $fEmb_n$ of framed long knots in ${\mathbb R}^n$.
The sequence of cosimplicial spaces $$map_\bullet(S^1_*,SO(n-1)) \to f{\mathcal K}_n ^* \to {\mathcal K}_n^* {\rtimes}S^{n-1}$$ is levelwise the fibration $SO(n-1)^k \to SO(n)^k \times {\mathcal K}_n(k) \to (S^{n-1})^k \times {\mathcal K}_n(k)$. There is a commutative diagram $$\xymatrix{
\Omega SO(n-1) \ar[d] \ar[r] & fEmb_n \ar[d]^{\tilde{f\beta}} \ar[r] & Emb_n \ar[d]^{\widetilde{\beta}} \\
\widetilde{Tot}(map_\bullet(S^1_*,SO(n-1))) \ar[r] & \widetilde{Tot}(f{\mathcal K}_n ^*) \ar[r] & \widetilde{Tot}(K_n^* {\rtimes}S^{n-1})
}$$ where the rows are fibrations. The middle arrow $\widetilde{f\beta}$ is the composite of a map $f\beta$ and the canonical map $Tot(f{\mathcal K}_n ^*) \to \widetilde{Tot}(f{\mathcal K}_n ^*)$, where $f\beta$ is adjoint to a collection of maps $fEmb_n \times \Delta_k \to {\mathcal K}_n(k) \times SO(n)^k$ that evaluate directions between points of the framed knot as before and in addition evaluate the framings at those points.
The left and right vertical maps are weak equivalences, and hence the middle vertical map $\widetilde{f\beta}$ is a weak equivalence.
Now $f{\mathcal K}_n$ is an operad with multiplication, so that ${\widetilde{Tot}}(f{\mathcal K}_n)$ has an action of the McClure-Smith operad ${\tilde{\mathcal D}}_2$ by theorem \[ms\]. The space $fEmb_n \simeq Emb_n \times \Omega SO(n-1)$ is grouplike for $n>3$, in the sense that its components form a group, namely ${\mathbb Z}_2$. By the recognition principle [@may2] we readily obtain :
The space of framed long knots in ${\mathbb R}^n$ is weakly equivalent to a double loop space for $n>3$.
This recovers the result by Budney [@Bu].
We characterize next the semidirect product operad $B_n {\rtimes}SO(n)$, that we will also call $fB_n$. We observe that there is an operad inclusion $i_n:\underline{SO(n-1)} \to fB_n$ that we define next. Let us identify $SO(n-1)$ to the subgroup of $SO(n)$ fixing the preferred direction $m_2 \in S^{n-1}=B_n(2)$. We recall that $m_k \in B_n(k)$ is the base point. Then $i_n(k)$ sends $(g_1,\dots,g_k)\in SO(n-1)^k$ to $(m_k,g_1,\dots,g_k) \in B_n(k) \times SO(n)^k$. We visualize the image as a configuration of points on a line parallel to the preferred direction, with the assigned framings. Clearly $i_n$ factors through the operad $f{\mathcal K}_n$. We remark that $i_n$ does not extend to a section $\underline{SO(n)} \to fB_n$ of the projection $fB_n \to \underline{SO(n)}$.
\[equi\]
The map $i_n: \underline{SO(n-1)} \to fB_n$ induces on the (homotopy) totalizations of the associated cosimplicial spaces a homotopy equivalence that is a double loop map, so that
$$\Omega SO(n-1) \simeq Tot(fB_n^*).$$
We have a pullback diagram of cosimplicial spaces $$\begin{CD}
fB_n^* @>>>B_n^* {\rtimes}S^{n-1} \\
@VVV @VVV \\
map_\bullet(S^1_*,SO(n)) @>>> map_\bullet(S^1_*,S^{n-1}).
\end{CD}$$ On totalizations we obtain the pullback diagram $$\begin{CD}
Tot(fB_n^*) @>>> P\Omega S^{n-1} \\
@VVV @VVV \\
\Omega SO(n) @>>> \Omega S^{n-1}.
\end{CD}$$
The inclusion $\underline{SO(n-1)} \to fB_n$ induces on totalizations the standard homotopy equivalence from $\Omega SO(n-1)$ to $Tot(fB_n^*)$, the homotopy fiber of the looped projection $\Omega SO(n) \to \Omega S^{n-1}$. We can replace totalizations by homotopy totalizations in the proposition since all cosimplicial spaces involved are fibrant.
[*Proof of theorem \[frame\]*]{}: We have a diagram of operads
$$\xymatrix{
& {\mathcal K}_n \ar[d] \ar[r] & B_n \ar[d]\\
\underline{SO(n-1)} \ar[r] \ar[dr] & f{\mathcal K}_n \ar[d] \ar[r]& fB_n \ar[dl] \\
& \underline{SO(n)}&
}$$
The operad inclusion $f{\mathcal K}_n \to fB_n$ gives on homotopy totalizations, by naturality of the McClure-Smith construction, a map of ${\tilde{\mathcal D}}_2$-algebras ${\widetilde{Tot}}(f{\mathcal K}_n) \to {\widetilde{Tot}}(fB_n)$, that by naturality of the recognition principle is a double loop map. Its homotopy fiber $F$ is weakly equivalent to $Emb_n$ as double loop space, by comparison with the homotopy fiber of ${\widetilde{Tot}}({\mathcal K}_n) \to {\widetilde{Tot}}(B_n)$ and by the arguments in the proof of theorem \[main\]. The double loop map ${\widetilde{Tot}}(f{\mathcal K}_n) \to {\widetilde{Tot}}(fB_n)$ has a double loop section because the operad inclusion $\underline{SO(n-1)} \to f{\mathcal K}_n \to fB_n$ induces a weak equivalence that is a double loop map on homotopy totalizations (proposition \[equi\]). This gives the fiber sequence of double loop maps with section $$Emb_n \to fEmb_n \stackrel{\rightarrow}{\leftarrow} \Omega SO(n-1).$$
Now there is a commutative diagram
$$\xymatrix{
\Omega SO(n-1) \ar^{j}[r] \ar^{\simeq}[d] & fEmb_n \ar[r] \ar^{\simeq}[d] & \Omega SO(n) \ar^{\simeq}[d]\\
{\widetilde{Tot}}(SO(n-1)^*) \ar[r] & {\widetilde{Tot}}(f{\mathcal K}_n^*)\ar[r] & {\widetilde{Tot}}(SO(n)^*)
}$$
and the inclusion $j:\Omega SO(n-1) \subset fEmb_n$ represents the subspace of all framings of the trivial knot. We conclude the proof by taking homotopy fibers over ${\widetilde{Tot}}(SO(n)^*)$.
Namely the homotopy fiber $K'$ of ${\widetilde{Tot}}{f{\mathcal K}_n^*} \to {\widetilde{Tot}}{SO(n)^*}$ (resp. $B'$ of ${\widetilde{Tot}}{fB_n^*} \to {\widetilde{Tot}}{SO(n)^*}$) is canonically weakly equivalent to ${\widetilde{Tot}}{{\mathcal K}_n^*} \simeq Emb'_n$ (resp. to $Tot(B_n^*) \simeq \Omega^2 S^{n-1}$). Let $\Omega'$ be the homotopy fiber of ${\widetilde{Tot}}SO(n-1)^* \to {\widetilde{Tot}}SO(n)^*$, canonically weakly equivalent to $\Omega^2 S^{n-1}$ as a double loop space. Then the double loop map $K' \to B'$ has a double loop section because the composite $\Omega' \to K' \to B'$ is a weak equivalence and a double loop map. This gives the fiber sequence of double loop maps with section $$Emb_n \to Emb'_n \stackrel{\rightarrow}{\leftarrow} \Omega^2 S^{n-1}.\quad \Box$$
An obstruction to double loop maps {#four}
==================================
In this section we will prove theorem \[not2\] by showing that the projection $fEmb_n \to Emb_n$ from framed knots to knots and the map $p:Emb'_n \to Emb_n$ from section \[two\] do not preserve the Browder operation in rational homology for $n$ odd. We need to review some notions on homology operations of double loop spaces.
An $n$-algebra is an algebra over the homology operad of the little $n$-discs operad.
In particular a 2-algebra is called a Gerstenhaber algebra. A (graded) $n$-algebra $A$ for $n>1$ is described by assigning a product and a bracket $$\_*\_: A_i \otimes A_j \to A_{i+j}$$ $$[\_,\_]: A_{i} {\otimes}A_j \to A_{i+j+n-1}$$ that satisfy essentially the axioms of a Poisson algebra, except for signs. We refer to [@SW] for a full definition. The action of the little $n$-discs operad on an $n$-fold loop space gives a natural $n$-algebra structure on its homology, such that the product is the Pontrjagin product and the bracket is called the Browder operation. In particular the homologies of the double loop spaces $Emb_n, Emb'_n$ and $fEmb_n$ have a natural structure of Gerstenhaber algebras.
Originally Gerstenhaber introduced the algebraic structure bearing his name while studying the Hochschild complex of associative algebras. More generally Gerstenhaber and Voronov introduced this structure on the Hochschild homology of an operad with multiplication in vector spaces. Let $O$ be an operad in vector spaces together with a multiplication, i.e. an operad map $Ass \to O$ from the associative operad. The image of the multiplication in $Ass$ is an element $m \in O(2)$. The operad composition maps define a bracket $$[\_,\_]:O(k) {\otimes}O(l) \to O(k+l-1)$$ by $$[x,y]=\sum_{i=1}^k \pm x \circ_i y - \sum_{i=1}^l
\pm y \circ_i x$$ for appropriate signs [@T]. The multiplication defines a star product $$\_ * \_ : O(k) {\otimes}O(l) \to O(k+l)$$ by $$x * y = m(x,y) := (m\circ_2 y)\circ_1 x .$$
The Hochschild complex of $O$ is the chain complex $(\bigoplus s^{-k}O(k), {\partial})$, where $s^{-k}$ is degree desuspension, and the differential is ${\partial}(x)=[m,x]$. The Hochschild homology $HH(O)$ of $O$ is the homology of such complex.
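To make these definitions concrete (an illustration added here, not taken from the original text): if $A$ is an associative algebra and $O(k)=\mathrm{Hom}(A^{\otimes k},A)$ is its endomorphism operad, then the product of $A$, viewed as an element $m\in O(2)$, is a multiplication on $O$, and for a $1$-cochain $f\in O(1)$ the differential ${\partial}f=[m,f]$ is, up to the sign conventions left unspecified above, $$({\partial}f)(a\otimes b)=a\,f(b)-f(ab)+f(a)\,b,$$ the classical Hochschild coboundary. In this sense the complex above generalizes the Hochschild complex of an associative algebra.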
[@GV] The bracket and the star product induce a Gerstenhaber algebra structure on the Hochschild homology of an operad with multiplication in vector spaces.
Since the operad describing Gerstenhaber algebras is the homology of the little $2$-discs operad $D_2$, Deligne asked his famous question, now known as the Deligne conjecture, whether the homological action could be induced by an action of (singular) chains of the little discs $C_*(D_2)$ on the Hochschild complex. Many authors proved that indeed there was a natural action of a suitable operad quasi-isomorphic to $C_*(D_2)$ on the Hochschild complex.
If we work instead with operads with multiplications in [*chain complexes*]{} then the Deligne conjecture holds for the [*normalized*]{} Hochschild complex. In this context we say that an operad $O$ in chain complexes has a unital multiplication if we have a morphism of operads $Ass_* \to O$ , where $Ass_*$ is the operad describing [*unital*]{} associative algebras. This latter operad is also isomorphic as non-symmetric operad to the homology $H_*(D_1)$ of the little 1-discs. The image of the generator in $Ass_*(0)$ defining the unit is an element $u \in O(0)$.
The normalized Hochschild complex of a chain operad with (unital) multiplication is the subcomplex of the (full) Hochschild complex consisting of those elements $x \in O(k),\, k \in {\mathbb N}$ such that $x \circ_i u =0$ for all $1\leq i \leq k$.
[(McClure-Smith)]{} [@MS] The normalized Hochschild complex of a chain operad $O$ with unital multiplication has an action of an operad quasi-isomorphic to the singular chain operad of the little discs $C_*(D_2)$.
It is crucial that the normalized Hochschild complex of a chain operad with unital multiplication $O$ can be seen also as (co)normalization of a cosimplicial chain complex $O^*$ defined from the operad $O$ in a manner completely analogous as in the topological category (section \[two\]). We recall that the (co)normalization of a cosimplicial chain complex $O^*$ is the chain complex of cosimplicial maps $\Delta^* {\otimes}{\mathbb Z}\to O^*$, with differential induced by the cosimplicial chain complex $\Delta^* {\otimes}{\mathbb Z}$. This construction is the algebraic analog of the totalization of a cosimplicial space. Thus Theorem \[ms\] can be seen as a topological analog of the Deligne conjecture. We make this analogy precise in the following statement.
Let $O$ be a topological operad with multiplication. The Hochschild homology of the operad $C_*(O)$ of singular chains on $O$ is isomorphic to the homology of $\,{\widetilde{Tot}}(O^*)$. The bracket and the star product under the isomorphism $HH(C_*(O)) \cong H_*({\widetilde{Tot}}(O^*))$ correspond respectively to the Browder operation and the Pontrjagin product.
The Gerstenhaber algebra structure interacts well with a spectral sequence computing the homology of ${\widetilde{Tot}}(O^*)$, the Bousfield spectral sequence.
[@Bousfield] Given a cosimplicial space $K^*$, there is a second quadrant spectral sequence computing the homology of ${\widetilde{Tot}}{K^*}$. Its $E^1$-term is $E^1_{-p,q}=H_q(K^p)$, with the differential $\sum_{i=0}^{p+1}(-1)^i d_*^{i}:H_q(K^p) \to H_q(K^{p+1}).$
The filtration giving the spectral sequence is the decreasing filtration by cosimplicial degree in the normalization of $C_*(K^*)$.
Let $O$ be a topological operad with multiplication. Then the Bousfield spectral sequence for $H_*({\widetilde{Tot}}O^*)$ is a spectral sequence of Gerstenhaber algebras with bracket $$[\_,\_]: E^r_{-p,q} {\otimes}E^r_{-p',q'} \to E^r_{-p-p'+1,q+q'}$$ and product $$\_*\_ :E^r_{-p,q} {\otimes}E^r_{-p',q'} \to E^r_{-p-p',q+q'}.$$ The $E_2$-term is the Hochschild homology of the homology operad $H_*(O)$ as a Gerstenhaber algebra.
The star product sums filtration indices on elements in $C_*(O)$. The bracket $[x,y]$ sits in the $(m+n-1)$-th filtration term if $x$ sits in the $m$-th term and $y$ in the $n$-th term.
The Bousfield spectral sequence does not always converge, but it does for $K^*={\mathcal K}_n^*$ or $K^*={\mathcal K}_n^* {\rtimes}S^{n-1}$, as observed by Sinha [@sinha]. Arone, Lambrechts and Volic have recently announced a proof that in these two cases (for $n>3$) the spectral sequence collapses at the $E^2$-term over the rational numbers [@LTV]. A key ingredient in their proof is a result by Kontsevich showing the formality of the little $n$-discs operad [@Kontsevich], in the sense that the chain operad $C_*(D_n,{\mathbb Q})$ is quasi-isomorphic to its homology $H_*(D_n,{\mathbb Q})$. The same idea can be used to show that for $K={\mathcal K}_n$ there are no extension issues, in the sense that the $E^2$-term is isomorphic to $H_*(Emb'_n,{\mathbb Q})\cong H_*({\widetilde{Tot}}(K_n))$ as a Gerstenhaber algebra. We will not need these collapse results here because in low degree the spectral sequence must collapse and there are no extension issues.
The $E^2$ term is the Hochschild homology of the little $n$-discs operad homology $H_*(D_n)$, and has been extensively studied by Turchin [@T].
As we have seen the operad $H_*(D_n)$ is generated by a product $x_1\cdot x_2 \in H_0(D_n(2))$ and a bracket $\{x_1,x_2\} \in H_{n-1}(D_n(2))$. We use different symbols to avoid confusion with the product and the bracket in the Hochschild complex.
[*Proof of theorem \[not2\]*]{}. If $p:Emb'_n \to Emb_n$ is homotopic to a double loop map then it should induce on homology a homomorphism of Gerstenhaber algebras. We will show that this is not the case because the kernel of $p_*$ is not an ideal with respect to the bracket.
We are considering the case $n$ odd and $n>3$ over rational coefficients. The lowest dimensional class in the $E^2$-term for $Emb'_n$ is the element $\alpha=\{x_1,x_2\} \in E_{-2,n-1}$. There is no class that can kill it, so this element survives and represents the generator of $H_{n-3}(Emb'_n)\cong {\mathbb Q}$ coming from the factor $\Omega^2 S^{n-1}$ with respect to the splitting $Emb'_n \simeq Emb_n {\times}\Omega^2 S^{n-1}$. For similar reasons $H_{2n-6}(Emb'_n) \cong {\mathbb Q}^2$ is generated by the surviving elements $\beta=\{x_1,x_3\}\cdot\{x_2,x_4\}$ and $\alpha^2 = \alpha*\alpha = \{x_1,x_2\}\cdot \{x_3,x_4\}$.
The cosimplicial inclusion $p^*:{\mathcal K}_n^* \to {\mathcal K}_n^* {\rtimes}S^{n-1}$ induces a morphism of spectral sequences, and on homotopy totalizations gives a map that we can identify to $p:Emb'_n \to Emb_n$.
The lowest dimensional class in the $E^2$-term for $Emb_n \simeq
{\widetilde{Tot}}({\mathcal K}_n^* {\rtimes}S^{n-1})$ is the image $E^2(p)(\beta)$. This class survives to a class $p_*(\beta)$ generating $H_{2n-6}(Emb_n)\cong {\mathbb Q}$.
The computation by Turchin given in formula 2.9.21 of [@T] indicates that the $E^2$-term for $Emb'_n$ in dimension $3n-8$ has two generators, $[\alpha,\beta]$ and $[\alpha,\alpha^2]=2\alpha[\alpha,\alpha]$, that survive, so that $H_{3n-8}(Emb'_n)\cong {\mathbb Q}^2$. The $E^2$-term for $Emb_n$ in the same dimension has one generator, $E^2(p)[\alpha,\beta]$, that survives so that $H_{3n-8}(Emb_n) \cong {\mathbb Q}$ is generated by $p_*([\alpha,\beta])\neq 0$. But by dimensional reason $p_*(\alpha)=0$, so the bracket is not preserved by $p_*$.
Thus $p$ is not a double loop map. Actually this shows more: there is no double loop space splitting $Emb'_n \simeq Emb_n \times \Omega^2 S^{n-1}$.
Now $p$ factors through $fEmb_n$ via a double loop map $p':Emb'_n \to fEmb_n$, that is induced by the operad inclusion ${\mathcal K}_n \to f{\mathcal K}_n$. This map $p'$ can be identified to the map $Emb_n \times \Omega^2 S^{n-1} \to
Emb_n \times \Omega SO(n-1)$ induced by looping the holonomy $\Omega S^{n-1} \to SO(n-1)$. It is well known that $p'_*(\alpha)$ is non-trivial so by the same reason the projection $fEmb_n \to Emb_n$ is not a double loop map. $\Box$
We remark that the obstruction argument does not work rationally for $n$ even because in that case there is a Gerstenhaber structure on the $E^2$-term for $Emb_n$ such that $E^2(p)$ is a map of Gerstenhaber algebras. Namely additively this $E^2$-term is identified to the Hochschild homology of the Batalin-Vilkovisky operad $BV_n$ [@T]. This operad in vector spaces is the semidirect product of the little $n$-discs homology $H_*(D_n)$ and the exterior algebra on a generator in dimension $(n-1)$ [@SW]. Then $E^2(p)$ is naturally the map of Gerstenhaber algebras induced in Hochschild homology by the operad inclusion $H_*(D_n) \to BV_n$.
However only for $n=2$ the operad $BV_2$ is the homology of a topological operad, the framed little $2$-discs operad $fD_2$. It might be possible that torsion operations like Dyer-Lashof operations still give obstructions to a double loop structure on the projection $fEmb_n \to Emb_n$ for $n$ even.
String topology of knots {#five}
========================
We will show that the suspension spectrum of the space of knots in a sphere, suitably desuspended, is an $E_2$-ring spectrum, proving theorem \[sphere\].
We proved this for $n=3$ in our joint paper with Kate Gruher [@GS]. The original proof was based on the work by Budney, and on a generalized approach to string topology, expanding on fundamental ideas by Chas-Sullivan [@CS] and Cohen-Jones [@CJ]. Now, knowing that $Emb_n$ is a double loop space, we can produce a proof for $n>3$. We recall some terminology and we refer to [@GS] for details. We recall that an $E_2$-operad is a topological operad weakly equivalent to the little $2$-discs operad. Similarly an $E_2$-operad spectrum is an operad in the category of (symmetric) spectra weakly equivalent to the suspension spectrum of the little $2$-discs operad. For us an $E_2$-ring spectrum will be an algebra over an $E_2$-operad spectrum in the weak sense, meaning that the associativity and unit axioms hold up to homotopy. Given a manifold $M$ with tangent bundle $TM$ we denote by $-TM$ the opposite virtual bundle.
(Gruher-S.) \[gs\]
Let $X$ be an algebra over an $E_2$-operad $O$, $G$ a compact Lie group and $H \subset G$ a closed subgroup. Suppose that $H$ acts on $X$ and the structure maps are $H$-equivariant. Let $p:G {\times}_H X \to G/H$ be the projection. Then the Thom spectrum of the virtual bundle $p^*(-T(G/H))$ over $G {\times}_H X$ is an $E_2$-ring spectrum.
Let $Emb(S^1,S^n)$ be the space of smooth embeddings $S^1 \to S^n$.
[*Proof of theorem \[sphere\]*]{}:
It is convenient to use the model for the space of long knots $Emb_n$ given by embeddings of the interval into a cylinder $I \to D_{n-1} \times I$ , with $D_{n-1}$ the unit $(n-1)$-disc, sending 0 and 1 to $(0,0)$ and $(0,1)$ respectively with tangents directed along the positive direction of the long axis, namely the last coordinate axis. There is a natural action by $SO(n-1)$ on $Emb_n$ rotating long knots around the long axis.
We have seen in section \[two\] that $Emb_n$ is weakly equivalent to the homotopy fiber $F$ of ${\widetilde{Tot}}{\mathcal K}_n^* \to {\widetilde{Tot}}B_n^*$, by a sequence of weak equivalences $$F \to F' \to
{\widetilde{Tot}}({\mathcal K}_n^* {\rtimes}S^{n-1}) \leftarrow Emb_n \, ,$$ where $F'$ is the homotopy fiber of ${\widetilde{Tot}}(K_n^* {\rtimes}S^{n-1}) \to {\widetilde{Tot}}(B_n^* {\rtimes}S^{n-1})$. Actually all maps in the sequence are $SO(n-1)$-equivariant maps between $SO(n-1)$-spaces. Namely the action of $SO(n-1) \subset SO(n)$ on $S^{n-1}$ makes $B_n^*$ and $B_n^* {\rtimes}S^{n-1}$ into cosimplicial $SO(n-1)$-spaces, such that respectively ${\mathcal K}_n^*$ and ${\mathcal K}_n^* {\rtimes}S^{n-1}$ are $SO(n-1)$-invariant cosimplicial subspaces. Thus the induced maps on homotopy totalizations are $SO(n-1)$-equivariant. Moreover it is easy to see that the evaluation $Emb_n \to {\widetilde{Tot}}(K_n^* {\rtimes}S^{n-1})$ is $SO(n-1)$-equivariant. Thus $SO(n+1) {\times}_{SO(n-1)} Emb_n $ is weakly equivalent to $SO(n+1) {\times}_{SO(n-1)} F$. As observed by Budney and Cohen [@BC] there is a homotopy equivalence $Emb(S^1,S^n) \simeq SO(n+1) \times_{SO(n-1)} Emb_n$. We obtain then a weak equivalence $Emb(S^1,S^n) \simeq SO(n+1) \times_{SO(n-1)} F $.
The $SO(n-1)$-action makes ${\mathcal K}_n$ and $B_n$ into operads in the category of based $SO(n-1)$-spaces. Thus the homotopy totalizations of ${\mathcal K}_n^*$ and $B_n^*$ are algebras over the operad ${\tilde{\mathcal D}}_2$ in the category of $SO(n-1)$-spaces, where a trivial $SO(n-1)$-action is assumed on ${\tilde{\mathcal D}}_2$. The inclusion ${\widetilde{Tot}}{\mathcal K}_n^* \to {\widetilde{Tot}}B_n^*$ respects this structure, so that the homotopy fiber $F$ is also an algebra over ${\tilde{\mathcal D}}_2$ in $SO(n-1)$ spaces. By lemma \[gs\], with $G=SO(n+1),\, H=SO(n-1)$ and $O={\tilde{\mathcal D}}_2$, $(SO(n+1) \times_{SO(n-1)} F)^{-T(SO(n+1)/SO(n-1))}$ is an $E_2$-ring spectrum. But $SO(n+1)/SO(n-1)$ is (stably) parallelizable and has dimension $2n-1$, so that $$(SO(n+1) \times_{SO(n-1)} F)^{-T(SO(n+1)/SO(n-1))} \simeq \Sigma^{1-2n}\Sigma^{\infty} Emb(S^1,S^n)_+$$ is an $E_2$-ring spectrum. $\Box$
The following corollary has been proved independently by Abbaspour-Chataur-Kallel, who describe also a BV-algebra structure.
The homology $H_{*+2n-1}(Emb(S^1,S^n))$ has a natural structure of Gerstenhaber algebra.
[99]{}
P. Bousfield, On the homology spectral sequence of a cosimplicial space, Amer. J. Math. 109 (1987), 361-394.
R. Budney, Little cubes and long knots, math.GT/0309427
R. Budney, Topology of spaces of knots in dimension 3, math.GT/0506524
R. Budney, The framed little discs operad is cyclic, math.AT/0607490
R. Budney and F. Cohen, On the homology of the space of knots, math.GT/0504206
M. Chas, D. Sullivan, *String topology*, preprint math.GT/9911159
R. Cohen, J. Jones, *A homotopy theoretic realization of string topology*, Math. Ann. **324** (2002), n.4, 773–798.
K. Gruher, P. Salvatore, Generalized string topology operations, ArXiv math.AT/0602210.
M. Gerstenhaber and A. Voronov, Homotopy $G$-algebras and moduli space operad, Internat. Math. Res. Notices 1995, no. 3, 141–153.
P. Hirschhorn, Model categories and their localizations. Mathematical Surveys and Monographs, 99. AMS, Providence, RI, 2003.
M. Kontsevich, Operads and motives in deformation quantization. Lett. Math. Phys. 48 (1999), no. 1, 35–72.
P. Lambrechts, V. Turchin and I. Volic, The rational homology of the space of long knots in codimension $> 2$, preprint.
M. Markl, A compactification of the real configuration space as an operadic completion, J. Algebra 215 (1999), no. 1, 185–204.
P. May, The geometry of iterated loop spaces. Lectures Notes in Mathematics, Vol. 271.
J. McClure and J. Smith, Operads and cosimplicial objects: an introduction, math.QA/0402117 - Axiomatic, enriched and motivic homotopy theory, 133–171, NATO Sci. Ser. II Math. Phys. Chem., 131, 2004.
P. Salvatore, Configuration spaces with summable labels. Cohomological methods in homotopy theory 375–395, Progr. Math., 196, Birkhäuser, Basel, 2001.
P. Salvatore and N. Wahl, Framed discs operads and Batalin-Vilkovisky algebras, Q.J.Math. 54 (2003), 213-231
D. Sinha, Operads and knot spaces, J. Amer. Math. Soc. 19 (2006) 461-486.
V. Turchin, Sur l’homologie des espaces de noeuds non-compacts, math.QA/0010017 - On the homology of the spaces of long knots, Advances in topological quantum field theory, 23–52, NATO Sci. Ser. II Math. Phys. Chem., 179, 2004.
---
abstract: 'Effect algebras, introduced by Foulis and Bennett in 1994, are partial algebras which generalize some well known classes of algebraic structures (for example orthomodular lattices, MV algebras, orthoalgebras etc.). In the present paper, we introduce a new class of effect algebras, called [*homogeneous effect algebras*]{}. This class includes orthoalgebras, lattice ordered effect algebras and effect algebras satisfying Riesz decomposition property. We prove that every homogeneous effect algebra is a union of its blocks, which we define as maximal sub-effect algebras satisfying Riesz decomposition property. This generalizes a recent result by Riečanová, in which lattice ordered effect algebras were considered. Moreover, the notion of a block of a homogeneous effect algebra is a generalization of the notion of a block of an orthoalgebra. We prove that the set of all sharp elements in a homogeneous effect algebra $E$ forms an orthoalgebra $E_S$. Every block of $E_S$ is the center of a block of $E$. The set of all sharp elements in the compatibility center of $E$ coincides with the center of $E$. Finally, we present some examples of homogeneous effect algebras and we prove that for a Hilbert space $\mathbb H$ with ${\mathop{\mathrm{dim}}}(\mathbb H)>1$, the standard effect algebra $\mathcal E(\mathbb H)$ of all effects in $\mathbb H$ is not homogeneous.'
address: 'Department of Mathematics, Faculty of Electrical Engineering and Information Technology, Ilkovičova 3, 812 19 Bratislava, Slovakia '
author:
- Gejza Jenča
title: Blocks of homogeneous effect algebras
---
[^1]
Introduction
============
Effect algebras (or D-posets) have recently been introduced by Foulis and Bennett in [@FouBen:EAaUQL] for the study of the foundations of quantum mechanics. (See also [@KopCho:DP], [@GiuGre:TaFLfUP].) The prototype effect algebra is $(\mathcal E(\mathbb H),\oplus,0,I)$, where $\mathbb H$ is a Hilbert space and $\mathcal E(\mathbb H)$ consists of all self-adjoint operators $A$ of $\mathbb H$ such that $0\leq A\leq I$. For $A,B\in\mathcal E(\mathbb H)$, $A\oplus B$ is defined iff $A+B\leq I$ and then $A\oplus B=A+B$. $\mathcal E(\mathbb H)$ plays an important role in the foundations of quantum mechanics [@Lud:FoQM], [@BusGraLah:OQP].
The class of effect algebras includes orthoalgebras [@FouGreRut:FaSiO] and a subclass (called MV-effect algebras or Boolean D-posets or Boolean effect algebras), which is essentially equivalent to MV-algebras, introduced by Chang in [@Cha:AAoMVL] (cf. e.g. [@ChoKop:BDP], [@BenFou:PSEA] for results on MV-algebras in the context of effect algebras). The class of orthoalgebras includes other classes of well-known sharp structures, like orthomodular posets [@PtaPul:OSaQL] and orthomodular lattices [@Kal:OL],[@Ber:OLaAA].
One of the most important results in the theory of effect algebras was proved by Riečanová in her paper [@Rie:AGoBfLEA]. She proved that every lattice ordered effect algebra is a union of maximal mutually compatible sub-effect algebras, called blocks. This result generalizes the well-known fact that an orthomodular lattice is a union of its maximal Boolean subalgebras. Moreover, as proved in [@JenRie:OSEiLOEA], in every lattice ordered effect algebra $E$ the set of all sharp elements forms a sub-effect algebra $E_S$, which is a sub-lattice of $E$; $E_S$ is then an orthomodular lattice, and every block of $E_S$ is the center of some block of $E$. On the other hand, every orthoalgebra is a union of maximal Boolean sub-orthoalgebras. Thus, although the classes of lattice ordered effect algebras and orthoalgebras are independent, both lattice ordered effect algebras and orthoalgebras are covered by their blocks. This observation leads us to a natural question:
Is there a class of effect algebras, say $\mathbb X$, with the following properties?
- $\mathbb X$ includes orthoalgebras and lattice ordered effect algebras.
- Every $E\in\mathbb X$ is a union of (some sort of) blocks.
In the present paper, we answer this question in the affirmative. We introduce a new class of effect algebras, called homogeneous effect algebras. This class includes lattice ordered effect algebras, orthoalgebras and effect algebras satisfying Riesz decomposition property (cf. e.g. [@Rav:OaSToEA]). The blocks in homogeneous effect algebras are maximal sub-effect algebras satisfying Riesz decomposition property. We prove that the set of all sharp elements $E_S$ in a homogeneous effect algebra $E$ forms a sub-effect algebra (of course, $E_S$ is an orthoalgebra) and every block of $E_S$ is the center of a block of $E$. In the last section we present some examples of homogeneous effect algebras and we prove that $\mathcal E(\mathbb H)$ is not homogeneous unless ${\mathop{\mathrm{dim}}}(\mathbb H)\leq 1$.
Definitions and basic relationships
===================================
An [*effect algebra*]{} is a partial algebra $(E;\oplus,0,1)$ with a binary partial operation $\oplus$ and two nullary operations $0,1$ satisfying the following conditions.
1. If $a\oplus b$ is defined, then $b\oplus a$ is defined and $a\oplus b=b\oplus a$.
2. If $a\oplus b$ and $(a\oplus b)\oplus c$ are defined, then $b\oplus c$ and $a\oplus(b\oplus c)$ are defined and $(a\oplus b)\oplus c=a\oplus(b\oplus c)$.
3. For every $a\in E$ there is a unique $a'\in E$ such that $a\oplus a'=1$.
4. If $a\oplus 1$ exists, then $a=0$.
Effect algebras were introduced by Foulis and Bennett in their paper [@FouBen:EAaUQL]. Independently, Kôpka and Chovanec introduced an essentially equivalent structure called [*D-poset*]{} (see [@KopCho:DP]). Another equivalent structure, called [*weak orthoalgebras*]{}, was introduced by Giuntini and Greuling in [@GiuGre:TaFLfUP].
For brevity, we denote the effect algebra $(E,\oplus,0,1)$ by $E$. In an effect algebra $E$, we write $a\leq b$ iff there is $c\in E$ such that $a\oplus c=b$. It is easy to check that every effect algebra is cancellative, thus $\leq$ is a partial order on $E$. In this partial order, $0$ is the least and $1$ is the greatest element of $E$. Moreover, it is possible to introduce a new partial operation $\ominus$; $b\ominus a$ is defined iff $a\leq b$ and then $a\oplus(b\ominus a)=b$. It can be proved that $a\oplus b$ is defined iff $a\leq b'$ iff $b\leq a'$. Therefore, it is usual to denote the domain of $\oplus$ by $\perp$. If $a\perp b$, we say that $a$ and $b$ are [*orthogonal*]{}. Let $E_0\subseteq E$ be such that $1\in E_0$ and, for all $a,b\in E_0$ with $a\geq b$, $a\ominus b\in E_0$. Since $a'=1\ominus a$ and $a\oplus b=(a'\ominus b)'$, $E_0$ is closed with respect to $\oplus$ and $~'$. We then say that $(E_0,\oplus,0,1)$ is a [*sub-effect algebra of $E$*]{}. Another possibility to construct a substructure of an effect algebra $E$ is to restrict $\oplus$ to an interval $[0,a]$, where $a\in E$, letting $a$ act as the unit element. We denote such effect algebra by $[0,a]_E$.
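To fix ideas, a simple example besides $\mathcal E(\mathbb H)$ is the real unit interval $[0,1]$ equipped with the partial operation $$a\oplus b=a+b \quad\text{iff}\quad a+b\leq 1\text{;}$$ then $a'=1-a$, the induced partial order is the usual one and $b\ominus a=b-a$ whenever $a\leq b$. For every $t\in[0,1]$, the interval effect algebra $[0,t]_{[0,1]}$ is of the same form, with $t$ playing the role of the unit element.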
[**Remark.** ]{}For our purposes, it is natural to consider orthomodular lattices, orthomodular posets, MV-algebras, and Boolean algebras as special types of effect algebras. In the present paper, we will write shortly “orthomodular lattice” instead of “effect algebra associated with an orthomodular lattice” and similarly for orthomodular posets, MV-algebras, and Boolean algebras.
An effect algebra satisfying $a\perp a\implies a=0$ is called an [*orthoalgebra*]{} (cf. [@FouGreRut:FaSiO]). An effect algebra $E$ is an [*orthomodular poset*]{} iff, for all $a,b,c\in E$, $a\perp b\perp c\perp a$ implies that $a\oplus b\oplus c$ exists (cf. [@FouBen:EAaUQL]). An orthoalgebra is an [*orthomodular lattice*]{} iff it is lattice ordered.
Let $E$ be an effect algebra. Let $C=(c_1,\ldots,c_n)$ be a finite family of elements of $E$. We say that $C$ is [*orthogonal*]{} iff the sum $c_1\oplus\ldots\oplus c_n$ exists. We then write $\bigoplus C=c_1\oplus\ldots\oplus c_n$. For $n=0$, we put $\bigoplus C=0$. We say that $Ran(C)=\{c_1,\ldots,c_n\}$ is [*the range of $C$*]{}. Let $C=(c_1,\ldots,c_n),D=(d_1,\ldots,d_k)$ be orthogonal families of elements. We say that $D$ is a [*refinement of $C$*]{} iff there is a partition $P=\{P_1,\ldots,P_n\}$ of $\{1,\ldots,k\}$ such that, for all $1\leq i\leq n$, $c_i=\bigoplus_{j\in P_i}d_j$. Note that if $D$ is a refinement of $C$, then $\bigoplus C=\bigoplus D$.
A finite subset $M_F$ of an effect algebra $E$ is called [*compatible with cover in $X\subseteq E$*]{} iff there is a finite orthogonal family $C=(c_1,\ldots,c_n)$ with $Ran(C)\subseteq X$ such that for every $a\in M_F$ there is a set $A\subseteq\{1,\ldots,n\}$ with $a=\bigoplus_{i\in A}c_i$. $C$ is then called an [*orthogonal cover*]{} of $M_F$. A subset $M$ of $E$ is called [*compatible with covers in $X\subseteq E$*]{} iff every finite subset of $M$ is compatible with covers in $X$. A subset $M$ of $E$ is called [*internally compatible*]{} iff $M$ is compatible with covers in $M$. A subset $M$ of $E$ is called [*compatible*]{} iff $M$ is compatible with covers in $E$. An effect algebra $E$ is said to be [*compatible*]{} if $E$ is a compatible subset of $E$. If $\{a,b\}$ is a compatible set, we write $a{\leftrightarrow}b$. It is easy to check that $a{\leftrightarrow}b$ iff there are $a_1,b_1,c\in E$ such that $a_1\oplus c=a$, $b_1\oplus c=b$, and $a_1\oplus b_1\oplus c$ exists. A subset $M$ of $E$ is called [*mutually compatible*]{} iff, for all $a,b\in M$, $a{\leftrightarrow}b$. Obviously, every compatible subset of an effect algebra is mutually compatible. In the class of lattice ordered effect algebras, the converse also holds. It is well known that in an orthomodular poset, a mutually compatible set need not to be compatible (cf. e.g. [@PtaPul:OSaQL]).
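As an illustration, in the effect algebra $[0,1]$ described above any two elements $a,b$ are compatible: putting $c=\min(a,b)$, $a_1=a-c$ and $b_1=b-c$, the family $(a_1,b_1,c)$ is orthogonal, since $$a_1\oplus b_1\oplus c=\max(a,b)\leq 1\text{,}$$ and it is an orthogonal cover of $\{a,b\}$ because $a=a_1\oplus c$ and $b=b_1\oplus c$.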
A lattice ordered effect algebra $E$ is called an [*MV-algebra*]{} iff $E$ is compatible (cf. [@ChoKop:BDP]). An MV-algebra which is an orthoalgebra is a [*Boolean algebra*]{}. Recently, Z. Riečanová proved in her paper [@Rie:AGoBfLEA] that every lattice ordered effect algebra is a union of MV-algebras, which are maximal mutually compatible subsets. These are called [*blocks*]{}. She proved that every block of a lattice ordered effect algebra $E$ is a sub-effect algebra and a sublattice of $E$. Note that Riečanová’s results imply that every mutually compatible subset of a lattice ordered effect algebra is compatible. Indeed, let $M$ be a mutually compatible set. Then $M$ can be embedded into a block $B$, which is an MV-algebra and hence compatible. Since $B$ is compatible and $M\subseteq B$, $M$ is compatible.
On the other hand, it is easy to prove that every element of an orthoalgebra can be embedded into a maximal sub-orthoalgebra, which is a Boolean algebra.
We say that an effect algebra $E$ satisfies [*Riesz decomposition property*]{} iff, for all $u,v_1,\ldots,v_n\in E$ such that $v_1\oplus\ldots\oplus v_n$ exists and $u\leq v_1\oplus\ldots\oplus v_n$, there are $u_1,\ldots,u_n\in E$ such that, for all $1\leq i\leq n$, $u_i\leq v_i$ and $u=u_1\oplus\ldots\oplus u_n$. It is easy to check that an effect algebra $E$ satisfies Riesz decomposition property iff $E$ satisfies Riesz decomposition property with fixed $n=2$. A lattice ordered effect algebra $E$ satisfies Riesz decomposition property iff $E$ is an MV-algebra. An orthoalgebra $E$ satisfies Riesz decomposition property iff $E$ is a Boolean algebra.
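For instance, in $[0,1]$ the Riesz decomposition property can be checked directly: if $u\leq v_1\oplus v_2$, then $$u_1=\min(u,v_1),\qquad u_2=u-u_1$$ satisfy $u_1\leq v_1$, $u_2\leq v_2$ and $u=u_1\oplus u_2$; this is consistent with the fact that $[0,1]$ is an MV-algebra.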
Let $E_1,E_2$ be effect algebras. A map $\phi:E_1\mapsto E_2$ is called a [*morphism*]{} iff $\phi(1)=1$ and $a\perp b$ implies that $\phi(a)\perp\phi(b)$ and then $\phi(a\oplus b)=\phi(a)\oplus\phi(b)$. A morphism $\phi$ is an [*isomorphism*]{} iff $\phi$ is bijective and $\phi^{-1}$ is a morphism.
\[homogeneous\] An effect algebra $E$ is called [*homogeneous*]{} iff, for all $u,v_1,v_2\in E$ such that $v_1\perp v_2$, $u\leq v_1\oplus v_2$, $u\leq (v_1\oplus v_2)'$, there are $u_1,u_2$ such that $u_1\leq v_1$, $u_2\leq v_2$ and $u=u_1\oplus u_2$.
\[agoodclass\]
1. Every orthoalgebra is homogeneous.
2. Every effect algebra satisfying Riesz decomposition property is homogeneous.
3. Every lattice ordered effect algebra is homogeneous.
For the proof of (a), observe that $u\leq v_1\oplus v_2$ and $u\leq (v_1\oplus
v_2)'$ imply that $u\perp u$ and thus $u=0$. (b) is obvious. For the proof of (c), let $E$ be a lattice ordered effect algebra. Note that $v_1\perp v_2$, $u\leq (v_1\oplus v_2)'$ imply that the set $\{u,v_1,v_2\}$ is mutually orthogonal and thus mutually compatible. Therefore, by [@Rie:AGoBfLEA], $\{u,v_1,v_2\}$ can be embedded into a block $B$. Since $B$ is an MV-algebra, $B$ satisfies Riesz decomposition property, hence $E$ is homogeneous.
\[homogeneousn\] Let $E$ be a homogeneous effect algebra. Let $u,v_1,\ldots,v_n\in E$ be such that $v_1\oplus\ldots\oplus v_n$ exists, $u\leq v_1\oplus\ldots\oplus v_n$ and $u\leq (v_1\oplus\ldots\oplus v_n)'$. Then there are $u_1,\ldots,u_n$ such that, for all $1\leq i\leq n$, $u_i\leq v_i$ and $u=u_1\oplus\ldots\oplus u_n$.
(By induction.) For $n=1$, it suffices to put $u_1=u$. Assume that the proposition holds for $n=k$. Let $u,v_1,\ldots,v_{k+1}$ be such that $v_1\oplus\ldots\oplus v_{k+1}$ exists, $u\leq v_1\oplus v_2\oplus\ldots\oplus v_{k+1}$ and $u\leq (v_1\oplus v_2\oplus\ldots\oplus v_{k+1})'$. Since $E$ is homogeneous, there are $u_1\leq v_1$ and $z\leq v_2\oplus\ldots\oplus v_{k+1}$ such that $u=u_1\oplus z$. Since $$z\leq u\leq (v_1\oplus\ldots\oplus v_{k+1})'
\leq (v_2\oplus\ldots\oplus v_{k+1})'
\text{,}$$ we see that $z\leq(v_2\oplus\ldots\oplus v_{k+1})'$. Thus, we may apply the induction hypothesis. The rest is trivial.
Blocks of homogeneous effect algebras
=====================================
Let $E$ be an effect algebra. We say that a sub-effect algebra $B$ of $E$ is a [*block of $E$*]{} iff $B$ is a maximal sub-effect algebra satisfying the Riesz decomposition property. This definition of a block is consistent with the notion of a block in the theory of orthoalgebras (a maximal Boolean sub-orthoalgebra) and with that in the theory of lattice ordered effect algebras (a maximal mutually compatible subset).
In this section, we prove that blocks of homogeneous effect algebras coincide with the maximal internally compatible subsets, which contain $1$. As a consequence, every homogeneous effect algebra is a union of its blocks.
The main tool we use is the closure operation $M\mapsto\overline{M}$ which is defined on the system of all subsets of an effect algebra $E$ in the following way. Let $M$ be a subset of an effect algebra $E$. First we define certain subsets $M_n$ ($n\in {{\mathbb N}}$) of $E$ as follows : $M_0=M$ and for $n\in {{\mathbb N}}$ $$\begin{aligned}
\label{closure}
M_{n+1}&=&\{x:x\leq y,y'\text{ for some }y\in M_n\}\cup\\
\notag & &\{y\ominus x:x\leq y,y'\text{ for some }y\in M_n\}\text{.}\end{aligned}$$ Then we put $\overline M=\bigcup_{n\in {{\mathbb N}}}M_n$. Note that, for all $n\in {{\mathbb N}}$, $M_n\subseteq M_{n+1}$ and that $\overline{\overline M}=\overline M$. In an orthoalgebra, $\overline M=M\cup\{0\}$ for every nonempty set $M$.
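As an illustration of the closure operation, take $M=\{1/3\}$ in the effect algebra $[0,1]$. Since $1/3\leq(1/3)'=2/3$, the condition $x\leq y'$ is automatic here, and $$M_1=\{x:x\leq 1/3\}\cup\{1/3-x:x\leq 1/3\}=[0,1/3]\text{;}$$ no new elements appear at later stages, so $\overline M=[0,1/3]$.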
\[maxcompat\] Let $E$ be an effect algebra. Let $M$ be a compatible subset of $E$. Then $M$ can be embedded into a maximal compatible subset of $E$.
The proof is an easy application of Zorn’s lemma and is left to the reader.
\[crucial\] Let $E$ be a homogeneous effect algebra. Let $M\subseteq E$ be a finite compatible set, $a,b\in M$, $a\geq b$. Let $C=(c_1,\ldots,c_k)$ be an orthogonal cover of $M$. Let $A,B\subseteq \{1,\ldots,k\}$ be such that $a=\bigoplus_{i\in A}c_i$ and $b=\bigoplus_{i\in B} c_i$. Then, there is a refinement of $C$, say $W=(w_1,\ldots,w_n)$, and sets $B_W\subseteq A_W\subseteq\{1,\ldots,n\}$ such that $(w_i)_{i\in A_W}$ is a refinement of $(c_i)_{i\in A}$ and $(w_i)_{i\in B_W}$ is a refinement of $(c_i)_{i\in B}$. Moreover, we have $Ran(W)\subseteq\overline{Ran(C)}$.
If $|B\setminus A|=0$ then $B\subseteq A$ and there is nothing to prove.
Let $l\in{{\mathbb N}}$. Assume that Proposition \[crucial\] holds for all $C,A,B$ with $|B\setminus A|=l$. Let $C_0,A_0,B_0$ be as in the assumption of Proposition \[crucial\], with $|B_0\setminus A_0|=l+1$.
To avoid double indices, we may safely assume that $A_0$ and $B_0$ are such that, for some $0\leq r,s,t\leq k$, $B_0\setminus A_0=\{1,\ldots,r\}$, $B_0\cap A_0=\{r+1,\ldots,s\}$, $A_0\setminus B_0=\{s+1,\ldots,t\}$.
Write $b_1=c_1\oplus\ldots\oplus c_{l+1}$, $d=c_{l+2}\oplus\ldots\oplus c_s$, $a_1=c_{s+1}\oplus\ldots\oplus c_t$. Since $b_1\oplus d=b\leq a=a_1\oplus d$, we see that $c_{l+1}\leq b_1\leq a_1$. Since $C_0$ is an orthogonal family, $c_{l+1}\leq {a_1}'$. By Proposition \[homogeneousn\], this implies that there are $v_{s+1},\ldots,v_t$ such that, for all $s+1\leq i\leq t$, $v_i\leq c_i$ and $c_{l+1}=v_{s+1}\oplus\ldots\oplus v_t$. Let us construct a refinement of $C_0$, say $C_1=(e_i)$, as follows. $$\begin{aligned}
C_1&=&(c_1,\ldots,c_l,
v_{s+1},\ldots,v_t,
c_{l+2},\ldots,c_s,\\
& &
c_{s+1}\ominus v_{s+1},\ldots,c_t\ominus v_t,
c_{t+1},\ldots,c_k,c_{l+1})\end{aligned}$$ Obviously, $C_1$ is a refinement of $C_0$ and $Ran(C_1)\subseteq\overline{Ran(C_0)}$. Moreover, we have
$
b=\bigoplus(c_1,\ldots,c_l,
v_{s+1},\ldots,v_t,
c_{l+2},\ldots,c_s)
$
and
$
a=\bigoplus(v_{s+1},\ldots,v_t,
c_{l+2},\ldots,c_s,
c_{s+1}\ominus v_{s+1},\ldots,c_t\ominus v_t)\text{.}
$
By latter equations, we can find sets $A_1,B_1$ of indices such that $a=\bigoplus_{i\in A_1}e_i$, $b=\bigoplus_{i\in B_1}e_i$ and $B_1\setminus A_1=\{1,\ldots,l\}$. Moreover, $(e_i)_{i\in A_1}$ is a refinement of $(c_i)_{i\in A_0}$ and $(e_i)_{i\in B_1}$ is a refinement of $(c_i)_{i\in B_0}$. As $|B_1\setminus A_1|=l$, we may apply the induction hypothesis on $C_1,A_1,B_1$ to find a refinement $W=(w_1,\ldots,w_n)$ of $C_1$ with $Ran(W)\subseteq \overline{Ran(C_1)}$ and sets $B_W\subseteq A_W\subseteq\{1,\ldots,n\}$ such that $(w_i)_{i\in A_W}$ is a refinement of $(e_i)_{i\in A_1}$ and $(w_i)_{i\in B_W}$ is a refinement of $(e_i)_{i\in B_1}$. Obviously, $W$ is a refinement of $C_0$ and we see that $$Ran(W)\subseteq\overline{Ran(C_1)}\subseteq\overline{\overline{Ran(C_0)}}=
\overline{Ran(C_0)}\text{.}$$ Similarly, $(w_i)_{i\in A_W}$ is a refinement of $(c_i)_{i\in A_0}$ and $(w_i)_{i\in B_W}$ is a refinement of $(c_i)_{i\in B_0}$. This concludes the proof.
\[minuscompat\] Let $M$ be a finite compatible subset of a homogeneous effect algebra $E$. Let $a,b\in M$ be such that $a\geq b$. Then $M\cup\{a\ominus b\}$ is a compatible set.
Let $W,A_W,B_W$ be as in Proposition \[crucial\]. Then $a\ominus b=\bigoplus_{i\in A_W\setminus B_W}w_i$, so $W$ is an orthogonal cover of $M\cup\{a\ominus b\}$.
Let $M$ be a finite compatible subset of a homogeneous effect algebra $E$. Let $a,b\in M$ be such that $a\perp b$. Then $M\cup\{a\oplus b\}$ is a compatible set.
It is easy to check that, for every compatible set $M_0$, $M_0\cup {M_0}'=M_0\cup\{a':a\in M_0\}$ is a compatible set. The rest follows from Corollary \[minuscompat\] and from the equation $a\oplus b=(a'\ominus b)'$.
Let $E$ be an effect algebra. The following are equivalent. \[equiv\]
1. $E$ satisfies Riesz decomposition property.
2. $E$ is homogeneous and compatible.
\(a) implies (b): It is evident that $E$ is homogeneous. It remains to prove that every $n$-element subset of $E$ is compatible. For $n=1$, there is nothing to prove. For $n>1$, let us assume that every $(n-1)$-element subset of $E$ is compatible. Let $X=\{x_1,\ldots,x_n\}$ be a subset of $E$. By induction hypothesis, $X_0=\{x_1,\ldots,x_{n-1}\}$ is compatible. Thus, there is an orthogonal cover of $X_0$, say $C=(c_1,\ldots,c_k)$. Since $x_n\leq (\bigoplus C)\oplus(\bigoplus C)'$ and $E$ satisfies Riesz decomposition property, there exist $y_1,y_2$ such that $y_1\leq (\bigoplus C)$, $y_2\leq(\bigoplus C)'$ and $x_n=y_1\oplus y_2$. Since $y_1\leq (\bigoplus C)$, there are $z_1,\ldots,z_k$ such that, for all $1\leq i\leq k$, $z_i\leq c_i$ and $y_1=z_1\oplus\ldots\oplus z_k$. Consequently, $$(z_1,c_1\ominus z_1,\ldots,z_k,c_k\ominus z_k,y_2)$$ is an orthogonal cover of $X$ and $X$ is compatible.
\(b) implies (a): Let $u,v_1,v_2\in E$ be such that $v_1\perp v_2$, $u\leq v_1\oplus v_2$. If $v_1=0$ or $v_2=0$, there is nothing to prove. Thus, let us assume that $v_1,v_2\not =0$. By Proposition \[crucial\], $v_1\leq v_1\oplus v_2$ implies that there is an orthogonal cover $W=(w_1,\ldots,w_m)$ of $\{u,v_1,v_2,v_1\oplus v_2\}$ such that, for some $V_1\subseteq V\subseteq \{1,\ldots,m\}$, we have $\bigoplus_{i\in V}w_i=v_1\oplus v_2$ and $\bigoplus_{i\in V_1}w_i=v_1$. This implies that $\bigoplus_{i\in V\setminus V_1}w_i=v_2$. By Proposition \[crucial\], $u\leq v_1\oplus v_2$ implies that there is a refinement of $W$, say $Q=(q_1,\ldots,q_n)$, and some $U\subseteq Z\subseteq \{1,\ldots,n\}$ such that $\bigoplus_{i\in U}q_i=u$ and $\bigoplus_{i\in Z}q_i=v_1\oplus v_2$. Moreover, by Proposition \[crucial\], we may assume that $(q_i)_{i\in Z}$ is a refinement of $(w_i)_{i\in V}$. This implies that there is $Z_1\subseteq Z$ such that $\bigoplus_{i\in Z_1}q_i=v_1$. Put $u_1=\bigoplus_{i\in U\cap Z_1}q_i$ and $u_2=\bigoplus_{i\in U\cap(Z\setminus Z_1)}q_i$. It remains to observe that $u=u_1\oplus u_2$, $u_1\leq v_1$ and $u_2\leq v_2$.
Let $R_6$ be a six-element effect algebra with two atoms $\{a,b\}$, satisfying the equation $a\oplus a\oplus a=a\oplus b\oplus b=1$. Since $(a,b,b)$ is an orthogonal cover of $R_6$, $R_6$ is a compatible effect algebra. However, $R_6$ does not satisfy Riesz decomposition property, since $a\leq b\oplus b$ and $a\land b=0$. This example shows that there are compatible effect algebras that do not satisfy Riesz decomposition property.
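These claims about $R_6$ can also be verified by a short brute-force computation; the following sketch encodes the six elements as $0,a,b,a\oplus a,a\oplus b,1$ and the partial operation by an explicit Cayley table (this encoding is merely one convenient choice, not part of the construction above).

```python
# Brute-force sanity check for R_6 (one convenient encoding).
# Elements: 0, a, b, a+a (= b+b), a+b, 1.
Z, A, B, C, D, I = "0", "a", "b", "a+a", "a+b", "1"
E = [Z, A, B, C, D, I]

table = {}                      # partial Cayley table of the orthosum
for x in E:                     # 0 is a neutral element
    table[(Z, x)] = x
    table[(x, Z)] = x
for x, y, s in [(A, A, C), (B, B, C), (A, B, D), (A, C, I), (B, D, I)]:
    table[(x, y)] = s
    table[(y, x)] = s

def oplus(x, y):
    return table.get((x, y))    # None encodes "undefined"

# Effect algebra axioms: commutativity, associativity,
# unique complements, and the zero-one law.
assert all(oplus(x, y) == oplus(y, x) for x in E for y in E)
for x in E:
    for y in E:
        for z in E:
            xy = oplus(x, y)
            if xy is not None and oplus(xy, z) is not None:
                yz = oplus(y, z)
                assert yz is not None and oplus(x, yz) == oplus(xy, z)
for x in E:
    assert len([y for y in E if oplus(x, y) == I]) == 1
assert [x for x in E if oplus(x, I) is not None] == [Z]

# (a, b, b) is an orthogonal cover of the whole algebra, so R_6 is compatible.
assert oplus(oplus(A, B), B) == I

# Riesz decomposition fails: a <= b+b, yet no u1, u2 <= b satisfy u1+u2 = a.
below_b = [u for u in E if any(oplus(u, v) == B for v in E)]   # = {0, b}
assert not any(oplus(u1, u2) == A for u1 in below_b for u2 in below_b)
```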
\[intclosure\] Let $M$ be a subset of a homogeneous effect algebra $E$ such that $M$ is compatible with covers in $\overline M$. Then $\overline M$ is internally compatible.
Consider (\[closure\]). Since each finite subset of $\overline M$ can be embedded into some $M_n$, it suffices to prove that, for all $n\in{{\mathbb N}}$, $M_n$ is compatible with covers in $\overline M$. By assumption, $M=M_0$ is compatible with covers in $\overline M$. Assume that, for some $n\in {{\mathbb N}}$, $M_n$ is compatible with covers in $\overline M$. Evidently, every finite subset of $M_{n+1}$ can be embedded into a set of the form $$\label{jedna}
\{x_1,y_1\ominus x_1,\ldots,x_k,y_k\ominus x_k\}\subseteq M_{n+1}\text{,}$$ where for all $1\leq i\leq k$ we have $x_i\leq y_i,{y_i}'$ and $y_i\in M_n$. We now prove the following
[*Claim.*]{} Let $x_i,y_i$ be as above. For every cover $C_0$ of $\{y_1,\ldots,y_k\}$, there is a refinement $W$ of $C_0$ such that $W$ covers $\{x_1,y_1\ominus x_1,\ldots,x_k,y_k\ominus x_k\}$ and $Ran(W)\subseteq\overline{Ran(C_0)}$.
[*Proof of the Claim.*]{} For $k=0$, we may put $W=C_0$. Assume that the Claim is satisfied for some $k=l\in {{\mathbb N}}$. Let $C_0$ be a cover of $\{y_1,\ldots,y_{l+1}\}\subseteq M_{n}$. Since $C_0$ is a cover of $\{y_1,\ldots,y_l\}$ as well, by induction hypothesis there is a refinement of $C_0$, say $C_1$, such that $C_1$ covers $\{x_1,y_1\ominus
x_1,\ldots,x_l,y_l\ominus x_l\}$ and $Ran(C_1)\subseteq\overline{Ran(C_0)}$. As $C_1$ is a refinement of $C_0$, $C_1$ covers $\{y_1,\ldots,y_{l+1}\}$. Thus, there are $(c_1,\ldots,c_m)\subseteq C_1$ such that $y_{l+1}=c_1\oplus\ldots\oplus c_m$. Since $x_{l+1}\leq y_{l+1},{y_{l+1}}'$, Proposition \[homogeneousn\] implies that there are $z_1,\ldots,z_m$ such that, for all $1\leq i\leq m$, $z_i\leq c_i$ and $x_{l+1}=z_1\oplus\ldots\oplus
z_m$. Let us construct a refinement $W$ of $C_1$ by replacing each of the $c_i$’s by the pair $(z_i,c_i\ominus z_i)$. Then $W$ is a refinement of $C_1$ and $W$ covers $\{x_1,y_1\ominus x_1,\ldots,x_{l+1},y_{l+1}\ominus x_{l+1}\}$. Moreover, for all $1\leq i\leq m$, $z_i\leq x_{l+1}\leq {y_{l+1}}'\leq {c_i}'$, hence $$Ran(W)\subseteq\overline{Ran(C_1)}\subseteq\overline{\overline{Ran(C_0)}}=
\overline{Ran(C_0)}\text {.}$$
Now, let $M_F$ be a finite subset of $M_{n+1}$. We may assume that $M_F$ is of the form (\[jedna\]). By the outer induction hypothesis, $M_n$ is compatible with covers in $\overline M$, thus $\{y_1,\ldots,y_k\}$ is compatible with cover in $\overline M$. Let $C$ be an orthogonal cover of $\{y_1,\ldots,y_k\}$ with $Ran(C)\subseteq\overline M$. By Claim, there is a refinement $W$ of $C$, such that $W$ covers $M_F$ and $Ran(W)\subseteq\overline{Ran(C)}\subseteq\overline{\overline M}=\overline M$. Thus, $M_F$ is compatible with covers in $\overline M$ and we see that $\overline M$ is internally compatible.
The following are immediate consequences of Proposition \[intclosure\].
\[theone\]
1. Let $M$ be an internally compatible subset of a homogeneous effect algebra $E$. Then $\overline M$ is an internally compatible set.
2. Let $M$ be a maximal internally compatible subset of a homogeneous effect algebra $E$. Then $M=\overline M$.
\[thetwo\] Let $E$ be a homogeneous effect algebra, let $M$ be an internally compatible set with $M=\overline M$. Let $a,b\in M$, $a\geq b$. Then $M\cup\{a\ominus b\}$ is an internally compatible set.
Let $M_F$ be a finite subset of $M$. Since $M$ is internally compatible, there is an orthogonal cover $C$ of $M_F\cup\{a,b\}$ with $Ran(C)\subseteq M$. By Corollary \[minuscompat\], $M_F\cup\{a,b,a\ominus b\}$ is then compatible with cover in $\overline{Ran(C)}$. Therefore, $M_F\cup\{a\ominus b\}$ is compatible with cover in $\overline{Ran(C)}$. Since $\overline{Ran(C)}\subseteq\overline M=M$, $M\cup\{a\ominus b\}$ is an internally compatible set.
As we will show later in Example \[lastone\], a sub-effect algebra of a homogeneous effect algebra need not be homogeneous. However, we have the following relationship on the positive side.
\[subalg\] Let $E$ be a homogeneous effect algebra. Let $F$ be a sub-effect algebra of $E$ such that $F=\overline F$, where the closure is taken in $E$. Then $F$ is homogeneous.
Let $u,v_1,v_2\in F$ be such that $u\leq v_1\oplus v_2$ and $u\leq (v_1\oplus v_2)'$. Since $E$ is homogeneous, there are $u_1,u_2\in E$ such that $u_1\leq v_1$, $u_2\leq v_2$ and $u=u_1\oplus u_2$. For $i\in\{1,2\}$, we have $u_i\leq v_1\oplus v_2$ and $u_i\leq(v_1\oplus v_2)'$. Thus, $u_1,u_2\in\overline F=F$ and $F$ is homogeneous.
\[maxcompatisblock\] Let $E$ be a homogeneous effect algebra, let $B\subseteq E$. The following are equivalent.
1. $B$ is a maximal internally compatible set with $1\in B$.
2. $B$ is a block.
Assume that (a) is satisfied. By Corollary \[theone\], part (b), $B=\overline B$. By Proposition \[thetwo\], this implies that for all $a,b\in B$ such that $a\geq b$, $B\cup\{a\ominus b\}$ is an internally compatible set. Therefore, by maximality of $B$, $B$ is closed with respect to $\ominus$. Since $1\in B$, $B$ is a sub-effect algebra of $E$. Since $B$ is an internally compatible set, $B$ is a compatible effect algebra. By Corollary \[theone\](b), $B=\overline B$. By Proposition \[subalg\], this implies that $B$ is homogeneous. Since $B$ is homogeneous and compatible, Theorem \[equiv\] implies that $B$ satisfies Riesz decomposition property.
Assume that (b) is satisfied. By Theorem \[equiv\], $B$ is an internally compatible subset. By Lemma \[maxcompat\], $B$ can be embedded into a maximal internally compatible subset $B_{max}$ of $E$. By the above part of the proof, $1\in B\subseteq B_{max}$ implies that $B_{max}$ is a block. Therefore, $B=B_{max}$ and (a) is satisfied.
\[embedfinite\] Let $E$ be a homogeneous effect algebra. Every finite compatible subset of $E$ can be embedded into a block.
Let $M_F$ be a finite compatible subset of $E$. Let $C=(c_1,\ldots,c_n)$ be an orthogonal cover of $M_F$. Then $M_F\cup\{1\}$ is a compatible set, with cover $C^+=(c_1,\ldots,c_n,(\bigoplus C)')$. Thus, $M_F\cup\{1\}\cup Ran(C^+)$ is an internally compatible set containing $1$. Therefore, by Lemma \[maxcompat\], $M_F\cup\{1\}\cup Ran(C^+)$ can be embedded into a maximal internally compatible subset $B$ with $1\in B$. By Theorem \[maxcompatisblock\], $B$ is a block.
\[blockcover\] Let $E$ be a homogeneous effect algebra. Then $$E=\cup\{B:B\text{ is a block of }E\}\text{.}$$
By Corollary \[embedfinite\].
\[bigcor\] For an effect algebra $E$, the following are equivalent.
1. $E$ is homogeneous.
2. Every finite compatible subset can be embedded into a block.
3. Every finite compatible subset can be embedded into a sub-effect algebra of $E$ satisfying Riesz decomposition property.
4. The range of every finite orthogonal family can be embedded into a block.
5. The range of every finite orthogonal family can be embedded into a sub-effect algebra satisfying Riesz decomposition property.
6. The range of every orthogonal family with three elements can be embedded into a block.
7. The range of every orthogonal family with three elements can be embedded into a sub-effect algebra satisfying Riesz decomposition property.
(a)$\implies$(b) is Corollary \[embedfinite\]. The implication chains (b)$\implies$(c)$\implies$(e)$\implies$(g) and (b)$\implies$(d)$\implies$(f)$\implies$(g) are obvious. To prove that (g)$\implies$(a), assume that $E$ is an effect algebra satisfying (g), and let $u,v_1,v_2\in E$ be such that $u\leq v_1\oplus v_2$, $u\leq(v_1\oplus v_2)'$. Then $(u,v_1,v_2)$ is an orthogonal family with three elements. By (g), $\{u,v_1,v_2\}$ can be embedded into a sub-effect algebra $R$ satisfying Riesz decomposition property. Thus, there are $u_1,u_2\in R\subseteq E$ such that $u_1\leq v_1$, $u_2\leq v_2$ and $u=u_1\oplus u_2$. Hence, $E$ is homogeneous.
Can every compatible subset of a homogeneous effect algebra $E$ be embedded into a block? This is true for orthomodular posets (cf. e.g. [@PtaPul:OSaQL]) and for lattice ordered effect algebras. By Theorem \[maxcompatisblock\] and Lemma \[maxcompat\], this question reduces to the question whether a compatible subset can be embedded into an internally compatible subset containing $1$.
Compatibility center and sharp elements
=======================================
For a homogeneous effect algebra $E$, we write $$K(E)=\bigcap\{B:\text{$B$ is a block of $E$}\}\text{.}$$ We say that $K(E)$ is the [*compatibility center*]{} of $E$. Note that $K(E)=\overline{K(E)}$ and hence, by Proposition \[subalg\], $K(E)$ is homogeneous.
An element $a$ of an effect algebra is called [*sharp*]{} iff $a\land a'=0$. We denote the set of all sharp elements of an effect algebra $E$ by $E_S$. It is obvious that an effect algebra $E$ is an orthoalgebra iff $E=E_S$. An element $a$ of an effect algebra $E$ is called [*principal*]{} iff the interval $[0,a]$ is closed with respect to $\oplus$. Evidently, every principal element in an effect algebra is sharp. A principal element $a$ of an effect algebra is called [*central*]{} iff for all $b\in E$ there is a unique decomposition $b=b_1\oplus b_2$ with $b_1\leq a$, $b_2\leq a'$. The set of all central elements of an effect algebra $E$ is called [*the center of $E$*]{} and is denoted by $C(E)$. In [@GreFouPul:TCoaEA], the center of an effect algebra was introduced and the following properties of $C(E)$ were proved.
Let $E$ be an effect algebra. Then
- $C(E)$ is a sub-effect algebra of $E$.
- $C(E)$ is a Boolean algebra. Moreover, for all $a\in C(E)$ and $x\in E$, $a\land x$ exists.
- For all $a\in C(E)$, the map $\phi:E\mapsto[0,a]_E$ given by $\phi(x)=a\land x$ is a morphism.
- For all $a\in C(E)$, $E$ is naturally isomorphic to $[0,a]_E\times[0,a']_E$. Moreover, for all effect algebras $E_1,E_2$ such that there is an isomorphism $\phi:E\mapsto E_1\times E_2$, $\phi^{-1}(1,0)$ and $\phi^{-1}(0,1)$ are central in $E$.
A subset $I$ of an effect algebra $E$ is called an [*ideal*]{} iff the following condition is satisfied: for all $a,b\in E$ with $a\perp b$, we have $a\oplus b\in I$ if and only if $a,b\in I$. An ideal $I$ is called a [*Riesz ideal*]{} iff, for all $i,a,b$ such that $i\in I$, $a\perp b$ and $i\leq a\oplus b$, there are $i_1,i_2$ such that $i_1\leq a$, $i_2\leq b$ and $i\leq i_1\oplus i_2$. Riesz ideals were introduced in [@GudPul:QoPAM].
For a lattice ordered effect algebra $E$, it was proved in [@Rie:CaCEiEA], that $C(E)=K(E)\cap E_S$. Moreover, as proved in [@JenRie:OSEiLOEA], for a lattice ordered effect algebra $E$, $E_S$ is a sublattice of $E$, a sub-effect algebra of $E$, and every block of $E_S$ is the center of a block of $E$. In the remainder of this section, we will extend some of these results to the class of homogeneous effect algebras.
\[blockcenter\] Let $a$ be an element of a homogeneous effect algebra $E$. The following are equivalent.
1. $a\in E_S$.
2. $a$ is central in every block of $E$ which contains $a$.
3. $a$ is central in some block of $E$.
\(a) implies (b): Assume that $a\in E$ is sharp, let $B$ be a block of $E$ such that $a\in B$. Since $a$ is sharp in $E$, $a$ is sharp in $B$. We will prove that $a$ is principal in $B$. Let $x_1,x_2\in B$ be such that $x_1,x_2\leq a$, $x_1\perp x_2$. Since $B$ is a sub-effect algebra of $E$, $x_1\oplus x_2\in B$. Since $B$ is internally compatible, $x_1\oplus x_2{\leftrightarrow}a$ in $B$. By [@ChePul:SILiPAM], Lemma 2, $x_1\oplus x_2{\leftrightarrow}a$ in $B$ implies that there are $y_1,y_2\in B$ such that $y_1\leq a$, $y_2\leq a'$ and $x_1\oplus x_2=y_1\oplus y_2$. Since $B$ satisfies Riesz decomposition property, $y_2\leq x_1\oplus x_2$ implies that there are $t_1,t_2\in B$ such that $t_1\leq x_1$, $t_2\leq x_2$ and $y_2=t_1\oplus t_2$. For $i\in\{1,2\}$, $t_i\leq a,a'$. Since $a$ is sharp in $B$, this implies that $t_1=t_2=0$. Thus, $x_1\oplus x_2=y_1\leq a$ and $a$ is principal in $B$ and hence $[0,a]\cap B$ is an ideal in $B$. Since $B$ satisfies Riesz decomposition property, every ideal in $B$ is a Riesz ideal. By [@ChePul:SILiPAM], an element $a$ of an effect algebra is central iff $[0,a]$ is a Riesz ideal. Therefore, $a$ is central in $B$.
\(b) implies (c): By Corollary \[blockcover\], every element of $E$ is in some block.
\(c) implies (a): Let $a\in C(B)$ for some block $B$, let $b\leq a,a'$. Since $B=\overline B$, $b\in B$. Thus, $b=0$ and $a$ is sharp.
\[rieszcenter\] Let $a$ be an element of an effect algebra $E$ satisfying Riesz decomposition property. The following are equivalent.
1. $a\in E_S$.
2. $a\in C(E)$.
3. $a$ is principal.
By Proposition \[blockcenter\], (a) is equivalent to (b). In every effect algebra, all principal elements are sharp. Every central element is principal.
For a homogeneous effect algebra $E$, $E_S$ is a sub-effect algebra of $E$. Moreover, $E_S$ is an orthoalgebra.
Obviously, $0,1\in E_S$ and $E_S$ is closed with respect to $~'$. Assume $a,b\in E_S$, $a\perp b$. Then $\{a,b\}$ is a finite compatible set. Thus, by Corollary \[embedfinite\], $\{a,b\}$ can be embedded into a block $B$. By Proposition \[blockcenter\], $a,b\in C(B)$. Since $C(B)$ is a sub-effect algebra of $B$, $a\oplus b\in C(B)$. By Proposition \[blockcenter\], $C(B)\subseteq E_S$, thus $a\oplus b\in E_S$.
Obviously, $E_S$ is an orthoalgebra.
Since, for a homogeneous effect algebra $E$, $E_S$ is an orthoalgebra, every compatible subset of $E_S$ can be embedded into a block of $E_S$, which is a Boolean algebra.
Let $E$ be a homogeneous effect algebra. For every block $B^0$ in $E_S$ and for every block $B$ of $E$ such that $B^0\subseteq B$, $B^0=C(B)$.
Let $B^0$ be a block of $E_S$. Let $B$ be a block of $E$ with $B^0\subseteq B$. By Proposition \[blockcenter\], $B^0\subseteq C(B)$. Since $B^0$ is a block of $E_S$ and $C(B)$ is a Boolean algebra, $B^0\subseteq C(B)$ implies that $B^0=C(B)$.
Let $B$ be a block of a homogeneous effect algebra $E$. Is it true that $C(B)$ is a block of $E_S$?
In a homogeneous effect algebra, $C(E)=C(K(E))=K(E)_S$.
It is evident that $C(E)\subseteq C(K(E))\subseteq K(E)_S$. Let $a\in K(E)_S$. We shall prove that $[0,a]$ is a Riesz ideal. By Lemma 2 of [@ChePul:SILiPAM], this implies that $a\in C(E)$. Suppose $x_1,x_2\leq a$, $x_1\perp x_2$. Then $\{x_1,x_2\}$ can be embedded into a block $B$ of $E$. Since $a\in K(E)$, $a\in B$. Since $a$ is sharp, $a$ is central in $B$. Thus, $a$ is principal in $B$ and hence $x_1\oplus x_2\leq a$. Therefore, $a$ is principal in $E$. Let $i\in[0,a]$, $x\perp y$, $i\leq x\oplus y$. Similarly as above, $\{a,x,y\}$ can be embedded into a block $B$ of $E$, such that $a\in C(B)$. Obviously, $i\leq (x\oplus y)\land a$ and, since $a$ is central in $B$, $(x\oplus y)\land a=(x\land a)\oplus(y\land a)$. Thus, $[0,a]$ is a Riesz ideal.
Let $E$ be a homogeneous effect algebra. Does $K(E)$ satisfy Riesz decomposition property ? This is true for orthoalgebras and for lattice ordered effect algebras.
Examples and counterexamples
============================
It is easy to check that a direct product of a finite number of homogeneous effect algebras is a homogeneous effect algebra.
Let $E_1$ be an orthoalgebra. Let $E_2$ be an effect algebra satisfying Riesz decomposition property, which is not an orthoalgebra. If any of $E_1,E_2$ is not lattice ordered, then $E_1\times E_2$ is an example of a homogeneous effect algebra which is not lattice ordered. Moreover, since $E_2$ is not an orthoalgebra, $E_1\times E_2$ is not an orthoalgebra.
Another possibility to construct new homogeneous effect algebras from old is to make [*horizontal sums*]{} (sometimes called [*$0,1$-pastings)*]{}, which means simply identifying the zeroes and ones of the summands.
As shown in the next example, it is possible to construct a lattice ordered (and hence homogeneous) effect algebra by pasting of two MV-algebras in a central element.
![ []{data-label="box"}](box.eps)
![An eighteen-element lattice ordered effect algebra[]{data-label="L18"}](L18.eps)
We borrowed the basic idea for this example from Cohen [@Coh:AItHSaQL]. Consider a system consisting of a firefly in a box pictured in Figure \[box\]. The box has five windows, separated by thin lines. We shall consider two experiments on this system:
1. Look at the windows $a,b,c$.
2. Look at the windows $c,d,e$.
Suppose that the window $c$ is covered with a grey filter. Unless the firefly is shining very brightly at the moment we are performing the experiment, we cannot be sure that we see the firefly in the $c$ window. The outcomes of experiment (A) are
$(a)$, $(b)$, $(c)$, and $(c\oplus c)$.
The outcomes of (B) are similar. The unsharp quantum logic of our experiment is an eighteen-element lattice ordered effect algebra $E$ with five atoms $a,b,c,d,e$, satisfying $$a\oplus b\oplus c\oplus c=c\oplus c\oplus d\oplus e=1\text{.}$$ The Hasse diagram of $E$ is given by Figure \[L18\]. This effect algebra is constructed by pasting together two MV-algebras $$A=\{0,a,b,c,a\oplus c,b\oplus c,c\oplus c,a',b',(c\oplus c)',c',1\}$$ and $$B=\{0,c,d,e,c\oplus c,c\oplus d,c\oplus e,d',e',(c\oplus c)',c',1\}\text{.}$$ $A$ and $B$ are then blocks of $E$. The compatibility center of $E$ is the MV-algebra $$K(E)=\{0,c,c\oplus c,(c\oplus c)',c',1\}$$ and the center of $E$ is $\{0,c\oplus c,(c\oplus c)',1\}$. $E_S$ forms a twelve-element orthomodular lattice with two blocks; each of them is isomorphic to the Boolean algebra $2^3$ and they are pasted in one of their atoms (namely $c\oplus c$).
Let $E$ be an eighteen-element effect algebra with six atoms $a,b,c,d,e,f$, satisfying $$\label{inmind}
a\oplus b\oplus c=c\oplus d\oplus d\oplus e=e\oplus f\oplus a=1\text{.}$$ The Hasse diagram of $E$ is given by Figure \[gen\].
![A non-lattice ordered homogeneous effect algebra[]{data-label="gen"}](gen.eps)
This effect algebra is constructed by pasting together three blocks: two Boolean algebras $$\begin{aligned}
B_1&=&\{0,a,b,c,a',b',c',1\}\\
B_2&=&\{0,e,f,a,e',f',a',1\}\end{aligned}$$ and an MV-algebra $$B_3=\{0,c,d,e,d\oplus d,d\oplus e,c\oplus d,(d\oplus d)',c',d',e',1\}\text{.}$$ By (\[inmind\]), it is easy to see that the range of every orthogonal family with three elements can be embedded into a block. Thus, by Corollary \[bigcor\], $E$ is homogeneous. All elements except for $d,d',c\oplus d,d\oplus e$ are sharp and $E_S$ is an orthoalgebra with fourteen elements, called the [*Wright triangle*]{}, which is not an orthomodular poset.
\[nofunctions\] Let $E$ be a homogeneous effect algebra. Assume that there is an element $a\in E$ with $a\leq a'$, such that $E$ is isomorphic to $[0,a]_E$. Then $E$ satisfies Riesz decomposition property.
Let $B$ be a block containing $a$. Since $B$ is a maximal internally compatible subset of $E$, Corollary \[theone\](b) implies that $[0,a]=\{x\in E:x\leq a,a'\}\subseteq B$. This implies that $[0,a]_E$ satisfies Riesz decomposition property. Therefore, $E$ satisfies Riesz decomposition property.
For a Hilbert space $\mathbb H$, $\mathcal E(\mathbb H)$ is homogeneous iff ${\mathop{\mathrm{dim}}}(\mathbb H)\leq 1$.
The map $\phi:\mathcal E(\mathbb H)\mapsto [0,\frac{1}{2}I]$ given by $\phi(A)=\frac{1}{2}A$ is obviously an isomorphism and $\frac{1}{2}I\leq(\frac{1}{2}I)'$. Therefore, by Proposition \[nofunctions\], every homogeneous $\mathcal E(\mathbb H)$ satisfies Riesz decomposition property. However, it is well known that $\mathcal E(\mathbb H)$ satisfies Riesz decomposition property iff ${\mathop{\mathrm{dim}}}(\mathbb H)\leq 1$.
The following example shows that a sub-effect algebra of a homogeneous effect algebra need not be homogeneous.
\[lastone\] Let $E=[0,1]\times[0,1]$, where $[0,1]\subseteq\mathbb R$ denotes the unit interval of the real line. Equip $E$ with a partial operation $\oplus$ with domain given by $(a_1,a_2)\perp (b_1,b_2)$ iff $a_1+b_1\leq 1$ and $a_2+b_2\leq 1$; then define $(a_1,a_2)\oplus (b_1,b_2)=(a_1+b_1,a_2+b_2)$. Then $(E,\oplus_E,(0,0),(1,1))$ is a homogeneous effect algebra (in fact, it is even an MV-algebra). Let $$F=\{(x_1,x_2)\in E:x_1+x_2\in\mathbb Q\}$$ Since $(1,1)\in F$ and $F$ is closed with respect to $\ominus$, $F$ is a sub-effect algebra of $E$.
It is easy to see that the map $\phi:F\mapsto[(0,0),(\frac{1}{2},\frac{1}{2})]_F$, given by $\phi(x_1,x_2)=(\frac{1}{2}x_1,\frac{1}{2}x_2)$ is an isomorphism. Note that $F$ is not a compatible effect algebra: for example, $\{(1,0),(\frac{1}{\pi},1-\frac{1}{\pi})\}$ is not compatible in $F$. Consequently, $F$ does not satisfy Riesz decomposition property and hence, by Proposition \[nofunctions\], $F$ is not homogeneous.
Let $\mu$ be the Lebesgue measure on $[0,1]$. Let $E\subseteq[0,1]^{[0,1]}$ be such that, for all $f\in E$,
1. $f$ is measurable with respect to $\mu$
2. $\mu(\mathrm{supp}(f))\in\mathbb Q$
3. $\mu(\{x\in [0,1]: f(x)\not\in\{0,1\}\})=0$,
where $\mathrm{supp}(f)$ denotes the support of $f$. It is easy to check that $E$ is a sub-effect algebra of $[0,1]^{[0,1]}$. Obviously, $E$ is not an orthoalgebra. We will show that $E$ is a homogeneous, non-lattice ordered effect algebra and that $E$ does not satisfy Riesz decomposition property. Note that, for all $u\in E$, $u\perp u$ iff $Ran(u)\subseteq[0,\frac 12]$ and $\mu(\mathrm{supp}(u))=0$. Thus, for all $u\in E$ and $u_0\in [0,1]^{[0,1]}$ such that $u_0\leq u$ and $u\perp u$, we have $u_0\in E$.
Let $u,v_1,v_2\in E$ be such that $u\leq v_1\oplus v_2$, $u\leq(v_1\oplus
v_2)'$. Since $[0,1]^{[0,1]}$ is an MV-algebra, there are $u_1,u_2\in [0,1]^{[0,1]}$ such that $u_1\leq v_1$, $u_2\leq v_2$ and $u=u_1\oplus u_2$. By the above paragraph, $u\perp u$ and $u_1,u_2\leq u\in E$ imply that $u_1,u_2\in E$. Therefore, $E$ is homogeneous. Let $f,g$ be the characteristic functions of the intervals $[0,\frac 23]$, $[\frac {1}{\pi},\frac {1}{\pi}+\frac{1}{2}]$, respectively. Then $f\land g$ does not exist in $E_S$. Therefore, $E_S$ is not lattice ordered and hence, by Theorem 3.3 of [@JenRie:OSEiLOEA], $E$ is not lattice ordered. Moreover, $E$ does not satisfy Riesz decomposition property. Indeed, assume the contrary. Then, by Proposition \[rieszcenter\], $E_S=C(E)$. In particular, $E_S$ is then a Boolean algebra. However, this is a contradiction, since $E_S$ is not lattice ordered.
[10]{}
M.K. Bennett and D.J. Foulis, *Phi-symmetric effect algebras*, Foundations of Physics **25** (1995), 1699–1722.
L. Beran, *Orthomodular lattices (algebraic approach)*, Dordrecht, 1985.
P. Busch, M. Grabowski, and P. Lahti, *Operational quantum physics*, Springer-Verlag, 1995.
C.C. Chang, *Algebraic analysis of many-valued logics*, Trans. Amer. Math. Soc. **89** (1959), 74–80.
G. Chevalier and S. Pulmannová, *Some ideal lattices in partial abelian monoids*, preprint, Mathematical Institute of Slovak Academy of Sciences, Bratislava, 1998.
F. Chovanec and F. Kôpka, *Boolean D-posets*, Tatra Mt. Math. Publ. **10** (1997), 1–15.
D.W. Cohen, *An introduction to Hilbert space and quantum logic*, Springer-Verlag, 1989.
D.J. Foulis and M.K. Bennett, *Effect algebras and unsharp quantum logics*, Found. Phys. **24** (1994), 1331–1352.
D.J. Foulis, R. Greechie, and G. Rüttimann, *Filters and supports in orthoalgebras*, Int. J. Theor. Phys. **35** (1995), 789–802.
R. Giuntini and H. Greuling, *Toward a formal language for unsharp properties*, Found. Phys. **19** (1994), 769–780.
R. Greechie, D. Foulis, and S. Pulmannová, *The center of an effect algebra*, Order **12** (1995), 91–106.
S. Gudder and S. Pulmannová, *Quotients of partial abelian monoids*, Algebra univers. **38** (1998), 395–421.
G. Jenča and Z. Riečanová, *On sharp elements in lattice ordered effect algebras*, BUSEFAL **80** (1999), 24–29.
G. Kalmbach, *Orthomodular lattices*, Academic Press, New York, 1983.
F. Kôpka and F. Chovanec, *D-posets*, Math. Slovaca **44** (1994), 21–34.
G. Ludwig, *Foundations of quantum mechanics*, Springer-Verlag, 1983.
P. Pták and S. Pulmannová, *Orthomodular structures as quantum logics*, Kluwer, Dordrecht, 1991.
K. Ravindran, *On a structure theory of effect algebras*, Ph.D. thesis, Kansas State Univ., Manhattan, Kansas, 1996.
Z. Riečanová, *Compatibility and central elements in effect algebras*, Tatra Mt. Math. Publ. **16** (1998), 1–8.
Z. Riečanová, *A generalization of blocks for lattice effect algebras*, Int. J. Theor. Phys. **39** (2000), 855–865.
[^1]: This research is supported by grant G-1/7625/20 of MŠ SR, Slovakia
---
abstract: 'In this paper, we prove a theorem on the distribution of primes in cubic progressions on average.'
author:
- Timothy Foo
- Liangyi Zhao
bibliography:
- 'biblio.bib'
title: On Primes Represented by Cubic Polynomials
---
[**Mathematics Subject Classification (2010)**]{}: 11L05, 11L07, 11L15, 11L20, 11L40, 11N13, 11N32, 11N37
[**Keywords**]{}: primes in cubic progressions, primes represented by polynomials
Introduction
============
It was proved by Dirichlet that any linear polynomial in one variable represents infinitely many primes provided that its coefficients are coprime. Though they have long been conjectured, analogous statements are not known for any polynomial of degree two or higher. Many partial results are known in this direction. The readers are referred to the survey article [@BZ2] for more information.
In [@SBLZ; @BZ], S. Baier and L. Zhao established certain theorems for the Bateman-Horn conjecture for quadratic polynomials on average. The latest result in those works states the following. For all $z >3$ and $B>0$, we have, for $z^{1/2 + \varepsilon}\leq K \leq z/2$ $$\sum_{1 \leq k \leq K}\left|\sum_{z < n^2+k \leq 2z}\Lambda(n^2+k) - \prod_{p>2} \left( 1 - \frac{\left(\frac{-k}{p}\right)}{p-1} \right) \sum_{z < n^2+k \leq 2z}1\right|^2 \ll \frac{Kz}{(\log z)^B},$$ where $\Lambda$ henceforth denotes the von Mangoldt function and $\left( \frac{-k}{p} \right)$ is the Legendre symbol.
The Bateman-Horn conjecture [@PTBRAH] says that if $f$ is an irreducible polynomial in ${\mathbb{Z}}[x]$ satisfying $$\gcd\{f(n): n \in {\mathbb{Z}}\}=1,$$ then $$\label{bhconj}
\sum_{n \leq x} \Lambda(f(n)) \sim \prod_p \left( 1- \frac{n_p-1}{p-1} \right) x,$$ where $n_p$ is the number of solutions to the equation $f(n) \equiv 0 \pmod{p}$ in ${\mathbb{Z}}/p {\mathbb{Z}}$. A little more can be said about the quantity $n_p$ in the specific case where $f(n)=n^d+k$. Specifically, in that case, we have that $$n_p = \sum_{i=0}^{d^{\prime}-1}\chi^{i}(-k)$$ where $\chi$ is a Dirichlet character of order $d^{\prime}=\gcd (p-1,d)$.
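In the cubic case $d=3$, which is the one relevant below, this yields $$n_p = \begin{cases} 1+\chi(-k)+\overline{\chi}(-k), & p \equiv 1 \pmod{3},\\ 1, & p=3 \text{ or } p \equiv 2 \pmod{3}, \end{cases}$$ where $\chi$ is a cubic character modulo $p$ (with the convention $\chi(-k)=0$ if $p \mid k$); in the second case the map $n \mapsto n^3$ is a bijection on ${\mathbb{Z}}/p{\mathbb{Z}}$.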
In this paper, we shall study the asymptotic formula in (\[bhconj\]) on average over a family of cubic polynomials. Our main theorem is the following.
Given $A,B > 0$, we have, for $x^3 (\log x)^{-A}< y < x^3$, $$\sum_{\substack{1 \leq k \leq y\\ \mu^2(k) = 1}}\left| \sum_{n \leq x}\Lambda(n^3 + k) - {\mathfrak{S}}(k)x\right|^2 = O\left(\frac{yx^2}{(\log x)^B}\right),$$ where $\mu$ henceforth denotes the Möbius function and $${\mathfrak{S}}(k) = \prod_{p}\left(1-\frac{n_p-1}{p-1}\right),$$ with $n_p$ being the number of solutions to the equation $n^3+k \equiv 0 \bmod p$ in ${\mathbb{Z}}/p{\mathbb{Z}}$.
From the theorem, we immediately deduce the following.
Given $A$, $B$, $C>0$ and ${\mathfrak{S}}(k)$ as defined in the theorem, we have, for $x^3 (\log x)^{-A} \leq y \leq x^3$, that $$\sum_{n \leq x} \Lambda (n^3+k) = {\mathfrak{S}}(k) x + O \left( \frac{x}{(\log x)^B} \right)$$ holds for all square-free $k$ not exceeding $y$ with at most $O( y (\log x)^{-C})$ exceptions.
We shall use the circle method to study this problem. The readers are referred to [@RCV] or Chapter 20 of [@HIEK] for an introduction to the circle method. The starting point for us is the identity $$\label{startpt}
\sum_{n \leq x}\Lambda(n^3+k) = \int\limits_0^1 \sum_{m \leq z}\Lambda(m)e(\alpha m)\sum_{n\leq x}e(-\alpha (n^3+k)) {\mathrm{d}}\alpha,$$ where henceforth $z=x^3+y$ and $e(w) = \exp(2 \pi i w)$.
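Indeed, for an integer $h$ we have $$\int\limits_0^1 e(\alpha h) {\mathrm{d}}\alpha = \begin{cases} 1, & h=0,\\ 0, & h \neq 0, \end{cases}$$ so expanding the product in (\[startpt\]) and integrating term by term picks out exactly the terms with $m=n^3+k$; since $n^3+k \leq x^3+y=z$ for $n \leq x$ and $k \leq y$, every such value is indeed counted by the sum over $m \leq z$.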
Preliminary Lemmas
==================
In this section, we collect some preliminary lemmas which we shall need in the sequel.
\[Gallagher\] \[gallalem\] Let $2 < \Delta < N/2$ and $N < N^{\prime} < 2N $. For arbitrary $a_n \in {\mathbb{C}}$, we have $$\int\limits_{|\beta| < \Delta^{-1}} \left| \sum_{N<n<N^{\prime}}a_n e(\beta n)\right|^2 {\mathrm{d}}\beta \ll \Delta^{-2}\int\limits_{N-\Delta/2}^{N^{\prime}}\left|\sum_{\max(t,N)<n<\min(t+\Delta/2,N^{\prime})}a_n\right|^2 {\mathrm{d}}t$$ where the implied constant is absolute.
This is Lemma 1 in [@Galla] in a slightly modified form.
We shall also need the following lemma.
\[Wolke, Mikawa\] \[mikawalem\] Let $$\mathfrak{J}(q,\Delta) = \sum_{\chi \bmod q}\int\limits_N^{2N} \left|\sum_{t < n <t + q\Delta}^{\sharp}\chi(n)\Lambda(n) \right|^2 {\mathrm{d}}t$$
where the $\sharp$ over the summation symbol means that if $\chi=\chi_0$, then $\chi(n)\Lambda(n)$ is replaced by $\Lambda(n)-1$. Let $\varepsilon$, $A$ and $B>0$ be given. If $q \leq (\log N)^B$ and $ N^{1/5 + \varepsilon}<\Delta <N^{1-\varepsilon}$, then we have
$$\label{3} \mathfrak{J}(q,\Delta) \ll (q\Delta)^2 N(\log N)^{-A}$$
where the implied constant depends only on $\varepsilon$, $A$ and $B$.
This is Lemma 2 in [@Mika].
The following Bessel’s lemma plays an important role in many places of the proof.
\[Bessel\] \[bessineq\] Let $\phi_1,\phi_2,\dots,\phi_R$ be orthonormal members of an inner product space $V$ over ${\mathbb{C}}$ and let $\xi \in V$. Then $$\sum_{r=1}^R \left| (\xi,\phi_r) \right|^2 \leq (\xi , \xi).$$
This is a standard result. See [@PRH] for a proof.
The following lemma is needed in the estimates involving the minor arcs.
\[Weyl Shift\] \[weylshift\] If $f(x) = \alpha x^k + \dots$ is a polynomial with real coefficients and $k \geq 1$, then $$\left| \sum_{1 \leq n \leq N}e(f(n)) \right| \leq 2N\left\{N^{-k}\sum_{-N<l_1,l_2,\dots,l_{k-1}<N}\min \left( N,\frac{1}{\left\| \alpha k!\prod_{i=1}^{k-1}l_i \right\|} \right)\right\}^{2^{1-k}},$$ where $\| x \|$ denotes the distance of $x \in {\mathbb{R}}$ to the closest integer.
This is Proposition 8.2 of [@HIEK].
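For a cubic polynomial $f(x) = \alpha x^3 + \dots$, that is $k=3$, the bound reads $$\left| \sum_{1 \leq n \leq N}e(f(n)) \right| \leq 2N\left\{N^{-3}\sum_{-N<l_1,l_2<N}\min \left( N,\frac{1}{\left\| 6\alpha l_1 l_2 \right\|} \right)\right\}^{1/4},$$ which is the case relevant to the cubic exponential sums considered in this paper.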
To dispose of the contributions from certain parts of the singular series, ${\mathfrak{S}}(k)$, we shall need the following lemmas.
\[Pólya-Vinogradov\] \[polyvino\] For any non-principal character $\chi$ modulo $q$, we have $$\left|\sum_{M<n\leq M+N}\chi(n)\right| \leq 6\sqrt{q}\log q.$$
This is Theorem 12.5 in [@HIEK].
\[Baier-Young\] \[B&Y\] For $(a_m)_{m \in {\mathbb{Z}}}$ with $a_m \in {\mathbb{C}}$ arbitrary, we have $$\sum_{Q<q<2Q} \ \sideset{}{^{\star}}\sum_{\substack{\chi \bmod q\\ \chi^3 = \chi_0}}\left| \ \sideset{}{^{*}}\sum_{M<m<2M} a_m\chi(m)\right|^2 \ll (QM)^{\varepsilon} \left( Q^{11/9} + Q^{2/3}M \right) \sideset{}{^{*}} \sum_{\substack{M<m<2M\\(m,3)=1}} |a_m|^2,$$ where the $*$ over the sum over $m$ indicates that the sum is over squarefree integers, the $\star$ over the sum over $\chi$ indicates that $\chi$ is primitive.
This is one of the bounds for the mean-values of cubic character sums in Theorem 1.4 in [@BaYo].
\[Huxley\] \[huxleylemma\] Let $K$ be a number field and $\mathfrak{r}$ denote an ideal in $K$. Suppose $u(\mathfrak{r})$ is a complex-valued function defined on the set of ideals in $K$. We have $$\sum_{\mathcal{N}(\mathfrak{f})\leq Q}\frac{\mathcal{N}(\mathfrak{f})}{\Phi(\mathfrak{f})} \sideset{}{^{\star}}\sum_{\chi \bmod{\mathfrak{f}}} \left| \sum_{\mathcal{N}(\mathfrak{r})\leq z}u(\mathfrak{r})\chi(\mathfrak{r}) \right|^2 \ll (z+Q^2)\sum_{\mathcal{N}(\mathfrak{r}) \leq z}|u(\mathfrak{r})|^2,$$ where $\mathcal{N}(\mathfrak{f})$ denotes the norm of the ideal $\mathfrak{f}$, $\Phi(\mathfrak{f})$ is Euler’s totient function generalized to the setting of number fields, the $\star$ over the sum over $\chi$ indicates that $\chi$ is a primitive character of the narrow ideal class group modulo $\mathfrak{f}$ and the implicit constant depends on $K$.
This is Theorem 1 of [@Hux].
\[dual\] Let $T=[t_{mn}]$ be a finite square matrix with entries from the complex numbers. The following two statements are equivalent:
1. For any complex numbers $\{ a_n \}$, we have $$\sum_m \left| \sum_n a_n t_{mn} \right|^2 \leq D \sum_n |a_n|^2.$$
2. For any complex numbers $\{ b_n \}$, we have $$\sum_n \left| \sum_m b_m t_{mn} \right|^2 \leq D \sum_m |b_m|^2.$$
This is a standard result. See Theorem 228 in [@GHHJELGP].
\[zerofree\] Let $K/{\mathbb{Q}}$ be a number field, $\xi$ a Hecke Grossencharakter modulo $(\mathfrak{m}, \Omega)$ where $\mathfrak{m}$ is a non-zero integral ideal in $K$ and $\Omega$ is a set of real infinite places where $\xi$ is ramified. Let the conductor $\Delta = |d_{K}|N_{K/{\mathbb{Q}}}\mathfrak{m}$. There exists an absolute effective constant $c'>0$ such that the L-function $L(\xi,s)$ of degree $d = [K:{\mathbb{Q}}]$ has at most a simple real zero in the region $$\sigma > 1 - \frac{c'}{d\log \Delta(|t|+3)}.$$ The exceptional zero can occur only for a real character and it is $< 1$.
This is Theorem 5.35 of [@HIEK].
\[Perron\]\[perron\] Suppose that $y \not= 1$ is a positive real number and let $$\delta(y) = \begin{cases}
1, & y > 1, \\
& \\
0, & \mbox{otherwise.}
\end{cases}$$ Furthermore, let $c>0$ and $T>0$. Then $$\frac{1}{2 \pi i}\int_{c-iT}^{c+iT}\frac{y^s}{s}ds = \delta(y) + O\left(y^c \min\left(1, T^{-1}\left|\log y\right|^{-1}\right)\right).$$
This is a standard result. See Chapter 17 in [@HD].
The Major Arcs
==============
The major arcs are defined by $$\label{majorarcdef}
{\mathfrak{M}}= \bigcup_{q \leq Q_1}\bigcup_{\substack{a=1 \\ \gcd (a,q) = 1}}^{q} J_{q,a}$$ where $$J_{q,a} = \left[\frac{a}{q}-\frac{1}{qQ},\frac{a}{q}+\frac{1}{qQ} \right] , \; Q_1 = (\log x)^c \; \; \mbox{and} \; \; Q = x^{1 - \varepsilon},$$ for some $c> 0$ fixed and suitable. Note that if $x$ is sufficiently large, $Q> Q_1$ and the intervals $J_{q,a}$ are disjoint. We shall henceforth assume that this is the case.
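To justify the disjointness, note that any two distinct fractions $a/q \neq a'/q'$ with $q,q' \leq Q_1$ satisfy $$\left|\frac{a}{q}-\frac{a'}{q'}\right| \geq \frac{1}{qq'} \geq \frac{1}{Q_1^2},$$ while each interval $J_{q,a}$ has length at most $2/Q$; hence the intervals are pairwise disjoint as soon as $2/Q < Q_1^{-2}$, which holds for $x$ sufficiently large since $Q=x^{1-\varepsilon}$ and $Q_1=(\log x)^c$.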
For $\alpha \in {\mathfrak{M}}$, we write $$\alpha = \frac{a}{q} + \beta, \; \mbox{with} \; |\beta| \leq \frac{1}{qQ}.$$
Now set $$S_1(\alpha) = \sum_{m \leq z}\Lambda(m)e(\alpha m) \; \; \; \mbox{and} \; \; \; S_2(\alpha) = \sum_{n \leq x}e(-\alpha n^3).$$
$S_1(\alpha)$ has been computed in Section 3 of [@SBLZ] and we have $$S_1( \alpha ) = T_1(\alpha) + E_1(\alpha) + O \left( \log^2 z \right)$$ where $$\label{T1E1def}
T_1(\alpha) = \frac{\mu(q)}{\varphi(q)}\sum_{m \leq z}e(\beta m) \;\;\; \mbox{and} \;\;\; E_1(\alpha) = \frac{1}{\varphi(q)} \sum_{\chi \bmod{q}} \tau(\overline{\chi}) \chi(a) \sum_{m\leq z}^{\sharp} \chi(m) \Lambda(m) e (\beta m)$$ with $\varphi$ being the Euler totient function, $$\label{gausssumdef}
\tau(\chi) = \sum_{r \bmod q}\chi(r)e\left(\frac{r}{q}\right)$$ the Gauss sum and the $\sharp$ of the summation symbol in $E_1(\alpha)$ having the same meaning as that in Lemma \[mikawalem\].
Now to compute $S_2(\alpha)$, we first note the following well-known relations between additive and multiplicative characters. $$\label{addmultcharrel}
e \left( \frac{a}{q} m \right) = \frac{1}{\varphi(q)} \sum_{\chi \bmod{q}} \chi(am) \tau(\overline{\chi})$$ provided that $\gcd (am, q) =1$. Therefore, using , we have $$S_2(\alpha) = \sum_{n \leq x}e\left(-\left(\frac{a}{q} + \beta\right)n^3\right) = \sum_{d|q}\frac{1}{\varphi(q_1^*)}\sum_{\chi \bmod q_1^*}\chi(-ad^*)\tau(\bar{\chi})\sum_{\substack{n\leq x\\ \gcd(n,q)=d}}\chi^3(n^{*})e(-\beta n^3),$$ where $$q^* = \frac{q}{d}, \; n^* = \frac{n}{d}, \; d^* = \frac{d^2}{\gcd(d^2, q^*)} \; \;
\mbox{and} \; \; q_1^* = \frac{q^*}{\gcd(d^2, q^*)}.$$ So $$\begin{aligned}
S_2(\alpha) &=& \sum_{d|q}\frac{1}{\varphi(q_1^*)}\sum_{\substack{\chi \bmod q_1^*\\ \chi^3 = \chi_0}}\chi(-ad^*)\tau(\bar{\chi})\sum_{\substack{n\leq x\\ \gcd(n,q)=d}}e(-\beta n^3)\\
&& \hspace*{1cm} +\sum_{d|q}\frac{1}{\varphi(q_1^*)}\sum_{\substack{\chi \bmod q_1^*\\ \chi^3 \not= \chi_0}}\chi(-ad^*)\tau(\bar{\chi})\sum_{\substack{n\leq x\\ \gcd(n,q)=d}}\chi^3(n^{*})e(-\beta n^3)\\
&=& T_2(\alpha) + E_2(\alpha),\end{aligned}$$ say. Now performing some computations similar to those in Section 3 of [@SBLZ], we have $$T_2(\alpha) = \sum_{d|q}\frac{1}{\varphi(q_1^*)} \sum_{\substack{l \bmod q_1^* \\ \gcd(l,q_1^*)=1}}e\left(-\frac{ad^*l^3}{q_1^*}\right)\sum_{\substack{n\leq x\\ \gcd(n,q)=d}}e(-\beta n^3).$$ Therefore, $$\label{majorarc1}
\begin{split}
\int\limits_{{\mathfrak{M}}} \sum_{m \leq z} & \Lambda(m)e(\alpha m)\sum_{n\leq x}e(-\alpha (n^3+k)) {\mathrm{d}}\alpha \\
& = \int\limits_{{\mathfrak{M}}} \left( T_1(\alpha) + E_1( \alpha) + O \left( \log^2 x \right) \right) \left( T_2(\alpha) + E_2(\alpha) \right) e(-\alpha k) {\mathrm{d}}\alpha .
\end{split}$$
The Singular Series
===================
The main term will be given by the following $$\begin{split}
\int\limits_{{\mathfrak{M}}}T_1(\alpha)& T_2(\alpha)e(-k\alpha) {\mathrm{d}}\alpha \\
& = \sum_{q \leq Q_1}\frac{\mu(q)}{\varphi(q)}\sum_{\substack{a \bmod q \\ \gcd(a,q)=1}}e\left(-\frac{ak}{q}\right)\sum_{d|q}\frac{1}{\varphi(q_1^*)}\sum_{\substack{l \bmod q_1^*\\ \gcd(l,q_1^*)=1}}e\left(-\frac{ad^*l^3}{q_1^*}\right)\int\limits_{|\beta| < \frac{1}{qQ}}\Pi_{q,d}(\beta) {\mathrm{d}}\beta
\end{split}$$ where $$\Pi_{q,d}(\beta) = \sum_{m \leq z}e(\beta m)\sum_{\substack{n \leq x\\ \gcd(n,q)=d}}e(-\beta n^3)e(-\beta k).$$ As in [@SBLZ], we have $$\int\limits_{|\beta| < \frac{1}{qQ}}\Pi_{q,d}(\beta) {\mathrm{d}}\beta = \frac{\varphi(q/d)}{q} x + O\left(\left(\frac{qQx}{d}\right)^{1/2} \right).$$ Hence $$\label{T1T2-2}
\begin{split}
\int\limits_{{\mathfrak{M}}} & T_1(\alpha) T_2(\alpha)e(-k\alpha) {\mathrm{d}}\alpha \\
&= \sum_{q \leq Q_1}\frac{\mu(q)}{\varphi(q)}\sum_{\substack{a \bmod q\\ \gcd(a,q)=1}}e\left(-\frac{ak}{q}\right)\sum_{d|q}\frac{1}{\varphi(q_1^*)} \sum_{\substack{l \bmod q_1^*\\ \gcd(l,q_1^*)=1}}e\left(-\frac{ad^*l^3}{q_1^*}\right)\left(\frac{\varphi(q/d)}{q}x+O\left(\left(\frac{qQx}{d}\right)^{1/2}\right)\right)
\end{split}$$ Due to the presence of $\mu(q)$, it suffices to consider only the $q$’s that are square-free. Therefore, we immediately get $d^* = d^2$ and $q_1^* = q^* = q/d$. Thus the right-hand side of (\[T1T2-2\]) becomes $$\label{2}
x\sum_{q \leq Q_1}\frac{\mu(q)}{\varphi(q)q}\sum_{\substack{a \bmod q\\ \gcd(a,q)=1}}e\left(-\frac{ak}{q}\right)\sum_{d|q}\sum_{\substack{l \bmod q/d\\ \gcd(l,q/d)=1}}e\left(-\frac{a (dl)^3}{q}\right)+ O\left( \sqrt{xQ}(\log x)^{c_1} \right).$$ We observe that $$\label{Sigmadef}
\Sigma(q) := \sum_{\substack{a \bmod q\\ \gcd(a,q)=1}}e\left(-\frac{ak}{q}\right)\sum_{d|q}\sum_{\substack{l \bmod q/d\\ \gcd(l,q/d)=1}}e\left(-\frac{a(dl)^3}{q}\right) = \sum_{\substack{a \bmod q\\ \gcd(a,q)=1}}e\left(-\frac{ak}{q}\right) \sum_{r \bmod q}e\left(-\frac{ar^3}{q}\right).$$ Suppose that $q = p$, a prime number. Then $$\Sigma(p) = \sum_{\substack{a \bmod p\\ \gcd(a,p)=1}}\sum_{r \bmod p}e\left(\frac{-a(k+r^3)}{p}\right) = (p-1)n_{k,p} + (-1)(p - n_{k,p}) = p(n_{k,p} - 1)$$ where $n_{k,p}$ is the number of solutions to $k+r^3 \equiv 0 \pmod{p}$ in ${\mathbb{Z}}/p{\mathbb{Z}}$. Therefore, letting $\chi_{1,p}$ and $\chi_{2,p}$ be the two cubic characters not equal to $\chi_0$ for a prime $p \equiv 1$ mod $3$, $$\Sigma(p) = \begin{cases}
p \left( \chi_{1,p}(-k) + \chi_{2,p}(-k) \right), & \mbox{if }p \equiv 1 \pmod{3} \\
& \\
0, & \mbox{if }p = 3 \mbox{ or } p \equiv 2 \pmod{3}
\end{cases}$$ since the map $r \longrightarrow r^3$ is a bijection on ${\mathbb{Z}}/p{\mathbb{Z}}$ when $p = 3$ or $p \equiv 2 \pmod{3}$. Furthermore, $\Sigma(q)$ is multiplicative in $q$. To see this, we have if $\gcd(q_1,q_2)=1$, then $$\Sigma(q_1)\Sigma(q_2) = \sum_{r_1\bmod q_1}\sum_{\substack{a_1 \bmod q_1\\(a_1,q_1)=1}}\sum_{r_2\bmod q_2}\sum_{\substack{a_2 \bmod q_2\\(a_2,q_2)=1}}e\left( f(k,a_1,a_2,q_1,q_2,r_1,r_2\right)$$ where $$\begin{split}
f(k,a_1,a_2,q_1,q_2,r_1,r_2) & = -k\frac{a_1q_2+a_2q_1}{q_1q_2}-\frac{a_1q_2(q_2r_1)^3+a_2q_1(q_1r_2)^3}{q_1q_2} \\
& \equiv -k \frac{a_1q_2+a_2q_1}{q_1q_2} - \frac{(a_1q_2+a_2q_1)(q_1r_2+q_2r_1)^3}{q_1q_2} \pmod{1}.
\end{split}$$ It is easy to observe that $a_1q_2+a_2q_1$ runs over the primitive residue classes modulo $q_1q_2$ as $a_1$ and $a_2$ run over the primitive residue classes modulo $q_1$ and $q_2$, respectively. Similarly, $q_2r_1+q_1r_2$ runs over all residue classes modulo $q_1q_2$ as $r_1$ and $r_2$ run over the residue classes modulo $q_1$ and $q_2$, respectively. Hence $$\Sigma(q_1) \Sigma(q_2) = \Sigma(q_1q_2).$$ Therefore, we have, for $q$ squarefree, $$\label{Sigmaeval}
\Sigma(q) = \prod_{p|q}p(n_{k,p} - 1) = \begin{cases}
\displaystyle{q\prod_{p|q}(\chi_{1,p}(-k) + \chi_{2,p}(-k))}, & \mbox{if }p \equiv 1 \pmod{3} \mbox{ for all } p|q , \\
& \\
0, & \mbox{otherwise.}
\end{cases}$$
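The identity \eqref{Sigmaeval} is elementary but easy to get wrong in the bookkeeping. The following short script (a verification aid only, playing no role in the argument; the function and variable names are ours) checks numerically that the exponential-sum definition \eqref{Sigmadef} of $\Sigma(q)$ agrees with $\prod_{p\mid q}p\,(n_{k,p}-1)$ for a few squarefree moduli built from primes $p \equiv 1 \pmod{3}$.

\begin{verbatim}
# Numerical sanity check (not part of the proof): compare the exponential-sum
# definition of Sigma(q) with the product formula prod_{p|q} p*(n_{k,p} - 1)
# for squarefree q composed of primes p = 1 (mod 3).
from cmath import exp, pi
from math import gcd

def e_frac(num, den):
    # e(num/den), with the numerator reduced mod den to avoid precision loss
    return exp(2j * pi * ((num % den) / den))

def sigma_exponential(q, k):
    # Sigma(q) = sum_{a mod q, (a,q)=1} e(-ak/q) sum_{r mod q} e(-a r^3/q)
    total = 0
    for a in range(1, q + 1):
        if gcd(a, q) != 1:
            continue
        inner = sum(e_frac(-a * pow(r, 3, q), q) for r in range(q))
        total += e_frac(-a * k, q) * inner
    return total

def n_kp(k, p):
    # number of r in Z/pZ with r^3 + k = 0 (mod p)
    return sum(1 for r in range(p) if (pow(r, 3, p) + k) % p == 0)

def prime_factors(q):
    factors, d = [], 2
    while d * d <= q:
        if q % d == 0:
            factors.append(d)
            while q % d == 0:
                q //= d
        d += 1
    if q > 1:
        factors.append(q)
    return factors

def sigma_product(q, k):
    value = 1
    for p in prime_factors(q):
        value *= p * (n_kp(k, p) - 1)
    return value

if __name__ == "__main__":
    for q in (7, 13, 91, 133):        # squarefree products of primes = 1 mod 3
        for k in (2, 5, 11):
            assert abs(sigma_exponential(q, k) - sigma_product(q, k)) < 1e-6
    print("Sigma(q) agrees with prod_{p|q} p(n_{k,p} - 1) on the tested cases.")
\end{verbatim}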
Hence, combining \eqref{2}, \eqref{Sigmadef} and \eqref{Sigmaeval} yields $$\label{mainterm}
\begin{split}
\int\limits_{{\mathfrak{M}}} T_1(\alpha) T_2(\alpha)e(-k\alpha) {\mathrm{d}}\alpha & =
x\sum_{q \leq Q_1}\frac{\mu(q)}{\varphi(q)q} \Sigma(q) + O\left( \sqrt{Qx}(\log x)^{c_1} \right) \\
&= x\sum_{q =1 }^{\infty}\frac{\mu(q)}{\varphi(q)} \prod_{p|q}(n_{k,p}-1) - x\Psi(k) + O\left( \sqrt{Qx}(\log x)^{c_1} \right) \\
&= \mathfrak{S}(k)x + O\left( |\Psi(k)|x + \sqrt{Qx}(\log x)^{c_1} \right)
\end{split}$$ where $$\label{Psikdef}
\Psi(k) = \sum_{q > Q_1}\frac{\mu(q)}{\varphi(q)} \prod_{p|q} \left( n_{k,p}-1 \right).$$
We note that the infinite sums appearing above are conditionally convergent; this is shown later, in Section 5.
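As an illustration only (the numerical values play no role in the proof), the singular series can be approximated by truncating its Euler product $\prod_{p \equiv 1 \pmod{3}}\left(1-\frac{n_{k,p}-1}{p-1}\right)$, cf. the next section; since the product is only conditionally convergent, the cutoff $P$ in the following sketch is a heuristic choice, and the script and its names are ours.

\begin{verbatim}
# Illustrative truncation of the Euler product for the singular series S(k):
#   S(k) ~ prod_{p <= P, p = 1 mod 3} (1 - (n_{k,p} - 1)/(p - 1)).
# The product is only conditionally convergent, so P is a heuristic cutoff.
def n_kp(k, p):
    # number of r in Z/pZ with r^3 + k = 0 (mod p)
    return sum(1 for r in range(p) if (pow(r, 3, p) + k) % p == 0)

def primes_up_to(limit):
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, flag in enumerate(sieve) if flag]

def singular_series_truncated(k, P):
    value = 1.0
    for p in primes_up_to(P):
        if p % 3 == 1:
            value *= 1.0 - (n_kp(k, p) - 1) / (p - 1)
    return value

if __name__ == "__main__":
    for k in (2, 3, 5):
        print(k, singular_series_truncated(k, 10000))
\end{verbatim}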
The second moment of $\Psi(k)$
==============================
In this section we estimate the second moment of $\Psi(k)$, defined in \eqref{Psikdef}. We shall assume that all $q$’s appearing in the subsequent computations are squarefree and divisible only by primes $p \equiv 1 \pmod{3}$. Dividing the $q$-range into three pieces, we have $$\label{splitPhi}
\sum_{\substack{k \leq y \\ \mu(k)^2 = 1}} |\Psi(k)|^2 \ll \Psi_1 + \Psi_2 + \Psi_3,$$ where $$\Psi_1 = \sum_{k \leq y} \left|\sum_{Q_1 < q \leq U}\frac{\mu(q)}{\varphi(q)}\prod_{p|q} \left(\chi_{1,p}(-k)+\chi_{2,p}(-k) \right)\right|^2,$$ $$\Psi_2 = \sum_{\substack{k \leq y \\ \mu(k)^2 = 1}}\left|\sum_{U < q \leq 2^vU}\frac{\mu(q)}{\varphi(q)}\prod_{p|q}(\chi_{1,p}(-k)+\chi_{2,p}(-k))\right|^2,$$ and $$\Psi_3 = \sum_{k \leq y}\left|\sum_{q > 2^vU}\frac{\mu(q)}{\varphi(q)}\prod_{p|q}(\chi_{1,p}(-k)+\chi_{2,p}(-k))\right|^2$$ with $U$ and $v$ being parameters to be chosen later.
Expanding the modulus square, we have $$\label{Psi1expand}
\begin{split}
\Psi_1 \leq y& \sum_{Q_1 < q \leq U} \frac{\mu(q)^2}{\varphi(q)^2}2^{2\omega(q)} \\
& + \sum_{\substack{Q_1 < q_1,q_2 \leq U\\q_1 \not= q_2}}\frac{\mu(q_1)\mu(q_2)}{\varphi(q_1)\varphi(q_2)}\sum_{k \leq y}\prod_{p|q_1}(\chi_{1,p}(-k)+\chi_{2,p}(-k))\prod_{p|q_2}(\chi_{1,p}(-k)+\chi_{2,p}(-k))
\end{split}$$ where $\omega(q)$ is the number of distinct primes dividing $q$.
Mindful of the well-known estimate $$\label{phiest}
\varphi(q) \gg \frac{q}{\log \log (10 q)},$$ the first term on the right-hand side of \eqref{Psi1expand} can easily be disposed of. We have $$\label{Psi1diag}
y\sum_{Q_1 < q \leq U}\frac{\mu(q)^2}{\varphi(q)^2}2^{2\omega(q)} \ll \frac{y}{Q_1^{1-\varepsilon}}.$$ The second term on the right-hand side of \eqref{Psi1expand} is $$\label{Psi1offdiag}
\begin{split}
\sum_{\substack{Q_1 < q_1,q_2 \leq U\\q_1 \not= q_2}} & \frac{\mu(q_1)\mu(q_2)}{\varphi(q_1)\varphi(q_2)} \sum_{k \leq y}\prod_{p|q_1}(\chi_{1,p}(-k)+\chi_{2,p}(-k))\prod_{p|q_2}(\chi_{1,p}(-k)+\chi_{2,p}(-k)) \\
&\ll \sum_{\substack{Q_1 < q_1,q_2 \leq U\\q_1 \not= q_2}}\frac{\sqrt{q_1q_2}\log(q_1q_2)2^{\omega(q_1)+\omega(q_2)}}{\varphi(q_1)\varphi(q_2)} \ll \log U \left(\sum_{Q_1 < q \leq U}\frac{\sqrt{q}2^{\omega(q)}}{\varphi(q)}\right)^2 \ll U^{1 + \varepsilon}
\end{split}$$ using the Polya-Vinogradov inequality, Lemma \[polyvino\], and \eqref{phiest}. Note that the characters resulting from expanding the products on the left-hand side of \eqref{Psi1offdiag} are of modulus $q_1q_2$ and non-principal since $q_1 \neq q_2$.
Combining \eqref{Psi1expand}, \eqref{Psi1diag} and \eqref{Psi1offdiag}, we arrive at $$\label{Psi1est}
\Psi_1 \ll \frac{y}{Q_1^{1-\varepsilon}} + U^{1+\varepsilon}.$$
To estimate $\Psi_2$, we first use Cauchy’s inequality and get $$\label{Psi2aftercauchy}
\Psi_2 \ll v\sum_{r=1}^{[v+1]}\sum_{\substack{k \leq y \\ \mu(k)^2 = 1}}\left|\sum_{2^{r-1}U < q \leq 2^rU}\frac{\mu(q)}{\varphi(q)}\prod_{p|q}(\chi_{1,p}(-k)+\chi_{2,p}(-k))\right|^2.$$ Now expanding the product over $p$ inside the modulus signs in \eqref{Psi2aftercauchy}, we have $$\label{chiexpand}
\sum_{\substack{k \leq y \\ \mu(k)^2 = 1}}\left|\sum_{2^{r-1}U < q \leq 2^rU}\frac{\mu(q)}{\varphi(q)}\prod_{p|q}(\chi_{1,p}(-k)+\chi_{2,p}(-k))\right|^2
= \sum_{\substack{k \leq y \\ \mu(k)^2 = 1}}\left| \sum_{2^{r-1}U < q \leq 2^rU} \ \sideset{}{^{\star}}\sum_{\substack{\chi \bmod q\\\chi^3 = \chi_0\\\chi \not= \chi_0}}\frac{\mu(q)}{\varphi(q)}\chi(k)\right|^2.$$ Note that all the resulting $\chi$’s appearing on the right-hand side of \eqref{chiexpand} are primitive modulo $q$. Using Lemma \[B&Y\] together with the duality principle, Lemma \[dual\], the right-hand side of \eqref{chiexpand} is $$\ll y^{\varepsilon} \left( 2^{r-1}U \right)^{2/9 + \varepsilon} + y^{1 + \varepsilon}\left(\frac{1}{2^{r-1}U}\right)^{1/3 - \varepsilon}.$$ Summing the above over $r$ from $1$ to $R =[\log_2 \left( (y^{3 + 3\varepsilon/2})/U \right) ]$, we get $$\label{Psi2est1}
\sum_{r=1}^{R}\sum_{\substack{k \leq y \\ \mu(k)^2 = 1}}\left|\sum_{2^{r-1}U < q \leq 2^rU}\frac{\mu(q)}{\varphi(q)}\prod_{p|q}\left( \chi_{1,p}(-k)+\chi_{2,p}(-k) \right)\right|^2 \ll y^{2/3 + \varepsilon} + \frac{y^{1 + \varepsilon}}{U^{1/3-\varepsilon}}.$$ For the $r$’s with $R < r \leq [v+1]$, we shall use the large sieve for number fields, Lemma \[huxleylemma\]. It is easy to reduce the expression in question to a sum of similar shape with the additional summation condition $\gcd (k, 3)=1$ included. Also recall that $q$ is assumed to be square-free and have only prime factors that are congruent to 1 modulo 3. Hence it suffices to estimate $$\label{1}
\sum_{\substack{n \in {\mathbb{Z}}[\omega] \\ \mathcal{N}(n) \leq y^2 \\ n \equiv 1 \bmod{3} \\ \mu(n)^2=1}} \left| \sum_{\substack{\pi \in {\mathbb{Z}}[\omega] \text{ prime}\\2^{r-1}U < \mathcal{N}(\pi) = q \leq 2^rU}} a_{\pi} \chi_{\pi}(n)\right|^2,$$ where $\omega = e(1/3)$ here and in what follows, $a_{\pi} = \mu(q)/\varphi(q)$, and $\chi_{\pi}(n) = \left( \frac{n}{\pi} \right)_3$, with $( \frac{\cdot}{\cdot} )_3$ denoting the cubic residue symbol. Recall that by cubic reciprocity $\chi_{\pi}(n) = \chi_n(\pi)$ for all the $n$ and $\pi$ appearing above. Using this, Lemma \[huxleylemma\] and \eqref{phiest}, we have that \eqref{1} is majorized by $$\ll (y^4 + 2^rU)\sum_{\substack{\mathcal{N}(\pi) = q\\2^{r-1}U < q \leq 2^rU}}\frac{1}{\varphi(q)^2} \ll (y^4 + 2^rU) \frac{\log\log(2^rU)\log(2^rU)}{2^rU}.$$
Therefore, $$\sum_{\substack{k \leq y \\ \mu(k)^2 = 1}}\left|\sum_{2^{r-1}U < q \leq 2^rU}\frac{\mu(q)}{\varphi(q)}\prod_{p|q}(\chi_{1,p}(-k)+\chi_{2,p}(-k))\right|^2 \ll \left( 2^rU + y^4 \right)\frac{\log\log(2^rU)\log(2^rU)}{2^rU}.$$ Summing the above over $r$ from $R$ to $[v+1]$, we have $$\label{Psi2est2}
\begin{split}
\sum_{r=R}^{[v+1]} \sum_{\substack{k \leq y \\ \mu(k)^2 = 1}} & \left|\sum_{2^{r-1}U < q \leq 2^rU}\frac{\mu(q)}{\varphi(q)}\prod_{p|q}(\chi_{1,p}(-k)+\chi_{2,p}(-k))\right|^2 \\
&\ll v^3 + v^2\log U + v(\log U)^2 + y^{1-3\varepsilon/2}\left( v+ \log U \right) \log(v+ \log U).
\end{split}$$
Therefore, \eqref{Psi2aftercauchy}, \eqref{Psi2est1} and \eqref{Psi2est2} give that $$\label{Psi2est}
\Psi_2 \ll v\left(y^{2/3 + \varepsilon} + \frac{y^{1 + \varepsilon}}{U^{1/3-\varepsilon}} + v^3 + v^2\log U + v(\log U)^2 + y^{1-3\varepsilon/2}(v+ \log U)\log(v+ \log U)\right).$$
It still remains to consider $\Psi_3$. First, note that every prime $p \equiv 1 \bmod 3$ splits into prime ideals in ${\mathbb{Z}}[\omega]$ as $p = \pi_{p,1}\pi_{p,2}$. Let us consider $$\begin{split}
f(s,k) & = \prod_{p \equiv 1 \bmod 3}\left(1-\frac{\chi_{1,p}(-k) + \chi_{2,p}(-k)}{(p-1)p^s}\right) = \prod_{p \equiv 1 \bmod 3}\left(1-\frac{n_{k,p} - 1}{(p-1)p^s}\right) \\
& = \sum_{q}\frac{\mu(q)}{\varphi(q)q^s} \prod_{p|q}(n_{k,p}-1) = \prod_{\substack{p \equiv 1 \bmod 3\\ p = \pi_{p,1}\pi_{p,2}}}\left(1-\frac{\left(\frac{k}{\pi_{p,1}}\right)_3+\left(\frac{k}{\pi_{p,2}}\right)_3}{(p-1)p^s}\right).
\end{split}$$ Clearly, $f(0,k) = \mathfrak{S}(k)$ and $f(s,k)$ has no poles with $\Re(s) >0$. Set $$b_q = \frac{\mu(q)}{\varphi(q)} \prod_{p|q}(n_{k,p}-1)$$ so that the Dirichlet series generated by $b_q$ is precisely $f(s,k)$.
Consider the Hecke $L$-function $$\begin{aligned}
&&L\left(s+1,\left(\frac{k}{\cdot}\right)_3\right) = \prod_{\pi}\left(1-\frac{\left(\frac{k}{\pi}\right)_3}{N(\pi)^{s+1}}\right)^{-1}\nonumber\\
&=& \left(1-\frac{\left(\frac{k}{(1-e(1/3))}\right)_3}{3^{s+1}}\right)^{-1}\prod_{\substack{p \equiv 1 \bmod 3\\p = \pi_{p,1}\pi_{p,2}}}\left(1-\frac{\left(\frac{k}{\pi_{p,1}}\right)_3}{p^{s+1}}\right)^{-1} \left(1-\frac{\left(\frac{k}{\pi_{p,2}}\right)_3}{p^{s+1}}\right)^{-1}\prod_{p \equiv 2 \bmod 3}\left(1-\frac{\left(\frac{k}{p}\right)_3}{p^{2s+2}}\right)^{-1},\end{aligned}$$ where the product in the first line is over all prime ideals $\pi$ of ${\mathbb{Z}}[\omega]$. Set $$f(s,k) = L^{-1}\left( s+1,\left(\frac{k}{\cdot}\right)_3 \right)h(s,k),$$ which defines $h(s,k)$. Standard arguments show that $h(s,k)$ is bounded by an absolute constant for $\Re (s) > -1/2 + \delta$, for any fixed $\delta >0$.
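For the reader's convenience, we indicate the bookkeeping behind the last assertion. For a split prime $p \equiv 1 \pmod{3}$, writing $\chi_j = \left(\frac{k}{\pi_{p,j}}\right)_3$, the corresponding Euler factor of $h(s,k) = f(s,k)\,L\left(s+1,\left(\frac{k}{\cdot}\right)_3\right)$ is $$\left(1-\frac{\chi_1+\chi_2}{(p-1)p^s}\right)\left(1-\frac{\chi_1}{p^{s+1}}\right)^{-1}\left(1-\frac{\chi_2}{p^{s+1}}\right)^{-1} = 1-\frac{\chi_1+\chi_2}{(p-1)p^{s+1}}+O\left(p^{-2\Re(s)-2}\right),$$ using $\frac{1}{p^{s+1}}-\frac{1}{(p-1)p^{s}}=-\frac{1}{(p-1)p^{s+1}}$, while the factor at $p=3$ is bounded and the factors at the primes $p \equiv 2 \pmod{3}$ are $1+O\left(p^{-2\Re(s)-2}\right)$. Hence the product defining $h(s,k)$ converges absolutely and is uniformly bounded for $\Re(s) \geq -1/2+\delta$.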
Using Perron’s formula, Lemma \[perron\], we have for any $C>0$, $$\label{8}
\begin{split}
\sum_{y_1 \leq q \leq y_2}b_q = \frac{1}{2\pi i}\int\limits_{C-iT}^{C+iT}& L^{-1}\left(s+1,\left(\frac{k}{\cdot}\right)_3\right)h(s,k)\frac{y_2^s - y_1^s}{s}ds \\
& + O\left( \sum_{j=1}^2 \sum_{q}|b_q|\left( \frac{y_j}{q} \right)^C\min \left( 1,T^{-1} \left| \log \frac{y_j}{q} \right|^{-1} \right) \right)
\end{split}$$
Now shift the line of integration to the vertical line through $\sigma-iT$ and $\sigma + iT$, with $$\sigma = -\frac{c'}{2 d\log (\Delta(|T|+3))},$$ where $c'$, $d$ and $\Delta$ are as given in Lemma \[zerofree\]. Note that the same lemma also ensures that we do not pick up any residues from the zeros of the Hecke $L$-function. Moreover, there is no contribution from $s=0$, since $(y_2^s - y_1^s)/s$ is holomorphic there. We shall only estimate the contributions from the terms with $y_1$. The terms with $y_2$ can be treated in the same way and satisfy similar bounds. The integral along the vertical line segment $l$ from $\sigma-iT$ to $\sigma + iT$ is majorized by $$\label{9}
\sup_{s \in l}\left|L^{-1}\left(s+1,\left(\frac{k}{\cdot}\right)_3\right)\right|\log(\Delta(|T|+3)) \exp \left( -\frac{c' \log y_1}{2d \log (\Delta(|T|+3))} \right) \ll \exp \left( -c_1 \sqrt{\log y_1} \right),$$ for some fixed $c_1 >0$. The last inequality comes from choosing $T$ to satisfy $\log y_1 = (\log T)^2$, together with the fact that the inverse of the $L$-function on that line segment is bounded by a fixed power of $\log T$, by an analysis similar to that for the inverse of the Riemann zeta function in a zero-free region, as in Theorems 3.10 and 3.11 of [@ET].
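To spell the last step out: with $\log y_1 = (\log T)^2$ one has $\log(\Delta(|T|+3)) \ll \log T = \sqrt{\log y_1}$ once $y_1$ is large in terms of $\Delta$, whence $$\exp \left( -\frac{c' \log y_1}{2d \log (\Delta(|T|+3))} \right) \ll \exp\left(-c\sqrt{\log y_1}\right)$$ for some $c = c(c',d)>0$, and the remaining factors in \eqref{9}, being $O\left((\log T)^{O(1)}\right)$, are absorbed after decreasing the constant slightly.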
The integral along the horizontal line segments $l_{\pm}$ from $\sigma\pm iT$ to $C\pm iT$ is majorized by $$\label{10}
\frac{ y_1^C \sup_{s \in l_{\pm}}\left|L^{-1}\left(s+1,\left(\frac{k}{\cdot}\right)_3\right)\right|}{T} \ll \exp \left( -c_2\sqrt{\log y_1} \right),$$ with some $c_2 >0$, upon setting $C= 1/\log y_1$. Furthermore, with our choices of $C$ and $T$, the $O$-term in \eqref{8} is majorized by $$\label{Otermest}
\exp \left( -c_3\sqrt{\log y_1} \right)$$ for some $c_3>0$. The terms with $y_2$ in \eqref{8} satisfy similar bounds.
Now, taking $y_1 = 2^v U$ and letting $y_2$ tend to infinity, we have, upon combining \eqref{9}, \eqref{10} and \eqref{Otermest}, that $$\label{Psi3est}
\Psi_3 = \sum_{k \leq y} \left| \sum_{q > 2^vU}\frac{\mu(q)}{\varphi(q)}\prod_{p|q}(\chi_{1,p}(-k)+\chi_{2,p}(-k)) \right|^2 \ll y \exp \left( -c_4 \sqrt{\log (2^vU)} \right),$$ for some $c_4 >0$.
Now setting $$U=\sqrt{y}, \; v = \log_2\frac{\exp(y^{\varepsilon/2})}{U}$$ and combining \eqref{splitPhi}, \eqref{Psi1est}, \eqref{Psi2est} and \eqref{Psi3est}, we have $$\label{Phiest}
\sum_{\substack{k \leq y \\ \mu(k)^2 =1}} \left| \Psi(k) \right|^2 \ll \frac{y}{(\log x)^{c_5}},$$ for all $c_5 >0$.
The Error Terms from the Major Arcs
===================================
We now consider the second moment over $k$ of the error terms from the major arcs. By Bessel’s inequality, Lemma \[bessineq\], applied to the vector space $V=L^2([0,1])$ with $\xi(\alpha) = T_1(\alpha)E_2(\alpha)$ for $\alpha\in \mathfrak{M}$ and $\xi(\alpha) = 0$ for $\alpha \not\in \mathfrak{M}$, and $\phi_k=e(-\alpha k)$, we have $$\begin{split}
\sum_{k \leq y} & \left|\int\limits_{\mathfrak{M}}T_1(\alpha)E_2(\alpha)e(-\alpha k) {\mathrm{d}}\alpha\right|^2 \ll \int\limits_{\mathfrak{M}} \left| T_1(\alpha)E_2(\alpha) \right|^2 {\mathrm{d}}\alpha \\
&= \sum_{q < Q_1}\sum_{\substack{a \bmod q\\ \gcd(a,q)=1}} \left( \ \int\limits_{|\beta| < \delta}\left|T_1\left(\frac{a}{q} + \beta\right)E_2\left(\frac{a}{q}+\beta\right)\right|^2 {\mathrm{d}}\beta + \int\limits_{\delta < |\beta| < 1/(qQ)}\left|T_1\left(\frac{a}{q}+\beta\right)E_2\left(\frac{a}{q}+\beta\right)\right|^2 {\mathrm{d}}\beta \right) ,
\end{split}$$ where $\delta$ is to be chosen later. Because $T_1(\alpha)$, defined earlier, involves a geometric sum, it can be observed that $$T_1 \left( \frac{a}{q} + \beta \right) \ll \min \left( z, |\beta|^{-1} \right).$$
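For completeness, we record the elementary estimate behind this observation (with $T_1$ as defined in the earlier sections): for $0<|\beta|\leq 1/2$, $$\left|\sum_{m \leq z} e(\beta m)\right| = \left|\frac{e\left(\beta\left(\lfloor z \rfloor + 1\right)\right)-e(\beta)}{e(\beta)-1}\right| \leq \frac{1}{\left|\sin (\pi \beta)\right|} \leq \frac{1}{2|\beta|},$$ while the sum is trivially at most $z$; together these give bounds of the shape $\min\left(z, |\beta|^{-1}\right)$.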
Therefore, $$\label{T1E2aftersplit}
\begin{split}
\sum_{k \leq y} & \left|\int\limits_{\mathfrak{M}}T_1(\alpha)E_2(\alpha)e(-\alpha k) {\mathrm{d}}\alpha\right|^2 \\
& \ll z^2 \sum_{q < Q_1}\sum_{\substack{a \bmod q\\ \gcd(a,q)=1}} \ \int\limits_{|\beta| < \delta}\left|E_2\left(\frac{a}{q}+\beta\right)\right|^2 {\mathrm{d}}\beta + \delta^{-2} \int\limits_{{\mathfrak{M}}}\left|E_2\left(\frac{a}{q}+\beta\right)\right|^2 {\mathrm{d}}\beta .
\end{split}$$ Performing computations similar to those on page 978 of [@SBLZ] to estimate the integrals on the right-hand side of \eqref{T1E2aftersplit}, we find that the right-hand side of \eqref{T1E2aftersplit} is $$\label{T1E2intest}
\ll z^2 x (\log x)^{c_6} \delta^2 + \frac{x (\log x)^{c_7}}{\delta^2 Q^2},$$ for some fixed positive $c_6$ and $c_7$. Now, taking $\delta = 1/\sqrt{z}$, \eqref{T1E2aftersplit} and \eqref{T1E2intest} give $$\label{T1E2est}
\sum_{k \leq y} \left|\int\limits_{\mathfrak{M}}T_1(\alpha)E_2(\alpha)e(-\alpha k) {\mathrm{d}}\alpha\right|^2 \ll zx(\log x)^{c_6} + \frac{zx(\log x)^{c_7}}{Q^2}.$$
Again, by Bessel’s inequality, Lemma \[bessineq\], we have $$\label{T2E1bess}
\begin{split}
\sum_{k \leq y}& \left|\int\limits_{\mathfrak{M}}T_2(\alpha)E_1(\alpha)e(-\alpha k) {\mathrm{d}}\alpha\right|^2 \\
& \ll \int\limits_{\mathfrak{M}}|T_2(\alpha)E_1(\alpha) |^2 {\mathrm{d}}\alpha \ll \sup_{\alpha \in \mathfrak{M}}|T_2(\alpha)|^2\int\limits_{\mathfrak{M}}|E_1(\alpha)|^2 {\mathrm{d}}\alpha \ll x^2\int\limits_{\mathfrak{M}}|E_1(\alpha)|^2 {\mathrm{d}}\alpha.
\end{split}$$ In a manner similar to the estimate of the analogous terms in [@SBLZ], we get, using Lemma \[mikawalem\], that $$\label{4}
\int\limits_{\mathfrak{M}}|E_1(\alpha)|^2 {\mathrm{d}}\alpha \ll \sum_{q< Q_1}\frac{q}{\varphi(q)}(qQ)^{-2}\mathfrak{J}(q,Q/2) + Q_1^3Q(\log x)^2 \ll \sum_{q<Q_1} \frac{q}{\varphi(q)}z(\log z)^{-A},$$ for any $A>0$, with the implied constant depending on $A$. Here $\mathfrak{J}$ is the same as that appearing in Lemma \[mikawalem\]. Thus, \eqref{T2E1bess} and \eqref{4} give that $$\label{T2E1est}
\sum_{k \leq y}\left|\int\limits_{\mathfrak{M}}T_2(\alpha)E_1(\alpha)e(-\alpha k) d\alpha\right|^2 \ll \frac{x^2z}{(\log x)^{c_8}}$$ for any $c_8 > 0$.
Finally, using Bessel’s inequality, Lemma \[bessineq\], and the trivial estimate for $E_2(\alpha)$, we have $$\label{E1E2est}
\sum_{k \leq y}\left|\int\limits_{\mathfrak{M}}E_1(\alpha)E_2(\alpha)e(-\alpha k) {\mathrm{d}}\alpha\right|^2 \ll \int\limits_{\mathfrak{M}}|E_1(\alpha)E_2(\alpha) |^2 {\mathrm{d}}\alpha \ll \sup_{\alpha \in \mathfrak{M}}|E_2(\alpha)|^2\int\limits_{\mathfrak{M}}|E_1(\alpha) |^2 {\mathrm{d}}\alpha \ll \frac{x^2 z}{(\log x)^{c_9}}$$ for any $c_9 > 0$, with the implied constant depending on $c_9$.
Combining \eqref{T1E2est}, \eqref{T2E1est} and \eqref{E1E2est}, we have that $$\label{majorarcerrest}
\sum_{k \leq y} \left|\int\limits_{\mathfrak{M}}(T_1(\alpha)E_2(\alpha) + T_2(\alpha)E_1(\alpha) + E_1(\alpha)E_2(\alpha))e(-\alpha k) d\alpha\right|^2 \ll zx(\log x)^{c_6} + \frac{zx(\log x)^{c_7}}{Q^2} + \frac{x^2z}{(\log x)^{c_{10}}},$$ for any $c_{10} >0$ with the implied constant depending on $c_{10}$.
The Minor Arcs
==============
Finally, it still remains to consider the contribution from the minor arcs. We have $$\label{minorarc}
\sum_{\substack{k \leq y\\ \mu^2(k) = 1}}\left| \int\limits_{{\mathfrak{m}}} S_1(\alpha) S_2(\alpha)e(-\alpha k) {\mathrm{d}}\alpha \right|^2= \sum_{\substack{k \leq y\\ \mu^2(k) = 1}}\left|\int\limits_{{\mathfrak{m}}} \sum_{m \leq z}\Lambda(m)e(\alpha m)\sum_{n \leq x}e(-\alpha(n^3 + k)) {\mathrm{d}}\alpha\right|^2,$$ where $${\mathfrak{m}}= \left[ \frac{1}{Q} , 1+ \frac{1}{Q} \right] - {\mathfrak{M}}.$$ By Bessel’s inequality, Lemma \[bessineq\], \eqref{minorarc} is $$\label{minorarcbess}
\ll \int\limits_{\mathfrak{m}}|S_1(\alpha)S_2(\alpha)|^2 {\mathrm{d}}\alpha \ll \sup_{\alpha \in \mathfrak{m}}|S_2(\alpha)|^2\int\limits_0^1\left| S_1(\alpha) \right|^2 {\mathrm{d}}\alpha \ll \sup_{\alpha \in \mathfrak{m}}|S_2(\alpha)|^2 z\log z.$$
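For completeness, the last step uses orthogonality together with Chebyshev's bound $\sum_{m \leq z}\Lambda(m) \ll z$: $$\int\limits_0^1 \left| S_1(\alpha) \right|^2 {\mathrm{d}}\alpha = \sum_{m \leq z}\Lambda(m)^2 \leq \log z \sum_{m \leq z}\Lambda(m) \ll z\log z.$$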
Using the Weyl shift, Lemma \[weylshift\], we get $$\label{minorarcweyl}
\left| S_2 (\alpha) \right|^4 \ll x \underset{-x < r_1,r_2 < x}{\sum\sum} \min\left(x, \frac{1}{\| 6\alpha r_1r_2\|}\right).
By Dirichlet approximation, there exists a rational approximation to $\alpha$ of the type $$\left|\alpha - \frac{a}{q}\right| \leq \frac{1}{12x^2q}$$ with $\gcd(a,q) = 1$ and $1 \leq q \leq 12x^2$. But since $\alpha \in \mathfrak{m}$, we can assume that $q > Q_1$. Hence for $-x < r_1,r_2 < x$, $$\left|6r_1r_2\alpha - 6r_1r_2\frac{a}{q}\right| \leq \frac{1}{2q},$$ which yields, whenever $q \nmid 6r_1r_2$ (so that $\|6r_1r_2a/q\| \geq 1/q$), $$\frac{1}{\|6r_1r_2\alpha\|} \leq \frac{2}{\|6r_1r_2a/q\|}.$$ We have $$\label{minorarcafterweyl}
\underset{-x < r_1,r_2 < x}{\sum\sum}\min\left(x, \frac{1}{\| 6 \alpha r_1r_2\|}\right)
\leq x \sum_{\substack{-x < r_1,r_2 < x\\q |6r_1r_2}} 1 + \sum_{\substack{-x < r_1,r_2 <x\\q \nmid 6r_1r_2 }}\frac{2}{\| 6 (a/q) r_1r_2\|}.$$ Now, letting $q^{\prime} = q/\gcd(q,6)$ and $d(n)$ denote the number of divisors of $n$, we get $$\label{weyl1sttermest}
\sum_{\substack{-x < r_1,r_2 < x\\q |6r_1r_2}} 1 \ll \sum_{d|q^{\prime}} \left(\frac{x}{d} + 1\right)\left(\frac{x}{q^{\prime}/d} + 1 \right) \ll \left(d(q^{\prime})\frac{x^2}{q^{\prime}} + d(q^{\prime})x \right) \ll \frac{x^2}{(\log x)^{c_{11}}}$$ for any $c_{11} >0$ and $$\label{weyl2ndtermest}
\sum_{\substack{-x < r_1,r_2 <x\\q \nmid 6r_1r_2 }}\frac{2}{\| 6(a/q) r_1r_2\|} \ll x^{2 + \varepsilon}.$$
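For the reader's convenience, we indicate the counting behind \eqref{weyl1sttermest}. If $q \mid 6r_1r_2$ then $q' \mid r_1r_2$ (since $q'$ is coprime to $6/\gcd(q,6)$), and taking $d = \gcd(r_1, q')$ one has $d \mid r_1$ and, since $\gcd(r_1/d, q'/d)=1$, also $(q'/d) \mid r_2$. For a fixed divisor $d$ of $q'$ the number of pairs with $-x < r_1, r_2 < x$, $d \mid r_1$ and $(q'/d) \mid r_2$ is $\ll \left(\frac{x}{d}+1\right)\left(\frac{x}{q'/d}+1\right)$, which upon summing over $d \mid q'$ gives the middle expression in \eqref{weyl1sttermest}.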
Therefore, combining \eqref{minorarcweyl}, \eqref{minorarcafterweyl}, \eqref{weyl1sttermest} and \eqref{weyl2ndtermest} gives $$\sup_{\alpha \in \mathfrak{m}}\left| S_2(\alpha) \right|^2 \ll \frac{x^2}{(\log x)^{c_{12}}},$$ for any $c_{12} >0$. Inserting the above into \eqref{minorarcbess}, we have $$\label{minorarcest}
\sum_{\substack{k \leq y\\ \mu^2(k) = 1}}\left| \int\limits_{{\mathfrak{m}}} S_1(\alpha) S_2(\alpha)e(-\alpha k) {\mathrm{d}}\alpha \right|^2 \ll \frac{x^5}{(\log x)^{c_{13}}}$$ for any $c_{13} >0$.
Proof and Discussion of the Theorem
===================================
\[Proof of the Theorem\] Using Cauchy’s inequality and combining \eqref{majorarc1}, \eqref{mainterm}, \eqref{Phiest}, \eqref{majorarcerrest} and \eqref{minorarcest}, we obtain the theorem.
It should be observed that the Theorem is an analogue of the Theorem in [@SBLZ]. The latter was improved in [@BZ] using a variant of the dispersion method of J. V. Linnik [@Linnik]. It would be highly desirable to have a similar improvement for the cubic polynomials. However, this appears to be far from straightforward: the condition $m_1-m_2=n_1^2-n_2^2$ appearing in Section 5 of [@BZ], when generalized to the cubic case, no longer yields a correspondence between $(m_1,m_2)$ and $(n_1,n_2)$ that is as easy to work with as in the quadratic case.
It is natural to ask whether this method can be used for other families of polynomials, such as those of higher degree or other families of cubic polynomials. Observe that the treatment of $\Psi(k)$, defined in \eqref{Psikdef}, requires an explicit expression of the numbers $n_{k,p}$ in terms of characters that reflect, in this case, the arithmetic of $\mathbb{Q}\left[ \omega, k^{1/3}\right]$, since that is the splitting field of $x^3+k$. Analogues of this will be necessary regardless of what family of polynomials we are considering. For the family $x^d+k$, $d\in \mathbb{Z}$, $d>3$, where $k$ runs in such a way that the polynomials $x^d+k$ are irreducible, this can be quite easily mimicked, and the characters in question are power residue symbols governing the Kummer extension $\mathbb{Q}\left[e\left(1/d\right),k^{1/d}\right]/\mathbb{Q}\left[e\left(1/d\right)\right]$. However, analogues of Lemma \[B&Y\] are missing in all these cases except $d=4$, in which case that lemma can be replaced by Theorem 1.2 of [@GZ]. The situation with other families of cubic polynomials is slightly different. The discriminant of a cubic polynomial $f(x)=x^3+ax^2+bx+c$ is $D = 18abc+a^2b^2-4b^3-4a^3c-27c^2$, and the splitting field of $f$ contains $\mathbb{Q}[\sqrt{D}]$ as a subfield. This field is always $\mathbb{Q}\left[ \omega \right]$ for the family of irreducible $x^3+k$, allowing the use of Lemma \[huxleylemma\] with that field. For $a\not=0$ or $b\not=0$, a family of cubic polynomials with fixed $a$ and $b$ and varying $c$ no longer has a uniform $\mathbb{Q}[\sqrt{D}]$ for the use of that lemma, while attempts to create a family with a fixed $\mathbb{Q}[\sqrt{D}]$ appear to preclude a straightforward use of Lemma \[bessineq\] in treating the error terms from the major arcs.
[**Acknowledgements.**]{} The authors were supported by an AcRF Tier 1 grant at Nanyang Technological University during this work.
--------------------------------------------------- ---------------------------------------------------
Div. of Math. Sci., School of Phys. & Math. Sci., Div. of Math. Sci., School of Phys. & Math. Sci.,
Nanyang Technological Univ., Singapore 637371 Nanyang Technological Univ., Singapore 637371
Email: [[email protected]]{} Email: [[email protected]]{}
--------------------------------------------------- ---------------------------------------------------